Archives For licensing

Output of the LG Research AI to the prompt: “a system of copyright for artificial intelligence”

Not only have digital-image generators like Stable Diffusion, DALL-E, and Midjourney—which make use of deep-learning models and other artificial-intelligence (AI) systems—created some incredible (and sometimes creepy – see above) visual art, but they’ve engendered a good deal of controversy, as well. Human artists have banded together as part of a fledgling anti-AI campaign; lawsuits have been filed; and policy experts have been trying to think through how these machine-learning systems interact with various facets of the law.

Debates about the future of AI have particular salience for intellectual-property rights. Copyright is notoriously difficult to protect online, and these AI systems add an additional wrinkle: it can at least be argued that their outputs can be unique creations. There are also, of course, moral and philosophical objections to those arguments, many grounded in the supposition that only a human (or something with a brain, like humans) can be creative.

Leaving aside for the moment a potentially pitched battle over the definition of “creation,” we should be able to find consensus that at least some of these systems produce unique outputs and are not merely cutting and pasting other pieces of visual imagery into a new whole. That is, at some level, the machines are engaging in a rudimentary sort of “learning” about how humans arrange colors and lines when generating images of certain subjects. The machines then reconstruct this process and produce a new set of lines and colors that conform to the patterns they found in the human art.

But that isn’t the end of the story. Even if some of these systems’ outputs are unique and noninfringing, the way the machines learn—by ingesting existing artwork—can raise a number of thorny issues. Indeed, these systems are arguably infringing copyright during the learning phase, and such use may not survive a fair-use analysis.

We are still in the early days of thinking through how this new technology maps onto the law. Answers will inevitably come, but for now, there are some very interesting questions about the intellectual-property implications of AI-generated art, which I consider below.

The Points of Collision Between Intellectual Property Law and AI-Generated Art

AI-generated art is not a single thing. It is, rather, a collection of differing processes, each with different implications for the law. For the purposes of this post, I am going to deal with image-generation systems that use “generative adversarial networks” (GANs) and diffusion models. The various implementations of each will differ in some respects, but from what I understand, the ways that these techniques can be used to generate all sorts of media are sufficiently similar that we can begin to sketch out some of their legal implications.

A (very) brief technical description

This is a very high-level overview of how these systems work; for a more detailed (but very readable) description, see here.

A GAN is a type of machine-learning model that consists of two parts: a generator and a discriminator. The generator is trained to create new images that look like they come from a particular dataset, while the discriminator is trained to distinguish the generated images from real images in the dataset. The two parts are trained together in an adversarial manner, with the generator trying to produce images that can fool the discriminator and the discriminator trying to correctly identify the generated images.
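To make the adversarial dynamic concrete, here is a minimal, illustrative training-loop sketch in PyTorch. It uses toy one-dimensional “images” and made-up dimensions; it is not any production system’s actual code, only the general structure described above.

```python
# Minimal GAN training-loop sketch (toy data, illustrative only).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(              # maps random noise to a fake sample
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(          # scores how "real" a sample looks
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_data = torch.rand(512, data_dim) * 2 - 1   # stand-in for a training corpus

for step in range(200):
    real = real_data[torch.randint(0, 512, (32,))]
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator: learn to tell real training samples from generated ones.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to produce samples the discriminator labels as real.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```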

A diffusion model, by contrast, analyzes how the information in an image degrades as noise is progressively added to it, and how that process can be reversed. This kind of algorithm analyzes characteristics of sample images—like the distribution of colors or lines—in order to “understand” what counts as an accurate representation of a subject (i.e., what makes a picture of a cat look like a cat and not like a dog).

For example, in the generation phase, systems like Stable Diffusion start with randomly generated noise, and work backward in “denoising” steps to essentially “see” shapes:

The sampled noise is predicted so that if we subtract it from the image, we get an image that’s closer to the images the model was trained on (not the exact images themselves, but the distribution – the world of pixel arrangements where the sky is usually blue and above the ground, people have two eyes, cats look a certain way – pointy ears and clearly unimpressed).
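The snippet below is a highly simplified sketch of that reverse-diffusion loop, with an untrained stand-in for the noise-prediction network (real systems use large trained models conditioned on the text prompt). It is meant only to show the structure of the denoising steps described in the quote, not any particular system’s implementation.

```python
# Illustrative reverse-diffusion ("denoising") loop, DDPM-style.
import torch
import torch.nn as nn

steps, img_dim = 50, 64
betas = torch.linspace(1e-4, 0.02, steps)        # noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

noise_predictor = nn.Sequential(                 # stand-in for a trained model
    nn.Linear(img_dim + 1, 128), nn.ReLU(), nn.Linear(128, img_dim)
)

x = torch.randn(1, img_dim)                      # start from pure noise
for t in reversed(range(steps)):
    t_embed = torch.full((1, 1), float(t) / steps)
    eps_hat = noise_predictor(torch.cat([x, t_embed], dim=1))  # predicted noise
    # Subtract a scaled portion of the predicted noise, nudging x toward the
    # learned data distribution (in a trained model, "the world where cats
    # look like cats").
    x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps_hat) / torch.sqrt(alphas[t])
    if t > 0:
        x = x + torch.sqrt(betas[t]) * torch.randn_like(x)     # re-inject small noise
```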

It is relevant here that, once networks using these techniques are trained, they do not need to rely on saved copies of the training images in order to generate new images. Of course, it’s possible that some implementations might be designed in a way that does save copies of those images, but for the purposes of this post, I will assume we are talking about systems that retain copies of protected works only during the training phase. The models that are produced during training are, in essence, instructions to a different piece of software about how to take a text prompt from a user, start from a palette of pure noise, and progressively “discover” signal in that noise until some new image emerges.

Input-stage use of intellectual property

OpenAI, the firm behind some of the most popular AI tools, is not shy about its use of protected works in the training phase of AI algorithms. In comments to the U.S. Patent and Trademark Office (PTO), it notes that:

…[m]odern AI systems require large amounts of data. For certain tasks, that data is derived from existing publicly accessible “corpora”… of data that include copyrighted works. By analyzing large corpora (which necessarily involves first making copies of the data to be analyzed), AI systems can learn patterns inherent in human-generated data and then use those patterns to synthesize similar data which yield increasingly compelling novel media in modalities as diverse as text, image, and audio. (emphasis added).

Thus, at the training stage, the most popular forms of machine-learning systems require making copies of existing works. And where the material being used is neither in the public domain nor licensed, an infringement occurs (as Getty Images notes in a suit it recently filed against Stability AI). Thus, some affirmative defense is needed to excuse the infringement.

Toward this end, OpenAI believes that its algorithmic training should qualify as a fair use. Other major services that use these AI techniques to “learn” from existing media would likely make similar arguments. But, at least in the way that OpenAI has framed the fair-use analysis (that these uses are sufficiently “transformative”), it’s not clear that they should qualify.

The purpose and character of the use

In brief, fair use—found in 17 USC § 107—provides for an affirmative defense against infringement when the use is “for purposes such as criticism, comment, news reporting, teaching…, scholarship, or research.” When weighing a fair-use defense, a court must balance a number of factors:

  1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
  2. the nature of the copyrighted work;
  3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  4. the effect of the use upon the potential market for or value of the copyrighted work.

OpenAI’s fair-use claim is rooted in the first factor: the nature and character of the use. I should note, then, that what follows is solely a consideration of Factor 1, with special attention paid to whether these uses are “transformative.” But it is important to stipulate that fair-use analysis is a multifactor test and that, even within the first factor, it’s not mandatory that a use be “transformative.” It is entirely possible that a court balancing all of the factors could, indeed, find that OpenAI is engaged in fair use, even if it does not agree that its use is “transformative.”

Whether the use of copyrighted works to train an AI is “transformative” is certainly a novel question, but it is likely answered through an observation that the U.S. Supreme Court made in Campbell v. Acuff-Rose Music:

[W]hat Sony said simply makes common sense: when a commercial use amounts to mere duplication of the entirety of an original, it clearly “supersede[s] the objects,”… of the original and serves as a market replacement for it, making it likely that cognizable market harm to the original will occur… But when, on the contrary, the second use is transformative, market substitution is at least less certain, and market harm may not be so readily inferred.

A key question, then, is whether training an AI on copyrighted works amounts to mere “duplication of the entirety of an original” or is sufficiently “transformative” to support a fair-use finding. OpenAI, as noted above, believes its use is highly transformative. According to its comments:

Training of AI systems is clearly highly transformative. Works in training corpora were meant primarily for human consumption for their standalone entertainment value. The “object of the original creation,” in other words, is direct human consumption of the author’s expression. Intermediate copying of works in training AI systems is, by contrast, “non-expressive”: the copying helps computer programs learn the patterns inherent in human-generated media. The aim of this process—creation of a useful generative AI system—is quite different than the original object of human consumption. The output is different too: nobody looking to read a specific webpage contained in the corpus used to train an AI system can do so by studying the AI system or its outputs. The new purpose and expression are thus both highly transformative.

But the way that OpenAI frames its system works against its interests in this argument. As noted above, and reinforced in the immediately preceding quote, an AI system like DALL-E or Stable Diffusion is actually made of at least two distinct pieces. The first is a piece of software that ingests existing works and creates a file that can serve as instructions to the second piece of software. The second piece of software then takes the output of the first part and can produce independent results. Thus, there is a clear discontinuity in the process, whereby the ultimate work created by the system is disconnected from the creative inputs used to train the software.

Therefore, contrary to what OpenAI asserts, the protected works are indeed ingested into the first part of the system “for their standalone entertainment value.” That is to say, the software is learning what counts as “standalone entertainment value” and, therefore, the works must be used in those terms.

Surely, a computer is not sitting on a couch and surfing for its own entertainment. But it is solely for the very “standalone entertainment value” that the first piece of software is being shown copyrighted material. By contrast, parody or “remixing”  uses incorporate the work into some secondary expression that transforms the input. The way these systems work is to learn what makes a piece entertaining and then to discard that piece altogether. Moreover, this use of art qua art most certainly interferes with the existing market insofar as this use is in lieu of reaching a licensing agreement with rightsholders.

The 2nd U.S. Circuit Court of Appeals dealt with an analogous case. In American Geophysical Union v. Texaco, the 2nd Circuit considered whether Texaco’s photocopying of scientific articles produced by the plaintiffs qualified for a fair-use defense. Texaco employed between 400 and 500 research scientists and, as part of supporting their work, maintained subscriptions to a number of scientific journals. It was common practice for Texaco’s scientists to photocopy entire articles and save them in a file.

The plaintiffs sued for copyright infringement. Texaco asserted that photocopying by its scientists for the purposes of furthering scientific research—that is, to train the scientists on the content of the journal articles—should count as a fair use, at least in part because it was sufficiently “transformative.” The 2nd Circuit disagreed:

The “transformative use” concept is pertinent to a court’s investigation under the first factor because it assesses the value generated by the secondary use and the means by which such value is generated. To the extent that the secondary use involves merely an untransformed duplication, the value generated by the secondary use is little or nothing more than the value that inheres in the original. Rather than making some contribution of new intellectual value and thereby fostering the advancement of the arts and sciences, an untransformed copy is likely to be used simply for the same intrinsic purpose as the original, thereby providing limited justification for a finding of fair use… (emphasis added).

As in the case at hand, the 2nd Circuit observed that making full copies of the scientific articles was solely for the consumption of the material itself. A rejoinder, of course, is that training these AI systems surely advances scientific research and, thus, does foster the “advancement of the arts and sciences.” But in American Geophysical Union, where the secondary use was explicitly for the creation of new and different scientific outputs, the court still held that making copies of one scientific article in order to learn and produce new scientific innovations did not count as “transformative.”

What this case demonstrates is that one cannot merely assert that some social goal will be advanced in the future by permitting an exception to copyright protection today. As the 2nd Circuit put it:

…the dominant purpose of the use is a systematic institutional policy of multiplying the available number of copies of pertinent copyrighted articles by circulating the journals among employed scientists for them to make copies, thereby serving the same purpose for which additional subscriptions are normally sold, or… for which photocopying licenses may be obtained.

The secondary use itself must be transformative and different. Where an AI system ingests copyrighted works, that use is simply not transformative; it is using the works in their original sense in order to train a system to be able to make other original works. As in American Geophysical Union, the AI creators are completely free to seek licenses from rightsholders in order to train their systems.

Finally, there is a sense in which this machine learning might not infringe on copyrights at all. To my knowledge, the technology does not itself exist, but if it were possible for a machine to somehow “see” in the way that humans do—without making or using stored copies of copyrighted works—then merely “learning” from those works, insofar as we can call it learning, probably would not violate copyright laws.

Do the outputs of these systems violate intellectual property laws?

The outputs of GANs and diffusion models may or may not violate IP laws, but there is nothing inherent in the processes described above to dictate that they must. As noted, the most common AI systems do not save copies of existing works, but merely “instructions” (more or less) on how to create new works that conform to patterns they found by examining existing work. If we assume that a system isn’t violating copyright at the input stage, it’s entirely possible that it can produce completely new pieces of art that have never before existed and do not violate copyright.

They can, however, be made to violate IP rights. For example, trademark violations appear to be one of the most popular uses of these AI systems by end users. To take but one example, a quick search of Google Images for “midjourney iron man” returns a slew of images that almost certainly violate trademarks for the character Iron Man. Similarly, these systems can be instructed to generate art that is not just “in the style” of a particular artist, but that very closely resembles existing pieces. In this sense, the system would be making a copy that theoretically infringes. 

There is a common bug in such systems that leads to outputs that are more likely to violate copyright in this way. Known as “overfitting,” it occurs when the training phase of these AI systems is presented with samples that contain too many instances of a particular image. This leads to a model that encodes too much information about that specific image, such that when the AI generates a new image, it is constrained to producing something very close to the original.
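One rough, hypothetical way to flag this kind of memorization is to compare a generated image against the training set and check whether the closest training image is suspiciously close. The helper function and threshold below are illustrative assumptions, not a standard tool or any vendor’s actual safeguard.

```python
# Illustrative memorization/overfitting check: nearest-neighbor distance
# between a generated image and the training set.
import numpy as np

def nearest_training_match(generated: np.ndarray, training_set: np.ndarray):
    """Return the index and mean-squared distance of the closest training image."""
    diffs = training_set.reshape(len(training_set), -1) - generated.reshape(1, -1)
    dists = (diffs ** 2).mean(axis=1)
    idx = int(dists.argmin())
    return idx, float(dists[idx])

# Toy usage: 1,000 random "images"; suppose one was duplicated heavily in
# training and the model then reproduces it almost exactly.
train = np.random.rand(1000, 32, 32, 3)
output = train[42] + np.random.normal(0, 0.01, train[42].shape)  # near-copy
idx, dist = nearest_training_match(output, train)
if dist < 0.005:   # illustrative threshold
    print(f"Output closely matches training image {idx} (MSE={dist:.5f})")
```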

An argument can also be made that generating art “in the style of” a famous artist violates moral rights (in jurisdictions where such rights exist).

At least in the copyright space, cases like Sony are going to become crucial. Does the user side of these AI systems have substantial noninfringing uses? If so, the firms that host software for end users could avoid secondary-infringement liability, and the onus would fall on users to avoid violating copyright laws. At the same time, it seems plausible that legislatures could place some obligation on these providers to implement filters to mitigate infringement by end users.

Opportunities for New IP Commercialization with AI

There are a number of ways that AI systems may inexcusably infringe on intellectual-property rights. As a best practice, I would encourage the firms that operate these services to seek licenses from rightsholders. While this would surely be an expense, it also opens new opportunities for both sides to generate revenue.

For example, an AI firm could develop its own version of YouTube’s ContentID that allows creators to opt their work into training. For some well-known artists, this could be negotiated with an upfront licensing fee. On the user side, any artist who has opted in could then be selected as a “style” for the AI to emulate. Each time a user generates an image in that style, a royalty payment to the artist would be triggered. Creators would also have the option to remove their influence from the system if they so desired.
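As a purely hypothetical sketch of how such an opt-in registry and per-generation royalty might fit together (every name, class, and rate below is invented for illustration; no such API exists):

```python
# Hypothetical opt-in registry with per-generation royalty accrual.
from dataclasses import dataclass, field

@dataclass
class ArtistRegistry:
    rates: dict[str, float] = field(default_factory=dict)   # artist -> $ per image
    ledger: dict[str, float] = field(default_factory=dict)  # artist -> accrued royalties

    def opt_in(self, artist: str, royalty_per_image: float) -> None:
        self.rates[artist] = royalty_per_image

    def opt_out(self, artist: str) -> None:
        # The artist's style is no longer selectable going forward.
        self.rates.pop(artist, None)

    def record_generation(self, style_artist: str) -> float:
        """Accrue a royalty when a user generates an image 'in the style of' an artist."""
        if style_artist not in self.rates:
            raise ValueError(f"{style_artist} has not opted in; style unavailable")
        due = self.rates[style_artist]
        self.ledger[style_artist] = self.ledger.get(style_artist, 0.0) + due
        return due

registry = ArtistRegistry()
registry.opt_in("Jane Painter", royalty_per_image=0.05)
registry.record_generation("Jane Painter")   # one image generated in her style
print(registry.ledger)                       # {'Jane Painter': 0.05}
```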

Undoubtedly, there are other ways to monetize the relationship between creators and the use of their work in AI systems. Ultimately, the firms that run these systems will not be able to simply wish away IP laws. There are going to be opportunities for creators and AI firms to both succeed, and the law should help to generate that result.

Late last month, 25 former judges and government officials, legal academics and economists who are experts in antitrust and intellectual property law submitted a letter to Assistant Attorney General Jonathan Kanter in support of the U.S. Justice Department’s (DOJ) July 2020 Avanci business-review letter (ABRL) dealing with patent pools. The pro-Avanci letter was offered in response to an October 2022 letter to Kanter from ABRL critics that called for reconsideration of the ABRL. A good summary account of the “battle of the scholarly letters” may be found here.

The University of Pennsylvania’s Herbert Hovenkamp defines a patent pool as “an arrangement under which patent holders in a common technology or market commit their patents to a single holder, who then licenses them out to the original patentees and perhaps to outsiders.” Although the U.S. antitrust treatment of patent pools might appear a rather arcane topic, it has major implications for U.S. innovation. As AAG Kanter ponders whether to dive into patent-pool policy, a brief review of this timely topic is in order. That review reveals that Kanter should reject the anti-Avanci letter and reaffirm the ABRL.

Background on Patent Pool Analysis

The 2017 DOJ-FTC IP Licensing Guidelines

Section 5.5 of joint DOJ-Federal Trade Commission (FTC) Antitrust Guidelines for the Licensing of Intellectual Property (2017 Guidelines, which revised a prior 1995 version) provides an overview of the agencies’ competitive assessment of patent pools. The 2017 Guidelines explain that, depending on how pools are designed and operated, they may have procompetitive (and efficiency-enhancing) or anticompetitive features.

On the positive side of the ledger, Section 5.5 states:

Cross-licensing and pooling arrangements are agreements of two or more owners of different items of intellectual property to license one another or third parties. These arrangements may provide procompetitive benefits by integrating complementary technologies, reducing transaction costs, clearing blocking positions, and avoiding costly infringement litigation. By promoting the dissemination of technology, cross-licensing and pooling arrangements are often procompetitive.

On the negative side of the ledger, Section 5.5 states (citations omitted):

Cross-licensing and pooling arrangements can have anticompetitive effects in certain circumstances. For example, collective price or output restraints in pooling arrangements, such as the joint marketing of pooled intellectual property rights with collective price setting or coordinated output restrictions, may be deemed unlawful if they do not contribute to an efficiency-enhancing integration of economic activity among the participants. When cross-licensing or pooling arrangements are mechanisms to accomplish naked price-fixing or market division, they are subject to challenge under the per se rule.

Other aspects of pool behavior may be either procompetitive or anticompetitive, depending upon the circumstances, as Section 5.5 explains. The antitrust rule of reason would apply to pool restraints that may have both procompetitive and anticompetitive features.  

For example, requirements that pool members grant licenses to each other for current and future technology at minimal cost could disincentivize research and development. Such requirements, however, could also promote competition by exploiting economies of scale and integrating complementary capabilities of the pool members. According to the 2017 Guidelines, such requirements are likely to cause competitive problems only when they include a large fraction of the potential research and development in an R&D market.

Section 5.5 also applies rule-of-reason case-specific treatment to exclusion from pools. It notes that, although pooling arrangements generally need not be open to all who wish to join (indeed, exclusion of certain parties may be designed to prevent potential free riding), they may be anticompetitive under certain circumstances (citations omitted):

[E]xclusion from a pooling or cross-licensing arrangement among competing technologies is unlikely to have anticompetitive effects unless (1) excluded firms cannot effectively compete in the relevant market for the good incorporating the licensed technologies and (2) the pool participants collectively possess market power in the relevant market. If these circumstances exist, the [federal antitrust] [a]gencies will evaluate whether the arrangement’s limitations on participation are reasonably related to the efficient development and exploitation of the pooled technologies and will assess the net effect of those limitations in the relevant market.

The 2017 Guidelines are informed by the analysis of prior agency-enforcement actions and prior DOJ business-review letters. Through the business-review-letter procedure, an organization may submit a proposed action to the DOJ Antitrust Division and receive a statement as to whether the Division currently intends to challenge the action under the antitrust laws, based on the information provided. Historically, DOJ has used these letters as a vehicle to discuss current agency thinking about safeguards that may be included in particular forms of business arrangements to alleviate DOJ competitive concerns.

DOJ patent-pool letters, in particular, have prompted DOJ to highlight specific sorts of provisions in pool agreements that forestalled competitive problems. To this point, DOJ has never commented favorably on patent-pool safeguards in a letter and then subsequently reversed course to find the safeguards inadequate.

Subsequent to issuance of the 2017 Guidelines, DOJ issued two business-review letters on patent pools: the July 2020 ABRL letter and the January 2021 University Technology Licensing Program business-review letter (UTLP letter). Those two letters favorably discussed competitive safeguards proffered by the entities requesting favorable DOJ reviews.

ABRL Letter

The ABRL letter explains (citations omitted):

[Avanci] proposed [a] joint patent-licensing pool . . . to . . . license patent claims that have been declared “essential” to implementing 5G cellular wireless standards for use in automobile vehicles and distribute royalty income among the Platform’s licensors. Avanci currently operates a licensing platform related to 4G cellular standards and offers licenses to 2G, 3G, and 4G standards-essential patents used in vehicles and smart meters.

After consulting telecommunications and automobile-industry stakeholders, conducting an independent review, and considering prior guidance to other patent pools, “DOJ conclude[d] that, on balance, Avanci’s proposed 5G Platform is unlikely to harm competition.” As such, DOJ announced it had no present intention to challenge the platform.

The DOJ press release accompanying the ABRL letter provides additional valuable information on Avanci’s potential procompetitive efficiencies; its plan to charge fair, reasonable, and non-discriminatory (FRAND) rates; and its proposed safeguards:

Avanci’s 5G Platform may make licensing standard essential patents related to vehicle connectivity more efficient by providing automakers with a “one stop shop” for licensing 5G technology. The Platform also has the potential to reduce patent infringement and ensure that patent owners who have made significant contributions to the development of 5G “Release 15” specifications are compensated for their innovation. Avanci represents that the Platform will charge FRAND rates for the patented technologies, with input from both licensors and licensees.

In addition, Avanci has incorporated a number of safeguards into its 5G Platform that can help protect competition, including licensing only technically essential patents; providing for independent evaluation of essential patents; permitting licensing outside the Platform, including in other fields of use, bilateral or multi-lateral licensing by pool members, and the formation of other pools at levels of the automotive supply chain; and by including mechanisms to prevent the sharing of competitively sensitive information.  The Department’s review found that the Platform’s essentiality review may help automakers license the patents they actually need to make connected vehicles.  In addition, the Platform license includes “Have Made” rights that creates new access to cellular standard essential patents for licensed automakers’ third-party component suppliers, permitting them to make non-infringing components for 5G connected vehicles.

UTLP Letter

The University Technology Licensing Program business-review letter (issued less than a year after the ABRL letter, at the end of the Trump administration) discussed a proposal by participating universities to offer licenses to their physical-science patents relating to specified emerging technologies. According to DOJ:

[Fifteen universities agreed to cooperate] in licensing certain complementary patents through UTLP, which will be organized into curated portfolios relating to specific technology applications for autonomous vehicles, the “Internet of Things,” and “Big Data.”  The overarching goal of UTLP is to centralize the administrative costs associated with commercializing university research and help participating universities to overcome the budget, institutional relationship, and other constraints that make licensing in these areas particularly challenging for them.

The UTLP letter concluded, based on representations made in UTLP’s letter request, that the pool was on balance unlikely to harm competition. Specifically:

UTLP has incorporated a number of safeguards into its program to help protect competition, including admitting only non-substitutable patents, with a “safety valve” if a patent to accomplish a particular task is inadvertently included in a portfolio with another, substitutable patent. The program also will allow potential sublicensees to choose an individual patent, a group of patents, or UTLP’s entire portfolio, thereby mitigating the risk that a licensee will be required to license more technology than it needs. The department’s letter notes that UTLP is a mechanism that is intended to address licensing inefficiencies and institutional challenges unique to universities in the physical science context, and makes no assessment about whether this mechanism if set up in another context would have similar procompetitive benefits.

Patent-Pool Guidance in Context

DOJ and FTC patent-pool guidance has been bipartisan. It has remained generally consistent in character from the mid-1990s (when the first 1995 IP licensing guidelines were issued) to early 2021 (the end of the Trump administration, when the UTLP letter was issued). The overarching concern expressed in agency guidance has been to prevent a pool from facilitating collusion among competitors, from discouraging innovation, and from inefficiently excluding competitors.

As technology has advanced over the last quarter century, U.S. antitrust enforcers—and, in particular, DOJ, through a series of business-review letters beginning in 1997 (see the pro-Avanci letter at pages 9-10)—consistently have emphasized the procompetitive efficiencies that pools can generate, while also noting the importance of avoiding anticompetitive harms.

Those letters have “given a pass” to pools whose rules contained safeguards against collusion among pool members (e.g., by limiting pool patents to complementary, not substitute, technologies) and against anticompetitive exclusion (e.g., by protecting pool members’ independence of action outside the pool). In assessing safeguards, DOJ has paid attention to the particular market context in which a pool arises.

Notably, economic research generally supports the conclusion that, in recent decades, patent pools have been an important factor in promoting procompetitive welfare-enhancing innovation and technology diffusion.

For example, a 2015 study by Justus Baron and Tim Pohlmann found that a significant number of pools were created following antitrust authorities’ “more permissive stance toward pooling of patents” beginning in the late 1990s. Studying these new pools, they found “a significant increase in patenting rates after pool announcement” that was “primarily attributable to future members of the pool”.

A 2009 analysis by Richard Gilbert of the University of California, Berkeley (who served as chief economist of the DOJ Antitrust Division during the Clinton administration) concluded that (consistent with the approach adopted in DOJ business-review letters) “antitrust authorities and the courts should encourage policies that promote the formation and durability of beneficial pools that combine complementary patents.”

In a 2014 assessment of the role of patent pools in combatting “patent thickets,” Jonathan Barnett of the USC Gould School of Law concluded:

Using network visualization software, I show that information and communication technology markets rely on patent pools and other cross-licensing structures to mitigate or avoid patent thickets and associated inefficiencies. Based on the composition, structure, terms and pricing of selected leading patent pools in the ICT market, I argue that those pools are best understood as mechanisms by which vertically integrated firms mitigate transactional frictions and reduce the cost of accessing technology inputs.

Admittedly, a few studies of some old patent pools (e.g., the 19th-century sewing-machine pool and certain early 20th-century New Deal-era pools) found them to be associated with a decline in patenting. Setting aside possible questions about those studies’ methodologies, the old pooling arrangements bear little resemblance to the carefully crafted pool structures of today. In particular, unlike the old pools, the more recent pools embody competitive safeguards (the old pools may have combined substitute patents, for example).

Business-review letters dealing with pools have provided a degree of legal certainty that has helped encourage their formation, to the benefit of innovation in key industries. The anti-Avanci letter ignores that salient fact, focusing instead on allegedly “abusive” SEP-licensing tactics by the Avanci 5G pool—such as refusal to automatically grant a license to all comers—without considering that the pool may have had legitimate reasons not to license particular parties (who may, for instance, have made bad faith unreasonable licensing demands). In sum, this blinkered approach is wrong as a matter of SEP law and policy (as explained in the pro-Avanci letter) and wrong in its implicit undermining of the socially beneficial patent-pool business-review process.   

The pro-Avanci letter ably describes the serious potential harm generated by the anti-Avanci letter:

In evaluating the carefully crafted Avanci pool structure, the 2020 business review letter appropriately concluded that the pool’s design conformed to the well-established, fact-intensive inquiry concerning actual market practices and efficiencies set forth in previous business review letters. Any reconsideration of the 2020 business review letter, as proposed in the October 17 letter, would give rise to significant uncertainty concerning the Antitrust Division’s commitment to the aforementioned sequence of business review letters that have been issued concerning other patent pools in the information technology industry, as well as the larger group of patent pools that did not specifically seek guidance through the business review letter process but relied on the legal template that had been set forth in those previously issued letters.

This is a point of great consequence. Pooling arrangements in the information technology industry have provided an efficient market-driven solution to the transaction costs that are inherent to patent-intensive industries and, when structured appropriately in light of agency guidance and applicable case law, do not raise undue antitrust concerns. Thanks to pooling and related collective licensing arrangements, the innovations embodied in tens of thousands of patents have been made available to hundreds of device producers and other intermediate users, while innovators have been able to earn a commensurate return on the costs and risks that they undertook to develop new technologies that have transformed entire fields and industries to the benefit of consumers.

Conclusion

President Joe Biden’s 2021 Executive Order on Competition commits the Biden administration to “the promotion of competition and innovation by firms small and large, at home and worldwide.” One factor in promoting competition and innovation has been the legal certainty flowing from well-reasoned DOJ business-review letters on patent pools, issued on a bipartisan basis for more than a quarter of a century.

A DOJ decision to reconsider (in other words, to withdraw) the sound guidance embodied in the ABRL would detract from this certainty and thereby threaten to undermine innovation promoted by patent pools. Accordingly, AAG Kanter should reject the advice proffered by the anti-Avanci letter and publicly reaffirm his support for the ABRL—and, more generally, for the DOJ business-review process.

What should a government do when it owns geese that lay golden eggs? Should it sell the geese to fund government programs? Or should it let them run wild so everyone can have a chance at a golden egg? 

That’s the question facing Congress as it considers re-authorizing the Federal Communications Commission’s (FCC’s) authority to auction and license spectrum. Should the FCC auction spectrum to maximize government revenue? Or, should it allow large portions to remain unlicensed to foster innovation and development?

The complication in this regard is that auction revenues play an outsized role in federal lawmakers’ deliberations about spectrum policy. Indeed, spectrum auctions have been wildly successful in generating revenue for the federal government. But the size of direct federal revenues is not necessarily a perfect gauge of the overall social welfare generated by particular policy choices.

As it considers future spectrum reauthorization, Congress needs to take a balanced approach that includes concern for federal revenues, but also considers the much larger social welfare that is created when diverse users in various situations can access services enabled by both licensed and unlicensed spectrum.

Licensed, Unlicensed, & Shared Spectrum

Most spectrum is licensed by the FCC to certain users. Licensees pay fees to the FCC for the exclusive right to transmit on an assigned frequency within a given geographical area. A license holder has the right to exclude others from accessing the assigned frequency and to be free from harmful interference from other service providers. In the private sector, radio and television broadcasters, as well as mobile-phone services, operate with licensed spectrum. Their right to exclude others and to be free from interference provides improved service and greater reliability in distributing their broadcasts or providing communication services.

SOURCE: U.S. Commerce Department

Licensing gets spectrum into the hands of those who are well-positioned—both technologically and financially—to deploy spectrum for commercial uses. Because a licensee has the right to exclude other operators from the licensed band, licensing offers the operator flexibility to deploy their network in ways that effectively mitigate potential interference. In addition, the auctioning of licenses provides revenues for the government, reducing pressures to increase taxes or cut spending. Spectrum auctions have reportedly raised more than $230 billion for the U.S. Treasury since their inception.

Unlicensed spectrum can be seen as an open-access resource available to all users without charge. Users are free to use as much of this spectrum as they wish, so long as it’s with FCC-certified equipment operating at authorized power levels. The most well-known example of unlicensed operations is Wi-Fi, a service that operates in the 2.4 GHz and 5.8 GHz bands and is employed by millions of U.S. users across millions of devices in millions of locations each day. Wi-Fi isn’t the only use for unlicensed spectrum; it also covers a range of devices such as those relying on Bluetooth, as well as personal medical devices, appliances, and a wide range of Internet-of-Things devices.

As with any common resource, each user’s service-quality experience depends on how much spectrum is used by all. In particular, if the demand for spectrum at a particular place and point in time exceeds the available supply, then all users will experience diminished service quality. If you’ve been in a crowded coffee shop and complained that “the Internet sucks here,” it’s more than likely that demand for the shop’s Wi-Fi service is greater than the capacity of the Wi-Fi router.

SOURCE: Wall Street Journal
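A back-of-the-envelope calculation illustrates the congestion point; the capacity figure below is an assumption chosen only to show how per-user throughput collapses as more users share one access point.

```python
# Shared-spectrum congestion: one access point's capacity split among users.
router_capacity_mbps = 300        # assumed usable throughput of one Wi-Fi access point
for users in (5, 30, 60):
    per_user = router_capacity_mbps / users
    print(f"{users:>2} active users -> ~{per_user:.0f} Mbps each")
# 5 users enjoy ~60 Mbps each; 60 users share ~5 Mbps each, and video calls stall.
```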

While there can be issues of interference among wireless devices, it’s not the Wild West. Equipment and software manufacturers have invested in developing technologies that work in noisy environments and in proximity to other products. The existence of sufficient unlicensed and shared spectrum allows for innovation with new technologies and services. Firms don’t have to make large upfront investments in licenses to research, develop, and experiment with their innovations. These innovations benefit consumers, businesses, and manufacturers. According to the Wi-Fi Alliance, the success of Wi-Fi has been enormous:

The United States remains one of the countries with the widest Wi-Fi adoption and use. Cisco estimates 33.5 million paid Wi-Fi access points, with estimates for free public Wi-Fi sites at around 18.6 million. Eighty-five percent of United States broadband subscribers have Wi-Fi capability at home, and mobile users connect to the internet through Wi-Fi over cellular networks more than 55 percent of the time. The United States also has a robust manufacturing ecosystem and increasing enterprise use, which have aided the rise in the value of Wi-Fi. The total economic value of Wi-Fi in 2021 is $995 billion.

The Need for Balanced Spectrum Policy

To be sure, both licensed and unlicensed spectrum play crucial roles and serve different purposes, sometimes as substitutes for one another and sometimes as complements. It can’t therefore be said that one approach is “better” than the other, as there is undeniable economic value to both.

That’s why it’s been said that the optimal amount of unlicensed spectrum is somewhere between 0% and 100%. While that’s true, it’s unhelpful as a guide for policymakers, even if it highlights the challenges they face. Not only must they balance the competing interests of consumers, wireless providers, and electronics manufacturers, but they also have to keep their own self-interest in check, insofar as they are forever tempted to use spectrum auctions to raise revenue.

To this last point, it is likely that the “optimum” amount of unlicensed spectrum for society differs significantly from the amount that maximizes government auction revenues.

For simplicity, let’s assume “consumer welfare” is a shorthand for social welfare less government-auction revenues. In the (purely hypothetical) figure below, consumer welfare is maximized when about 56% of the available spectrum is licensed. Government auction revenues, however, are maximized when all available spectrum is licensed.

[Hypothetical figure: consumer welfare and government auction revenue as functions of the share of spectrum that is licensed. SOURCE: Authors]
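For readers who want to play with the shape of such curves, the snippet below generates purely hypothetical ones of the kind depicted above; the functional forms and numbers are invented solely to illustrate the divergence between the consumer-welfare peak and the revenue-maximizing point.

```python
# Purely hypothetical curves: consumer welfare peaks at an interior licensed
# share, while auction revenue keeps rising with the share licensed.
import numpy as np

licensed_share = np.linspace(0, 1, 101)
consumer_welfare = 100 * licensed_share * (1.12 - licensed_share)  # peaks near 56%
auction_revenue = 40 * licensed_share                              # rises monotonically

print("Consumer welfare peaks at", licensed_share[consumer_welfare.argmax()])  # ~0.56
print("Auction revenue peaks at", licensed_share[auction_revenue.argmax()])    # 1.0
```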

In this example, politicians have a keen interest in licensing more spectrum than is socially optimal. Doing so provides more revenues to the government without raising taxes. The additional costs passed on to individual consumers (or voters) would be so dispersed as to be virtually undetectable. It’s a textbook case of concentrated benefits and diffused costs.

Of course, we can debate about the size, shape, and position of each of the curves, as well as where on the curve the United States currently sits. Nevertheless, available evidence indicates that the consumer welfare generated through use of unlicensed broadband will often exceed the revenue generated by spectrum auctions. For example, if the Wi-Fi Alliance’s estimate of $995 billion in economic value for Wi-Fi is accurate (or even in the ballpark), then the value of Wi-Fi alone is more than three times greater than the auction revenues received by the U.S. Treasury.

Of course, licensed-spectrum technology also provides tremendous benefit to society, but the basic point cannot be ignored: a congressional calculation that seeks simply to maximize revenue to the U.S. Treasury will almost certainly rob society of a great deal of benefit.

Conclusion

Licensed spectrum is obviously critical, and not just because it allows politicians to raise revenue for the federal government. Cellular technology and other licensed applications are becoming even more important as a wide variety of users opt for cellular-only Internet connections, or where fixed wireless over licensed spectrum is needed to reach remote users.

At the same time, shared and unlicensed spectrum has been a major success story, and promises to keep delivering innovation and greater connectivity in a wide variety of use cases.  As we note above, the federal revenue generated from auctions should not be the only benefit counted. Unlicensed spectrum is responsible for tens of billions of dollars in direct value, and close to $1 trillion when accounting for its indirect benefits.

Ultimately, allocating spectrum needs to be a question of what most enhances consumer welfare. Raising federal revenue is great, but it is only one benefit that must be counted among a number of benefits (and costs). Any simplistic formula that pushes for maximizing a single dimension of welfare is likely to be less than ideal. As Congress considers further spectrum reauthorization, it needs to take seriously the need to encourage both private ownership of licensed spectrum, as well as innovative uses of unlicensed and shared spectrum.

U.S. antitrust policy seeks to promote vigorous marketplace competition in order to enhance consumer welfare. For more than four decades, mainstream antitrust enforcers have taken their cue from the U.S. Supreme Court’s statement in Reiter v. Sonotone (1979) that antitrust is “a consumer welfare prescription.” Recent suggestions (see here and here) by new Biden administration Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) leadership that antitrust should promote goals apart from consumer welfare have yet to be embodied in actual agency actions, and they have not been tested by the courts. (Given Supreme Court case law, judicial abandonment of the consumer welfare standard appears unlikely, unless new legislation that displaces it is enacted.)   

Assuming that the consumer welfare paradigm retains its primacy in U.S. antitrust, how do the goals of antitrust match up with those of national security? Consistent with federal government pronouncements, the “basic objective of U.S. national security policy is to preserve and enhance the security of the United States and its fundamental values and institutions.” Properly applied, antitrust can retain its consumer welfare focus in a manner consistent with national security interests. Indeed, sound antitrust and national-security policies generally go hand-in-hand. The FTC and the DOJ should keep that in mind in formulating their antitrust policies (spoiler alert: they sometimes have failed to do so).

Discussion

At first blush, it would seem odd that enlightened consumer-welfare-oriented antitrust enforcement and national-security policy would be in tension. After all, enlightened antitrust enforcement is concerned with targeting conduct that harmfully reduces output and undermines innovation, such as hard-core collusion and courses of conduct that inefficiently exclude competitors and weaken marketplace competition. U.S. national security would seem to be promoted (or, at least, not harmed) by antitrust enforcement directed at supporting stronger, more vibrant American markets.

This initial instinct is correct, if antitrust-enforcement policy indeed reflects economically sound, consumer-welfare-centric principles. But are there examples where antitrust enforcement falls short and thereby is at odds with national security? An evaluation of three areas of interaction between the two American policy interests is instructive.

The degree of congruence between national security and appropriate consumer welfare-enhancing antitrust enforcement is illustrated by a brief discussion of:

  1. defense-industry mergers;
  2. the intellectual property-antitrust interface, with a focus on patent licensing; and
  3. proposed federal antitrust legislation.

The first topic presents an example of clear consistency between consumer-welfare-centric antitrust and national defense. In contrast, the second topic demonstrates that antitrust prosecutions (and policies) that inappropriately weaken intellectual-property protections are inconsistent with national defense interests. The second topic does not manifest a tension between antitrust and national security; rather, it illustrates a tension between national security and unsound antitrust enforcement. In a related vein, the third topic demonstrates how a change in the antitrust statutes that would undermine the consumer welfare paradigm would also threaten U.S. national security.

Defense-Industry Mergers

The consistency between antitrust goals and national security is relatively strong and straightforward in the field of defense-industry-related mergers and joint ventures. The FTC and DOJ traditionally have worked closely with the U.S. Defense Department (DOD) to promote competition and consumer welfare in evaluating business transactions that affect national defense needs.

The DOD has long supported policies to prevent overreliance on a single supplier for critical industrial-defense needs. Such a posture is consistent with the antitrust goal of preventing mergers to monopoly that reduce competition, raise prices, and diminish quality by creating or entrenching a dominant firm. As then-FTC Commissioner William Kovacic commented about an FTC settlement that permitted the United Launch Alliance (an American spacecraft launch service provider established in 2006 as a joint venture between Lockheed Martin and Boeing), “[i]n reviewing defense industry mergers, competition authorities and the DOD generally should apply a presumption that favors the maintenance of at least two suppliers for every weapon system or subsystem.”

Antitrust enforcers have, however, worked with DOD to allow the only two remaining suppliers of a defense-related product or service to combine their operations, subject to appropriate safeguards, when presented with scale economy and quality rationales that advanced national-security interests (see here).

Antitrust enforcers have also consulted and found common cause with DOD in opposing anticompetitive mergers that have national-security overtones. For example, antitrust enforcement actions targeting vertical defense-sector mergers that threaten anticompetitive input foreclosure or facilitate anticompetitive information exchanges are in line with the national-security goal of preserving vibrant markets that offer the federal government competitive, high-quality, innovative, and reasonably priced purchase options for its defense needs.

The FTC’s recent success in convincing Lockheed Martin to drop its proposed acquisition of Aerojet Rocketdyne Holdings fits into this category. (I express no view on the merits of this matter; I merely cite it as an example of FTC-DOD cooperation in considering a merger challenge.) In its February 2022 press release announcing the abandonment of this merger, the FTC stated that “[t]he acquisition would have eliminated the country’s last independent supplier of key missile propulsion inputs and given Lockheed the ability to cut off its competitors’ access to these critical components.” The FTC also emphasized the full consistency between its enforcement action and national-security interests:

Simply put, the deal would have resulted in higher prices and diminished quality and innovation for programs that are critical to national security. The FTC’s enforcement action in this matter dovetails with the DoD report released this week recommending stronger merger oversight of the highly concentrated defense industrial base.

Intellectual-Property Licensing

Shifts in government IP-antitrust patent-licensing policy perspectives

Intellectual-property (IP) licensing, particularly involving patents, is highly important to the dynamic and efficient dissemination of new technologies throughout the economy, which, in turn, promotes innovation and increased welfare (consumers’ and producers’ surplus). See generally, for example, Daniel Spulber’s The Case for Patents and Jonathan Barnett’s Innovators, Firms, and Markets. Patents are a property right, and they do not necessarily convey market power, as the federal government has recognized (see 2017 DOJ-FTC Antitrust Guidelines for the Licensing of Intellectual Property).

Standard setting through standard setting organizations (SSOs) has been a particularly important means of spawning valuable benchmarks (standards) that have enabled new patent-backed technologies to drive innovation and enable mass distribution of new high-tech products, such as smartphones. The licensing of patents that cover and make possible valuable standards—“standard-essential patents” or SEPs—has played a crucial role in bringing to market these products and encouraging follow-on innovations that have driven fast-paced welfare-enhancing product and process quality improvements.

The DOJ and FTC have recognized specific efficiency benefits of IP licensing in their 2017 Antitrust Guidelines for the Licensing of Intellectual Property, stating (citations deleted):

Licensing, cross-licensing, or otherwise transferring intellectual property (hereinafter “licensing”) can facilitate integration of the licensed property with complementary factors of production. This integration can lead to more efficient exploitation of the intellectual property, benefiting consumers through the reduction of costs and the introduction of new products. Such arrangements increase the value of intellectual property to consumers and owners. Licensing can allow an innovator to capture returns from its investment in making and developing an invention through royalty payments from those that practice its invention, thus providing an incentive to invest in innovative efforts. …

[L]imitations on intellectual property licenses may serve procompetitive ends by allowing the licensor to exploit its property as efficiently and effectively as possible. These various forms of exclusivity can be used to give a licensee an incentive to invest in the commercialization and distribution of products embodying the licensed intellectual property and to develop additional applications for the licensed property. The restrictions may do so, for example, by protecting the licensee against free riding on the licensee’s investments by other licensees or by the licensor. They may also increase the licensor’s incentive to license, for example, by protecting the licensor from competition in the licensor’s own technology in a market niche that it prefers to keep to itself.

Unfortunately, however, FTC and DOJ antitrust policies over the last 15 years have too often belied this generally favorable view of licensing practices with respect to SEPs. (See generally here, here, and here). Notably, the antitrust agencies have at various times taken policy postures and enforcement actions indicating that SEP holders may face antitrust challenges if:

  1. they fail to license all comers, including competitors, on fair, reasonable, and nondiscriminatory (FRAND) terms; and
  2. they seek to obtain injunctions against infringers.

In addition, antitrust policy officials (see 2011 FTC Report) have described FRAND price terms as cabined by the difference between the licensing rates for the first (included in the standard) and second (not included in the standard) best competing patented technologies available prior to the adoption of a standard. This pricing measure—based on the “incremental difference” between first and second-best technologies—has been described as necessary to prevent SEP holders from deriving artificial “monopoly rents” that reflect the market power conferred by a standard. (But see then-FTC Commissioner Joshua Wright’s 2013 essay to the contrary, based on the economics of incomplete contracts.)
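Stated as a stylized formula (my shorthand for the “incremental difference” idea, not language drawn from the FTC report itself):

```latex
\[
R_{\text{FRAND}} \;\le\; V_{1} - V_{2}
\]
% where $V_{1}$ is the value to implementers of the patented technology chosen
% for the standard, and $V_{2}$ is the value of the next-best alternative
% technology available before the standard was adopted.
```

On this view, the royalty cap reflects only the technology’s edge over the pre-standard alternative, not any additional value that flows from the technology’s incorporation into the standard itself.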

This approach to SEPs undervalues them, harming the economy. Limitations on seeking injunctions (which are a classic property-right remedy) encourage opportunistic patent infringement and artificially disfavor SEP holders in bargaining over licensing terms with technology implementers, thereby reducing the value of SEPs. SEP holders are further disadvantaged by the presumption that they must license all comers. They also are harmed by the implication that they must be limited to a relatively low hypothetical “ex ante” licensing rate—a rate that totally fails to take into account the substantial economic welfare value that will accrue to the economy due to their contribution to the standard. Considered individually and as a whole, these negative factors discourage innovators from participating in standardization, to the detriment of standards quality. Lower-quality standards translate into inferior standardized products and processes and reduced innovation.

Recognizing this problem, in 2018, DOJ Assistant Attorney General for Antitrust Makan Delrahim announced a “New Madison Approach” (NMA) to SEP licensing, which recognized that:

  1. antitrust remedies are inappropriate for patent-licensing disputes between SEP-holders and implementers of a standard;
  2. SSOs should not allow collective actions by standard-implementers to disfavor patent holders;
  3. SSOs and courts should be hesitant to restrict SEP holders’ right to exclude implementers from access to their patents by seeking injunctions; and
  4. unilateral and unconditional decisions not to license a patent should be per se legal. (See, for example, here and here.)

Acceptance of the NMA would have counteracted the economically harmful degradation of SEPs stemming from prior government policies.

Regrettably, antitrust-enforcement-agency statements over the last year have effectively rejected the NMA. Most recently, in December 2021, the DOJ issued for public comment a Draft Policy Statement on Licensing Negotiations and Remedies for SEPs, which displaces a 2019 statement that had been in line with the NMA. Unless the FTC and the Biden DOJ rethink their new position and decide instead to support the NMA, the anti-innovation approach to SEPs will once again prevail, with unfortunate consequences for American innovation.

The “weaker patents” implications of the draft policy statement would also prove detrimental to national security, as explained in a comment on the statement by a group of leading law, economics, and business scholars (including Nobel Laureate Vernon Smith) convened by the International Center for Law & Economics:

China routinely undermines U.S. intellectual property protections through its industrial policy. The government’s stated goal is to promote “fair and reasonable” international rules, but it is clear that China stretches its power over intellectual property around the world by granting “anti-suit injunctions” on behalf of Chinese smartphone makers, designed to curtail enforcement of foreign companies’ patent rights. …

Insufficient protections for intellectual property will hasten China’s objective of dominating collaborative standard development in the medium to long term. Simultaneously, this will engender a switch to greater reliance on proprietary, closed standards rather than collaborative, open standards. These harmful consequences are magnified in the context of the global technology landscape, and in light of China’s strategic effort to shape international technology standards. Chinese companies, directed by their government authorities, will gain significant control of the technologies that will underpin tomorrow’s digital goods and services.

A Center for Strategic and International Studies submission on the draft policy statement (signed by a former deputy secretary of defense, as well as former directors of the U.S. Patent and Trademark Office and the National Institute of Standards and Technology) also raised China-related national-security concerns:

[T]he largest short-term and long-term beneficiaries of the 2021 Draft Policy Statement are firms based in China. Currently, China is the world’s largest consumer of SEP-based technology, so weakening protection of American owned patents directly benefits Chinese manufacturers. The unintended effect of the 2021 Draft Policy Statement will be to support Chinese efforts to dominate critical technology standards and other advanced technologies, such as 5G. Put simply, devaluing U.S. patents is akin to a subsidized tech transfer to China.

Furthermore, in a more general vein, leading innovation economist David Teece also noted the negative national-security implications in his submission on the draft policy statement:

The US government, in reviewing competition policy issues that might impact standards, therefore needs to be aware that the issues at hand have tremendous geopolitical consequences and cannot be looked at in isolation. … Success in this regard will promote competition and is our best chance to maintain technological leadership—and, along with it, long-term economic growth and consumer welfare and national security.

That’s not all. In its public comment warning against precipitous finalization of the draft policy statement, the Innovation Alliance noted that, in recent years, major foreign jurisdictions have rejected the notion that SEP holders should be deprived of the opportunity to seek injunctions. The Innovation Alliance opined in detail on the China national-security issues (footnotes omitted):

[T]he proposed shift in policy will undermine the confidence and clarity necessary to incentivize investments in important and risky research and development while simultaneously giving foreign competitors who do not rely on patents to drive investment in key technologies, like China, a distinct advantage. …

The draft policy statement … would devalue SEPs, and undermine the ability of U.S. firms to invest in the research and development needed to maintain global leadership in 5G and other critical technologies.

Without robust American investments, China—which has clear aspirations to control and lead in critical standards and technologies that are essential to our national security—will be left without any competition. Since 2015, President Xi has declared “whoever controls the standards controls the world.” China has rolled out the “China Standards 2035” plan and has outspent the United States by approximately $24 billion in wireless communications infrastructure, while China’s five-year economic plan calls for $400 billion in 5G-related investment.

Simply put, the draft policy statement will give an edge to China in the standards race because, without injunctions, American companies will lose the incentive to invest in the research and development needed to lead in standards setting. Chinese companies, on the other hand, will continue to race forward, funded primarily not by license fees, but by the focused investment of the Chinese government. …

Public hearings are necessary to take into full account the uncertainty of issuing yet another policy on this subject in such a short time period.

A key part of those hearings and further discussions must be the national security implications of a further shift in patent enforceability policy. Our future safety depends on continued U.S. leadership in areas like 5G and artificial intelligence. Policies that undermine the enforceability of patent rights disincentivize the substantial private sector investment necessary for research and development in these areas. Without that investment, development of these key technologies will begin elsewhere—likely China. Before any policy is accepted, key national-security stakeholders in the U.S. government should be asked for their official input.

These are not the only comments that raised the negative national-security ramifications of the draft policy statement (see here and here). For example, current Republican and Democratic senators, former International Trade Commissioners, and former top DOJ and FTC officials also noted concerns. What’s more, the Patent Protection Society of China, which represents leading Chinese corporate implementers, filed a rather nonanalytic submission in favor of the draft statement. As one leading patent-licensing lawyer explains: “UC Berkeley Law Professor Mark Cohen, whose distinguished government service includes serving as the USPTO representative in China, submitted a thoughtful comment explaining how the draft Policy Statement plays into China’s industrial and strategic interests.”

Finally, by weakening patent protection, the draft policy statement is at odds with the 2021 National Security Commission on Artificial Intelligence Report, which called for the United States to “[d]evelop and implement national IP policies to incentivize, expand, and protect emerging technologies[,]” in response to Chinese “leveraging and exploiting intellectual property (IP) policies as a critical tool within its national strategies for emerging technologies.”

In sum, adoption of the draft policy statement would raise antitrust risks, weaken key property rights protections for SEPs, and undercut U.S. technological innovation efforts vis-à-vis China, thereby undermining U.S. national security.

FTC v. Qualcomm: Misguided enforcement and national security

U.S. national-security interests have been threatened by more than just the recent SEP policy pronouncements. In filing a January 2017 antitrust suit (at the very end of the Obama administration) against Qualcomm’s patent-licensing practices, the FTC (by a partisan 2-1 vote) ignored the economic efficiencies that underpinned this highly successful American technology company’s practices. Had the suit succeeded, U.S. innovation in a critically important technology area would have needlessly suffered, with China as a major beneficiary. A recent Federalist Society Regulatory Transparency Project report on the New Madison Approach underscored the broad policy implications of FTC v. Qualcomm (citations deleted):

The FTC’s Qualcomm complaint reflected the anti-SEP bias present during the Obama administration. If it had been successful, the FTC’s prosecution would have seriously undermined the freedom of the company to engage in efficient licensing of its SEPs.

Qualcomm is perhaps the world’s leading wireless technology innovator. It has developed, patented, and licensed key technologies that power smartphones and other wireless devices, and continues to do so. Many of Qualcomm’s key patents are SEPs subject to FRAND, directed to communications standards adopted by wireless devices makers. Qualcomm also makes computer processors and chips embodied in cutting edge wireless devices. Thanks in large part to Qualcomm technology, those devices have improved dramatically over the last decade, offering consumers a vast array of new services at a lower and lower price, when quality is factored in. Qualcomm thus is the epitome of a high tech American success story that has greatly benefited consumers.

Qualcomm: (1) sells its chips to “downstream” original equipment manufacturers (OEMs, such as Samsung and Apple), on the condition that the OEMs obtain licenses to Qualcomm SEPs; and (2) refuses to license its FRAND-encumbered SEPs to rival chip makers, while allowing those rivals to create and sell chips embodying Qualcomm SEP technologies to those OEMs that have entered a licensing agreement with Qualcomm.

The FTC’s 2017 antitrust complaint, filed in federal district court in San Francisco, charged that Qualcomm’s “no license, no chips” policy allegedly “forced” OEM cell phone manufacturers to pay elevated royalties on products that use a competitor’s baseband processors. The FTC deemed this an illegal “anticompetitive tax” on the use of rivals’ processors, since phone manufacturers “could not run the risk” of declining licenses and thus losing all access to Qualcomm’s processors (which would be needed to sell phones on important cellular networks). The FTC also argued that Qualcomm’s refusal to license its rivals despite its SEP FRAND commitment violated the antitrust laws. Finally, the FTC asserted that a 2011-2016 Qualcomm exclusive dealing contract with Apple (in exchange for reduced patent royalties) had excluded business opportunities for Qualcomm competitors.

The federal district court held for the FTC. It ordered that Qualcomm end these supposedly anticompetitive practices and renegotiate its many contracts. [Among the beneficiaries of new pro-implementer contract terms would have been a leading Chinese licensee of Qualcomm’s, Huawei, the huge Chinese telecommunications company that has been accused by the U.S. government of using technological “back doors” to spy on the United States.]

Qualcomm appealed, and in August 2020 a panel of the Ninth Circuit Court of Appeals reversed the district court, holding for Qualcomm. Some of the key points underlying this holding were: (1) Qualcomm had no antitrust duty to deal with competitors, consistent with established Supreme Court precedent (a very narrow exception to this precedent did not apply); (2) Qualcomm’s rates were chip supplier neutral because all OEMs paid royalties, not just rivals’ customers; (3) the lower court failed to show how the “no license, no chips” policy harmed Qualcomm’s competitors; and (4) Qualcomm’s agreements with Apple did not have the effect of substantially foreclosing the market to competitors. The Ninth Circuit as a whole rejected the FTC’s “en banc” appeal for review of the panel decision.

The appellate decision in Qualcomm largely supports pillar four of the NMA, that unilateral and unconditional decisions not to license a patent should be deemed legal under the antitrust laws. More generally, the decision evinces a refusal to find anticompetitive harm in licensing markets without hard empirical support. The FTC and the lower court’s findings of “harm” had been essentially speculative and anecdotal at best. They had ignored the “big picture” that the markets in which Qualcomm operates had seen vigorous competition and the conferral of enormous and growing welfare benefits on consumers, year-by-year. The lower court and the FTC had also turned a deaf ear to a legitimate efficiency-related business rationale that explained Qualcomm’s “no license, no chips” policy – a fully justifiable desire to obtain a fair return on Qualcomm’s patented technology.

Qualcomm is well reasoned, and in line with sound modern antitrust precedent, but it is only one holding. The extent to which this case’s reasoning proves influential in other courts may in part depend on the policies advanced by DOJ and the FTC going forward. Thus, a preliminary examination of the Biden administration’s emerging patent-antitrust policy is warranted. [Subsequent discussion shows that the Biden administration apparently has rejected pro-consumer policies embodied in the 9th U.S. Circuit’s Qualcomm decision and in the NMA.]

Although the 9th Circuit did not comment on them, national-security-policy concerns weighed powerfully against the FTC v. Qualcomm suit. In a July 2019 Statement of Interest (SOI) filed with the circuit court, DOJ cogently set forth the antitrust flaws in the district court’s decision favoring the FTC. Furthermore, the SOI also explained that “the public interest” favored a stay of the district court holding, due to national-security concerns (described in some detail in statements by the departments of Defense and Energy, appended to the SOI):

[T]he public interest also takes account of national security concerns. Winter v. NRDC, 555 U.S. 7, 23-24 (2008). This case presents such concerns. In the view of the Executive Branch, diminishment of Qualcomm’s competitiveness in 5G innovation and standard-setting would significantly impact U.S. national security. A251-54 (CFIUS); LD ¶¶10-16 (Department of Defense); ED ¶¶9-10 (Department of Energy). Qualcomm is a trusted supplier of mission-critical products and services to the Department of Defense and the Department of Energy. LD ¶¶5-8; ED ¶¶8-9. Accordingly, the Department of Defense “is seriously concerned that any detrimental impact on Qualcomm’s position as global leader would adversely affect its ability to support national security.” LD ¶16.

The [district] court’s remedy [requiring the renegotiation of Qualcomm’s licensing contracts] is intended to deprive, and risks depriving, Qualcomm of substantial licensing revenue that could otherwise fund time-sensitive R&D and that Qualcomm cannot recover later if it prevails. See, e.g., Op. 227-28. To be sure, if Qualcomm ultimately prevails, vacatur of the injunction will limit the severity of Qualcomm’s revenue loss and the consequent impairment of its ability to perform functions critical to national security. The Department of Defense “firmly believes,” however, “that any measure that inappropriately limits Qualcomm’s technological leadership, ability to invest in [R&D], and market competitiveness, even in the short term, could harm national security. The risks to national security include the disruption of [the Department’s] supply chain and unsure U.S. leadership in 5G.” LD ¶3. Consequently, the public interest necessitates a stay pending this Court’s resolution of the merits. In these rare circumstances, the interest in preventing even a risk to national security—“an urgent objective of the highest order”—presents reason enough not to enforce the remedy immediately. Int’l Refugee Assistance Project, 137 S. Ct. at 2088 (internal quotations omitted).

Not all national-security arguments against antitrust enforcement may be well-grounded, of course. The key point is that the interests of national security and consumer-welfare-centric antitrust are fully aligned when antitrust suits would inefficiently undermine the competitive vigor of a firm or firms that play a major role in supporting U.S. national-security interests. Such was the case in FTC v. Qualcomm. More generally, heightened antitrust scrutiny of efficient patent-licensing practices (as threatened by the Biden administration) would tend to diminish innovation by U.S. patentees, particularly in areas covered by standards that are key to leading global technologies. Such a diminution in innovation will tend to weaken American advantages in important industry sectors that are vital to U.S. national-security interests.

Proposed Federal Antitrust Legislation

Proposed federal antitrust legislation being considered by Congress (see here, here, and here for informed critiques) would prescriptively restrict certain large technology companies’ business transactions. If enacted, such legislation would preclude case-specific analysis of potential transaction-specific efficiencies, thereby undermining the consumer-welfare standard at the heart of current sound and principled antitrust enforcement. The legislation would also be at odds with our national-security interests, as a recent U.S. Chamber of Commerce paper explains:

Congress is considering new antitrust legislation which, perversely, would weaken leading U.S. technology companies by crafting special purpose regulations under the guise of antitrust to prohibit those firms from engaging in business conduct that is widely acceptable when engaged in by rival competitors.

A series of legislative proposals – some of which already have been approved by relevant Congressional committees – would, among other things: dismantle these companies; prohibit them from engaging in significant new acquisitions or investments; require them to disclose sensitive user data and sensitive IP and trade secrets to competitors, including those that are foreign-owned and controlled; facilitate foreign influence in the United States; and compromise cybersecurity.  These bills would fundamentally undermine American security interests while exempting from scrutiny Chinese and other foreign firms that do not meet arbitrary user and market capitalization thresholds specified in the legislation. …

The United States has never used legislation to punish success. In many industries, scale is important and has resulted in significant gains for the American economy, including small businesses.  U.S. competition law promotes the interests of consumers, not competitors. It should not be used to pick winners and losers in the market or to manage competitive outcomes to benefit select competitors.  Aggressive competition benefits consumers and society, for example by pushing down prices, disrupting existing business models, and introducing innovative products and services.

If enacted, the legislative proposals would drag the United States down in an unfolding global technological competition.  Companies captured by the legislation would be required to compete against integrated foreign rivals with one hand tied behind their backs.  Those firms that are the strongest drivers of U.S. innovation in AI, quantum computing, and other strategic technologies would be hamstrung or even broken apart, while foreign and state-backed producers of these same technologies would remain unscathed and seize the opportunity to increase market share, both in the U.S. and globally. …

Instead of warping antitrust law to punish a discrete group of American companies, the U.S. government should focus instead on vigorous enforcement of current law and on vocally opposing and effectively countering foreign regimes that deploy competition law and other legal and regulatory methods as industrial policy tools to unfairly target U.S. companies.  The U.S. should avoid self-inflicted wounds to our competitiveness and national security that would result from turning antitrust into a weapon against dynamic and successful U.S. firms.      

Consistent with this analysis, former Obama administration Defense Secretary Leon Panetta and former Trump administration Director of National Intelligence Dan Coats argued in a letter to U.S. House leadership (see here) that “imposing severe restrictions solely on U.S. giants will pave the way for a tech landscape dominated by China — echoing a position voiced by the Big Tech companies themselves.”

The national-security arguments against current antitrust legislative proposals, like the critiques of the unfounded FTC v. Qualcomm case, represent an alignment between sound antitrust policy and national-security analysis. Unfounded antitrust attacks on efficient business practices by large firms that help maintain U.S. technological leadership in key areas undermine both principled antitrust and national security.

Conclusion

Enlightened antitrust enforcement, centered on consumer welfare, can and should be applied in a manner that is harmonious with national-security interests.

The cooperation between U.S. federal antitrust enforcers and the DOD in assessing defense-industry mergers and joint ventures is, generally speaking, an example of successful harmonization. This success reflects the fact that antitrust enforcers carry out their reviews of those transactions with an eye toward accommodating efficiencies that advance defense goals without sacrificing consumer welfare. Close antitrust-agency consultation with DOD is key to that approach.

Unfortunately, federal enforcement directed toward efficient intellectual-property licensing, as manifested in the Qualcomm case, reflects a disharmony between antitrust and national security. This disharmony could be eliminated if DOJ and the FTC adopted a dynamic view of intellectual property and the substantial economic-welfare benefits that flow from restrictive patent-licensing transactions.

In sum, a dynamic analysis reveals that consumer welfare is enhanced, not harmed, by not subjecting such licensing arrangements to antitrust threat. A more permissive approach to licensing is thus consistent with principled antitrust and with the national security interest of protecting and promoting strong American intellectual property (and, in particular, patent) protection. The DOJ and the FTC should keep this in mind and make appropriate changes to their IP-antitrust policies forthwith.

Finally, proposed federal antitrust legislation would bring about statutory changes that would simultaneously displace consumer welfare considerations and undercut national security interests. As such, national security is supported by rejecting unsound legislation, in order to keep in place consumer-welfare-based antitrust enforcement.

In a constructive development, the Federal Trade Commission has joined its British counterpart in investigating Nvidia’s proposed $40 billion acquisition of chip designer Arm, a subsidiary of SoftBank. Arm provides the technological blueprints for wireless communications devices and, subject to a royalty fee, makes those crown-jewel assets available to all interested firms. Notwithstanding Nvidia’s stated commitment to keep the existing policy in place, there is an obvious risk that the new parent, one of the world’s leading chip makers, would at some time modify this policy with adverse competitive effects.

Ironically, the FTC is likely part of the reason that the Nvidia-Arm transaction is taking place.

Since the mid-2000s, the FTC and other leading competition regulators (except for the U.S. Department of Justice’s Antitrust Division under the leadership of former Assistant Attorney General Makan Delrahim) have intervened extensively in licensing arrangements in wireless device markets, culminating in the FTC’s recent failed suit against Qualcomm. The Nvidia-Arm transaction suggests that these actions may simply lead chip designers to abandon the licensing model and shift toward structures that monetize chip-design R&D through integrated hardware and software ecosystems. Amazon and Apple are already undertaking chip innovation through this model. Antitrust action that accelerates this movement toward in-house chip design is likely to have adverse effects for the competitive health of the wireless ecosystem.

How IP Licensing Promotes Market Access

Since its inception, the wireless communications market has relied on a handful of IP licensors to supply device producers and other intermediate users with a common suite of technology inputs. The result has been an efficient division of labor between firms that specialize in upstream innovation and firms that specialize in production and other downstream functions. Contrary to the standard assumption that IP rights limit access, this licensing-based model ensures technology access to any firm willing to pay the royalty fee.

Efforts by regulators to reengineer existing relationships between innovators and implementers endanger this market structure by inducing innovators to abandon licensing-based business models, which now operate under a cloud of legal insecurity, for integrated business models in which returns on R&D investments are captured internally through hardware and software products. Rather than expanding technology access and intensifying competition, antitrust restraints on licensing freedom are liable to limit technology access and increase market concentration.

Regulatory Intervention and Market Distortion

This interventionist approach has relied on the assertion that innovators can “lock in” producers and extract a disproportionate fee in exchange for access. This prediction has never found support in fact. Contrary to theoretical arguments that patent owners can impose double-digit “royalty stacks” on device producers, empirical researchers have repeatedly found that the estimated range of aggregate rates lies in the single digits. These findings are unsurprising given market performance over more than two decades: adoption has accelerated as quality-adjusted prices have fallen and innovation has never ceased. If rates were exorbitant, market growth would have been slow, and the smartphone would be a luxury for the rich.

Despite these empirical infirmities, the FTC and other competition regulators have persisted in taking action to mitigate “holdup risk” through policy statements and enforcement actions designed to preclude IP licensors from seeking injunctive relief. The result is a one-sided legal environment in which the world’s largest device producers can effectively infringe patents at will, knowing that the worst-case scenario is a “reasonable royalty” award determined by a court, plus attorneys’ fees. Without any credible threat to deny access even after a favorable adjudication on the merits, any IP licensor’s ability to negotiate a royalty rate that reflects the value of its technology contribution is constrained.
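A stylized example may help show why such a regime invites holdout. The sketch below is purely illustrative: all of the figures are hypothetical, and it ignores attorneys’ fees, enhanced damages, and litigation costs. It compares the cost of taking a license up front with the expected, discounted cost of infringing now and paying a court-set “reasonable royalty” only if the patent holder sues and prevails:

```python
# Stylized illustration (hypothetical numbers): when the worst case for an
# unlicensed implementer is a delayed court award at the FRAND rate, holdout
# can dominate licensing up front.

def expected_cost_of_holdout(frand_rate, units, p_enforced, years_delay, discount_rate):
    """Expected present cost of infringing and paying the FRAND rate only if sued and found liable."""
    court_award = frand_rate * units
    return p_enforced * court_award / (1 + discount_rate) ** years_delay

license_now = 2.00 * 1_000_000          # pay $2.00 per unit today on 1M units
holdout = expected_cost_of_holdout(
    frand_rate=2.00, units=1_000_000,
    p_enforced=0.7, years_delay=4, discount_rate=0.08)

print(f"license now: ${license_now:,.0f}")   # $2,000,000
print(f"holdout:     ${holdout:,.0f}")       # roughly $1.03 million
```

Under these assumed numbers, holdout costs roughly half as much as licensing, which is precisely the bargaining asymmetry described above.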

Assuming no change in IP-licensing policy is on the horizon, it is therefore not surprising that an IP licensor would seek to shift toward an integrated business model in which IP is not licensed but embedded within an integrated suite of products and services. Alternatively, an IP licensor might seek to be acquired by a firm that already has such a model in place. Hence, FTC v. Qualcomm leads Arm to Nvidia.

The Error Costs of Non-Evidence-Based Antitrust

These counterproductive effects of antitrust intervention demonstrate the error costs that arise when regulators act based on unverified assertions of impending market failure. Relying on the somewhat improbable assumption that chip suppliers can dictate licensing terms to device producers that are among the world’s largest companies, competition regulators have placed at risk the legal predicates of IP rights and enforceable contracts that have made the wireless-device market an economic success. As antitrust risk intensifies, the return on licensing strategies falls and competitive advantage shifts toward integrated firms that can monetize R&D internally through stand-alone product and service ecosystems.

Far from increasing competitiveness, regulators’ current approach toward IP licensing in wireless markets is likely to reduce it.

The European Court of Justice issued its long-awaited ruling Dec. 9 in the Groupe Canal+ case. The case centered on licensing agreements in which Paramount Pictures granted absolute territorial exclusivity to several European broadcasters, including Canal+.

Back in 2015, the European Commission charged six U.S. film studios, including Paramount, as well as British broadcaster Sky UK Ltd., with illegally limiting access to content. The crux of the EC’s complaint was that the contractual agreements to limit cross-border competition for content distribution ran afoul of European Union competition law. Paramount ultimately settled its case with the commission and agreed to remove the problematic clauses from its contracts. This affected third parties like Canal+, who lost valuable contractual protections.

While the ECJ ultimately upheld the agreements on what amounts to procedural grounds (Canal+ was unduly affected by a decision to which it was not a party), the case provides yet another example of the European Commission’s misguided stance on absolute territorial licensing, sometimes referred to as “geo-blocking.”

The EC’s long-running efforts to restrict geo-blocking emerge from its attempts to harmonize trade across the EU. Notably, in its Digital Single Market initiative, the Commission envisioned

[A] Digital Single Market is one in which the free movement of goods, persons, services and capital is ensured and where individuals and businesses can seamlessly access and exercise online activities under conditions of fair competition, and a high level of consumer and personal data protection, irrespective of their nationality or place of residence.

This policy stance has been endorsed consistently by the European Court of Justice. In the 2011 Murphy decision, for example, the court held that agreements between rights holders and broadcasters infringe European competition law when they categorically prevent the latter from supplying “decoding devices” to consumers located in other member states. More precisely, while rights holders can license their content on a territorial basis, they cannot restrict so-called “passive sales”; broadcasters can be prevented from actively chasing consumers in other member states, but not from serving them altogether. If this sounds Kafkaesque, it’s because it is.

The problem with the ECJ’s vision is that it elides the complex factors that underlie a healthy free-trade zone. Geo-blocking frequently is misunderstood or derided by consumers as an unwarranted restriction on their consumption preferences. It doesn’t feel “fair” or “seamless” when a rights holder can decide who can access its content and on what terms. But that doesn’t mean geo-blocking is a nefarious or socially harmful practice. Quite the contrary: allowing creators to tailor their distribution options offers both a return to creators and more choice for consumers.

In economic terms, geo-blocking allows rights holders to engage in third-degree price discrimination; that is, they have the ability to charge different prices for different sets of consumers. This type of pricing will increase total welfare so long as it increases output. As Hal Varian puts it:

If a new market is opened up because of price discrimination—a market that was not previously being served under the ordinary monopoly—then we will typically have a Pareto improving welfare enhancement.

Another benefit of third-degree price discrimination is that, by shifting some economic surplus from consumers to firms, it can stimulate investment in much the same way copyright and patents do. Put simply, the prospect of greater economic rents increases the maximum investment firms will be willing to make in content creation and distribution.
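A toy numerical example, in the spirit of Varian’s point above, shows how this can work; the territories, prices, and costs below are purely hypothetical:

```python
# Toy example (hypothetical numbers): opening a previously unserved market via
# third-degree price discrimination raises output and total welfare without
# making anyone worse off.

marginal_cost = 3.0
markets = {"high-value territory": (100, 10.0),   # (consumers, willingness to pay)
           "low-value territory":  (100, 4.0)}

def profit(price, buyers):
    return (price - marginal_cost) * buyers

# Uniform pricing: the seller compares serving only the high-value territory
# with cutting the price enough to serve everyone.
serve_high_only = profit(10.0, 100)          # 700
serve_everyone  = profit(4.0, 200)           # 200
uniform_output  = 100                        # high price wins; low-value territory goes unserved

# Territorial (geo-differentiated) pricing: each territory pays its own price.
discriminating_profit = profit(10.0, 100) + profit(4.0, 100)   # 800
discriminating_output = 200

print(uniform_output, discriminating_output)                        # 100 vs 200 units
print(max(serve_high_only, serve_everyone), discriminating_profit)  # 700 vs 800
```

Under uniform pricing the low-value territory is simply not served; with territorial pricing it is served, output doubles, and total surplus rises while no consumer is made worse off.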

For these reasons, respecting parties’ freedom to license content as they see fit is likely to produce much more efficient outcomes than annulling those agreements through government-imposed “seamless access” and “fair competition” rules. Part of the value of copyright law is in creating space to contract by protecting creators’ property rights. Without geo-blocking, the enforcement of licensing agreements would become much more difficult. Laws restricting copyright owners’ ability to contract freely reduce allocational efficiency, as well as the incentives to create in the first place. Further, when individual creators have commercial and creative autonomy, they gain a degree of predictability that can ensure they will continue to produce content in the future. 

The European Union would do well to adopt a more nuanced understanding of the contractual relationships between producers and distributors. 

[TOTM: The following is the eighth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here. The blog post is based on a forthcoming paper regarding patent holdup, co-authored by Dirk Auer and Julian Morris.]

Image: Samsung SGH-F480V controller board, featuring a Qualcomm MSM6280 chip

In his latest book, Tyler Cowen calls big business an “American anti-hero”. Cowen argues that the growing animosity towards successful technology firms is to a large extent unwarranted. After all, these companies have generated tremendous prosperity and jobs.

Though it is less known to the public than its Silicon Valley counterparts, Qualcomm perfectly fits the anti-hero mold. Despite being a key contributor to the communications standards that enabled the proliferation of smartphones around the globe – an estimated 5 billion people currently own a device – Qualcomm has been on the receiving end of considerable regulatory scrutiny on both sides of the Atlantic (including two enforcement actions in the EU; see here and here).

In the US, Judge Lucy Koh recently ruled that a combination of anticompetitive practices had enabled Qualcomm to charge “unreasonably high royalty rates” for its CDMA and LTE cellular communications technology. Chief among these practices was Qualcomm’s so-called “no license, no chips” policy, whereby the firm refuses to sell baseband processors to implementers that have not taken out a license for its communications technology. Other grievances included Qualcomm’s purported refusal to license its patents to rival chipmakers, and allegations that it attempted to extract exclusivity obligations from large handset manufacturers, such as Apple. According to Judge Koh, these practices resulted in “unreasonably high” royalty rates that failed to comply with Qualcomm’s FRAND obligations.

Judge Koh’s ruling offers an unfortunate example of the numerous pitfalls that decisionmakers face when they second-guess the distributional outcomes achieved through market forces. This is particularly true in the complex standardization space.

The elephant in the room

The first striking feature of Judge Koh’s ruling is what it omits. Throughout the more than 200-page document, there is not a single reference to the concepts of holdup or holdout (crucial terms of art for a ruling that grapples with the prices charged by an SEP holder).

At first sight, this might seem like a semantic quibble. But words are important. Patent holdup (along with the “unreasonable” royalties to which it arguably gives rise) is possible only when a number of cumulative conditions are met. Most importantly, the foundational literature on economic opportunism (here and here) shows that holdup (and holdout) mostly occur when parties have made asset-specific sunk investments. This focus on asset-specific investments is echoed by even the staunchest critics of the standardization status quo (here).

Though such investments may well have been present in the case at hand, there is no evidence that they played any part in the court’s decision. This is not without consequences. If parties did not make sunk relationship-specific investments, then the antitrust case against Qualcomm should have turned upon the alleged exclusion of competitors, not the level of Qualcomm’s royalties. The DOJ said this much in its statement of interest concerning Qualcomm’s motion for partial stay of injunction pending appeal. Conversely, if these investments existed, then patent holdout (whereby implementers refuse to license key pieces of intellectual property) was just as much of a risk as patent holdup (here and here). And yet the court completely overlooked this possibility.

The misguided push for component level pricing

The court also erred by objecting to Qualcomm’s practice of basing license fees on the value of handsets, rather than that of modem chips. In simplified terms, implementers paid Qualcomm a percentage of their devices’ resale price. The court found that this was against Federal Circuit law. Instead, it argued that royalties should be based on the value of the smallest salable patent-practicing component (in this case, baseband chips). This conclusion is dubious both as a matter of law and of policy.

From a legal standpoint, the question of the appropriate royalty base seems far less clear-cut than Judge Koh’s ruling might suggest. For instance, Gregory Sidak observes that in TCL v. Ericsson, Judge Selna used a device’s net selling price as a basis upon which to calculate FRAND royalties. Likewise, in CSIRO v. Cisco, the court also declined to use the “smallest saleable practicing component” as a royalty base. And finally, as Jonathan Barnett observes, the Federal Circuit’s LaserDynamics case law cited by Judge Koh relates to the calculation of damages in patent infringement suits. There is no legal reason to believe that its findings should hold any sway outside of that narrow context. It is one thing for courts to decide upon the methodology that they will use to calculate damages in infringement cases – even if it is a contested one. It is a whole other matter to shoehorn private parties into adopting this narrow methodology in their private dealings.

More importantly, from a policy standpoint, there are significant advantages to basing royalty rates on the price of an end product, rather than that of an intermediate component. This type of pricing notably enables parties to better allocate the risk that is inherent in launching a new product. In simplified terms: implementers want to avoid paying large (fixed) license fees for failed devices, and patent holders want to share in the benefits of successful devices that rely on their inventions. The solution, as Alain Bousquet and his co-authors explain, is to agree on royalty payments that are contingent on success in the market:

Because the demand for a new product is uncertain and/or the potential cost reduction of a new technology is not perfectly known, both seller and buyer may be better off if the payment for the right to use an innovation includes a state-contingent royalty (rather than consisting of just a fixed fee). The inventor wants to benefit from a growing demand for a new product, and the licensee wishes to avoid high payments in case of disappointing sales.

While this explains why parties might opt for royalty-based payments over fixed fees, it does not entirely elucidate the practice of basing royalties on the price of an end device. One explanation is that a technology’s value will often stem from its combination with other goods or technologies. Basing royalties on the value of an end-device enables patent holders to more effectively capture the social benefits that flow from these complementarities.

Imagine the price of the smallest saleable component is identical across all industries, despite it being incorporated into highly heterogeneous devices. For instance, the same modem chip could be incorporated into smartphones (of various price ranges), tablets, vehicles, and other connected devices. The Bousquet line of reasoning (above) suggests that it is efficient for the patent holder to earn higher royalties (from the IP that underpins the modem chips) in those segments where market demand is strongest (i.e. where there are stronger complementarities between the modem chip and the end device).

One way to make royalties more contingent on market success is to use the price of the modem (which is presumably identical across all segments) as a royalty base and negotiate a separate royalty rate for each end device (charging a higher rate for devices that will presumably benefit from stronger consumer demand). But this has important drawbacks. For a start, identifying those segments (or devices) that are most likely to be successful is informationally cumbersome for the inventor. Moreover, this practice could land the patent holder in hot water. Antitrust authorities might naïvely conclude that these varying royalty rates violate the “non-discriminatory” part of FRAND.

A much simpler solution is to apply a single royalty rate (or at least attempt to do so) but use the price of the end device as a royalty base. This ensures that the patent holder’s rewards are not just contingent on the number of devices sold, but also on their value. Royalties will thus more closely track the end-device’s success in the marketplace.   
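To make this concrete, the following sketch compares a royalty computed on the component price with an ad valorem royalty computed on the end device; the devices, prices, and rates are hypothetical and chosen only for illustration:

```python
# Illustrative sketch (hypothetical devices and rates): a single ad valorem
# rate on the end device lets royalties track market value without the
# inventor having to price each segment separately.

devices = {"budget phone": 150, "flagship phone": 1000, "connected car module": 400}
chip_price = 20.0          # assumed identical component price across segments

per_chip_royalty = 0.10 * chip_price      # 10% of the chip price, same everywhere
for name, device_price in devices.items():
    device_based = 0.02 * device_price    # 2% of the end device's price
    print(f"{name:22s} chip-based: ${per_chip_royalty:5.2f}   device-based: ${device_based:6.2f}")

# The chip-based royalty is $2.00 in every segment; the device-based royalty
# ranges from $3.00 to $20.00, rising with the value consumers place on the
# end product that embodies the technology.
```

The component-based royalty is identical across segments, while the device-based royalty rises with the market value of the end product, which is the “informationally light” tracking described in the next paragraph.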

In short, basing royalties on the value of an end-device is an informationally light way for the inventor to capture some of the unforeseen value that might stem from the inclusion of its technology in an end device. Mandating that royalty rates be based on the value of the smallest saleable component ignores this complex reality.

Prices are almost impossible to reconstruct

Judge Koh was similarly imperceptive when assessing Qualcomm’s contribution to the value of key standards, such as LTE and CDMA. 

For a start, she reasoned that Qualcomm’s royalties were large compared to the number of patents it had contributed to these technologies:

Moreover, Qualcomm’s own documents also show that Qualcomm is not the top standards contributor, which confirms Qualcomm’s own statements that QCT’s monopoly chip market share rather than the value of QTL’s patents sustain QTL’s unreasonably high royalty rates.

Given the tremendous heterogeneity that usually exists between the different technologies that make up a standard, simply counting each firm’s contributions is a crude and misleading way to gauge the value of their patent portfolios. Indeed, Qualcomm argued that it had made pioneering contributions to technologies such as CDMA and 4G/5G. Though the value of Qualcomm’s technologies is ultimately an empirical question, the court’s crude patent counting was unlikely to provide a satisfying answer.

Just as problematically, the court also concluded that Qualcomm’s royalties were unreasonably high because “modem chips do not drive handset value.” In its own words:

Qualcomm’s intellectual property is for communication, and Qualcomm does not own intellectual property on color TFT LCD panel, mega-pixel DSC module, user storage memory, decoration, and mechanical parts. The costs of these non-communication-related components have become more expensive and now contribute 60-70% of the phone value. The phone is not just for communication, but also for computing, movie-playing, video-taking, and data storage.

As Luke Froeb and his co-authors have also observed, the court’s reasoning on this point is particularly unfortunate. Though it is clearly true that superior LCD panels, cameras, and storage increase a handset’s value – regardless of the modem chip that is associated with them – it is equally obvious that improvements to these components are far more valuable to consumers when they are also associated with high-performance communications technology.

For example, though there is undoubtedly standalone value in being able to take improved pictures on a smartphone, this value is multiplied by the ability to instantly share these pictures with friends, and automatically back them up on the cloud. Likewise, improving a smartphone’s LCD panel is more valuable if the device is also equipped with a cutting edge modem (both are necessary for consumers to enjoy high-definition media online).

In more technical terms, the court fails to acknowledge that, in the presence of perfect complements, each good makes an incremental contribution of 100% to the value of the whole. A smartphone’s components would be far less valuable to consumers if they were not associated with a high-performance modem, and vice versa. The fallacy to which the court falls prey is perfectly encapsulated by a quote it cites from Apple’s COO:

Apple invests heavily in the handset’s physical design and enclosures to add value, and those physical handset features clearly have nothing to do with Qualcomm’s cellular patents, it is unfair for Qualcomm to receive royalty revenue on that added value.

The question the court should be asking, however, is whether Apple would have gone to the same lengths to improve its devices were it not for Qualcomm’s complementary communications technology. By ignoring this question, Judge Koh all but guaranteed that her assessment of Qualcomm’s royalty rates would be wide of the mark.
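A toy calculation makes the incremental-contribution point concrete; the $500 handset value and the two-component split are hypothetical simplifications:

```python
# Toy illustration (hypothetical values) of the complementarity point: with
# perfect complements, removing either component destroys the whole surplus,
# so each component's incremental contribution equals the full bundle value.

def bundle_value(has_camera_and_screen: bool, has_modem: bool) -> float:
    """Consumers value the handset only when both pieces are present."""
    return 500.0 if (has_camera_and_screen and has_modem) else 0.0

whole = bundle_value(True, True)
incremental_modem  = whole - bundle_value(True, False)   # 500 - 0 = 500
incremental_others = whole - bundle_value(False, True)   # 500 - 0 = 500

print(incremental_modem, incremental_others)  # each "contributes" 100% at the margin
```

Real-world components are rarely perfect complements, but the example shows why attributing a fixed share of handset value to non-communications components, as the court did, understates the modem technology’s marginal contribution.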

Concluding remarks

In short, the FTC v. Qualcomm case shows that courts will often struggle when they try to act as makeshift price regulators. It thus lends further credence to Gregory Werden and Luke Froeb’s conclusion that:

Nothing is more alien to antitrust than enquiring into the reasonableness of prices. 

This is especially true in complex industries, such as the standardization space. The colossal number of parameters that affect the price of a technology is almost impossible to reproduce in a top-down fashion, as the court attempted to do in the Qualcomm case. As a result, courts will routinely draw poor inferences from factors such as the royalty base agreed upon by parties, the number of patents contributed by a firm, and the complex manner in which an individual technology may contribute to the value of an end product. Antitrust authorities and courts would thus do well to recall the wise words of Friedrich Hayek:

If we can agree that the economic problem of society is mainly one of rapid adaptation to changes in the particular circumstances of time and place, it would seem to follow that the ultimate decisions must be left to the people who are familiar with these circumstances, who know directly of the relevant changes and of the resources immediately available to meet them. We cannot expect that this problem will be solved by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders. We must solve it by some form of decentralization.

Underpinning many policy disputes is a frequently rehearsed conflict of visions: Should we experiment with policies that are likely to lead to superior, but unknown, solutions, or should we stick to well-worn policies, regardless of how poorly they fit current circumstances?

This conflict is clearly visible in the debate over whether DOJ should continue to enforce its consent decrees with the major music performing rights organizations (“PROs”), ASCAP and BMI—or terminate them. 

As we note in our recently filed comments with the DOJ, summarized below, the world has moved on since the decrees were put in place in 1941. Given the changed circumstances, the DOJ should terminate the consent decrees. This would allow entrepreneurs, armed with modern technology, to facilitate a true market for public performance rights.

The consent decrees

In the early days of radio, it was unclear how composers and publishers could effectively monitor and enforce their copyrights. Thousands of radio stations across the nation were playing the songs that tens of thousands of composers had written. Given the state of technology, there was no readily foreseeable way to enable bargaining between the stations and composers for license fees associated with these plays.

In 1914, a group of rights holders established the American Society of Composers, Authors and Publishers (ASCAP) as a way to overcome these transaction costs by negotiating with radio stations on behalf of all of its members.
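A rough back-of-the-envelope sketch, using hypothetical counts, shows why a collective licensing agent slashes these transaction costs:

```python
# Back-of-the-envelope illustration (hypothetical counts): licensing through a
# single collective turns a combinatorial number of bilateral negotiations
# into a roughly linear one.

stations = 3_000        # assumed number of radio stations
composers = 30_000      # assumed number of composers/publishers

bilateral_deals = stations * composers      # every station deals with every composer
via_pro = stations + composers              # every party deals once with the PRO

print(f"bilateral negotiations: {bilateral_deals:,}")   # 90,000,000
print(f"negotiations via a PRO: {via_pro:,}")           # 33,000
```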

Even though ASCAP’s business was clearly aimed at ensuring that rightsholders were appropriately compensated for the use of their works, which logically would have incentivized greater output of licensable works, the nonstandard arrangement it embodied was unacceptable to the antitrust enforcers of the era. Not long after it was created, the Department of Justice began investigating ASCAP for potential antitrust violations.

While the agglomeration of rights under a single entity had obvious benefits for licensors and licensees of musical works, a power struggle nevertheless emerged between ASCAP and radio broadcasters over the terms of those licenses. Eventually this struggle led to the formation of a new PRO, the broadcaster-backed BMI, in 1939. The following year, the DOJ challenged the activities of both PROs in dual criminal antitrust proceedings. The eventual result was a set of consent decrees in 1941 that, with relatively minor modifications over the years, still regulate the music industry.

Enter the Internet

The emergence of new ways to distribute music has, perhaps unsurprisingly, resulted in renewed interest from artists in developing alternative ways to license their material. In 2014, BMI and ASCAP asked the DOJ to modify their consent decrees to permit music publishers partially to withdraw from the PROs, which would have enabled those partially withdrawing publishers to license their works to digital services under separate agreements (and prohibited the PROs from licensing their works to those same services). However, the DOJ rejected this request and insisted that the consent decrees require “full-work” licenses — a result that would have not only entrenched the status quo, but also erased the competitive differences that currently exist between the PROs. (It might also have created other problems, such as limiting collaborations between artists who currently license through different PROs.)

This episode demonstrates a critical flaw in how the consent decrees currently operate. Imposing full-work license obligations on PROs would have short-circuited the limited market that currently exists, to the detriment of creators, competition among PROs, and, ultimately, consumers. Paradoxically, these harms flow directly from a presumption that administrative officials, seeking to enforce antitrust law — the ultimate aim of which is to promote competition and consumer welfare — can, through top-down regulatory intervention, dictate market terms better than participants working together.

If a PRO wants to offer full-work licenses to its licensee-customers, it should be free to do so (including, e.g., by contracting with other PROs in cases where the PRO in question does not own the work outright). Such licenses could be a great boon to licensees and the market. But such an innovation would flow from a feedback mechanism in the market, and would be subject to that same feedback mechanism.

However, for the DOJ as a regulatory overseer to intervene in the market and assert a preference that it deemed superior (but that was clearly not the result of market demand, or subject to market discipline) is fraught with difficulty. And this is the emblematic problem with the consent decrees and the mandated licensing regimes they create: they allow regulators to imagine that they have both the knowledge and expertise to manage highly complicated markets. But, as Mark Lemley has observed, “[g]one are the days when there was any serious debate about the superiority of a market-based economy over any of its traditional alternatives, from feudalism to communism.”

It is no knock against the DOJ that it patently does not have either the knowledge or expertise to manage these markets: no one does. That’s the entire point of having markets, which facilitate the transmission and effective utilization of vast amounts of disaggregated information, including subjective preferences, that cannot be known to anyone other than the individual who holds them. When regulators can allow this process to work, they should.

Letting the market move forward

Some advocates of the status quo have recommended that the consent orders remain in place, because 

Without robust competition in the music licensing market, consumers could face higher prices, less choice, and an increase in licensing costs that could render many vibrant public spaces silent. In the absence of a truly competitive market in which PROs compete to attract services and other licensees, the consent decrees must remain in place to prevent ASCAP and BMI from abusing their substantial market power.

This gets to the very heart of the problem with the conflict of visions that undergirds policy debates. Advocating for the status quo in this manner is based on a static view of “markets,” one that is, moreover, rooted in an early twentieth-century conception of the relevant industries. The DOJ froze the licensing market in time with the consent decrees — perhaps justifiably in 1941 given the state of technology and the very high transaction costs involved. But technology and business practices have evolved and are now much more capable of handling the complex, distributed set of transactions necessary to make the performance license market a reality.

Believing that the absence of the consent decrees will force the performance licensing market to collapse into an anticompetitive wasteland reflects a failure of imagination and suggests a fundamental distrust in the power of the market to uncover novel solutions—against the overwhelming evidence to the contrary.

Yet, those of a dull and pessimistic mindset need not fear unduly the revocation of the consent decrees. For if evidence emerges that the market participants (including the PROs and whatever other entities emerge) are engaging in anticompetitive practices to the detriment of consumer welfare, the DOJ can sue those entities. The threat of such actions should be sufficient in itself to deter such anticompetitive practices, but if it is not, then the sword of antitrust, including potentially the imposition of consent decrees, can once again be wielded.

Meanwhile, those of us with an optimistic, imaginative mindset look forward to a time in the near future when entrepreneurs devise innovative and cost-effective solutions to the problem of highly distributed music licensing. In some respects their job is made easier by the fact that an increasing proportion of music is streamed via a small number of large companies (Spotify, Pandora, Apple, Amazon, Tencent, YouTube, Tidal, etc.). But it is quite feasible that, in the absence of the consent decrees, new licensing systems will emerge, using modern database technologies, blockchain, and other distributed ledgers, that will enable much more effective usage-based licenses applicable not only to these streaming services but to others as well.

We hope the DOJ has the foresight to allow such true competition to enter this market and the strength to believe enough in our institutions that it can permit some uncertainty while entrepreneurs experiment with superior methods of facilitating music licensing.

An important but unheralded announcement was made on October 10, 2018: The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) released a draft CEN-CENELEC Workshop Agreement (CWA) on the licensing of Standard Essential Patents (SEPs) for 5G/Internet of Things (IoT) applications. The final agreement, due to be published in early 2019, is likely to have significant implications for the development and roll-out of both 5G and IoT applications.

CEN and CENELEC, which along with the European Telecommunications Standards Institute (ETSI) are the officially recognized standard-setting bodies in Europe, are private international nonprofit organizations with a widespread network of technical experts from industry, public administrations, associations, academia, and societal organizations. This first workshop brought together representatives of the 5G/IoT technology user and provider communities to discuss licensing best practices and recommendations for a code of conduct for the licensing of SEPs. The aim was to produce a CWA that reflects and balances the needs of both communities.

The final consensus outcome of the Workshop will be published as a CEN-CENELEC Workshop Agreement (CWA). The draft, which is available for public comment, comprises principles and guidelines that lay a foundation for the future licensing of standard essential patents for fifth-generation (5G) technologies. The draft also contains a Q&A section to aid new implementers and patent holders.

The IoT ecosystem is likely to comprise over 20 billion interconnected devices by 2020 and to represent a market of $17 trillion (about the same as the current GDP of the U.S.). The data collected by one device, such as a smart thermostat that learns when the consumer is likely to be at home, can be used to improve the performance of another connected device, such as a smart fridge. Cellular technologies are a core component of the IoT ecosystem, alongside applications, devices, software, etc., as they provide connectivity within the IoT system. 5G technology, in particular, is expected to play a key role in complex IoT deployments, extending the use of cellular networks beyond smartphones to smart home appliances, autonomous vehicles, health-care facilities, and more, in what has been aptly described as the fourth industrial revolution.

Indeed, the role of 5G in the IoT is so significant that the proposed $117 billion takeover bid for U.S. tech giant Qualcomm by Singapore-based Broadcom was blocked by President Trump, citing national-security concerns. (A letter sent by the Committee on Foreign Investment in the US suggested that Broadcom might starve Qualcomm of investment, preventing it from competing effectively against foreign competitors–implicitly, those in China.)

While commercial roll-out of 5G technology has not yet fully begun, several efforts are being made by innovator companies, standard-setting bodies, and governments to maximize the benefits of its deployment.

The draft CWA Guidelines (hereinafter “the guidelines”) are consistent with some of the recent jurisprudence on SEPs on various issues. While there is relatively little guidance specifically in relation to 5G SEPs, the guidelines provide clarifications on several aspects of SEP licensing that will be useful, particularly regarding the negotiating process and the conduct of both parties.

The guidelines contain six principles, followed by a set of questions pertaining to SEP licensing. The principles deal with:

  1. The obligation of SEP holders to license the SEPs on Fair, Reasonable and Non-Discriminatory (FRAND) terms;
  2. The obligation on both parties to conduct negotiations in good faith;
  3. The obligation of both parties to provide necessary information (subject to confidentiality) to facilitate timely conclusion of the licensing negotiation;
  4. Compensation that is “fair and reasonable” and achieves the right balance between incentives to contribute technology and the cost of accessing that technology;
  5. A non-discrimination obligation on the SEP holder toward similarly situated licensees, even though their terms need not be identical; and
  6. Recourse to a third party FRAND determination either by court or arbitration if the negotiations fail to conclude in a timely manner.

There are 22 questions and answers, as well, which define basic terms and touch on issues such as what amounts to good-faith conduct by negotiating parties, global portfolio licensing, FRAND royalty rates, patent pooling, dispute resolution, injunctions, and other issues relevant to FRAND licensing policy in general.

Below are some significant contributions that the draft report makes on issues such as the supply-chain level at which licensing is best done, the treatment of small and medium enterprises (SMEs), non-disclosure agreements, good-faith negotiations, and alternative dispute resolution.

In the IoT ecosystem, many technologies will typically be adopted, several of which will be standardized. The guidelines offer help to product and service developers in this regard and suggest that one may need to obtain licenses from SEP owners for products or services incorporating communications technologies like 3G UMTS, 4G LTE, Wi-Fi, NB-IoT, or Cat-M, or video codecs such as H.264. The guidelines, however, clarify that with the deployment of IoT, licenses for several other standards may be needed, and developers should be mindful of these complexities when starting out in order to avoid potential infringements.

Notably, the guidelines suggest that, in order to simplify licensing, reduce costs for all parties, and maintain a level playing field between licensees, SEP holders should license at one level. While this may vary between different industries, for communications technology the licensing point is often at the end-user equipment level. There has been a fair bit of debate on this issue, and the recent order by Judge Koh granting the FTC's motion for partial summary judgment deals with some of it.

In the judgment delivered on November 6, Judge Koh relied primarily on the Ninth Circuit decisions in Microsoft v. Motorola (2012 and 2015) to rule on the core issue of the scope of the FRAND commitments–specifically, on the question of whether licensing extends to all levels or is confined to the end-device level. The court interpreted the pro-competitive principles behind the non-discrimination requirement to mean that such commitments are “sweeping” and, essentially, that an SEP holder has to license to anyone willing to offer a FRAND rate globally. It also cited Ericsson v. D-Link, where the Federal Circuit held that “compliant devices necessarily infringe certain claims in patents that cover technology incorporated into the standard and so practice of the standard is impossible without licenses to all incorporated SEP technology.”

The guidelines speak to the importance of non-disclosure agreements (NDAs) in such licensing negotiations, given that some of the information exchanged between the parties, such as claim charts, may be sensitive and confidential. Therefore, an undue delay in agreeing to an NDA, without well-founded reasons, might be taken as evidence of a lack of good faith in negotiations and could render such a licensee an unwilling one.

They also provide quite a boost for small and medium enterprises (SMEs) in licensing negotiations by addressing the duty of SEP owners to be mindful of SMEs that may be less experienced and therefore lack the information from which to draw assurance that proposed terms are FRAND. The guidelines provide that SEP owners should share whatever information they can under NDA to help the negotiation process. Equally, the same obligation applies to a more experienced licensee dealing with an SEP owner that is an SME.

There is some clarity on time frames for negotiations: the guidelines provide a maximum time that parties should take to respond to offers and counter-offers, which may extend to several months in complex cases involving hundreds of patents. The guidelines also prescribe how potential licensees should conduct themselves upon receiving an offer and how to make counter-offers in a timely manner.

Furthermore, the guidelines lay out the various ways in which royalty rates may be structured and clarify that there is no single fixed way this must be done. Similarly, they offer myriad ways in which potential licensees may determine for themselves whether the rates offered to them are fair and reasonable, such as third-party patent-landscape reports, public announcements, expert advice, etc.

Finally, should a negotiation reach an impasse, the guidelines endorse alternative dispute-resolution mechanisms, such as mediation or arbitration, for the parties to resolve the issue. Bodies such as the International Chamber of Commerce and the World Intellectual Property Organization may provide useful platforms in this regard.

Almost 20 years have passed since technology pioneer Kevin Ashton first coined the phrase “Internet of Things.” While companies are gearing up to participate in the IoT market, regulation and policy in the IoT world remain far from a predictable framework. There is a lot of guesswork about how rules and standards are likely to shape up, with little or no guidance for companies on how to prepare for what faces them very soon. Concrete efforts such as this one are therefore welcome. The draft guidelines do attempt to offer some much-needed clarity and are now open for public comment, with comments due by December 13. It will be good to see what the final CWA report on licensing of SEPs for 5G and IoT looks like.


Imagine if you will… that a federal regulatory agency were to decide that the iPhone ecosystem was too constraining and too expensive; that consumers — who had otherwise voted for iPhones with their dollars — were being harmed by the fact that the platform was not “open” enough.

Such an agency might resolve (on the basis of a very generous reading of a statute) to force Apple to make its iOS software available to any hardware platform that wished to have it, in the process making all of the apps and user data accessible to the consumer via these new third parties, on terms set by the agency… for free.

Difficult as it may be to picture this ever happening, it is exactly the sort of Twilight Zone scenario that FCC Chairman Tom Wheeler is currently proposing with his new set-top box proposal.

Based on the limited information we have so far (a fact sheet and an op-ed), Chairman Wheeler’s new proposal does claw back some of the worst excesses of his initial draft (which we critiqued in our comments and reply comments to that proposal).

But it also appears to reinforce others — most notably the plan's disregard for the right of content creators to control the distribution of their content. Wheeler continues to dismiss the complex business models, relationships, and licensing terms that have evolved over years of competition and innovation. Instead, he offers a one-size-fits-all “solution” to a “problem” that market participants are already falling over themselves to provide.

Plus ça change…

To begin with, Chairman Wheeler’s new proposal is based on the same faulty premise: that consumers pay too much for set-top boxes, and that the FCC is somehow both prescient enough and Congressionally ordained to “fix” this problem. As we wrote in our initial comments, however,

[a]lthough the Commission asserts that set-top boxes are too expensive, the history of overall MVPD prices tells a remarkably different story. Since 1994, per-channel cable prices including set-top box fees have fallen by 2 percent, while overall consumer prices have increased by 54 percent. After adjusting for inflation, this represents an impressive overall price decrease.
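To see what “adjusting for inflation” implies here, a minimal sketch of the arithmetic, using only the two percentages quoted above:

    # Real (inflation-adjusted) price change implied by the quoted figures.
    nominal_change = -0.02   # per-channel cable price change since 1994, incl. set-top fees
    inflation = 0.54         # overall consumer price increase over the same period

    real_change = (1 + nominal_change) / (1 + inflation) - 1
    print(f"{real_change:.1%}")  # -36.4%: a decline of roughly a third in real terms

In other words, relative to the general price level, per-channel cable prices (set-top fees included) have fallen by roughly a third since 1994.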

And the fact is that no one buys set-top boxes in isolation; rather, the price consumers pay for cable service includes the ability to access that service. Whether the set-top box fee is broken out on subscribers’ bills or not, the total price consumers pay is unlikely to change as a result of the Commission’s intervention.

As we have previously noted, the MVPD set-top box market is an aftermarket; no one buys set-top boxes without first (or simultaneously) buying MVPD service. And as economist Ben Klein (among others) has shown, direct competition in the aftermarket need not be plentiful for the market to nevertheless be competitive:

Whether consumers are fully informed or uninformed, consumers will pay a competitive package price as long as sufficient competition exists among sellers in the [primary] market.

Engineering the set-top box aftermarket to bring more direct competition to bear may redistribute profits, but it’s unlikely to change what consumers pay.

Stripped of its questionable claims regarding consumer prices and placed in the proper context — in which consumers enjoy more ways to access more video content than ever before — Wheeler’s initial proposal ultimately rested on its promise to “pave the way for a competitive marketplace for alternate navigation devices, and… end the need for multiple remote controls.” Weak sauce, indeed.

He now adds a new promise: that “integrated search” will be seamlessly available for consumers across the new platforms. But just as universal remotes and channel-specific apps on platforms like Apple TV have already made his “multiple remotes” promise a hollow one, so, too, have competitive pressures already begun to deliver integrated search.

Meanwhile, such marginal benefits come with a host of substantial costs, as others have pointed out. Do we really need the FCC to grant itself more powers and create a substantial and coercive new regulatory regime to mandate what the market is already poised to provide?

From ignoring copyright to obliterating copyright

Chairman Wheeler’s first proposal engendered fervent criticism for the impossible position in which it placed MVPDs — of having to disregard, even outright violate, their contractual obligations to content creators.

Commendably, the new proposal acknowledges that contractual relationships between MVPDs and content providers should remain “intact.” Thus, the proposal purports to enable programmers and MVPDs to maintain “their channel position, advertising and contracts… in place.” MVPDs will retain “end-to-end” control of the display of content through their apps, and all contractually guaranteed content protection mechanisms will remain, because the “pay-TV’s software will manage the full suite of linear and on-demand programming licensed by the pay-TV provider.”

But, improved as it is, the new proposal continues to operate in an imagined world where the incredibly intricate and complex process by which content is created and distributed can be reduced to the simplest of terms, dictated by a regulator and applied uniformly across all content and all providers.

According to the fact sheet, the new proposal would “[p]rotect[] copyrights and… [h]onor[] the sanctity of contracts” through a “standard license”:

The proposed final rules require the development of a standard license governing the process for placing an app on a device or platform. A standard license will give device manufacturers the certainty required to bring innovative products to market… The license will not affect the underlying contracts between programmers and pay-TV providers. The FCC will serve as a backstop to ensure that nothing in the standard license will harm the marketplace for competitive devices.

But programming is distributed under a diverse range of contract terms. The only way a single, “standard license” could possibly honor these contracts is by forcing content providers to license all of their content under identical terms.

Leaving aside for a moment the fact that the FCC has no authority whatever to do this, for such a scheme to work, the agency would necessarily have to strip content holders of their right to govern the terms on which their content is accessed. After all, if MVPDs are legally bound to redistribute content on fixed terms, they have no room to permit content creators to freely exercise their rights to specify terms like windowing, online distribution restrictions, geographic restrictions, and the like.

In other words, the proposal simply cannot deliver on its promise that “[t]he license will not affect the underlying contracts between programmers and pay-TV providers.”

But fear not: According to the Fact Sheet, “[p]rogrammers will have a seat at the table to ensure that content remains protected.” Such largesse! One would be forgiven for assuming that the programmers’ (single?) seat will be surrounded by those of other participants — regulatory advocates, technology companies, and others — whose sole objective will be to minimize content companies’ ability to restrict the terms on which their content is accessed.

And we cannot ignore the ominous final portion of the Fact Sheet’s “Standard License” description: “The FCC will serve as a backstop to ensure that nothing in the standard license will harm the marketplace for competitive devices.” Such an arrogation of ultimate authority by the FCC doesn’t bode well for that programmer’s “seat at the table” amounting to much.

Unfortunately, we can only imagine the contours of the final proposal that will describe the many ways by which distribution licenses can “harm the marketplace for competitive devices.” But an educated guess would venture that there will be precious little room for content creators and MVPDs to replicate a large swath of the contract terms they currently employ. “Any content owner can have its content painted any color that it wants, so long as it is black.”

At least we can take solace in the fact that the FCC has no authority to do what Wheeler wants it to do

And, of course, this all presumes that the FCC will be able to plausibly muster the legal authority in the Communications Act to create what amounts to a de facto compulsory licensing scheme.

A single license imposed upon all MVPDs, along with the necessary restrictions this will place upon content creators, does just as much as an overt compulsory license to undermine content owners’ statutory property rights. For every license agreement that would be different than the standard agreement, the proposed standard license would amount to a compulsory imposition of terms that the rights holders and MVPDs would not otherwise have agreed to. And if this sounds tedious and confusing, just wait until the Commission starts designing its multistakeholder Standard Licensing Oversight Process (“SLOP”)….

Unfortunately for Chairman Wheeler (but fortunately for the rest of us), the FCC has neither the legal authority, nor the requisite expertise, to enact such a regime.

Last month, the Copyright Office was clear on this score in its letter to Congress commenting on the Chairman’s original proposal:  

[I]t is important to remember that only Congress, through the exercise of its power under the Copyright Clause, and not the FCC or any other agency, has the constitutional authority to create exceptions and limitations in copyright law. While Congress has enacted compulsory licensing schemes, they have done so in response to demonstrated market failures, and in a carefully circumscribed manner.

Assuming that Section 629 of the Communications Act — the provision that otherwise empowers the Commission to promote a competitive set-top box market — fails to empower the FCC to rewrite copyright law (which is assuredly the case), the Commission will be on shaky ground for the inevitable torrent of lawsuits that will follow the revised proposal.

In fact, this new proposal feels more like an emergency pivot by a panicked Chairman than an actual, well-grounded legal recommendation. While the new proposal improves upon the original, it retains at its core the same ill-informed, ill-advised and illegal assertion of authority that plagued its predecessor.

[Below is an excellent essay by Devlin Hartline that was first posted at the Center for the Protection of Intellectual Property blog last week, and I’m sharing it here.]

ACKNOWLEDGING THE LIMITATIONS OF THE FTC’S “PAE” STUDY

By Devlin Hartline

The FTC’s long-awaited case study of patent assertion entities (PAEs) is expected to be released this spring. Using its subpoena power under Section 6(b) to gather information from a handful of firms, the study promises us a glimpse at their inner workings. But while the results may be interesting, they’ll also be too narrow to support any informed policy changes. And you don’t have to take my word for it—the FTC admits as much. In one submission to the Office of Management and Budget (OMB), which ultimately decided whether the study should move forward, the FTC acknowledges that its findings “will not be generalizable to the universe of all PAE activity.” In another submission to the OMB, the FTC recognizes that “the case study should be viewed as descriptive and probative for future studies seeking to explore the relationships between organizational form and assertion behavior.”

However, this doesn’t mean that no one will use the study to advocate for drastic changes to the patent system. Even before the study’s release, many people—including some FTC Commissioners themselves—have already jumped to conclusions when it comes to PAEs, arguing that they are a drag on innovation and competition. Yet these same people say that we need this study because there’s no good empirical data analyzing the systemic costs and benefits of PAEs. They can’t have it both ways. The uproar about PAEs is emblematic of the broader movement that advocates for the next big change to the patent system before we’ve even seen how the last one panned out. In this environment, it’s unlikely that the FTC and other critics will responsibly acknowledge that the study simply cannot give us an accurate assessment of the bigger picture.

Limitations of the FTC Study 

Many scholars have written about the study’s fundamental limitations. As statistician Fritz Scheuren points out, there are two kinds of studies: exploratory and confirmatory. An exploratory study is a starting point that asks general questions in order to generate testable hypotheses, while a confirmatory study is then used to test the validity of those hypotheses. The FTC study, with its open-ended questions to a handful of firms, is a classic exploratory study. At best, the study will generate answers that could help researchers begin to form theories and design another round of questions for further research. Scheuren notes that while the “FTC study may well be useful at generating exploratory data with respect to PAE activity,” it “is not designed to confirm supportable subject matter conclusions.”

One significant constraint with the FTC study is that the sample size is small—only twenty-five PAEs—and the control group is even smaller—a mixture of fifteen manufacturers and non-practicing entities (NPEs) in the wireless chipset industry. Scheuren reasons that there “is also the risk of non-representative sampling and potential selection bias due to the fact that the universe of PAEs is largely unknown and likely quite diverse.” And the fact that the control group comes from one narrow industry further prevents any generalization of the results. Scheuren concludes that the FTC study “may result in potentially valuable information worthy of further study,” but that it is “not designed in a way as to support public policy decisions.”

Professor Michael Risch questions the FTC’s entire approach: “If the FTC is going to the trouble of doing a study, why not get it done right the first time and a) sample a larger number of manufacturers, in b) a more diverse area of manufacturing, and c) get identical information?” He points out that the FTC won’t be well-positioned to draw conclusions because the control group is not even being asked the same questions as the PAEs. Risch concludes that “any report risks looking like so many others: a static look at an industry with no benchmark to compare it to.” Professor Kristen Osenga echoes these same sentiments and notes that “the study has been shaped in a way that will simply add fuel to the anti–‘patent troll’ fire without providing any data that would explain the best way to fix the real problems in the patent field today.”

Osenga further argues that the study is flawed since the FTC’s definition of PAEs perpetuates the myth that patent licensing firms are all the same. The reality is that many different types of businesses fall under the “PAE” umbrella, and it makes no sense to impute the actions of a small subset to the entire group when making policy recommendations. Moreover, Osenga questions the FTC’s “shortsighted viewpoint” of the potential benefits of PAEs, and she doubts how the “impact on innovation and competition” will be ascertainable given the questions being asked. Anne Layne-Farrar expresses similar doubts about the conclusions that can be drawn from the FTC study since only licensors are being surveyed. She posits that it “cannot generate a full dataset for understanding the conduct of the parties in patent license negotiation or the reasons for the failure of negotiations.”

Layne-Farrar concludes that the FTC study “can point us in fruitful directions for further inquiry and may offer context for interpreting quantitative studies of PAE litigation, but should not be used to justify any policy changes.” Consistent with the FTC’s own admissions of the study’s limitations, this is the real bottom line of what we should expect. The study will have no predictive power because it only looks at how a small sample of firms affect a few other players within the patent ecosystem. It does not quantify how that activity ultimately affects innovation and competition—the very information needed to support policy recommendations. The FTC study is not intended to produce the sort of compelling statistical data that can be extrapolated to the larger universe of firms.

FTC Commissioners Put Cart Before Horse

The FTC has a history of bias against PAEs, as demonstrated in its 2011 report that skeptically questioned the “uncertain benefits” of PAEs while assuming their “detrimental effects” in undermining innovation. That report recommended special remedy rules for PAEs, even as the FTC acknowledged the lack of objective evidence of systemic failure and the difficulty of distinguishing “patent transactions that harm innovation from those that promote it.” With its new study, the FTC concedes to the OMB that much is still not known about PAEs and that the findings will be preliminary and non-generalizable. However, this hasn’t prevented some Commissioners from putting the cart before the horse with PAEs.

In fact, the very call for the FTC to institute the PAE study started with its conclusion. In her 2013 speech suggesting the study, FTC Chairwoman Edith Ramirez recognized that “we still have only snapshots of the costs and benefits of PAE activity” and that “we will need to learn a lot more” in order “to see the full competitive picture.” While acknowledging the vast potential benefits of PAEs in rewarding invention, benefiting competition and consumers, reducing enforcement hurdles, increasing liquidity, encouraging venture capital investment, and funding R&D, she nevertheless concluded that “PAEs exploit underlying problems in the patent system to the detriment of innovation and consumers.” And despite the admitted lack of data, Ramirez stressed “the critical importance of continuing the effort on patent reform to limit the costs associated with some types of PAE activity.”

This position is duplicitous: If the costs and benefits of PAEs are still unknown, what justifies Ramirez’s rushed call for immediate action? While benefits have to be weighed against costs, it’s clear that she’s already jumped to the conclusion that the costs outweigh the benefits. In another speech a few months later, Ramirez noted that the “troubling stories” about PAEs “don’t tell us much about the competitive costs and benefits of PAE activity.” Despite this admission, Ramirez called for “a much broader response to flaws in the patent system that fuel inefficient behavior by PAEs.” And while Ramirez said that understanding “the PAE business model will inform the policy dialogue,” she stated that “it will not change the pressing need for additional progress on patent reform.”

Likewise, in an early 2014 speech, Commissioner Julie Brill ignored the study’s inherent limitations and exploratory nature. She predicted that the study “will provide a fuller and more accurate picture of PAE activity” that “will be put to good use by Congress and others who examine closely the activities of PAEs.” Remarkably, Brill stated that “the FTC and other law enforcement agencies” should not “wait on the results of the 6(b) study before undertaking enforcement actions against PAE activity that crosses the line.” Even without the study’s results, she thought that “reforms to the patent system are clearly warranted.” In Brill’s view, the study would only be useful for determining whether “additional reforms are warranted” to curb the activities of PAEs.

It appears that these Commissioners have already decided—in the absence of any reliable data on the systemic effects of PAE activity—that drastic changes to the patent system are necessary. Given their clear bias in this area, there is little hope that they will acknowledge the deep limitations of the study once it is released.

Commentators Jump the Gun

Unsurprisingly, many supporters of the study have filed comments with the FTC arguing that the study is needed to fill the huge void in empirical data on the costs and benefits associated with PAEs. Some even simultaneously argue that the costs of PAEs far outweigh the benefits, suggesting that they have already jumped to their conclusion and just want the data to back it up. Despite the study’s serious limitations, these commentators appear primed to use it to justify their foregone policy recommendations.

For example, the Consumer Electronics Association applauded “the FTC’s efforts to assess the anticompetitive harms that PAEs cause on our economy as a whole,” and it argued that the study “will illuminate the many dimensions of PAEs’ conduct in a way that no other entity is capable.” At the same time, it stated that “completion of this FTC study should not stay or halt other actions by the administrative, legislative or judicial branches to address this serious issue.” The Internet Commerce Coalition stressed the importance of the study of “PAE activity in order to shed light on its effects on competition and innovation,” and it admitted that without the information, “the debate in this area cannot be empirically based.” Nonetheless, it presupposed that the study will uncover “hidden conduct of and abuses by PAEs” and that “it will still be important to reform the law in this area.”

Engine Advocacy admitted that “there is very little broad empirical data about the structure and conduct of patent assertion entities, and their effect on the economy.” It then argued that PAE activity “harms innovators, consumers, startups and the broader economy.” The Coalition for Patent Fairness called on the study “to contribute to the understanding of policymakers and the public” concerning PAEs, which it claimed “impose enormous costs on U.S. innovators, manufacturers, service providers, and, increasingly, consumers and end-users.” And to those suggesting “the potentially beneficial role of PAEs in the patent market,” it stressed that “reform be guided by the principle that the patent system is intended to incentivize and reward innovation,” not “rent-seeking” PAEs that are “exploiting problems.”

The joint comments of Public Knowledge, Electronic Frontier Foundation, & Engine Advocacy emphasized the fact that information about PAEs “currently remains limited” and that what is “publicly known largely consists of lawsuits filed in court and anecdotal information.” Despite admitting that “broad empirical data often remains lacking,” the groups also suggested that the study “does not mean that legislative efforts should be stalled” since “the harms of PAE activity are well known and already amenable to legislative reform.” In fact, they contended not only that “a problem exists,” but that there’s even “reason to believe the scope is even larger than what has already been reported.”

Given this pervasive and unfounded bias against PAEs, there’s little hope that these and other critics will acknowledge the study’s serious limitations. Instead, it’s far more likely that they will point to the study as concrete evidence that even more sweeping changes to the patent system are in order.

Conclusion

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general. The study is simply not designed to do this. It instead is a fact-finding mission, the results of which could guide future missions. Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected. And it’s crucial not to draw policy conclusions from it. Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.


In its February 25 North Carolina Dental decision, the U.S. Supreme Court, per Justice Anthony Kennedy, held that a state regulatory board that is controlled by market participants in the industry being regulated cannot invoke “state action” antitrust immunity unless it is “actively supervised” by the state.  In so ruling, the Court struck a significant blow against protectionist rent-seeking and for economic liberty.  (As I stated in a recent Heritage Foundation legal memorandum, “[a] Supreme Court decision accepting this [active supervision] principle might help to curb special-interest favoritism conferred through state law.  At the very least, it could complicate the efforts of special interests to protect themselves from competition through regulation.”)

A North Carolina law subjects the licensing of dentistry to a North Carolina State Board of Dental Examiners (Board), six of whose eight members must be licensed dentists.  After dentists complained to the Board that non-dentists were charging lower prices than dentists for teeth whitening, the Board sent cease-and-desist letters to non-dentist teeth-whitening providers, warning that the unlicensed practice of dentistry is a crime.  This led non-dentists to cease offering teeth-whitening services in North Carolina.  The Federal Trade Commission (FTC) held that the Board’s actions violated Section 5 of the FTC Act, which prohibits unfair methods of competition; the Fourth Circuit agreed; and the Court affirmed the Fourth Circuit’s decision.

In its decision, the Court rejected the claim that state action immunity, which confers immunity on the anticompetitive conduct of states acting in their sovereign capacity, applied to the Board’s actions.  The Court stressed that where a state delegates control over a market to a non-sovereign actor, immunity applies only if the state accepts political accountability by actively supervising that actor’s decisions.  The Court applied its Midcal test, which requires (1) clear state articulation and (2) active state supervision of decisions by non-sovereign actors for immunity to attach.  The Court held that entities designated as state agencies are not exempt from active supervision when they are controlled by market participants, because allowing an exemption in such circumstances would pose the risk of self-dealing that the second prong of Midcal was created to address.

Here, the Board did not contend that the state exercised any (let alone active) supervision over its anticompetitive conduct.  The Court closed by summarizing “a few constant requirements of active supervision,” namely, (1) the supervisor must review the substance of the anticompetitive decision, (2) the supervisor must have the power to veto or modify particular decisions for consistency with state policy, (3) “the mere potential for state supervision is not an adequate substitute for a decision by the State,” and (4) “the state supervisor may not itself be an active market participant.”  The Court cautioned, however, that “the adequacy of supervision otherwise will depend on all the circumstances of a case.”

Justice Samuel Alito, joined by Justices Antonin Scalia and Clarence Thomas, dissented, arguing that the Court ignored precedent that state agencies created by the state legislature (“[t]he Board is not a private or ‘nonsovereign’ entity”) are shielded by the state action doctrine.  “By straying from this simple path” and assessing instead whether individual agencies are subject to regulatory capture, the Court spawned confusion, according to the dissenters.  Midcal was inapposite, because it involved a private trade association.  The dissenters feared that the majority’s decision may require states “to change the composition of medical, dental, and other boards, but it is not clear what sort of changes are needed to satisfy the test that the Court now adopts.”  The dissenters concluded “that determining when regulatory capture has occurred is no simple task.  That answer provides a reason for relieving courts from the obligation to make such determinations at all.  It does not explain why it is appropriate for the Court to adopt the rather crude test for capture that constitutes the holding of today’s decision.”

The Court’s holding in North Carolina Dental helpfully limits the scope of the Court’s infamous Parker v. Brown decision (which shielded from federal antitrust attack a California raisin producers’ cartel overseen by a state body), without excessively interfering in sovereign state prerogatives.  State legislatures may still choose to create self-interested professional regulatory bodies – their sovereignty is not compromised.  Now, however, they will have to (1) make it clearer up front that they intend to allow those bodies to displace competition, and (2) subject those bodies to disinterested third party review.  These changes should make it far easier for competition advocates (including competition agencies) to spot and publicize welfare-inimical regulatory schemes, and weaken the incentive and ability of rent-seekers to undermine competition through state regulatory processes.  All told, the burden these new judicially-imposed constraints will impose on the states appears relatively modest, and should be far outweighed by the substantial welfare benefits they are likely to generate.