Archives For innovation

Over the past decade and a half, virtually every branch of the federal government has taken steps to weaken the patent system. As reflected in President Joe Biden’s July 2021 executive order, these restraints on patent enforcement are now being coupled with antitrust policies that, in large part, adopt a “big is bad” approach in place of decades of economically grounded case law and agency guidelines.

This policy bundle is nothing new. It largely replicates the innovation policies pursued during the late New Deal and the postwar decades. That historical experience suggests that a “weak-patent/strong-antitrust” approach is likely to encourage neither innovation nor competition.

The Overlooked Shortfalls of New Deal Innovation Policy

Starting in the early 1930s, the U.S. Supreme Court issued a sequence of decisions that raised obstacles to patent enforcement. The Franklin Roosevelt administration sought to take this policy a step further, advocating compulsory licensing for all patents. While Congress did not adopt this proposal, it was partially implemented as a de facto matter through antitrust enforcement. Starting in the early 1940s and continuing throughout the postwar decades, the antitrust agencies secured judicial precedents that treated a broad range of licensing practices as per se illegal. Perhaps most dramatically, the U.S. Justice Department (DOJ) secured more than 100 compulsory licensing orders against some of the nation’s largest companies. 

The rationale behind these policies was straightforward. By compelling access to incumbents’ patented technologies, courts and regulators would lower barriers to entry and competition would intensify. The postwar economy declined to comply with policymakers’ expectations. Implementation of a weak-IP/strong-antitrust innovation policy over the course of four decades yielded the opposite of its intended outcome. 

Market concentration did not diminish, turnover in market leadership was slow, and private research and development (R&D) was confined mostly to the research labs of the largest corporations (which often relied on generous infusions of federal defense funding). These tendencies are illustrated by the dramatically unequal allocation of innovation capital in the postwar economy. As of the late 1950s, small firms represented approximately 7% of all private U.S. R&D expenditures. Two decades later, that figure had fallen even further. By the late 1970s, patenting rates had plunged, and entrepreneurship and innovation were in a state of widely lamented decline.

Why Weak IP Raises Entry Costs and Promotes Concentration

The decline in entrepreneurial innovation under a weak-IP regime was not accidental. Rather, this outcome can be derived logically from the economics of information markets.

Without secure IP rights to establish exclusivity, engage confidently with business partners, and deter imitators, potential innovator-entrepreneurs had little hope of obtaining funding from investors. In contrast, incumbents could fund R&D internally (or with federal funds that flowed mostly to the largest computing, communications, and aerospace firms) and, even under a weak-IP regime, were protected by difficult-to-match production and distribution efficiencies. As a result, R&D mostly took place inside the closed ecosystems maintained by incumbents such as AT&T, IBM, and GE.

Paradoxically, the antitrust campaign against patent “monopolies” most likely raised entry barriers and promoted industry concentration by removing a critical tool that smaller firms might have used to challenge incumbents that could outperform them on every competitive parameter except innovation. While the large corporate labs of the postwar era are rightly credited with technological breakthroughs, incumbents such as AT&T were often slow to transform basic-research breakthroughs into commercially viable products and services for consumers. Without an immediate competitive threat, there was no rush to do so.

Back to the Future: Innovation Policy in the New New Deal

Policymakers are now at work reassembling almost the exact same policy bundle that ended in the innovation malaise of the 1970s, accompanied by a similar reliance on public R&D funding disbursed through administrative processes. However well-intentioned, these processes are inherently exposed to political distortions that are absent in an innovation environment that relies mostly on private R&D funding governed by price signals. 

This policy bundle has emerged incrementally since approximately the mid-2000s, through a sequence of complementary actions by every branch of the federal government.

  • In 2011, Congress enacted the America Invents Act, which enables any party to challenge the validity of an issued patent through the U.S. Patent and Trademark Office’s (USPTO) Patent Trial and Appeal Board (PTAB). Since PTAB’s establishment, large information-technology companies that advocated for the act have been among the leading challengers.
  • In May 2021, the Office of the U.S. Trade Representative (USTR) declared its support for a worldwide suspension of IP protections over Covid-19-related innovations (rather than adopting the more nuanced approach of preserving patent protections and expanding funding to accelerate vaccine distribution).  
  • President Biden’s July 2021 executive order states that “the Attorney General and the Secretary of Commerce are encouraged to consider whether to revise their position on the intersection of the intellectual property and antitrust laws, including by considering whether to revise the Policy Statement on Remedies for Standard-Essential Patents Subject to Voluntary F/RAND Commitments.” This suggests that the administration has already determined to retract or significantly modify the 2019 joint policy statement in which the DOJ, USPTO, and the National Institute of Standards and Technology (NIST) had rejected the view that standard-essential patent owners pose a risk of patent holdup high enough to justify special limitations on their enforcement and licensing activities.

The history of U.S. technology markets and policies casts great doubt on the wisdom of this weak-IP policy trajectory. The repeated devaluation of IP rights is likely to be a “lose-lose” approach that does little to promote competition, while endangering the incentive and transactional structures that sustain robust innovation ecosystems. A weak-IP regime is particularly likely to disadvantage smaller firms in biotech, medical devices, and certain information-technology segments that rely on patents to secure funding from venture capital and to partner with larger firms that can accelerate progress toward market release. The BioNTech/Pfizer alliance in the production and distribution of a Covid-19 vaccine illustrates how patents can enable such partnerships to accelerate market release.  

The innovative contribution of BioNTech is hardly a one-off occurrence. The restoration of robust patent protection in the early 1980s was followed by a sharp increase in the percentage of private R&D expenditures attributable to small firms, which jumped from about 5% as of 1980 to 21% by 1992. This contrasts sharply with the unequal allocation of R&D activities during the postwar period.

Remarkably, the resurgence of small-firm innovation following the strong-IP policy shift, starting in the late 20th century, mimics tendencies observed during the late 19th and early-20th centuries, when U.S. courts provided a hospitable venue for patent enforcement; there were few antitrust constraints on licensing activities; and innovation was often led by small firms in partnership with outside investors. This historical pattern, encompassing more than a century of U.S. technology markets, strongly suggests that strengthening IP rights tends to yield a policy “win-win” that bolsters both innovative and competitive intensity. 

An Alternate Path: ‘Bottom-Up’ Innovation Policy

To be clear, the alternative to the weak-IP/strong-antitrust policy bundle does not consist of a simple reversion to blind enforcement of patents and lax administration of the antitrust laws. A nuanced innovation policy would couple modern antitrust’s commitment to evidence-based enforcement—which, in particular cases, supports vigorous intervention—with a renewed commitment to protecting IP rights for innovator-entrepreneurs. That approach would promote competition from the “bottom up”: it would bolster maverick innovators who are well-positioned to challenge (or sometimes partner with) incumbents, and it would maintain the self-starting engine of creative disruption that has repeatedly driven entrepreneurial innovation environments. Tellingly, technology incumbents have often been among the leading advocates for limiting patent and copyright protections.

Advocates of a weak-patent/strong-antitrust policy believe it will enhance competitive and innovative intensity in technology markets. History suggests that this combination is likely to produce the opposite outcome.  

Jonathan M. Barnett is the Torrey H. Webb Professor of Law at the University of Southern California, Gould School of Law. This post is based on the author’s recent publications, Innovators, Firms, and Markets: The Organizational Logic of Intellectual Property (Oxford University Press 2021) and “The Great Patent Grab,” in Battles Over Patents: History and the Politics of Innovation (eds. Stephen H. Haber and Naomi R. Lamoreaux, Oxford University Press 2021).

ICLE at the Oxford Union

Sam Bowman —  13 July 2021

Earlier this year, the International Center for Law & Economics (ICLE) hosted a conference with the Oxford Union on the themes of innovation, competition, and economic growth with some of our favorite scholars. Though attendance at the event itself was reserved for Oxford Union members, videos from that day are now available for everyone to watch.

Charles Goodhart and Manoj Pradhan on demographics and growth

Charles Goodhart, of Goodhart’s Law fame, and Manoj Pradhan discussed the relationship between demographics and growth, and argued that an aging global population could mean higher inflation and interest rates sooner than many imagine.

Catherine Tucker on privacy and innovation — is there a trade-off?

Catherine Tucker of the Massachusetts Institute of Technology discussed the costs and benefits of privacy regulation with ICLE’s Sam Bowman, and considered whether we face a trade-off between privacy and innovation online and in the fight against COVID-19.

Don Rosenberg on the political and economic challenges facing a global tech company in 2021

Qualcomm’s General Counsel Don Rosenberg, formerly of Apple and IBM, discussed the political and economic challenges facing a global tech company in 2021, as well as dealing with China while working in one of the most strategically vital industries in the world.

David Teece on the dynamic capabilities framework

David Teece explained the dynamic capabilities framework, a way of understanding business strategy and behavior in an uncertain world.

Vernon Smith in conversation with Shruti Rajagopalan on what we still have to learn from Adam Smith

Nobel laureate Vernon Smith discussed the enduring insights of Adam Smith with the Mercatus Center’s Shruti Rajagopalan.

Samantha Hoffman, Robert Atkinson and Jennifer Huddleston on American and Chinese approaches to tech policy in the 2020s

The final panel, with the Information Technology and Innovation Foundation’s President Robert Atkinson, the Australian Strategic Policy Institute’s Samantha Hoffman, and the American Action Forum’s Jennifer Huddleston, discussed the role that tech policy in the U.S. and China plays in the geopolitics of the 2020s.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Harold Feld is senior vice president of Public Knowledge.]

Chairman Ajit Pai prioritized making new spectrum available for 5G. To his credit, he succeeded. Over the course of four years, Chairman Pai made available more high-band and mid-band spectrum, for licensed use and unlicensed use, than any other Federal Communications Commission chairman. He did so in the face of unprecedented opposition from other federal agencies, navigating the chaotic currents of the Trump administration with political acumen and courage. The Pai FCC will go down in history as the 5G FCC, and as the chairman who protected the primacy of FCC control over commercial spectrum policy.

At the same time, the Pai FCC will also go down in history as the most conventional FCC on spectrum policy in the modern era. Chairman Pai undertook no sweeping review of spectrum policy in the manner of former Chairman Michael Powell and introduced no new and radically different spectrum technologies, such as the unlicensed and spread-spectrum rules of the 1980s or the auctions of the 1990s. To the contrary, Chairman Pai rolled back the experimental short-term license structure adopted in the 3.5 GHz Citizens Broadband Radio Service (CBRS) band and replaced it with a conventional long-term license carrying a renewal expectation. He missed a once-in-a-lifetime opportunity to dramatically expand the availability of unlicensed use of the TV white spaces (TVWS) via repacking after the television incentive auction. And in reworking the rules for the 2.5 GHz band, although Pai laudably embraced the recommendation to create an application window for rural tribal lands, he rejected the proposal to allow nonprofits a chance to use the band for broadband in favor of conventional auction policy.

Ajit Pai’s Spectrum Policy Gave the US a Strong Position for 5G and Wi-Fi 6

To fully appreciate Chairman Pai’s accomplishments, we must first fully appreciate the urgency of opening new spectrum, and the challenges Pai faced from within the Trump administration itself. While providers can (and should) repurpose spectrum from older technologies to newer technologies, successful widespread deployment can only take place when sufficient amounts of new spectrum become available. This “green field” spectrum allows providers to build out new technologies with the most up-to-date equipment without disrupting existing subscriber services. The protocols developed for mobile 5G services work best with “mid-band” spectrum (generally considered to be frequencies between 2 GHz and 6 GHz). At the time Pai became chairman, the FCC did not have any mid-band spectrum identified for auction.

In addition, spectrum available for unlicensed use has become increasingly congested as more and more services depend on Wi-Fi and other unlicensed applications. Indeed, we have become so dependent on Wi-Fi for home broadband and networking that people routinely talk about buying “Wi-Fi” from commercial broadband providers rather than buying “internet access.” The United States further suffered a serious disadvantage moving forward to next generation Wi-Fi, Wi-Fi 6, because the U.S. lacked a contiguous block of spectrum large enough to take advantage of Wi-Fi 6’s gigabit capabilities. Without gigabit Wi-Fi, Americans will increasingly be unable to use the applications that gigabit broadband to the home makes possible.

But virtually all spectrum—particularly mid-band spectrum—has significant incumbents. These incumbents include federal users, particularly the U.S. Department of Defense (DOD). Finding new spectrum optimal for 5G required reclaiming spectrum from these incumbents. Unlicensed services do not require relocating incumbent users, but creating such “underlay” unlicensed spectrum access requires rules to prevent unlicensed operations from causing harmful interference to licensed services. Needless to say, incumbent services fiercely resist any change in spectrum-allocation rules, claiming that reducing their spectrum allocation or permitting unlicensed services will compromise valuable existing services and cause harmful interference.

The need to reallocate unprecedented amounts of spectrum to ensure successful 5G and Wi-Fi 6 deployment in the United States created an unholy alliance of powerful incumbents, commercial and federal, dedicated to blocking FCC action. Federal agencies—in violation of established federal spectrum policy—publicly challenged the FCC’s spectrum-allocation decisions. Powerful industry incumbents—such as the auto industry, the power industry, and defense contractors—aggressively lobbied Congress to reverse the FCC’s spectrum action by legislation. The National Telecommunications and Information Administration (NTIA), the federal agency tasked with formulating federal spectrum policy, was missing in action as it rotated among different acting agency heads. As the chair and ranking member of the House Commerce Committee noted, this unprecedented and very public opposition by federal agencies to FCC spectrum policy threatened U.S. wireless interests both domestically and internationally.

Navigating this hostile terrain required Pai to exercise both political acumen and political will. He reallocated 600 MHz of spectrum for auction, opened more than 1,200 MHz of contiguous spectrum for unlicensed use, and authorized the new entrant Ligado Networks over the objections of the DOD. He did so by a combination of persuading President Donald Trump of the importance of maintaining U.S. leadership in 5G and insisting on impeccable analysis by the FCC’s engineers to support the reallocation and underlay decisions. On the most significant votes, Pai secured support (or partial support) from the Democrats. Perhaps most importantly, Pai successfully defended the institutional role of the FCC as the ultimate decisionmaker on commercial spectrum use, not subject to a “heckler’s veto” by other federal agencies.

Missed Innovation, ‘Command and Control Lite’

While acknowledging Pai’s accomplishments, a fair consideration of Pai’s legacy must also consider his shortcomings. As chairman, Pai proved the most conservative FCC chair on spectrum policy since the 1980s. The Reagan FCC produced unlicensed and spread spectrum rules. The Clinton FCC created the spectrum auction regime. The Bush FCC included a spectrum task force and produced the concept of database management for unlicensed services, creating the TVWS and laying the groundwork for CBRS in the 3.5 GHz band. The Obama FCC recommended and created the world’s first incentive auction.

The Trump FCC did more than lack comparable accomplishments; it actively rolled back previous innovations. Within the first year of his chairmanship, Pai began a rulemaking designed to roll back the innovative priority access licenses (PALs). Under the rules adopted under the previous chairman, PALs provided exclusive use on a census-block basis for three years with no expectation of renewal. Pai delayed the rollout of CBRS for two years to replace this approach with a standard license structure of 10 years with an expectation of renewal, explicitly to facilitate traditional carrier investment in traditional networks. Pai followed the same path when restructuring the 2.5 GHz band. While laudably creating a window for Native Americans to apply for 2.5 GHz licenses on rural tribal lands, Pai rejected proposals from nonprofits to adopt a window for non-commercial providers to offer broadband. Instead, he simply eliminated the educational requirement and adopted a standard auction for distribution of the remaining licenses.

Similarly, in the unlicensed space, Pai consistently declined to promote innovation. In the repacking that followed the broadcast incentive auction, Pai rejected the proposal to structure the repacking so as to ensure usable TVWS in every market. Instead, under Pai, the FCC managed the repacking so as to minimize the burden on incumbent primary and secondary licensees. As a result, major markets such as Los Angeles have zero channels available for unlicensed TVWS operation. This effectively relegates the service to a niche rural offering, augmenting existing rural wireless ISPs.

The result is a modified form of “command and control,” the now-discredited system where the FCC would allocate licenses to provide specific services such as “FM radio” or “mobile pager service.” While preserving license flexibility in name, the licensing rules are explicitly structured to promote certain types of investment and business cases. The result is to encourage the same types of licensees to offer improved and more powerful versions of the same types of services, while discouraging more radical innovations.

Conclusion

Chairman Pai can rightly take pride in his overall 5G legacy. He preserved the institutional role of the FCC as the agency responsible for expanding our nation’s access to wireless services against sustained attack by federal agencies determined to protect their own spectrum interests. He provided enough green field spectrum for both licensed and unlicensed services to permit the successful deployment of 5G and Wi-Fi 6. At the same time, however, he failed to encourage the more radical spectrum policies of the kind that made the United States the birthplace of technologies such as mobile broadband and Wi-Fi. We have won the “race” to next-generation wireless, but the players and services are likely to stay the same.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

Ajit Pai will step down from his position as chairman of the Federal Communications Commission (FCC) effective Jan. 20. Beginning Jan. 15, Truth on the Market will host a symposium exploring Pai’s tenure, with contributions from a range of scholars and practitioners.

As we ponder the changes to FCC policy that may arise with the next administration, it’s also a timely opportunity to reflect on the chairman’s leadership at the agency and his influence on telecommunications policy more broadly. Indeed, the FCC has faced numerous challenges and opportunities over the past four years, with implications for a wide range of federal policy and law. Our symposium will offer insights into numerous legal, economic, and policy matters of ongoing importance.

Under Pai’s leadership, the FCC took on key telecommunications issues involving spectrum policy, net neutrality, 5G, broadband deployment, the digital divide, and media ownership and modernization. Broader issues faced by the commission include agency process reform, including a greater reliance on economic analysis; administrative law; federal preemption of state laws; national security; competition; consumer protection; and innovation, including the encouragement of burgeoning space industries.

This symposium asks contributors for their thoughts on these and related issues. We will explore a rich legacy, with many important improvements that will guide the FCC for some time to come.

Truth on the Market thanks all of these excellent authors for agreeing to participate in this interesting and timely symposium.

Look for the first posts starting Jan. 15.

Big Tech but Bigger Ideas

Bowman Heiden —  30 November 2020
[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Bowman Heiden (Co-director of the Center for Intellectual Property (CIP), a joint center between the University of Gothenburg (Sweden), Chalmers University of Technology (Sweden), and the Norwegian University of Science and Technology).]

As an academic working at the intersection of economics, law, and innovation, I was excited to see Nicolas Petit apply an interdisciplinary approach to investigate big tech in the digital economy. Working across law, business, and engineering has taught me the importance of bringing together different theoretical perspectives and mindsets to address complex issues. Below is a short discussion of a few interdisciplinary areas where Petit helps us to bridge the theoretical gap so as to unveil the complexity and provide new insights for policymakers.

Competition Strategy vs. Competition Law

Business schools do not typically teach competition law and law schools do not teach competition strategy. This creates a theoretical disconnect that spills over into professional life. Business schools tell students to create a sustainable competitive advantage, which often means building a dominant market position. Law schools, on the other hand, treat dominant positions as socially undesirable and likely illegal.

As Petit points out, one of the main competitive strategy frameworks—Porter’s Five Forces model, named for Michael E. Porter of Harvard Business School—provides a much broader perspective on rivalry than legal regimes like the OECD’s Competition Assessment Toolkit, which use a product-centric “more restrictive method of competitive assessment.” The progression of technology and new creative business models, such as multisided platforms, thus continuously push the frontier. Over time, this fundamentally alters the nature of competition within product markets as a unit of analysis.

If one seeks answers within “the law,” there will always be a lag between commercial reality (i.e., market norms) and legal doctrine (i.e., statutory norms). Petit reminds us that big tech is a different kind of competition whose holistic nature may require a new analytic framework to understand its welfare effects. If the fundamental principles of market competition are changing, we will not likely find the answer in historical precedents.

Innovation and Entrepreneurship vs. Economics

Given the focus on innovation as the key source of economic development, it is strange that innovation plays such a small role in mainstream economics. Schumpeter tried to convince us some 80 years ago that the market power generated from innovation and entrepreneurship had a positive economic impact, even for larger firms. But his concept of “creative destruction” has never really been able to gain much ground over the “invisible hand.” One cannot help but conclude this is because the static world is easier to understand and model mathematically than the dynamic world.

But the world is not static. Even if we agree that we are concerned primarily with welfare, we still need to decide whether we want to be better off now or later. A static analysis of welfare promotes a world without innovation. It is almost as if economics doesn’t appreciate “time” as a variable. If Schumpeter is right and it is disequilibrium, not equilibrium, that is the most relevant economic phenomenon, then we have the cart in front of the horse. Does innovation breed competition or does competition breed innovation? If the market power associated with innovation produces greater welfare than perfect competition, then how useful is competition as a proxy for welfare? In other words, if perfect competition inhibits innovation and dynamic efficiency, then more competition cannot be a societal goal in and of itself.

Having time as a core variable forces us to think about the future and how we get from here to there. It is not enough to model evolution in the short term as a sea full of fish, and in the long term, as a city full of people. How the world changes and how quickly it changes are also important. Even with the long-term benefits of creative destruction, it is important to remember that a significant group of voting-age citizens will likely suffer in the short term.

Petit’s discussion of big tech’s exploration, change and pivot flexibility implicitly reminds us that time matters. Time is the carrier of innovation and uncertainty. This has fundamental impacts on the nature of competition and competition’s impact on welfare, which cannot be properly understood only through comparative statics. Though he claims his goal is “not to formulate a new Schumpeterian theory of monopoly efficiency,” he is rightly Schumpeterian in moving innovation and uncertainty closer to the center stage of analysis in law and economics.

Certainly, in a dynamic, global market, there is credence to Petit’s supposition that market power in digital markets can be welfare enhancing in the short term and can potentially instigate innovation, particularly disruptive innovation, in the long term. Both constraints and uncertainty typically spur innovation.

Politics vs. Economics

If ignorance was our only challenge in pursuing enlightened public policy, there would be little to worry about. We would constantly generate hypotheses and test them empirically to adapt to a changing world. Unfortunately, we have two more formidable adversaries: ideology and self-interest.

Ideology is cognitive closure, a type of self-inflicted ignorance, where someone internalizes a set of beliefs that defines who they are. All new information that contradicts the chosen ideology will likely be rejected as a starting point (e.g., the notion that big is always bad). This is what Max Planck meant when he said “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” The world is full of ideologues; nowadays, the objective centrist is the radical, which is where I see that Petit has positioned himself.

Policy is also highly influenced by self-interest, which in a capitalist society is entirely rational. Self-interest is the cornerstone of the concept of the “invisible hand” and basic price theory when applied to the commercial market. But what are the implications when it is applied to policy? Is lobbying simply part of a free market for policies, where self-interested actors compete, not in the game itself, but to change the rules of the game in their favor? The idea that those who control the economic base control the infrastructure of society is not new. From a policy perspective, is the invisible hand leading us to prosperity or is it giving us the finger?

Just as economist Robert Solow helped us to understand the extent of our ignorance about economic growth, so has this work by Nicolas Petit. Hopefully, it will ignite a new conversation about the role of innovation and uncertainty, not only in antitrust, but also in mainstream economic thought, all without the need for Planck’s funeral procession.

This blog post summarizes the findings of a paper published in Volume 21 of the Federalist Society Review. The paper was co-authored by Dirk Auer, Geoffrey A. Manne, Julian Morris, & Kristian Stout. It uses the analytical framework of law and economics to discuss recent patent law reforms in the US, and their negative ramifications for inventors. The full paper can be found on the Federalist Society’s website, here.

Property rights are a pillar of the free market. As Harold Demsetz famously argued, they spur specialization, investment and competition throughout the economy. And the same holds true for intellectual property rights (IPRs). 

However, despite the many social benefits that have been attributed to intellectual property protection, the past decades have witnessed the birth and growth of a powerful intellectual movement seeking to reduce the legal protections offered to inventors by patent law.

These critics argue that excessive patent protection is holding back western economies. For instance, they posit that the owners of standard-essential patents (“SEPs”) are charging their commercial partners too much for the rights to use their patents (practices referred to as patent holdup and royalty stacking). Furthermore, they argue that so-called patent trolls (“patent-assertion entities” or “PAEs”) are deterring innovation by small startups by employing “extortionate” litigation tactics.

Unfortunately, this movement has led to a deterioration of appropriate remedies in patent disputes.

The many benefits of patent protection

While patents likely play an important role in providing inventors with incentives to innovate, their role in enabling the commercialization of ideas is probably even more important.

By creating a system of clearly defined property rights, patents empower market players to coordinate their efforts in order to collectively produce innovations. In other words, patents greatly reduce the cost of concluding mutually-advantageous deals, whereby firms specialize in various aspects of the innovation process. Critically, these deals occur in the shadow of patent litigation and injunctive relief. The threat of these ensures that all parties have an incentive to take a seat at the negotiating table.

This is arguably nowhere more apparent than in the standardization space. Many of the most high-profile modern technologies are the fruit of large-scale collaboration coordinated through standards developing organizations (SDOs). These include technologies such as Wi-Fi, 3G, 4G, 5G, Blu-Ray, USB-C, and Thunderbolt 3. The coordination necessary to produce technologies of this sort is hard to imagine without some form of enforceable property right in the resulting inventions.

The shift away from injunctive relief

Of the many recent reforms to patent law, the most significant has arguably been the limitation of patent holders’ ability to obtain permanent injunctions. This is particularly true in the case of so-called standard-essential patents (SEPs).

However, intellectual property laws are meaningless without the ability to enforce them and remedy breaches. And injunctions are almost certainly the most powerful, and important, of these remedies.

The significance of injunctions is perhaps best understood by highlighting the weakness of damages awards when applied to intangible assets. Indeed, it is often difficult to establish the appropriate size of an award of damages when intangible property—such as invention and innovation in the case of patents—is the core property being protected. This is because these assets are almost always highly idiosyncratic. By blocking all infringing uses of an invention, injunctions thus prevent courts from having to act as price regulators. In doing so, they also ensure that innovators are adequately rewarded for their technological contributions.

Unfortunately, the Supreme Court’s 2006 ruling in eBay Inc. v. MercExchange, LLC significantly narrowed the circumstances under which patent holders could obtain permanent injunctions. This predictably led lower courts to grant fewer permanent injunctions in patent litigation suits. 

But while critics of injunctions had hoped that reducing their availability would spur innovation, empirical evidence suggests that this has not been the case so far. 

Other reforms

And injunctions are not the only area of patent law that has witnessed a gradual shift against the interests of patent holders. Much the same could be said about damages awards, revised fee-shifting standards, and the introduction of Inter Partes Review.

Critically, the intellectual movement to soften patent protection has also had ramifications outside of the judicial sphere. It is notably behind several legislative reforms, particularly the America Invents Act. Moreover, it has led numerous private parties – most notably standards developing organizations (SDOs) – to adopt stances that have advanced the interests of technology implementers at the expense of inventors.

For instance, one of the most noteworthy examples has been the IEEE’s sweeping 2015 reforms to its IP policy. The new rules notably prevented SEP holders from seeking permanent injunctions against so-called “willing licensees.” They also mandated that royalties for SEPs be based upon the value of the smallest saleable component that practices the patented technology. Both of these measures ultimately sought to tilt the bargaining range in license negotiations in favor of implementers.

Concluding remarks

The developments discussed in this article might seem like small details, but they are part of a wider trend whereby U.S. patent law is becoming increasingly inhospitable for inventors. This is particularly true when it comes to the enforcement of SEPs by means of injunction.

While the short-term effect of these various reforms has yet to be quantified, there is a real risk that, by decreasing the value of patents and increasing transaction costs, these changes may ultimately limit the diffusion of innovations and harm incentives to invent.

This likely explains why some legislators have recently put forward bills that seek to reinforce the U.S. patent system (here and here).

Despite these initiatives, the fact remains that there is today a strong undercurrent pushing for weaker or less certain patent protection. If left unchecked, this threatens to undermine the utility of patents in facilitating the efficient allocation of resources for innovation and its commercialization. Policymakers should thus pay careful attention to the changes this trend may bring about and move swiftly to recalibrate the patent system where needed in order to better protect the property rights of inventors and yield more innovation overall.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Kristian Stout (Associate Director, International Center for Law & Economics).]

The public policy community’s infatuation with digital privacy has grown by leaps and bounds since the enactment of GDPR and the CCPA, but COVID-19 may leave the most enduring mark on the actual direction that privacy policy takes. As the pandemic and associated lockdowns first began, there were interesting discussions cropping up about the inevitable conflict between strong privacy fundamentalism and the pragmatic steps necessary to adequately trace the spread of infection. 

Axiomatic of this controversy is the Apple/Google contact tracing system, software developed for smartphones to assist with the identification of individuals and populations that have likely been in contact with the virus. The debate sparked by the Apple/Google proposal highlights what we miss when we treat “privacy” (however defined) as an end in itself, an end that must necessarily  trump other concerns. 

The Apple/Google contact tracing efforts

Apple/Google are doing yeoman’s work attempting to produce a useful contact tracing API given the headwinds of privacy advocacy they face. Apple’s webpage describing its new contact tracing system is a testament to the extent to which strong privacy protections are central to its efforts. Indeed, those privacy protections are in the very name of the service: the “Privacy-Preserving Contact Tracing” program. But, vitally, the utility of the Apple/Google API is ultimately a function of its efficacy as a tracing tool, not of how well it protects privacy.

Apple/Google — despite the complaints of some states — are rolling out their Covid-19-tracking services with notable limitations. Most prominently, the APIs will not allow collection of location data and will only function when users explicitly opt in. This last point is important because there is evidence that opt-in requirements, by their nature, tend to reduce the flow of information in a system, and when we are considering tracing solutions to an ongoing pandemic, surely less information is not optimal. Further, all of the data collected through the API will be anonymized, preventing even healthcare authorities from identifying particular infected individuals.
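To make these constraints concrete, here is a minimal Python sketch of the general decentralized, opt-in design they describe: phones broadcast short-lived random tokens rather than locations or identities, and exposure matching happens locally on the device. The class and method names are hypothetical illustrations invented for this post, not the actual Apple/Google API.

```python
import secrets

# Illustrative sketch only; hypothetical names, not the real Apple/Google API.
class Device:
    def __init__(self, opted_in: bool):
        self.opted_in = opted_in      # nothing is shared without an explicit opt-in
        self.heard_tokens = set()     # tokens observed over Bluetooth, kept on-device

    def new_token(self) -> bytes:
        # A rotating random identifier: no location, no stable user ID.
        return secrets.token_bytes(16)

    def observe(self, token: bytes) -> None:
        if self.opted_in:
            self.heard_tokens.add(token)

    def exposed(self, published_tokens: set) -> bool:
        # Matching runs locally; health authorities never learn who matched whom.
        return bool(self.heard_tokens & published_tokens)

# Two opted-in phones come into proximity; one user later tests positive and
# consents to publish the tokens that phone recently broadcast.
alice, bob = Device(opted_in=True), Device(opted_in=True)
token = bob.new_token()
alice.observe(token)
print(alice.exposed({token}))  # True: Alice learns of a possible exposure, anonymously
```

The point of the sketch is simply that the privacy protections are baked into the architecture itself, which is also what limits the information available to public-health authorities.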

These restrictions prevent the tool from being as effective as it could be, but it’s not clear how Apple/Google could do any better given the political climate. For years, the Big Tech firms have been villainized by privacy advocates who accuse them of spying on kids and cavalierly disregarding consumer privacy as they treat individuals’ data as just another business input. The problem with this approach is that, in the midst of a generational crisis, our best tools are being excluded from the fight. Which raises the question: perhaps we have privacy all wrong?

Privacy is one value among many

The U.S. constitutional order explicitly protects our privacy as against state intrusion in order to guarantee, among other things, fair process and equal access to justice. But this strong presumption against state intrusion—far from establishing a fundamental or absolute right to privacy—only accounts for part of the privacy story. 

The Constitution’s limit is a recognition of the fact that we humans are highly social creatures and that privacy is one value among many. Properly conceived, privacy protections are themselves valuable only insofar as they protect other things we value. Jane Bambauer explored some of this in an earlier post where she characterized privacy as, at best, an “instrumental right” — that is, a tool used to promote other desirable social goals such as “fairness, safety, and autonomy.”

Following from Jane’s insight, privacy — as an instrumental good — is something that can have both positive and negative externalities, and needs to be enlarged or attenuated as its ability to serve instrumental ends changes in different contexts. 

According to Jane:

There is a moral imperative to ignore even express lack of consent when withholding important information that puts others in danger. Just as many states affirmatively require doctors, therapists, teachers, and other fiduciaries to report certain risks even at the expense of their client’s and ward’s privacy …  this same logic applies at scale to the collection and analysis of data during a pandemic.

Indeed, dealing with externalities is one of the most common and powerful justifications for regulation, and an extreme form of “privacy libertarianism” —in the context of a pandemic — is likely to be, on net, harmful to society.

Which brings us back to the efforts of Apple/Google. Even if those firms wanted to risk the ire of privacy absolutists, it’s not clear that they could do so without incurring tremendous regulatory risk, uncertainty, and a popular backlash. As statutory matters, the CCPA and the GDPR chill experimentation in the face of potentially crippling fines, while the FTC Act’s Section 5 prohibition on “unfair or deceptive” practices is open to interpretations that could result in existentially damaging outcomes. Further, some polling suggests that the public appetite for contact tracing is not particularly high – though, as is often the case, such pro-privacy poll outcomes rarely give appropriate weight to the tradeoffs involved.

As a general matter, it’s important to think about the value of individual privacy, and how best to optimally protect it. But privacy does not stand above all other values in all contexts. It is entirely reasonable to conclude that, in a time of emergency, if private firms can devise more effective solutions for mitigating the crisis, they should have more latitude to experiment. Knee-jerk preferences for an amorphous “right of privacy” should not be used to block those experiments.

Much as with the Cosmic Turtle, it’s tradeoffs all the way down. Most of the U.S. is in lockdown, and while we vigorously protect our privacy, we risk frustrating the creation of tools that could put a light at the end of the tunnel. We are, in effect, trading liberty and economic self-determination for privacy.

Once the worst of the Covid-19 crisis has passed — hastened possibly by the use of contact tracing programs — we can debate the proper use of private data in exigent circumstances. For the immediate future, we should instead be encouraging firms like Apple/Google to experiment with better ways to control the pandemic. 

The case against AT&T began in 1974. The government alleged that AT&T had monopolized the market for local and long-distance telephone service as well as telephone equipment. In 1982, the company entered into a consent decree to be broken up into eight pieces (the “Baby Bells” plus the parent company), which was completed in 1984. As a remedy, the government required the company to divest its local operating companies and guarantee equal access to all long-distance and information service providers (ISPs).

Source: Mohanram & Nanda

As the chart above shows, the divestiture broke up AT&T’s national monopoly into seven regional monopolies. In general, modern antitrust analysis focuses on the local product market (because that’s the relevant level for consumer decisions). In hindsight, how did breaking up a national monopoly into seven regional monopolies increase consumer choice? It’s also important to note that, prior to its structural breakup, AT&T was a government-granted monopoly regulated by the FCC. Any antitrust remedy should be analyzed in light of the company’s unique relationship with regulators.

Breaking up one national monopoly into seven regional monopolies is not an effective way to boost innovation. And there are economies of scale and network effects to be gained by owning a national network to serve a national market. In the case of AT&T, those economic incentives are why the Baby Bells forged themselves back together in the decades following the breakup.

Source: WSJ

As Clifford Winston and Robert Crandall noted:

Appearing to put Ma Bell back together again may embarrass the trustbusters, but it should not concern American consumers who, in two decades since the breakup, are overwhelmed with competitive options to provide whatever communications services they desire.

Moreover, according to Crandall & Winston (2003), the lower prices following the breakup of AT&T weren’t due to the structural remedy at all (emphasis added):

But on closer examination, the rise in competition and lower long-distance prices are attributable to just one aspect of the 1982 decree; specifically, a requirement that the Bell companies modify their switching facilities to provide equal access to all long-distance carriers. The Federal Communications Commission (FCC) could have promulgated such a requirement without the intervention of the antitrust authorities. For example, the Canadian regulatory commission imposed equal access on its vertically integrated carriers, including Bell Canada, in 1993. As a result, long-distance competition developed much more rapidly in Canada than it had in the United States (Crandall and Hazlett, 2001). The FCC, however, was trying to block MCI from competing in ordinary long-distance services when the AT&T case was filed by the Department of Justice in 1974. In contrast to Canadian and more recent European experience, a lengthy antitrust battle and a disruptive vertical dissolution were required in the U.S. market to offset the FCC’s anti-competitive policies. Thus, antitrust policy did not triumph in this case over restrictive practices by a monopolist to block competition, but instead it overcame anticompetitive policies by a federal regulatory agency.

A quick look at the data on telephone service in the US, EU, and Canada shows that the latter two were able to achieve similar reductions in price without breaking up their national providers.

Source: Crandall & Jackson (2011)

The paradigm shift from wireline to wireless

The technological revolution spurred by the transition from wireline telephone service to wireless telephone service shook up the telecommunications industry in the 1990s. The rapid change caught even some of the smartest players by surprise. In 1980, the management consulting firm McKinsey and Co. produced a report for AT&T predicting how large the cellular market might become by the year 2000. Their forecast said that 900,000 cell phones would be in use. The actual number was more than 109 million.

Along with the rise of broadband, the transition to wireless technology led to an explosion in investment. In contrast, the breakup of AT&T in 1984 had no discernible effect on the trend in industry investment.

The lesson for antitrust enforcers is clear: breaking up national monopolies into regional monopolies is no remedy. In certain cases, mandating equal access to critical networks may be warranted. Most of all, technology shocks will upend industries in ways that regulators — and dominant incumbents — fail to predict.

Big Tech continues to be mired in “a very antitrust situation,” as President Trump put it in 2018. Antitrust advocates have zeroed in on Facebook, Google, Apple, and Amazon as their primary targets. These advocates justify their proposals by pointing to the trio of antitrust cases against IBM, AT&T, and Microsoft. Elizabeth Warren, in announcing her plan to break up the tech giants, highlighted the case against Microsoft:

The government’s antitrust case against Microsoft helped clear a path for Internet companies like Google and Facebook to emerge. The story demonstrates why promoting competition is so important: it allows new, groundbreaking companies to grow and thrive — which pushes everyone in the marketplace to offer better products and services.

Tim Wu, a law professor at Columbia University, summarized the overarching narrative recently (emphasis added):

If there is one thing I’d like the tech world to understand better, it is that the trilogy of antitrust suits against IBM, AT&T, and Microsoft played a major role in making the United States the world’s preeminent tech economy.

The IBM-AT&T-Microsoft trilogy of antitrust cases each helped prevent major monopolists from killing small firms and asserting control of the future (of the 80s, 90s, and 00s, respectively).

A list of products and firms that owe at least something to the IBM-AT&T-Microsoft trilogy.

(1) IBM: software as product, Apple, Microsoft, Intel, Seagate, Sun, Dell, Compaq

(2) AT&T: Modems, ISPs, AOL, the Internet and Web industries

(3) Microsoft: Google, Facebook, Amazon

Wu argues that by breaking up the current crop of dominant tech companies, we can sow the seeds for the next one. But this reasoning depends on an incorrect — albeit increasingly popular — reading of the history of the tech industry. Entrepreneurs take purposeful action to produce innovative products for an underserved segment of the market. They also respond to broader technological change by integrating or modularizing different products in their market. This bundling and unbundling is a never-ending process.

Whether the government distracts a dominant incumbent with a failed lawsuit (e.g., IBM), imposes an ineffective conduct remedy (e.g., Microsoft), or breaks up a government-granted national monopoly into regional monopolies (e.g., AT&T), the dynamic nature of competition between tech companies will far outweigh the effects of antitrust enforcers tilting at windmills.

In a series of posts for Truth on the Market, I will review the cases against IBM, AT&T, and Microsoft and discuss what we can learn from them. In this introductory article, I will explain the relevant concepts necessary for understanding the history of market competition in the tech industry.

Competition for the Market

In industries like tech that tend toward “winner takes most,” it’s important to distinguish between competition during the market maturation phase — when no clear winner has emerged and the technology has yet to be widely adopted — and competition after the technology has been diffused in the economy. Benedict Evans recently explained how this cycle works (emphasis added):

When a market is being created, people compete at doing the same thing better. Windows versus Mac. Office versus Lotus. MySpace versus Facebook. Eventually, someone wins, and no-one else can get in. The market opportunity has closed. Be, NeXT/Path were too late. Monopoly!

But then the winner is overtaken by something completely different that makes it irrelevant. PCs overtook mainframes. HTML/LAMP overtook Win32. iOS & Android overtook Windows. Google overtook Microsoft.

Tech antitrust too often wants to insert a competitor to the winning monopolist, when it’s too late. Meanwhile, the monopolist is made irrelevant by something that comes from totally outside the entire conversation and owes nothing to any antitrust interventions.

In antitrust parlance, this is known as competing for the market. By contrast, in more static industries where the playing field doesn’t shift so radically and the market doesn’t tip toward “winner take most,” firms compete within the market. What Benedict Evans refers to as “something completely different” is often a disruptive product.

Disruptive Innovation

As Clay Christensen explains in the Innovator’s Dilemma, a disruptive product is one that is low-quality (but fast-improving), low-margin, and targeted at an underserved segment of the market. Initially, it is rational for the incumbent firms to ignore the disruptive technology and focus on improving their legacy technology to serve high-margin customers. But once the disruptive technology improves to the point that it can serve the whole market, it’s too late for the incumbent to switch technologies and catch up. This process looks like overlapping S-curves:

Source: Max Mayblum

We see these S-curves in the technology industry all the time:

Source: Benedict Evans
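As a rough numerical illustration of the overlapping S-curve dynamic, the short Python sketch below models an incumbent and a disruptor as logistic curves: the disruptor starts later and lower but improves faster toward a higher performance ceiling, eventually crossing the incumbent. The parameter values are arbitrary and chosen only to make the crossover visible.

```python
import math

def logistic(t, ceiling, growth, midpoint):
    """Performance of a technology over time, modeled as an S-curve."""
    return ceiling / (1 + math.exp(-growth * (t - midpoint)))

# Arbitrary illustrative parameters: the incumbent matures early against a lower
# ceiling; the disruptor starts later but improves faster toward a higher one.
for year in range(0, 21, 2):
    incumbent = logistic(year, ceiling=100, growth=0.6, midpoint=5)
    disruptor = logistic(year, ceiling=180, growth=0.9, midpoint=12)
    marker = "  <-- disruptor overtakes" if disruptor > incumbent else ""
    print(f"year {year:2d}: incumbent {incumbent:6.1f}  disruptor {disruptor:6.1f}{marker}")
```

Early on, the incumbent's curve sits comfortably above the entrant's, which is why ignoring the disruptor looks rational right up until the curves cross.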

As Christensen explains in the Innovator’s Solution, consumer needs can be thought of as “jobs-to-be-done.” Early on, when a product is just good enough to get a job done, firms compete on product quality and pursue an integrated strategy — designing, manufacturing, and distributing the product in-house. As the underlying technology improves and the product overshoots the needs of the jobs-to-be-done, products become modular and the primary dimension of competition moves to cost and convenience. As this cycle repeats itself, companies are either bundling different modules together to create more integrated products or unbundling integrated products to create more modular products.

Moore’s Law

Source: Our World in Data

Moore’s Law is the gasoline that gets poured on the fire of technology cycles. Though this “law” is nothing more than the observation that “the number of transistors in a dense integrated circuit doubles about every two years,” the implications for dynamic competition are difficult to overstate. As Bill Gates explained in a 1994 interview with Playboy magazine, Moore’s Law means that computer power is essentially “free” from an engineering perspective:

When you have the microprocessor doubling in power every two years, in a sense you can think of computer power as almost free. So you ask, Why be in the business of making something that’s almost free? What is the scarce resource? What is it that limits being able to get value out of that infinite computing power? Software.
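To put rough numbers on that intuition, the snippet below works through the compounding arithmetic of a two-year doubling period; the starting transistor count is an arbitrary round figure used only for illustration.

```python
# Back-of-the-envelope Moore's Law arithmetic: transistor counts doubling every
# two years. The starting count (1 million) is an arbitrary round number chosen
# only to make the compounding visible.
start_count = 1_000_000
doubling_period_years = 2

for years in (2, 10, 20, 40):
    doublings = years // doubling_period_years
    count = start_count * 2 ** doublings
    print(f"after {years:2d} years: {doublings:2d} doublings -> {count:,} transistors")

# After 20 years that is 2**10 = 1,024x the starting count; after 40 years it is
# 2**20, roughly a million-fold, which is why, from an engineering standpoint,
# computing power starts to look "almost free."
```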

Exponentially smaller integrated circuits can be combined with new user interfaces and networks to create new computer classes, which themselves represent the opportunity for disruption.

Bell’s Law of Computer Classes

Source: Brad Campbell

A corollary to Moore’s Law, Bell’s law of computer classes predicts that “roughly every decade a new, lower priced computer class forms based on a new programming platform, network, and interface resulting in new usage and the establishment of a new industry.” Originally formulated in 1972, the prediction has played out in the birth of mainframes, minicomputers, workstations, personal computers, laptops, smartphones, and the Internet of Things.

Understanding these concepts — competition for the market, disruptive innovation, Moore’s Law, and Bell’s Law of Computer Classes — will be crucial for understanding the true effects (or lack thereof) of the antitrust cases against IBM, AT&T, and Microsoft. In my next post, I will look at the DOJ’s (ultimately unsuccessful) 13-year antitrust battle with IBM.

An oft-repeated claim of conferences, media, and left-wing think tanks is that lax antitrust enforcement has led to a substantial increase in concentration in the US economy of late, strangling the economy, harming workers, and saddling consumers with greater markups in the process. But what if rising concentration (and the current level of antitrust enforcement) were an indication of more competition, not less?

By now the concentration-as-antitrust-bogeyman story is virtually conventional wisdom, echoed, of course, by political candidates such as Elizabeth Warren trying to cash in on the need for a government response to such dire circumstances:

In industry after industry — airlines, banking, health care, agriculture, tech — a handful of corporate giants control more and more. The big guys are locking out smaller, newer competitors. They are crushing innovation. Even if you don’t see the gears turning, this massive concentration means prices go up and quality goes down for everything from air travel to internet service.  

But the claim that lax antitrust enforcement has led to increased concentration in the US and that it has caused economic harm has been debunked several times (for some of our own debunking, see Eric Fruits' posts here, here, and here). Or, more charitably to those who tirelessly repeat the claim as if it were "settled science," it has been significantly called into question.

Most recently, several working papers that look at the data on concentration in detail and attempt to identify the likely cause of the observed trends show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

The most recent — and, I believe, most significant — corrective to the conventional story comes from economists Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University. As they write in a recent paper titled "The Industrial Revolution in Services":

We show that new technologies have enabled firms that adopt them to scale production over a large number of establishments dispersed across space. Firms that adopt this technology grow by increasing the number of local markets that they serve, but on average are smaller in the markets that they do serve. Unlike Henry Ford’s revolution in manufacturing more than a hundred years ago when manufacturing firms grew by concentrating production in a given location, the new industrial revolution in non-traded sectors takes the form of horizontal expansion across more locations. At the same time, multi-product firms are forced to exit industries where their productivity is low or where the new technology has had no effect. Empirically we see that top firms in the overall economy are more focused and have larger market shares in their chosen sectors, but their size as a share of employment in the overall economy has not changed. (pp. 42-43) (emphasis added).

This makes perfect sense. And it has the benefit of not second-guessing structural changes made in response to technological change. Rather, it points to technological change as doing what it regularly does: improving productivity.

The implementation of new technology seems to be conferring benefits — it’s just that these benefits are not evenly distributed across all firms and industries. But the assumption that larger firms are causing harm (or even that there is any harm in the first place, whatever the cause) is unmerited. 

What the authors find is that the apparent rise in national concentration doesn’t tell the relevant story, and the data certainly aren’t consistent with assumptions that anticompetitive conduct is either a cause or a result of structural changes in the economy.

Hsieh and Rossi-Hansberg point out that increased concentration is not happening everywhere, but is being driven by just three industries:

First, we show that the phenomena of rising concentration . . . is only seen in three broad sectors – services, wholesale, and retail. . . . [T]op firms have become more efficient over time, but our evidence indicates that this is only true for top firms in these three sectors. In manufacturing, for example, concentration has fallen.

Second, rising concentration in these sectors is entirely driven by an increase [in] the number of local markets served by the top firms. (p. 4) (emphasis added).

These findings are a gloss on a (then) working paper — The Fall of the Labor Share and the Rise of Superstar Firms — by David Autor, David Dorn, Lawrence F. Katz, Christina Patterson, and John Van Reenen (now forthcoming in the QJE). Autor et al. (2019) finds that concentration is rising, and that it is the result of increased productivity:

If globalization or technological changes push sales towards the most productive firms in each industry, product market concentration will rise as industries become increasingly dominated by superstar firms, which have high markups and a low labor share of value-added.

We empirically assess seven predictions of this hypothesis: (i) industry sales will increasingly concentrate in a small number of firms; (ii) industries where concentration rises most will have the largest declines in the labor share; (iii) the fall in the labor share will be driven largely by reallocation rather than a fall in the unweighted mean labor share across all firms; (iv) the between-firm reallocation component of the fall in the labor share will be greatest in the sectors with the largest increases in market concentration; (v) the industries that are becoming more concentrated will exhibit faster growth of productivity; (vi) the aggregate markup will rise more than the typical firm’s markup; and (vii) these patterns should be observed not only in U.S. firms, but also internationally. We find support for all of these predictions. (emphasis added).

This alone is quite important (and seemingly often overlooked). Autor et al. (2019) finds that rising concentration is a result of increased productivity that weeds out less-efficient producers. This is a good thing.

But Hsieh & Rossi-Hansberg drill down into the data to find something perhaps even more significant: the rise in concentration itself is limited to just a few sectors, and, where it is observed, it is predominantly a function of more efficient firms competing in more — and more localized — markets. This means that competition is increasing, not decreasing, whether it is accompanied by an increase in concentration or not. 

No matter how many times and under how many monikers the antitrust populists try to revive it, the Structure-Conduct-Performance paradigm remains as moribund as ever. Indeed, on this point, as one of the new antitrust agonists' own, Fiona Scott Morton, has written (along with co-authors Martin Gaynor and Steven Berry):

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration. As Bresnahan (1989) argued three decades ago, no clear interpretation of the impact of concentration is possible without a clear focus on equilibrium oligopoly demand and “supply,” where supply includes the list of the marginal cost functions of the firms and the nature of oligopoly competition. 

Some of the recent literature on concentration, profits, and markups has simply reasserted the relevance of the old-style structure-conduct-performance correlations. For economists trained in subfields outside industrial organization, such correlations can be attractive. 

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates. Such correlations will not produce information about the causal estimates that policy demands. It is these causal relationships that will help us understand what, if anything, may be causing markups to rise. (emphasis added).

Indeed! And one reason for the enduring irrelevance of market concentration measures is well laid out in Hsieh and Rossi-Hansberg’s paper:

This evidence is consistent with our view that increasing concentration is driven by new ICT-enabled technologies that ultimately raise aggregate industry TFP. It is not consistent with the view that concentration is due to declining competition or entry barriers . . . , as these forces will result in a decline in industry employment. (pp. 4-5) (emphasis added)

The net effect is that there is essentially no change in concentration by the top firms in the economy as a whole. The “super-star” firms of today’s economy are larger in their chosen sectors and have unleashed productivity growth in these sectors, but they are not any larger as a share of the aggregate economy. (p. 5) (emphasis added)

Thus, to begin with, the claim that increased concentration leads to monopsony in labor markets (and thus unemployment) appears to be false. Hsieh and Rossi-Hansberg again:

[W]e find that total employment rises substantially in industries with rising concentration. This is true even when we look at total employment of the smaller firms in these industries. (p. 4)

[S]ectors with more top firm concentration are the ones where total industry employment (as a share of aggregate employment) has also grown. The employment share of industries with increased top firm concentration grew from 70% in 1977 to 85% in 2013. (p. 9)

Firms throughout the size distribution increase employment in sectors with increasing concentration, not only the top 10% firms in the industry, although by definition the increase is larger among the top firms. (p. 10) (emphasis added)

Again, what actually appears to be happening is that national-level growth in concentration is being driven by increased competition in certain industries at the local level:

93% of the growth in concentration comes from growth in the number of cities served by top firms, and only 7% comes from increased employment per city. . . . [A]verage employment per county and per establishment of top firms falls. So necessarily more than 100% of concentration growth has to come from the increase in the number of counties and establishments served by the top firms. (p.13)
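To see how national concentration can rise even as local concentration falls, consider a stylized example with hypothetical numbers (mine, not the paper's): a top firm that expands from one city into a second city raises its national employment share, while the Herfindahl-Hirschman Index (HHI) of the newly entered local market drops.

```python
# Stylized illustration with hypothetical numbers (not from Hsieh &
# Rossi-Hansberg): a top firm "T" expanding into a new city raises its
# national employment share while local concentration (HHI) in that city falls.

def hhi(employment_by_firm):
    """Herfindahl-Hirschman Index: sum of squared market shares, in percent."""
    total = sum(employment_by_firm.values())
    return sum((100 * e / total) ** 2 for e in employment_by_firm.values())

def national_share(firm, cities):
    """Firm's share of total employment across all local markets, in percent."""
    total = sum(sum(c.values()) for c in cities)
    return 100 * sum(c.get(firm, 0) for c in cities) / total

# Before: T operates only in City A; City B is served by a local monopolist.
city_a_before, city_b_before = {"T": 60, "a": 40}, {"b": 100}
# After: T opens a smaller establishment in City B.
city_a_after, city_b_after = {"T": 60, "a": 40}, {"b": 100, "T": 30}

print(f"T's national share: {national_share('T', [city_a_before, city_b_before]):.1f}% "
      f"-> {national_share('T', [city_a_after, city_b_after]):.1f}%")   # 30.0% -> 39.1% (rises)
print(f"City B local HHI:   {hhi(city_b_before):,.0f} -> {hhi(city_b_after):,.0f}")  # 10,000 -> ~6,450 (falls)
```

The toy numbers track the paper's qualitative finding: the top firm's overall share rises because it serves more local markets, even though it is smaller in each market it enters.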

The net effect is a decrease in the power of top firms relative to the economy as a whole, as the largest firms specialize more, and are dominant in fewer industries:

Top firms produce in more industries than the average firm, but less so in 2013 compared to 1977. The number of industries of a top 0.001% firm (relative to the average firm) fell from 35 in 1977 to 17 in 2013. The corresponding number for a top 0.01% firm is 21 industries in 1977 and 9 industries in 2013. (p. 17)

Thus, summing up, technology has led to increased productivity as well as greater specialization by large firms, especially in relatively concentrated industries (exactly the opposite of the pessimistic stories):  

[T]op firms are now more specialized, are larger in the chosen industries, and these are precisely the industries that have experienced concentration growth. (p. 18)

Unsurprisingly (except to some…), the increase in concentration in certain industries does not translate into an increase in concentration in the economy as a whole. In other words, workers can shift jobs between industries, and there is enough geographic and firm mobility to prevent monopsony. (Despite rampant assumptions that increased concentration is constraining labor competition everywhere…).

Although the employment share of top firms in an average industry has increased substantially, the employment share of the top firms in the aggregate economy has not. (p. 15)

It is also simply not clear that concentration is causing prices to rise or otherwise causing any harm. As Hsieh and Rossi-Hansberg note:

[T]he magnitude of the overall trend in markups is still controversial . . . and . . . the geographic expansion of top firms leads to declines in local concentration . . . that could enhance competition. (p. 37)

Indeed, recent papers such as Traina (2018), Gutiérrez and Philippon (2017), and the IMF (2019) have found increasing markups over the last few decades but at much more moderate rates than the famous De Loecker and Eeckhout (2017) study. Other parts of the anticompetitive narrative have been challenged as well. Karabarbounis and Neiman (2018) finds that profits have increased, but are still within their historical range. Rinz (2018) shows decreased wages in concentrated markets but also points out that local concentration has been decreasing over the relevant time period.

None of this should be so surprising. Has antitrust enforcement gotten more lax, leading to greater concentration? According to Vita and Osinski (2018), not so much. And how about the stagnant rate of new firms? Are incumbent monopolists killing off new startups? The more likely — albeit mundane — explanation, according to Hopenhayn et al. (2018), is that increased average firm age is due to an aging labor force. Lastly, the paper from Hsieh and Rossi-Hansberg discussed above is only the latest in a series of papers, including Bessen (2017), Van Reenen (2018), and Autor et al. (2019), that show a rise in fixed costs due to investments in proprietary information technology, which correlates with increased concentration.

So what is the upshot of all this?

  • First, as noted, employment has not decreased because of increased concentration; quite the opposite. Employment has increased in the industries that have experienced the most concentration at the national level.
  • Second, this result suggests that the rise in concentrated industries has not led to increased market power over labor.
  • Third, concentration itself needs to be understood more precisely. It is not explained by a simple narrative that the economy as a whole has experienced a great deal of concentration and this has been detrimental for consumers and workers. Specific industries have experienced national level concentration, but simultaneously those same industries have become more specialized and expanded competition into local markets. 

Surprisingly (because their paper has been around for a while and yet this conclusion is rarely recited by advocates for more intervention — although they happily use the paper to support claims of rising concentration), Autor et al. (2019) finds the same thing:

Our formal model, detailed below, generates superstar effects from increases in the toughness of product market competition that raise the market share of the most productive firms in each sector at the expense of less productive competitors. . . . An alternative perspective on the rise of superstar firms is that they reflect a diminution of competition, due to a weakening of U.S. antitrust enforcement (Dottling, Gutierrez and Philippon, 2018). Our findings on the similarity of trends in the U.S. and Europe, where antitrust authorities have acted more aggressively on large firms (Gutierrez and Philippon, 2018), combined with the fact that the concentrating sectors appear to be growing more productive and innovative, suggests that this is unlikely to be the primary explanation, although it may [be] important in some specific industries (see Cooper et al, 2019, on healthcare for example). (emphasis added).

The popular narrative among Neo-Brandeisian antitrust scholars that lax antitrust enforcement has led to concentration detrimental to society is at base an empirical one. The findings of these empirical papers severely undermine the persuasiveness of that story.

Last week the Senate Judiciary Committee held a hearing, Intellectual Property and the Price of Prescription Drugs: Balancing Innovation and Competition, that explored whether changes to the pharmaceutical patent process could help lower drug prices.  The committee’s goal was to evaluate various legislative proposals that might facilitate the entry of cheaper generic drugs, while also recognizing that strong patent rights for branded drugs are essential to incentivize drug innovation.  As Committee Chairman Lindsey Graham explained:

One thing you don’t want to do is kill the goose who laid the golden egg, which is pharmaceutical development. But you also don’t want to have a system that extends unnecessarily beyond the ability to get your money back and make a profit, a patent system that drives up costs for the average consumer.

Several proposals that were discussed at the hearing have the potential to encourage competition in the pharmaceutical industry and help rein in drug prices. Below, I discuss these proposals, plus a few additional reforms. I also point out some of the language in the current draft proposals that goes a bit too far and threatens the ability of drug makers to remain innovative.  

1. Prevent brand drug makers from blocking generic companies' access to drug samples. Some brand drug makers have attempted to delay generic entry by restricting generics' access to the drug samples necessary to conduct FDA-required bioequivalence studies. Some brand drug manufacturers have limited the ability of pharmacies or wholesalers to sell samples to generic companies or have abused the REMS (Risk Evaluation and Mitigation Strategy) program to refuse samples to generics under the guise of REMS safety requirements. The Creating and Restoring Equal Access To Equivalent Samples (CREATES) Act of 2019 would allow potential generic competitors to bring an action in federal court for both injunctive relief and damages when brand companies block access to drug samples. It also gives the FDA discretion to approve alternative REMS safety protocols for generic competitors that have been denied samples under the brand companies' REMS protocol. Although the vast majority of brand drug companies do not engage in the delay tactics addressed by CREATES, the Act would prevent the handful that do from thwarting generic competition. Increased generic competition should, in turn, reduce drug prices.

2. Restrict abuses of FDA Citizen Petitions. The citizen petition process was created as a way for individuals and community groups to flag legitimate concerns about drugs awaiting FDA approval. However, critics claim that the process has been misused by some brand drug makers who file petitions about specific generic drugs in the hopes of delaying their approval and market entry. Although FDA has indicated that citizen petitions rarely delay the approval of generic drugs, there have been a few drug makers, such as Shire ViroPharma, that have clearly abused the process and put unnecessary strain on FDA resources. The Stop The Overuse of Petitions and Get Affordable Medicines to Enter Soon (STOP GAMES) Act is intended to prevent such abuses. The Act reinforces the FDA and FTC's ability to crack down on petitions meant to lengthen the approval process of a generic competitor, which should deter abuses of the system that can occasionally delay generic entry. However, lawmakers should make sure that adopted legislation doesn't limit the ability of stakeholders (including drug makers that often know more about the safety of drugs than ordinary citizens) to raise serious concerns with the FDA.

3. Curtail Anticompetitive Pay-for-Delay Settlements.  The Hatch-Waxman Act incentivizes generic companies to challenge brand drug patents by granting the first successful generic challenger a period of marketing exclusivity. Like all litigation, many of these patent challenges result in settlements instead of trials.  The FTC and some courts have concluded that these settlements can be anticompetitive when the brand companies agree to pay the generic challenger in exchange for the generic company agreeing to forestall the launch of their lower-priced drug. Settlements that result in a cash payment are a red flag for anti-competitive behavior, so pay-for-delay settlements have evolved to involve other forms of consideration instead.  As a result, the Preserve Access to Affordable Generics and Biosimilars Act aims to make an exchange of anything of value presumptively anticompetitive if the terms include a delay in research, development, manufacturing, or marketing of a generic drug. Deterring obvious pay-for-delay settlements will prevent delays to generic entry, making cheaper drugs available as quickly as possible to patients. 

However, the Act’s rigid presumption that an exchange of anything of value is presumptively anticompetitive may also prevent legitimate settlements that ultimately benefit consumers.  Brand drug makers should be allowed to compensate generic challengers to eliminate litigation risk and escape litigation expenses, and many settlements result in the generic drug coming to market before the expiration of the brand patent and possibly earlier than if there was prolonged litigation between the generic and brand company.  A rigid presumption of anticompetitive behavior will deter these settlements, thereby increasing expenses for all parties that choose to litigate and possibly dissuading generics from bringing patent challenges in the first place.  Indeed, the U.S. Supreme Court has declined to define these settlements as per se anticompetitive, and the FTC’s most recent agreement involving such settlements exempts several forms of exchanges of value.  Any adopted legislation should follow the FTC’s lead and recognize that some exchanges of value are pro-consumer and pro-competitive.

4. Restore the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers.  I have previously discussed how an unbalanced inter partes review (IPR) process for challenging patents threatens to stifle drug innovation.  Moreover, current law allows generic challengers to file duplicative claims in both federal court and through the IPR process.  And because IPR proceedings do not have a standing requirement, the process has been exploited  by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet. The added expense to drug makers of defending both duplicative claims and claims against challengers that are exploiting the system increases litigation costs, which may be passed on to consumers in the form of higher prices. 

The Hatch-Waxman Integrity Act (HWIA) is designed to return the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. It requires generic challengers to choose between either Hatch-Waxman litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) or an IPR proceeding (which is faster and provides certain pro-challenger provisions). The HWIA would also eliminate the ability of hedge funds and similar entities to file IPR claims while shorting the stock.  By reducing duplicative litigation and the exploitation of the IPR process, the HWIA will reduce costs and strengthen innovation incentives for drug makers.  This will ensure that patent owners achieve clarity on the validity of their patents, which will spur new drug innovation and make sure that consumers continue to have access to life-improving drugs.

5. Curb illegal product hopping and patent thickets. Two drug maker tactics currently garnering a lot of attention are so-called "product hopping" and "patent thickets." At its worst, product hopping involves brand drug makers making minor changes to a drug nearing the end of its patent so that they get a new patent on the slightly tweaked drug, and then withdrawing the original drug from the market so that patients shift to the newly patented drug and pharmacists can't substitute a generic version of the original drug. Similarly, at their worst, patent thickets involve brand drug makers obtaining a web of patents on a single drug to extend the life of their exclusivity and make it too costly for other drug makers to challenge all of the patents associated with a drug. The proposed Affordable Prescriptions for Patients Act of 2019 is meant to stop these abuses of the patent system, which would facilitate generic entry and help to lower drug prices.

However, the Act goes too far by also capturing many legitimate activities in its definitions. For example, the bill defines as anticompetitive product hopping the sale of any improved version of a drug during a window that extends to a year after the launch of the first generic competitor. Presently, to acquire a patent and FDA approval, the improved version of the drug must already be sufficiently different from, and more innovative than, the original drug, yet the Act would prevent the drug maker from selling such a product without satisfying a demanding three-pronged test before the FTC or a district court. Similarly, the Act defines as anticompetitive patent thickets any new patents filed on a drug in the same general family as the original patent, and this presumption can only be rebutted by providing extensive evidence and satisfying demanding standards to the FTC or a district court. As a result, the Act deters innovation activity that is at all related to an initial patent and, in doing so, ignores the fact that most important drug innovation is incremental innovation based on previous inventions. Thus, the proposal should be redrafted to capture truly anticompetitive product hopping and patent thicket activity, while exempting behavior that is critical for drug innovation.

Reforms that close loopholes in the current patent process should facilitate competition in the pharmaceutical industry and help to lower drug prices. However, lawmakers need to be sure that they don't restrict patent rights to the point of deterring innovation; a significant body of research predicts that patients' health outcomes would suffer as a result.

[TOTM: The following is the fourth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]

The courtroom trial in the Federal Trade Commission’s (FTC’s) antitrust case against Qualcomm ended in January with a promise from the judge in the case, Judge Lucy Koh, to issue a ruling as quickly as possible — caveated by her acknowledgement that the case is complicated and the evidence voluminous. Well, things have only gotten more complicated since the end of the trial. Not only did Apple and Qualcomm reach a settlement in the antitrust case against Qualcomm that Apple filed just three days after the FTC brought its suit, but the abbreviated trial in that case saw the presentation by Qualcomm of some damning evidence that, if accurate, seriously calls into (further) question the merits of the FTC’s case.

Apple v. Qualcomm settles — and the DOJ takes notice

The Apple v. Qualcomm case, which was based on substantially the same arguments brought by the FTC in its case, ended abruptly last month after only a day and a half of trial — just enough time for the parties to make their opening statements — when Apple and Qualcomm reached an out-of-court settlement. The settlement includes a six-year global patent licensing deal, a multi-year chip supplier agreement, an end to all of the patent disputes around the world between the two companies, and a $4.5 billion settlement payment from Apple to Qualcomm.

That alone complicates the economic environment into which Judge Koh will issue her ruling. But the Apple v. Qualcomm trial also appears to have induced the Department of Justice Antitrust Division (DOJ) to weigh in on the FTC’s case with a Statement of Interest requesting Judge Koh to use caution in fashioning a remedy in the case should she side with the FTC, followed by a somewhat snarky Reply from the FTC arguing the DOJ’s filing was untimely (and, reading the not-so-hidden subtext, unwelcome).

But buried in the DOJ’s Statement is an important indication of why it filed its Statement when it did, just about a week after the end of the Apple v. Qualcomm case, and a pointer to a much larger issue that calls the FTC’s case against Qualcomm even further into question (I previously wrote about the lack of theoretical and evidentiary merit in the FTC’s case here).

Footnote 6 of the DOJ’s Statement reads:

Internal Apple documents that recently became public describe how, in an effort to “[r]educe Apple’s net royalty to Qualcomm,” Apple planned to “[h]urt Qualcomm financially” and “[p]ut Qualcomm’s licensing model at risk,” including by filing lawsuits raising claims similar to the FTC’s claims in this case …. One commentator has observed that these documents “potentially reveal[] that Apple was engaging in a bad faith argument both in front of antitrust enforcers as well as the legal courts about the actual value and nature of Qualcomm’s patented innovation.” (Emphasis added).

Indeed, the slides presented by Qualcomm during that single day of trial in Apple v. Qualcomm are significant, not only for what they say about Apple’s conduct, but, more importantly, for what they say about the evidentiary basis for the FTC’s claims against the company.

The evidence presented by Qualcomm in its opening statement suggests some troubling conduct by Apple

Others have pointed to Qualcomm’s opening slides and the Apple internal documents they present to note Apple’s apparent bad conduct. As one commentator sums it up:

Although we really only managed to get a small glimpse of Qualcomm’s evidence demonstrating the extent of Apple’s coordinated strategy to manipulate the FRAND license rate, that glimpse was particularly enlightening. It demonstrated a decade-long coordinated effort within Apple to systematically engage in what can only fairly be described as manipulation (if not creation of evidence) and classic holdout.

Qualcomm showed during opening arguments that, dating back to at least 2009, Apple had been laying the foundation for challenging its longstanding relationship with Qualcomm. (Emphasis added).

The internal Apple documents presented by Qualcomm to corroborate this claim appear quite damning. Of course, absent explanation and cross-examination, it's impossible to know for certain what the documents mean. But on their face they suggest Apple knowingly undertook a deliberate scheme (and knowingly took upon itself significant legal risk in doing so) to devalue patent portfolios comparable to Qualcomm's.

The apparent purpose of this scheme was to devalue comparable patent licensing agreements where Apple had the power to do so (through litigation or the threat of litigation) in order to then use those agreements to argue that Qualcomm’s royalty rates were above the allowable, FRAND level, and to undermine the royalties Qualcomm would be awarded in courts adjudicating its FRAND disputes with the company. As one commentator put it:

Apple embarked upon a coordinated scheme to challenge weaker patents in order to beat down licensing prices. Once the challenges to those weaker patents were successful, and the licensing rates paid to those with weaker patent portfolios were minimized, Apple would use the lower prices paid for weaker patent portfolios as proof that Qualcomm was charging a super-competitive licensing price; a licensing price that violated Qualcomm’s FRAND obligations. (Emphasis added).

That alone is a startling revelation, if accurate, and one that would seem to undermine claims that patent holdout isn’t a real problem. It also would undermine Apple’s claims that it is a “willing licensee,” engaging with SEP licensors in good faith. (Indeed, this has been called into question before, and one Federal Circuit judge has noted in dissent that “[t]he record in this case shows evidence that Apple may have been a hold out.”). If the implications drawn from the Apple documents shown in Qualcomm’s opening statement are accurate, there is good reason to doubt that Apple has been acting in good faith.

Even more troubling is what it means for the strength of the FTC’s case

But the evidence offered in Qualcomm's opening argument points to another, more troubling implication as well. We know that Apple has been coordinating with the FTC and was likely an important impetus for the FTC's decision to bring an action in the first place. It seems reasonable to assume that Apple used these "manipulated" agreements to help make its case.

But what is most troubling is the extent to which it appears to have worked.

The FTC’s action against Qualcomm rested in substantial part on arguments that Qualcomm’s rates were too high (even though the FTC constructed its case without coming right out and saying this, at least until trial). In its opening statement the FTC said:

Qualcomm’s practices, including no license, no chips, skewed negotiations towards the outcomes that favor Qualcomm and lead to higher royalties. Qualcomm is committed to license its standard essential patents on fair, reasonable, and non-discriminatory terms. But even before doing market comparison, we know that the license rates charged by Qualcomm are too high and above FRAND because Qualcomm uses its chip power to require a license.

* * *

Mr. Michael Lasinski [the FTC’s patent valuation expert] compared the royalty rates received by Qualcomm to … the range of FRAND rates that ordinarily would form the boundaries of a negotiation … Mr. Lasinski’s expert opinion … is that Qualcomm’s royalty rates are far above any indicators of fair and reasonable rates. (Emphasis added).

The key question is what constitutes the “range of FRAND rates that ordinarily would form the boundaries of a negotiation”?

Because they were discussed under seal, we don't know the precise agreements that the FTC's expert, Mr. Lasinski, used for his analysis. But we do know something about them: His analysis entailed a study of only eight licensing agreements; in six of them, the licensee was either Apple or Samsung; and in all of them the licensor was either Interdigital, Nokia, or Ericsson. We also know that Mr. Lasinski's valuation study did not include any Qualcomm licenses, and that the eight agreements he looked at were all executed after the district court's decision in Microsoft v. Motorola in 2013.

A curiously small number of agreements

Right off the bat there is a curiosity in the FTC’s valuation analysis. Even though there are hundreds of SEP license agreements involving the relevant standards, the FTC’s analysis relied on only eight, three-quarters of which involved licensing by only two companies: Apple and Samsung.

Indeed, even since 2013 (a date to which we will return) there have been scads of licenses (see, e.g., here, here, and here). Apple and Samsung are not the only companies that make CDMA and LTE devices; there are — quite literally — hundreds of other manufacturers out there, all of them licensing essentially the same technology — including global giants like LG, Huawei, HTC, Oppo, Lenovo, and Xiaomi. Why were none of their licenses included in the analysis?

At the same time, while Interdigital, Nokia, and Ericsson are among the largest holders of CDMA and LTE SEPs, several dozen companies have declared such patents, including Motorola (Alphabet), NEC, Huawei, Samsung, ZTE, NTT DOCOMO, etc. Again — why were none of their licenses included in the analysis?

All else equal, more data yields better results. This is particularly true where the data are complex license agreements which are often embedded in larger, even-more-complex commercial agreements and which incorporate widely varying patent portfolios, patent implementers, and terms.

Yet the FTC relied on just eight agreements in its comparability study, covering a tiny fraction of the industry’s licensors and licensees, and, notably, including primarily licenses taken by the two companies (Samsung and Apple) that have most aggressively litigated their way to lower royalty rates.

A curiously crabbed selection of licensors

And it is not just that the selected licensees represent a weirdly small and biased sample; it is also not necessarily even a particularly comparable sample.

One thing we can be fairly confident of, given what we know of the agreements used, is that at least one of the license agreements involved Nokia licensing to Apple, and another involved InterDigital licensing to Apple. But these companies' patent portfolios are not exactly comparable to Qualcomm's. About Nokia's patents, Apple said:

[Apple internal slide excerpt not reproduced here]

And about InterDigital's:

[Apple internal slide excerpt not reproduced here]

Meanwhile, Apple's view of Qualcomm's patent portfolio (despite its public comments to the contrary) was that it was considerably better than the others':

[Apple internal slide excerpt not reproduced here]

The FTC’s choice of such a limited range of comparable license agreements is curious for another reason, as well: It includes no Qualcomm agreements. Qualcomm is certainly one of the biggest players in the cellular licensing space, and no doubt more than a few license agreements involve Qualcomm. While it might not make sense to include Qualcomm licenses that the FTC claims incorporate anticompetitive terms, that doesn’t describe the huge range of Qualcomm licenses with which the FTC has no quarrel. Among other things, Qualcomm licenses from before it began selling chips would not have been affected by its alleged “no license, no chips” scheme, nor would licenses granted to companies that didn’t also purchase Qualcomm chips. Furthermore, its licenses for technology reading on the WCDMA standard are not claimed to be anticompetitive by the FTC.

And yet none of these licenses were deemed “comparable” by the FTC’s expert, even though, on many dimensions — most notably, with respect to the underlying patent portfolio being valued — they would have been the most comparable (i.e., identical).

A curiously circumscribed timeframe

That the FTC's expert should use the 2013 cut-off date is also questionable. According to Lasinski, he chose to use agreements after 2013 because it was in 2013 that the U.S. District Court for the Western District of Washington decided the Microsoft v. Motorola case. Among other things, the court in Microsoft v. Motorola held that the proper value of a SEP is its "intrinsic" patent value, including its value to the standard, but not including the additional value it derives from being incorporated into a widely used standard.

According to the FTC’s expert,

prior to [Microsoft v. Motorola], people were trying to value … the standard and the license based on the value of the standard, not the value of the patents ….

Asked by Qualcomm’s counsel if his concern was that the “royalty rates derived in license agreements for cellular SEPs [before Microsoft v. Motorola] could very well have been above FRAND,” Mr. Lasinski concurred.

The problem with this approach is that it’s little better than arbitrary. The Motorola decision was an important one, to be sure, but the notion that sophisticated parties in a multi-billion dollar industry were systematically agreeing to improper terms until a single court in Washington suggested otherwise is absurd. To be sure, such agreements are negotiated in “the shadow of the law,” and judicial decisions like the one in Washington (later upheld by the Ninth Circuit) can affect the parties’ bargaining positions.

But even if it were true that the court’s decision had some effect on licensing rates, the decision would still have been only one of myriad factors determining parties’ relative bargaining  power and their assessment of the proper valuation of SEPs. There is no basis to support the assertion that the Motorola decision marked a sea-change between “improper” and “proper” patent valuations. And, even if it did, it was certainly not alone in doing so, and the FTC’s expert offers no justification for determining that agreements reached before, say, the European Commission’s decision against Qualcomm in 2018 were “proper,” or that the Korea FTC’s decision against Qualcomm in 2009 didn’t have the same sort of corrective effect as the Motorola court’s decision in 2013. 

At the same time, a review of a wider range of agreements suggested that Qualcomm’s licensing royalties weren’t inflated

Meanwhile, one of Qualcomm's experts in the FTC case, former DOJ Chief Economist Aviv Nevo, examined whether the FTC's theory of anticompetitive harm was borne out by the data, looking at Qualcomm's royalty rates across time periods and standards and using a much larger set of agreements. Although his remit was different from Mr. Lasinski's, and although he analyzed only Qualcomm licenses, his analysis still sheds light on Mr. Lasinski's conclusions:

[S]pecifically what I looked at was the predictions from the theory to see if they’re actually borne in the data….

[O]ne of the clear predictions from the theory is that during periods of alleged market power, the theory predicts that we should see higher royalty rates.

So that’s a very clear prediction that you can take to data. You can look at the alleged market power period, you can look at the royalty rates and the agreements that were signed during that period and compare to other periods to see whether we actually see a difference in the rates.

Dr. Nevo’s analysis, which looked at royalty rates in Qualcomm’s SEP license agreements for CDMA, WCDMA, and LTE ranging from 1990 to 2017, found no differences in rates between periods when Qualcomm was alleged to have market power and when it was not alleged to have market power (or could not have market power, on the FTC’s theory, because it did not sell corresponding chips).
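For readers who want the flavor of that exercise, here is a schematic sketch (with entirely made-up numbers, and not Dr. Nevo's actual model or data) of what it means to take the theory's prediction to the data: group agreements by whether they were signed during the alleged market-power period and compare the royalty rates.

```python
# Schematic only: made-up royalty rates, grouped by whether the agreement was
# signed during the alleged market-power period. If the FTC's theory were
# right, the first group's rates should be systematically higher; similar
# averages cut against that prediction.

from statistics import mean

agreements = [
    {"rate_pct": 3.2, "alleged_power_period": True},
    {"rate_pct": 3.3, "alleged_power_period": True},
    {"rate_pct": 3.1, "alleged_power_period": False},
    {"rate_pct": 3.3, "alleged_power_period": False},
    {"rate_pct": 3.2, "alleged_power_period": False},
]

in_period = [a["rate_pct"] for a in agreements if a["alleged_power_period"]]
out_period = [a["rate_pct"] for a in agreements if not a["alleged_power_period"]]

print(f"Mean rate, alleged market-power period: {mean(in_period):.2f}%")
print(f"Mean rate, other periods:               {mean(out_period):.2f}%")
```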

The reason this is relevant is that Mr. Lasinski’s assessment implies that Qualcomm’s higher royalty rates weren’t attributable to its superior patent portfolio, leaving either anticompetitive conduct or non-anticompetitive, superior bargaining ability as the explanation. No one thinks Qualcomm has cornered the market on exceptional negotiators, so really the only proffered explanation for the results of Mr. Lasinski’s analysis is anticompetitive conduct. But this assumes that his analysis is actually reliable. Prof. Nevo’s analysis offers some reason to think that it is not.

All of the agreements studied by Mr. Lasinski were drawn from the period when Qualcomm is alleged to have employed anticompetitive conduct to elevate its royalty rates above FRAND. But when the actual royalties charged by Qualcomm during its alleged exercise of market power are compared to those charged when and where it did not have market power, the evidence shows it received identical rates. Mr. Lasinski's results, then, would imply that Qualcomm's royalties were "too high" not only while it was allegedly acting anticompetitively, but also when it was not. That simple fact suggests on its face that Mr. Lasinski's analysis may have been flawed, and that it systematically under-valued Qualcomm's patents.

Connecting the dots and calling into question the strength of the FTC’s case

In its closing argument, the FTC pulled together the implications of its allegations of anticompetitive conduct by pointing to Mr. Lasinski’s testimony:

Now, looking at the effect of all of this conduct, Qualcomm’s own documents show that it earned many times the licensing revenue of other major licensors, like Ericsson.

* * *

Mr. Lasinski analyzed whether this enormous difference in royalties could be explained by the relative quality and size of Qualcomm’s portfolio, but that massive disparity was not explained.

Qualcomm’s royalties are disproportionate to those of other SEP licensors and many times higher than any plausible calculation of a FRAND rate.

* * *

The overwhelming direct evidence, some of which is cited here, shows that Qualcomm’s conduct led licensees to pay higher royalties than they would have in fair negotiations.

It is possible, of course, that Lasinski's methodology was flawed; indeed, at trial Qualcomm argued exactly this in challenging his testimony. But it is also possible that, whether his methodology was flawed or not, his underlying data was flawed.

It is impossible to draw this conclusion definitively from the publicly available evidence, but the subsequent revelation that Apple may well have manipulated at least a significant share of the eight agreements that constituted Mr. Lasinski's data certainly makes it more plausible: We now know, following Qualcomm's opening statement in Apple v. Qualcomm, that the stilted set of comparable agreements studied by the FTC's expert happens to be dominated by agreements that Apple may have manipulated to reflect lower-than-FRAND rates.

What is most concerning is that the FTC may have built up its case on such questionable evidence, either by intentionally cherry picking the evidence upon which it relied, or inadvertently because it rested on such a needlessly limited range of data, some of which may have been tainted.

Intentionally or not, the FTC appears to have performed its valuation analysis using a needlessly circumscribed range of comparable agreements and justified its decision to do so using questionable assumptions. This seriously calls into question the strength of the FTC’s case.