
[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

If S.2992—the American Innovation and Choice Online Act or AICOA—were to become law, it would be, at the very least, an incomplete law. By design—and not for good reason, but for political expediency—AICOA is riddled with intentional uncertainty. In theory, the law’s glaring definitional deficiencies are meant to be rectified by “expert” agencies (i.e., the DOJ and FTC) after passage. But in actuality, no such certainty would ever emerge, and the law would stand as a testament to the crass political machinations and absence of rigor that undergird it. Among many other troubling outcomes, this is what the future under AICOA would hold.

Two months ago, the American Bar Association’s (ABA) Antitrust Section published a searing critique of AICOA in which it denounced the bill for being poorly written, vague, and departing from established antitrust-law principles. As Lazar Radic and I discussed in a previous post, what made the ABA’s letter to Congress so eye-opening was that it was penned by a typically staid group with a reputation for independence, professionalism, and ideational heterogeneity.

One of the main issues the ABA flagged in its letter is that the introduction of vague new concepts—like “materially harm competition,” which does not exist anywhere in current antitrust law—into the antitrust mainstream will risk substantial legal uncertainty and produce swathes of unintended consequences.

According to some, however, the bill’s inherent uncertainty is a feature, not a bug. It leaves enough space for specialist agencies to define the precise meaning of key terms without unduly narrowing the scope of the bill ex ante.

In particular, supporters of the bill have pointed to the prospect of agency guidelines under the law to rescue it from the starkest of the fundamental issues identified by the ABA. Section 4 of AICOA requires the DOJ and FTC to issue “agency enforcement guidelines” no later than 270 days after the date of enactment:

outlining policies and practices relating to conduct that may materially harm competition under section 3(a), agency interpretations of the affirmative defenses under section 3(b), and policies for determining the appropriate amount of a civil penalty to be sought under section 3(c).

In pointing to the prospect of guidelines, however, supporters are inadvertently admitting defeat—and proving the ABA’s point: AICOA is not ready for prime time.

This thinking is misguided for at least three reasons:

Guidelines are not rules

As section 4(d) of AICOA recognizes, guidelines are emphatically nonbinding:

The joint guidelines issued under this section do not … operate to bind the Commission, Department of Justice, or any person, State, or locality to the approach recommended in the guidelines.

As such, the value of guidelines in dispelling legal uncertainty is modest, at best.

This is even more so in today’s highly politicized atmosphere, where guidelines can be withdrawn with every swing of the ballot box (we’ve just seen the FTC rescind the Vertical Merger Guidelines it put in place less than a year ago). Given how politicized the issuing agencies themselves have become, it’s a virtual certainty that the guidelines produced in response to AICOA would be steeped in partisan politics and immediately changed with a change in administration, thus providing no more lasting legal certainty than speculation by a member of Congress.

Guidelines are not the appropriate tool to define novel concepts

Regardless of this political reality, however, the mixture of vagueness and novelty inherent in the key concepts that underpin the infringements and affirmative defenses under AICOA—such as “fairness,” “preferencing,” “materiality,” or the “intrinsic” value of a product—undermines the usefulness (and legitimacy) of guidelines.

Indeed, while laws are sometimes purposefully vague—operating as standards rather than prescriptive rules—to allow for more flexibility, the concepts introduced by AICOA don’t even offer any cognizable standards suitable for fine-tuning.

The operative terms of AICOA don’t have definitive meanings under antitrust law, either because they are wholly foreign to accepted antitrust law (as in the case of “self-preferencing”) or because the courts have never agreed on an accepted definition (as in the case of “fairness”). Nor are they technical standards, like pollution thresholds, that are better left to specialized agencies than to legislators to define (by contrast: what is the technical standard for “fairness”?).

Indeed, as Elyse Dorsey has noted, the only certainty that would emerge from this state of affairs is the certainty of pervasive rent-seeking by non-altruistic players seeking to define the rules in their favor.

As we’ve pointed out elsewhere, the purpose of guidelines is to reflect the state of the art in a certain area of antitrust law, not to push the accepted scope of knowledge and practice in a new direction. Using guidelines in that way not only exceeds the FTC’s and DOJ’s powers, but also risks galvanizing opposition from the courts, thereby undermining the utility of adopting guidelines in the first place.

Guidelines can’t fix a fundamentally flawed law

Expecting guidelines to provide sensible, administrable content for the bill sets the bar overly high for guidelines, and unduly low for AICOA.

The alleged harms at the heart of AICOA are foreign to antitrust law, and even to the economic underpinnings of competition policy more broadly. Indeed, as Sean Sullivan has pointed out, the law doesn’t even purport to define “harms,” but only serves to make specific conduct illegal:

Even if the conduct has no effect, it’s made illegal, unless an affirmative defense is raised. And the affirmative defense requires that it doesn’t ‘harm competition.’ But ‘harm competition’ is undefined…. You have to prove that harm doesn’t result, but it’s not really ever made clear what the harm is in the first place.

“Self-preferencing” is not a competitive defect, and simply declaring it to be so does not make it one. As I’ve noted elsewhere:

The notion that platform entry into competition with edge providers is harmful to innovation is entirely speculative. Moreover, it is flatly contrary to a range of studies showing that the opposite is likely true…. The theory of vertical discrimination harm is at odds not only with this platform-specific empirical evidence, it is also contrary to the long-standing evidence on the welfare effects of vertical restraints more broadly …

… [M]andating openness is not without costs, most importantly in terms of the effective operation of the platform and its own incentives for innovation.

Asking agencies with an expertise in competition policy to enact economically sensible guidelines to direct enforcement against such conduct is a fool’s errand. It is a recipe for purely political legislation adopted by competition agencies that does nothing to further their competition missions.

AICOA’s Catch-22 Is Its Own Doing, and Will Be Its Downfall

AICOA’s Catch-22 is that, by drafting the law so vaguely that it needs enforcement guidelines to flesh it out, Congress renders both the statute and those guidelines irrelevant, and misses the point of both legal instruments.

Ultimately, guidelines cannot resolve the fundamental rule-of-law issues raised by the bill and highlighted by the ABA in its letter. To the contrary, they confirm the ABA’s concerns that AICOA is a poorly written and indeterminate bill. Further, the contentious elements of the bill that need clarification are inherently legislative ones that—paradoxically—shouldn’t be left to competition-agency guidelines to elucidate.

The upshot is that any future under AICOA will be one marked by endless uncertainty and the extreme politicization of both competition policy and the agencies that enforce it.

We will learn more in the coming weeks about the fate of the proposed American Innovation and Choice Online Act (AICOA), legislation sponsored by Sens. Amy Klobuchar (D-Minn.) and Chuck Grassley (R-Iowa) that would, among other things, prohibit “self-preferencing” by large digital platforms like Google, Amazon, Facebook, Apple, and Microsoft. But while the bill has already been subject to significant scrutiny, a crucially important topic has been absent from that debate: the measure’s likely effect on startup acquisitions. 

Of course, AICOA doesn’t directly restrict startup acquisitions, but the activities it would restrict most certainly do dramatically affect the incentives that drive many startup acquisitions. If a platform is prohibited from engaging in cross-platform integration of acquired technologies, or if it can’t monetize its purchase by prioritizing its own technology, it may lose the motivation to make a purchase in the first place.

This would be a significant loss. As Dirk Auer, Sam Bowman, and I discuss in a recent article in the Missouri Law Review, acquisitions are arguably the most important component in providing vitality to the overall venture ecosystem:  

Startups generally have two methods for achieving liquidity for their shareholders: IPOs or acquisitions. According to the latest data from Orrick and Crunchbase, between 2010 and 2018 there were 21,844 acquisitions of tech startups for a total deal value of $1.193 trillion. By comparison, according to data compiled by Jay R. Ritter, a professor at the University of Florida, there were 331 tech IPOs for a total market capitalization of $649.6 billion over the same period. As venture capitalist Scott Kupor said in his testimony during the FTC’s hearings on “Competition and Consumer Protection in the 21st Century,” “these large players play a significant role as acquirers of venture-backed startup companies, which is an important part of the overall health of the venture ecosystem.”

Moreover, acquisitions by large incumbents are known to provide a crucial channel for liquidity in the venture capital and startup communities: While at one time the source of the “liquidity events” required to yield sufficient returns to fuel venture capital was evenly divided between IPOs and mergers, “[t]oday that math is closer to about 80 percent M&A and about 20 percent IPOs—[with important implications for any] potential actions that [antitrust enforcers] might be considering with respect to the large platform players in this industry.” As investor and serial entrepreneur Leonard Speiser said recently, “if the DOJ starts going after tech companies for making acquisitions, venture investors will be much less likely to invest in new startups, thereby reducing competition in a far more harmful way.” (emphasis added)

Going after self-preferencing may have exactly the same harmful effect on venture investors and competition. 

It’s unclear exactly how the legislation would be applied in any given context (indeed, this uncertainty is one of the most significant problems with the bill, as the ABA Antitrust Section has argued at length). But AICOA is designed, at least in part, to keep large online platforms in their own lanes—to keep them from “leveraging their dominance” to compete against more politically favored competitors in ancillary markets. Indeed, while covered platforms potentially could defend against application of the law by demonstrating that self-preferencing is necessary to “maintain or substantially enhance the core functionality” of the service, no such defense exists for non-core (whatever that means…) functionality, the enhancement of which through self-preferencing is strictly off limits under AICOA.

As I have written (and so have many, many, many, many others), this is terrible policy on its face. But it is also likely to have significant, adverse, indirect consequences for startup acquisitions, given the enormous number of such acquisitions that are outside the covered platforms’ “core functionality.” 

Just take a quick look at a sample of the largest acquisitions made by Apple, Microsoft, Amazon, and Alphabet, for example. (These are screenshots of the first several acquisitions by size drawn from imperfect lists collected by Wikipedia, but for purposes of casual empiricism they are well-suited to give an idea of the diversity of acquisitions at issue):

Apple:

Microsoft:

Amazon:

Alphabet (Google):

Vanishingly few of these acquisitions go to the “core functionalities” of these platforms. Alphabet’s acquisitions, for example, involve (among many other things) cybersecurity; home automation; cloud computing; wearables, smart glasses, and AR hardware; GPS navigation software; communications security; satellite technology; and social gaming. Microsoft’s acquisitions include companies specializing in video games; social networking; software versioning; drawing software; cable television; cybersecurity; employee engagement; and e-commerce. The technologies and applications involved in acquisitions by Apple and Amazon are similarly varied.

Drilling down a bit, consider the companies Alphabet acquired and put to use in the service of Google Maps:

Which, if any, of these companies would Google have purchased if it knew it would be unable to prioritize Maps in its search results? Would Google have invested more than $1 billion in these companies—and likely significantly more in internal R&D to develop Maps—if it had to speculate whether it would be required (or even be able) to prove someday in the future that prioritizing Google Maps results would enhance its core functionality?

What about Xbox? As noted, AICOA’s terms aren’t perfectly clear, so I’m not certain it would apply to Xbox (is Xbox a “website, online or mobile application, operating system, digital assistant, or online service”?). Here are Microsoft’s video-gaming-related purchases:

The vast majority of these (and all of the acquisitions for which Wikipedia has purchase-price information, totaling some $80 billion of investment) involve video games, not the development of hardware or the functionality of the Xbox platform. Would Microsoft have made these investments if it knew it would be prohibited from prioritizing its own games or exclusively using data gleaned through these games to improve its platform? No one can say for certain, but, at the margin, it is absolutely certain that these self-preferencing bills would make such acquisitions less likely.

Perhaps the most obvious—and concerning—example of the problem arises in the context of Google’s Android platform. Google famously gives Android away for free, of course, and makes its operating system significantly open for bespoke use by all comers. In exchange, Google requires that implementers of the Android OS provide some modicum of favoritism to Google’s revenue-generating products, like Search. For all its uncertainty, there is no question that AICOA’s terms would prohibit this self-preferencing. Intentionally or not, it would thus prohibit the way in which Google monetizes Android and thus hopes to recoup some of the—literally—billions of dollars it has invested in the development and maintenance of Android. 

Here are Google’s Android-related acquisitions:

Would Google have bought Android in the first place (to say nothing of subsequent acquisitions and its massive ongoing investment in Android) if it had been foreclosed from adopting its preferred business model to monetize its investment? In the absence of Google bidding for these companies, would they have earned as much from other potential bidders? Would they even have come into existence at all?

Of course, AICOA wouldn’t preclude Google charging device makers for Android and thus raising the price of mobile devices. But that mechanism may not have been sufficient to support Google’s investment in Android, and it would certainly constrain its ability to compete. Even if rules like those proposed by AICOA didn’t undermine Google’s initial purchase of and investment in Android, it is manifestly unclear how forcing Google to adopt a business model that increases consumer prices and constrains its ability to compete head-to-head with Apple’s iOS ecosystem would benefit consumers. (This excellent series of posts—1, 2, 3, 4—by Dirk Auer on the European Commission’s misguided Android decision discusses in detail the significant costs of prohibiting self-preferencing on Android.)

There are innumerable further examples, as well. In all of these cases, it seems clear not only that an AICOA-like regime would diminish competition and reduce consumer welfare across important dimensions, but also that it would impoverish the startup ecosystem more broadly. 

And that may be an even bigger problem. Even if you think, in the abstract, that it would be better for “Big Tech” not to own these startups, there is a real danger that putting that presumption into force would drive down acquisition prices, kill at least some tech-startup exits, and ultimately imperil the initial financing of tech startups. It should go without saying that this would be a troubling outcome. Yet there is no evidence to suggest that AICOA’s proponents have even considered whether the presumed benefits of the bill would be worth this immense cost.

Responding to a new draft policy statement from the U.S. Patent & Trademark Office (USPTO), the National Institute of Standards and Technology (NIST), and the U.S. Department of Justice, Antitrust Division (DOJ) regarding remedies for infringement of standard-essential patents (SEPs), a group of 19 distinguished law, economics, and business scholars convened by the International Center for Law & Economics (ICLE) submitted comments arguing that the guidance would improperly tilt the balance of power between implementers and inventors, and could undermine incentives for innovation.

As explained in the scholars’ comments, the draft policy statement misunderstands many aspects of patent and antitrust policy. The draft notably underestimates the value of injunctions and the circumstances in which they are a necessary remedy. It also overlooks important features of the standardization process that make opportunistic behavior much less likely than policymakers typically recognize. These points are discussed in even more detail in previous work by ICLE scholars, including here and here.

These first-order considerations are only the tip of the iceberg, however. Patent policy has a huge range of second-order effects that the draft policy statement and policymakers more generally tend to overlook. Indeed, reducing patent protection has more detrimental effects on economic welfare than the conventional wisdom typically assumes. 

The comments highlight three important areas affected by SEP policy that would be undermined by the draft statement. 

  1. First, SEPs are established through an industry-wide, collaborative process that develops and protects innovations considered essential to an industry’s core functioning. This process enables firms to specialize in various functions throughout an industry, rather than vertically integrate to ensure compatibility. 
  2. Second, strong patent protection, especially of SEPs, boosts startup creation via a broader set of mechanisms than is typically recognized. 
  3. Finally, strong SEP protection is essential to safeguard U.S. technology leadership and sovereignty. 

As explained in the scholars’ comments, the draft policy statement would be detrimental on all three of these dimensions. 

To be clear, the comments do not argue that addressing these secondary effects should be a central focus of patent and antitrust policy. Instead, the point is that policymakers must deal with a far more complex set of issues than is commonly recognized; the effects of SEP policy aren’t limited to the allocation of rents among inventors and implementers (as they are sometimes framed in policy debates). Accordingly, policymakers should proceed with caution and resist the temptation to alter by fiat terms that have emerged through careful negotiation among inventors and implementers, and which have been governed for centuries by the common law of contract. 

Collaborative Standard-Setting and Specialization as Substitutes for Proprietary Standards and Vertical Integration

Intellectual property in general—and patents, more specifically—is often described as a means to increase the monetary returns from the creation and distribution of innovations. While this is undeniably the case, this framing overlooks the essential role that IP also plays in promoting specialization throughout the economy.

As Ronald Coase famously showed in his Nobel-winning work, firms must constantly decide whether to perform functions in-house (by vertically integrating), or contract them out to third parties (via the market mechanism). Coase concluded that these decisions hinge on whether the transaction costs associated with the market mechanism outweigh the cost of organizing production internally. Decades later, Oliver Williamson added a key finding to this insight. He found that among the most important transaction costs that firms encounter are those that stem from incomplete contracts and the scope for opportunistic behavior they entail.

This leads to a simple rule of thumb: as the scope for opportunistic behavior increases, firms are less likely to use the market mechanism and will instead perform tasks in-house, leading to increased vertical integration.
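To put that rule of thumb slightly more formally (this is my own stylized gloss on the Coase/Williamson logic, not notation drawn from either author), a firm will contract a function out to the market rather than integrate when

\[ C_{\text{market}} + T(o) < C_{\text{in-house}}, \]

where C_market is the arm’s-length price of the input, C_in-house is the cost of producing it internally, and T(o) is the transaction cost of using the market, which rises with the scope for opportunism o (incomplete contracts, holdup risk, and the like). As o grows, T(o) pushes the left-hand side up, the inequality flips, and vertical integration wins out.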

IP plays a key role in this process. Patents drastically reduce the transaction costs associated with the transfer of knowledge. This gives firms the opportunity to develop innovations collaboratively and without fear that trading partners might opportunistically appropriate their inventions. In turn, this leads to increased specialization. As Robert Merges observes:

Patents facilitate arms-length trade of a technology-intensive input, leading to entry and specialization.

More specifically, it is worth noting that the development and commercialization of inventions can lead to two important sources of opportunistic behavior: patent holdup and patent holdout. As the assembled scholars explain in their comments, while patent holdup has drawn the lion’s share of policymaker attention, empirical and anecdotal evidence suggest that holdout is the more salient problem.

Policies that reduce these costs—especially patent holdout—in a cost-effective manner are worthwhile, with the immediate result that technologies are more widely distributed than would otherwise be the case. Inventors also see more intense and extensive incentives to produce those technologies in the first place.

The Importance of Intellectual Property Rights for Startup Activity

Strong patent rights are essential to monetize innovation, thus enabling new firms to gain a foothold in the marketplace. As the scholars’ comments explain, this is even more true for startup companies. There are three main reasons for this: 

  1. Patent rights protected by injunctions prevent established companies from simply copying innovative startups, with the expectation that they will be able to afford court-set royalties; 
  2. Patent rights can be the basis for securitization, facilitating access to startup funding; and
  3. Patent rights drive venture capital (VC) investment.

While point (1) is widely acknowledged, many fail to recognize that it is particularly important for startup companies. There is abundant literature on firms’ appropriability mechanisms (these are essentially the strategies firms employ to prevent rivals from copying their inventions). The literature tells us that patent protection is far from the only strategy firms use to protect their inventions (see, e.g., here, here, and here).

The alternative appropriability mechanisms identified by these studies tend to be easier to implement for well-established firms. For instance, many firms earn returns on their inventions by incorporating them into physical products that cannot be reverse engineered. This is much easier for firms that already have a large industry presence and advanced manufacturing capabilities.  In contrast, startup companies—almost by definition—must outsource production.

Second, property rights could drive startup activity through the collateralization of IP. By offering security interests in patents, trademarks, and copyrights, startups with little or no tangible assets can obtain funding without surrendering significant equity. As Gaétan de Rassenfosse puts it:

SMEs can leverage their IP to facilitate R&D financing…. [P]atents materialize the value of knowledge stock: they codify the knowledge and make it tradable, such that they can be used as collaterals. Recent theoretical evidence by Amable et al. (2010) suggests that a systematic use of patents as collateral would allow a high growth rate of innovations despite financial constraints.

Finally, there is reason to believe intellectual-property protection is an important driver of venture capital activity. Beyond simply enabling firms to earn returns on their investments, patents might signal to potential investors that a company is successful and/or valuable. Empirical research by Hsu and Ziedonis, for instance, supports this hypothesis:

[W]e find a statistically significant and economically large effect of patent filings on investor estimates of start-up value…. A doubling in the patent application stock of a new venture [in] this sector is associated with a 28 percent increase in valuation, representing an upward funding-round adjustment of approximately $16.8 million for the average start-up in our sample.
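A quick back-of-the-envelope calculation on the figures in that quote (my arithmetic, not a number reported by the study): if a 28 percent increase in valuation corresponds to roughly $16.8 million for the average start-up, the implied average valuation in the sample is

\[ \$16.8\text{M} / 0.28 \approx \$60\text{M}, \]

which gives a sense of how economically meaningful the measured patent-signal effect is relative to the typical venture in the authors’ data.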

In short, intellectual property can stimulate startup activity through various mechanisms. There is thus a sense that, at the margin, weakening patent protection will make it harder for entrepreneurs to embark on new business ventures.

The Role of Strong SEP Rights in Guarding Against China’s ‘Cyber Great Power’ Ambitions 

The United States, due in large measure to its strong intellectual-property protections, is a nation of innovators, and its production of IP is one of its most important comparative advantages. 

IP and its legal protections become even more important, however, when dealing with international jurisdictions, like China, that don’t offer similar levels of legal protection. Policies that make it harder for patent holders to obtain injunctions hand the advantage to licensees and implementers in the short term, because they are able to use patented technology without having to engage in negotiations to pay the full market price.

In the case of many SEPs—particularly those in the telecommunications sector—a great many patent holders are U.S.-based, while the lion’s share of implementers are Chinese. The anti-injunction policy espoused in the draft policy statement thus amounts to a subsidy to Chinese infringers of U.S. technology.

At the same time, China routinely undermines U.S. intellectual property protections through its industrial policy. The government’s stated goal is to promote “fair and reasonable” international rules, but it is clear that China stretches its power over intellectual property around the world by granting “anti-suit injunctions” on behalf of Chinese smartphone makers, designed to curtail enforcement of foreign companies’ patent rights.

This is part of the Chinese government’s larger approach to industrial policy, which seeks to expand Chinese power in international trade negotiations and in global standards bodies. As one Chinese Communist Party official put it:

Standards are the commanding heights, the right to speak, and the right to control. Therefore, the one who obtains the standards gains the world.

Insufficient protections for intellectual property will hasten China’s objective of dominating collaborative standard development in the medium to long term. Simultaneously, this will engender a switch to greater reliance on proprietary, closed standards rather than collaborative, open standards. These harmful consequences are magnified in the context of the global technology landscape, and in light of China’s strategic effort to shape international technology standards. Chinese companies, directed by their government authorities, will gain significant control of the technologies that will underpin tomorrow’s digital goods and services.

The scholars convened by ICLE were not alone in voicing these fears. David Teece (also a signatory to the ICLE-convened comments), for example, surmises in his comments that: 

The US government, in reviewing competition policy issues that might impact standards, therefore needs to be aware that the issues at hand have tremendous geopolitical consequences and cannot be looked at in isolation…. Success in this regard will promote competition and is our best chance to maintain technological leadership—and, along with it, long-term economic growth and consumer welfare and national security.

Similarly, comments from the Center for Strategic and International Studies (signed by, among others, former USPTO Director Andrei Iancu, former NIST Director Walter Copan, and former Deputy Secretary of Defense John Hamre) argue that the draft policy statement would benefit Chinese firms at U.S. firms’ expense:

What is more, the largest short-term and long-term beneficiaries of the 2021 Draft Policy Statement are firms based in China. Currently, China is the world’s largest consumer of SEP-based technology, so weakening protection of American owned patents directly benefits Chinese manufacturers. The unintended effect of the 2021 Draft Policy Statement will be to support Chinese efforts to dominate critical technology standards and other advanced technologies, such as 5G. Put simply, devaluing U.S. patents is akin to a subsidized tech transfer to China.

With Chinese authorities joining standardization bodies and increasingly claiming jurisdiction over F/RAND disputes, there should be careful reevaluation of the ways the draft policy statement would further weaken the United States’ comparative advantage in IP-dependent technological innovation. 

Conclusion

In short, weakening patent protection could have detrimental ramifications that are routinely overlooked by policymakers. These include increasing inventors’ incentives to vertically integrate rather than develop innovations collaboratively; reducing startup activity (especially when combined with antitrust enforcers’ newfound proclivity to challenge startup acquisitions); and eroding America’s global technology leadership, particularly with respect to China.

For these reasons (and others), the text of the draft policy statement should be reconsidered and either revised substantially to better reflect these concerns or withdrawn entirely. 

The signatories to the comments are:

Alden F. Abbott, Senior Research Fellow, Mercatus Center, George Mason University; Former General Counsel, U.S. Federal Trade Commission
Jonathan Barnett, Torrey H. Webb Professor of Law, University of Southern California
Ronald A. Cass, Dean Emeritus, School of Law, Boston University; Former Commissioner and Vice-Chairman, U.S. International Trade Commission
Giuseppe Colangelo, Jean Monnet Chair in European Innovation Policy and Associate Professor of Competition Law & Economics, University of Basilicata and LUISS (Italy)
Richard A. Epstein, Laurence A. Tisch Professor of Law, New York University
Bowman Heiden, Executive Director, Tusher Initiative at the Haas School of Business, University of California, Berkeley
Justin (Gus) Hurwitz, Professor of Law, University of Nebraska
Thomas A. Lambert, Wall Chair in Corporate Law and Governance, University of Missouri
Stan J. Liebowitz, Ashbel Smith Professor of Economics, University of Texas at Dallas
John E. Lopatka, A. Robert Noll Distinguished Professor of Law, Penn State University
Keith Mallinson, Founder and Managing Partner, WiseHarbor
Geoffrey A. Manne, President and Founder, International Center for Law & Economics
Adam Mossoff, Professor of Law, George Mason University
Kristen Osenga, Austin E. Owen Research Scholar and Professor of Law, University of Richmond
Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics, Chapman University; Nobel Laureate in Economics (2002)
Daniel F. Spulber, Elinor Hobbs Distinguished Professor of International Business, Northwestern University
David J. Teece, Thomas W. Tusher Professor in Global Business, University of California, Berkeley
Joshua D. Wright, University Professor of Law, George Mason University; Former Commissioner, U.S. Federal Trade Commission
John M. Yun, Associate Professor of Law, George Mason University; Former Acting Deputy Assistant Director, Bureau of Economics, U.S. Federal Trade Commission

I am delighted to announce that Alden Abbott has returned to TOTM as a regular blogger following his recent stint as General Counsel of the FTC. You can find his first post since his return, on the NCAA v. Alston case, here.

Regular readers know Alden well, of course. Not only has he long been one of the most prolific and insightful thinkers on the antitrust scene, but from 2014 until his departure when he (re)joined the FTC in 2018, he had done some of his most prolific and insightful thinking here at TOTM.

As I wrote when Alden joined TOTM in 2014:

Alden has been at the center of the US antitrust universe for most of his career. When he retired from the FTC in 2012, he had served as Deputy Director of the Office of International Affairs for three years. Before that he was Director of Policy and Coordination, FTC Bureau of Competition; Acting General Counsel, Department of Commerce; Chief Counsel, National Telecommunications and Information Administration; Senior Counsel, Office of Legal Counsel, DOJ; and Special Assistant to the Assistant Attorney General for Antitrust, DOJ.

For those who may not know, Alden left the FTC earlier this year and is now a Senior Research Fellow at the Mercatus Center.

Welcome back, Alden!

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Geoffrey A. Manne is the president and founder of the International Center for Law and Economics.]

I’m delighted to add my comments to the chorus of voices honoring Ajit Pai’s remarkable tenure at the Federal Communications Commission. I’ve known Ajit longer than most. We were classmates in law school … let’s just say “many” years ago. Among the other symposium contributors I know of only one—fellow classmate, Tom Nachbar—who can make a similar claim. I wish I could say this gives me special insight into his motivations, his actions, and the significance of his accomplishments, but really it means only that I have endured his dad jokes and interminable pop-culture references longer than most. 

But I can say this: Ajit has always stood out as a genuinely humble, unfailingly gregarious, relentlessly curious, and remarkably intelligent human being, and he deployed these characteristics to great success at the FCC.   

Ajit’s tenure at the FCC was marked by an abiding appreciation for the importance of competition, both as a guiding principle for new regulations and as a touchstone to determine when to challenge existing ones. As others have noted (and as we have written elsewhere), that approach was reflected significantly in the commission’s Restoring Internet Freedom Order, which made competition—and competition enforcement by the antitrust agencies—the centerpiece of the agency’s approach to net neutrality. But I would argue that perhaps Chairman Pai’s greatest contribution to bringing competition to the forefront of the FCC’s mandate came in his work on media modernization.

Fairly early in his tenure at the commission, Ajit raised concerns with the FCC’s failure to modernize its media-ownership rules. In response to the FCC’s belated effort to initiate the required 2010 and 2014 Quadrennial Reviews of those rules, then-Commissioner Pai noted that the commission had abdicated its responsibility under the statute to promote competition. Not only was the FCC proposing to maintain a host of outdated existing rules, but it was also moving to impose further constraints (through new limitations on the use of Joint Sales Agreements (JSAs)). As Ajit noted, such an approach was antithetical to competition:

In smaller markets, the choice is not between two stations entering into a JSA and those same two stations flourishing while operating completely independently. Rather, the choice is between two stations entering into a JSA and at least one of those stations’ viability being threatened. If stations in these smaller markets are to survive and provide many of the same services as television stations in larger markets, they must cut costs. And JSAs are a vital mechanism for doing that.

The efficiencies created by JSAs are not a luxury in today’s digital age. They are necessary, as local broadcasters face fierce competition for viewers and advertisers.

Under then-Chairman Tom Wheeler, the commission voted to adopt the Quadrennial Review in 2016, issuing rules that largely maintained the status quo and, at best, paid tepid lip service to the massive changes in the competitive landscape. As Ajit wrote in dissent:

The changes to the media marketplace since the FCC adopted the Newspaper-Broadcast Cross-Ownership Rule in 1975 have been revolutionary…. Yet, instead of repealing the Newspaper-Broadcast Cross-Ownership Rule to account for the massive changes in how Americans receive news and information, we cling to it.

And over the near-decade since the FCC last finished a “quadrennial” review, the video marketplace has transformed dramatically…. Yet, instead of loosening the Local Television Ownership Rule to account for the increasing competition to broadcast television stations, we actually tighten that regulation.

And instead of updating the Local Radio Ownership Rule, the Radio-Television Cross-Ownership Rule, and the Dual Network Rule, we merely rubber-stamp them.

The more the media marketplace changes, the more the FCC’s media regulations stay the same.

As Ajit also accurately noted at the time:

Soon, I expect outside parties to deliver us to the denouement: a decisive round of judicial review. I hope that the court that reviews this sad and total abdication of the administrative function finds, once and for all, that our media ownership rules can no longer stay stuck in the 1970s consistent with the Administrative Procedure Act, the Communications Act, and common sense. The regulations discussed above are as timely as “rabbit ears,” and it’s about time they go the way of those relics of the broadcast world. I am hopeful that the intervention of the judicial branch will bring us into the digital age.

And, indeed, just this week the case was argued before the Supreme Court.

In the interim, however, Ajit became Chairman of the FCC. And in his first year in that capacity, he took up a reconsideration of the 2016 Order. This 2017 Order on Reconsideration is the one that finally came before the Supreme Court. 

Consistent with his unwavering commitment to promote media competition—and no longer a minority commissioner shouting into the wind—Chairman Pai put forward a proposal substantially updating the media-ownership rules to reflect the dramatically changed market realities facing traditional broadcasters and newspapers:

Today we end the 2010/2014 Quadrennial Review proceeding. In doing so, the Commission not only acknowledges the dynamic nature of the media marketplace, but takes concrete steps to update its broadcast ownership rules to reflect reality…. In this Order on Reconsideration, we refuse to ignore the changed landscape and the mandates of Section 202(h), and we deliver on the Commission’s promise to adopt broadcast ownership rules that reflect the present, not the past. Because of our actions today to relax and eliminate outdated rules, broadcasters and local newspapers will at last be given a greater opportunity to compete and thrive in the vibrant and fast-changing media marketplace. And in the end, it is consumers that will benefit, as broadcast stations and newspapers—those media outlets most committed to serving their local communities—will be better able to invest in local news and public interest programming and improve their overall service to those communities.

Ajit’s approach was certainly deregulatory. But more importantly, it was realistic, well-reasoned, and responsive to changing economic circumstances. Unlike most of his predecessors, Ajit was unwilling to accede to the torpor of repeated judicial remands (on dubious legal grounds, as we noted in our amicus brief urging the Court to grant certiorari in the case), permitting facially and wildly outdated rules to persist in the face of massive and obvious economic change. 

Like Ajit, I am not one to advocate regulatory action lightly, especially in the (all-too-rare) face of judicial review that suggests an agency has exceeded its discretion. But in this case, the need for dramatic rule change—here, to deregulate—was undeniable. The only abuse of discretion was on the part of the court, not the agency. As we put it in our amicus brief:

[T]he panel vacated these vital reforms based on mere speculation that they would hinder minority and female ownership, rather than grounding its action on any record evidence of such an effect. In fact, the 2017 Reconsideration Order makes clear that the FCC found no evidence in the record supporting the court’s speculative concern.

…In rejecting the FCC’s stated reasons for repealing or modifying the rules, absent any evidence in the record to the contrary, the panel substituted its own speculative concerns for the judgment of the FCC, notwithstanding the FCC’s decades of experience regulating the broadcast and newspaper industries. By so doing, the panel exceeded the bounds of its judicial review powers under the APA.

Key to Ajit’s conclusion that competition in local media markets could be furthered by permitting more concentration was his awareness that the relevant market for analysis couldn’t be limited to traditional media outlets like broadcasters and newspapers; it must include the likes of cable networks, streaming video providers, and social-media platforms, as well. As Ajit put it in a recent speech:

The problem is a fundamental refusal to grapple with today’s marketplace: what the service market is, who the competitors are, and the like. When assessing competition, some in Washington are so obsessed with the numerator, so to speak—the size of a particular company, for instance—that they’ve completely ignored the explosion of the denominator—the full range of alternatives in media today, many of which didn’t exist a few years ago.

When determining a particular company’s market share, a candid assessment of the denominator should include far more than just broadcast networks or cable channels. From any perspective (economic, legal, or policy), it should include any kinds of media consumption that consumers consider to be substitutes. That could be TV. It could be radio. It could be cable. It could be streaming. It could be social media. It could be gaming. It could be still something else. The touchstone of that denominator should be “what content do people choose today?”, not “what content did people choose in 1975 or 1992, and how can we artificially constrict our inquiry today to match that?”

For some reason, this simple and seemingly undeniable conception of the market escapes virtually all critics of Ajit’s media-modernization agenda. Indeed, even Justice Stephen Breyer in this week’s oral argument seemed baffled by the notion that more concentration could entail more competition:

JUSTICE BREYER: I’m thinking of it solely as a — the anti-merger part, in — in anti-merger law, merger law generally, I think, has a theory, and the theory is, beyond a certain point and other things being equal, you have fewer companies in a market, the harder it is to enter, and it’s particularly harder for smaller firms. And, here, smaller firms are heavily correlated or more likely to be correlated with women and minorities. All right?

The opposite view, which is what the FCC has now chosen, is — is they want to move or allow to be moved towards more concentration. So what’s the theory that that wouldn’t hurt the minorities and women or smaller businesses? What’s the theory the opposite way, in other words? I’m not asking for data. I’m asking for a theory.

Of course, as Justice Breyer should surely know—and as I know Ajit Pai knows—counting the number of firms in a market is a horrible way to determine its competitiveness. In this case, the competition from internet media platforms, particularly for advertising dollars, is immense. A regulatory regime that prohibits traditional local-media outlets from forging efficient joint ventures or from obtaining the scale necessary to compete with those platforms does not further competition. Even if such a rule might temporarily result in more media outlets, eventually it would result in no media outlets, other than the large online platforms. The basic theory behind the Reconsideration Order—to answer Justice Breyer—is that outdated government regulation imposes artificial constraints on the ability of local media to adopt the organizational structures necessary to compete. Removing those constraints may not prove a magic bullet that saves local broadcasters and newspapers, but allowing the rules to remain absolutely ensures their demise. 

Ajit’s commitment to furthering competition in telecommunications markets remained steadfast throughout his tenure at the FCC. From opposing restrictive revisions to the agency’s spectrum screen to dissenting from the effort to impose a poorly conceived and retrograde regulatory regime on set-top boxes, to challenging the agency’s abuse of its merger review authority to impose ultra vires regulations, to, of course, rolling back his predecessor’s unsupportable Title II approach to net neutrality—and on virtually every issue in between—Ajit sought at every turn to create a regulatory backdrop conducive to competition.

Tom Wheeler, Pai’s predecessor at the FCC, claimed that his personal mantra was “competition, competition, competition.” His greatest legacy, in that regard, was in turning over the agency to Ajit.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics).]

There has been much (admittedly important) discussion of the economic woes of mass quarantine to thwart the spread and “flatten the curve” of the virus and its health burdens — as well as some extremely interesting discussion of the long-term health woes of quarantine and the resulting economic downturn: see, e.g., previous work by Christopher Ruhm suggesting mortality rates may improve during economic downturns, and this thread on how that might play out differently in the current health crisis.

But there is perhaps insufficient attention being paid to the more immediate problem of medical resource scarcity to treat large, localized populations of acutely sick people — something that will remain a problem for some time in places like New York, no matter how successful we are at flattening the curve. 

Yet the fact that we may have failed to prepare adequately for the current emergency does not mean that we can’t improve our ability to respond to it now, and build up our ability to respond to subsequent emergencies — both future, localized outbreaks of COVID-19 and other medical emergencies more broadly.

In what follows I lay out the outlines of a proposal for an OPTN (Organ Procurement and Transplantation Network) analogue for allocating emergency medical resources. In order to make the idea more concrete (and because no doubt there is a limit to the types of medical resources for which such a program would be useful or necessary), let’s call it the VPAN — Ventilator Procurement and Allocation Network.

As quickly as possible in order to address the current crisis — and definitely with enough speed to address the next crisis — we should develop a program to collect relevant data and deploy medical resources where they are most needed, using such data, wherever possible, to enable deployment before shortages become the enormous problem they are today.

Data and information are important tools for mitigating emergencies

Hal’s post, especially in combination with Julian’s, offers a really useful suggestion for using modern information technology to help mitigate one of the biggest problems of the current crisis: The ability to return to economic activity (and a semblance of normalcy) as quickly as possible.

What I like most about his idea (and, again, Julian’s) is its incremental approach: We don’t have to wait until it’s safe for everyone to come outside in order for some people to do so. And, properly collected, assessed, and deployed, information is a key part of making that possible for more and more people every day.

Here I want to build on Hal’s idea to suggest another — perhaps even more immediately crucial — use of data to alleviate the COVID-19 crisis: The allocation of scarce medical resources.

In the current crisis, the “what” of this data is apparent: it is the testing data described by Julian in his post, and implemented in digital form by Hal in his. Thus, whereas Hal’s proposal contemplates using this data solely to allow proprietors (public transportation, restaurants, etc.) to admit entry to users, my proposal contemplates something more expansive: the provision of Hal’s test-verification vendors’ data to a centralized database in order to use it to assess current medical resource needs and to predict future needs.

The apparent ventilator availability crisis

As I have learned at great length from a friend whose spouse is an ICU doctor on the front lines, the current ventilator scarcity in New York City is worrisome (from a personal email, edited slightly for clarity):

When doctors talk about overwhelming a medical system, and talk about making life/death decisions, often they are talking about ventilators. A ventilator costs somewhere between $25K to $50K. Not cheap, but not crazy expensive. Most of the time these go unused, so hospitals have not stocked up on them, even in first-rate medical systems. Certainly not in the US, where equipment has to get used or the hospital does not get reimbursed for the purchase.

With a bad case of this virus you can put somebody — the sickest of the sickest — on one of those for three days and many of them don’t die. That frames a brutal capacity issue in a local area. And that is what has happened in Italy. They did not have enough ventilators in specific cities where the cases spiked. The mortality rates were much higher solely due to lack of these machines. Doctors had to choose who got on the machine and who did not. When you read these stories about a choice of life and death, that could be one reason for it.

Now the brutal part: This is what NYC might face soon. Faster than expected, by the way. Maybe they will ship patients to hospitals in other parts of NY state, and in NJ and CT. Maybe they can send them to the V.A. hospitals. Those are the options for how they hope to avoid this particular capacity issue. Maybe they will flatten the curve just enough with all the social distancing. Hard to know just now. But right now the doctors are pretty scared, and they are planning for the worst.

A recent PBS Report describes the current ventilator situation in the US:

A 2018 analysis from the Johns Hopkins University Center for Health Security estimated we have around 160,000 ventilators in the U.S. If the “worst-case scenario” were to come to pass in the U.S., “there might not be” enough ventilators, Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, told CNN on March 15.

“If you don’t have enough ventilators, that means [obviously] that people who need it will not be able to get it,” Fauci said. He stressed that it was most important to mitigate the virus’ spread before it could overwhelm American health infrastructure.

Reports say that the American Hospital Association believes almost 1 million COVID-19 patients in the country will require a ventilator. Not every patient will require ventilation at the same time, but the numbers are still concerning. Dr. Daniel Horn, a physician at Massachusetts General Hospital in Boston, warned in a March 22 editorial in The New York Times that “There simply will not be enough of these machines, especially in major cities.”

The recent report of 9,000 COVID-19-related deaths in Italy brings the ventilator scarcity crisis into stark relief: There is little doubt that a substantial number of these deaths stem from the unavailability of key medical resources, including, most importantly, ventilators.  

Medical resource scarcity in the current crisis is a drastic problem. And without significant efforts to ameliorate it, it is likely to get worse before it gets better. 

Using data to allocate scarce resources: The basic outlines of a proposed “Ventilator Procurement and Allocation Network”

But that doesn’t mean that the scarce resources we do have can’t be better allocated. As the PBS story quoted above notes, there are some 160,000 ventilators in the US. While that may not be enough in the aggregate, it’s considerably more than are currently needed in, say, New York City — and a great number of them are surely not currently being used, nor likely immediately to need to be used. 

The basic outline of the idea for redistributing these resources is fairly simple: 

  1. First, register all of the US’s existing ventilators in a centralized database. 
  2. Second (using a system like the one Hal describes), collect and update in real time the relevant test results, contact tracing, demographic, and other epidemiological data and input it into a database.
  3. Third, analyze this data using one or more compartmental models (or more targeted, virus-specific models) — (NB: I am the furthest thing from an epidemiologist, so I make no claims about how best to do this; the link above, e.g., is merely meant to be illustrative and not a recommendation) — to predict the demand for ventilators at various geographic levels, ranging from specific hospitals to counties or states. In much the same way, allocation of organs in the OPTN is based on a set of “allocation calculators” (which in turn are intended to implement the “Final Rule” adopted by HHS to govern transplant organ allocation decisions).   
  4. Fourth, ask facilities in low-expected-demand areas to send their unused (or excess above the level required to address “normal” demand) ventilators to those in high-expected-demand areas, with the expectation that they will be consistently reallocated across all hospitals and emergency care facilities according to the agreed-upon criteria. Of course, the allocation “algorithm” would be more complicated than this (as is the HHS Final Rule for organ allocation). But in principle this would be the primary basis for allocation. (A toy sketch of how steps 2–4 might fit together follows this list.)
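To make the outline above a bit more concrete, here is a minimal, purely illustrative sketch (in Python) of how steps 2–4 might fit together: a toy compartmental forecast of ventilator demand feeding a greedy surplus-to-shortfall reallocation. Every region name, parameter, and number below is invented for illustration; a real VPAN would use vetted epidemiological models and the negotiated allocation criteria discussed above, not this toy logic.

# Illustrative only: a toy version of steps 1-4 of the VPAN outline, with
# invented parameters and region names. Not a real allocation algorithm.

from dataclasses import dataclass

@dataclass
class Region:
    name: str
    population: int
    infected: int          # current estimated infections (step 2 data)
    registered_vents: int  # ventilators registered in the national database (step 1)
    baseline_need: int     # ventilators required for "normal", non-COVID demand

def projected_peak_demand(r: Region, beta: float = 0.25, gamma: float = 0.1,
                          vent_rate: float = 0.05, days: int = 14) -> int:
    """Step 3: crude SIR-style projection of peak simultaneous ventilator
    demand over a short planning horizon. beta/gamma are assumed transmission
    and recovery rates; vent_rate is the assumed share of infected patients
    who need a ventilator at any one time."""
    s, i = float(r.population - r.infected), float(r.infected)
    peak_i = i
    for _ in range(days):
        new_infections = beta * s * i / r.population
        recoveries = gamma * i
        s, i = s - new_infections, i + new_infections - recoveries
        peak_i = max(peak_i, i)
    return int(peak_i * vent_rate)

def reallocate(regions: list[Region]) -> list[tuple[str, str, int]]:
    """Step 4: move surplus ventilators (those above baseline plus projected
    peak need) from donor regions to regions with projected shortfalls,
    serving the largest shortfalls first."""
    need = {r.name: r.baseline_need + projected_peak_demand(r) - r.registered_vents
            for r in regions}
    donors = sorted(((n, -d) for n, d in need.items() if d < 0),
                    key=lambda x: x[1], reverse=True)
    takers = sorted(((n, d) for n, d in need.items() if d > 0),
                    key=lambda x: x[1], reverse=True)
    transfers = []
    for taker, shortfall in takers:
        for j, (donor, surplus) in enumerate(donors):
            if shortfall <= 0 or surplus <= 0:
                continue
            qty = min(surplus, shortfall)
            transfers.append((donor, taker, qty))
            donors[j] = (donor, surplus - qty)
            shortfall -= qty
    return transfers

if __name__ == "__main__":
    regions = [
        Region("NYC", 8_400_000, 40_000, registered_vents=5_000, baseline_need=1_200),
        Region("Upstate", 6_000_000, 2_000, registered_vents=3_500, baseline_need=800),
        Region("Midwest", 10_000_000, 1_500, registered_vents=6_000, baseline_need=1_500),
    ]
    for donor, taker, qty in reallocate(regions):
        print(f"ship {qty} ventilators from {donor} to {taker}")

Even this toy version makes the basic design choice visible: forecasted need, not current inventory or geography, drives where the machines go.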

Not surprisingly, some guidelines for the allocation of ventilators in such emergencies already exist — like New York’s Ventilator Allocation Guidelines for triaging ventilators during an influenza pandemic. But such guidelines address the protocols for each facility to use in determining how to allocate its own scarce resources; they do not contemplate the ability to alleviate shortages in the first place by redistributing ventilators across facilities (or cities, states, etc.).

I believe that such a system — like the OPTN — could largely work on a voluntary basis. Of course, I’m quick to point out that the OPTN is a function of a massive involuntary and distortionary constraint: the illegality of organ sales. But I suspect that a crisis like the one we’re currently facing is enough to engender much the same sort of shortage (as if such a constraint were in place with respect to the use of ventilators), and thus that a similar system would be similarly useful. If not, of course, it’s possible that the government could, in emergency situations, actually commandeer privately-owned ventilators in order to effectuate the system. I leave for another day the consideration of the merits and defects of such a regime.

Of course, the system need not rely on uncompensated, purely voluntary participation. There could be any number of feasible means of inducing hospitals that have unused ventilators to put their surpluses into the allocation network, presumably involving some sort of cash or other compensation. Or perhaps, if and when such a system were expanded to include other medical resources, it might involve moving donor hospitals up the queue for other scarce resources they need that aren’t currently in crisis-level demand. Surely there must be equipment that a New York City hospital has in relative surplus that a small-town hospital covets.

But the key point is this: It doesn’t make sense to produce and purchase enough ventilators so that every hospital in the country can simultaneously address extremely rare peak demands. Doing so would be extraordinarily — and almost always needlessly — expensive. And emergency preparedness is never about ensuring that there are no shortages in the worst-case scenario; it’s about making a minimax calculation (as odious as those are) — i.e., minimizing the maximal cost/risk, not eliminating risk entirely. (For a literature review of emergency logistics in the context of large-scale disasters, see, e.g., here.)
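
To put the minimax idea in rough formal terms (my own gloss, not a formula drawn from the linked literature): letting C(x, s) denote the combined cost of adopting preparedness plan x and then responding to emergency scenario s, drawn from the set of plausible scenarios S, the objective is approximately

    \min_{x} \; \max_{s \in S} \; C(x, s)

that is, choose the plan whose worst-case cost is lowest, rather than the (generally unattainable, and in any event ruinously expensive) plan that eliminates shortages in every scenario.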

But nor does it make sense — as a policy matter — to allocate the new ventilators that will be produced in response to the current crisis solely on the basis of current demand. The epidemiological externalities of the current pandemic are substantial, and there is little reason to think that currently over-taxed emergency facilities — or even those preparing for their own expected demand — will make procurement decisions that reflect the optimal national (let alone global) allocation of such resources. A system like the one I outline here would effectively harness private, constrained procurement decisions to serve the broader goal of optimally allocating scarce resources in the face of those epidemiological externalities.

Indeed — and importantly — such a program allows the government to supplement existing and future public and private procurement decisions to ensure an overall optimal level of supply (and, of course, government-owned ventilators — 10,000 of which already exist in the Strategic National Stockpile — would similarly be put into the registry and deployed using the same criteria). Meanwhile, it would allow private facilities to confront emergency scenarios like the current one with far more resources than it would ever make sense for any given facility to have on hand in normal times.

Some caveats

There are, as always, caveats. First, such a program relies on the continued, effective functioning of transportation networks. If any given emergency were to disrupt these — and surely some would — the program would not necessarily function as planned. Of course, some of this can be mitigated by caching emergency equipment in key locations, and, over the course of an emergency, regularly redistributing those caches to facilitate expected deployments as the relevant data comes in. But, to be sure, at the end of the day such a program depends on the ability to transport ventilators.

In addition, there will always be the risk that emergency needs swamp even the aggregate available resources simultaneously (as may yet occur during the current crisis). But at the limit there is nothing that can be done about such an eventuality: Short of having enough ventilators on hand so that every needy person in the country can use one essentially simultaneously, there will always be the possibility that some level of demand will outpace our resources. But even in such a situation — where allocation of resources is collectively guided by epidemiological (or, in the case of other emergencies, other relevant) criteria — the system will work to mitigate the likely overburdening of resources, and ensure that overall resource allocation is guided by medically relevant criteria, rather than merely the happenstance of geography, budget constraints, storage space, or the like.     

Finally, no doubt a host of existing regulations would make such a program difficult or impossible. Obviously, these should be rescinded. One set of policy concerns is worth noting: privacy. There is an inherent conflict between strong data privacy, in which decisions about the sharing of information belong to each individual, and the data needs of combating an epidemic, in which each person’s privately optimal level of data sharing may result in a socially sub-optimal level of shared data. To the extent that HIPAA or other privacy regulations would stand in the way of a program like this, it seems singularly important to relax them. Much of the relevant data cannot be efficiently collected on an opt-in basis (as is easily done, by contrast, for the OPTN). Certainly, appropriate safeguards should be put in place (particularly with respect to the ability of government agencies and law enforcement to access the data). But an individual’s idiosyncratic desire to constrain the sharing of personal data in this context seems manifestly less important than the benefits of, at the very least, a default rule that the relevant data be shared for these purposes.

Appropriate standards for emergency preparedness policy generally

Importantly, such a plan would have broader applicability beyond ventilators and the current crisis. And this is a key aspect of addressing the problem: avoiding a myopic focus on the current emergency at the expense of a more clear-eyed emergency-preparedness plan.

It’s important to be thinking not only about the current crisis but also about the next emergency. But it’s equally important not to let political point-scoring and a bias in favor of focusing on the seen over the unseen co-opt any such efforts. A proper assessment entails the following considerations, surely among others (hat tip to Ron Cass for bringing most of these insights to my attention):

  1. Arguably we are overweighting health and safety concerns with respect to COVID-19 compared to our assessments in other areas (such as ordinary flu (on which see this informative thread by Anup Malani), highway safety, heart & coronary artery diseases, etc.). That’s inevitable when one particular concern is currently so omnipresent and so disruptive. But it is important that we not let our preparations for future problems focus myopically on this cause, because the next crisis may be something entirely different. 
  2. Nor is it reasonable to expect that we would ever have been (or be in the future) fully prepared for a global pandemic. It may not be an “unknown unknown,” but it is impossible to prepare for all possible contingencies, and simply not sensible to prepare fully for such rare and difficult-to-predict events.
  3. That said, we also shouldn’t be surprised that we’re seeing more frequent global pandemics (a function of broader globalization), and there’s little reason to think that we won’t continue to do so. It makes sense to be optimally prepared for such eventualities, and if this one has shown us anything, it’s that our ability to allocate medical resources that are made suddenly scarce by a widespread emergency is insufficient. 
  4. But rather than overreact to such crises — which is difficult, given that overreaction typically aligns with the private incentives of key decision makers, the media, and many in the “chattering class” — we should take a broader, more public-focused view of our response. Moreover, political and bureaucratic incentives not only produce overreactions to visible crises, they also undermine the appropriate preparation for such crises in the future.
  5. Thus, we should create programs that identify and mobilize generically useful emergency equipment not likely to be made obsolete within a short period and likely to be needed whatever the source of the next emergency. In other words, we should continue to focus the bulk of our preparedness on things like quickly deployable ICU facilities, ventilators, and clean blood supplies — not, as we may be wrongly inclined to do given the salience of the current crisis, primarily on specially targeted drugs and test kits. Our predictive capacity for our future demand of more narrowly useful products is too poor to justify substantial investment.
  6. Given the relative likelihood of another pandemic, generic preparedness certainly includes the ability to inhibit overly fast spread of a disease that can clog critical health care facilities. This isn’t disease-specific (or, that is, while the specific rate and contours of infection are specific to each disease, relatively fast and widespread contagion is what causes any such disease to overtax our medical resources, so if we’re preparing for a future virus-related emergency, we’re necessarily preparing for a disease that spreads quickly and widely).

Because the next emergency isn’t necessarily going to be — and perhaps isn’t even likely to be — a pandemic, our preparedness should not be limited to pandemic preparedness. This means, as noted above, overcoming the political and other incentives to focus myopically on the current problem even when nominally preparing for the next one. But doing so is difficult, and requires considerable political will and leadership. It’s hard to conceive of our current federal leadership being up to the task, but it’s certainly not the case that our current problems are entirely of this administration’s making. All governments spend too much time and attention solving — and regulating — the most visible problems, whether doing so is socially optimal or not.   

Thus, in addition to (1) providing for the efficient and effective use of data to allocate emergency medical resources (e.g., as described above), and (2) ensuring that our preparedness centers primarily on generically useful emergency equipment, our overall response should also (3) recognize and correct the way current regulatory regimes also overweight visible adverse health effects and inhibit competition and adaptation by industry and those utilizing health services, and (4) make sure that the economic and health consequences of emergency and regulatory programs (such as the current quarantine) are fully justified and optimized.

A proposal like the one I outline above would, I believe, be consistent with these considerations and enable more effective medical crisis response in general.

The 2020 Draft Joint Vertical Merger Guidelines:

What’s in, what’s out — and do we need them anyway?

February 6 & 7, 2020

Welcome! We’re delighted to kick off our two-day blog symposium on the recently released Draft Joint Vertical Merger Guidelines from the DOJ Antitrust Division and the Federal Trade Commission. 

If adopted by the agencies, the guidelines would mark the first time since 1984 that U.S. federal antitrust enforcers have provided official, public guidance on their approach to the increasingly important issue of vertical merger enforcement. 

As previously noted, the release of the draft guidelines was controversial from the outset: The FTC vote to issue the draft was mixed, with a dissent from Commissioner Slaughter, an abstention from Commissioner Chopra, and a concurring statement from Commissioner Wilson.

As the antitrust community gears up to debate the draft guidelines, we have assembled an outstanding group of antitrust experts to weigh in with their initial thoughts on the guidelines here at Truth on the Market. We hope this symposium will provide important insights and stand as a useful resource for the ongoing discussion.

The scholars and practitioners who will participate in the symposium are:

  • Timothy J. Brennan (Professor, Public Policy and Economics, University of Maryland; former Chief Economist, FCC; former economist, DOJ Antitrust Division)
  • Steven Cernak (Partner, Bona Law PC; former antitrust counsel, GM)
  • Eric Fruits (Chief Economist, ICLE; Professor of Economics, Portland State University)
  • Herbert Hovenkamp (James G. Dinan University Professor of Law, University of Pennsylvania)
  • Jonathan M. Jacobson (Partner, Wilson Sonsini Goodrich & Rosati) and Kenneth Edelson (Associate, Wilson Sonsini Goodrich & Rosati)
  • William J. Kolasky (Partner, Hughes Hubbard & Reed; former Deputy Assistant Attorney General, DOJ Antitrust Division) and Philip A. Giordano (Partner, Hughes Hubbard & Reed LLP)
  • Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics) and Kristian Stout (Associate Director, ICLE)
  • Jonathan E. Nuechterlein (Partner, Sidley Austin LLP; former General Counsel, FTC; former Deputy General Counsel, FCC)
  • Sharis A. Pozen (Partner, Clifford Chance; former Vice President of Global Competition Law and Policy, GE; former Acting Assistant Attorney General, DOJ Antitrust Division), Timothy Cornell (Partner, Clifford Chance), Brian Concklin (Counsel, Clifford Chance), and Michael Van Arsdall (Counsel, Clifford Chance)
  • Jan Rybnicek (Counsel, Freshfields Bruckhaus Deringer; former attorney adviser to Commissioner Joshua D. Wright, FTC)
  • Steven C. Salop (tent.) (Professor of Economics and Law, Georgetown University; former Associate Director, FTC Bureau of Economics)
  • Scott A. Sher (Partner, Wilson Sonsini Goodrich & Rosati) and Matthew McDonald (Associate, Wilson Sonsini Goodrich & Rosati)
  • Margaret Slade (Professor Emeritus, Vancouver School of Economics, University of British Columbia)
  • Gregory Werden (former Senior Economic Counsel, DOJ Antitrust Division) and Luke M. Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Vanderbilt University; former Chief Economist, DOJ Antitrust Division; former Chief Economist, FTC)
  • Lawrence J. White (Robert Kavesh Professor of Economics, New York University; former Chief Economist, DOJ Antitrust Division)
  • Joshua D. Wright (University Professor of Law, George Mason University; former Commissioner, FTC), Douglas H. Ginsburg (Senior Circuit Judge, US Court of Appeals for the DC Circuit; Professor of Law, George Mason University; former Assistant Attorney General, DOJ Antitrust Division), Tad Lipsky (Assistant Professor of Law, George Mason University; former Acting Director, FTC Bureau of Competition; former chief antitrust counsel, Coca-Cola; former Deputy Assistant Attorney General, DOJ Antitrust Division), and John M. Yun (Associate Professor of Law, George Mason University; former Acting Deputy Assistant Director, FTC Bureau of Economics)

The first of the participants’ initial posts will appear momentarily, with additional posts appearing throughout the day today and tomorrow. We hope to generate a lively discussion, and expect some of the participants to offer follow-up posts and/or comments on their fellow participants’ posts — so please be sure to check back throughout the day, and don’t miss the comments. We hope our readers will join us in the comments, as well.

Once again, welcome!

Truth on the Market is pleased to announce its next blog symposium:

The 2020 Draft Joint Vertical Merger Guidelines: What’s in, what’s out — and do we need them anyway?

February 6 & 7, 2020

Symposium background

On January 10, 2020, the DOJ Antitrust Division and the Federal Trade Commission released Draft Joint Vertical Merger Guidelines for public comment. If adopted by the agencies, the guidelines would mark the first time since 1984 that U.S. federal antitrust enforcers have provided official, public guidance on their approach to the increasingly important issue of vertical merger enforcement: 

“Challenging anticompetitive vertical mergers is essential to vigorous enforcement. The agencies’ vertical merger policy has evolved substantially since the issuance of the 1984 Non-Horizontal Merger Guidelines, and our guidelines should reflect the current enforcement approach. Greater transparency about the complex issues surrounding vertical mergers will benefit the business community, practitioners, and the courts,” said FTC Chairman Joseph J. Simons.

As evidenced by FTC Commissioner Slaughter’s dissent and FTC Commissioner Chopra’s abstention from the FTC’s vote to issue the draft guidelines, the topic is a contentious one. Similarly, as FTC Commissioner Wilson noted in her concurring statement, the recent FTC hearing on vertical mergers demonstrated that there is a vigorous dispute over what new guidelines should look like (or even if the 1984 Non-Horizontal Guidelines should be updated at all).

The agencies have announced two upcoming workshops to discuss the draft guidelines and have extended the comment period on the draft until February 26.

In advance of the workshops and the imminent discussions over the draft guidelines, we have asked a number of antitrust experts to weigh in here at Truth on the Market: to preview the coming debate by exploring the economic underpinnings of the draft guidelines and their likely role in the future of merger enforcement at the agencies, as well as what is in the guidelines and — perhaps more important — what is left out.  

Beginning the morning of Thursday, February 6, and continuing during business hours through Friday, February 7, Truth on the Market (TOTM) and the International Center for Law & Economics (ICLE) will host a blog symposium on the draft guidelines. 

Symposium participants

As in the past (see examples of previous TOTM blog symposia here), we’ve lined up an outstanding and diverse group of scholars to discuss these issues, including:

  • Timothy J. Brennan (Professor, Public Policy and Economics, University of Maryland; former Chief Economist, FCC; former economist, DOJ Antitrust Division)
  • Steven Cernak (Partner, Bona Law PC; former antitrust counsel, GM)
  • Luke M. Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Vanderbilt University; former Chief Economist, DOJ Antitrust Division; former Chief Economist, FTC)
  • Eric Fruits (Chief Economist, ICLE; Professor of Economics, Portland State University)
  • Douglas H. Ginsburg (Senior Circuit Judge, US Court of Appeals for the DC Circuit; Professor of Law, George Mason University; former Assistant Attorney General, DOJ Antitrust Division)
  • Herbert Hovenkamp (James G. Dinan University Professor of Law, University of Pennsylvania)
  • Jonathan M. Jacobson (Partner, Wilson Sonsini Goodrich & Rosati)
  • William J. Kolasky (Partner, Hughes Hubbard & Reed; former Deputy Assistant Attorney General, DOJ Antitrust Division)
  • Tad Lipsky (Assistant Professor of Law, George Mason University; former Acting Director, FTC Bureau of Competition; former chief antitrust counsel, Coca-Cola; former Deputy Assistant Attorney General, DOJ Antitrust Division) 
  • Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics)
  • Jonathan E. Nuechterlein (Partner, Sidley Austin LLP; former General Counsel, FTC; former Deputy General Counsel, FCC)
  • Sharis A. Pozen (Partner, Clifford Chance; former Vice President of Global Competition Law and Policy, GE; former Acting Assistant Attorney General, DOJ Antitrust Division) 
  • Jan Rybnicek (Counsel, Freshfields Bruckhaus Deringer; former attorney adviser to Commissioner Joshua D. Wright, FTC)
  • Steven C. Salop (tent.) (Professor of Economics and Law, Georgetown University; former Associate Director, FTC Bureau of Economics)
  • Scott A. Sher (Partner, Wilson Sonsini Goodrich & Rosati)
  • Margaret Slade (Professor Emeritus, Vancouver School of Economics, University of British Columbia)
  • Kristian Stout (Associate Director, ICLE)
  • Gregory Werden (former Senior Economic Counsel, DOJ Antitrust Division)
  • Lawrence J. White (Robert Kavesh Professor of Economics, New York University; former Chief Economist, DOJ Antitrust Division)
  • Joshua D. Wright (University Professor of Law, George Mason University; former Commissioner, FTC)
  • John M. Yun (Associate Professor of Law, George Mason University; former Acting Deputy Assistant Director, FTC Bureau of Economics)

We want to thank all of these excellent panelists for agreeing to take time away from their busy schedules to participate in this symposium. We are hopeful that this discussion will provide invaluable insight and perspective on the Draft Joint Vertical Merger Guidelines.

Look for the first posts starting Thursday, February 6!

An oft-repeated claim at conferences, in the media, and among left-wing think tanks is that lax antitrust enforcement has led to a substantial increase in concentration in the US economy of late, strangling the economy, harming workers, and saddling consumers with greater markups in the process. But what if rising concentration (and the current level of antitrust enforcement) were an indication of more competition, not less?

By now the concentration-as-antitrust-bogeyman story is virtually conventional wisdom, echoed, of course, by political candidates such as Elizabeth Warren trying to cash in on the need for a government response to such dire circumstances:

In industry after industry — airlines, banking, health care, agriculture, tech — a handful of corporate giants control more and more. The big guys are locking out smaller, newer competitors. They are crushing innovation. Even if you don’t see the gears turning, this massive concentration means prices go up and quality goes down for everything from air travel to internet service.  

But the claim that lax antitrust enforcement has led to increased concentration in the US and that it has caused economic harm has been debunked several times (for some of our own debunking, see Eric Fruits’ posts here, here, and here). Or, more charitably to those who tirelessly repeat the claim as if it were “settled science,” it has been significantly called into question.

Most recently, several working papers that look at the concentration data in detail and attempt to identify the likely cause of the observed trends show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

The most recent — and, I believe, most significant — corrective to the conventional story comes from economists Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University. As they write in a recent paper titled, “The Industrial Revolution in Services”: 

We show that new technologies have enabled firms that adopt them to scale production over a large number of establishments dispersed across space. Firms that adopt this technology grow by increasing the number of local markets that they serve, but on average are smaller in the markets that they do serve. Unlike Henry Ford’s revolution in manufacturing more than a hundred years ago when manufacturing firms grew by concentrating production in a given location, the new industrial revolution in non-traded sectors takes the form of horizontal expansion across more locations. At the same time, multi-product firms are forced to exit industries where their productivity is low or where the new technology has had no effect. Empirically we see that top firms in the overall economy are more focused and have larger market shares in their chosen sectors, but their size as a share of employment in the overall economy has not changed. (pp. 42-43) (emphasis added).

This makes perfect sense. And it has the benefit of not second-guessing structural changes made in response to technological change. Rather, it points to technological change as doing what it regularly does: improving productivity.

The implementation of new technology seems to be conferring benefits — it’s just that these benefits are not evenly distributed across all firms and industries. But the assumption that larger firms are causing harm (or even that there is any harm in the first place, whatever the cause) is unmerited. 

What the authors find is that the apparent rise in national concentration doesn’t tell the relevant story, and the data certainly aren’t consistent with assumptions that anticompetitive conduct is either a cause or a result of structural changes in the economy.

Hsieh and Rossi-Hansberg point out that increased concentration is not happening everywhere, but is being driven by just three industries:

First, we show that the phenomena of rising concentration . . . is only seen in three broad sectors – services, wholesale, and retail. . . . [T]op firms have become more efficient over time, but our evidence indicates that this is only true for top firms in these three sectors. In manufacturing, for example, concentration has fallen.

Second, rising concentration in these sectors is entirely driven by an increase [in] the number of local markets served by the top firms. (p. 4) (emphasis added).

These findings are a gloss on a (then) working paper — The Fall of the Labor Share and the Rise of Superstar Firms — by David Autor, David Dorn, Lawrence F. Katz, Christina Patterson, and John Van Reenen (now forthcoming in the QJE). Autor et al. (2019) finds that concentration is rising, and that it is the result of increased productivity:

If globalization or technological changes push sales towards the most productive firms in each industry, product market concentration will rise as industries become increasingly dominated by superstar firms, which have high markups and a low labor share of value-added.

We empirically assess seven predictions of this hypothesis: (i) industry sales will increasingly concentrate in a small number of firms; (ii) industries where concentration rises most will have the largest declines in the labor share; (iii) the fall in the labor share will be driven largely by reallocation rather than a fall in the unweighted mean labor share across all firms; (iv) the between-firm reallocation component of the fall in the labor share will be greatest in the sectors with the largest increases in market concentration; (v) the industries that are becoming more concentrated will exhibit faster growth of productivity; (vi) the aggregate markup will rise more than the typical firm’s markup; and (vii) these patterns should be observed not only in U.S. firms, but also internationally. We find support for all of these predictions. (emphasis added).

This alone is quite important (and seemingly often overlooked). Autor et al. (2019) finds that rising concentration is a result of increased productivity that weeds out less-efficient producers. This is a good thing. 

But Hsieh & Rossi-Hansberg drill down into the data to find something perhaps even more significant: the rise in concentration itself is limited to just a few sectors, and, where it is observed, it is predominantly a function of more efficient firms competing in more — and more localized — markets. This means that competition is increasing, not decreasing, whether it is accompanied by an increase in concentration or not. 

No matter how many times and under how many monikers the antitrust populists try to revive it, the Structure-Conduct-Performance paradigm remains as moribund as ever. Indeed, on this point, as one of the new antitrust agonists’ own, Fiona Scott Morton, has written (along with co-authors Martin Gaynor and Steven Berry):

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration. As Bresnahan (1989) argued three decades ago, no clear interpretation of the impact of concentration is possible without a clear focus on equilibrium oligopoly demand and “supply,” where supply includes the list of the marginal cost functions of the firms and the nature of oligopoly competition. 

Some of the recent literature on concentration, profits, and markups has simply reasserted the relevance of the old-style structure-conduct-performance correlations. For economists trained in subfields outside industrial organization, such correlations can be attractive. 

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates. Such correlations will not produce information about the causal estimates that policy demands. It is these causal relationships that will help us understand what, if anything, may be causing markups to rise. (emphasis added).

Indeed! And one reason for the enduring irrelevance of market concentration measures is well laid out in Hsieh and Rossi-Hansberg’s paper:

This evidence is consistent with our view that increasing concentration is driven by new ICT-enabled technologies that ultimately raise aggregate industry TFP. It is not consistent with the view that concentration is due to declining competition or entry barriers . . . , as these forces will result in a decline in industry employment. (pp. 4-5) (emphasis added)

The net effect is that there is essentially no change in concentration by the top firms in the economy as a whole. The “super-star” firms of today’s economy are larger in their chosen sectors and have unleashed productivity growth in these sectors, but they are not any larger as a share of the aggregate economy. (p. 5) (emphasis added)

Thus, to begin with, the claim that increased concentration leads to monopsony in labor markets (and thus unemployment) appears to be false. Hsieh and Rossi-Hansberg again:

[W]e find that total employment rises substantially in industries with rising concentration. This is true even when we look at total employment of the smaller firms in these industries. (p. 4)

[S]ectors with more top firm concentration are the ones where total industry employment (as a share of aggregate employment) has also grown. The employment share of industries with increased top firm concentration grew from 70% in 1977 to 85% in 2013. (p. 9)

Firms throughout the size distribution increase employment in sectors with increasing concentration, not only the top 10% firms in the industry, although by definition the increase is larger among the top firms. (p. 10) (emphasis added)

Again, what appears to be happening is that national-level growth in concentration is actually being driven by increased competition in certain industries at the local level:

93% of the growth in concentration comes from growth in the number of cities served by top firms, and only 7% comes from increased employment per city. . . . [A]verage employment per county and per establishment of top firms falls. So necessarily more than 100% of concentration growth has to come from the increase in the number of counties and establishments served by the top firms. (p.13)
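
To see the mechanics, consider a toy calculation with entirely made-up numbers (nothing here comes from the paper): a top firm’s share of national industry employment can rise sharply even as its footprint in each market it serves shrinks, so long as it enters enough new markets.

    # Hypothetical numbers (not from the paper) illustrating how national
    # concentration can rise while per-market presence shrinks.

    def national_share(cities_served, employment_per_city, total_national_employment):
        return cities_served * employment_per_city / total_national_employment

    total_emp = 1_000_000  # national industry employment, held fixed for simplicity

    # A 1977-style top firm: few cities, big local footprint
    before = national_share(cities_served=20, employment_per_city=1_000,
                            total_national_employment=total_emp)

    # A 2013-style top firm: many more cities, smaller footprint in each
    after = national_share(cities_served=150, employment_per_city=400,
                           total_national_employment=total_emp)

    print(f"National employment share: {before:.1%} -> {after:.1%}")
    # National employment share: 2.0% -> 6.0%
    # The firm's national share triples even though employment per city fell 60%;
    # all of the growth comes from the extensive margin (more cities served).

Local concentration in each newly entered city can fall at the same time, since the expanding firm competes with whatever incumbents already serve that market.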

The net effect is a decrease in the power of top firms relative to the economy as a whole, as the largest firms specialize more, and are dominant in fewer industries:

Top firms produce in more industries than the average firm, but less so in 2013 compared to 1977. The number of industries of a top 0.001% firm (relative to the average firm) fell from 35 in 1977 to 17 in 2013. The corresponding number for a top 0.01% firm is 21 industries in 1977 and 9 industries in 2013. (p. 17)

Thus, summing up, technology has led to increased productivity as well as greater specialization by large firms, especially in relatively concentrated industries (exactly the opposite of the pessimistic stories):  

[T]op firms are now more specialized, are larger in the chosen industries, and these are precisely the industries that have experienced concentration growth. (p. 18)

Unsurprisingly (except to some…), the increase in concentration in certain industries does not translate into an increase in concentration in the economy as a whole. In other words, workers can shift jobs between industries, and there is enough geographic and firm mobility to prevent monopsony. (Despite rampant assumptions that increased concentration is constraining labor competition everywhere…).

Although the employment share of top firms in an average industry has increased substantially, the employment share of the top firms in the aggregate economy has not. (p. 15)

It is also simply not clearly the case that concentration is causing prices to rise or otherwise causing any harm. As Hsieh and Rossi-Hansberg note:

[T]he magnitude of the overall trend in markups is still controversial . . . and . . . the geographic expansion of top firms leads to declines in local concentration . . . that could enhance competition. (p. 37)

Indeed, recent papers such as Traina (2018), Gutiérrez and Philippon (2017), and the IMF (2019) have found increasing markups over the last few decades but at much more moderate rates than the famous De Loecker and Eeckhout (2017) study. Other parts of the anticompetitive narrative have been challenged as well. Karabarbounis and Neiman (2018) finds that profits have increased, but are still within their historical range. Rinz (2018) shows decreased wages in concentrated markets but also points out that local concentration has been decreasing over the relevant time period.

None of this should be so surprising. Has antitrust enforcement gotten more lax, leading to greater concentration? According to Vita and Osinski (2018), not so much. And how about the stagnant rate of new firms? Are incumbent monopolists killing off new startups? The more likely — albeit mundane — explanation, according to Hopenhayn et al. (2018), is that increased average firm age is due to an aging labor force. Lastly, the paper from Hsieh and Rossi-Hansberg discussed above is only the latest in a series of papers, including Bessen (2017), Van Reenen (2018), and Autor et al. (2019), that shows a rise in fixed costs due to investments in proprietary information technology, which correlates with increased concentration. 

So what is the upshot of all this?

  • First, as noted, employment has not decreased because of increased concentration; quite the opposite. Employment has increased in the industries that have experienced the most concentration at the national level.
  • Second, this result suggests that the rise in concentrated industries has not led to increased market power over labor.
  • Third, concentration itself needs to be understood more precisely. It is not explained by a simple narrative that the economy as a whole has experienced a great deal of concentration and this has been detrimental for consumers and workers. Specific industries have experienced national level concentration, but simultaneously those same industries have become more specialized and expanded competition into local markets. 

Surprisingly (because their paper has been around for a while and yet this conclusion is rarely recited by advocates for more intervention — although they happily use the paper to support claims of rising concentration), Autor et al. (2019) finds the same thing:

Our formal model, detailed below, generates superstar effects from increases in the toughness of product market competition that raise the market share of the most productive firms in each sector at the expense of less productive competitors. . . . An alternative perspective on the rise of superstar firms is that they reflect a diminution of competition, due to a weakening of U.S. antitrust enforcement (Dottling, Gutierrez and Philippon, 2018). Our findings on the similarity of trends in the U.S. and Europe, where antitrust authorities have acted more aggressively on large firms (Gutierrez and Philippon, 2018), combined with the fact that the concentrating sectors appear to be growing more productive and innovative, suggests that this is unlikely to be the primary explanation, although it may [be] important in some specific industries (see Cooper et al, 2019, on healthcare for example). (emphasis added).

The popular narrative among Neo-Brandeisian antitrust scholars that lax antitrust enforcement has led to concentration detrimental to society is at base an empirical one. The findings of these empirical papers severely undermine the persuasiveness of that story.

Neither side in the debate over Section 230 is blameless for the current state of affairs. Reform/repeal proponents have tended to offer ill-considered, irrelevant, or often simply incorrect justifications for amending or tossing Section 230. Meanwhile, many supporters of the law in its current form are reflexively resistant to any change and too quick to dismiss the more reasonable concerns that have been voiced.

Most of all, the urge to politicize this issue — on all sides — stands squarely in the way of any sensible discussion and thus of any sensible reform.

Continue Reading...

[TOTM: The following is the fourth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]

The courtroom trial in the Federal Trade Commission’s (FTC’s) antitrust case against Qualcomm ended in January with a promise from the judge in the case, Judge Lucy Koh, to issue a ruling as quickly as possible — caveated by her acknowledgement that the case is complicated and the evidence voluminous. Well, things have only gotten more complicated since the end of the trial. Not only did Apple and Qualcomm reach a settlement in the antitrust case against Qualcomm that Apple filed just three days after the FTC brought its suit, but the abbreviated trial in that case saw the presentation by Qualcomm of some damning evidence that, if accurate, seriously calls into (further) question the merits of the FTC’s case.

Apple v. Qualcomm settles — and the DOJ takes notice

The Apple v. Qualcomm case, which was based on substantially the same arguments brought by the FTC in its case, ended abruptly last month after only a day and a half of trial — just enough time for the parties to make their opening statements — when Apple and Qualcomm reached an out-of-court settlement. The settlement includes a six-year global patent licensing deal, a multi-year chip supplier agreement, an end to all of the patent disputes around the world between the two companies, and a $4.5 billion settlement payment from Apple to Qualcomm.

That alone complicates the economic environment into which Judge Koh will issue her ruling. But the Apple v. Qualcomm trial also appears to have induced the Department of Justice Antitrust Division (DOJ) to weigh in on the FTC’s case with a Statement of Interest requesting Judge Koh to use caution in fashioning a remedy in the case should she side with the FTC, followed by a somewhat snarky Reply from the FTC arguing the DOJ’s filing was untimely (and, reading the not-so-hidden subtext, unwelcome).

But buried in the DOJ’s Statement is an important indication of why it filed its Statement when it did, just about a week after the end of the Apple v. Qualcomm case, and a pointer to a much larger issue that calls the FTC’s case against Qualcomm even further into question (I previously wrote about the lack of theoretical and evidentiary merit in the FTC’s case here).

Footnote 6 of the DOJ’s Statement reads:

Internal Apple documents that recently became public describe how, in an effort to “[r]educe Apple’s net royalty to Qualcomm,” Apple planned to “[h]urt Qualcomm financially” and “[p]ut Qualcomm’s licensing model at risk,” including by filing lawsuits raising claims similar to the FTC’s claims in this case …. One commentator has observed that these documents “potentially reveal[] that Apple was engaging in a bad faith argument both in front of antitrust enforcers as well as the legal courts about the actual value and nature of Qualcomm’s patented innovation.” (Emphasis added).

Indeed, the slides presented by Qualcomm during that single day of trial in Apple v. Qualcomm are significant, not only for what they say about Apple’s conduct, but, more importantly, for what they say about the evidentiary basis for the FTC’s claims against the company.

The evidence presented by Qualcomm in its opening statement suggests some troubling conduct by Apple

Others have pointed to Qualcomm’s opening slides and the Apple internal documents they present to note Apple’s apparent bad conduct. As one commentator sums it up:

Although we really only managed to get a small glimpse of Qualcomm’s evidence demonstrating the extent of Apple’s coordinated strategy to manipulate the FRAND license rate, that glimpse was particularly enlightening. It demonstrated a decade-long coordinated effort within Apple to systematically engage in what can only fairly be described as manipulation (if not creation of evidence) and classic holdout.

Qualcomm showed during opening arguments that, dating back to at least 2009, Apple had been laying the foundation for challenging its longstanding relationship with Qualcomm. (Emphasis added).

The internal Apple documents presented by Qualcomm to corroborate this claim appear quite damning. Of course, absent explanation and cross-examination, it’s impossible to know for certain what the documents mean. But on their face they suggest Apple knowingly undertook a deliberate scheme (and knowingly took upon itself significant legal risk in doing so) to devalue patent portfolios comparable to Qualcomm’s.

The apparent purpose of this scheme was to devalue comparable patent licensing agreements where Apple had the power to do so (through litigation or the threat of litigation) in order to then use those agreements to argue that Qualcomm’s royalty rates were above the allowable, FRAND level, and to undermine the royalties Qualcomm would be awarded in courts adjudicating its FRAND disputes with the company. As one commentator put it:

Apple embarked upon a coordinated scheme to challenge weaker patents in order to beat down licensing prices. Once the challenges to those weaker patents were successful, and the licensing rates paid to those with weaker patent portfolios were minimized, Apple would use the lower prices paid for weaker patent portfolios as proof that Qualcomm was charging a super-competitive licensing price; a licensing price that violated Qualcomm’s FRAND obligations. (Emphasis added).

That alone is a startling revelation, if accurate, and one that would seem to undermine claims that patent holdout isn’t a real problem. It also would undermine Apple’s claims that it is a “willing licensee,” engaging with SEP licensors in good faith. (Indeed, this has been called into question before, and one Federal Circuit judge has noted in dissent that “[t]he record in this case shows evidence that Apple may have been a hold out.”). If the implications drawn from the Apple documents shown in Qualcomm’s opening statement are accurate, there is good reason to doubt that Apple has been acting in good faith.

Even more troubling is what it means for the strength of the FTC’s case

But the evidence offered in Qualcomm’s opening argument points to another, more troubling implication, as well. We know that Apple has been coordinating with the FTC and was likely an important impetus for the FTC’s decision to bring an action in the first place. It seems reasonable to assume that Apple used these “manipulated” agreements to help make its case.

But what is most troubling is the extent to which it appears to have worked.

The FTC’s action against Qualcomm rested in substantial part on arguments that Qualcomm’s rates were too high (even though the FTC constructed its case without coming right out and saying this, at least until trial). In its opening statement the FTC said:

Qualcomm’s practices, including no license, no chips, skewed negotiations towards the outcomes that favor Qualcomm and lead to higher royalties. Qualcomm is committed to license its standard essential patents on fair, reasonable, and non-discriminatory terms. But even before doing market comparison, we know that the license rates charged by Qualcomm are too high and above FRAND because Qualcomm uses its chip power to require a license.

* * *

Mr. Michael Lasinski [the FTC’s patent valuation expert] compared the royalty rates received by Qualcomm to … the range of FRAND rates that ordinarily would form the boundaries of a negotiation … Mr. Lasinski’s expert opinion … is that Qualcomm’s royalty rates are far above any indicators of fair and reasonable rates. (Emphasis added).

The key question is what constitutes the “range of FRAND rates that ordinarily would form the boundaries of a negotiation”?

Because they were discussed under seal, we don’t know the precise agreements that the FTC’s expert, Mr. Lasinski, used for his analysis. But we do know something about them: His analysis entailed a study of only eight licensing agreements; in six of them, the licensee was either Apple or Samsung; and in all of them the licensor was either InterDigital, Nokia, or Ericsson. We also know that Mr. Lasinski’s valuation study did not include any Qualcomm licenses, and that the eight agreements he looked at were all executed after the district court’s decision in Microsoft v. Motorola in 2013.

A curiously small number of agreements

Right off the bat there is a curiosity in the FTC’s valuation analysis. Even though there are hundreds of SEP license agreements involving the relevant standards, the FTC’s analysis relied on only eight, three-quarters of which involved licenses taken by just two companies: Apple and Samsung.

Indeed, even since 2013 (a date to which we will return) there have been scads of licenses (see, e.g., here, here, and here). Apple and Samsung are hardly the only companies that make CDMA and LTE devices; there are — quite literally — hundreds of other manufacturers out there, all of them licensing essentially the same technology — including global giants like LG, Huawei, HTC, Oppo, Lenovo, and Xiaomi. Why were none of their licenses included in the analysis? 

At the same time, while InterDigital, Nokia, and Ericsson are among the largest holders of CDMA and LTE SEPs, several dozen companies have declared such patents, including Motorola (Alphabet), NEC, Huawei, Samsung, ZTE, NTT DOCOMO, etc. Again — why were none of their licenses included in the analysis?

All else equal, more data yields better results. This is particularly true where the data are complex license agreements which are often embedded in larger, even-more-complex commercial agreements and which incorporate widely varying patent portfolios, patent implementers, and terms.

Yet the FTC relied on just eight agreements in its comparability study, covering a tiny fraction of the industry’s licensors and licensees, and, notably, including primarily licenses taken by the two companies (Samsung and Apple) that have most aggressively litigated their way to lower royalty rates.
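
To see why the composition of the comparables set matters, consider a toy benchmark calculation. The rates below are entirely hypothetical (they reflect neither the sealed agreements nor any real-world royalty figures); the point is only that a benchmark built mostly from aggressively litigated-down agreements will sit well below one built from a broader pool, making any rate measured against it look inflated.

    import statistics

    # Entirely hypothetical per-device royalty rates (percent of device price).
    # "litigated_down" stands in for agreements reached under aggressive challenge;
    # "broader_pool" stands in for the rest of the industry's licenses.
    litigated_down = [1.2, 1.3, 1.1, 1.4, 1.2, 1.3]
    broader_pool = [2.0, 2.4, 1.8, 2.6, 2.2, 1.9, 2.5, 2.1]

    narrow_benchmark = statistics.mean(litigated_down + broader_pool[:2])  # 8 deals, 6 litigated-down
    wide_benchmark = statistics.mean(litigated_down + broader_pool)        # the fuller sample

    print(f"Benchmark from 8 mostly litigated-down deals: {narrow_benchmark:.2f}%")
    print(f"Benchmark from the wider pool:               {wide_benchmark:.2f}%")
    # Any rate judged against the narrow benchmark looks "above FRAND"
    # simply because of which agreements were selected as comparable.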

A curiously crabbed selection of licensors

And it is not just that the selected licensees represent a weirdly small and biased sample; it is also not necessarily even a particularly comparable sample.

One thing we can be fairly confident of, given what we know of the agreements used, is that at least one of the license agreements involved Nokia licensing to Apple, and another involved InterDigital licensing to Apple. But these companies’ patent portfolios are not exactly comparable to Qualcomm’s. About Nokia’s patents, Apple said:

[slide from Qualcomm’s opening statement, not reproduced here]

And about InterDigital’s:

[slide from Qualcomm’s opening statement, not reproduced here]

Meanwhile, Apple’s view of Qualcomm’s patent portfolio (despite its public comments to the contrary) was that it was considerably better than the others’:

[slide from Qualcomm’s opening statement, not reproduced here]

The FTC’s choice of such a limited range of comparable license agreements is curious for another reason, as well: It includes no Qualcomm agreements. Qualcomm is certainly one of the biggest players in the cellular licensing space, and no doubt more than a few license agreements involve Qualcomm. While it might not make sense to include Qualcomm licenses that the FTC claims incorporate anticompetitive terms, that doesn’t describe the huge range of Qualcomm licenses with which the FTC has no quarrel. Among other things, Qualcomm licenses from before it began selling chips would not have been affected by its alleged “no license, no chips” scheme, nor would licenses granted to companies that didn’t also purchase Qualcomm chips. Furthermore, its licenses for technology reading on the WCDMA standard are not claimed to be anticompetitive by the FTC.

And yet none of these licenses were deemed “comparable” by the FTC’s expert, even though, on many dimensions — most notably, with respect to the underlying patent portfolio being valued — they would have been the most comparable (i.e., identical).

A curiously circumscribed timeframe

That the FTC’s expert should use the 2013 cut-off date is also questionable. According to Lasinski, he chose to use agreements after 2013 because it was in 2013 that the U.S. District Court for the Western District of Washington decided the Microsoft v. Motorola case. Among other things, the court in Microsoft v. Motorola held that the proper value of a SEP is its “intrinsic” patent value, including its value to the standard, but not including the additional value it derives from being incorporated into a widely used standard.

According to the FTC’s expert,

prior to [Microsoft v. Motorola], people were trying to value … the standard and the license based on the value of the standard, not the value of the patents ….

Asked by Qualcomm’s counsel if his concern was that the “royalty rates derived in license agreements for cellular SEPs [before Microsoft v. Motorola] could very well have been above FRAND,” Mr. Lasinski concurred.

The problem with this approach is that it’s little better than arbitrary. The Motorola decision was an important one, to be sure, but the notion that sophisticated parties in a multi-billion-dollar industry were systematically agreeing to improper terms until a single court in Washington suggested otherwise is absurd. Of course, such agreements are negotiated in “the shadow of the law,” and judicial decisions like the one in Washington (later upheld by the Ninth Circuit) can affect the parties’ bargaining positions.

But even if it were true that the court’s decision had some effect on licensing rates, the decision would still have been only one of myriad factors determining parties’ relative bargaining  power and their assessment of the proper valuation of SEPs. There is no basis to support the assertion that the Motorola decision marked a sea-change between “improper” and “proper” patent valuations. And, even if it did, it was certainly not alone in doing so, and the FTC’s expert offers no justification for determining that agreements reached before, say, the European Commission’s decision against Qualcomm in 2018 were “proper,” or that the Korea FTC’s decision against Qualcomm in 2009 didn’t have the same sort of corrective effect as the Motorola court’s decision in 2013. 

At the same time, a review of a wider range of agreements suggested that Qualcomm’s licensing royalties weren’t inflated

Meanwhile, one of Qualcomm’s experts in the FTC case, former DOJ Chief Economist Aviv Nevo, examined whether the FTC’s theory of anticompetitive harm was borne out by the data, looking at Qualcomm’s royalty rates across time periods and standards and using a much larger set of agreements. Although his remit was different from Mr. Lasinski’s, and although he analyzed only Qualcomm licenses, his analysis still sheds light on Mr. Lasinski’s conclusions:

[S]pecifically what I looked at was the predictions from the theory to see if they’re actually borne in the data….

[O]ne of the clear predictions from the theory is that during periods of alleged market power, the theory predicts that we should see higher royalty rates.

So that’s a very clear prediction that you can take to data. You can look at the alleged market power period, you can look at the royalty rates and the agreements that were signed during that period and compare to other periods to see whether we actually see a difference in the rates.

Dr. Nevo’s analysis, which looked at royalty rates in Qualcomm’s SEP license agreements for CDMA, WCDMA, and LTE ranging from 1990 to 2017, found no differences in rates between periods when Qualcomm was alleged to have market power and when it was not alleged to have market power (or could not have market power, on the FTC’s theory, because it did not sell corresponding chips).
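
A stylized version of that exercise, with invented numbers and none of the controls a real expert analysis would include, might look like the following. The theory’s prediction is a positive gap between the two periods’ rates; the testimony was that no such gap appears in the data.

    import statistics

    # Invented royalty rates (percent) for agreements signed in each period;
    # the real analysis covered Qualcomm's CDMA/WCDMA/LTE licenses, 1990-2017.
    alleged_power_period = [3.2, 3.1, 3.3, 3.0, 3.2, 3.1]
    other_periods = [3.1, 3.2, 3.0, 3.3, 3.1, 3.2]

    gap = statistics.mean(alleged_power_period) - statistics.mean(other_periods)
    print(f"Difference in mean royalty rate: {gap:+.2f} percentage points")
    # If the FTC's theory were right, the alleged-market-power period should show
    # systematically higher rates; a gap near zero cuts against that prediction.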

The reason this is relevant is that Mr. Lasinski’s assessment implies that Qualcomm’s higher royalty rates weren’t attributable to its superior patent portfolio, leaving either anticompetitive conduct or non-anticompetitive, superior bargaining ability as the explanation. No one thinks Qualcomm has cornered the market on exceptional negotiators, so really the only proffered explanation for the results of Mr. Lasinski’s analysis is anticompetitive conduct. But this assumes that his analysis is actually reliable. Prof. Nevo’s analysis offers some reason to think that it is not.

All of the agreements studied by Mr. Lasinski were drawn from the period when Qualcomm is alleged to have employed anticompetitive conduct to elevate its royalty rates above FRAND. But when the actual royalties charged by Qualcomm during its alleged exercise of market power are compared to those charged when and where it did not have market power, the evidence shows it received identical rates. Mr. Lasinski’s results, then, would imply that Qualcomm’s royalties were “too high” not only while it was allegedly acting anticompetitively, but also when it was not. That simple fact suggests on its face that Mr. Lasinski’s analysis may have been flawed, and that it systematically undervalued Qualcomm’s patents.

Connecting the dots and calling into question the strength of the FTC’s case

In its closing argument, the FTC pulled together the implications of its allegations of anticompetitive conduct by pointing to Mr. Lasinski’s testimony:

Now, looking at the effect of all of this conduct, Qualcomm’s own documents show that it earned many times the licensing revenue of other major licensors, like Ericsson.

* * *

Mr. Lasinski analyzed whether this enormous difference in royalties could be explained by the relative quality and size of Qualcomm’s portfolio, but that massive disparity was not explained.

Qualcomm’s royalties are disproportionate to those of other SEP licensors and many times higher than any plausible calculation of a FRAND rate.

* * *

The overwhelming direct evidence, some of which is cited here, shows that Qualcomm’s conduct led licensees to pay higher royalties than they would have in fair negotiations.

It is possible, of course, that Lasinski’s methodology was flawed; indeed, at trial Qualcomm argued exactly this in challenging his testimony. But it is also possible that, whether his methodology was flawed or not, his underlying data were flawed.

It is impossible to draw this conclusion definitively from the publicly available evidence, but the subsequent revelation that Apple may well have manipulated at least a significant share of the eight agreements that constituted Mr. Lasinski’s data certainly increases its plausibility: We now know, following Qualcomm’s opening statement in Apple v. Qualcomm, that the stilted set of comparable agreements studied by the FTC’s expert happens to be dominated by agreements that Apple may have manipulated to reflect lower-than-FRAND rates.

What is most concerning is that the FTC may have built its case on such questionable evidence, either by intentionally cherry-picking the evidence upon which it relied or, inadvertently, because it rested on such a needlessly limited range of data, some of which may have been tainted.

Intentionally or not, the FTC appears to have performed its valuation analysis using a needlessly circumscribed range of comparable agreements and justified its decision to do so using questionable assumptions. This seriously calls into question the strength of the FTC’s case.

The German Bundeskartellamt’s Facebook decision is unsound from either a competition or privacy policy perspective, and will only make the fraught privacy/antitrust relationship worse.

Continue Reading...