
Responding to a new draft policy statement from the U.S. Patent & Trademark Office (USPTO), the National Institute of Standards and Technology (NIST), and the U.S. Department of Justice, Antitrust Division (DOJ) regarding remedies for infringement of standard-essential patents (SEPs), a group of 19 distinguished law, economics, and business scholars convened by the International Center for Law & Economics (ICLE) submitted comments arguing that the guidance would improperly tilt the balance of power between implementers and inventors, and could undermine incentives for innovation.

As explained in the scholars’ comments, the draft policy statement misunderstands many aspects of patent and antitrust policy. The draft notably underestimates the value of injunctions and the circumstances in which they are a necessary remedy. It also overlooks important features of the standardization process that make opportunistic behavior much less likely than policymakers typically recognize. These points are discussed in even more detail in previous work by ICLE scholars, including here and here.

These first-order considerations are only the tip of the iceberg, however. Patent policy has a huge range of second-order effects that the draft policy statement and policymakers more generally tend to overlook. Indeed, reducing patent protection has more detrimental effects on economic welfare than the conventional wisdom typically assumes. 

The comments highlight three important areas affected by SEP policy that would be undermined by the draft statement. 

  1. First, SEPs are established through an industry-wide, collaborative process that develops and protects innovations considered essential to an industry’s core functioning. This process enables firms to specialize in various functions throughout an industry, rather than vertically integrate to ensure compatibility. 
  2. Second, strong patent protection, especially of SEPs, boosts startup creation via a broader set of mechanisms than is typically recognized. 
  3. Finally, strong SEP protection is essential to safeguard U.S. technology leadership and sovereignty. 

As explained in the scholars’ comments, the draft policy statement would be detrimental on all three of these dimensions. 

To be clear, the comments do not argue that addressing these secondary effects should be a central focus of patent and antitrust policy. Instead, the point is that policymakers must deal with a far more complex set of issues than is commonly recognized; the effects of SEP policy aren’t limited to the allocation of rents among inventors and implementers (as they are sometimes framed in policy debates). Accordingly, policymakers should proceed with caution and resist the temptation to alter by fiat terms that have emerged through careful negotiation among inventors and implementers, and which have been governed for centuries by the common law of contract. 

Collaborative Standard-Setting and Specialization as Substitutes for Proprietary Standards and Vertical Integration

Intellectual property in general—and patents, more specifically—is often described as a means to increase the monetary returns from the creation and distribution of innovations. While this is undeniably the case, this framing overlooks the essential role that IP also plays in promoting specialization throughout the economy.

As Ronald Coase famously showed in his Nobel-winning work, firms must constantly decide whether to perform functions in-house (by vertically integrating) or contract them out to third parties (via the market mechanism). Coase concluded that these decisions hinge on whether the transaction costs associated with the market mechanism outweigh the cost of organizing production internally. Decades later, Oliver Williamson added a key insight: among the most important transaction costs that firms encounter are those that stem from incomplete contracts and the scope for opportunistic behavior they entail.

This leads to a simple rule of thumb: as the scope for opportunistic behavior increases, firms are less likely to use the market mechanism and will instead perform tasks in-house, leading to increased vertical integration.

IP plays a key role in this process. Patents drastically reduce the transaction costs associated with the transfer of knowledge. This gives firms the opportunity to develop innovations collaboratively and without fear that trading partners might opportunistically appropriate their inventions. In turn, this leads to increased specialization. As Robert Merges observes:

Patents facilitate arms-length trade of a technology-intensive input, leading to entry and specialization.

More specifically, it is worth noting that the development and commercialization of inventions can lead to two important sources of opportunistic behavior: patent holdup and patent holdout. As the assembled scholars explain in their comments, while patent holdup has drawn the lion’s share of policymaker attention, empirical and anecdotal evidence suggest that holdout is the more salient problem.

Policies that reduce these costs—especially patent holdout—in a cost-effective manner are worthwhile, with the immediate result that technologies are more widely distributed than would otherwise be the case. Inventors also see more intense and extensive incentives to produce those technologies in the first place.

The Importance of Intellectual Property Rights for Startup Activity

Strong patent rights are essential to monetize innovation, thus enabling new firms to gain a foothold in the marketplace. As the scholars’ comments explain, this is even more true for startup companies. There are three main reasons for this: 

  1. Patent rights protected by injunctions prevent established companies from simply copying innovative startups, with the expectation that they will be able to afford court-set royalties; 
  2. Patent rights can be the basis for securitization, facilitating access to startup funding; and
  3. Patent rights drive venture capital (VC) investment.

While point (1) is widely acknowledged, many fail to recognize that it is particularly important for startup companies. There is abundant literature on firms’ appropriability mechanisms (these are essentially the strategies firms employ to prevent rivals from copying their inventions). The literature tells us that patent protection is far from the only strategy firms use to protect their inventions (see, e.g., here, here, and here). 

The alternative appropriability mechanisms identified by these studies tend to be easier to implement for well-established firms. For instance, many firms earn returns on their inventions by incorporating them into physical products that cannot be reverse engineered. This is much easier for firms that already have a large industry presence and advanced manufacturing capabilities. In contrast, startup companies—almost by definition—must outsource production.

Second, property rights could drive startup activity through the collateralization of IP. By offering security interests in patents, trademarks, and copyrights, startups with few or no tangible assets can obtain funding without surrendering significant equity. As Gaétan de Rassenfosse puts it:

SMEs can leverage their IP to facilitate R&D financing…. [P]atents materialize the value of knowledge stock: they codify the knowledge and make it tradable, such that they can be used as collaterals. Recent theoretical evidence by Amable et al. (2010) suggests that a systematic use of patents as collateral would allow a high growth rate of innovations despite financial constraints.

Finally, there is reason to believe intellectual-property protection is an important driver of venture capital activity. Beyond simply enabling firms to earn returns on their investments, patents might signal to potential investors that a company is successful and/or valuable. Empirical research by Hsu and Ziedonis, for instance, supports this hypothesis:

[W]e find a statistically significant and economically large effect of patent filings on investor estimates of start-up value…. A doubling in the patent application stock of a new venture [in] this sector is associated with a 28 percent increase in valuation, representing an upward funding-round adjustment of approximately $16.8 million for the average start-up in our sample.

In short, intellectual property can stimulate startup activity through various mechanisms. It stands to reason, then, that weakening patent protection at the margin will make it harder for entrepreneurs to embark on new business ventures.

The Role of Strong SEP Rights in Guarding Against China’s ‘Cyber Great Power’ Ambitions 

The United States, due in large measure to its strong intellectual-property protections, is a nation of innovators, and its production of IP is one of its most important comparative advantages. 

IP and its legal protections become even more important, however, when dealing with international jurisdictions, like China, that don’t offer similar levels of legal protection. Policies that make it harder for patent holders to obtain injunctions hand licensees and implementers a short-term advantage, because the latter are able to use patented technology without having to negotiate and pay the full market price. 

In the case of many SEPs—particularly those in the telecommunications sector—a great many patent holders are U.S.-based, while the lion’s share of implementers are Chinese. The anti-injunction policy espoused in the draft policy statement thus amounts to a subsidy to Chinese infringers of U.S. technology.

At the same time, China routinely undermines U.S. intellectual property protections through its industrial policy. The government’s stated goal is to promote “fair and reasonable” international rules, but it is clear that China stretches its power over intellectual property around the world by granting “anti-suit injunctions” on behalf of Chinese smartphone makers, designed to curtail enforcement of foreign companies’ patent rights.

This is part of the Chinese government’s larger approach to industrial policy, which seeks to expand Chinese power in international trade negotiations and in global standards bodies. As one Chinese Communist Party official put it:

Standards are the commanding heights, the right to speak, and the right to control. Therefore, the one who obtains the standards gains the world.

Insufficient protections for intellectual property will hasten China’s objective of dominating collaborative standard development in the medium to long term. Simultaneously, this will engender a switch to greater reliance on proprietary, closed standards rather than collaborative, open standards. These harmful consequences are magnified in the context of the global technology landscape, and in light of China’s strategic effort to shape international technology standards. Chinese companies, directed by their government authorities, will gain significant control of the technologies that will underpin tomorrow’s digital goods and services.

The scholars convened by ICLE were not alone in voicing these fears. David Teece (also a signatory to the ICLE-convened comments), for example, surmises in his comments that: 

The US government, in reviewing competition policy issues that might impact standards, therefore needs to be aware that the issues at hand have tremendous geopolitical consequences and cannot be looked at in isolation…. Success in this regard will promote competition and is our best chance to maintain technological leadership—and, along with it, long-term economic growth and consumer welfare and national security.

Similarly, comments from the Center for Strategic and International Studies (signed by, among others, former USPTO Director Andrei Iancu, former NIST Director Walter Copan, and former Deputy Secretary of Defense John Hamre) argue that the draft policy statement would benefit Chinese firms at U.S. firms’ expense:

What is more, the largest short-term and long-term beneficiaries of the 2021 Draft Policy Statement are firms based in China. Currently, China is the world’s largest consumer of SEP-based technology, so weakening protection of American owned patents directly benefits Chinese manufacturers. The unintended effect of the 2021 Draft Policy Statement will be to support Chinese efforts to dominate critical technology standards and other advanced technologies, such as 5G. Put simply, devaluing U.S. patents is akin to a subsidized tech transfer to China.

With Chinese authorities joining standardization bodies and increasingly claiming jurisdiction over F/RAND disputes, there should be careful reevaluation of the ways the draft policy statement would further weaken the United States’ comparative advantage in IP-dependent technological innovation. 


In short, weakening patent protection could have detrimental ramifications that are routinely overlooked by policymakers. These include increasing inventors’ incentives to vertically integrate rather than develop innovations collaboratively; reducing startup activity (especially when combined with antitrust enforcers’ newfound proclivity to challenge startup acquisitions); and eroding America’s global technology leadership, particularly with respect to China.

For these reasons (and others), the text of the draft policy statement should be reconsidered and either revised substantially to better reflect these concerns or withdrawn entirely. 

The signatories to the comments are:

Alden F. Abbott
Senior Research Fellow, Mercatus Center, George Mason University
Former General Counsel, U.S. Federal Trade Commission

Jonathan Barnett
Torrey H. Webb Professor of Law, University of Southern California

Ronald A. Cass
Dean Emeritus, School of Law, Boston University
Former Commissioner and Vice-Chairman, U.S. International Trade Commission

Giuseppe Colangelo
Jean Monnet Chair in European Innovation Policy and Associate Professor of Competition Law & Economics, University of Basilicata and LUISS (Italy)

Richard A. Epstein
Laurence A. Tisch Professor of Law, New York University

Bowman Heiden
Executive Director, Tusher Initiative at the Haas School of Business, University of California, Berkeley

Justin (Gus) Hurwitz
Professor of Law, University of Nebraska

Thomas A. Lambert
Wall Chair in Corporate Law and Governance, University of Missouri

Stan J. Liebowitz
Ashbel Smith Professor of Economics, University of Texas at Dallas

John E. Lopatka
A. Robert Noll Distinguished Professor of Law, Penn State University

Keith Mallinson
Founder and Managing Partner

Geoffrey A. Manne
President and Founder, International Center for Law & Economics

Adam Mossoff
Professor of Law, George Mason University

Kristen Osenga
Austin E. Owen Research Scholar and Professor of Law, University of Richmond

Vernon L. Smith
George L. Argyros Endowed Chair in Finance and Economics, Chapman University
Nobel Laureate in Economics (2002)

Daniel F. Spulber
Elinor Hobbs Distinguished Professor of International Business, Northwestern University

David J. Teece
Thomas W. Tusher Professor in Global Business, University of California, Berkeley

Joshua D. Wright
University Professor of Law, George Mason University
Former Commissioner, U.S. Federal Trade Commission

John M. Yun
Associate Professor of Law, George Mason University
Former Acting Deputy Assistant Director, Bureau of Economics, U.S. Federal Trade Commission 

This post is the second in a planned series. The first installment can be found here.

In just over a century since its dawn, liberalism had reshaped much of the world along the lines of individualism, free markets, private property, contract, trade, and competition. A modest laissez-faire political philosophy that had begun to germinate in the minds of French Physiocrats in the early 18th century had, scarcely 150 years later, inspired the constitution of the world’s nascent leading power, the United States. But it wasn’t all plain sailing, as liberalism’s expansion eventually galvanized strong social, political, cultural, economic and even spiritual opposition, which coalesced around two main ideologies: socialism and fascism.

In this post, I explore the collectivist backlash against liberalism, its deeper meaning from the perspective of political philosophy, and the main features of its two main antagonists—especially as they relate to competition and competition regulation. Ultimately, the purpose is to show that, in trying to respond to the collectivist threat, successive iterations of neoliberalism integrated some of collectivism’s key postulates in an attempt to create a synthesis between opposing philosophical currents. Yet this “mostly” liberal synthesis, which serves as the philosophical basis of many competition systems today, is afflicted with the same collectivist flaws that the synthesis purported to overthrow (as I will elaborate in subsequent posts).

The Collectivist Backlash

By the early 20th century, two deeply illiberal movements bent on exposing and demolishing the fallacies and contradictions of liberalism had succeeded in capturing the imagination and support of the masses. These collectivist ideologies were Marxian socialism/communism on the left and fascism/Nazism on the right. Although ultimately distinct, they both rejected the basic postulates of classical liberalism. 

Socially, both agreed that liberalism uprooted traditional ways of life and dissolved the bonds of solidarity that had hitherto governed social relationships. This is the view expressed, e.g., in Karl Polanyi’s influential book The Great Transformation, in which the Christian socialist Polanyi contends that “disembedded” liberal markets would inevitably come to be governed again by the principles of solidarity and reciprocity (under socialism/communism). Similarly, although not technically a work on political economy or philosophy, Knut Hamsun’s 1917 novel Growth of the Soil perfectly captures the right’s rejection of liberal progress, materialism, industrialization, and the idealization of traditional bucolic life. The Norwegian Hamsun, winner of the 1920 Nobel Prize in Literature, later became an enthusiastic supporter of the Third Reich. 

Politically and culturally, Marxist historical materialism posited that liberal democracy (individual freedoms, periodic elections, etc.) and liberal culture (literature, art, cinema) served the interests of the economically dominant class: the bourgeoisie, i.e., the owners of the means of production. Fascists and Nazis likewise deplored liberal democracy as a sign of decadence and weakness and viewed liberal culture as an oxymoron: a hotbed of degeneracy built on the dilution of national and racial identities. 

Economically, the more theoretically robust leftist critiques rallied around Marx’s scientific socialism, which held that capitalism—the economic system that served as the embodiment of a liberal social order built on private property, contract, and competition—was exploitative and doomed to consume itself. From the right, it was argued that liberalism enabled individual interest to override what was good for the collective—an unpardonable sin in the eyes of an ideology built around robust nodes of collectivist identity, such as nation, race, and history.

A Recurrent Civilizational Struggle

The rise of socialism and fascism marked the beginning of a civilizational shift that many have referred to as the lowest ebb of liberalism. By the 1930s, totalitarian regimes utterly incompatible with a liberal worldview were in place in several European countries, such as Italy, Russia, Germany, Portugal, Spain, and Romania. As Austrian economist Ludwig von Mises lamented, liberals and liberal ideas—at least, in the classical sense—had been driven to the fringes of society and academia, objects of scorn and ridicule. Even the liberally oriented, like economist John Maynard Keynes, were declaring the “end of laissez-faire.” 

At its most basic level, I believe that the conflict can be understood, from a philosophical perspective, as an iteration of the recurrent struggle between individualism and collectivism.

For instance, the German sociologist Ferdinand Tönnies described the perennial tension between two elementary ways of conceiving the social order: Gesellschaft and Gemeinschaft. Gesellschaft refers to societies made up of individuals held together by formal bonds, such as contracts, whereas Gemeinschaft refers to communities held together by organic bonds, such as kinship, which function together as parts of an integrated whole. American law professor David Gerber explains that, from the Gemeinschaft perspective, competition was seen as an enemy:

Gemeinschaft required co-operation and the accommodation of individual interests to the commonwealth, but competition, in contrast, demanded that individuals be concerned first and foremost with their own self-interest. From this communitarian perspective, competition looked suspiciously like exploitation. The combined effect of competition and of political and economic inequality was that the strong would get stronger, the weak would get weaker, and the strong would use their strength to take from the weak.

Tönnies himself thought that dominant liberal notions of Gesellschaft would inevitably give way to greater integration of a socialist Gemeinschaft. This was somewhat reminiscent of Polanyi’s distinction between embedded and disembedded markets; Karl Popper’s “open” and “closed” societies; and possibly, albeit somewhat more remotely, David Hume’s distinction between “concord” and “union.” While we should be wary of reductivism, a common theme underlying these works (at least two of which are not liberal) is the conflict between opposing views of society: one that posits the subordination of the individual to some larger community or group versus another that anoints the individual’s well-being as the ultimate measure of the value of social arrangements. That basic tension, in turn, reverberates across social and economic questions, including as they relate to markets, competition, and the functions of the state.

Competition Under Marxism

Karl Marx argued that the course of history was determined by material relations among the social classes under any given system of production (historical materialism and dialectical materialism, respectively). Under that view, communism was not a desirable “state of affairs,” but the inevitable consequence of social forces as they then existed. As Marx and Friedrich Engels wrote in The Communist Manifesto:

Communism is for us not a state of affairs which is to be established, an ideal to which reality [will] have to adjust itself. We call communism the real movement which abolishes the present state of things. The conditions of this movement result from the premises now in existence.

Thus, following the ineluctable laws of history, which Marx claimed to have discovered, capitalism would inevitably come to be replaced by socialism and, subsequently, communism. Under socialism, the means of production would be controlled not by individuals interacting in a free market, but by the political process under the aegis of the state, with the corollary that planning would come to substitute for competition as the economy’s steering mechanism. This would then give way to communism: a stateless utopia in which everything would be owned by the community and where there would be no class divisions. This would come about as a result of the interplay of several factors inherent to capitalism, such as the exploitation of the working class and the impossibility of sustained competition.

Per Marx, under capitalism, owners of the means of production (i.e., the capitalists or the bourgeoisie) appropriate the surplus value (i.e., the difference between the sale price of a product and the cost to produce it) generated by workers. Thus, the lower the wages and the longer the working hours of the worker, the greater the profit accrued to the capitalist. This was not an unfortunate byproduct that could be reformed, Marx posited, but a central feature of the system that was solvable only through revolution. Moreover, the laws, culture, media, politics, faith, and other institutions that might ordinarily open alternative avenues to nonviolent resolution of class tensions (the “super-structure”) were themselves byproducts of the underlying material relations of production (“structure” or “base”), and thus served to justify and uphold them.

The Marxian position further held that competition—the lodestar and governing principle of the capitalist economy—was, like the system itself, unsustainable. It would inevitably end up cannibalizing itself. But the claim is a bit more subtle than critics of communism often assume. As Leon Trotsky wrote in the 1939 pamphlet Marxism in Our Time:

Relations between capitalists, who exploit the workers, are defined by competition, which for long endures as the mainspring of capitalist progress.

Two notions embedded in Trotsky’s statement need to be understood about the Marxian perception of competition. The first is that, since capitalism is exploitative of workers and competition among capitalists is the engine of capitalism, competition is itself effectively a mechanism of exploitation. Capitalists compete through the cheapening of commodities and the subsequent reinvestment of the surplus appropriated from labor into the expansion of productivity. The most exploitative capitalist, therefore, generally has the advantage (this hinges, of course, largely on the validity of the labor theory of value).

At the same time, however, Marxists (including Marx himself) recognized the economic and technological progress brought about through capitalism and competition. This is what Trotsky means when he refers to competition as the “mainspring of capitalist progress” and, by extension, the “historical justification of the capitalist.” The implication is that, if competition were to cease, the entire capitalist edifice and the political philosophy undergirding it (liberalism) would crumble, as well.

Whereas liberalism and competition were intertwined, liberalism and monopoly could not coexist. Instead, monopolists demanded—and, due to their political clout, were able to obtain—an increasingly powerful central state capable of imposing protective tariffs and other measures for their benefit and protection. Trotsky again:

The elimination of competition by monopoly marks the beginning of the disintegration of capitalist society. Competition was the creative mainspring of capitalism and the historical justification of the capitalist. By the same token the elimination of competition marks the transformation of stockholders into social parasites. Competition had to have certain liberties, a liberal atmosphere, a regime of democracy, of commercial cosmopolitanism. Monopoly needs as authoritative government as possible, tariff walls, “its own” sources of raw materials and arenas of marketing (colonies). The last word in the disintegration of monopolistic capital is fascism.

Marxian theory posited that this outcome was destined to happen for two reasons. First, because:

The battle of competition is fought by cheapening of commodities. The cheapness of commodities depends, ceteris paribus, on the productiveness of labor, and this again on the scale of production. Therefore, the larger capital beats the smaller.

In other words, competition stimulated the progressive development of productivity, which depended on the scale of production, which depended, in turn, on firm size. Ultimately, therefore, competition ended up producing a handful of large companies that would subjugate competitors and cannibalize competition. Thus, the more wealth that capitalism generated—and Marx had no doubts that capitalism was a wealth-generating machine—the more it sowed the seeds of its own destruction. Hence:

While stimulating the progressive development of technique, competition gradually consumes, not only the intermediary layers but itself as well. Over the corpses and the semi-corpses of small and middling capitalists, emerges an ever-decreasing number of ever more powerful capitalist overlords. Thus, out of “honest”, “democratic”, “progressive” competition grows irrevocably “harmful”, “parasitic”, “reactionary” monopoly.

The second reason Marxists believed the downfall of capitalism was inevitable is that the capitalists squeezed out of the market by the competitive process would become proletarians, which would create a glut of labor (“a growing reserve army of the unemployed”), which would in turn depress wages. This process of proletarianization, combined with the “revolutionary combination by association” of workers in factories would raise class consciousness and ultimately lead to the toppling of capitalism and the ushering in of socialism.

Thus, there is a clear nexus in Marxian theory between the end of competition and the end of capitalism (and therefore liberalism), whereby monopoly is deduced from the inherent tendencies of capitalism, and the end of capitalism, in turn, is deduced from the ineluctable advent of monopoly. What follows (i.e., socialism and communism) are collectivist systems that purport to be run according to the principles of solidarity and cooperation (“from each according to his abilities, to each according to his needs”), where there is therefore no place (and no need) for competition. Instead, the Marxian Gemeinschaft would organize the economy along rationalistic lines, substituting centralized command by the state (later, the community) for cut-throat competition, reining in hitherto uncontrollable economic forces in a heroic victory over the chaos and unpredictability of capitalism. This would, of course, also bring about the end of liberalism, with individualism, private property, and other liberal freedoms jettisoned as mouthpieces of bourgeois class interests. Chairman Mao Zedong put it succinctly:

We must affirm anew the discipline of the Party, namely:

1. The individual is subordinate to the organization;

2. The minority is subordinate to the majority.

Competition Under Fascism/Nazism

Formidable as it was, the Marxian attack on liberalism was just one side of the coin. Decades after the articulation of Marxian theory in the mid-19th century, fascism—founded by former socialist Benito Mussolini in 1915—emerged as a militant alternative to both liberalism and socialism/communism.

In essence, fascism was, like communism, unapologetically collectivist. But whereas socialists considered class to be the relevant building block of society, fascists viewed the individual as part of a greater national, racial, and historical entity embodied in the state and its leadership. As Mussolini wrote in his 1932 pamphlet The Doctrine of Fascism:

Anti-individualistic, the Fascist conception of life stresses the importance of the State and accepts the individual only in so far as his interests coincide with those of the State, which stands for the conscience and the universal will of man as a historic entity. It is opposed to classical liberalism […] liberalism denied the State in the name of the individual; Fascism reasserts the rights of the State.

Accordingly, fascism leads to an amalgamation of state and individual that is not just a politico-economic arrangement where the latter formally submits to the former, but a conception of life. This worldview is, of course, diametrically opposed to core liberal principles, such as personal freedom, individualism, and the minimal state. And surely enough, fascists saw these liberal values as signs of civilizational decadence (as expressed most notably by Oswald Spengler in The Decline of the West—a book that greatly inspired Nazi ideology). Instead, they posited that the only freedom worthy of the name existed within the state; that peace and cosmopolitanism were illusory; and that man was man only by virtue of his membership and contribution to nation and race.

But fascism was also opposed to Marxian socialism. At its most basic, the schism between the two worldviews can be understood in terms of the fascist rejection of materialism, which was a centerpiece of Marxian thought. Fascists denied the equivalence of material well-being and happiness, instead viewing man as fulfilled by hardship, war, and by playing his part in the grand tapestry of history, whose real protagonists were nation-states. While admitting the importance of economic life—e.g., of efficiency and technological innovation—fascists denied that material relations unequivocally determined the course of history, insisting instead on the preponderance of spiritual and heroic acts (i.e., acts with no economic motive) as drivers of social change. “Sanctity and heroism,” Mussolini wrote, are at the root of the fascist belief system, not material self-interest.  

This belief system also extended to economic matters, including competition. The Third Reich respected private property rights to some degree—among other reasons, because Adolf Hitler believed it would encourage creative competition and innovation. The Nazis’ overarching principle, however, was that all economic activity and all private property ultimately be subordinated to the “common good,” as interpreted by the state. In the words of Hitler:

I want everyone to keep what he has earned subject to the principle that the good of the community takes priority over that of the individual. But the State should retain control; every owner should feel himself to be an agent of the State. […] The Third Reich will always retain the right to control property owners.

The solution was a totalitarian system of government control that maintained private enterprise and profit incentives as spurs to efficient management, but narrowly circumscribed the traditional freedom of entrepreneurs. Economic historians Christoph Buchheim and Jonas Scherner have characterized the Nazis’ economic system as a “state-directed private ownership economy,” a partnership in which the state was the principal and the business was the agent. Economic activity would be judged according to the criteria of “strategic necessity and social utility,” encompassing an array of social, political, practical, and ideological goals. Some have referred to this as the “primacy of politics over economics” approach.

For instance, in supervising cross-border acquisitions (today’s mergers), the state “sought to suppress purely economic motives and to substitute some rough notion of ‘racial political’ priority when supervising industrial acquisitions or controlling existing German subsidiaries.” The Reich selectively applied the 1933 Act for the Formation of Compulsory Cartels to regulate cartels that had been formed under the Weimar Republic’s Cartel Act of 1923. But the legislation also appears to have been applied to protect small and medium-sized enterprises, an important source of the party’s political support, from ruinous competition. This is reminiscent of German industrialist and Nazi supporter Gustav Krupp’s “Third Form”:

Between “free” economy and state capitalism there is a third form: the economy that is free from obligations, but has a sense of inner duty to the state. 

In short, competition and individual achievement had to be balanced with cooperation, mediated by the self-appointed guardians of the “general interest.” In contrast with Marxian socialism/communism, the long-term goal of the Nazi regime was not to abolish competition, but to harness it to serve the aims of the regime. As Franz Böhm—cofounder, with Walter Eucken, of the Freiburg School and its theory of “ordoliberalism”—wrote in his advice to the Nazi government:

The state regulatory framework gives the Reich economic leadership the power to make administrative commands applying either the indirect or the direct steering competence according to need, functionality, and political intent. The leadership may go as far as it wishes in this regard, for example, by suspending competition-based economic steering and returning to it when appropriate. 


After a century of expansion, opposition to classical liberalism started to coalesce around two nodes: Marxism on the left, and fascism/Nazism on the right. What ensued was a civilizational crisis of material, social, and spiritual proportions that, at its most basic level, can be understood as an iteration of the perennial struggle between individualism and collectivism. On the one hand, liberals like J.S. Mill had argued forcefully that “the only freedom which deserves the name, is that of pursuing our own good in our own way.” In stark contrast, Mussolini wrote that “fascism stands for liberty, and for the only liberty worth having, the liberty of the state and of the individual within the state.” The former position is rooted in a humanist view that enshrines the individual at the center of the social order; the latter in a communitarian ideal that sees him as subordinate to forces that supersede him.

As I have explained in the previous post, the philosophical undercurrents of both positions are ancient. A more immediate precursor of the collectivist standpoint, however, can be found in German idealism and particularly in Georg Wilhelm Friedrich Hegel. In The Philosophy of Right, he wrote:

A single person, I need hardly say, is something subordinate, and as such he must dedicate himself to the ethical whole. Hence, if the state claims life, the individual must surrender it. All the worth which the human being possesses […] he possesses only through the state.

This broader clash is reflected, directly and indirectly, in notions of competition and competition regulation. Classical liberals sought to liberate competition from regulatory fetters. Marxism “predicted” its downfall and envisioned a social order without it. Fascism/Nazism sought to wrest it from the hands of greedy self-interest and mold it to serve the many, fluctuating objectives of the state and its vision of the common good.

In the next post, I will discuss how this has influenced the neoliberal philosophy that is still at the heart of many competition systems today. I will argue that two strands of neoliberalism emerged, each of which attempted to resolve the challenge of collectivism in its own way.

One strand, associated with a continental understanding of liberalism and epitomized by the Freiburg School, sought to strike a “mostly liberal” compromise between liberalism and collectivism—a “Third Way” between opposites. In doing so, however, it may have indulged in some of the same collectivist vices that it initially sought to avoid, such as vast government discretion and the imposition of myriad “higher” goals on society.

The other strand, represented by Anglo-American liberalism of the sort espoused by Friedrich Hayek and Milton Friedman, was less conciliatory. It attempted to reform, rather than reinvent, liberalism. Their prescriptions involved creating a strong legal framework conducive to economic efficiency against a background of limited government discretion, freedom, and the rule of law.

Questions concerning the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable despite their unrealistic assumptions. Kenneth Arrow and Gérard Debreu’s highly theoretical work on General Equilibrium Theory is widely acknowledged as one of the most important achievements of modern economics.

But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.

This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.

Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.

Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.

The Fable of the Bees

Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.

The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers, and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure: bees fly where they please, and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both orchards and apiaries. This led James Meade to conclude:

[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.

A finding echoed by Francis Bator:

If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.

It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?

The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2–3 km). This left economic agents with ample scope to prevent free-riding.

Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:

Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.

But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:

Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.

In short, not only did the bee/orchard externality model fail, but it failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. The bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.

The Lighthouse

Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.

Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is mostly impossible to determine whether a ship has paid fees and to turn off the lighthouse if that is not the case). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:

Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.

He added that:

[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.

More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.

What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:

[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.

In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.

Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been larger still if government-run “Trinity House” lighthouses had not crowded out private investment:

The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.

Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on this arrangement. It also entailed some level of market power: the ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). It is worth noting, though, that tying port fees and light dues might also have decreased double marginalization, to the benefit of sailors.

Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:

Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?

However, as Coase explained, light fees represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:

[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.

Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and lack of competition (and the information it generates) tend to stem from the latter.

Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.

The Tragedy of the Commons

Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.

The most famous formulation of this problem is Garrett Hardin’s highly influential (over 47,000 citations) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:

The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.

In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and fears of global overpopulation.
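
The incentive at work can be made concrete with a stylized payoff comparison (my own illustration; the symbols n, v, and c are assumptions for exposition, not Hardin’s own notation):

```latex
% Stylized sketch of Hardin's herdsman problem (illustrative only).
%   n : number of herdsmen sharing the commons
%   v : private benefit to an owner from adding one more animal
%   c : total grazing cost that the extra animal imposes on the herd
% The cost is spread over everyone, so the owner's marginal payoff is
\[
  \Delta\pi_{\text{owner}} \;=\; v - \frac{c}{n},
\]
% which remains positive whenever $c < nv$, even in the collectively
% harmful range where $c > v$. Each herdsman therefore keeps adding
% animals, and the commons is depleted.
```

The sketch makes plain why the externality is described as “unpriced”: each owner internalizes only a 1/n share of the cost he creates, so private and social incentives diverge as n grows.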

Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:

The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.

As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often found ways to markedly mitigate these potential externalities. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.

Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard-essential patent industry.

These bottom-up solutions are certainly not perfect. Many common institutions fail—for example, Elinor Ostrom documents several problematic fisheries, groundwater basins and forests, although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:

Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:

Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.

In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: what works better in each case, government intervention, propertization, or emergent rules and norms?

More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:

The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.

Dvorak Keyboards

In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history became a dominant narrative in the field of network economics, informing works by Joseph Farrell & Garth Saloner, and by Jean Tirole.

The basic claim was that QWERTY users’ reluctance to switch to the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:

Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]

Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.

Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard-layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard-layout market. They almost entirely rejected the notion that QWERTY prevailed despite being the inferior standard:

Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.

In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.

Killzones, Zoom, and TikTok

If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.

For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:

If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.

Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence to support the contention that it occurs in real-world settings. Admittedly, the paper does present evidence of reduced venture capital investments after mergers involving large tech firms. But even on their own terms, this data simply does not support the authors’ behavioral assumption.

And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).

But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I have written previously:

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples, including the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline. In all these cases, outcomes do not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.

In Conclusion

My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.

In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.

For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.

Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.

Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.

All of this raises a question that deserves far more attention than it currently receives in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, the European Union, and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.

This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate those problems. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.

The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:

This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].

The European Court of Justice issued its long-awaited ruling Dec. 9 in the Groupe Canal+ case. The case centered on licensing agreements in which Paramount Pictures granted absolute territorial exclusivity to several European broadcasters, including Canal+.

Back in 2015, the European Commission charged six U.S. film studios, including Paramount, as well as British broadcaster Sky UK Ltd., with illegally limiting access to content. The crux of the EC’s complaint was that the contractual agreements to limit cross-border competition for content distribution ran afoul of European Union competition law. Paramount ultimately settled its case with the commission and agreed to remove the problematic clauses from its contracts. This affected third parties like Canal+, which lost valuable contractual protections.

While the ECJ ultimately upheld the agreements on what amounts to procedural grounds (Canal+ was unduly affected by a decision to which it was not a party), the case provides yet another example of the European Commission’s misguided stance on absolute territorial licensing, sometimes referred to as “geo-blocking.”

The EC’s long-running efforts to restrict geo-blocking emerge from its attempts to harmonize trade across the EU. Notably, in its Digital Single Market initiative, the Commission envisioned

[A] Digital Single Market is one in which the free movement of goods, persons, services and capital is ensured and where individuals and businesses can seamlessly access and exercise online activities under conditions of fair competition, and a high level of consumer and personal data protection, irrespective of their nationality or place of residence.

This policy stance has been endorsed consistently by the European Court of Justice. In the 2011 Murphy decision, for example, the court held that agreements between rights holders and broadcasters infringe European competition law when they categorically prevent the latter from supplying “decoding devices” to consumers located in other member states. More precisely, while rights holders can license their content on a territorial basis, they cannot restrict so-called “passive sales”; broadcasters can be prevented from actively chasing consumers in other member states, but not from serving them altogether. If this sounds Kafkaesque, it’s because it is.

The problem with the ECJ’s vision is that it elides the complex factors that underlie a healthy free-trade zone. Geo-blocking is frequently misunderstood or derided by consumers as an unwarranted restriction on their consumption preferences. It doesn’t feel “fair” or “seamless” when a rights holder can decide who can access their content and on what terms. But that doesn’t mean geo-blocking is a nefarious or socially harmful practice. Quite the contrary: allowing creators to offer different distribution options generates both a return to creators and more choice for consumers.

In economic terms, geo-blocking allows rights holders to engage in third-degree price discrimination; that is, they have the ability to charge different prices for different sets of consumers. This type of pricing will increase total welfare so long as it increases output. As Hal Varian puts it:

If a new market is opened up because of price discrimination—a market that was not previously being served under the ordinary monopoly—then we will typically have a Pareto improving welfare enhancement.
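Varian’s output condition can be made concrete with a toy numerical model. Everything below is a hypothetical sketch, not data about any real market: two consumer segments with assumed linear demand curves and an assumed constant marginal cost, where a uniform price leaves the low-value segment unserved while segment-specific pricing opens it up.

```python
# Toy numerical model of third-degree price discrimination. The demand
# curves and the marginal cost are hypothetical assumptions chosen only
# to illustrate Varian's output condition; they describe no real market.

def demand_a(p):
    """High-value segment: q = 10 - p."""
    return max(0.0, 10 - p)

def demand_b(p):
    """Low-value segment: q = 4 - p."""
    return max(0.0, 4 - p)

MC = 2.0  # constant marginal cost

def consumer_surplus(demand, p):
    # With a demand slope of -1, surplus is the triangle q^2 / 2.
    q = demand(p)
    return 0.5 * q * q

def best_price(profit):
    # Brute-force search over a fine price grid.
    return max((i / 100 for i in range(1001)), key=profit)

# Uniform pricing: one price must serve both segments.
uniform_p = best_price(lambda p: (p - MC) * (demand_a(p) + demand_b(p)))
uniform_q = demand_a(uniform_p) + demand_b(uniform_p)
uniform_w = ((uniform_p - MC) * uniform_q
             + consumer_surplus(demand_a, uniform_p)
             + consumer_surplus(demand_b, uniform_p))

# Discrimination: each segment gets its own price.
pa = best_price(lambda p: (p - MC) * demand_a(p))
pb = best_price(lambda p: (p - MC) * demand_b(p))
disc_q = demand_a(pa) + demand_b(pb)
disc_w = ((pa - MC) * demand_a(pa) + (pb - MC) * demand_b(pb)
          + consumer_surplus(demand_a, pa) + consumer_surplus(demand_b, pb))

print(f"uniform price {uniform_p}: output {uniform_q}, welfare {uniform_w}")
print(f"segment prices {pa} / {pb}: output {disc_q}, welfare {disc_w}")
```

Under these assumed numbers, the uniform monopolist prices the low-value segment out of the market entirely, while the discriminating monopolist serves it at a lower price: output rises, the high-value segment is left untouched, and both the firm and the new consumers gain, which is precisely the Pareto-improving case Varian describes.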

Another benefit of third-degree price discrimination is that, by shifting some economic surplus from consumers to firms, it can stimulate investment in much the same way copyright and patents do. Put simply, the prospect of greater economic rents increases the maximum investment firms will be willing to make in content creation and distribution.

For these reasons, respecting parties’ freedom to license content as they see fit is likely to produce much more efficient outcomes than annulling those agreements through government-imposed “seamless access” and “fair competition” rules. Part of the value of copyright law is in creating space to contract by protecting creators’ property rights. Without geo-blocking, the enforcement of licensing agreements would become much more difficult. Laws restricting copyright owners’ ability to contract freely reduce allocational efficiency, as well as the incentives to create in the first place. Further, when individual creators have commercial and creative autonomy, they gain a degree of predictability that can ensure they will continue to produce content in the future. 

The European Union would do well to adopt a more nuanced understanding of the contractual relationships between producers and distributors. 

In the hands of a wise philosopher-king, the Sherman Act’s hard-to-define prohibitions of “restraints of trade” and “monopolization” are tools that will operate inevitably to advance the public interest in competitive markets. In the hands of real-world litigators, regulators and judges, those same words can operate to advance competitors’ private interests in securing commercial advantages through litigation that could not be secured through competition in the marketplace. If successful, this strategy may yield outcomes that run counter to antitrust law’s very purpose.

The antitrust lawsuit filed by Epic Games against Apple in August 2020, and Apple’s antitrust lawsuit against Qualcomm (settled in April 2019), suggest that antitrust law is heading in this unfortunate direction.

From rent-minimization to rent-maximization

The first step in converting antitrust law from an instrument to minimize rents to an instrument to maximize rents lies in expanding the statute’s field of application on the apparently uncontroversial grounds of advancing the public interest in “vigorous” enforcement. In surprisingly short order, this largely unbounded vision of antitrust’s proper scope has become the dominant fashion in policy discussions, at least as expressed by some legislators, regulators, and commentators.

According to the new conventional wisdom, antitrust law has, over the past several decades, pursued an overly narrow path, consequently overlooking and exacerbating a panoply of social ills that extend well beyond its mission to “merely” protect the operation of the market pricing mechanism. This line of argument is typically coupled with the assertion that courts, regulators and scholars have been led down this path by incumbents that welcome the relaxed scrutiny of a purportedly deferential antitrust policy.

This argument, and the related theory of regulatory capture, has things roughly backwards.

Placing antitrust law at the service of a largely undefined range of social purposes set by judicial and regulatory fiat threatens to render antitrust a tool that can be easily deployed to favor the private interests of competitors rather than the public interest in competition. Without the intellectual discipline imposed by the consumer welfare standard (and, outside of per se illegal restraints, operationalized through the evidentiary requirement of competitive harm), the rhetoric of antitrust provides excellent cover for efforts to re-engineer the rules of the game in lieu of seeking to win the game as it has been played.

Epic Games v. Apple

A nascent symptom of this expansive form of antitrust is provided by the much-publicized lawsuit brought by Epic Games, the maker of the wildly popular video game Fortnite, against Apple, the operator of the even more wildly popular App Store. On August 13, 2020, Epic added a “direct” payment-processing option to its Fortnite game, in violation of the developer terms of use that govern the App Store. In response, Apple exercised its contractual right to remove Fortnite from the App Store, triggering Epic’s antitrust suit. The same sequence has ensued between Epic Games and Google in connection with the Google Play Store. Both litigations are best understood as breach-of-contract disputes cloaked in the guise of antitrust causes of action.

In suggesting that a jury trial would be appropriate in Epic Games’ suit against Apple, the district court judge reportedly stated that the case is “on the frontier of antitrust law” and that “[i]t is important enough to understand what real people think.” That statement seems to suggest that this is a close case under antitrust law. I respectfully disagree. Based on currently available information and applicable law, Epic’s argument suffers from two serious vulnerabilities that would seem to be difficult for the plaintiff to overcome.

A contestably narrow market definition

Epic states three related claims: (1) Apple has a monopoly in the relevant market, defined as the App Store, (2) Apple maintains its monopoly by contractually precluding developers from distributing iOS-compatible versions of their apps outside the App Store, and (3) Apple maintains a related monopoly in the payment processing services market for the App Store by contractually requiring developers to use Apple’s processing service.

This market definition, and the associated chain of reasoning, is subject to significant doubt, both as a legal and factual matter.

Epic’s narrow definition of the relevant market as the App Store (rather than app distribution platforms generally) conveniently results in a 100% market share for Apple. Inconveniently, federal case law is generally reluctant to adopt single-brand market definitions. While the Supreme Court recognized a single-brand market in its 1992 decision in Eastman Kodak Co. v. Image Technical Services, the case is widely considered to be an outlier in light of subsequent case law. As a federal district court observed in Spahr v. Leegin Creative Leather Products (E.D. Tenn. 2008): “Courts have consistently refused to consider one brand to be a relevant market of its own when the brand competes with other potential substitutes.”

The App Store would seem to fall into this typical category. The customer base of existing and new Fortnite users can still access the game through multiple platforms and on multiple devices other than the iPhone, including a PC, laptop, game console, and non-Apple mobile devices. (While Google has also removed Fortnite from the Google Play store due to the added direct payment feature, users can, at some inconvenience, install the game manually on Android phones.)

Given these alternative distribution channels, it is at a minimum unclear whether Epic is foreclosed from reaching a substantial portion of its consumer base, which may already access the game on alternative platforms or could potentially do so at moderate incremental transaction costs. In the language of platform economics, it appears to be technologically and economically feasible for the target consumer base to “multi-home.” If multi-homing and related switching costs are low, even a 100% share of the App Store submarket would not translate into market power in the broader and potentially more economically relevant market for app distribution generally.

An implausible theory of platform lock-in

Even if it were conceded that the App Store is the relevant market, Epic’s claim is not especially persuasive, both as an economic and a legal matter. That is because there is no evidence that Apple is exploiting any such hypothetically attributed market power to increase the rents extracted from developers and indirectly impose deadweight losses on consumers.

In the classic scenario of platform lock-in, a three-step sequence is observed: (1) a new firm acquires a high market share in a race for platform dominance, (2) the platform winner is protected by network effects and switching costs, and (3) the entrenched platform “exploits” consumers by inflating prices (or imposing other adverse terms) to capture monopoly rents. This economic model is reflected in the case law on lock-in claims, which typically requires that the plaintiff identify an adverse change by the defendant in pricing or other terms after users were allegedly locked in.

The history of the App Store does not conform to this model. Apple has always assessed a 30% fee, and the same is true of every other leading distributor of games for the mobile and PC markets, including the Google Play Store, the App Store’s rival in the mobile market, and Steam, the dominant distributor of video games in the PC market. This long-standing market practice suggests that the 30% fee most likely reflects an efficiency-driven business rationale, rather than an effort to entrench a monopoly position that Apple did not enjoy when the practice was first adopted. That is: even if Apple is deemed to be a “monopolist” for Section 2 purposes, it is not taking any “illegitimate” actions that could constitute monopolization or attempted monopolization.

The logic of the 70/30 split

Uncovering the business logic behind the 70/30 split in the app distribution market is not too difficult.

The 30% fee appears to be a low transaction-cost practice that enables the distributor to fund a variety of services, including app development tools, marketing support, and security and privacy protections, all of which are supplied without separately priced fees and therefore do not require service-by-service negotiation and renegotiation. The same rationale credibly applies to the integrated payment-processing services that Apple supplies for purposes of in-app purchases.

These services deliver significant value and would otherwise be difficult to replicate cost-effectively, protect the App Store’s valuable stock of brand capital (which yields positive spillovers for app developers on the site), and lower the costs of joining and participating in the App Store. Additionally, the 30% fee cross-subsidizes the delivery of these services to the approximately 80% of apps on the App Store that are ad-based and for which no fee is assessed, which in turn lowers entry costs and expands the number and variety of product options for platform users. These would all seem to be attractive outcomes from a competition policy perspective.

Epic’s objection

Epic would object to this line of argument by observing that it only charges a 12% fee to distribute other developers’ games on its own Epic Games Store.

Yet Epic’s lower fee is reportedly conditioned, at least in some cases, on the developer offering the game exclusively on the Epic Games Store for a certain period of time. Moreover, the services provided on the Epic Games Store may not be comparable to the extensive suite of services provided on the App Store and other leading distributors that follow the 30% standard. Additionally, the user base a developer can expect to access through the Epic Games Store is in all likelihood substantially smaller than the audience that can be reached through the App Store and other leading app and game distributors, which is then reflected in the higher fees charged by those platforms.

Hence, even the large fee differential may simply reflect the higher services and larger audiences available on the App Store, Google Play Store and other leading platforms, as compared to the Epic Games Store, rather than the unilateral extraction of market rents at developers’ and consumers’ expense.

Antitrust is about efficiency, not distribution

Epic says the standard 70/30 split between game publishers and app distributors is “excessive” while others argue that it is historically outdated.

Neither of these is a credible antitrust argument. Renegotiating the division of economic surplus between game suppliers and distributors is not the concern of antitrust law, which (as properly defined) should only take an interest if either (i) Apple is colluding on the 30% fee with other app distributors, or (ii) Apple is taking steps that preclude entry into the app distribution market and lack any legitimate business justification. No one claims there is evidence of the former possibility and, without further evidence, the latter possibility is not especially compelling given the uniform use of the 70/30 split across the industry (which, as noted, can be derived from a related set of credible efficiency justifications). It is even less compelling in the face of evidence that output is rapidly accelerating, not declining, in the gaming app market: in the first half of 2020, approximately 24,500 new games were added to the App Store.

If this conclusion is right, then Epic’s lawsuit against Apple does not seem to have much to do with the public interest in preserving market competition.

But it clearly has much to do with the business interest of an input supplier in minimizing its distribution costs and maximizing its profit margin. That category includes not only Epic Games but Tencent, the world’s largest video game publisher and the holder of a 40% equity stake in Epic. Tencent also owns Riot Games (the publisher of “League of Legends”), an 84% stake in Supercell (the publisher of “Clash of Clans”), and a 5% stake in Activision Blizzard (the publisher of “Call of Duty”). It is unclear how an antitrust claim that, if successful, would simply redistribute economic value from leading game distributors to leading game developers has any necessary relevance to antitrust’s objective to promote consumer welfare.

The prequel: Apple v. Qualcomm

Ironically, as Dirk Auer has similarly observed, there is a symmetry between Epic’s claims against Apple and the claims previously pursued by Apple (and, concurrently, the Federal Trade Commission) against Qualcomm.

In that litigation, Apple contested the terms of the licensing arrangements under which Qualcomm made available its wireless communications patents to Apple (more precisely, Foxconn, Apple’s contract manufacturer), arguing that the terms were incompatible with Qualcomm’s commitment to “fair, reasonable and nondiscriminatory” (“FRAND”) licensing of its “standard-essential” patents (“SEPs”). Like Epic v. Apple, Apple v. Qualcomm was fundamentally a contract dispute, with the difference that Apple was in the position of a third-party beneficiary of the commitment that Qualcomm had made to the governing standard-setting organization. Like Epic, Apple sought to recharacterize this contractual dispute as an antitrust question, arguing that Qualcomm’s licensing practices constituted anticompetitive actions to “monopolize” the market for smartphone modem chipsets.

Theory meets evidence

The rhetoric used by Epic in its complaint echoes the rhetoric used by Apple in its briefs and other filings in the Qualcomm litigation. Apple (like the FTC) had argued that Qualcomm imposed a “tax” on competitors by requiring that any purchaser of Qualcomm’s chipsets concurrently enter into a license for Qualcomm’s SEP portfolio relating to 3G and 4G/LTE-enabled mobile communications devices.

Yet the history and performance of the mobile communications market simply did not track Apple’s (and the FTC’s continuing) characterization of Qualcomm’s licensing fee as a socially costly drag on market growth and, by implication, consumer welfare.

If this assertion had merit, then the decades-old wireless market should have exhibited a dismal history of increasing prices, slow user adoption and lagging innovation. In actuality, the wireless market has grown steadily since its inception, characterized by declining quality-adjusted prices, expanding output, relentless innovation, and rapid adoption across a broad range of income segments.

Given this compelling real-world evidence, the only remaining line of argument (still being pursued by the FTC) that could justify antitrust intervention is a theoretical conjecture that the wireless market might have grown even faster under some alternative IP licensing arrangement. This assertion rests precariously on the speculative assumption that any such arrangement would have induced the same or higher level of aggregate investment in innovation and commercialization activities. That fragile chain of “what if” arguments hardly seems a sound basis on which to rewrite the legal infrastructure behind the billions of dollars of licensing transactions that support the economically thriving smartphone market and the even larger ecosystem that has grown around it.

Antitrust litigation as business strategy

Given the absence of compelling evidence of competitive harm from Qualcomm’s allegedly anticompetitive licensing practices, Apple’s litigation would seem to be best interpreted as an economically rational attempt by a downstream producer to renegotiate a downward adjustment in the fees paid to an upstream supplier of critical technology inputs. (In fact, those are precisely the terms on which Qualcomm in 2015 settled the antitrust action brought against it by China’s competition regulator, to the obvious benefit of local device producers.) The Epic Games litigation is a mirror image fact pattern in which an upstream supplier of content inputs seeks to deploy antitrust law strategically for the purposes of minimizing the fees it pays to a leading downstream distributor.

Both litigations suffer from the same flaw. Private interests concerning the division of an existing economic value stream—a business question that is a matter of indifference from an efficiency perspective—are erroneously (or, at least, reflexively) conflated with the public interest in preserving the free play of competitive forces that maximizes the size of the economic value stream.

Conclusion: Remaking the case for “narrow” antitrust

The Epic v. Apple and Apple v. Qualcomm disputes illustrate the unproductive rent-seeking outcomes to which antitrust law will inevitably be led if, as is widely advocated, it is decoupled from its well-established foundation in promoting consumer welfare (not competitor welfare).

Some proponents of a more expansive approach to antitrust enforcement are convinced that expanding the law’s scope of application will improve market efficiency by providing greater latitude for expert regulators and courts to reengineer market structures to the public benefit. Yet any substitution of top-down expert wisdom for the bottom-up trial-and-error process of market competition can easily yield “false positives” in which courts and regulators take actions that counterproductively intervene in markets that are already operating under reasonably competitive conditions. Additionally, an overly expansive approach toward the scope of antitrust law will induce private firms to shift resources toward securing advantages over competitors through lobbying and litigation, rather than seeking to win the race to deliver lower-cost and higher-quality products and services. Neither outcome promotes the public’s interest in a competitive marketplace.

Much has already been said about the twin antitrust suits filed by Epic Games against Apple and Google. For those who are not familiar with the cases, the game developer – most famous for its hit title Fortnite and the “Unreal Engine” that underpins much of the game (and movie) industry – is complaining that Apple and Google are thwarting competition from rival app stores and in-app payment processors. 

Supporters have been quick to see in these suits a long-overdue challenge against the 30% commissions that Apple and Google charge. Some have even portrayed Epic as a modern-day Robin Hood, leading the fight against Big Tech to the benefit of small app developers and consumers alike. Epic itself has been keen to stoke this image, comparing its litigation to a fight for basic freedoms in the face of Big Brother.

However, upon closer inspection, cracks rapidly appear in this rosy picture. What is left is a company engaging in blatant rent-seeking that threatens to harm the sprawling ecosystems that have emerged around both Apple and Google’s app stores.

Two issues are particularly salient. First, Epic is trying to protect its own interests at the expense of the broader industry. If successful, its suit would merely lead to alternative revenue schemes that – although more beneficial to itself – would leave smaller developers to shoulder higher fees. Second, the fees that Epic portrays as extortionate were in fact key to the emergence of mobile gaming.

Epic’s utopia is not an equilibrium

Central to Epic’s claims is the idea that both Apple and Google: (i) thwart competition from rival app stores, and implement a series of measures that prevent developers from reaching gamers through alternative means (such as pre-installing apps, or sideloading them in the case of Apple’s platforms); and (ii) tie their proprietary payment processing services to their app stores. According to Epic, this ultimately enables both Apple and Google to extract “extortionate” commissions (30%) from app developers.

But Epic’s whole case is based on the unrealistic assumption that both Apple and Google will sit idly by while rival app stores and payment systems free-ride on the vast investments they have ploughed into their respective smartphone platforms. In other words, removing Apple and Google’s ability to charge commissions on in-app purchases would not prevent them from monetizing their platforms elsewhere.

Indeed, economic and strategic management theory tells us that so long as Apple and Google single-handedly control one of the necessary points of access to their respective ecosystems, they should be able to extract a sizable share of the revenue generated on their platforms. One can only speculate, but it is easy to imagine Apple and Google charging rival app stores for access to their respective platforms, or charging developers for access to critical APIs.

Epic itself seems to concede this point. A recent Verge article reported that Apple was threatening to cut off Epic’s access to iOS and Mac developer tools, which Apple currently offers at little to no cost:

Apple will terminate Epic’s inclusion in the Apple Developer Program, a membership that’s necessary to distribute apps on iOS devices or use Apple developer tools, if the company does not “cure your breaches” to the agreement within two weeks, according to a letter from Apple that was shared by Epic. Epic won’t be able to notarize Mac apps either, a process that could make installing Epic’s software more difficult or block it altogether. Apple requires that all apps are notarized before they can be run on newer versions of macOS, even if they’re distributed outside the App Store.

There is little to prevent Apple from more heavily monetizing these tools – should Epic’s antitrust case succeed in barring it from charging commissions via its app store.

All of this raises the question: why is Epic bringing a suit that, if successful, would merely result in the emergence of alternative fee schedules (as opposed to a significant reduction in the overall fees paid by developers)?

One potential answer is that the current system is highly favorable to small apps that earn little to no revenue from in-app purchases and that benefit most from the trust created by Apple and Google’s curation of their stores. It is, however, much less favorable to developers like Epic, which no longer require any curation to garner the necessary trust from consumers and which earn a large share of their revenue from in-app purchases.

In more technical terms, the fact that all in-game payments are made through Apple and Google’s payment processing enables both platforms to more easily price-discriminate. Unlike fixed fees (but just like royalties), percentage commissions are necessarily state-contingent (i.e. the same commission will lead to vastly different revenue depending on an underlying app’s success). The most successful apps thus contribute far more to a platform’s fixed costs. For instance, it is estimated that mobile games account for 72% of all app store spend. Likewise, more than 80% of the apps on Apple’s store pay no commission at all.

This likely expands app store output by getting lower-value developers on board. In that sense, it is akin to Ramsey pricing (under which a firm or utility expands social welfare by allocating a higher share of fixed costs to the most inelastic consumers). Unfortunately, this would be much harder to accomplish if high-value developers could easily bypass Apple or Google’s payment systems.
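The cross-subsidy logic can be sketched with invented numbers (the app names and revenue figures below are hypothetical illustrations, not App Store data): a percentage commission scales with each app’s success, so a zero-revenue app pays nothing and stays on the platform, whereas a flat fee raising the same total would push smaller developers out.

```python
# Illustrative sketch: state-contingent percentage commissions vs. a flat fee.
# App names and revenue figures are hypothetical, not real App Store data.

apps = {"hit_game": 1000.0, "mid_app": 60.0, "free_app": 0.0}  # in-app revenue

# A 30% commission is state-contingent: each app pays in proportion to success.
COMMISSION = 0.30
pct_fees = {name: COMMISSION * rev for name, rev in apps.items()}
stay_pct = [name for name, rev in apps.items() if rev - pct_fees[name] >= 0]

# A flat fee raising the same total revenue charges every app equally.
flat_fee = sum(pct_fees.values()) / len(apps)
stay_flat = [name for name, rev in apps.items() if rev - flat_fee >= 0]

print(f"commission: {len(stay_pct)} of {len(apps)} apps can afford to participate")
print(f"flat fee of {flat_fee:.0f}: only {stay_flat} can afford to stay")
```

With these assumed numbers, the commission collects 318 in total while keeping all three apps on the platform; a flat fee raising the same amount (106 per app) would drive both the mid-sized and the free app away, shrinking the store’s output in the way the Ramsey-pricing analogy suggests.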

The bottom line is that Epic appears to be fighting to change Apple and Google’s app store business models in order to obtain fee schedules that are better aligned with its own interests. This is all the more important for Epic Games, given that mobile gaming is becoming increasingly popular relative to other gaming mediums (also here).

The emergence of new gaming platforms

Up to this point, I have mostly presented a zero-sum view of Epic’s lawsuit – i.e. developers and platforms are fighting over the distribution of app store profits (though some smaller developers may lose out). But this ignores what is likely the chief virtue of Apple and Google’s “closed” distribution model. Namely, that it has greatly expanded the market for mobile gaming (and other mobile software), and will likely continue to do so in the future.

Much has already been said about the significant security and trust benefits that Apple and Google’s curation of their app stores (including their control of in-app payments) provide to users. Benedict Evans and Ben Thompson have both written excellent pieces on this very topic. 

In a nutshell, the closed model allows previously unknown developers to expand rapidly because (i) users do not have to fear that their apps contain some form of malware, and (ii) it greatly reduces payment frictions, most notably security-related ones. But while these are indeed tremendous benefits, another important upside seems to have gone relatively unnoticed.

The “closed” business model also gives Apple and Google (as well as other platforms) significant incentives to develop new distribution mediums (smart TVs spring to mind) and improve existing ones. In turn, this greatly expands the audience that software developers can reach. In short, developers get a smaller share of a much larger pie.

The economics of two-sided markets are enlightening in this respect. Apple and Google’s stores are what Armstrong and Wright (here and here) refer to as “competitive bottlenecks”. That is, they compete aggressively (amongst themselves, and with other gaming platforms) to attract exclusive users. They can then charge developers a premium to access those users (note, however, that in the case at hand the incidence of those platform fees is unclear).

This gives platforms significant incentives to continuously attract and retain new users. For instance, if Steve Jobs is to be believed, giving consumers better access to media such as eBooks, video and games was one of the driving forces behind the launch of the iPad.

This model of innovation would be seriously undermined if developers and consumers could easily bypass platforms (as Epic Games is seeking to do).

In response, some commentators have countered that platforms may use their strong market positions to squeeze developers, thereby undermining software investments. But such a course of action may ultimately be self-defeating. For instance, writing about retail platforms imitating third-party sellers, Andrei Hagiu, Tat-How Teh and Julian Wright have argued that:

[T]he platform has an incentive to commit itself not to imitate highly innovative third-party products in order to preserve their incentives to innovate.

Seen in this light, Apple and Google's 30% commissions serve as a soft commitment not to expropriate developers, leaving them with a sizable share of the revenue generated on each platform. This may explain why the 30% commission has become a standard in the games industry (and beyond). 

Furthermore, from an evolutionary perspective, it is hard to argue that the 30% commission is somehow extortionate. If game developers were systematically expropriated, then the gaming industry – in particular its mobile segment – would not have grown so drastically in recent years.

All of this likely explains why a recent survey found that 81% of app developers believed regulatory intervention would be misguided:

81% of developers and publishers believe that the relationship between them and platforms is best handled within the industry, rather than through government intervention. Competition and choice mean that developers will use platforms that they work with best.

The upshot is that the “closed” model employed by Apple and Google has served the gaming industry well. There is little compelling reason to overhaul that model today.

Final thoughts

When all is said and done, there is no escaping the fact that Epic Games is currently playing a high-stakes rent-seeking game. As Apple noted in its opposition to Epic's motion for a temporary restraining order:

Epic did not, and has not, contested that it is in breach of the App Store Guidelines and the License Agreement. Epic’s plan was to violate the agreements intentionally in order to manufacture an emergency. The moment Fortnite was removed from the App Store, Epic launched an extensive PR smear campaign against Apple and a litigation plan was orchestrated to the minute; within hours, Epic had filed a 56-page complaint, and within a few days, filed nearly 200 pages with this Court in a pre-packaged “emergency” motion. And just yesterday, it even sought to leverage its request to this Court for a sales promotion, announcing a “#FreeFortniteCup” to take place on August 23, inviting players for one last “Battle Royale” across “all platforms” this Sunday, with prizes targeting Apple.

Epic is ultimately seeking to introduce its own app store on both Apple and Google’s platforms, or at least bypass their payment processing services (as Spotify is seeking to do in the EU).

Unfortunately, as this post has argued, condoning this type of free-riding could prove highly detrimental to the entire mobile software industry. Smaller companies would almost inevitably be left to foot a larger share of the bill, existing platforms would become less secure, and the development of new ones could be hindered. At the end of the day, 30% might actually be a small price to pay.

Copyright law, ever a sore point in some quarters, has found a new field of battle in the FCC's recent set-top box proposal. At the request of members of Congress, the Copyright Office recently wrote a rather thorough letter outlining its view of the effects of the FCC's proposal on rightsholders.

In sum, the Copyright Office's letter was an even-handed look at the proposal, which concluded:

As a threshold matter, it seems critical that any revised proposal respect the authority of creators to manage the exploitation of their copyrighted works through private licensing arrangements, because regulatory actions that undermine such arrangements would be inconsistent with the rights granted under the Copyright Act.

This fairly uncontroversial statement of basic legal principle was met with cries of alarm. And Stanford’s CIS had a post from Affiliated Scholar Annemarie Bridy that managed to trot out breathless comparisons to inapposite legal theories while simultaneously misconstruing the “fair use” doctrine (as well as how Copyright law works in the video market, for that matter).

Look out! Lochner is coming!

In its letter the Copyright Office warned the FCC that its proposed rules have the potential to disrupt the web of contracts that underlie cable programming, and by extension, risk infringing the rights of copyright holders to commercially exploit their property. This analysis actually tracks what Geoff Manne and I wrote in both our initial comment and our reply comment to the set-top box proposal.

Yet Professor Bridy seems to believe that, notwithstanding the guarantees of both the Constitution and Section 106 of the Copyright Act, the FCC should have the power to abrogate licensing contracts between rightsholders and third parties. She believes that

[t]he Office’s view is essentially that the Copyright Act gives right holders not only the limited range of rights enumerated in Section 106 (i.e., reproduction, preparation of derivative works, distribution, public display, and public performance), but also a much broader and more amorphous right to “manage the commercial exploitation” of copyrighted works in whatever ways they see fit and can accomplish in the marketplace, without any regulatory interference from the government.

What in the world does this even mean? A necessary logical corollary of the Section 106 rights includes the right to exploit works commercially as rightsholders see fit. Otherwise, what could it possibly mean to have the right to control the reproduction or distribution of a work? The truth is that Section 106 sets out a general set of rights that inhere in rightsholders with respect to their protected works, and that commercial exploitation is merely a subset of this total bundle of rights.

The ability to contract with other parties over these rights is also a necessary corollary of the property rights recognized in Section 106. After all, the right to exclude implies by necessity the right to include. Which is exactly what a licensing arrangement is.

But wait, there’s more — she actually managed to pull out the Lochner bogeyman to validate her argument!

The Office’s absolutist logic concerning freedom of contract in the copyright licensing domain is reminiscent of the Supreme Court’s now-infamous reasoning in Lochner v. New York, a 1905 case that invalidated a state law limiting maximum working hours for bakers on the ground that it violated employer-employee freedom of contract. The Court in Lochner deprived the government of the ability to provide basic protections for workers in a labor environment that subjected them to unhealthful and unsafe conditions. As Julie Cohen describes it, “‘Lochner’ has become an epithet used to characterize an outmoded, over-narrow way of thinking about state and federal economic regulation; it goes without saying that hardly anybody takes the doctrine it represents seriously.”

This is quite a leap of logic, as there is precious little in common between the letter from the Copyright Office and the Lochner opinion aside from the fact that both contain the word “contracts” in their pages.  Perhaps the most critical problem with Professor Bridy’s analogy is the fact that Lochner was about a legislature interacting with the common law system of contract, whereas the FCC is a body subordinate to Congress, and IP is both constitutionally and statutorily guaranteed. A sovereign may be entitled to interfere with the operation of common law, but an administrative agency does not have the same sort of legal status as a legislature when redefining general legal rights.

The key argument that Professor Bridy offered in support of her belief that the FCC should be free to abrogate contracts at will is that "[r]egulatory limits on private bargains may come in the form of antitrust laws or telecommunications laws or, as here, telecommunications regulations that further antitrust ends." However, this completely misunderstands U.S. constitutional doctrine.

In particular, as Geoff Manne and I discussed in our set-top box comments to the FCC, using one constitutional clause to end-run another constitutional clause is generally a no-no:

Regardless of whether or how well the rules effect the purpose of Sec. 629, copyright violations cannot be justified by recourse to the Communications Act. Provisions of the Communications Act — enacted under Congress’s Commerce Clause power — cannot be used to create an end run around limitations imposed by the Copyright Act under the Constitution’s Copyright Clause. “Congress cannot evade the limits of one clause of the Constitution by resort to another,” and thus neither can an agency acting within the scope of power delegated to it by Congress. Establishing a regulatory scheme under the Communications Act whereby compliance by regulated parties forces them to violate content creators’ copyrights is plainly unconstitutional.

Congress is of course free to establish the implementation of the Copyright Act as it sees fit. However, unless Congress itself acts to change that implementation, the FCC — or any other party — is not at liberty to interfere with rightsholders’ constitutionally guaranteed rights.

You Have to Break the Law Before You Raise a Defense

Another bone of contention upon which Professor Bridy gnaws is a concern that licensing contracts will abrogate an alleged right to “fair use” by making the defense harder to muster:  

One of the more troubling aspects of the Copyright Office’s letter is the length to which it goes to assert that right holders must be free in their licensing agreements with MVPDs to bargain away the public’s fair use rights… Of course, the right of consumers to time-shift video programming for personal use has been enshrined in law since Sony v. Universal in 1984. There’s no uncertainty about that particular fair use question—none at all.

The major problem with this reasoning (notwithstanding the somewhat misleading drafting of Section 107) is that “fair use” is not an affirmative right, it is an affirmative defense. Despite claims that “fair use” is a right, the Supreme Court has noted on at least two separate occasions (1, 2) that Section 107 was “structured… [as]… an affirmative defense requiring a case-by-case analysis.”

Moreover, important as the Sony case is, it does not establish that "[t]here's no uncertainty about [time-shifting as a] fair use question—none at all." What it actually establishes is that, given the facts of that case, time-shifting was a fair use. Not for nothing, the Sony Court notes at the outset of its opinion that

An explanation of our rejection of respondents’ unprecedented attempt to impose copyright liability upon the distributors of copying equipment requires a quite detailed recitation of the findings of the District Court.

But more generally, the Sony doctrine stands for the proposition that:

“The limited scope of the copyright holder’s statutory monopoly, like the limited copyright duration required by the Constitution, reflects a balance of competing claims upon the public interest: creative work is to be encouraged and rewarded, but private motivation must ultimately serve the cause of promoting broad public availability of literature, music, and the other arts. The immediate effect of our copyright law is to secure a fair return for an ‘author’s’ creative labor. But the ultimate aim is, by this incentive, to stimulate artistic creativity for the general public good. ‘The sole interest of the United States and the primary object in conferring the monopoly,’ this Court has said, ‘lie in the general benefits derived by the public from the labors of authors.’ Fox Film Corp. v. Doyal, 286 U. S. 123, 286 U. S. 127. See Kendall v. Winsor, 21 How. 322, 62 U. S. 327-328; Grant v. Raymond, 6 Pet. 218, 31 U. S. 241-242. When technological change has rendered its literal terms ambiguous, the Copyright Act must be construed in light of this basic purpose.” Twentieth Century Music Corp. v. Aiken, 422 U. S. 151, 422 U. S. 156 (1975) (footnotes omitted).

In other words, courts must balance competing interests to maximize “the general benefits derived by the public,” subject to technological change and other criteria that might shift that balance in any particular case.  

Thus, even as an affirmative defense, nothing is guaranteed. The court will have to walk through a balancing test, and only after that point, and if the accused party’s behavior has not tipped the scales against herself, will the court find the use a “fair use.”  

As I noted before,

Not surprisingly, other courts are inclined to follow the Supreme Court. Thus the Eleventh Circuit, the Southern District of New York, and the Central District of California (here and here), to name but a few, all explicitly refer to fair use as an affirmative defense. Oh, and the Ninth Circuit did too, at least until Lenz.

The Lenz case was an interesting one because, despite the above noted Supreme Court precedent treating “fair use” as a defense, it is one of the very few cases that has held “fair use” to be an affirmative right (in that case, the court decided that Section 1201 of the DMCA required consideration of “fair use” as a part of filling out a take-down notice). And in doing so, it too tried to rely on Sony to restructure the nature of “fair use.” But as I have previously written, “[i]t bears noting that the Court in Sony Corp. did not discuss whether or not fair use is an affirmative defense, whereas Acuff Rose (decided 10 years after Sony Corp.) and Harper & Row decisions do.”

Further, even the Eleventh Circuit, which the Ninth relied upon in Lenz, later clarified its position that the above-noted Supreme Court precedent definitely binds lower courts, and that “fair use” is in fact an affirmative defense.

Thus, to say that rightsholders' licensing contracts somehow impinge on a "right" of fair use completely puts the cart before the horse. Remember, as an affirmative defense, "fair use" is an excuse for otherwise infringing behavior, and rightsholders are well within their constitutional and statutory rights to avoid potential infringing uses.

Think about it this way. When you commit a crime you can raise a defense: for instance, an insanity defense. But just because you might be excused for committing a crime if a court finds you were not operating with full faculties, this does not entitle every insane person to go out and commit that crime. The insanity defense can be raised only after a crime is committed, and at that point it will be examined by a judge and jury to determine if applying the defense furthers the overall criminal law scheme.

"Fair use" works in exactly the same manner. And even though Sony described how time- and space-shifting were potentially permissible, it did so only by determining on those facts that the balancing test came out to allow it. So, maybe a particular time-shifting use would be "fair use." But maybe not. More likely, in this case, even the allegedly well-established "fair use" of time-shifting in the context of today's digital media, on-demand programming, Netflix and the like may not meet that burden.

And what this means is that a rightsholder does not have an ex ante obligation to consider whether a particular contractual clause might in some fashion or other give rise to a “fair use” defense.

The contrary point of view makes no sense. Because “fair use” is a defense, forcing parties to build “fair use” considerations into their contractual negotiations essentially requires them to build in an allowance for infringement — and one that a court might or might not ever find appropriate in light of the requisite balancing of interests. That just can’t be right.

Instead, I think this article is just a piece of the larger IP-skeptic movement. I suspect that when "fair use" was in its initial stages of development, it was intended as a fairly gentle softening of the limits of intellectual property — something like the "public necessity" doctrine in common law with respect to real property and trespass. However, that is just not how "fair use" advocates see it today. As Geoff Manne has noted, the idea of "permissionless innovation" has wrongly come to mean "no contracts required (or permitted)":  

[Permissionless innovation] is used to justify unlimited expansion of fair use, and is extended by advocates to nearly all of copyright…, which otherwise requires those pernicious licenses (i.e., permission) from others.

But this position is nonsense — intangible property is still property. And at root, property is just a set of legal relations between persons that defines their rights and obligations with respect to some “thing.” It doesn’t matter if you can hold that thing in your hand or not. As property, IP can be subject to transfer and control through voluntarily created contracts.

Even if “fair use” were some sort of as-yet unknown fundamental right, it would still be subject to limitations upon it by other rights and obligations. To claim that “fair use” should somehow trump the right of a property holder to dispose of the property as she wishes is completely at odds with our legal system.

In a thorough and convincing paper, “The FTC’s Proposal for Regulating IP through SSOs Would Replace Private Coordination with Government Hold-Up,” Richard Epstein, Scott Kieff and Dan Spulber assess and then decimate the FTC’s proposal on patent notice and remedies, “The Evolving IP Marketplace: Aligning Patent Notice and Remedies with Competition.”  Note Epstein, Kieff and Spulber:

In its recent report entitled "The Evolving IP Marketplace," the Federal Trade Commission (FTC) advances a far-reaching regulatory approach (Proposal) whose likely effect would be to distort the operation of the intellectual property (IP) marketplace in ways that will hamper the innovation and commercialization of new technologies. The gist of the FTC Proposal is to rely on highly non-standard and misguided definitions of economic terms of art such as "ex ante" and "hold-up," while urging new inefficient rules for calculating damages for patent infringement. Stripped of the technicalities, the FTC Proposal would so reduce the costs of infringement by downstream users that the rate of infringement would unduly increase, as potential infringers find it in their interest to abandon the voluntary market in favor of a more attractive system of judicial pricing. As the number of nonmarket transactions increases, the courts will play an ever larger role in deciding the terms on which the patents of one party may be used by another party. The adverse effects of this new trend will do more than reduce the incentives for innovation; it will upset the current set of well-functioning private coordination activities in the IP marketplace that are needed to accomplish the commercialization of new technologies. Such a trend would seriously undermine capital formation, job growth, competition, and the consumer welfare the FTC seeks to promote.

Focusing in particular on SSOs, the trio homes in on the potential incentive problem created by the FTC’s proposal:

The central problem with the FTC's approach is that it would interfere seriously with the helpful incentives all parties in the IP marketplace presently have to contract with each other. The FTC's approach ignores the powerful incentives that it creates in putative licensees to spurn the voluntary market in order to obtain a strategic advantage over the licensor. In any voluntary market, the low rates that go to initial licensees reflect the uncertainty of the value of the patented technology at the time the license is issued. Once that technology has proven its worth, there is no sound reason to allow any potential licensee who instead held out from the originally offered deal to get bargain rates down the road. Allowing such an option would make the holdout better off than the contracting party. Such holdouts would not need to take licenses for technologies with low value, while resting assured they would still get technologies with high value at below market rates. The FTC seems to overlook that a well-functioning patent damage system should do more than merely calibrate damages after the fact. An efficient approach to damages is one that also reduces the number of infringements overall by making sure that the infringer cannot improve his economic position by his own wrong.

The FTC Proposal rests on the misguided conviction that the law should not allow a licensor to "demand and obtain royalty payments based on the infringer's switching costs" once the manufacturer has "sunk costs into using the technology;" and it labels any such payments as the result of "hold-up."

As Epstein et al. discuss, current private ordering (reciprocal dealing, repeat play, RAND terms, etc.) works perfectly well to address real hold-up problems, and the FTC seems to be both defining the problem oddly and, thus, creating a problem that doesn't really exist.

Although not discussed directly, the paper owes a great deal to the great Ben Klein and especially his paper, Why Hold-Ups Occur: The Self-Enforcing Range of Contractual Relationships (to say nothing of Klein, Crawford & Alchian, of course). Likewise, although not discussed in the paper, Josh and Bruce Kobayashi's excellent paper, Federalism, Substantive Preemption and Limits on Antitrust: An Application to Patent Holdup, is an essential precursor to this paper, addressing the comparative merits of antitrust- and contract-based evaluation of claimed patent holdups in SSOs.

Highly-recommended and an important addition to the ever-interesting antitrust/IP discussion.