
Questions concerning the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable despite their unrealistic assumptions. Kenneth Arrow and Gérard Debreu’s highly theoretical work on General Equilibrium Theory is widely acknowledged as one of the most important achievements of modern economics.

But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.

This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.

Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.

Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.

Bees

Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.

The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure; bees fly where they please and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both. This led James Meade to conclude:

[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.

A finding echoed by Francis Bator:

If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.

It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?

The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3 km). This left economic agents with ample scope to prevent free-riding.

Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:

Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.

But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:

Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.

In short, not only did the bee/orchard externality model fail, but it failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. The bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.

The Lighthouse

Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.

Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is all but impossible to determine whether a ship has paid its dues, and to turn off the lighthouse if it has not). Hence there could be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:

Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.

He added that:

[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.

More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.

What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:

[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.

In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.

Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:

The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.

Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on this arrangement. It also entailed some level of market power: the ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). It is worth noting, though, that tying port fees and light dues might also have decreased double marginalization, to the benefit of sailors.

Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:

Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?

However, as Coase explained, light fees represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:

[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.

Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and lack of competition (and the information it generates) tend to stem from the latter.

Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.

The Tragedy of the Commons

Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.

The most famous formulation of this problem is Garrett Hardin’s highly influential (more than 47,000 citations) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:

The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.

In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
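The arithmetic behind Hardin’s argument can be sketched in a few lines. Everything below—the numbers, the function name, the payoffs—is a purely illustrative assumption, not drawn from Hardin’s paper:

```python
# Toy arithmetic behind the tragedy of the commons: one extra animal
# yields its owner a private gain, while the grazing cost it imposes
# is spread across all herdsmen. All numbers are illustrative.

def marginal_payoff(private_gain, shared_cost, n_herdsmen):
    """Payoff to a single herdsman from adding one more animal."""
    return private_gain - shared_cost / n_herdsmen

gain, cost, n = 10.0, 12.0, 8   # the animal is worth 10; it degrades the pasture by 12

individual = marginal_payoff(gain, cost, n)  # 10 - 12/8 = +8.5 -> he adds the animal
social = gain - cost                         # 10 - 12   = -2.0 -> society loses

print(f"individual: {individual:+.1f}, social: {social:+.1f}")
```

Because each herdsman internalizes only 1/8 of the cost, the individually rational choice (add the animal) diverges from the socially rational one—exactly the unpriced negative externality described above.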

Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:

The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.

As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often found ways to mitigate these potential externalities markedly. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.

Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard essential patent industry.

These bottom-up solutions are certainly not perfect. Many commons institutions fail—for example, Elinor Ostrom documents several problematic fisheries, groundwater basins, and forests, although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:

Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:

Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.

In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: what works better in each case—government intervention, propertization, or emergent rules and norms?

More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:

The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.

Dvorak Keyboards

In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history then became a dominant narrative in the field of network economics, including works by Joseph Farrell & Garth Saloner, and Jean Tirole.

The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:

Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]

Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.
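The lock-in mechanism David describes can be illustrated with a toy adoption model. Every name and parameter below is hypothetical, chosen only to show how a head start plus network effects can outweigh intrinsic quality:

```python
import random

# Toy path-dependence model: each adopter picks the standard offering
# the higher payoff, where payoff = intrinsic quality + a bonus
# proportional to installed-base share (+ a little noise). All
# parameters are illustrative assumptions, not estimates.

def simulate(quality_a, quality_b, head_start_a, network_weight,
             n_adopters, seed=0):
    rng = random.Random(seed)
    base = {"A": head_start_a, "B": 0}          # installed bases
    for _ in range(n_adopters):
        total = base["A"] + base["B"] or 1      # avoid divide-by-zero
        payoff = {
            s: q + network_weight * base[s] / total + rng.gauss(0, 0.1)
            for s, q in (("A", quality_a), ("B", quality_b))
        }
        base[max(payoff, key=payoff.get)] += 1  # adopt the better option
    return base

# B is intrinsically better (1.2 > 1.0), yet A's head start combined
# with network effects lets the inferior standard lock in.
print(simulate(quality_a=1.0, quality_b=1.2, head_start_a=50,
               network_weight=1.0, n_adopters=1000))
```

Setting `network_weight` to zero lets the intrinsically better standard win instead—which shows how completely the lock-in result hinges on the assumed strength of network effects, the very assumption the empirical work discussed below calls into question.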

Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard layout market. They almost entirely rejected any notion that QWERTY prevailed despite it being the inferior standard:

Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.

In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.

Killzones, Zoom, and TikTok

If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.

For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:

If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.

Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence to support the contention that it occurs in real-world settings. Admittedly, the paper does present evidence of reduced venture capital investments after mergers involving large tech firms. But even on their own terms, this data simply does not support the authors’ behavioral assumption.

And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).

But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I have written previously:

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples. These include: The demise of Yahoo; the disruption of early instant-messaging applications and websites; MySpace’s rapid decline; etc. In all these cases, outcomes do not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.

In Conclusion

My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.

In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.

For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.

Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.

Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much of the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.

All of this raises a tantalizing prospect that deserves far more attention than it is currently given in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, European Union and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.

This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate problems. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.

The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:

This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].

The European Court of Justice issued its long-awaited ruling Dec. 9 in the Groupe Canal+ case. The case centered on licensing agreements in which Paramount Pictures granted absolute territorial exclusivity to several European broadcasters, including Canal+.

Back in 2015, the European Commission charged six U.S. film studios, including Paramount, as well as British broadcaster Sky UK Ltd., with illegally limiting access to content. The crux of the EC’s complaint was that the contractual agreements to limit cross-border competition for content distribution ran afoul of European Union competition law. Paramount ultimately settled its case with the commission and agreed to remove the problematic clauses from its contracts. This affected third parties like Canal+, who lost valuable contractual protections.

While the ECJ ultimately upheld the agreements on what amounts to procedural grounds (Canal+ was unduly affected by a decision to which it was not a party), the case provides yet another example of the European Commission’s misguided stance on absolute territorial licensing, sometimes referred to as “geo-blocking.”

The EC’s long-running efforts to restrict geo-blocking emerge from its attempts to harmonize trade across the EU. Notably, in its Digital Single Market initiative, the Commission envisioned:

[A] Digital Single Market is one in which the free movement of goods, persons, services and capital is ensured and where individuals and businesses can seamlessly access and exercise online activities under conditions of fair competition, and a high level of consumer and personal data protection, irrespective of their nationality or place of residence.

This policy stance has been endorsed consistently by the European Court of Justice. In the 2011 Murphy decision, for example, the court held that agreements between rights holders and broadcasters infringe European competition law when they categorically prevent the latter from supplying “decoding devices” to consumers located in other member states. More precisely, while rights holders can license their content on a territorial basis, they cannot restrict so-called “passive sales”; broadcasters can be prevented from actively chasing consumers in other member states, but not from serving them altogether. If this sounds Kafkaesque, it’s because it is.

The problem with the ECJ’s vision is that it elides the complex factors that underlie a healthy free-trade zone. Geo-blocking frequently is misunderstood or derided by consumers as an unwarranted restriction on their consumption preferences. It doesn’t feel “fair” or “seamless” when a rights holder can decide who can access their content and on what terms. But that doesn’t mean geo-blocking is a nefarious or socially harmful practice. Quite the contrary: allowing creators to craft different sets of distribution options offers both a return to the creators and more choice in general to consumers.

In economic terms, geo-blocking allows rights holders to engage in third-degree price discrimination; that is, they have the ability to charge different prices for different sets of consumers. This type of pricing will increase total welfare so long as it increases output. As Hal Varian puts it:

If a new market is opened up because of price discrimination—a market that was not previously being served under the ordinary monopoly—then we will typically have a Pareto improving welfare enhancement.

Another benefit of third-degree price discrimination is that, by shifting some economic surplus from consumers to firms, it can stimulate investment in much the same way copyright and patents do. Put simply, the prospect of greater economic rents increases the maximum investment firms will be willing to make in content creation and distribution.
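Varian’s output condition can be made concrete with a toy calculation. The two markets, the per-consumer valuations, and the zero-marginal-cost assumption below are all invented for illustration:

```python
# Numeric sketch of third-degree price discrimination with zero
# marginal cost (think digital content). Valuations are illustrative:
# one willingness-to-pay figure per consumer, in each of two markets.

home = [10, 9, 8]    # higher-income market
abroad = [4, 3, 2]   # lower-income market

def best_price(valuations):
    """(units sold, revenue) at the revenue-maximizing single price.

    Candidate prices are tried in ascending order, so revenue ties
    break toward the lower price (i.e., more units sold)."""
    return max(
        ((sum(1 for v in valuations if v >= p),
          p * sum(1 for v in valuations if v >= p))
         for p in sorted(set(valuations))),
        key=lambda t: t[1],
    )

# Uniform pricing across both markets prices the second market out.
best_uniform = best_price(home + abroad)   # (3, 24): price 8, home only

# Geo-blocked (discriminatory) pricing: a separate price per market.
best_home = best_price(home)               # (3, 24)
best_abroad = best_price(abroad)           # (3, 6): newly served output

units = best_home[0] + best_abroad[0]
revenue = best_home[1] + best_abroad[1]
print(f"uniform: {best_uniform}, discriminatory: units={units}, revenue={revenue}")
```

Discrimination serves six consumers instead of three, and no previously served consumer pays more (the “home” price stays at 8). The three new “abroad” sales are precisely the Pareto-improving output expansion Varian describes.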

For these reasons, respecting parties’ freedom to license content as they see fit is likely to produce much more efficient outcomes than annulling those agreements through government-imposed “seamless access” and “fair competition” rules. Part of the value of copyright law is in creating space to contract by protecting creators’ property rights. Without geo-blocking, the enforcement of licensing agreements would become much more difficult. Laws restricting copyright owners’ ability to contract freely reduce allocational efficiency, as well as the incentives to create in the first place. Further, when individual creators have commercial and creative autonomy, they gain a degree of predictability that can ensure they will continue to produce content in the future. 

The European Union would do well to adopt a more nuanced understanding of the contractual relationships between producers and distributors. 

In the hands of a wise philosopher-king, the Sherman Act’s hard-to-define prohibitions of “restraints of trade” and “monopolization” are tools that will operate inevitably to advance the public interest in competitive markets. In the hands of real-world litigators, regulators and judges, those same words can operate to advance competitors’ private interests in securing commercial advantages through litigation that could not be secured through competition in the marketplace. If successful, this strategy may yield outcomes that run counter to antitrust law’s very purpose.

The antitrust lawsuit filed by Epic Games against Apple in August 2020, and Apple’s antitrust lawsuit against Qualcomm (settled in April 2019), suggest that antitrust law is heading in this unfortunate direction.

From rent-minimization to rent-maximization

The first step in converting antitrust law from an instrument to minimize rents to an instrument to maximize rents lies in expanding the statute’s field of application on the apparently uncontroversial grounds of advancing the public interest in “vigorous” enforcement. In surprisingly short order, this largely unbounded vision of antitrust’s proper scope has become the dominant fashion in policy discussions, at least as expressed by some legislators, regulators, and commentators.

According to the new conventional wisdom, antitrust law has, over the past several decades, pursued an overly narrow path, consequently overlooking and exacerbating a panoply of social ills that extend well beyond the mission to “merely” protect the operation of the market pricing mechanism. This line of argument is typically coupled with the assertion that courts, regulators and scholars have been led down this path by incumbents that welcome the relaxed scrutiny of a purportedly deferential antitrust policy.

This argument, and the related theory of regulatory capture, has things roughly backwards.

Placing antitrust law at the service of a largely undefined range of social purposes set by judicial and regulatory fiat threatens to render antitrust a tool that can be easily deployed to favor the private interests of competitors rather than the public interest in competition. Without the intellectual discipline imposed by the consumer welfare standard (and, outside of per se illegal restraints, operationalized through the evidentiary requirement of competitive harm), the rhetoric of antitrust provides excellent cover for efforts to re-engineer the rules of the game in lieu of seeking to win the game as it has been played.

Epic Games v. Apple

A nascent symptom of this expansive form of antitrust is provided by the much-publicized lawsuit brought by Epic Games, the maker of the wildly popular video game, Fortnite, against Apple, the operator of the even more wildly popular App Store. On August 13, 2020, Epic added a “direct” payment processing services option to its Fortnite game, in violation of the developer terms of use that govern the App Store. In response, Apple exercised its contractual right to remove Fortnite from the App Store, triggering Epic’s antitrust suit. The same sequence has ensued between Epic Games and Google in connection with the Google Play Store. Both litigations are best understood as breach-of-contract disputes cloaked in the guise of antitrust causes of action.

In suggesting that a jury trial would be appropriate in Epic Games’ suit against Apple, the district court judge reportedly stated that the case is “on the frontier of antitrust law” and that “[i]t is important enough to understand what real people think.” That statement seems to suggest that this is a close case under antitrust law. I respectfully disagree. Based on currently available information and applicable law, Epic’s argument suffers from two serious vulnerabilities that would seem to be difficult for the plaintiff to overcome.

A contestably narrow market definition

Epic states three related claims: (1) Apple has a monopoly in the relevant market, defined as the App Store, (2) Apple maintains its monopoly by contractually precluding developers from distributing iOS-compatible versions of their apps outside the App Store, and (3) Apple maintains a related monopoly in the payment processing services market for the App Store by contractually requiring developers to use Apple’s processing service.

This market definition, and the associated chain of reasoning, is subject to significant doubt, both as a legal and factual matter.

Epic’s narrow definition of the relevant market as the App Store (rather than app distribution platforms generally) conveniently results in a 100% market share for Apple. Inconveniently, federal case law is generally reluctant to adopt single-brand market definitions. While the Supreme Court recognized in 1992 a single-brand market in Eastman Kodak Co. v. Image Technical Services, the case is widely considered to be an outlier in light of subsequent case law. As a federal district court observed in Spahr v. Leegin Creative Leather Products (E.D. Tenn. 2008): “Courts have consistently refused to consider one brand to be a relevant market of its own when the brand competes with other potential substitutes.”

The App Store would seem to fall into this typical category. The customer base of existing and new Fortnite users can still access the game through multiple platforms and on multiple devices other than the iPhone, including a PC, laptop, game console, and non-Apple mobile devices. (While Google has also removed Fortnite from the Google Play store due to the added direct payment feature, users can, at some inconvenience, access the game manually on Android phones.)

Given these alternative distribution channels, it is at a minimum unclear whether Epic is foreclosed from reaching a substantial portion of its consumer base, which may already access the game on alternative platforms or could potentially do so at moderate incremental transaction costs. In the language of platform economics, it appears to be technologically and economically feasible for the target consumer base to “multi-home.” If multi-homing and related switching costs are low, even a 100% share of the App Store submarket would not translate into market power in the broader and potentially more economically relevant market for app distribution generally.

An implausible theory of platform lock-in

Even if it were conceded that the App Store is the relevant market, Epic’s claim is not especially persuasive as either an economic or a legal matter. That is because there is no evidence that Apple is exploiting any such hypothetically attributed market power to increase the rents extracted from developers and indirectly impose deadweight losses on consumers.

In the classic scenario of platform lock-in, a three-step sequence is observed: (1) a new firm acquires a high market share in a race for platform dominance, (2) the platform winner is protected by network effects and switching costs, and (3) the entrenched platform “exploits” consumers by inflating prices (or imposing other adverse terms) to capture monopoly rents. This economic model is reflected in the case law on lock-in claims, which typically requires that the plaintiff identify an adverse change by the defendant in pricing or other terms after users were allegedly locked-in.

The history of the App Store does not conform to this model. Apple has always assessed a 30% fee, and the same is true of every other leading distributor of games for the mobile and PC market, including the Google Play Store, the App Store’s rival in the mobile market, and Steam, the dominant distributor of video games in the PC market. This long-standing market practice suggests that the 30% fee most likely reflects an efficiency-driven business rationale, rather than an attempt to entrench a monopoly position that Apple did not enjoy when the practice was first adopted. That is: even if Apple is deemed to be a “monopolist” for Section 2 purposes, it is not taking any “illegitimate” actions that could constitute monopolization or attempted monopolization.

The logic of the 70/30 split

Uncovering the business logic behind the 70/30 split in the app distribution market is not too difficult.

The 30% fee appears to be a low transaction-cost practice that enables the distributor to fund a variety of services, including app development tools, marketing support, and security and privacy protections, all of which are supplied at no separately priced fee and therefore do not require service-by-service negotiation and renegotiation. The same rationale credibly applies to the integrated payment processing services that Apple supplies for purposes of in-app purchases.

These services deliver significant value and would otherwise be difficult to replicate cost-effectively, protect the App Store’s valuable stock of brand capital (which yields positive spillovers for app developers on the site), and lower the costs of joining and participating in the App Store. Additionally, the 30% fee cross-subsidizes the delivery of these services to the approximately 80% of apps on the App Store that are ad-based and for which no fee is assessed, which in turn lowers entry costs and expands the number and variety of product options for platform users. These would all seem to be attractive outcomes from a competition policy perspective.

Epic’s objection

Epic would object to this line of argument by observing that it only charges a 12% fee to distribute other developers’ games on its own Epic Games Store.

Yet Epic’s lower fee is reportedly conditioned, at least in some cases, on the developer offering the game exclusively on the Epic Games Store for a certain period of time. Moreover, the services provided on the Epic Games Store may not be comparable to the extensive suite of services provided on the App Store and other leading distributors that follow the 30% standard. Additionally, the user base a developer can expect to access through the Epic Games Store is in all likelihood substantially smaller than the audience that can be reached through the App Store and other leading app and game distributors, which is then reflected in the higher fees charged by those platforms.

Hence, even the large fee differential may simply reflect the higher services and larger audiences available on the App Store, Google Play Store and other leading platforms, as compared to the Epic Games Store, rather than the unilateral extraction of market rents at developers’ and consumers’ expense.

Antitrust is about efficiency, not distribution

Epic says the standard 70/30 split between game publishers and app distributors is “excessive” while others argue that it is historically outdated.

Neither of these is a credible antitrust argument. Renegotiating the division of economic surplus between game suppliers and distributors is not the concern of antitrust law, which (as properly defined) should only take an interest if either (i) Apple is colluding on the 30% fee with other app distributors, or (ii) Apple is taking steps that preclude entry into the apps distribution market and lack any legitimate business justification. No one claims to have evidence of the former possibility and, without further evidence, the latter possibility is not especially compelling given the uniform use of the 70/30 split across the industry (which, as noted, can be derived from a related set of credible efficiency justifications). It is even less compelling in the face of evidence that output is rapidly accelerating, not declining, in the gaming app market: in the first half of 2020, approximately 24,500 new games were added to the App Store.

If this conclusion is right, then Epic’s lawsuit against Apple does not seem to have much to do with the public interest in preserving market competition.

But it clearly has much to do with the business interest of an input supplier in minimizing its distribution costs and maximizing its profit margin. That category includes not only Epic Games but Tencent, the world’s largest video game publisher and the holder of a 40% equity stake in Epic. Tencent also owns Riot Games (the publisher of “League of Legends”), an 84% stake in Supercell (the publisher of “Clash of Clans”), and a 5% stake in Activision Blizzard (the publisher of “Call of Duty”). It is unclear how an antitrust claim that, if successful, would simply redistribute economic value from leading game distributors to leading game developers has any necessary relevance to antitrust’s objective to promote consumer welfare.

The prequel: Apple v. Qualcomm

Ironically (and, as Dirk Auer has similarly observed), there is a symmetry between Epic’s claims against Apple and the claims previously pursued by Apple (and, concurrently, the Federal Trade Commission) against Qualcomm.

In that litigation, Apple contested the terms of the licensing arrangements under which Qualcomm made available its wireless communications patents to Apple (more precisely, Foxconn, Apple’s contract manufacturer), arguing that the terms were incompatible with Qualcomm’s commitment to “fair, reasonable and nondiscriminatory” (“FRAND”) licensing of its “standard-essential” patents (“SEPs”). Like Epic v. Apple, Apple v. Qualcomm was fundamentally a contract dispute, with the difference that Apple was in the position of a third-party beneficiary of the commitment that Qualcomm had made to the governing standard-setting organization. Like Epic, Apple sought to recharacterize this contractual dispute as an antitrust question, arguing that Qualcomm’s licensing practices constituted anticompetitive actions to “monopolize” the market for smartphone modem chipsets.

Theory meets evidence

The rhetoric used by Epic in its complaint echoes the rhetoric used by Apple in its briefs and other filings in the Qualcomm litigation. Apple (like the FTC) had argued that Qualcomm imposed a “tax” on competitors by requiring that any purchaser of Qualcomm’s chipsets concurrently enter into a license for Qualcomm’s SEP portfolio relating to 3G and 4G/LTE-enabled mobile communications devices.

Yet the history and performance of the mobile communications market simply did not track Apple’s (and the FTC’s continuing) characterization of Qualcomm’s licensing fee as a socially costly drag on market growth and, by implication, consumer welfare.

If this assertion had merit, then the decades-old wireless market should have exhibited a dismal history of increasing prices, slow user adoption and lagging innovation. In actuality, the wireless market has grown continuously since its inception, characterized by declining quality-adjusted prices, expanding output, relentless innovation, and rapid adoption across a broad range of income segments.

Given this compelling real-world evidence, the only remaining line of argument (still being pursued by the FTC) that could justify antitrust intervention is a theoretical conjecture that the wireless market might have grown even faster under some alternative IP licensing arrangement. This assertion rests precariously on the speculative assumption that any such arrangement would have induced the same or higher level of aggregate investment in innovation and commercialization activities. That fragile chain of “what if” arguments hardly seems a sound basis on which to rewrite the legal infrastructure behind the billions of dollars of licensing transactions that support the economically thriving smartphone market and the even larger ecosystem that has grown around it.

Antitrust litigation as business strategy

Given the absence of compelling evidence of competitive harm from Qualcomm’s allegedly anticompetitive licensing practices, Apple’s litigation would seem to be best interpreted as an economically rational attempt by a downstream producer to renegotiate a downward adjustment in the fees paid to an upstream supplier of critical technology inputs. (In fact, those are precisely the terms on which Qualcomm in 2015 settled the antitrust action brought against it by China’s competition regulator, to the obvious benefit of local device producers.) The Epic Games litigation is a mirror image fact pattern in which an upstream supplier of content inputs seeks to deploy antitrust law strategically for the purposes of minimizing the fees it pays to a leading downstream distributor.

Both litigations suffer from the same flaw. Private interests concerning the division of an existing economic value stream—a business question that is a matter of indifference from an efficiency perspective—are erroneously (or, at least, reflexively) conflated with the public interest in preserving the free play of competitive forces that maximizes the size of the economic value stream.

Conclusion: Remaking the case for “narrow” antitrust

The Epic v. Apple and Apple v. Qualcomm disputes illustrate the unproductive rent-seeking outcomes to which antitrust law will inevitably be led if, as is being widely advocated, it is decoupled from its well-established foundation in promoting consumer welfare—and not competitor welfare.

Some proponents of a more expansive approach to antitrust enforcement are convinced that expanding the law’s scope of application will improve market efficiency by providing greater latitude for expert regulators and courts to reengineer market structures to the public benefit. Yet any substitution of top-down expert wisdom for the bottom-up trial-and-error process of market competition can easily yield “false positives” in which courts and regulators take actions that counterproductively intervene in markets that are already operating under reasonably competitive conditions. Additionally, an overly expansive approach toward the scope of antitrust law will induce private firms to shift resources toward securing advantages over competitors through lobbying and litigation, rather than seeking to win the race to deliver lower-cost and higher-quality products and services. Neither outcome promotes the public’s interest in a competitive marketplace.

Much has already been said about the twin antitrust suits filed by Epic Games against Apple and Google. For those who are not familiar with the cases, the game developer – most famous for its hit title Fortnite and the “Unreal Engine” that underpins much of the game (and movie) industry – is complaining that Apple and Google are thwarting competition from rival app stores and in-app payment processors. 

Supporters have been quick to see in these suits a long-overdue challenge against the 30% commissions that Apple and Google charge. Some have even portrayed Epic as a modern-day Robin Hood, leading the fight against Big Tech to the benefit of small app developers and consumers alike. Epic itself has been keen to stoke this image, comparing its litigation to a fight for basic freedoms in the face of Big Brother.

However, upon closer inspection, cracks rapidly appear in this rosy picture. What is left is a company partaking in blatant rent-seeking that threatens to harm the sprawling ecosystems that have emerged around both Apple and Google’s app stores.

Two issues are particularly salient. First, Epic is trying to protect its own interests at the expense of the broader industry. If successful, its suit would merely lead to alternative revenue schemes that – although more beneficial to itself – would leave smaller developers to shoulder higher fees. Second, the fees that Epic portrays as extortionate were in fact key to the emergence of mobile gaming.

Epic’s utopia is not an equilibrium

Central to Epic’s claims is the idea that both Apple and Google: (i) thwart competition from rival app stores, and implement a series of measures that prevent developers from reaching gamers through alternative means (such as pre-installing apps, or sideloading them in the case of Apple’s platforms); and (ii) tie their proprietary payment processing services to their app stores. According to Epic, this ultimately enables both Apple and Google to extract “extortionate” commissions (30%) from app developers.

But Epic’s whole case is based on the unrealistic assumption that both Apple and Google will sit idly by while rival app stores and payment systems free-ride on the vast investments they have ploughed into their respective smartphone platforms. In other words, removing Apple and Google’s ability to charge commissions on in-app purchases would not prevent them from monetizing their platforms elsewhere.

Indeed, economic and strategic management theory tells us that so long as Apple and Google single-handedly control one of the necessary points of access to their respective ecosystems, they should be able to extract a sizable share of the revenue generated on their platforms. One can only speculate, but it is easy to imagine Apple and Google charging rival app stores for access to their respective platforms, or charging developers for access to critical APIs.

Epic itself seems to concede this point. In a recent Verge article, it argued that Apple was threatening to cut off its access to iOS and Mac developer tools, which Apple currently offers at little to no cost:

Apple will terminate Epic’s inclusion in the Apple Developer Program, a membership that’s necessary to distribute apps on iOS devices or use Apple developer tools, if the company does not “cure your breaches” to the agreement within two weeks, according to a letter from Apple that was shared by Epic. Epic won’t be able to notarize Mac apps either, a process that could make installing Epic’s software more difficult or block it altogether. Apple requires that all apps are notarized before they can be run on newer versions of macOS, even if they’re distributed outside the App Store.

There is little to prevent Apple from more heavily monetizing these tools – should Epic’s antitrust case successfully prevent it from charging commissions via its app store.

All of this raises the question: why is Epic bringing a suit that, if successful, would merely result in the emergence of alternative fee schedules, as opposed to a significant reduction of the overall fees paid by developers?

One potential answer is that the current system is highly favorable to small developers who earn little to no revenue from purchases and who benefit most from the trust created by Apple and Google’s curation of their stores. It is, however, much less favorable to developers like Epic, who no longer require any curation to garner the necessary trust from consumers and who earn a large share of their revenue from in-app purchases.

In more technical terms, the fact that all in-game payments are made through Apple and Google’s payment processing enables both platforms to more easily price-discriminate. Unlike fixed fees (but just like royalties), percentage commissions are necessarily state-contingent (i.e. the same commission will lead to vastly different revenue depending on an underlying app’s success). The most successful apps thus contribute far more to a platform’s fixed costs. For instance, it is estimated that mobile games account for 72% of all app store spend. Likewise, more than 80% of the apps on Apple’s store pay no commission at all.

This likely expands app store output by getting lower-value developers on board. In that sense, it is akin to Ramsey pricing (where a firm or utility expands social welfare by allocating a higher share of fixed costs to the most inelastic consumers). Unfortunately, this would be much harder to accomplish if high-value developers could easily bypass Apple or Google’s payment systems.
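The cross-subsidy mechanism can be sketched with a toy calculation (all revenue and cost figures below are hypothetical): because a percentage commission is state-contingent, the most successful app bears most of the platform’s fixed costs, while a flat fee sized to recover the same total would impose a uniform entry cost on every developer, including those earning nothing from purchases.

```python
# Hypothetical comparison of a 30% commission versus a flat per-app
# fee that recovers the same platform fixed costs. All numbers invented.

apps = {  # annual in-app purchase revenue per app (USD, hypothetical)
    "hit_game": 1_000_000,
    "mid_game": 50_000,
    "ad_funded_app": 0,  # monetizes through ads, so pays no commission
}

platform_fixed_costs = 315_000  # hypothetical cost of running the store

# Scheme 1: a 30% commission, which scales with each app's success.
commission = {name: 0.30 * rev for name, rev in apps.items()}

# Scheme 2: a flat fee calibrated to recover the same total revenue.
flat_fee = platform_fixed_costs / len(apps)

for name in apps:
    print(f"{name:>13}: commission ${commission[name]:>9,.0f} vs flat ${flat_fee:,.0f}")

# Under the commission, the zero-revenue app pays nothing and the hit
# game cross-subsidizes it; under the flat fee, low-value developers
# face an entry cost that could keep them off the platform entirely.
```

The numbers are invented, but the structure mirrors the argument in the text: the commission recovers the platform’s costs almost entirely from the hit game, while the flat-fee alternative charges every developer the same amount regardless of success.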

The bottom line is that Epic appears to be fighting to change Apple and Google’s app store business models in order to obtain fee schedules that are better aligned with its own interests. This is all the more important for Epic Games, given that mobile gaming is becoming increasingly popular relative to other gaming mediums (also here).

The emergence of new gaming platforms

Up to this point, I have mostly presented a zero-sum view of Epic’s lawsuit – i.e. developers and platforms fighting over the distribution of app store profits (though some smaller developers may lose out). But this ignores what is likely the chief virtue of Apple and Google’s “closed” distribution model: it has greatly expanded the market for mobile gaming (and other mobile software), and will likely continue to do so in the future.

Much has already been said about the significant security and trust benefits that Apple and Google’s curation of their app stores (including their control of in-app payments) provide to users. Benedict Evans and Ben Thompson have both written excellent pieces on this very topic. 

In a nutshell, the closed model allows previously unknown developers to expand rapidly because (i) users need not fear that their apps contain some form of malware, and (ii) it greatly reduces payment frictions, most notably security-related ones. But while these are indeed tremendous benefits, another important upside seems to have gone relatively unnoticed.

The “closed” business model also gives Apple and Google (as well as other platforms) significant incentives to develop new distribution mediums (smart TVs spring to mind) and improve existing ones. In turn, this greatly expands the audience that software developers can reach. In short, developers get a smaller share of a much larger pie.

The economics of two-sided markets are enlightening in this respect. Apple and Google’s stores are what Armstrong and Wright (here and here) refer to as “competitive bottlenecks”. That is, they compete aggressively (amongst themselves, and with other gaming platforms) to attract exclusive users. They can then charge developers a premium to access those users (note, however, that in the case at hand the incidence of those platform fees is unclear).

This gives platforms significant incentives to continuously attract and retain new users. For instance, if Steve Jobs is to be believed, giving consumers better access to media such as eBooks, video and games was one of the driving forces behind the launch of the iPad.

This model of innovation would be seriously undermined if developers and consumers could easily bypass platforms (as Epic Games is seeking to do).

In response, some commentators have countered that platforms may use their strong market positions to squeeze developers, thereby undermining software investments. But such a course of action may ultimately be self-defeating. For instance, writing about retail platforms imitating third-party sellers, Andrei Hagiu, Tat-How Teh and Julian Wright have argued that:

[T]he platform has an incentive to commit itself not to imitate highly innovative third-party products in order to preserve their incentives to innovate.

Seen in this light, Apple and Google’s 30% commissions can be understood as a soft commitment not to expropriate developers, thus leaving them with a sizable share of the revenue generated on each platform. This may explain why the 30% commission has become a standard in the games industry (and beyond).

Furthermore, from an evolutionary perspective, it is hard to argue that the 30% commission is somehow extortionate. If game developers were systematically expropriated, then the gaming industry – in particular its mobile segment – would not have grown so dramatically over the past years.

All of this likely explains why a recent survey found that 81% of app developers believed regulatory intervention would be misguided:

81% of developers and publishers believe that the relationship between them and platforms is best handled within the industry, rather than through government intervention. Competition and choice mean that developers will use platforms that they work with best.

The upshot is that the “closed” model employed by Apple and Google has served the gaming industry well. There is little compelling reason to overhaul that model today.

Final thoughts

When all is said and done, there is no escaping the fact that Epic Games is currently playing a high-stakes rent-seeking game. As Apple noted in its opposition to Epic’s motion for a temporary restraining order:

Epic did not, and has not, contested that it is in breach of the App Store Guidelines and the License Agreement. Epic’s plan was to violate the agreements intentionally in order to manufacture an emergency. The moment Fortnite was removed from the App Store, Epic launched an extensive PR smear campaign against Apple and a litigation plan was orchestrated to the minute; within hours, Epic had filed a 56-page complaint, and within a few days, filed nearly 200 pages with this Court in a pre-packaged “emergency” motion. And just yesterday, it even sought to leverage its request to this Court for a sales promotion, announcing a “#FreeFortniteCup” to take place on August 23, inviting players for one last “Battle Royale” across “all platforms” this Sunday, with prizes targeting Apple.

Epic is ultimately seeking to introduce its own app store on both Apple and Google’s platforms, or at least bypass their payment processing services (as Spotify is seeking to do in the EU).

Unfortunately, as this post has argued, condoning this type of free-riding could prove highly detrimental to the entire mobile software industry. Smaller companies would almost inevitably be left to foot a larger share of the bill, existing platforms would become less secure, and the development of new ones could be hindered. At the end of the day, 30% might actually be a small price to pay.

Copyright law, ever a sore point in some quarters, has found a new field of battle in the FCC’s recent set-top box proposal. At the request of members of Congress, the Copyright Office recently wrote a rather thorough letter outlining its view of the effects of the FCC’s proposal on rightsholders.

In sum, the Copyright Office’s letter was an even-handed look at the proposal, which concluded:

As a threshold matter, it seems critical that any revised proposal respect the authority of creators to manage the exploitation of their copyrighted works through private licensing arrangements, because regulatory actions that undermine such arrangements would be inconsistent with the rights granted under the Copyright Act.

This fairly uncontroversial statement of basic legal principle was met with cries of alarm. Stanford’s CIS, for instance, ran a post from Affiliated Scholar Annemarie Bridy that managed to trot out breathless comparisons to inapposite legal theories while simultaneously misconstruing the “fair use” doctrine (as well as how copyright law works in the video market, for that matter).

Look out! Lochner is coming!

In its letter the Copyright Office warned the FCC that its proposed rules have the potential to disrupt the web of contracts that underlie cable programming, and by extension, risk infringing the rights of copyright holders to commercially exploit their property. This analysis actually tracks what Geoff Manne and I wrote in both our initial comment and our reply comment to the set-top box proposal.

Yet Professor Bridy seems to believe that, notwithstanding the guarantees of both the Constitution and Section 106 of the Copyright Act, the FCC should have the power to abrogate licensing contracts between rightsholders and third parties.  She believes that

[t]he Office’s view is essentially that the Copyright Act gives right holders not only the limited range of rights enumerated in Section 106 (i.e., reproduction, preparation of derivative works, distribution, public display, and public performance), but also a much broader and more amorphous right to “manage the commercial exploitation” of copyrighted works in whatever ways they see fit and can accomplish in the marketplace, without any regulatory interference from the government.

What in the world does this even mean? A necessary logical corollary of the Section 106 rights includes the right to exploit works commercially as rightsholders see fit. Otherwise, what could it possibly mean to have the right to control the reproduction or distribution of a work? The truth is that Section 106 sets out a general set of rights that inhere in rightsholders with respect to their protected works, and that commercial exploitation is merely a subset of this total bundle of rights.

The ability to contract with other parties over these rights is also a necessary corollary of the property rights recognized in Section 106. After all, the right to exclude implies by necessity the right to include. Which is exactly what a licensing arrangement is.

But wait, there’s more — she actually managed to pull out the Lochner bogeyman to validate her argument!

The Office’s absolutist logic concerning freedom of contract in the copyright licensing domain is reminiscent of the Supreme Court’s now-infamous reasoning in Lochner v. New York, a 1905 case that invalidated a state law limiting maximum working hours for bakers on the ground that it violated employer-employee freedom of contract. The Court in Lochner deprived the government of the ability to provide basic protections for workers in a labor environment that subjected them to unhealthful and unsafe conditions. As Julie Cohen describes it, “‘Lochner’ has become an epithet used to characterize an outmoded, over-narrow way of thinking about state and federal economic regulation; it goes without saying that hardly anybody takes the doctrine it represents seriously.”

This is quite a leap of logic, as there is precious little in common between the letter from the Copyright Office and the Lochner opinion aside from the fact that both contain the word “contracts” in their pages.  Perhaps the most critical problem with Professor Bridy’s analogy is the fact that Lochner was about a legislature interacting with the common law system of contract, whereas the FCC is a body subordinate to Congress, and IP is both constitutionally and statutorily guaranteed. A sovereign may be entitled to interfere with the operation of common law, but an administrative agency does not have the same sort of legal status as a legislature when redefining general legal rights.

The key argument that Professor Bridy offered in support of her belief that the FCC should be free to abrogate contracts at will is that “[r]egulatory limits on private bargains may come in the form of antitrust laws or telecommunications laws or, as here, telecommunications regulations that further antitrust ends.” However, this completely misunderstands U.S. constitutional doctrine.

In particular, as Geoff Manne and I discussed in our set-top box comments to the FCC, using one constitutional clause to end-run another constitutional clause is generally a no-no:

Regardless of whether or how well the rules effect the purpose of Sec. 629, copyright violations cannot be justified by recourse to the Communications Act. Provisions of the Communications Act — enacted under Congress’s Commerce Clause power — cannot be used to create an end run around limitations imposed by the Copyright Act under the Constitution’s Copyright Clause. “Congress cannot evade the limits of one clause of the Constitution by resort to another,” and thus neither can an agency acting within the scope of power delegated to it by Congress. Establishing a regulatory scheme under the Communications Act whereby compliance by regulated parties forces them to violate content creators’ copyrights is plainly unconstitutional.

Congress is of course free to establish the implementation of the Copyright Act as it sees fit. However, unless Congress itself acts to change that implementation, the FCC — or any other party — is not at liberty to interfere with rightsholders’ constitutionally guaranteed rights.

You Have to Break the Law Before You Raise a Defense

Another bone of contention upon which Professor Bridy gnaws is a concern that licensing contracts will abrogate an alleged right to “fair use” by making the defense harder to muster:  

One of the more troubling aspects of the Copyright Office’s letter is the length to which it goes to assert that right holders must be free in their licensing agreements with MVPDs to bargain away the public’s fair use rights… Of course, the right of consumers to time-shift video programming for personal use has been enshrined in law since Sony v. Universal in 1984. There’s no uncertainty about that particular fair use question—none at all.

The major problem with this reasoning (notwithstanding the somewhat misleading drafting of Section 107) is that “fair use” is not an affirmative right, it is an affirmative defense. Despite claims that “fair use” is a right, the Supreme Court has noted on at least two separate occasions (1, 2) that Section 107 was “structured… [as]… an affirmative defense requiring a case-by-case analysis.”

Moreover, important as the Sony case is, it does not establish that “[t]here’s no uncertainty about [time-shifting as a] fair use question—none at all.” What it actually establishes is that, given the facts of that case, time-shifting was a fair use. Not for nothing does the Sony Court note at the outset of its opinion that

An explanation of our rejection of respondents’ unprecedented attempt to impose copyright liability upon the distributors of copying equipment requires a quite detailed recitation of the findings of the District Court.

But more generally, the Sony doctrine stands for the proposition that:

“The limited scope of the copyright holder’s statutory monopoly, like the limited copyright duration required by the Constitution, reflects a balance of competing claims upon the public interest: creative work is to be encouraged and rewarded, but private motivation must ultimately serve the cause of promoting broad public availability of literature, music, and the other arts. The immediate effect of our copyright law is to secure a fair return for an ‘author’s’ creative labor. But the ultimate aim is, by this incentive, to stimulate artistic creativity for the general public good. ‘The sole interest of the United States and the primary object in conferring the monopoly,’ this Court has said, ‘lie in the general benefits derived by the public from the labors of authors.’ Fox Film Corp. v. Doyal, 286 U. S. 123, 286 U. S. 127. See Kendall v. Winsor, 21 How. 322, 62 U. S. 327-328; Grant v. Raymond, 6 Pet. 218, 31 U. S. 241-242. When technological change has rendered its literal terms ambiguous, the Copyright Act must be construed in light of this basic purpose.” Twentieth Century Music Corp. v. Aiken, 422 U. S. 151, 422 U. S. 156 (1975) (footnotes omitted).

In other words, courts must balance competing interests to maximize “the general benefits derived by the public,” subject to technological change and other criteria that might shift that balance in any particular case.  

Thus, even as an affirmative defense, nothing is guaranteed. The court will have to walk through a balancing test, and only after that point, and if the accused party’s behavior has not tipped the scales against herself, will the court find the use a “fair use.”  

As I noted before,

Not surprisingly, other courts are inclined to follow the Supreme Court. Thus the Eleventh Circuit, the Southern District of New York, and the Central District of California (here and here), to name but a few, all explicitly refer to fair use as an affirmative defense. Oh, and the Ninth Circuit did too, at least until Lenz.

The Lenz case was an interesting one because, despite the above noted Supreme Court precedent treating “fair use” as a defense, it is one of the very few cases that has held “fair use” to be an affirmative right (in that case, the court decided that Section 1201 of the DMCA required consideration of “fair use” as a part of filling out a take-down notice). And in doing so, it too tried to rely on Sony to restructure the nature of “fair use.” But as I have previously written, “[i]t bears noting that the Court in Sony Corp. did not discuss whether or not fair use is an affirmative defense, whereas Acuff Rose (decided 10 years after Sony Corp.) and Harper & Row decisions do.”

Further, even the Eleventh Circuit, which the Ninth relied upon in Lenz, later clarified its position that the above-noted Supreme Court precedent definitively binds lower courts, and that “fair use” is in fact an affirmative defense.

Thus, to say that rightsholders’ licensing contracts somehow impinge upon a “right” of fair use completely puts the cart before the horse. Remember, as an affirmative defense, “fair use” is an excuse for otherwise infringing behavior, and rightsholders are well within their constitutional and statutory rights to avoid potentially infringing uses.

Think about it this way. When you commit a crime you can raise a defense: for instance, an insanity defense. But just because you might be excused for committing a crime if a court finds you were not operating with full faculties, this does not entitle every insane person to go out and commit that crime. The insanity defense can be raised only after a crime is committed, and at that point it will be examined by a judge and jury to determine if applying the defense furthers the overall criminal law scheme.

“Fair use” works in exactly the same manner. And even though Sony described how time- and space-shifting were potentially permissible, it did so only by determining on those facts that the balancing test came out to allow it. So, maybe a particular time-shifting use would be “fair use.” But maybe not. More likely, in this case, even the allegedly well-established “fair use” of time-shifting in the context of today’s digital media, on-demand programming, Netflix and the like may not meet that burden.

And what this means is that a rightsholder does not have an ex ante obligation to consider whether a particular contractual clause might in some fashion or other give rise to a “fair use” defense.

The contrary point of view makes no sense. Because “fair use” is a defense, forcing parties to build “fair use” considerations into their contractual negotiations essentially requires them to build in an allowance for infringement — and one that a court might or might not ever find appropriate in light of the requisite balancing of interests. That just can’t be right.

Instead, I think this article is just a piece of the larger IP-skeptic movement. I suspect that when “fair use” was in its initial stages of development, it was intended as a fairly gentle softening of the limits of intellectual property — something like the “public necessity” doctrine in common law with respect to real property and trespass. However, that is just not how “fair use” advocates see it today. As Geoff Manne has noted, the idea of “permissionless innovation” has wrongly come to mean “no contracts required (or permitted)”:

[Permissionless innovation] is used to justify unlimited expansion of fair use, and is extended by advocates to nearly all of copyright…, which otherwise requires those pernicious licenses (i.e., permission) from others.

But this position is nonsense — intangible property is still property. And at root, property is just a set of legal relations between persons that defines their rights and obligations with respect to some “thing.” It doesn’t matter if you can hold that thing in your hand or not. As property, IP can be subject to transfer and control through voluntarily created contracts.

Even if “fair use” were some sort of as-yet unknown fundamental right, it would still be subject to limitations upon it by other rights and obligations. To claim that “fair use” should somehow trump the right of a property holder to dispose of the property as she wishes is completely at odds with our legal system.

In a thorough and convincing paper, “The FTC’s Proposal for Regulating IP through SSOs Would Replace Private Coordination with Government Hold-Up,” Richard Epstein, Scott Kieff and Dan Spulber assess and then decimate the FTC’s proposal on patent notice and remedies, “The Evolving IP Marketplace: Aligning Patent Notice and Remedies with Competition.”  Note Epstein, Kieff and Spulber:

In its recent report entitled “The Evolving IP Marketplace,” the Federal Trade Commission (FTC) advances a far-reaching regulatory approach (Proposal) whose likely effect would be to distort the operation of the intellectual property (IP) marketplace in ways that will hamper the innovation and commercialization of new technologies. The gist of the FTC Proposal is to rely on highly non-standard and misguided definitions of economic terms of art such as “ex ante” and “hold-up,” while urging new inefficient rules for calculating damages for patent infringement. Stripped of the technicalities, the FTC Proposal would so reduce the costs of infringement by downstream users that the rate of infringement would unduly increase, as potential infringers find it in their interest to abandon the voluntary market in favor of a more attractive system of judicial pricing. As the number of nonmarket transactions increases, the courts will play an ever larger role in deciding the terms on which the patents of one party may be used by another party. The adverse effects of this new trend will do more than reduce the incentives for innovation; it will upset the current set of well-functioning private coordination activities in the IP marketplace that are needed to accomplish the commercialization of new technologies. Such a trend would seriously undermine capital formation, job growth, competition, and the consumer welfare the FTC seeks to promote.

Focusing in particular on SSOs, the trio homes in on the potential incentive problem created by the FTC’s proposal:

The central problem with the FTC’s approach is that it would interfere seriously with the helpful incentives all parties in the IP marketplace presently have to contract with each other. The FTC’s approach ignores the powerful incentives that it creates in putative licensees to spurn the voluntary market in order to obtain a strategic advantage over the licensor. In any voluntary market, the low rates that go to initial licensees reflect the uncertainty of the value of the patented technology at the time the license is issued. Once that technology has proven its worth, there is no sound reason to allow any potential licensee who instead held out from the originally offered deal to get bargain rates down the road. Allowing such an option would make the holdout better off than the contracting party. Such holdouts would not need to take licenses for technologies with low value, while resting assured they would still get technologies with high value at below market rates. The FTC seems to overlook that a well-functioning patent damage system should do more than merely calibrate damages after the fact. An efficient approach to damages is one that also reduces the number of infringements overall by making sure that the infringer cannot improve his economic position by his own wrong.

The FTC Proposal rests on the misguided conviction that the law should not allow a licensor to “demand and obtain royalty payments based on the infringer’s switching costs” once the manufacturer has “sunk costs into using the technology;” and it labels any such payments as the result of “hold-up.”

As Epstein, et al. discuss, current private ordering (reciprocal dealing, repeat play, RAND terms, etc.) works perfectly well to address real hold-up problems, and the FTC seems to be both defining the problem oddly and, thus, creating a problem that doesn’t really exist.

Although not discussed directly, the paper owes a great deal to the great Ben Klein and especially his paper, Why Hold-Ups Occur: The Self-Enforcing Range of Contractual Relationships (to say nothing of Klein, Crawford & Alchian, of course). Likewise, although not discussed in the paper, Josh and Bruce Kobayashi’s excellent paper, Federalism, Substantive Preemption and Limits on Antitrust: An Application to Patent Holdup is an essential precursor to this paper, addressing the comparative merits of antitrust and contract-based evaluation of claimed patent holdups in SSOs.

Highly-recommended and an important addition to the ever-interesting antitrust/IP discussion.