
Still from Squid Game, Netflix and Siren Pictures Inc., 2021

Recent commentary on the proposed merger between WarnerMedia and Discovery, as well as Amazon’s acquisition of MGM, often has included the suggestion that the online content-creation and video-streaming markets are excessively consolidated, or that they will become so absent regulatory intervention. For example, in a recent letter to the U.S. Justice Department (DOJ), the American Antitrust Institute and Public Knowledge opine that:

Slow and inadequate oversight risks the streaming market going the same route as cable—where consumers have little power, few options, and where consolidation and concentration reign supreme. A number of threats to competition are clear, as discussed in this section, including: (1) market power issues surrounding content and (2) the role of platforms in “gatekeeping” to limit competition.

But the AAI/PK assessment overlooks key facts about the video-streaming industry, some of which suggest that, if anything, these markets currently suffer from too much fragmentation.

The problem is well-known: any individual video-streaming service will offer only a fraction of the content that viewers want, but budget constraints limit the number of services that a household can afford to subscribe to. It may be counterintuitive, but consolidation in the market for video-streaming can solve both problems at once.

One subscription is not enough

Surveys find that U.S. households currently maintain, on average, four video-streaming subscriptions. This explains why even critics concede that a plethora of streaming services compete for consumer eyeballs. For instance, the AAI and PK point out that:

Today, every major media company realizes the value of streaming and a bevy of services have sprung up to offer different catalogues of content.

These companies have challenged the market leader, Netflix, and include: Prime Video (2006), Hulu (2007), Paramount+ (2014), ESPN+ (2018), Disney+ (2019), Apple TV+ (2019), HBO Max (2020), Peacock (2020), and Discovery+ (2021).

With content scattered across several platforms, multiple subscriptions are the only way for households to access all (or most) of the programs they desire. Indeed, other than price, library sizes and the availability of exclusive content are reportedly the main drivers of consumer purchase decisions.

Of course, there is nothing inherently wrong with the current equilibrium in which consumers multi-home across multiple platforms. One potential explanation is demand for high-quality exclusive content, which requires tremendous investment to develop and promote. Production costs for TV series routinely run in the tens of millions of dollars per episode (see here and here). Economic theory predicts these relationship-specific investments made by both producers and distributors will cause producers to opt for exclusive distribution or vertical integration. The most sought-after content is thus exclusive to each platform. In other words, exclusivity is likely the price that users must pay to ensure that high-quality entertainment continues to be produced.

But while this paradigm has many strengths, the ensuing fragmentation can be detrimental to consumers, as it may lead to double marginalization or to more mundane problems like subscription fatigue. Consolidation can be a solution to both.

Substitutes, complements, or unrelated?

As Hal Varian explains in his seminal book, the relationship between two goods can fall anywhere along a spectrum bounded by three reference points: perfect substitutes (i.e., two goods are perfectly interchangeable); perfect complements (i.e., there is no value to owning one good without the other); and independent goods (i.e., the price of one good does not affect demand for the other).

These distinctions are critical when it comes to market concentration. All else equal—which is obviously not the case in reality—increased concentration leads to lower prices for complements and higher prices for substitutes. And if demand for two goods is unrelated, then bringing them under common ownership should not affect their prices.
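To see why common ownership of complements tends to lower prices, consider a minimal numerical sketch. The linear demand curve, zero costs, and symmetric firms below are illustrative assumptions, not a description of any actual streaming market:

```python
# Illustrative sketch: two perfect complements (say, two "must-have" content
# libraries) sold either by separate firms or by a single merged firm.
# Assumptions (purely for illustration): linear demand Q = 1 - (pA + pB),
# zero marginal costs, Nash equilibrium pricing when firms are separate.

def merged_price():
    # A single owner chooses the total price P to maximize P * (1 - P).
    # First-order condition: 1 - 2P = 0  =>  P = 0.5
    return 0.5

def separate_prices():
    # Each firm i sets p_i to maximize p_i * (1 - p_i - p_j), taking p_j as given.
    # Best response: p_i = (1 - p_j) / 2. Iterate to the fixed point.
    pA = pB = 0.0
    for _ in range(100):
        pA = (1 - pB) / 2
        pB = (1 - pA) / 2
    return pA + pB

P_separate = separate_prices()
P_merged = merged_price()

print(f"Total price, separate owners: {P_separate:.3f} (output {1 - P_separate:.3f})")
print(f"Total price, merged owner:    {P_merged:.3f} (output {1 - P_merged:.3f})")
# Separate pricing yields P = 2/3 and Q = 1/3; a merged owner charges P = 1/2
# and sells Q = 1/2. Common ownership internalizes the pricing externality,
# the same logic as the double-marginalization problem mentioned above.
```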

To some extent, then, streaming services should be seen as complements rather than substitutes—or, at the very least, as services with unrelated demand. If they were perfect substitutes, consumers would be indifferent between two Netflix subscriptions and a combination of one Netflix plan and one Amazon Prime plan. That is obviously not the case. Nor are they perfect complements, which would mean that Netflix is worthless without Amazon Prime, Disney+, and other services.

However, there is reason to believe there exists some complementarity between streaming services, or at least that demand for them is independent. Most consumers subscribe to multiple services, and almost no one subscribes to the same service twice:

[Chart omitted; source: Finance Buzz]

This assertion is also supported by the ubiquitous bundling of subscriptions in the cable distribution industry, a practice that has recently appeared in video-streaming markets as well. For example, in the United States, Disney+ can be purchased in a bundle with Hulu and ESPN+.

The key question is whether each service is worth more, less, or the same in isolation as it is when bundled. If households place some additional value on having a complete video offering (one that includes children's entertainment, sports, more mature content, etc.), and if they value the convenience of accessing more of their content via a single app, then we can infer these services are to some extent complementary.

Finally, it is worth noting that any complementarity between these services would be largely endogenous. If the industry suddenly switched to a paradigm of non-exclusive content—as is broadly the case for audio streaming—the above analysis would be altered (though, as explained above, such a move would likely be detrimental to users). Streaming services would become substitutes if they offered identical catalogues.

In short, the extent to which streaming services are complements ultimately boils down to an empirical question that may fluctuate with industry practices. As things stand, there is reason to believe that these services feature some complementarities, or at least that demand for them is independent. In turn, this suggests that further consolidation within the industry would not lead to price increases and may even reduce them.

Consolidation can enable price discrimination

It is well-established that bundling entertainment goods can enable firms to better engage in price discrimination, often increasing output and reducing deadweight loss in the process.

Take George Stigler's famous explanation for the practice of "block booking," in which movie studios sold multiple films to independent movie theaters as a unit. Stigler assumes the underlying goods are neither substitutes nor complements.

Stigler, George J. (1963), "United States v. Loew's Inc.: A Note on Block-Booking," Supreme Court Review, Vol. 1963, No. 1, Article 2.

The upshot is that, when consumer tastes for content are idiosyncratic—as is almost certainly the case for movies and television series—it can counterintuitively make sense to sell differing content as a bundle. In doing so, the distributor avoids pricing consumers out of the content upon which they place a lower value. Moreover, this solution is more efficient than price discriminating on an unbundled basis, as doing so would require far more information on the seller's part and would be vulnerable to arbitrage.
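A stylized calculation in the spirit of Stigler's example shows the mechanism. The valuations below are hypothetical illustrations, not data:

```python
# Hypothetical valuations for two films by two theaters with negatively
# correlated tastes, in the spirit of Stigler's block-booking example.
# The distributor must charge every buyer the same price per item.

valuations = {
    "Theater 1": {"Film A": 8000, "Film B": 2500},
    "Theater 2": {"Film A": 7000, "Film B": 3000},
}

def best_uniform_price_revenue(values):
    """Revenue at the best single price, given each buyer's valuation."""
    best = 0
    for p in values:  # candidate prices are the buyers' valuations
        revenue = p * sum(v >= p for v in values)
        best = max(best, revenue)
    return best

# Selling each film separately at its best uniform price.
separate = sum(
    best_uniform_price_revenue([v[film] for v in valuations.values()])
    for film in ["Film A", "Film B"]
)

# Selling both films as a bundle at the best uniform bundle price.
bundle_values = [sum(v.values()) for v in valuations.values()]
bundled = best_uniform_price_revenue(bundle_values)

print(f"Revenue selling separately: {separate}")  # 14000 + 5000 = 19000
print(f"Revenue selling as bundle:  {bundled}")   # 10000 x 2 = 20000
# Bundling earns more, and both theaters still end up with both films.
```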

In short, bundling enables each consumer to access a much wider variety of content. This, in turn, provides a powerful rationale for mergers in the video-streaming space—particularly where they can bring together varied content libraries. Put differently, it cuts in favor of more, not less, concentration in video-streaming markets (at least, up to a certain point).

Finally, a wide array of scale-related economies further supports the case for concentration in video-streaming markets. These include potential economies of scale, network effects, and reduced transaction costs.

The simplest of these ideas is that the cost of video streaming may decrease at the margin (i.e., serving each additional viewer might be cheaper than serving the previous one). In other words, mergers of video-streaming services may enable platforms to operate at a more efficient scale. There has notably been some discussion of whether Netflix benefits from scale economies of this sort. But this is, of course, ultimately an empirical question. As I have written with Geoffrey Manne, we should not assume that this is the case for all digital platforms, or that these increasing returns are present at all ranges of output.

Likewise, the fact that content can earn greater revenues by reaching a wider audience (or a greater number of small niches) may increase a producer's incentive to create high-quality content. For example, Netflix's recent hit series Squid Game reportedly cost $16.8 million to produce a total of nine episodes, a significant sum for a Korean-language thriller. These expenditures were likely only possible because of Netflix's vast network of viewers. Video-streaming mergers can jump-start these effects by bringing previously fragmented audiences onto a single platform.

Finally, operating at a larger scale may enable firms and consumers to economize on various transaction and search costs. For instance, consumers don’t need to manage several subscriptions, and searching for content is easier within a single ecosystem.

Conclusion

In short, critics could hardly be more wrong in assuming that consolidation in the video-streaming industry will necessarily harm consumers. To the contrary, these mergers should be presumptively welcomed because, to a first approximation, they are likely to engender lower prices and reduce deadweight loss.

Critics routinely draw parallels between video streaming and the wave of consolidation that previously swept through the cable industry. They cite those events as evidence that consolidation was (and still is) inefficient and exploitative of consumers. As AAI and PK frame it:

Moreover, given the broader competition challenges that reside in those markets, and the lessons learned from a failure to ensure competition in the traditional MVPD markets, enforcers should be particularly vigilant.

But while it might not have been ideal for all consumers, the comparatively laissez-faire approach to competition in the cable industry arguably facilitated the United States’ emergence as a global leader for TV programming. We are now witnessing what appears to be a similar trend in the online video-streaming market.

This is mostly a good thing. While a single streaming service might not be the optimal industry configuration from a welfare standpoint, it would be equally misguided to assume that fragmentation necessarily benefits consumers. In fact, as argued throughout this piece, there are important reasons to believe that the status quo—with at least 10 significant players—is too fragmented and that consumers would benefit from additional consolidation.

The European Court of Justice issued its long-awaited ruling Dec. 9 in the Groupe Canal+ case. The case centered on licensing agreements in which Paramount Pictures granted absolute territorial exclusivity to several European broadcasters, including Canal+.

Back in 2015, the European Commission charged six U.S. film studios, including Paramount,  as well as British broadcaster Sky UK Ltd., with illegally limiting access to content. The crux of the EC’s complaint was that the contractual agreements to limit cross-border competition for content distribution ran afoul of European Union competition law. Paramount ultimately settled its case with the commission and agreed to remove the problematic clauses from its contracts. This affected third parties like Canal+, who lost valuable contractual protections. 

While the ECJ ultimately upheld the agreements on what amounts to procedural grounds (Canal+ was unduly affected by a decision to which it was not a party), the case provides yet another example of the European Commission’s misguided stance on absolute territorial licensing, sometimes referred to as “geo-blocking.”

The EC’s long-running efforts to restrict geo-blocking emerge from its attempts to harmonize trade across the EU. Notably, in its Digital Single Market initiative, the Commission envisioned

[A] Digital Single Market is one in which the free movement of goods, persons, services and capital is ensured and where individuals and businesses can seamlessly access and exercise online activities under conditions of fair competition, and a high level of consumer and personal data protection, irrespective of their nationality or place of residence.

This policy stance has been endorsed consistently by the European Court of Justice. In the 2011 Murphy decision, for example, the court held that agreements between rights holders and broadcasters infringe European competition law when they categorically prevent the latter from supplying "decoding devices" to consumers located in other member states. More precisely, while rights holders can license their content on a territorial basis, they cannot restrict so-called "passive sales"; broadcasters can be prevented from actively chasing consumers in other member states, but not from serving them altogether. If this sounds Kafkaesque, it's because it is.

The problem with the ECJ's vision is that it elides the complex factors that underlie a healthy free-trade zone. Geo-blocking frequently is misunderstood or derided by consumers as an unwarranted restriction on their consumption preferences. It doesn't feel "fair" or "seamless" when a rights holder can decide who can access their content and on what terms. But that doesn't mean geo-blocking is a nefarious or socially harmful practice. Quite the contrary: allowing creators to craft different sets of distribution options offers both a return to the creators and more choice in general to consumers.

In economic terms, geo-blocking allows rights holders to engage in third-degree price discrimination; that is, they have the ability to charge different prices for different sets of consumers. This type of pricing will increase total welfare so long as it increases output. As Hal Varian puts it:

If a new market is opened up because of price discrimination—a market that was not previously being served under the ordinary monopoly—then we will typically have a Pareto improving welfare enhancement.
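To make the logic concrete, here is a minimal numerical sketch. The linear demand curves and cost figure are invented for illustration and describe no actual content market: when the profit-maximizing uniform price would leave the low-willingness-to-pay territory unserved, pricing each territory separately opens that market, raises output, and leaves the other territory no worse off.

```python
import numpy as np

# Two territories with hypothetical linear demands and a common unit cost.
# Territory 1 (high willingness to pay): Q1(p) = max(10 - p, 0)
# Territory 2 (low willingness to pay):  Q2(p) = max(4 - p, 0)
c = 2.0
q1 = lambda p: max(10 - p, 0)
q2 = lambda p: max(4 - p, 0)
prices = np.linspace(0, 10, 10001)

# Single (uniform) price across both territories.
uniform_profits = [(p - c) * (q1(p) + q2(p)) for p in prices]
p_uniform = prices[int(np.argmax(uniform_profits))]

# Separate (discriminatory) prices, one per territory.
p1 = prices[int(np.argmax([(p - c) * q1(p) for p in prices]))]
p2 = prices[int(np.argmax([(p - c) * q2(p) for p in prices]))]

print(f"Uniform price: {p_uniform:.2f} -> territory 2 served: {q2(p_uniform) > 0}")
print(f"Separate prices: {p1:.2f} and {p2:.2f} -> territory 2 served: {q2(p2) > 0}")
print(f"Total output, uniform:        {q1(p_uniform) + q2(p_uniform):.2f}")
print(f"Total output, discrimination: {q1(p1) + q2(p2):.2f}")
# With these numbers the uniform optimum is p = 6 and territory 2 goes unserved;
# under discrimination, territory 1 still pays 6 while territory 2 is served
# at 3, so output rises and no one is worse off: Varian's Pareto improvement.
```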

Another benefit of third-degree price discrimination is that, by shifting some economic surplus from consumers to firms, it can stimulate investment in much the same way copyright and patents do. Put simply, the prospect of greater economic rents increases the maximum investment firms will be willing to make in content creation and distribution.

For these reasons, respecting parties’ freedom to license content as they see fit is likely to produce much more efficient outcomes than annulling those agreements through government-imposed “seamless access” and “fair competition” rules. Part of the value of copyright law is in creating space to contract by protecting creators’ property rights. Without geo-blocking, the enforcement of licensing agreements would become much more difficult. Laws restricting copyright owners’ ability to contract freely reduce allocational efficiency, as well as the incentives to create in the first place. Further, when individual creators have commercial and creative autonomy, they gain a degree of predictability that can ensure they will continue to produce content in the future. 

The European Union would do well to adopt a more nuanced understanding of the contractual relationships between producers and distributors. 

Hardly a day goes by without news of further competition-related intervention in the digital economy. The past couple of weeks alone have seen the European Commission announce various investigations into Apple's App Store (here and here) and reaffirm its desire to regulate so-called "gatekeeper" platforms, not to mention the CMA issuing its final report on online platforms and digital advertising.

While the limits of these initiatives have already been thoroughly dissected (e.g. here, here, here), a fundamental question seems to have eluded discussions: What are authorities trying to achieve here?

At first sight, the answer might appear to be extremely simple. Authorities want to “bring more competition” to digital markets. Furthermore, they believe that this competition will not arise spontaneously because of the underlying characteristics of digital markets (network effects, economies of scale, tipping, etc). But while it may have some intuitive appeal, this answer misses the forest for the trees.

Let us take a step back. Digital markets could have taken a vast number of shapes, so why have they systematically gravitated towards those very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones? Indeed, if recent commentary is to be believed, it is the latter that should succeed because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into the breach – i.e., arbitrage. This does not seem to be happening in the digital economy. The naïve answer is to say that this is precisely the problem; the harder task is to actually understand why.

To draw a parallel with evolution: in the late 18th century, botanists discovered an orchid with an unusually long spur. This made its nectar incredibly hard to reach for insects. Rational observers at the time could be forgiven for thinking that this plant made no sense, that its design was suboptimal. And yet, decades later, Darwin conjectured that the plant could be explained by a (yet to be discovered) species of moth with a proboscis long enough to reach the orchid's nectar. Decades after his death, the discovery of the xanthopan moth proved him right.

Returning to the digital economy, we thus need to ask why the platform business models that authorities desire are not the ones that emerge organically. Unfortunately, this complex question is mostly overlooked by policymakers and commentators alike.

Competition law on a spectrum

To understand the above point, let me start with an assumption: the digital platforms that have been subject to recent competition cases and investigations can all be classified along two (overlapping) dimensions: the extent to which they are open (or closed) to "rivals" and the extent to which their assets are propertized (as opposed to being shared). This distinction borrows heavily from Jonathan Barnett's work on the topic. I believe that by applying such a classification, we would obtain a two-by-two graph whose quadrants look something like the following:

While these classifications are certainly not airtight, this would be my reasoning:

In the top-left quadrant, Apple and Microsoft both operate closed platforms that are highly propertized (Apple's platform is likely even more closed than Microsoft's Windows ever was). Both firms notably control who is allowed on their platforms and how they can interact with users. Apple notably vets the apps that are available on its App Store and influences how payments can take place. Microsoft famously restricted OEMs' freedom to distribute Windows PCs as they saw fit (notably by "imposing" certain default apps and, arguably, limiting the compatibility of Microsoft systems with servers running other OSs).

In the top-right quadrant, the business models of Amazon and Qualcomm are much more "open," yet they remain highly propertized. Almost anyone is free to implement Qualcomm's IP – so long as they conclude a license agreement to do so. Likewise, there are very few limits on the goods that can be sold on Amazon's platform, but Amazon does, almost by definition, exert significant control over the way in which the platform is monetized. Retailers can notably pay Amazon for product placement, fulfillment services, etc.

Finally, Google Search and Android sit in the bottom-left corner. Both of these services are weakly propertized. The Android source code is shared freely via an open-source license, and Google's apps can be preloaded by OEMs free of charge. The only limit is that Google partially closes its platform, notably by requiring that its own apps (if they are pre-installed) receive favorable placement. Likewise, Google's search engine is only partially "open." While any website can be listed on the search engine, Google selects a number of specialized results that are presented more prominently than organic search results (weather information, maps, etc.). There is also some amount of propertization, namely that Google sells the best "real estate" via ad placement.

Enforcement

Readers might ask: what is the point of this classification? The answer is that, in each of the above cases, competition intervention attempted (or is attempting) to move firms/platforms towards more openness and less propertization – the opposite of their original design.

The Microsoft cases and the Apple investigation both sought/seek to bring more openness and less propertization to these respective platforms. Microsoft was made to share proprietary data with third parties (less propertization) and open up its platform to rival media players and web browsers (more openness). The same applies to Apple. Available information suggests that the Commission is seeking to limit the fees that Apple can extract from downstream rivals (less propertization), as well as ensuring that it cannot exclude rival mobile payment solutions from its platform (more openness).

The various cases that were brought by EU and US authorities against Qualcomm broadly sought to limit the extent to which it was monetizing its intellectual property. The European Amazon investigation centers on the way in which the company uses data from third-party sellers (and ultimately the distribution of revenue between them and Amazon). In both of these cases, authorities are ultimately trying to limit the extent to which these firms propertize their assets.

Finally, both of the Google cases, in the EU, sought to bring more openness to the company’s main platform. The Google Shopping decision sanctioned Google for purportedly placing its services more favorably than those of its rivals. And the Android decision notably sought to facilitate rival search engines’ and browsers’ access to the Android ecosystem. The same appears to be true of ongoing investigations in the US.

What is striking about these decisions/investigations is that authorities are pushing back against the distinguishing features of the platforms they are investigating. Closed (or relatively closed) platforms are being opened up, and firms with highly propertized assets are being made to share them (or, at the very least, to monetize them less aggressively).

The empty quadrant

All of this would not be very interesting if it weren’t for a final piece of the puzzle: the model of open and shared platforms that authorities apparently favor has traditionally struggled to gain traction with consumers. Indeed, there seem to be very few successful consumer-oriented products and services in this space.

There have been numerous attempts to introduce truly open consumer-oriented operating systems – in both the mobile and desktop segments. For the most part, these have ended in failure. Ubuntu and other Linux distributions remain fringe products. There have been attempts to create open-source search engines; again, they have not met with success. The picture is similar in the online retail space. Amazon appears to have beaten eBay despite the latter being more open and less propertized – Amazon has historically charged higher fees than eBay and offers sellers much less freedom in the way they sell their goods. This theme is repeated in the standardization space. There have been innumerable attempts to impose open, royalty-free standards. At least in the mobile internet industry, few if any of these have taken off (5G and WiFi are the best examples of this trend). That pattern is repeated in other highly standardized industries, like digital video formats. Most recently, the proprietary Dolby Vision format seems to be winning the war against the open HDR10+ format.

This is not to say there haven't been any successful ventures in this space – the internet, blockchain, and Wikipedia all spring to mind – or that we will not see more decentralized goods in the future. But, by and large, firms and consumers have not yet taken to the idea of open and shared platforms. And while some "open" projects have achieved tremendous scale, the consumer-facing side of these platforms is often dominated by intermediaries that opt for much more traditional business models (think of Coinbase and blockchain, or Android and Linux).

An evolutionary explanation?

The preceding paragraphs have posited a recurring reality: the digital platforms that competition authorities are trying to bring about are fundamentally different from those that emerge organically. This raises the question: why have authorities' ideal platforms so far failed to achieve truly meaningful success at consumers' end of the market?

I can see at least three potential explanations:

  1. Closed/propertized platforms have systematically (and perhaps anticompetitively) thwarted their open/shared rivals;
  2. Shared platforms have failed to emerge because they are much harder to monetize (and there is thus less incentive to invest in them);
  3. Consumers have opted for closed systems precisely because they are closed.

I will not go into detail over the merits of the first conjecture. Current antitrust debates have endlessly rehashed this proposition. However, it is worth mentioning that many of today's dominant platforms overcame open/shared rivals well before they achieved their current size (Unix is older than Windows, Linux is older than iOS, eBay and Amazon are basically the same age, etc.). It is thus difficult to make the case that the early success of their business models was down to anticompetitive behavior.

Much more interesting is the fact that options (2) and (3) are almost systematically overlooked – especially by antitrust authorities. And yet, if true, both of them would strongly cut against current efforts to regulate digital platforms and ramp up antitrust enforcement against them.

For a start, it is not unreasonable to suggest that highly propertized platforms are generally easier to monetize than shared ones (2). For example, open-source platforms often rely on complementarities for monetization, a model that tends to be vulnerable to outside competition and free-riding. If this is true, then there is a natural incentive for firms to invest and innovate in more propertized environments. In turn, competition enforcement that limits platforms' ability to propertize their assets may harm innovation.

Similarly, authorities should at the very least reflect on whether consumers really want the more "competitive" ecosystems that they are trying to design (3).

For instance, it is striking that the European Commission has a long track record of seeking to open up digital platforms (the Microsoft decisions are perhaps the most salient example). And yet, even after these interventions, new firms have kept using the very business model that the Commission reprimanded: Apple tied the Safari browser to its iPhones, Google went to some lengths to ensure that Chrome was preloaded on devices, and Samsung phones come with Samsung Internet as the default browser. But this has not deterred consumers. A sizable share of them notably opted for Apple's iPhone, which is even more centrally curated than Microsoft Windows ever was (and the same is true of Apple's macOS).

Finally, it is worth noting that the remedies imposed by competition authorities are anything but unmitigated successes. Windows XP N (the version of Windows that came without Windows Media Player) was an unprecedented flop – it sold a paltry 1,787 copies. Likewise, the browser ballot screen imposed by the Commission was so irrelevant to consumers that it took months for authorities to notice that Microsoft had removed it, in violation of the Commission's decision.

There are many reasons why consumers might prefer “closed” systems – even when they have to pay a premium for them. Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform. 

It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple or Android smartphone (or a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision. They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome Browser and Google Search. Furthermore, forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire. 

To conclude, consumers and firms appear to gravitate towards both closed and highly propertized platforms, the opposite of what the Commission and many other competition authorities favor. The reasons for this trend are still misunderstood, and mostly ignored. Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. This post certainly does not purport to answer the complex question of “the origin of platforms”, but it does suggest that what some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said this best when he quipped that economists always find a monopoly explanation for things that they fail to understand. The digital economy might just be the latest in this unfortunate trend.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Dirk Auer (Senior Researcher, Liege Competition & Innovation Institute; Senior Fellow, ICLE).]

Privacy absolutism is the belief that protecting citizens' privacy supersedes all other policy goals, especially economic ones. This is a mistake. Privacy is one value among many, not an end in itself. Unfortunately, the absolutist worldview has filtered into policymaking and is beginning to have very real consequences. Readers need look no further than contact tracing applications and the fight against Covid-19.

Covid-19 has presented the world with a privacy conundrum worthy of the big screen. In fact, it's a plotline we've seen before. Moviegoers will recall that, in the wildly popular film "The Dark Knight," Batman has to decide between preserving the privacy of Gotham's citizens or resorting to mass surveillance in order to defeat the Joker. Ultimately, the caped crusader begrudgingly chooses the latter. Before the Covid-19 outbreak, this might have seemed like an unrealistic plot twist. Fast forward a couple of months, and it neatly illustrates the difficult decision that most western societies urgently need to make as they consider the use of contact tracing apps to fight Covid-19.

Contact tracing is often cited as one of the most promising tools to safely reopen Covid-19-hit economies. Unfortunately, its adoption has been severely undermined by a barrage of overblown privacy fears.

Take the contact tracing API and app co-developed by Apple and Google. While these firms' efforts to rapidly introduce contact tracing tools are laudable, it is hard to shake the feeling that they have been holding back slightly.

In an overt attempt to protect users’ privacy, Apple and Google’s joint offering does not collect any location data (a move that has irked some states). Similarly, both firms have repeatedly stressed that users will have to opt-in to their contact tracing solution (as opposed to the API functioning by default). And, of course, all the data will be anonymous – even for healthcare authorities. 

This is a missed opportunity. Google and Apple's networks include billions of devices. That puts them in a unique position to rapidly achieve the scale required to successfully enable the tracing of Covid-19 infections. Contact tracing applications need to reach a critical mass of users to be effective. For instance, some experts have argued that an adoption rate of at least 60% is necessary. Unfortunately, existing apps – notably in Singapore, Australia, Norway and Iceland – have struggled to get anywhere near this number. Making Google and Apple's services opt-out rather than opt-in could go a long way towards inverting this trend. Businesses could also boost these numbers by making the apps mandatory for their employees and customers.
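A simple back-of-the-envelope calculation helps explain why such high adoption thresholds are cited. Under a random-mixing assumption (a simplification), an encounter is only logged when both parties run the app, so the share of contacts covered scales roughly with the square of the adoption rate:

```python
# Why adoption rates matter so much for contact tracing apps: both parties to a
# contact must be running the app for the encounter to be logged, so (assuming
# random mixing) the share of contacts captured is roughly the adoption rate squared.
for adoption in (0.2, 0.4, 0.6, 0.8):
    print(f"Adoption {adoption:.0%} -> contacts covered ~{adoption**2:.0%}")
# At 60% adoption, only ~36% of contact events involve two app users, which is
# one reason experts argue for very high uptake before the tool is effective.
```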

However, it is hard to blame Google or Apple for not pushing the envelope a little bit further. For the best part of a decade, they and other firms have repeatedly faced specious accusations of "surveillance capitalism." This has notably resulted in heavy-handed regulation (including the GDPR in the EU and the CCPA in California), as well as significant fines and settlements.

Those chickens have now come home to roost. The firms that are probably best-placed to implement an effective contact tracing solution simply cannot afford the privacy-related risks. This includes the risk associated with violating existing privacy law, but also potential reputational consequences. 

Matters have also been exacerbated by the overly cautious stance of many western governments, as well as their citizens: 

  • The European Data Protection Board cautioned governments and private sector actors to anonymize location data collected via contact tracing apps. The European Parliament made similar pronouncements.
  • A group of Democratic Senators pushed back against Apple and Google’s contact tracing solution, notably due to privacy considerations.
  • And public support for contact tracing is also critically low. Surveys in the US show that contact tracing is significantly less popular than more restrictive policies, such as business and school closures. Similarly, polls in the UK suggest that between 52% and 62% of Britons would consider using contact tracing applications.
  • Belgium’s initial plans for a contact tracing application were struck down by its data protection authority on account that they did not comply with the GDPR.
  • Finally, across the globe, there has been pushback against so-called “centralized” tracing apps, notably due to privacy fears.

In short, the West’s insistence on maximizing privacy protection is holding back its efforts to combat the joint threats posed by Covid-19 and the unfolding economic recession. 

But contrary to the mass surveillance portrayed in "The Dark Knight," the privacy risks entailed by contact tracing are for the most part negligible. State surveillance is hardly a prospect in western democracies. And the risk of data breaches is no greater here than with many other apps and services that we all use daily. To wit, passwords, email accounts, and identities are still, by far, the most common targets for cyber attackers. Put differently, cyber criminals appear to be more interested in stealing assets that can be readily monetized than in location data that is almost worthless. This suggests that contact tracing applications, whether centralized or not, are unlikely to be an important target for cyberattackers.

The meager risks entailed by contact tracing – regardless of how it is ultimately implemented – are thus a tiny price to pay if they enable some return to normalcy. At the time of writing, at least 5.8 million human beings have been infected with Covid-19, causing an estimated 358,000 deaths worldwide. Both Covid-19 and the measures intended to combat it have resulted in a collapse of the global economy – what the IMF has called "the worst economic downturn since the Great Depression." Freedoms that the West had taken for granted have suddenly evaporated: the freedom to work, to travel, to see loved ones, etc. Can anyone honestly claim that it is not worth temporarily sacrificing some privacy to partially regain these liberties?

More generally, it is not just contact tracing applications and the fight against Covid-19 that have suffered because of excessive privacy fears. The European GDPR offers another salient example. Whatever one thinks about the merits of privacy regulation, it is becoming increasingly clear that the EU overstepped the mark. For instance, an early empirical study found that the entry into force of the GDPR markedly decreased venture capital investments in Europe. Michal Gal aptly summarizes the implications of this emerging body of literature:

The price of data protection through the GDPR is much higher than previously recognized. The GDPR creates two main harmful effects on competition and innovation: it limits competition in data markets, creating more concentrated market structures and entrenching the market power of those who are already strong; and it limits data sharing between different data collectors, thereby preventing the realization of some data synergies which may lead to better data-based knowledge. […] The effects on competition and innovation identified may justify a reevaluation of the balance reached to ensure that overall welfare is increased. 

In short, just like the Dark Knight, policymakers, firms and citizens around the world need to think carefully about the tradeoff that exists between protecting privacy and other objectives, such as saving lives, promoting competition, and increasing innovation. As things stand, however, it seems that many have veered too far on the privacy end of the scale.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Kristian Stout (Associate Director, International Center for Law & Economics).]


The ongoing pandemic has been an opportunity to explore different aspects of the human condition. For myself, I have learned that, despite a deep commitment to philosophical (neo- or classical-) liberalism, at heart I am pragmatic. I would prefer a society that optimizes for more individual liberty, but I am emphatically not someone who would even entertain the idea of using crises to advance my agenda when doing so is not clearly in service of ameliorating immediate problems.

Sadly, I have also learned that there are those who are not similarly pragmatic, and who are willing to advance their ideological agenda come hell or high water. In this regard, I was disappointed yesterday to see the Gurry IP/COVID Letter being passed around Twitter, calling for widespread, worldwide interference with the property rights of IPR holders.

The letter calls for a scattershot set of “remedies” to the crisis that would open access to copyright- and patent-protected inventions and content, including (among other things): 

  • voluntary licensing and non-enforcement of IP;
  • abrogation of IPR by WIPO members using the  “flexibility” in the international IP regime; 
  • the removal of geographical restrictions on IP licenses;
  • forcing patents into COVID-19 patent pools; and 
  • the implementation of compulsory licensing. 

And, unlike many prior efforts to push the envelope on weakening IP protections, the Gurry Letter also calls for measures that would weaken trade secrets and expose confidential business information in order to “achieve universal and equitable access to COVID-19 medicines and medical technologies as soon as reasonably possible.”

Notably, nothing in the letter suggests that any of these measures should be regarded as temporary.

We all want treatments for infection, vaccines for prevention, and ample supply of personal protective equipment as soon as possible, but if all the demands in this letter were met, it would do little to increase the supply of any of these things in the short term, while undermining incentives to develop new treatments, vaccines and better preventative tools in the long run. 

Fundamentally, the letter  reflects a willingness to use the COVID-19 pandemic to pursue an agenda that lacks merit and would be dismissed in the normal course of affairs. 

What is most certainly the case is that we need more innovation now, and we need it faster. There is no reason to believe that mandating open source status or forcing compulsory licensing on the firms doing that work will encourage that work to proceed with all due haste—and every indication that the opposite is the case. 

Where there are short term shortages of certain products that might be produced in much larger quantities by relaxing IP, companies are responding by doing just that—voluntarily. But this is fundamentally different from the imposition of unlimited compulsory licenses.

Further, private actors have displayed an impressive willingness to provide free or low cost access to technologies and content—without government coercion. The following is a short list of some of the content and inventions that have been opened up:

Culture, Fitness & Entertainment

  • "HBO Will Stream 500 Hours of Free Programming, Including Full Seasons of 'Veep,' 'The Sopranos,' 'Silicon Valley'"
  • Dozens (or more) of artists, both famous and lesser known, are releasing free back-catalog performances or are taking part in free live streaming sessions on social media platforms. Notably, viewers are often welcome to donate or "pay what they want" to help support these artists (more on this below).
  • The NBA, NFL, and NHL are offering free access to their back catalogue of games.
  • A large array of music production software can now be used free on extended trials for 3 months (or completely free and unlimited in some cases). 
  • CBS All Access expanded its free trial period.
  • Neil Gaiman and HarperCollins granted permission to LeVar Burton to livestream readings from their catalogs.
  • Disney is releasing movies early onto its (paid) Disney+ services.
  • Gold’s Gym is providing free access to its app-based workouts.
  • The Met is streaming free recordings of its Live in HD series.
  • The Seattle Symphony is offering free access to some of its recorded performances.
  • The UK's National Theatre is streaming some of its most popular plays for free.
  • Andrew Lloyd Webber is streaming his shows online for free.

Science, News & Education

  • Scholastic released free content intended to help educate students stuck at home while sheltering in place.
  • Nearly 100 academic journals, societies, institutes, and companies signed a commitment to make research and data on COVID-19 freely available, at least for the duration of the outbreak.
  • The Atlantic lifted paywall restrictions on access to its COVID-19-related content.
  • The New England Journal of Medicine is allowing free access to COVID-19-related resources.
  • The Lancet allows free access to research it publishes on COVID-19.
  • All material published by The BMJ on the coronavirus outbreak is freely available.
  • The AAAS-published Science allows free access to its coronavirus research and commentary.
  • Elsevier gave full access to its content on its COVID-19 Information Center for PubMed Central and other public health databases.
  • The American Economic Association announced open access to all of its journals until the end of June.
  • JSTOR expanded free access to some of its scholarship.

Medicine & Technology

  • The Global Center for Medical Design is developing license-free PPE designs that can be quickly implemented by manufacturers.
  • Medtronic published “design specifications for the Puritan Bennett 560 (PB560) to allow innovators, inventors, start-ups, and academic institutions to leverage their own expertise and resources to evaluate options for rapid ventilator manufacturing.” It additionally provided software licenses for this technology.
  • AbbVie announced it won’t enforce its patent rights for Kaletra—a drug that may provide treatment for COVID-19 infections. Israel had earlier indicated it would impose compulsory licenses for the drug, but AbbVie is allowing use worldwide. The company, moreover, had donated supplies of the drug to China earlier in the year when the outbreak first became apparent.
  • Google is working with health researchers to provide anonymized and aggregated user location data. 
  • Cisco has extended free licenses and expanded usage counts at no extra charge for three of its security technologies "to help strained IT teams and partners ready themselves and their clients for remote work."
  • Microsoft is offering free subscriptions to its Teams product for six months.
  • Zoom expanded its free access and other limitations for educational institutions around the world.

Incentivize innovation, now more than ever

In addition to undermining the short-term incentives to draw more research resources into the fight against COVID-19, using this crisis to weaken the IP regime will cause long-term damage to the economies of the world. We still will need creators making new cultural products and researchers developing new medicines and technologies; weakening the IP regime will undermine the delicate set of incentives that cultural and scientific production depends upon. 

Any clear-eyed assessment of the broader course of the pandemic and the response to it gives lie to the notion that IP rights are oppressive or counterproductive. It is the pharmaceutical industry—hated as it may be in some quarters—that will be able to marshal the resources and expertise to develop treatments and vaccines. And it is artists and educators producing cultural content who (theoretically) depend on the licensing revenues of their creations for survival.

In fact, one of the things that the pandemic has exposed is the fragility of artists’ livelihoods and the callousness with which they are often treated. Shortly after the lockdowns began in the US, the well-established rock musician David Crosby said in an interview that, if he could not tour this year, he would face tremendous financial hardship. 

As unfortunate as that may be for Crosby, a world-famous musician, imagine how much harder it is for struggling musicians who can hardly hope to achieve a fraction of Crosby’s success for their own tours, let alone for licensing. If David Crosby cannot manage well for a few months on the revenue from his popular catalog, what hope do small artists have?

Indeed, the flood of unable-to-tour artists who are currently offering "donate what you can" streaming performances is a symptom of the destructive assault on IPR exemplified in the letter. For decades, these artists have been told that they can only legitimately make money through touring. Although the potential to actually make a living while touring is possibly out of reach for many or most artists, those that had been scraping by have now been brought to the brink of ruin as the ability to tour is taken away.

There are certainly ways the various IP regimes can be improved (like, for instance, figuring out how to help creators make a living from their creations), but now is not the time to implement wishlist changes to an otherwise broadly successful rights regime. 

And, critically, there is a massive difference between achieving wider distribution of intellectual property voluntarily as opposed to through government fiat. When done voluntarily the IP owner determines the contours and extent of “open sourcing” so she can tailor increased access to her own needs (including the need to eat and pay rent). In some cases this may mean providing unlimited, completely free access, but in other cases—where the particular inventor or creator has a different set of needs and priorities—it may be something less than completely open access. When a rightsholder opts to “open source” her property voluntarily, she still retains the right to govern future use (i.e. once the pandemic is over) and is able to plan for reductions in revenue and how to manage future return on investment. 

Our lawmakers can consider whether a particular piece of property is required for the public good if and when such a situation arises. Otherwise, as responsible individuals, we should restrain ourselves from trying to capitalize on the current crisis to ram through our policy preferences.

This guest post is by Jonathan M. Barnett, Torrey H. Webb Professor of Law, University of Southern California Gould School of Law.

It has become virtually received wisdom that antitrust law has been subdued by economic analysis into a state of chronic underenforcement. Following this line of thinking, many commentators applauded the Antitrust Division's unsuccessful campaign to oppose the acquisition of Time-Warner by AT&T, and some (unsuccessfully) urged the Division to take stronger action against the acquisition of most of Fox by Disney. The arguments in both cases followed a similar "big is bad" logic. Consolidating control of a large portfolio of creative properties (Fox plus Disney) or integrating content production and distribution capacities (Time-Warner plus AT&T) would exacerbate market concentration, leading to reduced competition and some combination of higher prices and reduced product offerings for consumers.

Less than 18 months after the closing of both transactions, those concerns seem to have been largely unwarranted. 

Far from precipitating any decline in product output or variety, both transactions have been followed by a vigorous burst of competition in the digital streaming market. In place of the Amazon-plus-Netflix bottleneck (with Hulu trailing behind), consumers now have, or in 2020 will have, a choice of at least four new streaming services with original content: Disney+, AT&T's HBO Max, Apple's Apple TV+, and Comcast's NBCUniversal Peacock. Critically, each service relies on a formidable combination of creative, financing, and technological capacities that can only be delivered by a firm of sufficiently large size and scale. As modern antitrust law has long recognized, it turns out that "big" is sometimes not bad.

Where’s the Harm?

At present, it is hard to see any net consumer harm arising from the concurrence of increased size and increased competition. 

On the supply side, this is just the next episode in the ongoing "Golden Age of Television," in which content producers have enjoyed access to exceptional funding to support high-value productions. It has been reported that Apple TV+'s new "The Morning Show" series will cost $15 million per episode, while similar estimates are reported for hit shows such as HBO's "Game of Thrones" and Netflix's "The Crown." Each of those services is locked in a fierce competition to gain and retain sufficient subscribers to earn a return on those investments, which leads directly to the next happy development.

On the demand side, consumers enjoy a proliferating array of streaming services, ranging from free ad-supported services to subscription ad-free services. Consumers can now easily “cut the cord” and assemble a customized bundle of preferred content from multiple services, each of which is less costly than a traditional cable package and can generally be cancelled at any time.  Current market performance does not plausibly conform to the declining output, limited variety or increasing prices that are the telltale symptoms of a less than competitive market.

Real-World v. Theoretical Markets

The market’s favorable trajectory following these two controversial transactions should not be surprising. When scrutinized against the actual characteristics of real-world digital content markets, rather than stylized theoretical models or antiquated pre-digital content markets, the arguments leveled against these transactions never made much sense. There were two fundamental and related errors. 

Error #1: Content is Scarce

Advocates for antitrust intervention assumed that entry barriers into the content market were high, in which case it followed that the owner of an especially valuable creative portfolio could exert pricing power to consumers’ detriment. Yet, in reality, funding for content production is plentiful and even a service that has an especially popular show is unlikely to have sustained pricing power in the face of a continuous flow of high-value productions being released by formidable competitors. The amounts being spent on content in 2019 by leading streaming services are unprecedented, ranging from a reported $15 billion for Netflix to an estimated $6 billion for Amazon and Apple TV+ to an estimated $3.9 billion for AT&T’s HBO Max. It is also important to note that a hit show is often a mobile asset that a streaming or other video distribution service has licensed from independent production companies and other rights holders. Once the existing deal expires, those rights are available for purchase by the highest bidder. For example, in 2019, Netflix purchased the streaming rights to “Seinfeld”, Viacom purchased the cable rights to “Seinfeld”, and HBO Max purchased the streaming rights to “South Park.” Similarly, the producers behind a hit show are always free to take their talents to competitors once any existing agreement terminates.

Error #2: Home Pay-TV is a “Monopoly”

Advocates of antitrust action were looking at the wrong market—or more precisely, the market as it existed about a decade ago. The theory that AT&T's acquisition of Time-Warner's creative portfolio would translate into pricing power in the home pay-TV market might have been plausible when consumers had no reasonable alternative to the local cable provider. But this argument makes little sense today, when consumers are fleeing bulky home pay-TV bundles for cheaper cord-cutting options that deliver more targeted content packages to a mobile device. In 2019, the "home" pay-TV market is fast becoming an anachronism, and hence a home pay-TV "monopoly" largely reduces to a formalism that, with the possible exception of certain live programming, is unlikely to translate into meaningful pricing power.

Wait a Second! What About the HBO Blackout?

A skeptical reader might reasonably object that this mostly rosy account of the post-merger home video market is unpersuasive since it does not address the ongoing blackout of HBO (now an AT&T property) on the Dish satellite TV service. Post-merger commentary that remains skeptical of the AT&T/Time-Warner merger has focused on this dispute, arguing that it “proves” that the government was right since AT&T is purportedly leveraging its new ownership of HBO to disadvantage one of its competitors in the pay-TV market. This interpretation tends to miss the forest for the trees (or more precisely, a tree).  

The AT&T/Dish dispute over HBO is only one of over 200 “carriage” disputes resulting in blackouts that have occurred this year, which continues an upward trend since approximately 2011. Some of those include Dish’s dispute with Univision (settled in March 2019 after a nine-month blackout) and AT&T’s dispute (as pay-TV provider) with Nexstar (settled in August 2019 after a nearly two-month blackout). These disputes reflect the fact that the flood of subscriber defections from traditional pay-TV to mobile streaming has made it difficult for pay-TV providers to pass on the fees sought by content owners. As a result, some pay-TV providers adopt the negotiating tactic of choosing to drop certain content until the terms improve, just as AT&T, in its capacity as a pay-TV provider, dropped CBS for three weeks in July and August 2019 pending renegotiation of licensing terms. It is the outward shift in the boundaries of the economically relevant market (from home to home-plus-mobile video delivery), rather than market power concerns, that best accounts for periodic breakdowns in licensing negotiations.  This might even be viewed positively from an antitrust perspective since it suggests that the “over the top” market is putting pressure on the fees that content owners can extract from providers in the traditional pay-TV market.

Concluding Thoughts

It is common to argue today that antitrust law has become excessively concerned about “false positives” – that is, the possibility of blocking a transaction or enjoining a practice that would have benefited consumers. Pending future developments, this early post-mortem on the regulatory and judicial treatment of these two landmark media transactions suggests that there are sometimes good reasons to stay the hand of the court or regulator. This is especially the case when a generational market shift is in progress and any regulator’s or judge’s foresight is likely to be guesswork. Antitrust law’s “failure” to stop these transactions may turn out to have been a ringing success.

And if David finds out the data beneath his profile, you’ll start to be able to connect the dots in various ways with Facebook and Cambridge Analytica and Trump and Brexit and all these loosely-connected entities. Because you get to see inside the beast, you get to see inside the system.

This excerpt from the beginning of Netflix’s The Great Hack shows the goal of the documentary: to provide one easy explanation for Brexit and the election of Trump, two of the most surprising electoral outcomes in recent history.

Unfortunately, in attempting to tell a simple narrative, the documentary obscures more than it reveals about what actually happened in the Facebook-Cambridge Analytica data scandal. In the process, the film wildly overstates the significance of the scandal in either the 2016 US presidential election or the 2016 UK referendum on leaving the EU.

In this article, I will review the background of the case and show seven things the documentary gets wrong about the Facebook-Cambridge Analytica data scandal.

Background

In 2013, researchers published a paper showing that you could predict some personality traits — openness and extraversion — from an individual’s Facebook Likes. Cambridge Analytica wanted to use Facebook data to create a “psychographic” profile — i.e., personality type — of each voter and then micro-target them with political messages tailored to their personality type, ultimately with the hope of persuading them to vote for Cambridge Analytica’s client (or at least to not vote for the opposing candidate).

In this case, the psychographic profile is the person’s Big Five (or OCEAN) personality traits, which research has shown are relatively stable throughout our lives:

  1. Openness to new experiences
  2. Conscientiousness
  3. Extraversion
  4. Agreeableness
  5. Neuroticism

But how to get the Facebook data to create these profiles? A researcher at Cambridge University, Alex Kogan, created an app called thisismydigitallife, a short quiz for determining your personality type. Between 250,000 and 270,000 people were paid a small amount of money to take this quiz. 

Those who took the quiz shared some of their own Facebook data as well as their friends’ data (so long as the friends’ privacy settings allowed third-party app developers to access their data). 

This process captured data on “at least 30 million identifiable U.S. consumers”, according to the FTC. For context, even if we assume all 30 million were registered voters, that means the data could be used to create profiles for less than 20 percent of the relevant population. And though some may disagree with Facebook’s policy for sharing user data with third-party developers, collecting data in this manner was in compliance with Facebook’s terms of service at the time.

What crossed the line was what happened next. Kogan then sold that data to Cambridge Analytica, without the consent of the affected Facebook users and in express violation of Facebook’s prohibition on third-party developers selling or transferring user data to others.

Upon learning of the sale, Facebook directed Alex Kogan and Cambridge Analytica to delete the data. But the social media company failed to notify users that their data had been misused or confirm via an independent audit that the data was actually deleted.

1. Cambridge Analytica was selling snake oil (no, you are not easily manipulated)

There’s a line in The Great Hack that sums up the opinion of the filmmakers and the subjects in their story: “There’s 2.1 billion people, each with their own reality. And once everybody has their own reality, it’s relatively easy to manipulate them.” According to the latest research from political science, this is completely bogus (and it’s the same marketing puffery that Cambridge Analytica would pitch to prospective clients).

The best evidence in this area comes from Joshua Kalla and David E. Broockman in a 2018 study published in the American Political Science Review:

We argue that the best estimate of the effects of campaign contact and advertising on Americans’ candidate choices in general elections is zero. First, a systematic meta-analysis of 40 field experiments estimates an average effect of zero in general elections. Second, we present nine original field experiments that increase the statistical evidence in the literature about the persuasive effects of personal contact 10-fold. These experiments’ average effect is also zero.

In other words, across 49 high-quality field experiments (the 40 covered by the meta-analysis plus the authors’ nine original experiments), the best estimate of the persuasive effect of campaign contact and advertising on vote choice in US general elections is zero. (However, there is evidence that “campaigns are able to have meaningful persuasive effects in primary and ballot measure campaigns, when partisan cues are not present.”)

But the relevant conclusion for the Cambridge Analytica scandal remains the same: in highly visible elections with a polarized electorate, it simply isn’t that easy to persuade voters to change their minds.

2. Micro-targeting political messages is overrated — people prefer general messages on shared beliefs

But maybe Cambridge Analytica’s micro-targeting strategy would result in above-average effects? The literature provides reason for skepticism here as well. Another paper by Eitan D. Hersh and Brian F. Schaffner in The Journal of Politics found that voters “rarely prefer targeted pandering to general messages” and “seem to prefer being solicited based on broad principles and collective beliefs.” It’s political tribalism all the way down. 

A field experiment with 56,000 Wisconsin voters in the 2008 US presidential election found that “persuasive appeals possibly reduced candidate support and almost certainly did not increase it,” suggesting that  “contact by a political campaign can engender a backlash.”

3. Big Five personality traits are not very useful for predicting political orientation

Or maybe there’s something special about targeting political messages based on a person’s Big Five personality traits? Again, there is little reason to believe this is the case. As Kris-Stella Trump mentions in an article for The Washington Post:

The ‘Big 5’ personality traits … only predict about 5 percent of the variation in individuals’ political orientations. Even accurate personality data would only add very little useful information to a data set that includes people’s partisanship — which is what most campaigns already work with.

The best evidence we have on the influence of personality traits on decision-making comes from the marketing literature (n.b., it’s likely easier to influence consumer decisions than political decisions in today’s increasingly polarized electorate). Here too the evidence is weak:

In this successful study, researchers targeted ads, based on personality, to more than 1.5 million people; the result was about 100 more purchases of beauty products than would have occurred had they advertised without targeting.

More to the point, the Facebook data obtained by Cambridge Analytica couldn’t even accomplish the simple task of matching Facebook Likes to the Big Five personality traits. Here’s Cambridge University researcher Alex Kogan in Michael Lewis’s podcast episode about the scandal: 

We started asking the question of like, well, how often are we right? And so there’s five personality dimensions? And we said like, okay, for what percentage of people do we get all five personality categories correct? We found it was like 1%.

Eitan Hersh, an associate professor of political science at Tufts University, summed it up best: “Every claim about psychographics etc made by or about [Cambridge Analytica] is BS.”

4. If Cambridge Analytica’s “weapons-grade communications techniques” were so powerful, then Ted Cruz would be president

The Great Hack:

Ted Cruz went from the lowest rated candidate in the primaries to being the last man standing before Trump got the nomination… Everyone said Ted Cruz had this amazing ground game, and now we know who came up with all of it. Joining me now, Alexander Nix, CEO of Cambridge Analytica, the company behind it all.

Reporting by Nicholas Confessore and Danny Hakim at The New York Times directly contradicts this framing on Cambridge Analytica’s role in the 2016 Republican presidential primary:

Cambridge’s psychographic models proved unreliable in the Cruz presidential campaign, according to Rick Tyler, a former Cruz aide, and another consultant involved in the campaign. In one early test, more than half the Oklahoma voters whom Cambridge had identified as Cruz supporters actually favored other candidates.

Most significantly, the Cruz campaign stopped using Cambridge Analytica’s services in February 2016 due to disappointing results, as Kenneth P. Vogel and Darren Samuelsohn reported in Politico in June of that year:

Cruz’s data operation, which was seen as the class of the GOP primary field, was disappointed in Cambridge Analytica’s services and stopped using them before the Nevada GOP caucuses in late February, according to a former staffer for the Texas Republican.

“There’s this idea that there’s a magic sauce of personality targeting that can overcome any issue, and the fact is that’s just not the case,” said the former staffer, adding that Cambridge “doesn’t have a level of understanding or experience that allows them to target American voters.”

Vogel later tweeted that most firms hired Cambridge Analytica “because it was seen as a prerequisite for receiving $$$ from the MERCERS.” So it seems campaigns hired Cambridge Analytica not for its “weapons-grade communications techniques” but for the firm’s connections to billionaire Robert Mercer.

5. The Trump campaign phased out Cambridge Analytica data in favor of RNC data for the general election

Just as the Cruz campaign became disillusioned after working with Cambridge Analytica during the primary, so too did the Trump campaign during the general election, as Major Garrett reported for CBS News:

The crucial decision was made in late September or early October when Mr. Trump’s son-in-law Jared Kushner and Brad Parscale, Mr. Trump’s digital guru on the 2016 campaign, decided to utilize just the RNC data for the general election and used nothing from that point from Cambridge Analytica or any other data vendor. The Trump campaign had tested the RNC data, and it proved to be vastly more accurate than Cambridge Analytica’s, and when it was clear the RNC would be a willing partner, Mr. Trump’s campaign was able to rely solely on the RNC.

And of the little work Cambridge Analytica did complete for the Trump campaign, none involved “psychographics,” The New York Times reported:

Mr. Bannon at one point agreed to expand the company’s role, according to the aides, authorizing Cambridge to oversee a $5 million purchase of television ads. But after some of them appeared on cable channels in Washington, D.C. — hardly an election battleground — Cambridge’s involvement in television targeting ended.

Trump aides … said Cambridge had played a relatively modest role, providing personnel who worked alongside other analytics vendors on some early digital advertising and using conventional micro-targeting techniques. Later in the campaign, Cambridge also helped set up Mr. Trump’s polling operation and build turnout models used to guide the candidate’s spending and travel schedule. None of those efforts involved psychographics.

6. There is no evidence that Facebook data was used in the Brexit referendum

Last year, the UK’s data protection authority fined Facebook £500,000 — the maximum penalty allowed under the law — for violations related to the Cambridge Analytica data scandal. The fine was astonishing considering that the investigation into the Facebook-derived data Cambridge Analytica had licensed “found no evidence that UK citizens were among them,” according to the BBC. This detail demolishes the second central claim of The Great Hack: that data fraudulently acquired from Facebook users enabled Cambridge Analytica to manipulate the British people into voting for Brexit. On this basis, Facebook is currently appealing the fine.

7. The Great Hack wasn’t a “hack” at all

The title of the film is an odd choice given the facts of the case, as detailed in the background section of this article. A “hack” is generally understood as an unauthorized breach of a computer system or network by a malicious actor. People think of a genius black hat programmer who overcomes a company’s cybersecurity defenses to profit off stolen data. Alex Kogan, the Cambridge University researcher who acquired the Facebook data for Cambridge Analytica, was nothing of the sort. 

As Gus Hurwitz noted in an article last year, Kogan entered into a contract with Facebook and asked users for their permission to acquire their data by using the thisismydigitallife personality app. Arguably, if there was a breach of trust, it was when the app users chose to share their friends’ data, too. The editorial choice to call this a “hack” instead of “data collection” or “data scraping” is of a piece with the rest of the film; when given a choice between accuracy and sensationalism, the directors generally chose the latter.

Why does this narrative persist despite the facts of the case?

The takeaway from the documentary is that Cambridge Analytica hacked Facebook and subsequently undermined two democratic processes: the Brexit referendum and the 2016 US presidential election. The reason this narrative has stuck in the public consciousness is that it serves everyone’s self-interest (except, of course, Facebook’s).

It lets voters off the hook for what seem, to many, to be drastic mistakes (i.e., electing a reality TV star president and undoing the European project). If we were all manipulated into making the “wrong” decision, then the consequences can’t be our fault! 

This narrative also serves Cambridge Analytica, to a point. For a time, the political consultancy liked being able to tell prospective clients that it was the mastermind behind two stunning political upsets. Lastly, journalists like the story because they compete with Facebook in the advertising market and view the tech giant as an existential threat.

There is no evidence for the film’s implicit assumption that, but for Cambridge Analytica’s use of Facebook data to target voters, Trump wouldn’t have been elected and the UK wouldn’t have voted to leave the EU. Despite its tone and ominous presentation style, The Great Hack fails to muster any support for its extreme claims. The truth is much more mundane: the Facebook-Cambridge Analytica data scandal was neither a “hack” nor was it “great” in historical importance.

The documentary ends with a question:

But the hardest part in all of this is that these wreckage sites and crippling divisions begin with the manipulation of one individual. Then another. And another. So, I can’t help but ask myself: Can I be manipulated? Can you?

No — but the directors of The Great Hack tried their best to do so.

The dust has barely settled on the European Commission’s record-breaking €4.3 billion Google Android fine, but the Commission is already gearing up for its next high-profile case. Last month, Margrethe Vestager dropped a competition bombshell: the European watchdog is looking into the behavior of Amazon. Should the Commission decide to move further with the investigation, Amazon will likely join other US tech firms such as Microsoft, Intel, Qualcomm and, of course, Google, which have all been on the receiving end of European competition enforcement.

The Commission’s move – though informal at this stage – is not surprising. Over the last couple of years, Amazon has become one of the world’s largest and most controversial companies. The animosity against it is exemplified in a paper by Lina Khan, which uses the example of Amazon to highlight the numerous ills that allegedly plague modern antitrust law. The paper is widely regarded as the starting point of the so-called “hipster antitrust” movement.

But is there anything particularly noxious about Amazon’s behavior, or is it just the latest victim of a European crusade against American tech companies?

Where things stand so far

As is often the case in such matters, publicly available information regarding the Commission’s “probe” (the European watchdog is yet to open a formal investigation) is particularly thin. What we know so far comes from a number of declarations made by Margrethe Vestager (here and here) and a leaked questionnaire that was sent to Amazon’s rivals. Going on this limited information, it appears that the Commission is concerned about the manner in which Amazon uses the data that it gathers from its online merchants. In Vestager’s own words:

The question here is about the data, because if you as Amazon get the data from the smaller merchants that you host […] do you then also use this data to do your own calculations? What is the new big thing, what is it that people want, what kind of offers do they like to receive, what makes them buy things.

These concerns relate to the fact that Amazon acts as both a retailer in its own right and a platform for other retailers, which allegedly constitutes a “conflict of interest.” As a retailer, Amazon sells a wide range of goods directly to consumers. Meanwhile, its marketplace platform enables third-party merchants to offer their goods in exchange for referral fees when items are sold (these fees typically range from 8% to 15%, depending on the type of good). Merchants can either fulfil these orders themselves or opt for fulfilment by Amazon, in which case Amazon handles storage and shipping. As of 2017, more than 50% of units sold on the Amazon marketplace were sold by third-party sellers, although Amazon derived three times more revenue from its own sales than from those of third parties (note that Amazon Web Services is still by far its largest source of profits).

Mirroring concerns raised by Khan, the Commission worries that Amazon uses the data it gathers from third party retailers on its platform to outcompete them. More specifically, the concern is that Amazon might use this data to identify and enter the most profitable segments of its online platform, excluding other retailers in the process (or deterring them from joining the platform in the first place). Although there is some empirical evidence to support such claims, it is far from clear that this is in any way harmful to competition or consumers. Indeed, the authors of the paper that found evidence in support of the claims note:

Amazon is less likely to enter product spaces that require greater seller efforts to grow, suggesting that complementors’ platform‐specific investments influence platform owners’ entry decisions. While Amazon’s entry discourages affected third‐party sellers from subsequently pursuing growth on the platform, it increases product demand and reduces shipping costs for consumers.

Thou shalt not punish efficient behavior

The question is whether Amazon’s use of data on rivals’ sales to outcompete them should raise competition concerns. After all, this is standard practice in the brick-and-mortar industry, where most large retailers use house brands to go after successful, high-margin third-party brands. Some, such as Costco, even eliminate some third-party products from their shelves once they have a successful own-brand product. Granted, as Khan observes, Amazon may be doing this more effectively because it has access to vastly superior data. But does that somehow make Amazon’s practice harmful to social welfare? Absent further evidence, I believe not.

The basic problem is the following. Assume that Amazon does indeed have a monopoly in the market for online retail platforms (or, in other words, that the Amazon marketplace is a bottleneck for online retailers). Why would it move into direct retail competition against its third party sellers if it is less efficient than them? Amazon would either have to sell at a loss or hope that consumers saw something in its products that warrants a higher price. A more profitable alternative would be to stay put and increase its fees. It could thereby capture all the profits of its independent retailers. Not that Amazon would necessarily want to do so, as this could potentially deter other retailers from joining its platform. The upshot is that Amazon has little incentive to exclude more efficient retailers.

Astute readers will have observed that this is simply a restatement of the Chicago school’s “Single Monopoly Profit” theory, which broadly holds that, absent efficiencies, a monopolist in one line of commerce cannot increase its profits by entering the competitive market for a complementary good. Although the theory has drawn some criticism, it remains a crucial starting point with which enforcers must contend before they conclude that a monopolist’s behavior is anticompetitive.

So why does Amazon move into retail segments that are already occupied by its rivals? The most likely explanation is simply that it can source and sell these goods more efficiently than they can, and that these efficiencies cannot be achieved through contracts with those rivals. Once we accept the possibility that Amazon is simply more efficient, the picture changes dramatically. The sooner it displaces less efficient rivals, the better. Doing so creates valuable surplus that can flow either to Amazon itself or to consumers. This is true regardless of whether Amazon has a marketplace monopoly or not. Even if it does have a monopoly (which is doubtful given competition from the likes of Zalando, AliExpress, Google Search and eBay), at least some of these efficiencies will likely be passed on to consumers. Such a scenario is also perfectly compatible with increased profits for Amazon. The real test is whether output increases when Amazon enters segments that were previously occupied by rivals.

Of course, the usual critiques voiced against the “Single Monopoly Profit” theory apply here. It is plausible that, by excluding its retail rivals, Amazon is simply seeking to protect its alleged platform monopoly. However, the anecdotal evidence that has been raised thus far does not support this conclusion.

But what about innovation?

Possibly sensing the weakness of the “inefficiency” line of argument against Amazon, critics will likely put forward a second theory of harm: that by capturing the rents of potentially innovative retailers, Amazon may dampen their incentives to innovate and thereby harm consumer choice. Margrethe Vestager intimated as much in a Bloomberg interview. Though this framing might seem tempting at first, it falters under close inspection.

The effects of Amazon’s behavior could first be framed in terms of appropriability — that is, the extent to which an innovator captures the social benefits of its innovation. The higher its share of those benefits, the larger its incentives to innovate. It is plausible that, by forcing out its retail rivals, Amazon reduces the returns they can earn on their potential innovations.

Another potential framing is that of holdup theory. Applied to this case, one could argue that rival retailers made sunk investments (potentially innovation-related) to join the Amazon platform, and that Amazon is behaving opportunistically by capturing their surplus. With hindsight, merchants might thus have opted to stay out of the Amazon marketplace.

Unfortunately for Amazon’s critics, there are numerous objections to these two framings. For a start, the business implication of both the appropriability and holdup theories is that firms can and should take sensible steps to protect their investments. The recent empirical paper mentioned above stresses that these actions are critical for the sake of Amazon’s retailers.

Potential solutions abound. Retailers could in principle enter into long-term exclusivity agreements with their suppliers (which would keep Amazon out of the market if there are no alternative suppliers). Alternatively, they could sign non-compete clauses with Amazon, exchange assets, or even outright merge. In fact, there is at least some evidence of this last possibility occurring, as Amazon has acquired some of its online retailers. The fact that some retailers have not opted for these safety measures (or other methods of appropriability) suggests that they either do not perceive a threat or are unwilling to make the necessary investments (it might also be due to bad business judgement on their part).

Which brings us to the big question. Should competition law step into the breach in those cases where firms have refused to take even basic steps to protect their investments? The answer is probably no.

For a start, condoning this poor judgement encourages firms to rely on competition enforcement rather than private solutions  to solve appropriability and holdup issues. This is best understood with reference to moral hazard. By insuring firms against the capture of their profits, competition authorities disincentivize all forms of risk-mitigation on the part of those firms. This will ultimately raise enforcement costs (as firms become increasingly reliant on the antitrust system for protection).

It is also informationally much more burdensome, as authorities will systematically have to rule on the appropriate share of profits between parties to a case.

Finally, overprotecting these investments would go against the philosophy of the European Court of Justice’s Huawei ruling. Although that case concerned the specific context of injunctions relating to standard-essential patents (SEPs), the Court conditioned competition liability on firms showing that they had taken a series of reasonable steps to sort out their disputes privately.

Concluding remarks

This is not to say that competition intervention should be categorically proscribed, but rather that the capture of a retailer’s investments by Amazon is an insufficient condition for enforcement action. Instead, the Commission should ask whether Amazon’s actions are truly detrimental to consumer welfare and output. Absent strong evidence that an excluded retailer offered superior products, or that Amazon’s move was merely a strategic play to prevent entry, competition authorities should let the chips fall where they may.

As things stand, there is simply no evidence to indicate that anything out of the ordinary is occurring on the Amazon marketplace. By shining the spotlight on Amazon, the Commission is putting itself under tremendous political pressure to move forward with a formal investigation (all the more so, given the looming European Parliament elections). This is regrettable, as there are surely more pressing matters for the European regulator to deal with. The Commission would thus do well to recall the words of Shakespeare in the Merchant of Venice: “All that glisters is not gold”. Applied in competition circles this translates to “all that is big is not inefficient”.

In an ideal world, it would not be necessary to block websites in order to combat piracy. But we do not live in an ideal world. We live in a world in which enormous amounts of content—from books and software to movies and music—is being distributed illegally. As a result, content creators and owners are being deprived of their rights and of the revenue that would flow from legitimate consumption of that content.

In this real world, site blocking may be both a legitimate and a necessary means of reducing piracy and protecting the rights and interests of rightsholders.

Of course, site blocking may not be perfectly effective, given that pirates will “domain hop” (moving their content from one website/IP address to another). As such, it may become a game of whack-a-mole. However, relative to other enforcement options, such as issuing millions of takedown notices, it is likely a much simpler, easier and more cost-effective strategy.

And site blocking could be abused or misapplied, just as any other legal remedy can be. This is a fair concern to keep in mind with any enforcement program, and it is important to ensure that there are protections against such abuse and misapplication.

Thus, a Canadian coalition of telecom operators and rightsholders, called FairPlay Canada, has proposed a non-litigation alternative for combating piracy that employs site blocking but is designed to avoid the problems that critics have attributed to other private-ordering solutions.

The FairPlay Proposal

FairPlay has sent a proposal to the CRTC (the Canadian telecom regulator) asking that it develop a process by which it can adjudicate disputes over web sites that are “blatantly, overwhelmingly, or structurally engaged in piracy.”  The proposal asks for the creation of an Independent Piracy Review Agency (“IPRA”) that would hear complaints of widespread piracy, perform investigations, and ultimately issue a report to the CRTC with a recommendation either to block or not to block sites in question. The CRTC would retain ultimate authority regarding whether to add an offending site to a list of known pirates. Once on that list, a pirate site would have its domain blocked by ISPs.

The upside seems fairly obvious: it would be a more cost-effective and efficient process for investigating allegations of piracy and removing offenders. The current regime is cumbersome and enormously costly, and the evidence suggests that site blocking is highly effective.

Under Canadian law—the so-called “Notice and Notice” regime—rightsholders send notices to ISPs, who in turn forward those notices to their own users. Once those notices have been sent, rightsholders can then move before a court to require ISPs to expose the identities of users that upload infringing content. In just one relatively large case, it was estimated that the cost of complying with these requests was 8.25M CAD.

The failure of the American equivalent of the “Notice and Notice” regime provides evidence supporting the FairPlay proposal. The graduated response system was set up in 2012 as a means of sending a series of escalating warnings to users who downloaded illegal content, much as the “Notice and Notice” regime does. But the American program has since been discontinued because it did not effectively target the real source of piracy: repeat offenders who share a large amount of material.

This, by contrast, highlights one of the greatest strengths of the FairPlay proposal: the focus of enforcement shifts away from casually infringing users and directly onto the operators of sites that engage in widespread infringement. Therefore, one of the criticisms of Canada’s current “Notice and Notice” regime — that the notice pass-through system is misused to send abusive settlement demands — is bypassed entirely.

And whichever side of the notice regime bears the burden of paying the associated research costs under “Notice and Notice”—whether ISPs eat them as a cost of doing business, or rightsholders pay ISPs for their work—the net effect is a deadweight loss. Therefore, whatever can be done to reduce these costs, while also complying with Canada’s other commitments to protecting its citizens’ property interests and civil rights, is going to be a net benefit to Canadian society.

Of course it won’t be all upside — no policy, private or public, ever is. IP, like property rights generally, represents a set of tradeoffs intended to net the greatest social welfare gains. As Richard Epstein has observed:

No one can defend any system of property rights, whether for tangible or intangible objects, on the naïve view that it produces all gain and no pain. Every system of property rights necessarily creates some winners and some losers. Recognize property rights in land, and the law makes trespassers out of people who were once free to roam. We choose to bear these costs not because we believe in the divine rights of private property. Rather, we bear them because we make the strong empirical judgment that any loss of liberty is more than offset by the gains from manufacturing, agriculture and commerce that exclusive property rights foster. These gains, moreover, are not confined to some lucky few who first get to occupy land. No, the private holdings in various assets create the markets that use voluntary exchange to spread these gains across the entire population. Our defense of IP takes the same lines because the inconveniences it generates are fully justified by the greater prosperity and well-being for the population at large.

So too is the justification — and tempering principle — behind any measure meant to enforce copyrights. The relevant question when thinking about a particular enforcement regime is not whether some harms may occur, because some harm will always occur. The proper questions are: (1) does the measure to be implemented stand a chance of better giving effect to the property rights we have agreed to protect; and (2) when harms do occur, is there a sufficiently open and accessible process available whereby affected parties (and interested third parties) can criticize and improve the system?

On both accounts the FairPlay proposal appears to hit the mark.

FairPlay’s proposal can reduce piracy while respecting users’ rights

Although I am generally skeptical of calls for state intervention, this case seems to present a real opportunity for the CRTC to do some good. If Canada adopts this proposal, it will be establishing a reasonable and effective remedy for violations of individuals’ property rights, the ownership of which is broadly considered legitimate.

And, as a public institution subject to input from many different stakeholder groups — FairPlay describes the stakeholders  as comprised of “ISPs, rightsholders, consumer advocacy and citizen groups” — the CRTC can theoretically provide a fairly open process. This is distinct from, for example, the Donuts trusted notifier program that some criticized (in my view, mistakenly) as potentially leading to an unaccountable, private ordering of the DNS.

FairPlay’s proposal outlines its plan to provide affected parties with due process protections:

The system proposed seeks to maximize transparency and incorporates extensive safeguards and checks and balances, including notice and an opportunity for the website, ISPs, and other interested parties to review any application submitted to and provide evidence and argument and participate in a hearing before the IPRA; review of all IPRA decisions in a transparent Commission process; the potential for further review of all Commission decisions through the established review and vary procedure; and oversight of the entire system by the Federal Court of Appeal, including potential appeals on questions of law or jurisdiction including constitutional questions, and the right to seek judicial review of the process and merits of the decision.

In terms of its efficacy, even critics of the FairPlay proposal acknowledge that site blocking measurably reduces piracy. In its formal response to critics, FairPlay Canada noted that one of the studies the critics relied upon actually showed that previous blocks of The Pirate Bay’s domains had led nearly 25% of illegal downloaders to stop or reduce their downloading:

The Poort study shows that when a single illegal peer-to-peer piracy site (The Pirate Bay) was blocked, between 8% and 9.3% of consumers who were engaged in illegal downloading (from any site, not just The Pirate Bay) at the time the block was implemented reported that they stopped their illegal downloading entirely.  A further 14.5% to 15.3% reported that they reduced their illegal downloading. This shows the power of the regime the coalition is proposing.

The proposal stands to reduce the costs of combating piracy, as well. As noted above, the costs of litigating a large case can reach well into the millions just to initiate proceedings. In its reply comments, FairPlay Canada noted the costs for even run-of-the-mill suits essentially price enforcement of copyrights out of the reach of smaller rightsholders:

[T]he existing process can be inefficient and inaccessible for rightsholders. In response to this argument raised by interveners and to ensure the Commission benefits from a complete record on the point, the coalition engaged IP and technology law firm Hayes eLaw to explain the process that would likely have to be followed to potentially obtain such an order under existing legal rules…. [T]he process involves first completing litigation against each egregious piracy site, and could take up to 765 days and cost up to $338,000 to address a single site.

Moreover, these cost estimates assume that the really bad pirates can even be served with process — which is untrue for many infringers. Unlike physical distributors of counterfeit material (e.g. CDs and DVDs), online pirates do not need to operate within Canada to affect Canadian artists — which leaves a remedy like site blocking as one of the only viable enforcement mechanisms.

Don’t we want to reduce piracy?

More generally, much of the criticism of this proposal is hard to understand. Piracy is clearly a large problem to any observer who even casually peruses the Lumen database. Even defenders of the status quo are forced to acknowledge that “the notice and takedown provisions have been used by rightsholders countless—but likely billions—of times” — a reality that shows that efforts to control piracy to date have been insufficient.

So why not try this experiment? Why not try using a neutral multistakeholder body to see if rightsholders, ISPs, and application providers can create an online environment both free from massive, obviously infringing piracy, and also free for individuals to express themselves and service providers to operate?

In its response comments, the FairPlay coalition noted that some objectors have “insisted that the Commission should reject the proposal… because it might lead… the Commission to use a similar mechanism to address other forms of illegal content online.”

This is the same weak argument that is easily deployable against any form of collective action at all. Of course the state can be used for bad ends — anyone with even a superficial knowledge of history knows this  — but that surely can’t be an indictment against lawmaking as a whole. If allowing a form of prohibition for category A is appropriate, but the same kind of prohibition is inappropriate for category B, then either we assume lawmakers are capable of differentiating between category A and category B, or else we believe that prohibition itself is per se inappropriate. If site blocking is wrong in every circumstance, the objectors need to convincingly  make that case (which, to date, they have not).

Regardless of these criticisms, it seems unlikely that such a public process could be easily subverted for mass censorship. And any incipient censorship should be readily apparent and addressable in the IPRA process. Further, at least twenty-five countries have been experimenting with site blocking for IP infringement in different ways, and, at least so far, there haven’t been widespread allegations of massive censorship.

Maybe there is a perfect way to control piracy and protect user rights at the same time. But until we discover the perfect, I’m all for trying the good. The FairPlay coalition has a good idea, and I look forward to seeing how it progresses in Canada.

As has been rumored in the press for a few weeks, today Comcast announced it is considering making a renewed bid for a large chunk of Twenty-First Century Fox’s (Fox) assets. Fox is in the process of a significant reorganization, entailing primarily the sale of its international and non-television assets. Fox itself will continue, but with a focus on its US television business.

In December of last year, Fox agreed to sell these assets to Disney, in the process rejecting a bid from Comcast. Comcast’s initial bid was some 16% higher than Disney’s, although there were other differences in the proposed deals, as well.

In April of this year, Disney and Fox filed a proxy statement with the SEC explaining the basis for the board’s decision, including predominantly the assertion that the Comcast bid (NB: Comcast is identified as “Party B” in that document) presented greater regulatory (antitrust) risk.

As noted, today Comcast announced it is in “advanced stages” of preparing another unsolicited bid. This time,

Any offer for Fox would be all-cash and at a premium to the value of the current all-share offer from Disney. The structure and terms of any offer by Comcast, including with respect to both the spin-off of “New Fox” and the regulatory risk provisions and the related termination fee, would be at least as favorable to Fox shareholders as the Disney offer.

Because, as we now know (since the April proxy filing), Fox’s board rejected Comcast’s earlier offer largely on the basis of the board’s assessment of the antitrust risk it presented, and because that risk assessment (and the difference between an all-cash and all-share offer) would now be the primary distinguishing feature between Comcast’s and Disney’s bids, it is worth evaluating that conclusion as Fox and its shareholders consider Comcast’s new bid.

In short: There is no basis for ascribing a greater antitrust risk to Comcast’s purchase of Fox’s assets than to Disney’s.

Summary of the Proposed Deal

Post-merger, Fox will continue to own Fox News Channel, Fox Business Network, Fox Broadcasting Company, Fox Sports, Fox Television Stations Group, and sports cable networks FS1, FS2, Fox Deportes, and Big Ten Network.

The deal would transfer to Comcast (or Disney) the following:

  • Primarily, international assets, including Fox International (cable channels in Latin America, the EU, and Asia), Star India (the largest cable and broadcast network in India), and Fox’s 39% interest in Sky (Europe’s largest pay TV service).
  • Fox’s film properties, including 20th Century Fox, Fox Searchlight, and Fox Animation. These would bring along with them studios in Sydney and Los Angeles, but would not include the Fox Los Angeles backlot. Like the rest of the US film industry, the majority of Fox’s film revenue is earned overseas.
  • FX cable channels, National Geographic cable channels (of which Fox currently owns 75%), and twenty-two regional sports networks (RSNs). In terms of relative demand for the two cable networks, FX is a popular basic cable channel, but fairly far down the list of most-watched channels, while National Geographic doesn’t even crack the top 50. Among the RSNs, only one geographic overlap exists with Comcast’s current RSNs, and most of the Fox RSNs (at least 14 of the 22) are not in areas where Comcast has a substantial service presence.
  • The deal would also entail a shift in the companies’ ownership interests in Hulu. Hulu is currently owned in equal 30% shares by Disney, Comcast, and Fox, with the remaining, non-voting 10% owned by Time Warner. Either Comcast or Disney would hold a controlling 60% share of Hulu following the deal with Fox.

Analysis of the Antitrust Risk of a Comcast/Fox Merger

According to the joint proxy statement, Fox’s board discounted Comcast’s original $34.36/share offer — but not the $28.00/share offer from Disney — because of “the level of regulatory issues posed and the proposed risk allocation arrangements.” Significantly, on this basis the Fox board determined Disney’s offer to be superior.

The claim that a merger with Comcast poses sufficiently greater antitrust risk than a purchase by Disney to warrant its rejection out of hand is unsupportable, however. From an antitrust perspective, it is even plausible that a Comcast acquisition of the Fox assets would be on more-solid ground than would be a Disney acquisition.

Vertical Mergers Generally Present Less Antitrust Risk

A merger between Comcast and Fox would be predominantly vertical, while a merger between Disney and Fox, in contrast, would be primarily horizontal. Generally speaking, it is easier to get antitrust approval for vertical mergers than it is for horizontal mergers. As Bruce Hoffman, Director of the FTC’s Bureau of Competition, noted earlier this year:

[V]ertical merger enforcement is still a small part of our merger workload….

There is a strong theoretical basis for horizontal enforcement because economic models predict at least nominal potential for anticompetitive effects due to elimination of horizontal competition between substitutes.

Where horizontal mergers reduce competition on their face — though that reduction could be minimal or more than offset by benefits — vertical mergers do not…. [T]here are plenty of theories of anticompetitive harm from vertical mergers. But the problem is that those theories don’t generally predict harm from vertical mergers; they simply show that harm is possible under certain conditions.

On its face, and consistent with the last quarter century of merger enforcement by the DOJ and FTC, the Comcast acquisition would be less likely to trigger antitrust scrutiny, and the Disney acquisition raises more straightforward antitrust issues.

This is true even in light of the fact that the DOJ decided to challenge the AT&T-Time Warner (AT&T/TWX) merger.

The AT&T/TWX merger is a single data point in a long history of successful vertical mergers that attracted little scrutiny, and no litigation, by antitrust enforcers (although several have been approved subject to consent orders).

Just because the DOJ challenged that one merger does not mean that antitrust enforcers generally, or even the DOJ in particular, have suddenly become more hostile to vertical mergers.

Of particular importance to the conclusion that the AT&T/TWX merger challenge is of minimal relevance to predicting the DOJ’s reception in this case, the theory of harm argued by the DOJ in that case is far from well-accepted, while the potential theory that could underpin a challenge to a Disney/Fox merger is. As Bruce Hoffman further remarks:

I am skeptical of arguments that vertical mergers cause harm due to an increased bargaining skill; this is likely not an anticompetitive effect because it does not flow from a reduction in competition. I would contrast that to the elimination of competition in a horizontal merger that leads to an increase in bargaining leverage that could raise price or reduce output.

The Relatively Lower Risk of a Vertical Merger Challenge Hasn’t Changed Following the DOJ’s AT&T/Time Warner Challenge

Judge Leon is expected to rule on the AT&T/TWX merger in a matter of weeks. The theory underpinning the DOJ’s challenge is problematic (to say the least), and the case it presented was decidedly weak. But no litigated legal outcome is ever certain, and the court could, of course, rule against the merger nevertheless.

Yet even if the court does rule against the AT&T/TWX merger, this hardly suggests that a Comcast/Fox deal would create a greater antitrust risk than would a Disney/Fox merger.

A single successful challenge to a vertical merger — what would be, in fact, the first successful vertical merger challenge in four decades — doesn’t mean that the courts are becoming hostile to vertical mergers any more than the DOJ’s challenge means that vertical mergers suddenly entail heightened enforcement risk. Rather, it would simply mean that, given the specific facts of the case, the DOJ was able to make out its prima facie case and the defendants were unable to rebut it.

A ruling for the DOJ in the AT&T/TWX merger challenge would be rooted in a highly fact-specific analysis that could have no direct bearing on future cases.

In the AT&T/TWX case, the court’s decision will turn on its assessment of the DOJ’s argument that the merged firm could raise subscriber prices by a few pennies per subscriber. But as AT&T’s attorney aptly pointed out at trial (echoing the testimony of AT&T’s economist, Dennis Carlton):

The government’s modeled price increase is so negligible that, given the inherent uncertainty in that predictive exercise, it is not meaningfully distinguishable from zero.

Even minor deviations from the facts or the assumptions used in the AT&T/TWX case could completely upend the analysis — and there are important differences between the AT&T/TWX merger and a Comcast/Fox merger. True, both would be largely vertical mergers that would bring together programming and distribution assets in the home video market. But the foreclosure effects touted by the DOJ in the AT&T/TWX merger are seemingly either substantially smaller or entirely non-existent in the proposed Comcast/Fox merger.

Most importantly, the content at issue in AT&T/TWX is at least arguably (and, in fact, argued by the DOJ) “must have” programming — Time Warner’s premium HBO channels and its CNN news programming, in particular, were central to the DOJ’s foreclosure argument. By contrast, the programming that Comcast would pick up as a result of the proposed merger with Fox — FX (a popular, but non-essential, basic cable channel) and National Geographic channels (which attract a tiny fraction of cable viewing) — would be extremely unlikely to merit that designation.

Moreover, the DOJ made much of the fact that AT&T, through DirecTV, has a national distribution footprint. As a result, its analysis depended on the company’s potential ability, in every market in the country, to attract new subscribers who decamp from competing providers when it withholds Time Warner content from those providers. Comcast, on the other hand, provides cable service in only about 35% of the country. This significantly limits its ability to credibly threaten competitors, because its ability to recoup lost licensing fees by picking up new subscribers is so much more limited.

And while some RSNs may offer some highly prized live sports programming, the mismatch between Comcast’s footprint and the FOX RSNs (only about 8 of the 22 Fox RSNs are in Comcast service areas) severely limits any ability or incentive the company would have to leverage that content for higher fees. Again, to the extent that RSN programming is not “must-have,” and to the extent there is not overlap between the RSN’s geographic area and Comcast’s service area, the situation is manifestly not the same as the one at issue in the AT&T/TWX merger.

In sum, a ruling in favor of the DOJ in the AT&T/TWX case would be far from decisive in predicting how the agency and the courts would assess any potential concerns arising from Comcast’s ownership of Fox’s assets.

A Comcast/Fox Deal May Entail Lower Antitrust Risk than a Disney/Fox Merger

As discussed below, concerns about antitrust enforcement risk from a Comcast/Fox merger are likely overstated. Perhaps more importantly, however, to the extent these concerns are legitimate, they apply at least as much to a Disney/Fox merger. There is, at minimum, no basis for assuming a Comcast deal would present any greater regulatory risk.

The Antitrust Risk of a Comcast/Fox Merger Is Likely Overstated

The primary theory upon which antitrust enforcers could conceivably base a Comcast/Fox merger challenge would be a vertical foreclosure theory. Importantly, such a challenge would have to be based on the incremental effect of adding the Fox assets to Comcast, and not on the basis of its existing assets. Thus, for example, antitrust enforcers would not be able to base a merger challenge on the possibility that Comcast could leverage NBC content it currently owns to extract higher fees from competitors. Rather, only if the combination of NBC programming with additional content from Fox could create a new antitrust risk would a case be tenable.

Enforcers would be unlikely to view the addition of FX and National Geographic to the portfolio of programming content Comcast currently owns as sufficient to raise concerns that the merger would give Comcast anticompetitive bargaining power or the ability to foreclose access to its content.

Although even less likely, enforcers could be concerned with the (horizontal) addition of 20th Century Fox filmed entertainment to Universal’s existing film production and distribution. But the theatrical film market is undeniably competitive, with the largest studio by revenue (Disney) holding only 22% of the market last year. The combination of 20th Century Fox with Universal would still result in a market share of only around 25% based on 2017 revenues (and, depending on the year, might not even be the industry’s largest share).

There is also little reason to think that a Comcast controlling interest in Hulu would attract problematic antitrust attention. Comcast has already demonstrated an interest in diversifying its revenue across cable subscriptions and licensing, broadband subscriptions, and licensing to OVDs, as evidenced by its recent deal to offer Netflix as part of its Xfinity packages. Hulu likely presents just one more avenue for pursuing this same diversification strategy. And Universal has a history (see, e.g., this, this, and this) of very broad licensing across cable providers, cable networks, OVDs, and the like.

In the case of Hulu, moreover, the fact that Comcast is vertically integrated in broadband as well as cable service likely reduces the anticompetitive risk because more-attractive OVD content has the potential to increase demand for Comcast’s broadband service. Broadband offers larger margins (and is growing more rapidly) than cable, and it’s quite possible that any loss in Comcast’s cable subscriber revenue from Hulu’s success would be more than offset by gains in its content licensing and broadband subscription revenue. The same, of course, goes for Comcast’s incentives to license content to OVD competitors like Netflix: Comcast plausibly gains broadband subscription revenue from heightened consumer demand for Netflix, and this at least partially offsets any possible harm to Hulu from Netflix’s success.

At the same time, especially relative to Netflix’s vast library of original programming (an expected $8 billion worth in 2018 alone) and content licensed from other sources, the additional content Comcast would gain from a merger with Fox is not likely to appreciably increase its bargaining leverage or its ability to foreclose Netflix’s access to its content.     

Finally, Comcast’s ownership of Fox’s RSNs could, as noted, raise antitrust enforcers’ eyebrows. Enforcers could be concerned that Comcast would condition competitors’ access to RSN programming on higher licensing fees or prioritization of its NBC Sports channels.

While this is indeed a potential risk, it is hardly a foregone conclusion that it would draw an enforcement action. Among other things, NBC is far from the market leader, and improving its competitive position relative to ESPN could be viewed as a benefit of the deal. In any case, potential problems arising from ownership of the RSNs could easily be dealt with through divestiture or behavioral conditions; they are extremely unlikely to lead to an outright merger challenge.

The Antitrust Risk of a Disney Deal May Be Greater than Expected

While a Comcast/Fox deal doesn’t entail no antitrust enforcement risk, it certainly doesn’t entail sufficient risk to deem the deal dead on arrival. Moreover, it may entail less antitrust enforcement risk than would a Disney/Fox tie-up.

Yet, curiously, the joint proxy statement doesn’t mention any antitrust risk from the Disney deal at all and seems to suggest that the Fox board applied no risk discount in evaluating Disney’s bid.

Disney — already the market leader in the filmed entertainment industry — would acquire an even larger share of box office proceeds (and associated licensing revenues) through acquisition of Fox’s film properties. Perhaps even more important, the deal would bring the movie rights to almost all of the Marvel Universe within Disney’s ambit.

While, as suggested above, even that combination probably wouldn’t trigger any sort of market power presumption, it would certainly create an entity with a larger share of the market and stronger control of the industry’s most valuable franchises than would a Comcast/Fox deal.

Another, relatively larger, complication for a Disney/Fox merger arises from the prospect of combining Fox’s RSNs with ESPN. Whatever ability or incentive either company would have to engage in anticompetitive conduct surrounding sports programming, that risk would seem to be more significant for the undisputed market leader, Disney. At the same time, although ESPN remains powerful, demand for it on cable has been flagging. Disney could well see the ability to bundle ESPN with regional sports content as a way to prop up subscription revenues for ESPN — a practice, in fact, that it has employed successfully in the past.

Finally, it must be noted that licensing of consumer products is an even bigger driver of revenue from filmed entertainment than is theatrical release. No other company comes close to Disney in this space.

Disney is the world’s largest licensor, earning almost $57 billion in 2016 from licensing properties like Star Wars and Marvel Comics. Universal is in a distant 7th place, with 2016 licensing revenue of about $6 billion. Adding Fox’s (admittedly relatively small) licensing business would enhance Disney’s substantial lead (even the number two global licensor, Meredith, earned less than half of Disney’s licensing revenue in 2016). Again, this is unlikely to be a significant concern for antitrust enforcers, but it is notable that, to the extent it might be an issue, it is one that applies to Disney and not Comcast.

Conclusion

Although I hope to address these issues in greater detail in the future, for now the preliminary assessment is clear: There is no legitimate basis for ascribing a greater antitrust risk to a Comcast/Fox deal than to a Disney/Fox deal.

I recently became aware of a decision from the High Court in South Africa that examines an interesting intersection of freedom of expression, copyright, and contract. It addresses a question worth exploring: how to define the public interest in an environment of relatively unguarded rhetoric about the role of copyright in society. But first, a quick recap of the relevant facts, none of which were in issue.

A well-known filmmaker, Ms. SE Vollenhoven, was hired by the South African broadcaster SABC to produce a documentary film exposing certain governmental improprieties. In her contract with SABC, Vollenhoven transferred all copyright interests to SABC in exchange for compensation. SABC ultimately decided that it was uncomfortable with the product and decided against releasing it. Vollenhoven initiated a discussion with SABC in an effort to buy back the rights to the film, but SABC refused and ultimately sought an injunction preventing Vollenhoven from engaging in any acts that would infringe its rights in the film.

For the purposes of this analysis, let’s assume that all equities are with Vollenhoven, and that the public would gain from the release of the film. I am not in a position to make such a judgment personally, but certainly my sympathies would be with a filmmaker whose own expressive work is relegated to the dustbin due to a decision by a business partner to keep the film out of the public eye. Her frustration is clearly understandable. Let’s further assume that the government pressured SABC into not releasing the film—not because I in fact assume this, but because it is certainly possible, and I want to examine the copyright questions in a light least hospitable to the assertion of copyright. There is an axiom in legal circles that “bad facts make bad law,” but sometimes bad facts allow us to observe legal principles without artifice or obstruction in ways that are useful for our understanding of fundamental principles of law and justice.

This is just such a case. Much as we might sympathize with Vollenhoven, the arguments presented by her counsel would require us to believe that rejecting the free will that undergirds freedom of contract and self-determination is a legitimate price to pay in the quest for perceived freedom. I believe that is a fundamentally flawed proposition, and that a willingness to constrain the free will that allows a person to determine the scope of her consent undermines rather than advances the public good. The ends, even assuming that they are noble and just, do not justify a means that eliminates consent while seeking to improve the human condition. Vollenhoven and amici (we will get to them later) ask us to reject free will to achieve freedom. But there is no freedom at the end of that road. As the Court brilliantly and succinctly observed: “a limitation of freedom is irreconcilable with the right of choice.”

There are a number of equitable doctrines under which contracts may be vitiated, for example when they are the result of duress or where the consent required for formation of a contract is found to be absent. But here, no such equitable doctrines would apply. Vollenhoven was an accomplished filmmaker who freely negotiated a contract with SABC for her services. There is no suggestion from any party that the contract was somehow unfair, nor are we talking about the application of a non-negotiated provision of law vesting copyright in an employer or commissioning party. Vollenhoven herself does not assert anything different. Her unhappiness with the result of the contract is understandable, but doesn’t justify the attempt to circumvent it through a novel and dangerous mischaracterization of copyright laws and exceptions thereto.

This is where things get interesting. Since the contract under which SABC obtained the copyright in the documentary was unassailable, Vollenhoven and her supporters determined to “free the film” by asserting an implied exception to copyright laws to permit dissemination of information in the public interest. This took a variety of forms, all of which eventually defaulted to the proposition that the public’s interest in access superseded the copyright owner’s interest in protection. I take particular note of the participation of the Freedom of Expression Institute (FXI) on behalf of Vollenhoven since they most perfectly articulate the position that copyright is a form of censorship, having written in their 2015 copyright reform submission to DTI that: “FXI believes that copyright law and free speech are fundamentally in conflict. It should come as no surprise, at all, that both governments and the private sector use copyright law to suppress speech and dissent.” Vollenhoven’s counsel, as summarized by the Court, argued that the Copyright law exists, inter alia, “to promote the free spread of art, ideas and information, not to hinder it and to regulate copyright so as to enhance a vibrant culture in South Africa. Thus on a purposeful interpretation of the Act, so it is argued, it is not just to protect owners of copyright but to advance the public good.”

The Court was unimpressed, finding that: “There is nothing…to support the meaning of public good relied on by the Respondents. Their construction of public good or welfare is equated to dissemination of ideas and this is nowhere to be found or implied….The view that copyright aims to promote public disclosure and dissemination of works cannot be regarded as a true reflection of the purpose or intent of the Act and is not part of our copyright law. The Respondents’ conception of the purposes of the Copyright Act is overbroad. The Act by no means purports to regulate or promote the free spread of ideas although it undoubtedly is a mechanism by which this result may be effected. It is straining the proper limits of the Act to find some kind of implied condition of dissemination in the conferral of copyright.”

And of course, the Court is absolutely correct–enjoining the distribution of the film doesn’t prevent the distribution of the information/ideas contained therein, only the specific original expression of said ideas. Vollenhoven, or anyone else, remains free to tell stories through separate vehicles. As the Court explained: “[Vollenhoven] concedes readily that the respondents have right to tell the story in a different work and have not attempted to stifle this form of expression. In truth the respondents’ freedom of speech is not impinged at all. What is impinged is the use of the work which the respondents sold to the applicant and were substantially rewarded monetarily. The copyrights are vested by law in the applicants. This cannot be conflated with an infringement of freedom of speech. Vollenhoven shows that she is alive to the distinction between the work and the underlying story or idea and does not shirk from asserting her rights to exploit the story as she is well entitled to do.”

The contrary rule argued by her counsel and by FXI is untenable, and would require embracing the perverse logic that the protection of expression is itself a restriction on freedom of expression, a proposition worthy of Wonderland’s Red Queen. If the right of access enjoyed by the public always supersedes the individual’s right to control the uses of her property, then copyright is truly meaningless. FXI’s position essentially acknowledges this. While I think that FXI is mistaken, and fails to capture how copyright serves to democratize the production of original cultural materials for the benefit of society, I will at least give them credit for their directness. Perhaps they believe that state support for the arts is a better tool for sustaining creators. Perhaps they believe in private patronage. But unlike many of their copyright-skeptic peers in the west, they at least own their narrative and don’t feel the need to say that they believe in copyright while rejecting any modality for its protection. It’s a flawed vision that fails to reflect that the interests of the public are served by sustaining creators, and by protecting fundamental human rights in connection with the creation of original works. But it is a vision. Hopefully one that will evolve through an increased recognition that ensuring consent in a technological universe that celebrates lack of permission is central to advancing our humanity and retaining and celebrating our cultural differences.

Copyright law, ever a sore point in some quarters, has found a new field of battle in the FCC’s recent set-top box proposal. At the request of members of Congress, the Copyright Office recently wrote a rather thorough letter outlining its view of the effects of the FCC’s proposal on rightsholders.

In sum, the Copyright Office’s letter was an even-handed look at the proposal, which concluded:

As a threshold matter, it seems critical that any revised proposal respect the authority of creators to manage the exploitation of their copyrighted works through private licensing arrangements, because regulatory actions that undermine such arrangements would be inconsistent with the rights granted under the Copyright Act.

This fairly uncontroversial statement of basic legal principle was met with cries of alarm. And Stanford’s CIS had a post from Affiliated Scholar Annemarie Bridy that managed to trot out breathless comparisons to inapposite legal theories while simultaneously misconstruing the “fair use” doctrine (as well as how Copyright law works in the video market, for that matter).

Look out! Lochner is coming!

In its letter the Copyright Office warned the FCC that its proposed rules have the potential to disrupt the web of contracts that underlie cable programming, and by extension, risk infringing the rights of copyright holders to commercially exploit their property. This analysis actually tracks what Geoff Manne and I wrote in both our initial comment and our reply comment to the set-top box proposal.

Yet Professor Bridy seems to believe that, notwithstanding the guarantees of both the Constitution and Section 106 of the Copyright Act, the FCC should have the power to abrogate licensing contracts between rightsholders and third parties. She writes that

[t]he Office’s view is essentially that the Copyright Act gives right holders not only the limited range of rights enumerated in Section 106 (i.e., reproduction, preparation of derivative works, distribution, public display, and public performance), but also a much broader and more amorphous right to “manage the commercial exploitation” of copyrighted works in whatever ways they see fit and can accomplish in the marketplace, without any regulatory interference from the government.

What in the world does this even mean? A necessary logical corollary of the Section 106 rights includes the right to exploit works commercially as rightsholders see fit. Otherwise, what could it possibly mean to have the right to control the reproduction or distribution of a work? The truth is that Section 106 sets out a general set of rights that inhere in rightsholders with respect to their protected works, and that commercial exploitation is merely a subset of this total bundle of rights.

The ability to contract with other parties over these rights is also a necessary corollary of the property rights recognized in Section 106. After all, the right to exclude implies by necessity the right to include. Which is exactly what a licensing arrangement is.

But wait, there’s more — she actually managed to pull out the Lochner bogeyman to validate her argument!

The Office’s absolutist logic concerning freedom of contract in the copyright licensing domain is reminiscent of the Supreme Court’s now-infamous reasoning in Lochner v. New York, a 1905 case that invalidated a state law limiting maximum working hours for bakers on the ground that it violated employer-employee freedom of contract. The Court in Lochner deprived the government of the ability to provide basic protections for workers in a labor environment that subjected them to unhealthful and unsafe conditions. As Julie Cohen describes it, “‘Lochner’ has become an epithet used to characterize an outmoded, over-narrow way of thinking about state and federal economic regulation; it goes without saying that hardly anybody takes the doctrine it represents seriously.”

This is quite a leap of logic, as there is precious little in common between the letter from the Copyright Office and the Lochner opinion aside from the fact that both contain the word “contracts” in their pages.  Perhaps the most critical problem with Professor Bridy’s analogy is the fact that Lochner was about a legislature interacting with the common law system of contract, whereas the FCC is a body subordinate to Congress, and IP is both constitutionally and statutorily guaranteed. A sovereign may be entitled to interfere with the operation of common law, but an administrative agency does not have the same sort of legal status as a legislature when redefining general legal rights.

The key argument that Professor Bridy offered in support of her belief that the FCC should be free to abrogate contracts at will is that “[r]egulatory limits on private bargains may come in the form of antitrust laws or telecommunications laws or, as here, telecommunications regulations that further antitrust ends.” However, this completely misunderstands U.S. constitutional doctrine.

In particular, as Geoff Manne and I discussed in our set-top box comments to the FCC, using one constitutional clause to end-run another constitutional clause is generally a no-no:

Regardless of whether or how well the rules effect the purpose of Sec. 629, copyright violations cannot be justified by recourse to the Communications Act. Provisions of the Communications Act — enacted under Congress’s Commerce Clause power — cannot be used to create an end run around limitations imposed by the Copyright Act under the Constitution’s Copyright Clause. “Congress cannot evade the limits of one clause of the Constitution by resort to another,” and thus neither can an agency acting within the scope of power delegated to it by Congress. Establishing a regulatory scheme under the Communications Act whereby compliance by regulated parties forces them to violate content creators’ copyrights is plainly unconstitutional.

Congress is of course free to establish the implementation of the Copyright Act as it sees fit. However, unless Congress itself acts to change that implementation, the FCC — or any other party — is not at liberty to interfere with rightsholders’ constitutionally guaranteed rights.

You Have to Break the Law Before You Raise a Defense

Another bone of contention upon which Professor Bridy gnaws is a concern that licensing contracts will abrogate an alleged right to “fair use” by making the defense harder to muster:  

One of the more troubling aspects of the Copyright Office’s letter is the length to which it goes to assert that right holders must be free in their licensing agreements with MVPDs to bargain away the public’s fair use rights… Of course, the right of consumers to time-shift video programming for personal use has been enshrined in law since Sony v. Universal in 1984. There’s no uncertainty about that particular fair use question—none at all.

The major problem with this reasoning (notwithstanding the somewhat misleading drafting of Section 107) is that “fair use” is not an affirmative right, it is an affirmative defense. Despite claims that “fair use” is a right, the Supreme Court has noted on at least two separate occasions (1, 2) that Section 107 was “structured… [as]… an affirmative defense requiring a case-by-case analysis.”

Moreover, important as the Sony case is, it does not establish that “[t]here’s no uncertainty about [time-shifting as a] fair use question—none at all.” What it actually establishes is that, given the facts of that case, time-shifting was a fair use. Not for nothing does the Sony Court note at the outset of its opinion that

An explanation of our rejection of respondents’ unprecedented attempt to impose copyright liability upon the distributors of copying equipment requires a quite detailed recitation of the findings of the District Court.

But more generally, the Sony doctrine stands for the proposition that:

“The limited scope of the copyright holder’s statutory monopoly, like the limited copyright duration required by the Constitution, reflects a balance of competing claims upon the public interest: creative work is to be encouraged and rewarded, but private motivation must ultimately serve the cause of promoting broad public availability of literature, music, and the other arts. The immediate effect of our copyright law is to secure a fair return for an ‘author’s’ creative labor. But the ultimate aim is, by this incentive, to stimulate artistic creativity for the general public good. ‘The sole interest of the United States and the primary object in conferring the monopoly,’ this Court has said, ‘lie in the general benefits derived by the public from the labors of authors.’ Fox Film Corp. v. Doyal, 286 U. S. 123, 286 U. S. 127. See Kendall v. Winsor, 21 How. 322, 62 U. S. 327-328; Grant v. Raymond, 6 Pet. 218, 31 U. S. 241-242. When technological change has rendered its literal terms ambiguous, the Copyright Act must be construed in light of this basic purpose.” Twentieth Century Music Corp. v. Aiken, 422 U. S. 151, 422 U. S. 156 (1975) (footnotes omitted).

In other words, courts must balance competing interests to maximize “the general benefits derived by the public,” subject to technological change and other criteria that might shift that balance in any particular case.  

Thus, even as an affirmative defense, nothing is guaranteed. The court will have to walk through a balancing test, and only after that point, and only if the accused party’s behavior has not tipped the scales against her, will the court find the use a “fair use.”

As I noted before,

Not surprisingly, other courts are inclined to follow the Supreme Court. Thus the Eleventh Circuit, the Southern District of New York, and the Central District of California (here and here), to name but a few, all explicitly refer to fair use as an affirmative defense. Oh, and the Ninth Circuit did too, at least until Lenz.

The Lenz case was an interesting one because, despite the above noted Supreme Court precedent treating “fair use” as a defense, it is one of the very few cases that has held “fair use” to be an affirmative right (in that case, the court decided that Section 1201 of the DMCA required consideration of “fair use” as a part of filling out a take-down notice). And in doing so, it too tried to rely on Sony to restructure the nature of “fair use.” But as I have previously written, “[i]t bears noting that the Court in Sony Corp. did not discuss whether or not fair use is an affirmative defense, whereas Acuff Rose (decided 10 years after Sony Corp.) and Harper & Row decisions do.”

Further, even the Eleventh Circuit, which the Ninth relied upon in Lenz, later clarified its position that the above-noted Supreme Court precedent definitely binds lower courts, and that “fair use” is in fact an affirmative defense.

Thus, to say that rightsholders’ licensing contracts somehow impinge upon a “right” of fair use completely puts the cart before the horse. Remember, as an affirmative defense, “fair use” is an excuse for otherwise infringing behavior, and rightsholders are well within their constitutional and statutory rights to guard against potentially infringing uses.

Think about it this way. When you commit a crime, you can raise a defense: for instance, an insanity defense. But just because you might be excused for committing a crime if a court finds you were not operating with full faculties, this does not entitle every insane person to go out and commit that crime. The insanity defense can be raised only after a crime is committed, and at that point it will be examined by a judge and jury to determine whether applying the defense furthers the overall criminal law scheme.

“Fair use” works in exactly the same manner. And even though Sony described how time- and space-shifting were potentially permissible, it did so only by determining on those facts that the balancing test came out to allow it. So, maybe a particular time-shifting use would be “fair use.” But maybe not. More likely, in this case, even the allegedly well-established “fair use” of time-shifting in the context of today’s digital media, on-demand programming, Netflix, and the like may not meet that burden.

And what this means is that a rightsholder does not have an ex ante obligation to consider whether a particular contractual clause might in some fashion or other give rise to a “fair use” defense.

The contrary point of view makes no sense. Because “fair use” is a defense, forcing parties to build “fair use” considerations into their contractual negotiations essentially requires them to build in an allowance for infringement — and one that a court might or might not ever find appropriate in light of the requisite balancing of interests. That just can’t be right.

Instead, I think Professor Bridy’s post is just a piece of the larger IP-skeptic movement. I suspect that when “fair use” was in its initial stages of development, it was intended as a fairly gentle softening of the limits of intellectual property — something like the “public necessity” doctrine in common law with respect to real property and trespass. However, that is just not how “fair use” advocates see it today. As Geoff Manne has noted, the idea of “permissionless innovation” has wrongly come to mean “no contracts required (or permitted)”:

[Permissionless innovation] is used to justify unlimited expansion of fair use, and is extended by advocates to nearly all of copyright…, which otherwise requires those pernicious licenses (i.e., permission) from others.

But this position is nonsense — intangible property is still property. And at root, property is just a set of legal relations between persons that defines their rights and obligations with respect to some “thing.” It doesn’t matter if you can hold that thing in your hand or not. As property, IP can be subject to transfer and control through voluntarily created contracts.

Even if “fair use” were some sort of as-yet unknown fundamental right, it would still be subject to limitations upon it by other rights and obligations. To claim that “fair use” should somehow trump the right of a property holder to dispose of the property as she wishes is completely at odds with our legal system.