
[Image: output of the LG Research AI to the prompt “a system of copyright for artificial intelligence”]

Not only have digital-image generators like Stable Diffusion, DALL-E, and Midjourney—which make use of deep-learning models and other artificial-intelligence (AI) systems—created some incredible (and sometimes creepy – see above) visual art, but they’ve engendered a good deal of controversy, as well. Human artists have banded together as part of a fledgling anti-AI campaign; lawsuits have been filed; and policy experts have been trying to think through how these machine-learning systems interact with various facets of the law.

Debates about the future of AI have particular salience for intellectual-property rights. Copyright is notoriously difficult to protect online, and these expert systems add an additional wrinkle: it can at least be argued that their outputs can be unique creations. There are also, of course, moral and philosophical objections to those arguments, with many grounded in the supposition that only a human (or something with a brain, like humans) can be creative.

Leaving aside for the moment a potentially pitched battle over the definition of “creation,” we should be able to find consensus that at least some of these systems produce unique outputs and are not merely cutting and pasting other pieces of visual imagery into a new whole. That is, at some level, the machines are engaging in a rudimentary sort of “learning” about how humans arrange colors and lines when generating images of certain subjects. The machines then reconstruct this process and produce a new set of lines and colors that conform to the patterns they found in the human art.

But that isn’t the end of the story. Even if some of these systems’ outputs are unique and noninfringing, the way the machines learn—by ingesting existing artwork—can raise a number of thorny issues. Indeed, these systems are arguably infringing copyright during the learning phase, and such use may not survive a fair-use analysis.

We are still in the early days of thinking through how this new technology maps onto the law. Answers will inevitably come, but for now, there are some very interesting questions about the intellectual-property implications of AI-generated art, which I consider below.

The Points of Collision Between Intellectual Property Law and AI-Generated Art

AI-generated art is not a single thing. It is, rather, a collection of differing processes, each with different implications for the law. For the purposes of this post, I am going to deal with image-generation systems that use “generative adversarial networks” (GANs) and diffusion models. The various implementations of each will differ in some respects, but from what I understand, the ways that these techniques can be used to generate all sorts of media are sufficiently similar that we can begin to sketch out some of their legal implications.

A (very) brief technical description

This is a very high-level overview of how these systems work; for a more detailed (but very readable) description, see here.

A GAN is a type of machine-learning model that consists of two parts: a generator and a discriminator. The generator is trained to create new images that look like they come from a particular dataset, while the discriminator is trained to distinguish the generated images from real images in the dataset. The two parts are trained together in an adversarial manner, with the generator trying to produce images that can fool the discriminator and the discriminator trying to correctly identify the generated images.
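
To make the adversarial setup concrete, here is a minimal training-step sketch in PyTorch. The toy dimensions, two-layer networks, and random stand-in data are illustrative assumptions of mine, not a description of how any production image generator is built:

```python
# Minimal GAN sketch: a generator learns to fool a discriminator,
# while the discriminator learns to tell real images from generated ones.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 784  # e.g., flattened 28x28 grayscale images (toy sizes)

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: learn to label real training images 1, fakes 0.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()  # detach: don't update G here
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: learn to produce images the discriminator calls "real".
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# One step on a random stand-in batch; a real pipeline would loop over a dataset.
train_step(torch.rand(32, IMG_DIM) * 2 - 1)
```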

A diffusion model, by contrast, analyzes the distribution of information in an image, as noise is progressively added to it. This kind of algorithm analyzes characteristics of sample images—like the distribution of colors or lines—in order to “understand” what counts as an accurate representation of a subject (i.e., what makes a picture of a cat look like a cat and not like a dog).

For example, in the generation phase, systems like Stable Diffusion start with randomly generated noise, and work backward in “denoising” steps to essentially “see” shapes:

The sampled noise is predicted so that if we subtract it from the image, we get an image that’s closer to the images the model was trained on (not the exact images themselves, but the distribution – the world of pixel arrangements where the sky is usually blue and above the ground, people have two eyes, cats look a certain way – pointy ears and clearly unimpressed).
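
As a rough illustration of that denoising loop, here is a heavily simplified sampling sketch in PyTorch. The untrained stand-in noise predictor and the ad hoc step schedule are my own assumptions; real systems like Stable Diffusion use large trained U-Nets, text conditioning, and far more careful samplers:

```python
# Simplified "denoising" generation loop, assuming an already-trained
# model that predicts the noise present in an image.
import torch
import torch.nn as nn

IMG_DIM, STEPS = 784, 50  # toy sizes for illustration

# Stand-in for a trained noise predictor (untrained here, for the sketch).
noise_predictor = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(),
                                nn.Linear(256, IMG_DIM))

def generate() -> torch.Tensor:
    # Start from a "palette" of pure random noise...
    image = torch.randn(1, IMG_DIM)
    for step in range(STEPS):
        # ...predict the noise in the current image...
        predicted_noise = noise_predictor(image)
        # ...and subtract a fraction of it, nudging the image toward the
        # distribution the model was trained on (no stored copies involved).
        image = image - predicted_noise / (STEPS - step)
    return image

sample = generate()
print(sample.shape)  # torch.Size([1, 784]) -- one "generated" flattened image
```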

It is relevant here that, once networks using these techniques are trained, they do not need to rely on saved copies of the training images in order to generate new images. Of course, it’s possible that some implementations might be designed in a way that does save copies of those images, but for the purposes of this post, I will assume we are talking about systems that save known works only during the training phase. The models that are produced during training are, in essence, instructions to a different piece of software about how to take a text prompt from a user and a palette of pure noise, and progressively “discover” signal in that noise until some new image emerges.

Input-stage use of intellectual property

OpenAI, creator of some of the most popular AI tools, is not shy about its use of protected works in the training phase of AI algorithms. In comments to the U.S. Patent and Trademark Office (PTO), it notes that:

…[m]odern AI systems require large amounts of data. For certain tasks, that data is derived from existing publicly accessible “corpora”… of data that include copyrighted works. By analyzing large corpora (which necessarily involves first making copies of the data to be analyzed), AI systems can learn patterns inherent in human-generated data and then use those patterns to synthesize similar data which yield increasingly compelling novel media in modalities as diverse as text, image, and audio. (emphasis added).

Thus, at the training stage, the most popular forms of machine-learning systems require making copies of existing works. And where the material being used is neither in the public domain nor licensed, an infringement occurs (as Getty Images notes in a suit it recently filed against Stability AI). Some affirmative defense is therefore needed to excuse the infringement.

Toward this end, OpenAI believes that its algorithmic training should qualify as a fair use. Other major services that use these AI techniques to “learn” from existing media would likely make similar arguments. But, at least in the way that OpenAI has framed the fair-use analysis (that these uses are sufficiently “transformative”), it’s not clear that they should qualify.

The purpose and character of the use

In brief, fair use—found in 17 USC § 107—provides for an affirmative defense against infringement when the use is “for purposes such as criticism, comment, news reporting, teaching…, scholarship, or research.” When weighing a fair-use defense, a court must balance a number of factors:

  1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
  2. the nature of the copyrighted work;
  3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  4. the effect of the use upon the potential market for or value of the copyrighted work.

OpenAI’s fair-use claim is rooted in the first factor: the purpose and character of the use. I should note, then, that what follows is solely a consideration of Factor 1, with special attention paid to whether these uses are “transformative.” But it is important to stipulate that fair-use analysis is a multi-factor test and that, even within the first factor, it’s not mandatory that a use be “transformative.” It is entirely possible that a court balancing all of the factors could, indeed, find that OpenAI is engaged in fair use, even if it does not agree that the use is “transformative.”

Whether the use of copyrighted works to train an AI is “transformative” is certainly a novel question, but it is likely answered through an observation that the U.S. Supreme Court made in Campbell v. Acuff-Rose Music:

[W]hat Sony said simply makes common sense: when a commercial use amounts to mere duplication of the entirety of an original, it clearly “supersede[s] the objects,”… of the original and serves as a market replacement for it, making it likely that cognizable market harm to the original will occur… But when, on the contrary, the second use is transformative, market substitution is at least less certain, and market harm may not be so readily inferred.

A key question, then, is whether training an AI on copyrighted works amounts to mere “duplication of the entirety of an original” or is sufficiently “transformative” to support a fair-use finding. OpenAI, as noted above, believes its use is highly transformative. According to its comments:

Training of AI systems is clearly highly transformative. Works in training corpora were meant primarily for human consumption for their standalone entertainment value. The “object of the original creation,” in other words, is direct human consumption of the author’s expression. Intermediate copying of works in training AI systems is, by contrast, “non-expressive”: the copying helps computer programs learn the patterns inherent in human-generated media. The aim of this process—creation of a useful generative AI system—is quite different than the original object of human consumption. The output is different too: nobody looking to read a specific webpage contained in the corpus used to train an AI system can do so by studying the AI system or its outputs. The new purpose and expression are thus both highly transformative.

But the way that OpenAI frames its system works against its interests in this argument. As noted above, and reinforced in the immediately preceding quote, an AI system like DALL-E or Stable Diffusion is actually made of at least two distinct pieces. The first is a piece of software that ingests existing works and creates a file that can serve as instructions to the second piece of software. The second piece of software then takes the output of the first part and can produce independent results. Thus, there is a clear discontinuity in the process, whereby the ultimate work created by the system is disconnected from the creative inputs used to train the software.

Therefore, contrary to what OpenAI asserts, the protected works are indeed ingested into the first part of the system “for their standalone entertainment value.” That is to say, the software is learning what counts as “standalone entertainment value,” and therefore the works must be used in those terms.

Surely, a computer is not sitting on a couch and surfing for its own entertainment. But it is solely for the very “standalone entertainment value” that the first piece of software is being shown copyrighted material. By contrast, parody or “remixing” uses incorporate the work into some secondary expression that transforms the input. The way these systems work is to learn what makes a piece entertaining and then to discard that piece altogether. Moreover, this use of art qua art most certainly interferes with the existing market insofar as this use is in lieu of reaching a licensing agreement with rightsholders.

The 2nd U.S. Circuit Court of Appeals dealt with an analogous case. In American Geophysical Union v. Texaco, the 2nd Circuit considered whether Texaco’s photocopying of scientific articles produced by the plaintiffs qualified for a fair-use defense. Texaco employed between 400 and 500 research scientists and, as part of supporting their work, maintained subscriptions to a number of scientific journals. It was common practice for Texaco’s scientists to photocopy entire articles and save them in a file.

The plaintiffs sued for copyright infringement. Texaco asserted that photocopying by its scientists for the purposes of furthering scientific research—that is to train the scientists on the content of the journal articles—should count as a fair use, at least in part because it was sufficiently “transformative.” The 2nd Circuit disagreed:

The “transformative use” concept is pertinent to a court’s investigation under the first factor because it assesses the value generated by the secondary use and the means by which such value is generated. To the extent that the secondary use involves merely an untransformed duplication, the value generated by the secondary use is little or nothing more than the value that inheres in the original. Rather than making some contribution of new intellectual value and thereby fostering the advancement of the arts and sciences, an untransformed copy is likely to be used simply for the same intrinsic purpose as the original, thereby providing limited justification for a finding of fair use… (emphasis added).

As in the case at hand, the 2nd Circuit observed that making full copies of the scientific articles was solely for the consumption of the material itself. A rejoinder, of course, is that training these AI systems surely advances scientific research and, thus, does foster the “advancement of the arts and sciences.” But in American Geophysical Union, where the secondary use was explicitly for the creation of new and different scientific outputs, the court still held that making copies of one scientific article in order to learn and produce new scientific innovations did not count as “transformative.”

What this case demonstrates is that one cannot merely state that some social goal will be advanced in the future by permitting an exception to copyright protection today. As the 2nd Circuit put it:

…the dominant purpose of the use is a systematic institutional policy of multiplying the available number of copies of pertinent copyrighted articles by circulating the journals among employed scientists for them to make copies, thereby serving the same purpose for which additional subscriptions are normally sold, or… for which photocopying licenses may be obtained.

The secondary use itself must be transformative and different. Where an AI system ingests copyrighted works, that use is simply not transformative; it is using the works in their original sense in order to train a system to be able to make other original works. As in American Geophysical Union, the AI creators are completely free to seek licenses from rightsholders in order to train their systems.

Finally, there is a sense in which this machine learning might not infringe on copyrights at all. To my knowledge, the technology does not yet exist, but if it were possible for a machine to somehow “see” in the way that humans do—without using stored copies of copyrighted works—then merely “learning” from those works, insofar as we can call it learning, probably would not violate copyright laws.

Do the outputs of these systems violate intellectual property laws?

The outputs of GANs and diffusion models may or may not violate IP laws, but there is nothing inherent in the processes described above to dictate that they must. As noted, the most common AI systems do not save copies of existing works, but merely “instructions” (more or less) on how to create new works that conform to patterns they found by examining existing work. If we assume that a system isn’t violating copyright at the input stage, it’s entirely possible that it can produce completely new pieces of art that have never before existed and do not violate copyright.

They can, however, be made to violate IP rights. For example, trademark violations appear to be one of the most popular uses of these AI systems by end users. To take but one example, a quick search of Google Images for “midjourney iron man” returns a slew of images that almost certainly violate trademarks for the character Iron Man. Similarly, these systems can be instructed to generate art that is not just “in the style” of a particular artist, but that very closely resembles existing pieces. In this sense, the system would be making a copy that theoretically infringes. 

There is a common flaw in such systems that leads to outputs that are more likely to violate copyright in this way. Known as “overfitting,” it occurs when the training stage of these AI systems is presented with samples that contain too many instances of a particular image. The resulting model then encodes too much information about that specific image, such that when the AI generates a new image, it is constrained to producing something very close to the original.
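
One common mitigation is to deduplicate the training set so that no single image is over-represented. The sketch below uses a toy average-hash of my own devising (assuming only the Pillow library); real pipelines rely on more robust perceptual hashing or embedding-based near-duplicate detection:

```python
# Toy near-duplicate filter for a training set, to reduce the risk that
# repeated images get memorized ("overfitting") and later reproduced.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale, then threshold each pixel on the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def deduplicate(paths: list[str]) -> list[str]:
    seen, kept = set(), []
    for path in paths:
        h = average_hash(path)
        if h not in seen:  # keep only the first copy of each near-identical image
            seen.add(h)
            kept.append(path)
    return kept

# Usage (hypothetical file names):
# training_set = deduplicate(["cat_001.png", "cat_001_copy.png", "dog_042.png"])
```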

An argument can also be made that generating art “in the style of” a famous artist violates moral rights (in jurisdictions where such rights exist).

At least in the copyright space, cases like Sony are going to become crucial. Does the user side of these AI systems have substantial noninfringing uses? If so, the firms that host software for end users could avoid secondary-infringement liability, and the onus would fall on users to avoid violating copyright laws. At the same time, it seems plausible that legislatures could place some obligation on these providers to implement filters to mitigate infringement by end users.

Opportunities for New IP Commercialization with AI

There are a number of ways that AI systems may inexcusably infringe on intellectual-property rights. As a best practice, I would encourage the firms that operate these services to seek licenses from rightsholders. While this would surely be an expense, it also opens new opportunities for both sides to generate revenue.

For example, an AI firm could develop its own version of YouTube’s ContentID that allows creators to opt their work into training. For some well-known artists, this could be negotiated with an upfront licensing fee. On the user side, any artist who has opted in could then be selected as a “style” for the AI to emulate. When users generate an image in that style, a royalty payment to the artist would be triggered. Creators would also have the option to remove their influence from the system if they so desired.
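
To sketch how such a scheme might fit together, here is a hypothetical opt-in registry in Python. Every name, rate, and method below is an assumption for illustration only; a real implementation would need content matching (a la ContentID), identity verification, and actual payment rails:

```python
# Hypothetical opt-in/royalty registry for artist "styles".
from collections import defaultdict

class StyleRegistry:
    def __init__(self, royalty_per_image: float = 0.05):  # illustrative rate
        self.royalty_per_image = royalty_per_image
        self.opted_in: set[str] = set()
        self.balances: dict[str, float] = defaultdict(float)

    def opt_in(self, artist: str) -> None:
        """Artist licenses their catalog for training and style prompts."""
        self.opted_in.add(artist)

    def opt_out(self, artist: str) -> None:
        """Artist withdraws; their style is no longer offered to users."""
        self.opted_in.discard(artist)

    def generate(self, prompt: str, style: str | None = None) -> str:
        if style is not None:
            if style not in self.opted_in:
                raise PermissionError(f"{style} has not licensed their style")
            self.balances[style] += self.royalty_per_image  # accrue royalty
        return f"<image for {prompt!r}>"  # stand-in for the actual model call

registry = StyleRegistry()
registry.opt_in("Jane Painter")  # hypothetical artist
registry.generate("a castle at dusk", style="Jane Painter")
print(registry.balances["Jane Painter"])  # 0.05
```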

Undoubtedly, there are other ways to monetize the relationship between creators and the use of their work in AI systems. Ultimately, the firms that run these systems will not be able to simply wish away IP laws. There are going to be opportunities for creators and AI firms to both succeed, and the law should help to generate that result.

With just a week to go until the U.S. midterm elections, which potentially herald a change in control of one or both houses of Congress, speculation is mounting that congressional Democrats may seek to use the lame-duck session following the election to move one or more pieces of legislation targeting the so-called “Big Tech” companies.

Gaining particular notice—on grounds that it is the least controversial of the measures—is S. 2710, the Open App Markets Act (OAMA). Introduced by Sen. Richard Blumenthal (D-Conn.), the Senate bill has garnered 14 cosponsors: exactly seven Republicans and seven Democrats. It would, among other things, force certain mobile app stores and operating systems to allow “sideloading” and open their platforms to rival in-app payment systems.

Unfortunately, even this relatively restrained legislation—at least, when compared to Sen. Amy Klobuchar’s (D-Minn.) American Innovation and Choice Online Act or the European Union’s Digital Markets Act (DMA)—is highly problematic in its own right. Here, I will offer seven major questions the legislation leaves unresolved.

1.     Are Quantitative Thresholds a Good Indicator of ‘Gatekeeper Power’?

It is no secret that OAMA has been tailor-made to regulate two specific app stores: Android’s Google Play Store and Apple’s Apple App Store (see here, here, and, yes, even Wikipedia knows it). The text makes this clear by limiting the bill’s scope to app stores with more than 50 million users, a threshold that only Google Play and the Apple App Store currently satisfy.

However, purely quantitative thresholds are a poor indicator of a company’s potential “gatekeeper power.” An app store might have far fewer than 50 million users but cater to a relevant niche market. By the bill’s own logic, why shouldn’t that app store likewise be compelled to be open to competing app distributors? Conversely, it may be easy for users of very large app stores to multi-home or switch seamlessly to competing stores. In either case, raw user counts paint a distorted picture of the market’s realities.

As it stands, the bill’s thresholds appear arbitrary and pre-committed to “disciplining” just two companies: Google and Apple. In principle, good laws should be abstract and general and not intentionally crafted to apply only to a few select actors. In OAMA’s case, the law’s specific thresholds are also factually misguided, as purely quantitative criteria are not a good proxy for the sort of market power the bill purportedly seeks to curtail.

2.     Why Does the Bill Not Apply to All App Stores?

Rather than applying to app stores across the board, OAMA targets only those associated with mobile devices and “general purpose computing devices.” It’s not clear why.

For example, why doesn’t it cover app stores on gaming platforms, such as Microsoft’s Xbox or Sony’s PlayStation?

[Chart omitted; source: Visual Capitalist]

Currently, a PlayStation user can only buy digital games through the PlayStation Store, where Sony reportedly takes a 30% cut of all sales—although its pricing schedule is less transparent than that of mobile rivals such as Apple or Google.

Clearly, this bothers some developers. In an echo of Epic Games CEO Tim Sweeney’s ongoing crusade against the Apple App Store, indie-game publisher Iain Garner of Neon Doctrine recently took to Twitter to complain about Sony’s restrictive practices. According to Garner, “Platform X” (clearly PlayStation) charges developers up to $25,000 and 30% of subsequent earnings to give games a modicum of visibility on the platform, in addition to requiring them to jump through such hoops as making a PlayStation-specific trailer and writing a blog post. Garner further alleges that Sony severely circumscribes developers’ ability to offer discounts, “meaning that Platform X owners will always get the worst deal!” (see also here).

Microsoft’s Xbox Game Store similarly takes a 30% cut of sales. Presumably, Microsoft and Sony both have the same type of gatekeeper power in the gaming-console market that Apple and Google are said to have on their respective platforms, leading to precisely those issues that OAMA ostensibly purports to combat: consumers are not allowed to choose alternative app stores through which to buy games on their respective consoles, and developers must acquiesce to Sony’s and Microsoft’s terms if they want their games to reach those players.

More broadly, dozens of online platforms also charge commissions on the sales made by their creators. To cite but a few: OnlyFans takes a 20% cut of sales; Facebook gets 30% of the revenue that creators earn from their followers; YouTube takes 45% of ad revenue generated by users; and Twitch reportedly rakes in 50% of subscription fees.

This is not to say that all these services are monopolies that should be regulated. To the contrary, it seems like fees in the 20-30% range are common even in highly competitive environments. Rather, it is merely to observe that there are dozens of online platforms that demand a percentage of the revenue that creators generate and that prevent those creators from bypassing the platform. As well they should: after all, creating and improving a platform is not free.

It is nonetheless difficult to see why legislation regulating online marketplaces should focus solely on two mobile app stores. Ultimately, the inability of OAMA’s sponsors to properly account for this carveout diminishes the law’s credibility.

3.     Should Picking Among Legitimate Business Models Be up to Lawmakers or Consumers?

“Open” and “closed” platforms posit two different business models, each with its own advantages and disadvantages. Some consumers may prefer more open platforms because they grant them more flexibility to customize their mobile devices and operating systems. But there are also compelling reasons to prefer closed systems. As Sam Bowman observed, narrowing choice through a more curated system frees users from having to research every possible option every time they buy or use some product. Instead, they can defer to the platform’s expertise in determining whether an app or app store is trustworthy or whether it contains, say, objectionable content.

Currently, users can choose to opt for Apple’s semi-closed “walled garden” iOS or Google’s relatively more open Android OS (which OAMA wants to pry open even further). Ironically, under the pretext of giving users more “choice,” OAMA would take away the possibility of choice where it matters the most—i.e., at the platform level. As Mikolaj Barczentewicz has written:

A sideloading mandate aims to give users more choice. It can only achieve this, however, by taking away the option of choosing a device with a “walled garden” approach to privacy and security (such as is taken by Apple with iOS).

This elides the nuances between the two approaches and pushes Android and iOS to converge around a single model. But if consumers unequivocally preferred open platforms, Apple would have no customers, because everyone would already be on Android.

Contrary to regulators’ simplistic assumptions, “open” and “closed” are not synonyms for “good” and “bad.” Instead, as Boston University’s Andrei Hagiu has shown, there are fundamental welfare tradeoffs at play between these two perfectly valid business models that belie simplistic characterizations of one being inherently superior to the other.

It is debatable whether courts, regulators, or legislators are well-situated to resolve these complex tradeoffs by substituting businesses’ product-design decisions and consumers’ revealed preferences with their own. After all, if regulators had such perfect information, we wouldn’t need markets or competition in the first place.

4.     Does OAMA Account for the Security Risks of Sideloading?

Platforms retaining some control over the apps or app stores allowed on their operating systems bolsters security, as it allows companies to weed out bad players.

Both Apple and Google do this, albeit to varying degrees. For instance, Android already allows sideloading and third-party in-app payment systems to some extent, while Apple runs a tighter ship. However, studies have shown that it is precisely the iOS “walled garden” model which gives it an edge over Android in terms of privacy and security. Even vocal Apple critic Tim Sweeney recently acknowledged that increased safety and privacy were competitive advantages for Apple.

The problem is that far-reaching sideloading mandates—such as the ones contemplated under OAMA—are fundamentally at odds with current privacy and security capabilities (see here and here).

OAMA’s defenders might argue that the law does allow covered platforms to raise safety and security defenses, thus making the tradeoffs between openness and security unnecessary. But the bill places such stringent conditions on those defenses that platform operators will almost certainly be deterred from risking running afoul of the law’s terms. To invoke the safety and security defenses, covered companies must demonstrate that provisions are applied on a “demonstrably consistent basis”; are “narrowly tailored and could not be achieved through less discriminatory means”; and are not used as a “pretext to exclude or impose unnecessary or discriminatory terms.”

Implementing these stringent requirements will drag enforcers into a micromanagement quagmire. There are thousands of potential spyware, malware, rootkit, backdoor, and phishing (to name just a few) software-security issues—all of which pose distinct threats to an operating system. The Federal Trade Commission (FTC) and the federal courts will almost certainly struggle to control the “consistency” requirement across such varied types.

Likewise, OAMA’s reference to “least discriminatory means” suggests there is only one valid answer to any given security-access tradeoff. Further, depending on one’s preferred balance between security and “openness,” a claimed security risk may or may not be “pretextual,” and thus may or may not be legal.

Finally, the bill text appears to preclude the possibility of denying access to a third-party app or app store for reasons other than safety and privacy. This would undermine Apple’s and Google’s two-tiered quality-control systems, which also control for “objectionable” content such as (child) pornography and social engineering. 

5.     How Will OAMA Safeguard the Rights of Covered Platforms?

OAMA is also deeply flawed from a procedural standpoint. Most importantly, there is no meaningful way for a company to contest its designation as a “covered company,” or the harms associated with that designation.

Once a company is “covered,” it is presumed to hold gatekeeper power, with all the associated risks for competition, innovation, and consumer choice. Remarkably, this presumption does not admit any qualitative or quantitative evidence to the contrary. The only thing a covered company can do to rebut the designation is to demonstrate that it, in fact, has fewer than 50 million users.

By preventing companies from showing that they do not hold the kind of gatekeeper power that harms competition, decreases innovation, raises prices, and reduces choice (the bill’s stated objectives), OAMA severely tilts the playing field in the FTC’s favor. Even the EU’s enforcer-friendly DMA incorporated a last-minute amendment allowing firms to dispute their status as “gatekeepers.” While this defense is not perfect (companies cannot rely on the same qualitative evidence that the European Commission can use against them), at least gatekeeper status can be contested under the DMA.

6.     Should Legislation Protect Competitors at the Expense of Consumers?

Like most of the new wave of regulatory initiatives against Big Tech (but unlike antitrust law), OAMA is explicitly designed to help competitors, with consumers footing the bill.

For example, OAMA prohibits covered companies from using or combining nonpublic data obtained from third-party apps or app stores operating on their platforms in competition with those third parties. While this may have the short-term effect of redistributing rents away from these platforms and toward competitors, it risks harming consumers and third-party developers in the long run.

Platforms’ ability to integrate such data is part of what allows them to bring better and improved products and services to consumers in the first place. OAMA tacitly admits this by recognizing that the use of nonpublic data grants covered companies a competitive advantage. In other words, it allows them to deliver a product that is better than competitors’.

Prohibiting self-preferencing raises similar concerns. Why would a company that has invested billions in developing a successful platform and ecosystem not give preference to its own products to recoup some of that investment? After all, the possibility of exercising some control over downstream and adjacent products is what might have driven the platform’s development in the first place. In other words, self-preferencing may be a symptom of competition, and not the absence thereof. Third-party companies also would have weaker incentives to develop their own platforms if they can free-ride on the investments of others. And platforms that favor their own downstream products might simply be better positioned to guarantee their quality and reliability (see here and here).

In all of these cases, OAMA’s myopic focus on improving the lot of competitors for easy political points will upend the mobile ecosystems from which both users and developers derive significant benefit.

7.     Shouldn’t the EU Bear the Risks of Bad Tech Regulation?

Finally, U.S. lawmakers should ask themselves whether the European Union, which has no tech leaders of its own, is really a model to emulate. Today, after all, marks the day the long-awaited Digital Markets Act— the EU’s response to perceived contestability and fairness problems in the digital economy—officially takes effect. In anticipation of the law entering into force, I summarized some of the outstanding issues that will define implementation moving forward in this recent tweet thread.

We have been critical of the DMA here at Truth on the Market on several factual, legal, economic, and procedural grounds. The law’s problems range from it essentially being a tool to redistribute rents away from platforms and toward third parties, despite it being unclear why the latter group is inherently more deserving (Pablo Ibañez Colomo has raised a similar point); to its opacity and lack of clarity, with a process that appears tilted in the Commission’s favor; to the awkward way it interacts with EU competition law, ignoring the welfare tradeoffs between the models it seeks to impose and perfectly valid alternatives (see here and here); to its flawed assumptions (see, e.g., here on contestability under the DMA); to the dubious legal and economic value of the theory of harm known as “self-preferencing”; to the very real possibility of unintended consequences (e.g., in relation to security and interoperability mandates).

In other words, that the United States lags the EU in seeking to regulate this area might not be a bad thing, after all. Despite the EU’s insistence on being a trailblazing agenda-setter at all costs, the wiser thing in tech regulation might be to remain at a safe distance. This is particularly true when one considers the potentially large costs of legislative missteps and the difficulty of recalibrating once a course has been set.

U.S. lawmakers should take advantage of this dynamic and learn from some of the Old Continent’s mistakes. If they play their cards right and take the time to read the writing on the wall, they might just succeed in averting antitrust’s uncertain future.

Fair use’s fatal conceit

Geoffrey Manne — 21 February 2017

My colleague, Neil Turkewitz, begins his fine post for Fair Use Week (read: crashing Fair Use Week) by noting that

Many of the organizations celebrating fair use would have you believe, because it suits their analysis, that copyright protection and the public interest are diametrically opposed. This is merely a rhetorical device, and is a complete fallacy.

If I weren’t a recovering law professor, I would just end there: that about sums it up, and “the rest is commentary,” as they say. Alas….  

All else equal, creators would like as many people to license their works as possible; there’s no inherent incompatibility between “incentives and access” (which is just another version of the fallacious “copyright protection versus the public interest” trope). Everybody wants as much access as possible. Sure, consumers want to pay as little as possible for it, and creators want to be paid as much as possible. That’s a conflict, and at the margin it can seem like a conflict between access and incentives. But it’s not a fundamental, philosophical, and irreconcilable difference — it’s the last 15 minutes of negotiation before the contract is signed.

Reframing what amounts to a fundamental agreement into a pitched battle for society’s soul is indeed a purely rhetorical device — and a mendacious one, at that.

The devil is in the details, of course, and there are still disputes on the margin, as I said. But it helps to know what they’re really about, and why they are so far from the fanciful debates the copyright scolds wish we were having.

First, price is, in fact, a big deal. For the creative industries it can be the difference between, say, making one movie or a hundred, and for artists it can be the difference between earning a livelihood writing songs or packing it in for a desk job.

But despite their occasional lip service to the existence of trade-offs, many “fair-users” see price — i.e., licensing agreements — as nothing less than a threat to social welfare. After all, the logic runs, if copies can be made at (essentially) zero marginal cost, a positive price is just extortion. They say, “more access!,” but they don’t mean, “more access at an agreed-upon price;” they mean “zero-price access, and nothing less.” These aren’t the same thing, and when “fair use” is a stand-in for “zero-price use,” fair-users are moving the goalposts — and being disingenuous about it.

The other, related problem, of course, is piracy. Sometimes rightsholders’ objections to the expansion of fair use are about limiting access. But typically that’s true only where fine-tuned contracting isn’t feasible, and where the only realistic choice they’re given is between no access for some people, and pervasive (and often unstoppable) piracy. There are any number of instances where rightsholders have no realistic prospect of efficiently negotiating licensing terms and receiving compensation, and would welcome greater access to their works even without a license — as long as the result isn’t also (or only) excessive piracy. The key thing is that, in such cases, opposition to fair use isn’t opposition to reasonable access, even free access. It’s opposition to piracy.

Time-shifting with VCRs and space-shifting with portable mp3 players (to take two contentious historical examples) fall into this category (even if they are held up — as they often are — by the fair-users as totems of their fanciful battle). At least at the time of the Sony and Diamond Rio cases, when there was really no feasible way to enforce licenses or charge differential prices for such uses, the choice rightsholders faced was effectively all-or-nothing, and they had to pick one. I’m pretty sure, all else equal, they would have supported such uses, even without licenses and differential compensation — except that the piracy risk was so significant that it swamped the likely benefits, tilting the scale toward “nothing” instead of “all.”

Again, the reality is that creators and rightsholders were confronted with a choice between two imperfect options; neither was likely “right,” and they went with the lesser evil. But one can’t infer from that constrained decision an inherent antipathy to fair use. Sadly, such decisions have to be made in the real world, not law reviews and EFF blog posts. As economists Benjamin Klein, Andres Lerner and Kevin Murphy put it regarding the Diamond Rio case:

[R]ather than representing an attempt by copyright-holders to increase their profits by controlling legally established “fair uses,”… the obvious record-company motivation is to reduce the illegal piracy that is encouraged by the technology. Eliminating a “fair use” [more accurately, “opposing an expansion of fair use” -ed.] is not a benefit to the record companies; it is an unfortunate cost they have to bear to solve the much larger problem of infringing uses. The record companies face competitive pressure to avoid these costs by developing technologies that distinguish infringing from non-infringing copying.

This last point is important, too. Fair-users don’t like technological protection measures, either, even if they actually facilitate licensing and broader access to copyrighted content. But that really just helps to reveal the poverty of their position. They should welcome technology that expands access, even if it also means that it enables rightsholders to fine-tune their licenses and charge a positive price. Put differently: Why do they hate Spotify!?

I’m just hazarding a guess here, but I suspect that the antipathy to technological solutions goes well beyond the short-term limits on some current use of content that copyright minimalists think shouldn’t be limited. If technology, instead of fair use, is truly determinative of the extent of zero-price access, then their ability to seriously influence (read: rein in) the scope of copyright is diminished. Fair use is amorphous. They can bring cases, they can lobby Congress, they can pen strongly worded blog posts, and they can stage protests. But they can’t do much to stop technological progress. Of course, technology does at least as much to limit the enforceability of licenses and create new situations where zero-price access is the norm. But still, R&D is a lot harder than PR.

What’s more, if technology were truly determinative, it would frequently mean that former fair uses could become infringing at some point (or vice versa, of course). Frankly, there’s no reason for time-shifting of TV content to continue to be considered a fair use today. We now have the technology both to enable time-shifting and to efficiently license content for the purpose, charge a differential price for it, and enforce the terms. In fact, all of that is so pervasive today that most users do pay for time-shifting technologies, under license terms that presumably define the scope of their right to do so; they just may not have read the contract. Where time-shifting as a fair use rears its ugly head today is in debates over new, infringing technology where, in truth, the fair use argument is really a malleable pretext to advocate for a restriction on the scope of copyright (e.g., Aereo).

In any case, as the success of business models like Spotify and Netflix (to say nothing of Comcast’s X1 interface and new Xfinity Stream app) attests, technology has enabled users to legitimately engage in what was once seemingly conceivable only under fair use. Yes, at a price — one that millions of people are willing to pay. It is surely the case that rightsholders’ licensing of technologies like these has made content more accessible, to more people, and with higher-quality service, than a regime of expansive unlicensed use could ever have done.

At the same time, let’s not forget that, often, even when they could efficiently distribute content only at a positive price, creators offer up scads of content for free, in myriad ways. Sure, the objective is to maximize revenue overall by increasing exposure, price discriminating, or enhancing the quality of paid-for content in some way — but so what? More content is more content, and easier access is easier access. All of that uncompensated distribution isn’t rightsholders nodding toward the copyright scolds’ arguments; it’s perfectly consistent with licensing. Obviously, the vast majority of music, for example, is listened-to subject to license agreements, not because of fair use exceptions or rightsholders’ largesse.

For the vast majority of creators, users and uses, licensed access works, and gets us massive amounts of content and near ubiquitous access. The fair use disputes we do have aren’t really about ensuring broad access; that’s already happening. Rather, those disputes are either niggling over the relatively few ambiguous margins on the one hand, or, on the other, fighting the fair-users’ manufactured, existential fight over whether copyright exceptions will subsume the rule. The former is to be expected: Copyright boundaries will always be imperfect, and courts will always be asked to make the close calls. The latter, however, is simply a drain on resources that could be used to create more content, improve its quality, distribute it more broadly, or lower prices.

Copyright law has always been, and always will be, operating in the shadow of technology — technology both for distribution and novel uses, as well as for pirating content. The irony is that, as digital distribution expands, it has dramatically increased the risk of piracy, even as copyright minimalists argue that the low costs of digital access justify a more expansive interpretation of fair use — which would, in turn, further increase the risk of piracy.

Creators’ opposition to this expansion has nothing to do with opposition to broad access to content, and everything to do with ensuring that piracy doesn’t overwhelm their ability to get paid, and to produce content in the first place.

Even were fair use to somehow disappear tomorrow, there would be more and higher-quality content, available to more people in more places, than ever before. But creators have no interest in seeing fair use disappear. What they do have is an interest in licensing their content as broadly as possible when doing so is efficient, and in minimizing piracy. Sometimes legitimate fair-use questions get caught in the middle. We could and should have a reasonable debate over the precise contours of fair use in such cases. But the false dichotomy of creators against users makes that extremely difficult. Until the disingenuous rhetoric is clawed back, we’re stuck with needless fights that don’t benefit either users or creators — although they do benefit the policy scolds, academics, wonks and businesses that foment them.