Archives For law and economics

What does it mean to “own” something? A simple question (with a complicated answer, of course) that, astonishingly, goes unasked in a recent article in the University of Pennsylvania Law Review entitled, What We Buy When We “Buy Now,” by Aaron Perzanowski and Chris Hoofnagle (hereafter “P&H”). But how can we reasonably answer the question they pose without first trying to understand the nature of property interests?

P&H set forth a simplistic thesis for their piece: when an e-commerce site uses the term “buy” to indicate the purchase of digital media (instead of the term “license”), it deceives consumers. This is so, the authors assert, because the common usage of the term “buy” indicates that there will be some conveyance of property that necessarily includes absolute rights such as alienability, descendibility, and excludability, and digital content doesn’t generally come with these attributes. The authors seek to establish this deception through a poorly constructed survey regarding consumers’ understanding of the parameters of their property interests in digitally acquired copies. (The survey’s considerable limitations are a topic for another day….)

The issue is more than merely academic: NTIA and the USPTO have just announced that they will hold a public meeting

to discuss how best to communicate to consumers regarding license terms and restrictions in connection with online transactions involving copyrighted works… [as a precursor to] the creation of a multistakeholder process to establish best practices to improve consumers’ understanding of license terms and restrictions in connection with online transactions involving creative works.

Whatever the results of that process, it should not begin, or end, with P&H’s problematic approach.

Getting to their conclusion that platforms are engaged in deceptive practices requires two leaps of faith: First, that property interests are absolute and that any restraint on the use of “property” is inconsistent with the notion of ownership; and second, that consumers’ stated expectations (even assuming that they were measured correctly) alone determine the appropriate contours of legal (and economic) property interests. Both leaps are meritless.

Property and ownership are not absolute concepts

P&H are in such a rush to condemn downstream restrictions on the alienability of digital copies that they fail to recognize that “property” and “ownership” are not absolute terms, and are capable of being properly understood only contextually. Our very notions of what objects may be capable of ownership change over time, along with the scope of authority over owned objects. For P&H, the fact that there are restrictions on the use of an object means that it is not properly “owned.” But that overlooks our everyday understanding of the nature of property.

Ownership is far more complex than P&H allow, and ownership limited by certain constraints is still ownership. As Armen Alchian and Harold Demsetz note in The Property Right Paradigm (1973):

In common speech, we frequently speak of someone owning this land, that house, or these bonds. This conversational style undoubtedly is economical from the viewpoint of quick communication, but it masks the variety and complexity of the ownership relationship. What is owned are rights to use resources, including one’s body and mind, and these rights are always circumscribed, often by the prohibition of certain actions. To “own land” usually means to have the right to till (or not to till) the soil, to mine the soil, to offer those rights for sale, etc., but not to have the right to throw soil at a passerby, to use it to change the course of a stream, or to force someone to buy it. What are owned are socially recognized rights of action. (Emphasis added).

Literally everything we own comes with a range of limitations on our use rights. Literally. Everything. So it is absurd to start from the position that limitations on use mean something is not, in fact, owned.

Moreover, in defining what we buy when we buy digital goods by reference to analog goods, P&H are comparing apples and oranges, without acknowledging that both apples and oranges are bought.

There has been a fair amount of discussion about the nature of digital content transactions (including by the USPTO and NTIA), and whether they are analogous to traditional sales of objects or more properly characterized as licenses. But this is largely a distinction without a difference, and resolving the nature of the transaction is not necessary to see that P&H’s assertion of deception is unwarranted.

Quite simply, we are accustomed to buying licenses as well as products. Whenever we buy a ticket — e.g., an airline ticket or a ticket to the movies — we are buying the right to use something or gain some temporary privilege. These transactions are governed by the terms of the license. But we certainly buy tickets, no? Alchian and Demsetz again:

The domain of demarcated uses of a resource can be partitioned among several people. More than one party can claim some ownership interest in the same resource. One party may own the right to till the land, while another, perhaps the state, may own an easement to traverse or otherwise use the land for specific purposes. It is not the resource itself which is owned; it is a bundle, or a portion, of rights to use a resource that is owned. In its original meaning, property referred solely to a right, title, or interest, and resources could not be identified as property any more than they could be identified as right, title, or interest. (Emphasis added).

P&H essentially assert that restrictions on the use of property are so inconsistent with the notion of property that it would be deceptive to describe the acquisition transaction as a purchase. But such a claim completely overlooks the fact that there are restrictions on any use of property in general, and on ownership of copies of copyright-protected materials in particular.

Take analog copies of copyright-protected works. While the lawful owner of a copy is able to lend that copy to a friend, sell it, or even use it as a hammer or paperweight, he or she cannot offer it for rental (for certain kinds of works), cannot reproduce it, may not publicly perform or broadcast it, and may not use it to bludgeon a neighbor. In short, there are all kinds of restrictions on the use of said object — yet P&H have little problem with defining the relationship of person to object as “ownership.”

Consumers’ understanding of all the terms of exchange is a poor metric for determining the nature of property interests

P&H make much of the assertion that most users don’t “know” the precise terms that govern the allocation of rights in digital copies; this is the source of the “deception” they assert. But there is a cost to marking out the precise terms of use with perfect specificity (no contract specifies every eventuality), a cost to knowing the terms perfectly, and a cost to caring about them.

When we buy digital goods, we probably care a great deal about a few terms. For a digital music file, for example, we care first and foremost about whether it will play on our device(s). Other terms are of diminishing importance. Users certainly care whether they can play a song when offline, for example, but whether their children will be able to play it after they die? Not so much. That eventuality may, in fact, be specified in the license, but the nature of this particular ownership relationship includes a degree of rational ignorance on the users’ part: The typical consumer simply doesn’t care. In other words, she is, in Nobel-winning economist Herbert Simon’s term, “boundedly rational.” That isn’t deception; it’s a feature of life without which we would be overwhelmed by “information overload” and unable to operate. We have every incentive and ability to know the terms we care most about, and to ignore the ones about which we care little.

Relatedly, P&H also fail to understand the relationship between price and ownership. A digital song that is purchased from Amazon for $.99 comes with a set of potentially valuable attributes. For example:

  • It may be purchased on its own, without the other contents of an album;
  • It never degrades in quality, and it’s extremely difficult to misplace;
  • It may be purchased from one’s living room and be instantaneously available;
  • It can be easily copied or transferred onto multiple devices; and
  • It can be stored in Amazon’s cloud without taking up any of the consumer’s physical memory resources.

In many ways that matter to consumers, digital copies are superior to analog or physical ones. And yet, compared to physical media, on a per-song basis (assuming one could even purchase a physical copy of a single song without purchasing an entire album), $.99 may represent a considerable discount. Moreover, in 1982, when CDs were first released, they cost an average of $15. In 2017 dollars, that would be $38. Yet today most digital album downloads can be found for $10 or less.
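For readers who want to check that conversion, it is a standard CPI adjustment. The minimal sketch below uses approximate U.S. CPI-U annual averages (assumed values of mine, not figures from the article):

```python
# Back-of-the-envelope CPI adjustment for the CD price comparison above.
# The index values are approximate assumptions, not figures cited in the post.
CPI_1982 = 96.5    # U.S. CPI-U annual average for 1982 (approx.)
CPI_2017 = 245.1   # U.S. CPI-U annual average for 2017 (approx.)

cd_price_1982 = 15.00
cd_price_2017_dollars = cd_price_1982 * (CPI_2017 / CPI_1982)

print(f"${cd_price_1982:.2f} in 1982 is roughly ${cd_price_2017_dollars:.2f} in 2017 dollars")
# Prints roughly $38, consistent with the figure quoted above.
```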

Of course, songs purchased on CD or vinyl offer other benefits that a digital copy can’t provide. But the main thing — the ability to listen to the music — is approximately equal, and yet the digital copy offers greater convenience at (often) lower price. It is impossible to conclude that a consumer is duped by such a purchase, even if it doesn’t come with the ability to resell the song.

In fact, given the price-to-value ratio, it is perhaps reasonable to think that consumers know full well (or at least suspect) that there might be some corresponding limitations on use — the inability to resell, for example — that would explain the discount. For some people, those limitations might matter, and those people, presumably, figure out whether such limitations are present before buying a digital album or song. For everyone else, however, the ability to buy a digital song for $.99 — including all of the benefits of digital ownership, but minus the ability to resell — is a good deal, just as it is worth it to a home buyer to purchase a house, regardless of whether it is subject to various easements.

Consumers are, in fact, familiar with “buying” property with all sorts of restrictions

The inability to resell digital goods looms inordinately large for P&H: According to them, by virtue of the fact that digital copies may not be resold, “ownership” is no longer an appropriate characterization of the relationship between the consumer and her digital copy. P&H believe that digital copies of works are sufficiently similar to analog versions that traditional doctrines of exhaustion (which would permit a lawful owner of a copy of a work to dispose of that copy as he or she deems appropriate) should apply equally to digital copies, and thus that the inability to alienate the copy as the consumer wants means that there is no ownership interest per se.

But, as discussed above, even ownership of a physical copy doesn’t convey to the purchaser the right to make or allow any use of that copy. So why should we treat the ability to alienate a copy as the determining factor in whether it is appropriate to refer to the acquisition as a purchase? P&H arrive at this conclusion only through the illogical assertion that

Consumers operate in the marketplace based on their prior experience. We suggest that consumers’ “default” behavior is based on the experiences of buying physical media, and the assumptions from that context have carried over into the digital domain.

P&H want us to believe that consumers can’t distinguish between the physical and virtual worlds, and that their expectations about how they may use media don’t differ between these realms. But consumers do understand (to the extent that they care) that they are buying a different product, with different attributes. Does anyone try to play a vinyl record on his or her phone? There are perceived advantages and disadvantages to different kinds of media purchases. The ability to resell is only one of these — and for many (most?) consumers not likely the most important.

And, furthermore, the notion that consumers better understood their rights — and the limitations on ownership — in the physical world and that they carried these well-informed expectations into the digital realm is fantasy. Are we to believe that the consumers of yore understood that when they bought a physical record they could sell it, but not rent it out? That if they played that record in a public place they would need to pay performance royalties to the songwriter and publisher? Not likely.

Simply put, there is a wide variety of goods and services that we clearly buy, but that have all kinds of attributes that do not fit P&H’s crabbed definition of ownership. For example:

  • We buy tickets to events and membership in clubs (which, depending upon club rules, may not be alienated, and which always lapse for non-payment).
  • We buy houses notwithstanding the fact that in most cases all we own is the right to inhabit the premises for as long as we pay the bank (which actually retains more of the incidents of “ownership”).
  • In fact, we buy real property encumbered by a series of restrictive covenants: Depending upon where we live, we may not be able to build above a certain height, we may not paint the house certain colors, we may not be able to leave certain objects in the driveway, and we may not be able to resell without approval of a board.

We may or may not know (or care) about all of the restrictions on our use of such property. But surely we may accurately say that we bought the property and that we “own” it, nonetheless.

The reality is that we are comfortable with the notion of buying any number of limited property interests — including the purchasing of a license — regardless of the contours of the purchase agreement. The fact that some ownership interests may properly be understood as licenses rather than as some form of exclusive and permanent dominion doesn’t suggest that a consumer is not involved in a transaction properly characterized as a sale, or that a consumer is somehow deceived when the transaction is characterized as a sale — and P&H are surely aware of this.

Conclusion: The real issue for P&H is “digital first sale,” not deception

At root, P&H are not truly concerned about consumer deception; they are concerned about what they view as unreasonable constraints on the “rights” of consumers imposed by copyright law in the digital realm. Resale looms so large in their analysis not because consumers care about it (or are deceived about it), but because the real object of their enmity is the lack of a “digital first sale doctrine” that exactly mirrors the law regarding physical goods.

But Congress has already determined that there are sufficient distinctions between ownership of digital copies and ownership of analog ones to justify treating them differently, notwithstanding ownership of the particular copy. And for good reason: Trade in “used” digital copies is not a secondary market. Such copies are identical to those traded in the primary market and would compete directly with “pristine” digital copies. It makes perfect sense to treat ownership differently in these cases — and still to say that both digital and analog copies are “bought” and “owned.”

P&H’s deep-seated opposition to current law colors and infects their analysis — and, arguably, their failure to be upfront about it is the real deception. When one starts an analysis with an already-identified conclusion, the path from hypothesis to result is unlikely to withstand scrutiny, and that is certainly the case here.

Thanks to Truth on the Market for the opportunity to guest blog, and to ICLE for inviting me to join as a Senior Scholar! I’m honoured to be involved with both of these august organizations.

In Brussels, the talk of the town is that the European Commission (“Commission”) is casting a new eye on the old antitrust conjecture that prophesies a negative relationship between industry concentration and innovation. This issue arises in the context of the review of several mega-mergers in the pharmaceutical and AgTech (i.e., seed genomics, biochemicals, “precision farming,” etc.) industries.

The antitrust press reports that the Commission has shown signs of interest in the introduction of a new theory of harm: the Significant Impediment to Industry Innovation (“SIII”) theory, which would permit the remediation of mergers on the sole ground that a transaction significantly impedes innovation incentives at the industry level. In a recent ICLE White Paper, I discuss the desirability and feasibility of the introduction of this doctrine for the assessment of mergers in R&D-driven industries.

The introduction of SIII analysis in EU merger policy would no doubt be a sea change, as compared to past decisional practice. In previous cases, the Commission has paid heed to the effects of a merger on incentives to innovate, but the assessment has been limited to the effect on the innovation incentives of the merging parties in relation to specific current or future products. The application of the SIII theory, however, would entail an assessment of a possible reduction of innovation (i) in a given industry as a whole; and (ii) not in relation to specific product applications.

The SIII theory would also be distinct from the “innovation markets” framework occasionally applied in past US merger policy and now marginalized. This framework considers the effect of a merger on separate upstream “innovation markets,” i.e., on the R&D process itself, not directly linked to a downstream current or future product market. Like SIII, innovation markets analysis is interesting in that the identification of separate upstream innovation markets implicitly recognises that the players active in those markets are not necessarily the same as those that compete with the merging parties in downstream product markets.

SIII is way more intrusive, however, because R&D incentives are considered in the abstract, without further obligation on the agency to identify structured R&D channels, pipeline products, and research trajectories.

With this in mind, any case for an expansion of the Commission’s power to intervene against mergers in certain R&D-driven industries should rest on sound theoretical and empirical foundations. Yet, despite the efforts of the most celebrated Nobel Prize-winning economists of the past decades, the economics underpinning the relationship between industry concentration and innovation incentives remains an unfathomable mystery. As Geoffrey Manne and Joshua Wright have summarized in detail, the existing literature is indeterminate, at best. As they note, quoting Rich Gilbert,

[a] careful examination of the empirical record concludes that the existing body of theoretical and empirical literature on the relationship between competition and innovation “fails to provide general support for the Schumpeterian hypothesis that monopoly promotes either investment in research and development or the output of innovation” and that “the theoretical and empirical evidence also does not support a strong conclusion that competition is uniformly a stimulus to innovation.”

Available theoretical research also fails to establish a directional relationship between mergers and innovation incentives. True, soundbites from antitrust conferences suggest that the Commission’s Chief Economist Team has developed a deterministic model that could be brought to bear on novel merger policy initiatives. Yet, given the height of the intellectual Everest under discussion, we remain dubious (yet curious).

And, as noted, the available empirical data appear inconclusive. Consider a relatively concentrated industry like the seed and agrochemical sector. Between 2009 and 2016, all of the big six agrochemical firms increased their total R&D expenditure, and their R&D intensity either increased or remained stable. Note that this has taken place in spite of (i) a significant increase in concentration among the largest firms in the industry; (ii) a dramatic drop in global agricultural commodity prices (which has adversely affected several agrochemical businesses); and (iii) the presence of strong appropriability devices, namely patent rights.

This brief industry example (that I discuss more thoroughly in the paper) calls our attention to a more general policy point: prior to poking and prodding with novel theories of harm, one would expect an impartial antitrust examiner to undertake empirical groundwork, and screen initial intuitions of adverse effects of mergers on innovation through the lenses of observable industry characteristics.

At a more operational level, SIII also illustrates the difficulties of using indirect proxies of innovation incentives such as R&D figures and patent statistics as a preliminary screening tool for the assessment of the effects of the merger. In my paper, I show how R&D intensity can increase or decrease for a variety of reasons that do not necessarily correlate with an increase or decrease in the intensity of innovation. Similarly, I discuss why patent counts and patent citations are very crude indicators of innovation incentives. Over-reliance on patent counts and citations can paint a misleading picture of the parties’ strength as innovators in terms of market impact: not all patents are translated into products that are commercialised or are equal in terms of commercial value.
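To make the point concrete, consider a toy calculation (hypothetical numbers of my own, not drawn from the paper): because R&D intensity is conventionally measured as R&D expenditure divided by revenue, the ratio can rise simply because revenue falls, with no change whatsoever in innovation effort.

```python
# Hypothetical illustration (my own numbers, not from the paper): R&D intensity,
# measured as R&D expenditure / revenue, can rise even when the R&D budget is flat,
# simply because revenue falls (e.g., after a slump in commodity prices).

def rd_intensity(rd_spend: float, revenue: float) -> float:
    return rd_spend / revenue

rd_spend = 1.0          # constant R&D budget, in $bn
revenue_before = 10.0   # revenue before a commodity-price slump, in $bn
revenue_after = 8.0     # revenue after the slump, in $bn

print(f"R&D intensity before: {rd_intensity(rd_spend, revenue_before):.1%}")  # 10.0%
print(f"R&D intensity after:  {rd_intensity(rd_spend, revenue_after):.1%}")   # 12.5%
# The measured "intensity" rises by a quarter without any change in innovation effort.
```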

As a result (and unlike the SIII or innovation markets approaches), the use of these proxies as a measure of innovative strength should be limited to instances where the patent clearly has an actual or potential commercial application in those markets that are being assessed. Such an approach would ensure that patents with little or no impact on innovation competition in a market are excluded from consideration. Moreover, and at the risk of stating the obvious, patents are temporal rights. Incentives to innovate may be stronger as a protected technological application approaches patent expiry. Patent counts and citations, however, do not discount the maturity of patents and, in particular, do not say much about whether the patent is far from or close to its expiry date.

In order to overcome the limitations of crude quantitative proxies, it is in my view imperative to complement an empirical analysis with industry-specific qualitative research. Central to the assessment of the qualitative dimension of innovation competition is an understanding of the key drivers of innovation in the investigated industry. In the agrochemical industry, industry structure and market competition may only be one amongst many other factors that promote innovation. Economic models built upon Arrow’s replacement effect theory – namely that a pre-invention monopoly acts as a strong disincentive to further innovation – fail to capture that successful agrochemical products create new technology frontiers.

Thus, for example, progress in crop protection products – and, in particular, in pest- and insect-resistant crops – had fuelled research investments in pollinator protection technology. Moreover, the impact of wider industry and regulatory developments on incentives to innovate and market structure should not be ignored (for example, falling crop commodity prices or regulatory restrictions on the use of certain products). Last, antitrust agencies are well placed to understand that beyond R&D and patent statistics, there is also a degree of qualitative competition in the innovation strategies that are pursued by agrochemical players.

My paper closes with a word of caution. No compelling case has been advanced to support a departure from established merger control practice with the introduction of SIII in pharmaceutical and agrochemical mergers. The current EU merger control framework, which enables the Commission to conduct a prospective analysis of the parties’ R&D incentives in current or future product markets, seems to provide an appropriate safeguard against anticompetitive transactions.

In his 1974 Nobel Prize Lecture, Hayek criticized the “scientific error” of much economic research, which assumes that intangible, correlational laws govern observable and measurable phenomena. Hayek warned that economics is like biology: both fields focus on “structures of essential complexity” which are recalcitrant to stylized modeling. Interestingly, competition was one of the examples expressly mentioned by Hayek in his lecture:

[T]he social sciences, like much of biology but unlike most fields of the physical sciences, have to deal with structures of essential complexity, i.e. with structures whose characteristic properties can be exhibited only by models made up of relatively large numbers of variables. Competition, for instance, is a process which will produce certain results only if it proceeds among a fairly large number of acting persons.

What remains from this lecture is a vibrant call for humility in policy making, at a time when some constituencies within antitrust agencies show signs of interest in revisiting the relationship between concentration and innovation. And if Hayek’s convoluted writing style is not the most accessible, the title says it all: “The Pretense of Knowledge.”

My colleague, Neil Turkewitz, begins his fine post for Fair Use Week (read: crashing Fair Use Week) by noting that

Many of the organizations celebrating fair use would have you believe, because it suits their analysis, that copyright protection and the public interest are diametrically opposed. This is merely a rhetorical device, and is a complete fallacy.

If I weren’t a recovering law professor, I would just end there: that about sums it up, and “the rest is commentary,” as they say. Alas….  

All else equal, creators would like as many people to license their works as possible; there’s no inherent incompatibility between “incentives and access” (which is just another version of the fallacious “copyright protection versus the public interest” trope). Everybody wants as much access as possible. Sure, consumers want to pay as little as possible for it, and creators want to be paid as much as possible. That’s a conflict, and at the margin it can seem like a conflict between access and incentives. But it’s not a fundamental, philosophical, and irreconcilable difference — it’s the last 15 minutes of negotiation before the contract is signed.

Reframing what amounts to a fundamental agreement into a pitched battle for society’s soul is indeed a purely rhetorical device — and a mendacious one, at that.

The devil is in the details, of course, and there are still disputes on the margin, as I said. But it helps to know what they’re really about, and why they are so far from the fanciful debates the copyright scolds wish we were having.

First, price is, in fact, a big deal. For the creative industries it can be the difference between, say, making one movie or a hundred, and for artists it can be the difference between earning a livelihood writing songs and packing it in for a desk job.

But despite their occasional lip service to the existence of trade-offs, many “fair-users” see price — i.e., licensing agreements — as nothing less than a threat to social welfare. After all, the logic runs, if copies can be made at (essentially) zero marginal cost, a positive price is just extortion. They say, “more access!,” but they don’t mean “more access at an agreed-upon price”; they mean “zero-price access, and nothing less.” These aren’t the same thing, and when “fair use” is a stand-in for “zero-price use,” fair-users are moving the goalposts — and being disingenuous about it.

The other, related problem, of course, is piracy. Sometimes rightsholders’ objections to the expansion of fair use are about limiting access. But typically that’s true only where fine-tuned contracting isn’t feasible, and where the only realistic choice they’re given is between no access for some people, and pervasive (and often unstoppable) piracy. There are any number of instances where rightsholders have no realistic prospect of efficiently negotiating licensing terms and receiving compensation, and would welcome greater access to their works even without a license — as long as the result isn’t also (or only) excessive piracy. The key thing is that, in such cases, opposition to fair use isn’t opposition to reasonable access, even free access. It’s opposition to piracy.

Time-shifting with VCRs and space-shifting with portable mp3 players (to take two contentious historical examples) fall into this category (even if they are held up — as they often are — by the fair-users as totems of their fanciful battle). At least at the time of the Sony and Diamond Rio cases, when there was really no feasible way to enforce licenses or charge differential prices for such uses, the choice rightsholders faced was effectively all-or-nothing, and they had to pick one. I’m pretty sure, all else equal, they would have supported such uses, even without licenses and differential compensation — except that the piracy risk was so significant that it swamped the likely benefits, tilting the scale toward “nothing” instead of “all.”

Again, the reality is that creators and rightsholders were confronted with a choice between two imperfect options; neither was likely “right,” and they went with the lesser evil. But one can’t infer from that constrained decision an inherent antipathy to fair use. Sadly, such decisions have to be made in the real world, not law reviews and EFF blog posts. As economists Benjamin Klein, Andres Lerner and Kevin Murphy put it regarding the Diamond Rio case:

[R]ather than representing an attempt by copyright-holders to increase their profits by controlling legally established “fair uses,”… the obvious record-company motivation is to reduce the illegal piracy that is encouraged by the technology. Eliminating a “fair use” [more accurately, “opposing an expansion of fair use” -ed.] is not a benefit to the record companies; it is an unfortunate cost they have to bear to solve the much larger problem of infringing uses. The record companies face competitive pressure to avoid these costs by developing technologies that distinguish infringing from non-infringing copying.

This last point is important, too. Fair-users don’t like technological protection measures, either, even if they actually facilitate licensing and broader access to copyrighted content. But that really just helps to reveal the poverty of their position. They should welcome technology that expands access, even if it also means that it enables rightsholders to fine-tune their licenses and charge a positive price. Put differently: Why do they hate Spotify!?

I’m just hazarding a guess here, but I suspect that the antipathy to technological solutions goes well beyond the short-term limits on some current use of content that copyright minimalists think shouldn’t be limited. If technology, instead of fair use, is truly determinative of the extent of zero-price access, then their ability to seriously influence (read: rein in) the scope of copyright is diminished. Fair use is amorphous. They can bring cases, they can lobby Congress, they can pen strongly worded blog posts, and they can stage protests. But they can’t do much to stop technological progress. Of course, technology does at least as much to limit the enforceability of licenses and create new situations where zero-price access is the norm. But still, R&D is a lot harder than PR.

What’s more, if technology were truly determinative, it would frequently mean that former fair uses could become infringing at some point (or vice versa, of course). Frankly, there’s no reason for time-shifting of TV content to continue to be considered a fair use today. We now have the technology to both enable time shifting and to efficiently license content for the purpose, charge a differential price for it, and enforce the terms. In fact, all of that is so pervasive today that most users do pay for time-shifting technologies, under license terms that presumably define the scope of their right to do so; they just may not have read the contract. Where time-shifting as a fair use rears its ugly head today is in debates over new, infringing technology where, in truth, the fair use argument is really a malleable pretext to advocate for a restriction on the scope of copyright (e.g., Aereo).

In any case, as the success of business models like Spotify and Netflix (to say nothing of Comcast’s X1 interface and new Xfinity Stream app) attests, technology has enabled users to legitimately engage in what once seemed conceivable only under fair use. Yes, at a price — one that millions of people are willing to pay. It is surely the case that rightsholders’ licensing of technologies like these has made content more accessible, to more people, and with higher-quality service, than a regime of expansive unlicensed use could ever have done.

At the same time, let’s not forget that, often, even when they could efficiently distribute content only at a positive price, creators offer up scads of content for free, in myriad ways. Sure, the objective is to maximize revenue overall by increasing exposure, price discriminating, or enhancing the quality of paid-for content in some way — but so what? More content is more content, and easier access is easier access. All of that uncompensated distribution isn’t rightsholders nodding toward the copyright scolds’ arguments; it’s perfectly consistent with licensing. Obviously, the vast majority of music, for example, is listened to subject to license agreements, not because of fair use exceptions or rightsholders’ largesse.

For the vast majority of creators, users and uses, licensed access works, and gets us massive amounts of content and near ubiquitous access. The fair use disputes we do have aren’t really about ensuring broad access; that’s already happening. Rather, those disputes are either niggling over the relatively few ambiguous margins on the one hand, or, on the other, fighting the fair-users’ manufactured, existential fight over whether copyright exceptions will subsume the rule. The former is to be expected: Copyright boundaries will always be imperfect, and courts will always be asked to make the close calls. The latter, however, is simply a drain on resources that could be used to create more content, improve its quality, distribute it more broadly, or lower prices.

Copyright law has always been, and always will be, operating in the shadow of technology — technology both for distribution and novel uses, as well as for pirating content. The irony is that, as digital distribution expands, it has dramatically increased the risk of piracy, even as copyright minimalists argue that the low costs of digital access justify a more expansive interpretation of fair use — which would, in turn, further increase the risk of piracy.

Creators’ opposition to this expansion has nothing to do with opposition to broad access to content, and everything to do with ensuring that piracy doesn’t overwhelm their ability to get paid, and to produce content in the first place.

Even were fair use to somehow disappear tomorrow, there would be more and higher-quality content, available to more people in more places, than ever before. But creators have no interest in seeing fair use disappear. What they do have is an interest in licensing their content as broadly as possible when doing so is efficient, and in minimizing piracy. Sometimes legitimate fair-use questions get caught in the middle. We could and should have a reasonable debate over the precise contours of fair use in such cases. But the false dichotomy of creators against users makes that extremely difficult. Until the disingenuous rhetoric is clawed back, we’re stuck with needless fights that don’t benefit either users or creators — although they do benefit the policy scolds, academics, wonks and businesses that foment them.

Following is the second in a series of posts on my forthcoming book, How to Regulate: A Guide for Policy Makers (Cambridge Univ. Press 2017).  The initial post is here.

As I mentioned in my first post, How to Regulate examines the market failures (and other private ordering defects) that have traditionally been invoked as grounds for government regulation.  For each such defect, the book details the adverse “symptoms” produced, the underlying “disease” (i.e., why those symptoms emerge), the range of available “remedies,” and the “side effects” each remedy tends to generate.  The first private ordering defect the book addresses is the externality.

I’ll never forget my introduction to the concept of externalities.  P.J. Hill, my much-beloved economics professor at Wheaton College, sauntered into the classroom eating a giant, juicy apple.  As he lectured, he meandered through the rows of seats, continuing to chomp on that enormous piece of fruit.  Every time he took a bite, juice droplets and bits of apple fell onto students’ desks.  Speaking with his mouth full, he propelled fruit flesh onto students’ class notes.  It was disgusting.

It was also quite effective.  Professor Hill was making the point (vividly!) that some activities impose significant effects on bystanders.  We call those effects “externalities,” he explained, because they are experienced by people who are outside the process that creates them.  When the spillover effects are adverse—costs—we call them “negative” externalities.  “Positive” externalities are spillovers of benefits.  Air pollution is a classic example of a negative externality.  Landscaping one’s yard, an activity that benefits one’s neighbors, generates a positive externality.

An obvious adverse effect (“symptom”) of externalities is unfairness.  It’s not fair for a factory owner to capture the benefits of its production while foisting some of the cost onto others.  Nor is it fair for a homeowner’s neighbors to enjoy her spectacular flower beds without contributing to their creation or maintenance.

A graver symptom of externalities is “allocative inefficiency,” a failure to channel productive resources toward the uses that will wring the greatest possible value from them.  When an activity involves negative externalities, people tend to do too much of it—i.e., to devote an inefficiently high level of productive resources to the activity.  That’s because a person deciding how much of the conduct at issue to engage in accounts for all of his conduct’s benefits, which ultimately inure to him, but only a portion of his conduct’s costs, some of which are borne by others.  Conversely, when an activity involves positive externalities, people tend to do too little of it.  In that case, they must bear all of the cost of their conduct but can capture only a portion of the benefit it produces.
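To make that intuition concrete, here is a minimal numerical sketch of my own (a stylized example, not one taken from the book): a producer who ignores a per-unit spillover cost will choose a higher output level than the one that maximizes net social value.

```python
# Stylized negative-externality example (my own illustration, not from the book).
# Assume the marginal benefit of output q is MB(q) = 100 - q, the producer's
# private marginal cost is a constant 20, and each unit imposes an additional
# external cost of 30 on bystanders.

def optimal_quantity(marginal_cost: float) -> float:
    # With MB(q) = 100 - q, the optimum sets MB(q) equal to the relevant marginal cost.
    return 100 - marginal_cost

private_mc = 20                       # cost the producer actually bears per unit
external_mc = 30                      # spillover cost borne by bystanders per unit
social_mc = private_mc + external_mc  # full cost to society per unit

print("Privately chosen output:", optimal_quantity(private_mc))  # 80
print("Socially optimal output:", optimal_quantity(social_mc))   # 50
# Because the producer ignores the spillover, output is 80 rather than 50:
# the "too much of it" that the chapter describes.
```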

Because most government interventions addressing externalities have been concerned with negative externalities (and because How to Regulate includes a separate chapter on public goods, which entail positive externalities), the book’s externalities chapter focuses on potential remedies for cost spillovers.  There are three main options, which are discussed below the fold.

So I’ve just finished writing a book (hence my long hiatus from Truth on the Market).  Now that the draft is out of my hands and with the publisher (Cambridge University Press), I figured it’s a good time to rejoin my colleagues here at TOTM.  To get back into the swing of things, I’m planning to produce a series of posts describing my new book, which may be of interest to a number of TOTM readers.  I’ll get things started today with a brief overview of the project.

The book is titled How to Regulate: A Guide for Policy Makers.  A topic of that enormity could obviously fill many volumes.  I sought to address the matter in a single, non-technical book because I think law schools often do a poor job teaching their students, many of whom are future regulators, the substance of sound regulation.  Law schools regularly teach administrative law, the procedures that must be followed to ensure that rules have the force of law.  Rarely, however, do law schools teach students how to craft the substance of a policy to address a new perceived problem (e.g., What tools are available? What are the pros and cons of each?).

Economists study that matter, of course.  But economists are often naïve about the difficulty of transforming their textbook models into concrete rules that can be easily administered by business planners and adjudicators.  Many economists also pay little attention to the high information requirements of the policies they propose (i.e., the Hayekian knowledge problem) and the susceptibility of those policies to political manipulation by well-organized interest groups (i.e., public choice concerns).

How to Regulate endeavors to provide both economic training to lawyers and law students and a sense of the “limits of law” to the economists and other policy wonks who tend to be involved in crafting regulations.  Below the fold, I’ll give a brief overview of the book.  In later posts, I’ll describe some of the book’s specific chapters.

I just posted a new ICLE white paper, co-authored with former ICLE Associate Director, Ben Sperry:

When Past Is Not Prologue: The Weakness of the Economic Evidence Against Health Insurance Mergers.

Yesterday the hearing in the DOJ’s challenge to stop the Aetna-Humana merger got underway, and last week phase 1 of the Cigna-Anthem merger trial came to a close.

The DOJ’s challenge in both cases is fundamentally rooted in a timeworn structural analysis: More consolidation in the market (where “the market” is a hotly-contested issue, of course) means less competition and higher premiums for consumers.

Following the traditional structural playbook, the DOJ argues that the Aetna-Humana merger (to pick one) would result in presumptively anticompetitive levels of concentration, and that neither new entry nor divestiture would suffice to introduce sufficient competition. It does not (in its pretrial brief, at least) consider other market dynamics (including especially the complex and evolving regulatory environment) that would constrain the firm’s ability to charge supracompetitive prices.

Aetna & Humana, for their part, contend that things are a bit more complicated than the government suggests, that the government defines the relevant market incorrectly, and that

the evidence will show that there is no correlation between the number of [Medicare Advantage organizations] in a county (or their shares) and Medicare Advantage pricing—a fundamental fact that the Government’s theories of harm cannot overcome.

The trial will, of course, feature expert economic evidence from both sides. But until we see that evidence, or read the inevitable papers derived from it, we are stuck evaluating the basic outlines of the economic arguments based on the existing literature.

A host of antitrust commentators, politicians, and other interested parties have determined that the literature condemns the mergers, based largely on a small set of papers purporting to demonstrate that an increase of premiums, without corresponding benefit, inexorably follows health insurance “consolidation.” In fact, virtually all of these critics base their claims on a 2012 case study of a 1999 merger (between Aetna and Prudential) by economists Leemore Dafny, Mark Duggan, and Subramaniam Ramanarayanan, Paying a Premium on Your Premium? Consolidation in the U.S. Health Insurance Industry, as well as associated testimony by Prof. Dafny, along with a small number of other papers by her (and a couple others).

Our paper challenges these claims. As we summarize:

This white paper counsels extreme caution in the use of past statistical studies of the purported effects of health insurance company mergers to infer that today’s proposed mergers—between Aetna/Humana and Anthem/Cigna—will likely have similar effects. Focusing on one influential study—Paying a Premium on Your Premium…—as a jumping off point, we highlight some of the many reasons that past is not prologue.

In short: extrapolated, long-term, cumulative, average effects drawn from 17-year-old data may grab headlines, but they really don’t tell us much of anything about the likely effects of a particular merger today, or about the effects of increased concentration in any particular product or geographic market.

While our analysis doesn’t necessarily undermine the paper’s limited, historical conclusions, it does counsel extreme caution for inferring the study’s applicability to today’s proposed mergers.

By way of reference, Dafny, et al. found average premium price increases from the 1999 Aetna/Prudential merger of only 0.25 percent per year for two years following the merger in the geographic markets they studied. “Health Insurance Mergers May Lead to 0.25 Percent Price Increases!” isn’t quite as compelling a claim as what critics have been saying, but it’s arguably more accurate (and more relevant) than the 7 percent price increase purportedly based on the paper that merger critics like to throw around.

Moreover, different markets and a changed regulatory environment alone aren’t the only things suggesting that past is not prologue. When we delve into the paper more closely we find even more significant limitations on the paper’s support for the claims made in its name, and its relevance to the current proposed mergers.

The full paper is available here.

Today ICLE released a white paper entitled, A critical assessment of the latest charge of Google’s anticompetitive bias from Yelp and Tim Wu.

The paper is a comprehensive response to a study by Michael Luca, Timothy Wu, Sebastian Couvidat, Daniel Frank, & William Seltzer, entitled, Is Google degrading search? Consumer harm from Universal Search.

The Wu, et al. paper will be one of the main topics of discussion at today’s Capitol Forum and George Washington Institute of Public Policy event on Dominant Platforms Under the Microscope: Policy Approaches in the US and EU, at which I will be speaking — along with a host of luminaries including, inter alia, Josh Wright, Jonathan Kanter, Allen Grunes, Catherine Tucker, and Michael Luca — one of the authors of the Universal Search study.

Follow the link above to register — the event starts at noon today at the National Press Club.

Meanwhile, here’s a brief description of our paper:

Late last year, Tim Wu of Columbia Law School (and now the White House Office of Management and Budget), Michael Luca of Harvard Business School (and a consultant for Yelp), and a group of Yelp data scientists released a study claiming that Google has been purposefully degrading search results from its more-specialized competitors in the area of local search. The authors’ claim is that Google is leveraging its dominant position in general search to thwart competition from specialized search engines by favoring its own, less-popular, less-relevant results over those of its competitors:

To improve the popularity of its specialized search features, Google has used the power of its dominant general search engine. The primary means for doing so is what is called the “universal search” or the “OneBox.”

This is not a new claim, and researchers have been attempting (and failing) to prove Google’s “bias” for some time. Likewise, these critics have drawn consistent policy conclusions from their claims, asserting that antitrust violations lie at the heart of the perceived bias. But the studies are systematically marred by questionable methodology and bad economics.

This latest study by Tim Wu, along with a cadre of researchers employed by Yelp (one of Google’s competitors and one of its chief antitrust provocateurs), fares no better, employing slightly different but equally questionable methodology, bad economics, and a smattering of new, but weak, social science. (For a thorough criticism of the inherent weaknesses of Wu et al.’s basic social science methodology, see Miguel de la Mano, Stephen Lewis, and Andrew Leyden, Focus on the Evidence: A Brief Rebuttal of Wu, Luca, et al (2016), available here).

The basic thesis of the study is that Google purposefully degrades its local searches (e.g., for restaurants, hotels, services, etc.) to the detriment of its specialized search competitors, local businesses, consumers, and even Google’s bottom line — and that this is an actionable antitrust violation.

But in fact the study shows nothing of the kind. Instead, the study is marred by methodological problems that, in the first instance, make it impossible to draw any reliable conclusions. Nor does the study show that Google’s conduct creates any antitrust-relevant problems. Rather, the construction of the study and the analysis of its results reflect a superficial and inherently biased conception of consumer welfare that completely undermines the study’s purported legal and economic conclusions.

Read the whole thing here.

Since the European Commission (EC) announced its first inquiry into Google’s business practices in 2010, the company has been the subject of lengthy investigations by courts and competition agencies around the globe. Regulatory authorities in the United States, France, the United Kingdom, Canada, Brazil, and South Korea have all opened and rejected similar antitrust claims.

And yet the EC marches on, bolstered by Google’s myriad competitors, who continue to agitate for further investigations and enforcement actions, even as we — companies and consumers alike — enjoy the benefits of an increasingly dynamic online marketplace.

Indeed, while the EC has spent more than half a decade casting about for some plausible antitrust claim, the online economy has thundered ahead. Since 2010, Facebook has tripled its active users and multiplied its revenue ninefold; the number of apps available in the Amazon app store has grown from fewer than 4,000 to over 400,000 today; and there are almost 1.5 billion more Internet users globally than there were in 2010. And consumers are increasingly using new and different ways to search for information: Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Facebook’s Messenger are a few of the many new innovations challenging traditional search engines.

Advertisers have adapted to this evolution, moving increasingly online, and from search to display ads as mobile adoption has skyrocketed. Social networks like Twitter and Snapchat have come into their own, competing for the same (and ever-increasing) advertising dollars. For marketers, advertising on social networks is now just as important as advertising in search. No wonder e-commerce sales have more than doubled, to almost $2 trillion worldwide; for the first time, consumers purchased more online than in stores this past year.

To paraphrase Louis C.K.: Everything is amazing — and no one at the European Commission is happy.

The EC’s market definition is fatally flawed

Like its previous claims, the Commission’s most recent charges are rooted in the assertion that Google abuses its alleged dominance in “general search” advertising to unfairly benefit itself and to monopolize other markets. But European regulators continue to miss the critical paradigm shift among online advertisers and consumers that has upended this stale view of competition on the Internet. The reality is that Google’s competition may not, and need not, look exactly like Google itself, but it is competition nonetheless. And it’s happening in spades.

The key to understanding why the European Commission’s case is fundamentally flawed lies in an examination of how it defines the relevant market. Through a series of economically and factually unjustified assumptions, the Commission defines search as a distinct market in which Google faces limited competition and enjoys an 80% market share. In other words, for the EC, “general search” apparently means only nominal search providers like Google and Bing; it doesn’t mean companies like Amazon, Facebook and Twitter — Google’s biggest competitors.  

But the reality is that “general search” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google happens to use to match consumers and advertisers doesn’t reflect the substitutability of other mechanisms that do the same thing — merely because these mechanisms aren’t called “search.”

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive.

Consumers today are increasingly using platforms like Amazon and Facebook as substitutes for the searches they might have run on Google or Bing. “Closed” platforms like the iTunes store and innumerable apps handle copious search traffic but also don’t figure in the EC’s market calculations. And so-called “dark social” interactions like email, text messages, and IMs drive huge amounts of some of the most valuable traffic on the Internet. This, in turn, has led to a competitive scramble to roll out completely new technologies like chatbots to meet consumers’ informational (and merchants’ advertising) needs.

Properly construed, Google’s market position is precarious

Like Facebook and Twitter (and practically every other Internet platform), Google relies on advertising as its primary source of revenue. Instead of charging for fancy hardware or offering services to users for a fee, Google offers search, the Android operating system, and a near-endless array of other valuable services for free to users. The company’s very existence relies on attracting Internet users and consumers to its properties in order to effectively connect them with advertisers.

But being an online matchmaker is a difficult and competitive enterprise. Among other things, the ability to generate revenue turns crucially on the quality of the match: All else equal, an advertiser interested in selling widgets will pay more for an ad viewed by a user who can be reliably identified as being interested in buying widgets.

Google’s primary mechanism for attracting users to match with advertisers — general search — is substantially about information, not commerce, and the distinction between product and informational searches is crucially important to understanding Google’s market and the surprisingly limited and tenuous market power it possesses.

General informational queries aren’t nearly as valuable to advertisers: Significantly, only about 30 percent of Google’s searches even trigger any advertising at all. Meanwhile, as of 2012, one-third of product searches started on Amazon while only 13% started on a general search engine.

As economist Hal Singer aptly noted in 2012,

[the data] suggest that Google lacks market power in a critical segment of search — namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”

While Google Search clearly offers substantial value to advertisers, its ability to continue to do so is precarious when confronted with the diverse array of competitors that, like Facebook, offer a level of granularity in audience targeting that general search can’t match, or that, like Amazon, systematically offer up the most valuable searchers.

In order to compete in this market — one properly defined to include actual competitors — Google has had to constantly innovate to maintain its position. Unlike a complacent monopolist, it has evolved to meet changing consumer demand, shifting technology and inventive competitors. Thus, Google’s search algorithm has changed substantially over the years to make more effective use of the information available to ensure relevance; search results have evolved to give consumers answers to queries rather than just links, and to provide more-direct access to products and services; and, as users have shifted more and more of their time and attention to mobile devices, search has incorporated more-localized results.

Competitors want a free lunch

Critics complain, nevertheless, that these developments have made it harder, in one way or another, for rivals to compete. And the EC has provided a willing ear. According to Commissioner Vestager last week:

Google has come up with many innovative products that have made a difference to our lives. But that doesn’t give Google the right to deny other companies the chance to compete and innovate. Today, we have further strengthened our case that Google has unduly favoured its own comparison shopping service in its general search result pages…. (Emphasis added).

Implicit in this statement is the remarkable assertion that by favoring its own comparison shopping services, Google “den[ies] other companies the chance to compete and innovate.” Even assuming Google does “favor” its own results, this is an astounding claim.

First, it is not a violation of competition law simply to treat competitors’ offerings differently than one’s own, even for a dominant firm. Instead, conduct must actually exclude competitors from the market, without offering countervailing advantages to consumers. But Google’s conduct is not exclusionary, and there are many benefits to consumers.

As it has from the start of its investigations of Google, the EC begins with a flawed assumption: that Google’s competitors both require, and may be entitled to, unfettered access to Google’s property in order to compete. But this is patently absurd. Google is not an essential facility: Billions of users reach millions of companies every day through direct browser navigation, apps, email links, review sites and blogs, and countless other means — all without once touching Google.com.

Google Search results do not exclude competitors, whether comparison shopping sites or others. For example, 72% of TripAdvisor’s U.S. traffic comes from search, and almost all of that from organic results; other specialized search sites see similar traffic volumes.

More important, however, in addition to continuing to reach rival sites through Google Search, billions of consumers access rival services directly through their mobile apps. In fact, for Yelp,

Approximately 21 million unique devices accessed Yelp via the mobile app on a monthly average basis in the first quarter of 2016, an increase of 32% compared to the same period in 2015. App users viewed approximately 70% of page views in the first quarter and were more than 10 times as engaged as website users, as measured by number of pages viewed. (Emphasis added).

And a staggering 40 percent of mobile browsing is now happening inside the Facebook app, competing with the browsers and search engines pre-loaded on smartphones.

Millions of consumers also directly navigate to Google’s rivals via their browser by simply typing, for example, “Yelp.com” in their address bar. And as noted above, consumers are increasingly using Google rivals’ new disruptive information engines like Alexa and Siri for their search needs. Even the traditional search engine space is competitive — in fact, according to Wired, as of July 2016:

Microsoft has now captured more than one-third of Internet searches. Microsoft’s transformation from a company that sells boxed software to one that sells services in the cloud is well underway. (Emphasis added).

With such numbers, it’s difficult to see how rivals are being foreclosed from reaching consumers in any meaningful way.

Meanwhile, the benefits to consumers are obvious: Google is directly answering questions for consumers rather than giving them a set of possible links to click through and further search. In some cases its results present entirely new and valuable forms of information (e.g., search trends and structured data); in others they serve to hone searches by suggesting further queries, or to help users determine which organic results (including those of its competitors) may be most useful. And, of course, consumers aren’t forced to endure these innovations if they don’t find them useful, as they can quickly switch to other providers.  

Nostalgia makes for bad regulatory policy

Google is not the unstoppable monopolist of the EU competition regulators’ imagining. Rather, it is a continual innovator, forced to adapt to shifting consumer demand, changing technology, and competitive industry dynamics. And Google’s competitors (and complainants), if they are to survive, must innovate as well rather than try to hamstring Google.

Dominance in technology markets — especially online — has always been ephemeral. Once upon a time, MySpace, AOL, and Yahoo were the dominant Internet platforms. Kodak, once practically synonymous with photography itself, let the digital revolution pass it by. The invincible Sony Walkman was upended by mp3s and the iPod. Staid, keyboard-operated Blackberries and Nokias simply couldn’t compete with app-driven, graphical platforms from Apple and Samsung. Even today, startups like Snapchat, Slack, and Spotify gain massive scale and upend entire industries with innovative new technology that can leave less-nimble incumbents in the dustbin of tech history.

Put differently, companies that innovate are able to thrive, while those that remain dependent on yesterday’s technology and outdated business models usually fail — and deservedly so. It should never be up to regulators to pick winners and losers in a highly dynamic and competitive market, particularly if doing so constrains the market’s very dynamism. As Alfonso Lamadrid has pointed out:

It is companies and not competition enforcers which will strive or fail in the adoption of their business models, and it is therefore companies and not competition enforcers who are to decide on what business models to use. Some will prove successful and others will not; some companies will thrive and some will disappear, but with experimentation with business models, success and failure are and have always been part of the game.

In other words, we should not forget that competition law is, or should be, business-model agnostic, and that regulators are – like anyone else – far from omniscient.

Like every other technology company before them, Google and its competitors must be willing and able to adapt in order to keep up with evolving markets — just as for Lewis Carroll’s Red Queen, “it takes all the running you can do, to keep in the same place.” Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters; companies that build their businesses around Google face a near-constantly evolving Google. In the face of such relentless market dynamism, neither consumers nor firms are well served by regulatory policy rooted in nostalgia.  

Yesterday, the International Center for Law & Economics filed reply comments in the docket of the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As in our initial comments, we drew on the economic scholarship of multi-sided platforms to argue that the FCC failed to consider the ways in which asymmetric regulation will ultimately have negative competitive effects and harm consumers. The FCC and some critics claimed that ISPs are gatekeepers deserving of special regulation — a case that both the FCC and the critics failed to make.

The NPRM fails adequately to address these issues, to make out a sufficient case for the proposed regulation, or to justify treating ISPs differently from other companies that collect and use data.

Perhaps most important, the NPRM also fails to acknowledge or adequately assess the actual market in which the use of consumer data arises: the advertising market. Whether intentionally or not, this NPRM is not primarily about regulating consumer privacy; it is about keeping ISPs out of the advertising business. But in this market, ISPs are upstarts challenging the dominant position of firms like Google and Facebook.

Placing onerous restrictions upon ISPs alone results in either under-regulation of edge providers or over-regulation of ISPs within the advertising market, without any clear justification as to why consumer privacy takes on different qualities for each type of advertising platform. But the proper method of regulating privacy is, in fact, the course that both the FTC and the FCC have historically taken, and which has yielded a stable, evenly administered regime: case-by-case examination of actual privacy harms and a minimalist approach to ex ante, proscriptive regulations.

We also responded to particular claims made by New America’s Open Technology Institute about the expectations of consumers regarding data collection online, the level of competitiveness in the marketplace, and the technical realities that differentiate ISPs from edge providers.

OTI attempts to substitute its own judgment of what consumers (should) believe about their data for that of consumers themselves. And in the process it posits a “context” that can and will never shift as new technology and new opportunities emerge. Such a view of consumer expectations is flatly anti-innovation and decidedly anti-consumer, consigning broadband users to yesterday’s technology and business models. The rule OTI supports could effectively forbid broadband providers from offering consumers the option to trade data for lower prices.

Our reply comments went on to point out that much of the basis upon which the NPRM relies — an alleged lack of adequate competition among ISPs — was actually a “manufactured scarcity” based upon the Commission’s failure to properly analyze the relevant markets.

The Commission’s claim that ISPs, uniquely among companies in the modern data economy, face insufficient competition in the broadband market is… insufficiently supported. The flawed manner in which the Commission has defined the purported relevant market for broadband distorts the analysis upon which the proposed rules are based, and manufactures a false scarcity in order to justify unduly burdensome privacy regulations for ISPs. Even the Commission’s own data suggest that consumer choice is alive and well in broadband… The reality is that there is in fact enough competition in the broadband market to offer privacy-sensitive consumers options if they are ever faced with what they view as overly invasive broadband business practices. According to the Commission, as of December 2014, 74% of American homes had a choice of two or more wired ISPs delivering download speeds of at least 10 Mbps, and 88% had a choice of at least two providers of 3 Mbps service. Meanwhile, 93% of consumers have access to at least three mobile broadband providers. Looking forward, consumer choice at all download speeds is increasing at rapid rates due to extensive network upgrades and new entry in a highly dynamic market.

Finally, we rebutted the contention that predictive analytics was a magical tool that would enable ISPs to dominate information gathering and would, consequently, lead to consumer harms — even where ISPs had access only to seemingly trivial data about users.

Some comments in support of the proposed rules attempt to cast ISPs as all-powerful by virtue of their access to apparently trivial data — IP addresses, access timing, computer ports, etc. — because of the power of predictive analytics. These commenters assert that the possibility of predictive analytics coupled with a large data set undermines research demonstrating that ISPs, thanks to increasing encryption, do not have access to better-quality data, and probably have lower-quality data, than edge providers themselves have.

But this is a curious bit of reasoning. It essentially amounts to the idea that, not only should consumers be permitted to control with whom their data is shared, but that all other parties online should be proscribed from making their own independent observations about consumers. Such a rule would be akin to telling supermarkets that they are not entitled to observe traffic patterns in their stores in order to place particular products in relatively more advantageous places, for example. But the reality is that most data is noise; simply having more of it is not necessarily a boon, and predictive analytics is far from a panacea. In fact, the insights gained from extensive data collection are frequently useless when examining very large data sets, and are better employed by single firms answering particular questions about their users and products.

Our full reply comments are available here.

Thanks to Geoff for the introduction. I look forward to posting a few things over the summer.

I’d like to begin by discussing Geoff’s post on the pending legislative proposals designed to combat strategic abuse of drug safety regulations to prevent generic competition. Specifically, I’d like to address the economic incentive structure that is in effect in this highly regulated market.

Like many others, I first noticed the abuse of drug safety regulations to prevent competition when Turing Pharmaceuticals—then led by now-infamous CEO Martin Shkreli—acquired the manufacturing rights for the anti-parasitic drug Daraprim, and raised the price of the drug by over 5,000%. The result was a drug that cost $750 per tablet. Daraprim (pyrimethamine) is used to combat malaria and toxoplasma gondii infections in immune-compromised patients, especially those with HIV. The World Health Organization includes Daraprim on its “List of Essential Medicines” as a medicine important to basic health systems. After the huge price hike, the drug was effectively out of reach for many insurance plans and uninsured patients who needed it for the six-to-eight-week course of treatment for toxoplasma gondii infections.
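For context (the pre-hike price is not given above, but it was widely reported as roughly $13.50 per tablet), the jump to $750 works out to an increase of about

$$\frac{750 - 13.50}{13.50} \times 100\% \;\approx\; 5{,}456\%,$$

which is consistent with the “over 5,000%” figure.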

It’s not unusual for drugs to sell at huge multiples above their manufacturing cost. Indeed, a primary purpose of patent law is to allow drug companies to earn sufficient profits to engage in the expensive and risky business of developing new drugs. But Daraprim was first sold in 1953 and thus has been off patent for decades. With no intellectual property protection, Daraprim should, theoretically, now be available from generic drug manufacturers for only a little above cost. Indeed, this is what we see in the rest of the world. Daraprim is available all over the world at very low prices. The per-tablet price is 3 rupees (US$0.04) in India, R$0.07 (US$0.02) in Brazil, US$0.18 in Australia, and US$0.66 in the UK.

So what gives in the U.S.? Or rather, what does not give? What in our system of drug distribution has gotten stuck and is preventing generic competition from swooping in to compete down the high price of off-patent drugs like Daraprim? The answer is not market failure, but rather regulatory failure, as Geoff noted in his post. While generics would love to enter a market where a drug is currently selling for high profits, they cannot do so without getting FDA approval for their generic version of the drug at issue. To get approval, a generic simply has to file an Abbreviated New Drug Application (“ANDA”) that shows that its drug is equivalent to the branded drug with which it wants to compete. There’s no need for the generic to repeat the safety and efficacy tests that the brand manufacturer originally conducted. To test for equivalence, the generic needs samples of the brand drug. Without those samples, the generic cannot meet its burden of showing equivalence. This is where the strategic use of regulation can come into play.

Geoff’s post explains the potential abuse of Risk Evaluation and Mitigation Strategies (“REMS”). REMS are put in place to require certain safety steps (like testing a woman for pregnancy before prescribing a drug that can cause birth defects) or to restrict the distribution channels for dangerous or addictive drugs. As Geoff points out, there is evidence that a few brand name manufacturers have engaged in bad-faith refusals to provide samples using the excuse of REMS or restricted distribution programs to (1) deny requests for samples, (2) prevent generic manufacturers from buying samples from resellers, and (3) deny generics whose drugs have won approval access to the REMS system that is required for generics to distribute their drugs. Once the FDA has certified that a generic manufacturer can safely handle the drug at issue, there is no legitimate basis for the owners of brand name drugs to deny samples to the generic maker. Expressed worries about liability from entering joint REMS programs with generics also ring hollow, for the most part, and would be ameliorated by the pending legislation.

It’s important to note that this pricing situation is unique to drugs because of the regulatory framework surrounding drug manufacture and distribution. If a manufacturer of, say, an off-patent vacuum cleaner wants to prevent competitors from copying its vacuum cleaner design, it is unlikely to be successful. Even if the original manufacturer refuses to sell any vacuum cleaners to a competitor, and instructs its retailers not to sell to the competitor either, this will be very difficult to monitor and enforce. Moreover, because of an unrestricted resale market, a competitor would inevitably be able to obtain samples of the vacuum cleaner it wishes to copy. Only patent law can successfully protect against the copying of a product sold to the general public, and when the patent expires, so too will the ability to prevent copying.

Drugs are different. The only way a consumer can resell prescription drugs is by breaking the law. Pills bought from an illegal secondary market would be useless to generics for purposes of FDA approval anyway, because the chain of custody would not exist to prove that the samples are the real thing. This means generics need to get samples from the authorized manufacturer or distribution company. When a drug is subject to a REMS-required restricted distribution program, it is even more difficult, if not impossible, for a generic maker to get samples of the drugs for which it wants to make generic versions. Restricted distribution programs, which are used for dangerous or addictive drugs, by design very tightly control the chain of distribution so that the drugs go only to patients with proper prescriptions from authorized doctors.

A troubling trend has arisen recently in which drug owners put their branded drugs into restricted distribution programs not because of any FDA REMS requirement, but instead as a method to prevent generics from obtaining samples and making generic versions of the drugs. This is the strategy that Turing used before it raised the price of Daraprim by over 5,000%. And Turing isn’t the only company to use this strategy. It is being emulated by others, although perhaps not so conspicuously. For instance, in 2015, Valeant Pharmaceuticals (which a year earlier had joined with the hedge fund Pershing Square in a failed hostile bid for Allergan) acquired the rights to a number of off-patent drugs. Once those drugs were in Valeant’s portfolio, it adopted restricted distribution programs and raised their prices substantially, including hikes of 212% and 525% on two life-saving heart drugs. Others have followed suit.

A key component of the strategy to profit from hiking prices on off-patent drugs while avoiding competition from generics is to select drugs that do not currently have generic competitors. Sometimes this is because a drug has recently come off patent, and sometimes it is because the drug is for a small patient population, and thus generics haven’t bothered to enter the market given that brand name manufacturers generally drop their prices to close to cost after the drug comes off patent. But with the strategic control of samples and refusals to allow generics to enter REMS programs, the (often new) owners of the brand name drugs seek to prevent the generic competition that we count on to make products cheap and plentiful once their patent protection expires.

Most brand name drug makers do not engage in withholding samples from generics and abusing restricted distribution and REMS programs. But the few that do cost patients and insurers dearly for important medicines that should be much cheaper once they go off patent. More troubling still is the recent strategy of taking drugs that have been off patent and cheap for years, and abusing the regulatory regime to raise prices and block competition. This growing trend of abusing restricted distribution and REMS to facilitate rent extraction from drug purchasers needs to be corrected.

Two bills addressing this issue are pending in Congress. Both bills (1) require drug companies to provide samples to generics after the FDA has certified the generic, (2) require drug companies to enter into shared REMS programs with generics, (3) allow generics to set up their own REMS compliant systems, and (4) exempt drug companies from liability for sharing products and REMS-compliant systems with generic companies in accordance with the steps set out in the bills. When it comes to remedies, however, the Senate version is significantly better. The penalties provided in the House bill are both vague and overly broad. The bill provides for treble damages and costs against the drug company “of the kind described in section 4(a) of the Clayton Act.” Not only is the application of the Clayton Act unclear in the context of the heavily regulated market for drugs (see Trinko), but treble damages may over-deter reasonably restrictive behavior by drug companies when it comes to distributing dangerous drugs.

The remedies in the Senate version are very well crafted to deter rent-seeking behavior while not overly deterring reasonable behavior. The remedial scheme is particularly good because it punishes most severely those companies that attempt to make exorbitant profits on drugs by denying generic entry. The Senate version provides as a remedy for unreasonable delay that the plaintiff shall be awarded attorneys’ fees, costs, and the defending drug company’s profits on the drug at issue during the time of the unreasonable delay. This means that a brand name drug company that sells an old drug for a low price and delays sharing only because of honest concern about the safety standards of a particular generic company will not face terribly high damages if it is found unreasonable. On the other hand, a company that sends the price of an off-patent drug soaring and then attempts to block generic entry will know that it can lose all of its rent-seeking profits, plus the cost of the victorious generic company’s attorneys’ fees. This vastly reduces the incentive for the company owning the brand name drug to raise prices and keep competitors out. It likewise greatly increases the incentive of a generic company to enter the market and–if it is unreasonably blocked–to file a civil action, the result of which would be to transfer the excess profits to the generic. This provides a rather elegant fix to the regulatory gaming in this area that has become an increasing problem. The balancing of interests and incentives in the Senate bill should leave many congresspersons comfortable supporting the bill.
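To make the incentive argument concrete, here is a back-of-the-envelope model of my own (nothing in either bill is framed this way): let $\pi$ be the brand owner’s monthly profit on the drug while it withholds samples, $T$ the number of months of delay, $p$ the probability a court finds the delay unreasonable, and $F$ the generic’s attorneys’ fees and costs. Under the Senate bill’s disgorgement-plus-fees remedy, the expected payoff from blocking is roughly

$$\mathbb{E}[\text{payoff}] \;\approx\; (1-p)\,\pi T \;+\; p\,\bigl(\pi T - \pi T - F\bigr) \;=\; (1-p)\,\pi T \;-\; pF.$$

For a firm whose refusal would plainly be found unreasonable ($p$ close to one), the expected payoff is negative, so the price-hike-and-block strategy no longer pays; a firm with a genuine safety concern (low $p$) risks little beyond the fee award. Treble damages of the Clayton Act variety, by contrast, are untethered from the brand owner’s delay profits and so can over-deter precisely those low-$p$ firms, which is the concern raised above about the House bill.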