
Today, the International Center for Law & Economics (ICLE) released a study updating our 2014 analysis of the economic effects of the Durbin Amendment to the Dodd-Frank Act.

The new paper, Unreasonable and Disproportionate: How the Durbin Amendment Harms Poorer Americans and Small Businesses, by ICLE scholars Todd J. Zywicki, Geoffrey A. Manne, and Julian Morris, can be found here; a Fact Sheet highlighting the paper’s key findings is available here.

Introduced as part of the Dodd-Frank Act in 2010, the Durbin Amendment sought to reduce the interchange fees assessed by large banks on debit card transactions. In the words of its primary sponsor, Sen. Richard Durbin, the Amendment aspired to help “every single Main Street business that accepts debit cards keep more of their money, which is a savings they can pass on to their consumers.”

Unfortunately, although the Durbin Amendment did generate benefits for big-box retailers, ICLE’s 2014 analysis found that it had actually harmed many other merchants and imposed substantial net costs on the majority of consumers, especially those from lower-income households.

In the current study, we analyze a welter of new evidence and arguments to assess whether time has ameliorated or exacerbated the Amendment’s effects. Our findings in this report expand upon and reinforce our findings from 2014:

Relative to the period before the Durbin Amendment, almost every segment of the interrelated retail, banking, and consumer finance markets has been made worse off as a result of the Amendment.

Predictably, the removal of billions of dollars in interchange fee revenue has led to the imposition of higher bank fees and reduced services for banking consumers.

In fact, millions of households, regardless of income level, have been adversely affected by the Durbin Amendment through higher overdraft fees, increased minimum balances, reduced access to free checking, higher ATM fees, and lost debit card rewards, among other things.

Nor is there any evidence that merchants have lowered prices for retail consumers; for many small-ticket items, in fact, prices have been driven up.

Contrary to Sen. Durbin’s promises, in other words, increased banking costs have not been offset by lower retail prices.

At the same time, although large merchants continue to reap a Durbin Amendment windfall, there remains no evidence that small merchants have realized any interchange cost savings — indeed, many have suffered cost increases.

And all of these effects fall hardest on the poor. Hundreds of thousands of low-income households have chosen (or been forced) to exit the banking system, with the result that they face higher costs, difficulty obtaining credit, and complications receiving and making payments — all without offset in the form of lower retail prices.

Finally, the 2017 study also details a new trend that was not apparent when we examined the data three years ago: Contrary to our findings then, the two-tier system of interchange fee regulation (which exempts issuing banks with under $10 billion in assets) no longer appears to be protecting smaller banks from the Durbin Amendment’s adverse effects.

This week the House begins consideration of the Amendment’s repeal as part of Rep. Hensarling’s CHOICE Act. Our study makes clear that the Durbin price-control experiment has proven a failure, and that repeal is, indeed, the only responsible option.

Click on the following links to read:

  • Full Paper
  • Fact Sheet
  • Summary

In a recent long-form article in the New York Times, reporter Noam Scheiber set out to detail some of the ways Uber (and similar companies, but mainly Uber) are engaged in “an extraordinary experiment in behavioral science to subtly entice an independent work force to maximize its growth.”

That characterization seems innocuous enough, but it is apparent early on that Scheiber’s aim is not only to inform but also, if not primarily, to deride these efforts. The title of the piece, in fact, sets the tone:

How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons

Uber and its relationship with its drivers are variously described by Scheiber in the piece as secretive, coercive, manipulative, dominating, and exploitative, among other things. As Scheiber describes his article, it sets out to reveal how

even as Uber talks up its determination to treat drivers more humanely, it is engaged in an extraordinary behind-the-scenes experiment in behavioral science to manipulate them in the service of its corporate growth — an effort whose dimensions became evident in interviews with several dozen current and former Uber officials, drivers and social scientists, as well as a review of behavioral research.

What’s so galling about the piece is that, if you strip away the biased and frequently misguided framing, it presents a truly engaging picture of some of the ways that Uber sets about solving a massively complex optimization problem, abetted by significant agency costs.

So I did. Strip away the detritus, add essential (but omitted) context, and edit the article to fix the anti-Uber bias, the one-sided presentation, the mischaracterizations, and the fundamentally non-economic presentation of what is, at its core, a fascinating illustration of some basic problems (and solutions) from industrial organization economics. (For what it’s worth, Scheiber should know better. After all, “He holds a master’s degree in economics from the University of Oxford, where he was a Rhodes Scholar, and undergraduate degrees in math and economics from Tulane University.”)

In my retelling, the title becomes:

How Uber Uses Innovative Management Tactics to Incentivize Its Drivers

My transformed version of the piece, with critical commentary in the form of tracked changes to the original, is here (pdf).

It’s a long (and, as I said, fundamentally interesting) piece, with cool interactive graphics, well worth the read (well, at least in my retelling, IMHO). Below is just a taste of the edits and commentary I added.

For example, where Scheiber writes:

Uber exists in a kind of legal and ethical purgatory, however. Because its drivers are independent contractors, they lack most of the protections associated with employment. By mastering their workers’ mental circuitry, Uber and the like may be taking the economy back toward a pre-New Deal era when businesses had enormous power over workers and few checks on their ability to exploit it.

With my commentary (here integrated into final form rather than tracked), that paragraph becomes:

Uber operates under a different set of legal constraints, however, also duly enacted and under which millions of workers have profitably worked for decades. Because its drivers are independent contractors, they receive their compensation largely in dollars rather than government-mandated “benefits” that remove some of the voluntariness from employer/worker relationships. And under mandates such as overtime pay, for example, the Uber business model, built in part on offering flexible incentives to match supply and demand using prices and compensation, would be next to impossible. It is precisely through appealing to drivers’ self-interest that Uber and the like may be moving the economy forward to a new era when businesses and workers have more flexibility, much to the benefit of all.

Elsewhere, Scheiber’s bias is a bit more subtle, but no less real. Thus, he writes:

As he tried to log off at 7:13 a.m. on New Year’s Day last year, Josh Streeter, then an Uber driver in the Tampa, Fla., area, received a message on the company’s driver app with the headline “Make it to $330.” The text then explained: “You’re $10 away from making $330 in net earnings. Are you sure you want to go offline?” Below were two prompts: “Go offline” and “Keep driving.” The latter was already highlighted.

With my edits and commentary, that paragraph becomes:

As he started the process of logging off at 7:13 a.m. on New Year’s Day last year, Josh Streeter, then an Uber driver in the Tampa, Fla., area, received a message on the company’s driver app with the headline “Make it to $330.” The text then explained: “You’re $10 away from making $330 in net earnings. Are you sure you want to go offline?” Below were two prompts: “Go offline” and “Keep driving.” The latter was already highlighted, but the former was listed first. It’s anyone’s guess whether either characteristic — placement or coloring — had any effect on drivers’ likelihood of clicking one button or the other.

And one last example. Scheiber writes:

Consider an algorithm called forward dispatch — Lyft has a similar one — that dispatches a new ride to a driver before the current one ends. Forward dispatch shortens waiting times for passengers, who may no longer have to wait for a driver 10 minutes away when a second driver is dropping off a passenger two minutes away.

Perhaps no less important, forward dispatch causes drivers to stay on the road substantially longer during busy periods — a key goal for both companies.

Uber and Lyft explain this in essentially the same way. “Drivers keep telling us the worst thing is when they’re idle for a long time,” said Kevin Fan, the director of product at Lyft. “If it’s slow, they’re going to go sign off. We want to make sure they’re constantly busy.”

While this is unquestionably true, there is another way to think of the logic of forward dispatch: It overrides self-control.

* * *

Uber officials say the feature initially produced so many rides at times that drivers began to experience a chronic Netflix ailment — the inability to stop for a bathroom break. Amid the uproar, Uber introduced a pause button.

“Drivers were saying: ‘I can never go offline. I’m on just continuous trips. This is a problem.’ So we redesigned it,” said Maya Choksi, a senior Uber official in charge of building products that help drivers. “In the middle of the trip, you can say, ‘Stop giving me requests.’ So you can have more control over when you want to stop driving.”

It is true that drivers can pause the services’ automatic queuing feature if they need to refill their tanks, or empty them, as the case may be. Yet once they log back in and accept their next ride, the feature kicks in again. To disable it, they would have to pause it every time they picked up a new passenger. By contrast, even Netflix allows users to permanently turn off its automatic queuing feature, known as Post-Play.

This pre-emptive hard-wiring can have a huge influence on behavior, said David Laibson, the chairman of the economics department at Harvard and a leading behavioral economist. Perhaps most notably, as Ms. Rosenblat and Luke Stark observed in an influential paper on these practices, Uber’s app does not let drivers see where a passenger is going before accepting the ride, making it hard to judge how profitable a trip will be.

Here’s how I would recast that, and add some much-needed economics:

Consider an algorithm called forward dispatch — Lyft has a similar one — that dispatches a new ride to a driver before the current one ends. Forward dispatch shortens waiting times for passengers, who may no longer have to wait for a driver 10 minutes away when a second driver is dropping off a passenger two minutes away.

Perhaps no less important, forward dispatch causes drivers to stay on the road substantially longer during busy periods — a key goal for both companies — by giving them more income-earning opportunities.

Uber and Lyft explain this in essentially the same way. “Drivers keep telling us the worst thing is when they’re idle for a long time,” said Kevin Fan, the director of product at Lyft. “If it’s slow, they’re going to go sign off. We want to make sure they’re constantly busy.”

While this is unquestionably true, and seems like another win-win, some critics have tried to paint even this means of satisfying both driver and consumer preferences in a negative light by claiming that the forward dispatch algorithm overrides self-control.

* * *

Uber officials say the feature initially produced so many rides at times that drivers began to experience a chronic Netflix ailment — the inability to stop for a bathroom break. Amid the uproar, Uber introduced a pause button.

“Drivers were saying: ‘I can never go offline. I’m on just continuous trips. This is a problem.’ So we redesigned it,” said Maya Choksi, a senior Uber official in charge of building products that help drivers. “In the middle of the trip, you can say, ‘Stop giving me requests.’ So you can have more control over when you want to stop driving.”

Tweaks like these put paid to the arguments that Uber is simply trying to abuse its drivers. And yet, critics continue to make such claims:

It is true that drivers can pause the services’ automatic queuing feature if they need to refill their tanks, or empty them, as the case may be. Yet once they log back in and accept their next ride, the feature kicks in again. To disable it, they would have to pause it every time they picked up a new passenger. By contrast, even Netflix allows users to permanently turn off its automatic queuing feature, known as Post-Play.

It’s difficult to take seriously claims that Uber “abuses” drivers by setting a default that drivers almost certainly prefer; surely drivers seek out another fare following the last fare more often than they seek out another bathroom break. In any case, the difference between one default and the other is a small change in the number of times drivers might have to push a single button; hardly a huge impediment.

But such claims persist, nevertheless. Setting a trivially different default can have a huge influence on behavior, claims David Laibson, the chairman of the economics department at Harvard and a leading behavioral economist. Perhaps most notably — and to change the subject — as Ms. Rosenblat and Luke Stark observed in an influential paper on these practices, Uber’s app does not let drivers see where a passenger is going before accepting the ride, making it hard to judge how profitable a trip will be. But there are any number of defenses of this practice, from both a driver- and consumer-welfare standpoint. Not least, such disclosure could well create isolated scarcity for a huge range of individual ride requests (as opposed to the general scarcity during a “surge”), leading to longer wait times, the need to adjust prices for consumers on the basis of individual rides, and more intense competition among drivers for the most profitable rides. Given these and other explanations, it is extremely unlikely that the practice is actually aimed at “abusing” drivers.
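Stripped of the framing, the forward-dispatch logic the article describes is just a minimization over effective pickup times: a driver finishing a trip nearby can beat an idle driver farther away. Here is a minimal sketch of that idea (my own illustration, not Uber’s or Lyft’s actual algorithm; the `Driver` fields and the numbers are assumptions for the example):

```python
from dataclasses import dataclass

@dataclass
class Driver:
    name: str
    travel_time_to_rider: float   # minutes from the driver's (eventual) position to the rider
    minutes_left_on_trip: float   # 0 for an idle driver

def forward_dispatch(drivers):
    """Pick the driver with the shortest effective pickup time,
    counting busy drivers who are about to finish their current trip."""
    return min(drivers, key=lambda d: d.minutes_left_on_trip + d.travel_time_to_rider)

# The scenario from the article: an idle driver 10 minutes away loses to a
# busy driver 2 minutes away who finishes a drop-off in 1 minute (3 < 10).
drivers = [
    Driver("idle_far", travel_time_to_rider=10, minutes_left_on_trip=0),
    Driver("busy_near", travel_time_to_rider=2, minutes_left_on_trip=1),
]
print(forward_dispatch(drivers).name)  # "busy_near"
```

The economics falls straight out of the objective function: shorter rider waits and fewer idle driver minutes, which is why both companies converge on some version of it.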

As they say, read the whole thing!

What does it mean to “own” something? A simple question (with a complicated answer, of course) that, astonishingly, goes unasked in a recent article in the Pennsylvania Law Review entitled, What We Buy When We “Buy Now,” by Aaron Perzanowski and Chris Hoofnagle (hereafter “P&H”). But how can we reasonably answer the question they pose without first trying to understand the nature of property interests?

P&H set forth a simplistic thesis for their piece: when an e-commerce site uses the term “buy” to indicate the purchase of digital media (instead of the term “license”), it deceives consumers. This is so, the authors assert, because the common usage of the term “buy” indicates that there will be some conveyance of property that necessarily includes absolute rights such as alienability, descendibility, and excludability, and digital content doesn’t generally come with these attributes. The authors seek to establish this deception through a poorly constructed survey regarding consumers’ understanding of the parameters of their property interests in digitally acquired copies. (The survey’s considerable limitations are a topic for another day….)

The issue is more than merely academic: NTIA and the USPTO have just announced that they will hold a public meeting

to discuss how best to communicate to consumers regarding license terms and restrictions in connection with online transactions involving copyrighted works… [as a precursor to] the creation of a multistakeholder process to establish best practices to improve consumers’ understanding of license terms and restrictions in connection with online transactions involving creative works.

Whatever the results of that process, it should not begin, or end, with P&H’s problematic approach.

Getting to their conclusion that platforms are engaged in deceptive practices requires two leaps of faith: First, that property interests are absolute and that any restraint on the use of “property” is inconsistent with the notion of ownership; and second, that consumers’ stated expectations (even assuming that they were measured correctly) alone determine the appropriate contours of legal (and economic) property interests. Both leaps are meritless.

Property and ownership are not absolute concepts

P&H are in such a rush to condemn downstream restrictions on the alienability of digital copies that they fail to recognize that “property” and “ownership” are not absolute terms, and are capable of being properly understood only contextually. Our very notions of what objects may be capable of ownership change over time, along with the scope of authority over owned objects. For P&H, the fact that there are restrictions on the use of an object means that it is not properly “owned.” But that overlooks our everyday understanding of the nature of property.

Ownership is far more complex than P&H allow, and ownership limited by certain constraints is still ownership. As Armen Alchian and Harold Demsetz note in The Property Right Paradigm (1973):

In common speech, we frequently speak of someone owning this land, that house, or these bonds. This conversational style undoubtedly is economical from the viewpoint of quick communication, but it masks the variety and complexity of the ownership relationship. What is owned are rights to use resources, including one’s body and mind, and these rights are always circumscribed, often by the prohibition of certain actions. To “own land” usually means to have the right to till (or not to till) the soil, to mine the soil, to offer those rights for sale, etc., but not to have the right to throw soil at a passerby, to use it to change the course of a stream, or to force someone to buy it. What are owned are socially recognized rights of action. (Emphasis added).

Literally, everything we own comes with a range of limitations on our use rights. Literally. Everything. So starting from a position that limitations on use mean something is not, in fact, owned, is absurd.

Moreover, in defining what we buy when we buy digital goods by reference to analog goods, P&H are comparing apples and oranges, without acknowledging that both apples and oranges are bought.

There has been a fair amount of discussion about the nature of digital content transactions (including by the USPTO and NTIA), and whether they are analogous to traditional sales of objects or more properly characterized as licenses. But this is largely a distinction without a difference, and the nature of the transaction is unnecessary in understanding that P&H’s assertion of deception is unwarranted.

Quite simply, we are accustomed to buying licenses as well as products. Whenever we buy a ticket — e.g., an airline ticket or a ticket to the movies — we are buying the right to use something or gain some temporary privilege. These transactions are governed by the terms of the license. But we certainly buy tickets, no? Alchian and Demsetz again:

The domain of demarcated uses of a resource can be partitioned among several people. More than one party can claim some ownership interest in the same resource. One party may own the right to till the land, while another, perhaps the state, may own an easement to traverse or otherwise use the land for specific purposes. It is not the resource itself which is owned; it is a bundle, or a portion, of rights to use a resource that is owned. In its original meaning, property referred solely to a right, title, or interest, and resources could not be identified as property any more than they could be identified as right, title, or interest. (Emphasis added).

P&H essentially assert that restrictions on the use of property are so inconsistent with the notion of property that it would be deceptive to describe the acquisition transaction as a purchase. But such a claim completely overlooks the fact that there are restrictions on any use of property in general, and on ownership of copies of copyright-protected materials in particular.

Take analog copies of copyright-protected works. While the lawful owner of a copy is able to lend that copy to a friend, sell it, or even use it as a hammer or paperweight, he or she cannot offer it for rental (for certain kinds of works), cannot reproduce it, may not publicly perform or broadcast it, and may not use it to bludgeon a neighbor. In short, there are all kinds of restrictions on the use of said object — yet P&H have little problem with defining the relationship of person to object as “ownership.”

Consumers’ understanding of all the terms of exchange is a poor metric for determining the nature of property interests

P&H make much of the assertion that most users don’t “know” the precise terms that govern the allocation of rights in digital copies; this is the source of the “deception” they assert. But there is a cost to marking out the precise terms of use with perfect specificity (no contract specifies every eventuality), a cost to knowing the terms perfectly, and a cost to caring about them.

When we buy digital goods, we probably care a great deal about a few terms. For a digital music file, for example, we care first and foremost about whether it will play on our device(s). Other terms are of diminishing importance. Users certainly care whether they can play a song when offline, for example, but whether their children will be able to play it after they die? Not so much. That eventuality may, in fact, be specified in the license, but the nature of this particular ownership relationship includes a degree of rational ignorance on the users’ part: The typical consumer simply doesn’t care. In other words, she is, in Nobel-winning economist Herbert Simon’s term, “boundedly rational.” That isn’t deception; it’s a feature of life without which we would be overwhelmed by “information overload” and unable to operate. We have every incentive and ability to know the terms we care most about, and to ignore the ones about which we care little.

Relatedly, P&H also fail to understand the relationship between price and ownership. A digital song that is purchased from Amazon for $.99 comes with a set of potentially valuable attributes. For example:

  • It may be purchased on its own, without the other contents of an album;
  • It never degrades in quality, and it’s extremely difficult to misplace;
  • It may be purchased from one’s living room and be instantaneously available;
  • It can be easily copied or transferred onto multiple devices; and
  • It can be stored in Amazon’s cloud without taking up any of the consumer’s physical memory resources.

In many ways that matter to consumers, digital copies are superior to analog or physical ones. And yet, compared to physical media, on a per-song basis (assuming one could even purchase a physical copy of a single song without purchasing an entire album), $.99 may represent a considerable discount. Moreover, in 1982 when CDs were first released, they cost an average of $15. In 2017 dollars, that would be $38. Yet today most digital album downloads can be found for $10 or less.
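The inflation adjustment behind that comparison is straightforward CPI arithmetic. A quick sketch (the CPI-U index values below are approximate annual averages and should be treated as assumptions, not exact data):

```python
# Rough inflation adjustment for the CD-price comparison in the text.
# Approximate BLS CPI-U annual averages (1982-84 = 100 base).
CPI_1982 = 96.5
CPI_2017 = 245.1

def in_2017_dollars(price_1982: float) -> float:
    """Scale a 1982 price by the ratio of the two index values."""
    return price_1982 * CPI_2017 / CPI_1982

print(round(in_2017_dollars(15)))  # roughly 38, i.e. ~$38 in 2017 dollars
```

So a $10 digital album today sells for roughly a quarter of what an early CD cost in real terms.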

Of course, songs purchased on CD or vinyl offer other benefits that a digital copy can’t provide. But the main thing — the ability to listen to the music — is approximately equal, and yet the digital copy offers greater convenience at (often) lower price. It is impossible to conclude that a consumer is duped by such a purchase, even if it doesn’t come with the ability to resell the song.

In fact, given the price-to-value ratio, it is perhaps reasonable to think that consumers know full well (or at least suspect) that there might be some corresponding limitations on use — the inability to resell, for example — that would explain the discount. For some people, those limitations might matter, and those people, presumably, figure out whether such limitations are present before buying a digital album or song. For everyone else, however, the ability to buy a digital song for $.99 — including all of the benefits of digital ownership, but minus the ability to resell — is a good deal, just as it is worth it to a home buyer to purchase a house, regardless of whether it is subject to various easements.

Consumers are, in fact, familiar with “buying” property with all sorts of restrictions

The inability to resell digital goods looms inordinately large for P&H: According to them, by virtue of the fact that digital copies may not be resold, “ownership” is no longer an appropriate characterization of the relationship between the consumer and her digital copy. P&H believe that digital copies of works are sufficiently similar to analog versions that traditional doctrines of exhaustion (which would permit a lawful owner of a copy of a work to dispose of that copy as he or she deems appropriate) should apply equally to digital copies, and thus that the inability to alienate the copy as the consumer wants means that there is no ownership interest per se.

But, as discussed above, even ownership of a physical copy doesn’t convey to the purchaser the right to make or allow any use of that copy. So why should we treat the ability to alienate a copy as the determining factor in whether it is appropriate to refer to the acquisition as a purchase? P&H arrive at this conclusion only through the illogical assertion that

Consumers operate in the marketplace based on their prior experience. We suggest that consumers’ “default” behavior is based on the experiences of buying physical media, and the assumptions from that context have carried over into the digital domain.

P&H want us to believe that consumers can’t distinguish between the physical and virtual worlds, and that their ability to use media doesn’t differentiate between these realms. But consumers do understand (to the extent that they care) that they are buying a different product, with different attributes. Does anyone try to play a vinyl record on his or her phone? There are perceived advantages and disadvantages to different kinds of media purchases. The ability to resell is only one of these — and for many (most?) consumers not likely the most important.

And, furthermore, the notion that consumers better understood their rights — and the limitations on ownership — in the physical world and that they carried these well-informed expectations into the digital realm is fantasy. Are we to believe that the consumers of yore understood that when they bought a physical record they could sell it, but not rent it out? That if they played that record in a public place they would need to pay performance royalties to the songwriter and publisher? Not likely.

Simply put, there is a wide variety of goods and services that we clearly buy, but that have all kinds of attributes that do not fit P&H’s crabbed definition of ownership. For example:

  • We buy tickets to events and membership in clubs (which, depending upon club rules, may not be alienated, and which always lapse for non-payment).
  • We buy houses notwithstanding the fact that in most cases all we own is the right to inhabit the premises for as long as we pay the bank (which actually retains more of the incidents of “ownership”).
  • In fact, we buy real property encumbered by a series of restrictive covenants: Depending upon where we live, we may not be able to build above a certain height, we may not paint the house certain colors, we may not be able to leave certain objects in the driveway, and we may not be able to resell without approval of a board.

We may or may not know (or care) about all of the restrictions on our use of such property. But surely we may accurately say that we bought the property and that we “own” it, nonetheless.

The reality is that we are comfortable with the notion of buying any number of limited property interests — including the purchasing of a license — regardless of the contours of the purchase agreement. The fact that some ownership interests may properly be understood as licenses rather than as some form of exclusive and permanent dominion doesn’t suggest that a consumer is not involved in a transaction properly characterized as a sale, or that a consumer is somehow deceived when the transaction is characterized as a sale — and P&H are surely aware of this.

Conclusion: The real issue for P&H is “digital first sale,” not deception

At root, P&H are not truly concerned about consumer deception; they are concerned about what they view as unreasonable constraints on the “rights” of consumers imposed by copyright law in the digital realm. Resale looms so large in their analysis not because consumers care about it (or are deceived about it), but because the real object of their enmity is the lack of a “digital first sale doctrine” that exactly mirrors the law regarding physical goods.

But Congress has already determined that there are sufficient distinctions between ownership of digital copies and ownership of analog ones to justify treating them differently, notwithstanding ownership of the particular copy. And for good reason: Trade in “used” digital copies is not a secondary market. Such copies are identical to those traded in the primary market and would compete directly with “pristine” digital copies. It makes perfect sense to treat ownership differently in these cases — and still to say that both digital and analog copies are “bought” and “owned.”

P&H’s deep-seated opposition to current law colors and infects their analysis — and, arguably, their failure to be upfront about it is the real deception. When one starts an analysis with an already-identified conclusion, the path from hypothesis to result is unlikely to withstand scrutiny, and that is certainly the case here.

In an October 25 blog commentary posted at this site, Geoffrey Manne and Kristian Stout argued against a proposed Federal Communications Commission (FCC) ban on the use of mandatory arbitration clauses in internet service providers’ consumer service agreements. This proposed ban is just one among many unfortunate features of the FCC’s latest misguided effort to regulate the privacy of data transmitted over the Internet (FCC Privacy NPRM), discussed by me in an October 27, 2016 Heritage Foundation Legal Memorandum:

The growth of the Internet economy has highlighted the costs associated with the unauthorized use of personal information transmitted online. The federal government’s consumer protection agency, the Federal Trade Commission (FTC), has taken enforcement actions for online privacy violations based on its authority to proscribe “unfair or deceptive” practices affecting commerce. The FTC’s economically influenced case-by-case approach to privacy violations focuses on practices that harm consumers. The FCC has proposed a rule that would impose intrusive privacy regulation on broadband Internet service providers (but not other Internet companies), without regard to consumer harm. If implemented, the FCC’s rule would impose major economic costs and would interfere with neutral implementation of the FTC’s less intrusive approach, as well as the FTC’s lead role in federal regulatory privacy coordination with foreign governments.

My analysis concludes with the following recommendations:

The FCC’s Privacy NPRM is at odds with the pro-competitive, economic welfare enhancing goals of the 1996 Telecommunications Act. It ignores the limitations imposed by that act and, if implemented, would harm consumers and producers and slow innovation. This prompts four recommendations.

The FCC should withdraw the NPRM and leave it to the FTC to oversee all online privacy practices under its Section 5 unfairness and deception authority. The adoption of the Privacy Shield, which designates the FTC as the responsible American privacy oversight agency, further strengthens the case against FCC regulation in this area.

In overseeing online privacy practices, the FTC should employ a very light touch that stresses economic analysis and cost-benefit considerations. Moreover, it should avoid requiring that rigid privacy policy conditions be kept in place for long periods of time through consent decree conditions, in order to allow changing market conditions to shape and improve business privacy policies.

Moreover, the FTC should borrow a page from former FTC Commissioner Joshua Wright by implementing an “economic approach” to privacy.  Under such an approach:

  • FTC economists would help make the commission a privacy “thought leader” by developing a rigorous academic research agenda on the economics of privacy, featuring the economic evaluation of industry sectors and practices;
  • The FTC would bear the burden of proof in showing that violations of a company’s privacy policy are material to consumer decision-making;
  • FTC economists would report independently to the FTC about proposed privacy-related enforcement initiatives; and
  • The FTC would publish the views of its Bureau of Economics in all privacy-related consent decrees that are placed on the public record.

The FTC should encourage the European Commission and other foreign regulators to take into account the economics of privacy in developing their privacy regulatory policies. In so doing, it should emphasize that innovation is harmed, the beneficial development of the Internet is slowed, and consumer welfare and rights are undermined through highly prescriptive regulation in this area (well-intentioned though it may be). Relatedly, the FTC and other U.S. government negotiators should argue against adoption of a “one-size-fits-all” global privacy regulation framework.  Such a global framework could harmfully freeze into place over-regulatory policies and preclude beneficial experimentation in alternative forms of “lighter-touch” regulation and enforcement.

Although not a panacea, these recommendations would help deter (or, at least, constrain) the economically harmful government micromanagement of businesses’ privacy practices in the United States and abroad.  The Internet economy would in turn benefit from such a restraint on the grasping hand of big government.

Stay tuned.

Over the weekend, Senator Al Franken and FCC Commissioner Mignon Clyburn issued an impassioned statement calling for the FCC to thwart the use of mandatory arbitration clauses in ISPs’ consumer service agreements — starting with a ban on mandatory arbitration of privacy claims in the Chairman’s proposed privacy rules. Unfortunately, their call to arms rests upon a number of inaccurate or weak claims. Before the Commissioners vote on the proposed privacy rules later this week, they should carefully consider whether consumers would actually be served by such a ban.

FCC regulations can’t override congressional policy favoring arbitration

To begin with, it is firmly cemented in Supreme Court precedent that the Federal Arbitration Act (FAA) “establishes ‘a liberal federal policy favoring arbitration agreements.’” As the Court recently held:

[The FAA] reflects the overarching principle that arbitration is a matter of contract…. [C]ourts must “rigorously enforce” arbitration agreements according to their terms…. That holds true for claims that allege a violation of a federal statute, unless the FAA’s mandate has been “overridden by a contrary congressional command.”

For better or for worse, that’s where the law stands, and it is the exclusive province of Congress — not the FCC — to change it. Yet nothing in the Communications Act (to say nothing of the privacy provisions in Section 222 of the Act) constitutes a “contrary congressional command.”

And perhaps that’s for good reason. In enacting the statute, Congress didn’t demonstrate the same pervasive hostility toward companies and their relationships with consumers that has characterized the way this FCC has chosen to enforce the Act. As Commissioner O’Rielly noted in dissenting from the privacy NPRM:

I was also alarmed to see the Commission acting on issues that should be completely outside the scope of this proceeding and its jurisdiction. For example, the Commission seeks comment on prohibiting carriers from including mandatory arbitration clauses in contracts with their customers. Here again, the Commission assumes that consumers don’t understand the choices they are making and is willing to impose needless costs on companies by mandating how they do business.

If the FCC were to adopt a provision prohibiting arbitration clauses in its privacy rules, it would conflict with the FAA — and the FAA would win. Along the way, however, it would create a thorny uncertainty for both companies and consumers seeking to enforce their contracts.  

The evidence suggests that arbitration is pro-consumer

But the lack of legal authority isn’t the only problem with the effort to shoehorn an anti-arbitration bias into the Commission’s privacy rules: It’s also bad policy.

In its initial broadband privacy NPRM, the Commission said this about mandatory arbitration:

In the 2015 Open Internet Order, we agreed with the observation that “mandatory arbitration, in particular, may more frequently benefit the party with more resources and more understanding of the dispute procedure, and therefore should not be adopted.” We further discussed how arbitration can create an asymmetrical relationship between large corporations that are repeat players in the arbitration system and individual customers who have fewer resources and less experience. Just as customers should not be forced to agree to binding arbitration and surrender their right to their day in court in order to obtain broadband Internet access service, they should not have to do so in order to protect their private information conveyed through that service.

The Commission may have “agreed” with the cited observations about arbitration, but that doesn’t make those views accurate. As one legal scholar has noted, summarizing the empirical data on the effects of arbitration:

[M]ost of the methodologically sound empirical research does not validate the criticisms of arbitration. To give just one example, [employment] arbitration generally produces higher win rates and higher awards for employees than litigation.

* * *

In sum, by most measures — raw win rates, comparative win rates, some comparative recoveries and some comparative recoveries relative to amounts claimed — arbitration generally produces better results for claimants [than does litigation].

A comprehensive, empirical study by Northwestern Law’s Searle Center on AAA (American Arbitration Association) cases found much the same thing, noting in particular that

  • Consumer claimants in arbitration incur average arbitration fees of only about $100 to arbitrate small (under $10,000) claims, and $200 for larger claims (up to $75,000).
  • Consumer claimants also win attorneys’ fees in over 60% of the cases in which they seek them.
  • On average, consumer arbitrations are resolved in under 7 months.
  • Consumers win some relief in more than 50% of cases they arbitrate…
  • And they do almost exactly as well in cases brought against “repeat-player” businesses.

In short, it’s extremely difficult to sustain arguments suggesting that arbitration is tilted against consumers relative to litigation.

(Upper) class actions: Benefitting attorneys — and very few others

But it isn’t just any litigation that Clyburn and Franken seek to preserve; rather, they are focused on class actions:

If you believe that you’ve been wronged, you could take your service provider to court. But you’d have to find a lawyer willing to take on a multi-national telecom provider over a few hundred bucks. And even if you won the case, you’d likely pay more in legal fees than you’d recover in the verdict.

The only feasible way for you as a customer to hold that corporation accountable would be to band together with other customers who had been similarly wronged, building a case substantial enough to be worth the cost—and to dissuade that big corporation from continuing to rip its customers off.

While litigation of course plays an important role in redressing consumer wrongs, class actions frequently don’t confer upon class members anything close to the imagined benefits that plaintiffs’ lawyers and their congressional enablers claim. According to a 2013 report on recent class actions by the law firm Mayer Brown LLP, for example:

  • “In [the] entire data set, not one of the class actions ended in a final judgment on the merits for the plaintiffs. And none of the class actions went to trial, either before a judge or a jury.” (Emphasis in original).
  • “The vast majority of cases produced no benefits to most members of the putative class.”
  • “For those cases that do settle, there is often little or no benefit for class members. What is more, few class members ever even see those paltry benefits — particularly in consumer class actions.”
  • “The bottom line: The hard evidence shows that class actions do not provide class members with anything close to the benefits claimed by their proponents, although they can (and do) enrich attorneys.”

Similarly, a CFPB study of consumer finance arbitration and litigation between 2008 and 2012 seems to indicate that the class action settlements and judgments it studied resulted in anemic relief to class members, at best. The CFPB tries to disguise the results with large, aggregated, and heavily caveated numbers that seem impressive (while never once actually indicating what the average payouts per person were). But in the only hard numbers it provides (concerning four classes that ended up settling in 2013), promised relief amounted to under $23 each (comprising both cash and in-kind payment) if every class member claimed against the award. Back-of-the-envelope calculations based on the rest of the data in the report suggest that result was typical.

Furthermore, the average time to settlement of the cases the CFPB looked at was almost 2 years. And somewhere between 24% and 37% involved a non-class settlement — meaning class members received absolutely nothing at all because the named plaintiff personally took a settlement.

By contrast, according to the Searle Center study, the average award in the consumer-initiated arbitrations it studied (admittedly, involving cases with a broader range of claims) was almost $20,000, and the average time to resolution was less than 7 months.

To be sure, class action litigation has been an important part of our system of justice. But, as Arthur Miller — a legal pioneer who helped author the rules that make class actions viable — himself acknowledged, they are hardly a panacea:

I believe that in the 50 years we have had this rule, that there are certain class actions that never should have been brought, admitted; that we have burdened our judiciary, yes. But we’ve had a lot of good stuff done. We really have.

The good that has been done, according to Professor Miller, relates in large part to the civil rights violations of the 1950s and 1960s, which the class action rules were designed to mitigate:

Dozens and dozens and dozens of communities were desegregated because of the class action. You even see desegregation decisions in my old town of Boston where they desegregated the school system. That was because of a class action.

It’s hard to see how Franken and Clyburn’s concern for redress of “a mysterious 99-cent fee… appearing on your broadband bill” really comes anywhere close to the civil rights violations that spawned the class action rules. Particularly given the increasingly pervasive role of the FCC, FTC, and other consumer protection agencies in addressing and deterring consumer harms (to say nothing of arbitration itself), it is manifestly unclear why costly, protracted litigation that infrequently benefits anyone other than trial attorneys should be deemed so essential.

“Empowering the 21st century [trial attorney]”

Nevertheless, Commissioner Clyburn and Senator Franken echo the privacy NPRM’s faulty concerns about arbitration clauses that restrict consumers’ ability to litigate in court:

If you’re prohibited from using our legal system to get justice when you’re wronged, what’s to protect you from being wronged in the first place?

Well, what do they think the FCC is — chopped liver?

Hardly. In fact, it’s a little surprising to see Commissioner Clyburn (who sits on a Commission that proudly proclaims that “[p]rotecting consumers is part of [its] DNA”) and Senator Franken (among Congress’ most vocal proponents of the FCC’s claimed consumer protection mission) asserting that the only protection for consumers from ISPs’ supposed depredations is the cumbersome litigation process.

In fact, of course, the FCC has claimed for itself the mantle of consumer protector, aimed at “Empowering the 21st Century Consumer.” But nowhere does the agency identify “promoting and preserving the rights of consumers to litigate” among its tools of consumer empowerment (nor should it). There is more than a bit of irony in a federal regulator — a commissioner of an agency charged with making sure, among other things, that corporations comply with the law — claiming that, without class actions, consumers are powerless in the face of bad corporate conduct.

Moreover, even if it were true (it’s not) that arbitration clauses tend to restrict redress of consumer complaints, effective consumer protection would still not necessarily be furthered by banning such clauses in the Commission’s new privacy rules.

The FCC’s contemplated privacy regulations are poised to introduce a wholly new and untested regulatory regime with (at best) uncertain consequences for consumers. Given the risk of consumer harm resulting from the imposition of this new regime, as well as the corollary risk of its excessive enforcement by complainants seeking to test or push the boundaries of new rules, an agency truly concerned with consumer protection would tread carefully. Perhaps, if the rules were enacted without an arbitration ban, it would turn out that companies would mandate arbitration (though this result is by no means certain, of course). And perhaps arbitration and agency enforcement alone would turn out to be insufficient to effectively enforce the rules. But given the very real costs to consumers of excessive, frivolous or potentially abusive litigation, cabining the litigation risk somewhat — even if at first it meant the regime were tilted slightly too much against enforcement — would be the sensible, cautious and pro-consumer place to start.

____

Whether rooted in a desire to “protect” consumers or not, the FCC’s adoption of a rule prohibiting mandatory arbitration clauses to address privacy complaints in ISP consumer service agreements would impermissibly contravene the FAA. As the Court has made clear, such a provision would “‘stand[] as an obstacle to the accomplishment and execution of the full purposes and objectives of Congress’ embodied in the Federal Arbitration Act.” And not only would such a rule tend to clog the courts in contravention of the FAA’s objectives, it would do so without apparent benefit to consumers. Even if such a rule wouldn’t effectively be invalidated by the FAA, the Commission should firmly reject it anyway: A rule that operates primarily to enrich class action attorneys at the expense of their clients has no place in an agency charged with protecting the public interest.

Next week the FCC is slated to vote on the second iteration of Chairman Wheeler’s proposed broadband privacy rules. Of course, as has become all too common, none of us outside the Commission has actually seen the proposal. But earlier this month Chairman Wheeler released a Fact Sheet that suggests some of the ways it would update the rules he initially proposed.

According to the Fact Sheet, the new proposed rules are

designed to evolve with changing technologies and encourage innovation, and are in harmony with other key privacy frameworks and principles — including those outlined by the Federal Trade Commission and the Administration’s Consumer Privacy Bill of Rights.

Unfortunately, the Chairman’s proposal appears to fall short of the mark on both counts.

As I discuss in detail in a letter filed with the Commission yesterday, despite the Chairman’s rhetoric, the rules described in the Fact Sheet fail to align with the FTC’s approach to privacy regulation embodied in its 2012 Privacy Report in at least two key ways:

  • First, the Fact Sheet significantly expands the scope of information that would be considered “sensitive” beyond that contemplated by the FTC. That, in turn, would impose onerous and unnecessary consumer consent obligations on commonplace uses of data, undermining consumer welfare, depriving consumers of information and access to new products and services, and restricting competition.
  • Second, unlike the FTC’s framework, the proposal described by the Fact Sheet ignores the crucial role of “context” in determining the appropriate level of consumer choice before affected companies may use consumer data. Instead, the Fact Sheet takes a rigid, acontextual approach that would stifle innovation and harm consumers.

The Chairman’s proposal moves far beyond the FTC’s definition of “sensitive” information requiring “opt-in” consent

The FTC’s privacy guidance is, in its design at least, appropriately flexible, aimed at balancing the immense benefits of information flows with sensible consumer protections. Thus it eschews an “inflexible list of specific practices” that would automatically trigger onerous consent obligations and “risk[] undermining companies’ incentives to innovate and develop new products and services….”

Under the FTC’s regime, depending on the context in which it is used (on which see the next section, below), the sensitivity of data delineates the difference between data uses that require “express affirmative” (opt-in) consent and those that do not (requiring only “other protections” short of opt-in consent — e.g., opt-out).

Because the distinction is so important — because opt-in consent is much more likely to staunch data flows — the FTC endeavors to provide guidance as to what data should be considered sensitive, and to cabin the scope of activities requiring opt-in consent. Thus, the FTC explains that “information about children, financial and health information, Social Security numbers, and precise geolocation data [should be treated as] sensitive.” But beyond those instances, the FTC doesn’t consider any other type of data as inherently sensitive.

By contrast, and without explanation, Chairman Wheeler’s Fact Sheet significantly expands what constitutes “sensitive” information requiring “opt-in” consent by adding “web browsing history,” “app usage history,” and “the content of communications” to the list of categories of data deemed sensitive in all cases.

By treating some of the most common and important categories of data as always “sensitive,” and by making the sensitivity of data the sole determinant for opt-in consent, the Chairman’s proposal would make it almost impossible for ISPs to make routine (to say nothing of innovative), appropriate, and productive uses of data comparable to those undertaken by virtually every major Internet company.  This goes well beyond anything contemplated by the FTC — with no evidence of any corresponding benefit to consumers and with obvious harm to competition, innovation, and the overall economy online.

And because the Chairman’s proposal would impose these inappropriate and costly restrictions only on ISPs, it would create a barrier to competition by ISPs in other platform markets, without offering a defensible consumer protection rationale to justify either the disparate treatment or the restriction on competition.

As Fred Cate and Michael Staten have explained,

“Opt-in” offers no greater privacy protection than allowing consumers to “opt-out”…, yet it imposes significantly higher costs on consumers, businesses, and the economy.

Not surprisingly, these costs fall disproportionately on the relatively poor and the less technology-literate. In the former case, opt-in requirements may deter companies from offering services at all, even to people who would make a very different trade-off between privacy and monetary price. In the latter case, because an initial decision to opt-in must be taken in relative ignorance, users without much experience to guide their decisions will face effectively higher decision-making costs than more knowledgeable users.

The Chairman’s proposal ignores the central role of context in the FTC’s privacy framework

In part for these reasons, central to the FTC’s more flexible framework is the establishment of a sort of “safe harbor” for data uses where the benefits clearly exceed the costs and consumer consent may be inferred:

Companies do not need to provide choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer….

Thus for many straightforward uses of data, the “context of the transaction,” not the asserted “sensitivity” of the underlying data, is the threshold question in evaluating the need for consumer choice in the FTC’s framework.

Chairman Wheeler’s Fact Sheet, by contrast, ignores this central role of context in its analysis. Instead, it focuses solely on data sensitivity, claiming that doing so is “in line with customer expectations.”

But this is inconsistent with the FTC’s approach.

In fact, the FTC’s framework explicitly rejects a pure “consumer expectations” standard:

Rather than relying solely upon the inherently subjective test of consumer expectations, the… standard focuses on more objective factors related to the consumer’s relationship with a business.

And while everyone agrees that sensitivity is a key part of pegging privacy regulation to actual consumer and corporate relationships, the FTC also recognizes that the importance of the sensitivity of the underlying data varies with the context in which it is used. Or, in the words of the White House’s 2012 Consumer Data Privacy in a Networked World Report (introducing its Consumer Privacy Bill of Rights), “[c]ontext should shape the balance and relative emphasis of particular principles” guiding the regulation of privacy.

By contrast, Chairman Wheeler’s “sensitivity-determines-consumer-expectations” framing is a transparent attempt to claim fealty to the FTC’s (and the Administration’s) privacy standards while actually implementing a privacy regime that is flatly inconsistent with them.

The FTC’s approach isn’t perfect, but that’s no excuse to double down on its failings

The FTC’s privacy guidance, and even more so its privacy enforcement practices under Section 5, are far from perfect. The FTC should be commended for its acknowledgement that consumers’ privacy preferences and companies’ uses of data will change over time, and that there are trade-offs inherent in imposing any constraints on the flow of information. But even the FTC fails to actually assess the magnitude of the costs and benefits of, and the deep complexities involved in, the trade-off, and puts an unjustified thumb on the scale in favor of limiting data use.  

But that’s no excuse for Chairman Wheeler to ignore what the FTC gets right, and to double down on its failings. Based on the Fact Sheet (and the initial NPRM), it’s a virtual certainty that the Chairman’s proposal doesn’t heed the FTC’s refreshing call for humility and flexibility regarding the application of privacy rules to ISPs (and other Internet platforms):

These are complex and rapidly evolving areas, and more work should be done to learn about the practices of all large platform providers, their technical capabilities with respect to consumer data, and their current and expected uses of such data.

The rhetoric of the Chairman’s Fact Sheet is correct: the FCC should in fact conform its approach to privacy to the framework established by the FTC. Unfortunately, the reality of the Fact Sheet simply doesn’t comport with its rhetoric.

As the FCC’s vote on the Chairman’s proposal rapidly nears, and in light of its significant defects, we can only hope that the rest of the Commission refrains from reflexively adopting the proposed regime, and works to ensure that these problematic deviations from the FTC’s framework are addressed before moving forward.

Section 5(a)(2) of the Federal Trade Commission (FTC) Act authorizes the FTC to “prevent persons, partnerships, or corporations, except . . . common carriers subject to the Acts to regulate commerce . . . from using unfair methods of competition in or affecting commerce and unfair or deceptive acts or practices in or affecting commerce.”  On August 29, in FTC v. AT&T, the Ninth Circuit issued a decision that exempts non-common carrier data services from FTC jurisdiction merely because they are offered by a company that has common carrier status.  This case involved an FTC allegation that AT&T had “throttled” data (slowed down Internet service) for “unlimited mobile data” customers without adequate consent or disclosures, in violation of Section 5 of the FTC Act.  The FTC had claimed that although AT&T’s mobile wireless voice services were a common carrier service, the company’s mobile wireless data services were not, and, thus, were subject to FTC oversight.  Reversing a federal district court’s refusal to grant AT&T’s motion to dismiss, the Ninth Circuit concluded that “when Congress used the term ‘common carrier’ in the FTC Act, [there is no indication] it could only have meant ‘common carrier to the extent engaged in common carrier activity.’”  The Ninth Circuit therefore determined that “a literal reading of the words Congress selected simply does not comport with [the FTC’s] activity-based approach.”  The FTC’s pending case against AT&T in the Northern District of California (which is within the Ninth Circuit) regarding alleged unfair and deceptive advertising of satellite services by AT&T subsidiary DIRECTV (see here) could be affected by this decision.

The Ninth Circuit’s AT&T holding threatens to further extend the FCC’s jurisdictional reach at the expense of the FTC.  It comes on the heels of the divided D.C. Circuit’s benighted and ill-reasoned decision (see here) upholding the FCC’s “Open Internet Order,” including its decision to reclassify Internet broadband service as a common carrier service.  That decision subjects broadband service to heavy-handed and costly FCC “consumer protection” regulation, including in the area of privacy.  The FCC’s overly intrusive approach stands in marked contrast to the economic efficiency considerations (albeit not always perfectly applied) that underlie the FTC’s consumer protection mode of analysis.  As I explained in a May 2015 Heritage Foundation Legal Memorandum, the FTC’s highly structured, analytic, fact-based methodology, combined with its vast experience in privacy and data security investigations, makes it a far better candidate than the FCC to address competition and consumer protection problems in the area of broadband.

I argued in this space in March 2016 that, should the D.C. Circuit uphold the FCC’s Open Internet Order, Congress should carefully consider whether to strip the FCC of regulatory authority in this area (including, of course, privacy practices) and reassign it to the FTC.  The D.C. Circuit’s decision upholding that Order, combined with the Ninth Circuit’s latest ruling, makes the case for potential action by the next Congress even more urgent.

While it is at it, the next Congress should also weigh whether to repeal the FTC’s common carrier exemption, as well as all special exemptions for specified categories of institutions, such as banks, savings and loans, and federal credit unions (see here).  In so doing, Congress might also do away with the Consumer Financial Protection Bureau, an unaccountable bureaucracy whose consumer protection regulatory responsibilities should cease (see my February 2016 Heritage Legal Memorandum here).

Finally, as Heritage Foundation scholars have urged, Congress should look into enacting additional regulatory reform legislation, such as requiring congressional approval of new major regulations issued by agencies (including financial services regulators) and subjecting “independent” agencies (including the FCC) to executive branch regulatory review.

That’s enough for now.  Stay tuned.

Yesterday, the International Center for Law & Economics filed reply comments in the docket of the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As in our initial comments, we drew on the economic scholarship of multi-sided platforms to argue that the FCC failed to consider the ways in which asymmetric regulation will ultimately have negative competitive effects and harm consumers. The FCC and some critics claimed that ISPs are gatekeepers deserving of special regulation — a case that both the FCC and the critics failed to make.

The NPRM fails adequately to address these issues, to make out an adequate case for the proposed regulation, or to justify treating ISPs differently than other companies that collect and use data.

Perhaps most important, the NPRM also fails to acknowledge or adequately assess the actual market in which the use of consumer data arises: the advertising market. Whether intentionally or not, this NPRM is not primarily about regulating consumer privacy; it is about keeping ISPs out of the advertising business. But in this market, ISPs are upstarts challenging the dominant position of firms like Google and Facebook.

Placing onerous restrictions upon ISPs alone results in either under-regulation of edge providers or over-regulation of ISPs within the advertising market, without any clear justification as to why consumer privacy takes on different qualities for each type of advertising platform. But the proper method of regulating privacy is, in fact, the course that both the FTC and the FCC have historically taken, and which has yielded a stable, evenly administered regime: case-by-case examination of actual privacy harms and a minimalist approach to ex ante, proscriptive regulations.

We also responded to particular claims made by New America’s Open Technology Institute about the expectations of consumers regarding data collection online, the level of competitiveness in the marketplace, and the technical realities that differentiate ISPs from edge providers.

OTI attempts to substitute its own judgment of what consumers (should) believe about their data for that of consumers themselves. And in the process it posits a “context” that can and will never shift as new technology and new opportunities emerge. Such a view of consumer expectations is flatly anti-innovation and decidedly anti-consumer, consigning broadband users to yesterday’s technology and business models. The rule OTI supports could effectively forbid broadband providers from offering consumers the option to trade data for lower prices.

Our reply comments went on to point out that much of the basis upon which the NPRM relies — an alleged lack of adequate competition among ISPs — was actually a “manufactured scarcity” based upon the Commission’s failure to properly analyze the relevant markets.

The Commission’s claim that ISPs, uniquely among companies in the modern data economy, face insufficient competition in the broadband market is… insufficiently supported. The flawed manner in which the Commission has defined the purported relevant market for broadband distorts the analysis upon which the proposed rules are based, and manufactures a false scarcity in order to justify unduly burdensome privacy regulations for ISPs. Even the Commission’s own data suggest that consumer choice is alive and well in broadband… The reality is that there is in fact enough competition in the broadband market to offer privacy-sensitive consumers options if they are ever faced with what they view as overly invasive broadband business practices. According to the Commission, as of December 2014, 74% of American homes had a choice of two or more wired ISPs delivering download speeds of at least 10 Mbps, and 88% had a choice of at least two providers of 3 Mbps service. Meanwhile, 93% of consumers have access to at least three mobile broadband providers. Looking forward, consumer choice at all download speeds is increasing at rapid rates due to extensive network upgrades and new entry in a highly dynamic market.

Finally, we rebutted the contention that predictive analytics was a magical tool that would enable ISPs to dominate information gathering and would, consequently, lead to consumer harms — even where ISPs had access only to seemingly trivial data about users.

Some comments in support of the proposed rules attempt to cast ISPs as all-powerful by virtue of their access to apparently trivial data — IP addresses, access timing, computer ports, etc. — because of the power of predictive analytics. These commenters assert that the possibility of predictive analytics coupled with a large data set undermines research demonstrating that ISPs, thanks to increasing encryption, have access to no better-quality data, and probably lower-quality data, than edge providers themselves have.

But this is a curious bit of reasoning. It essentially amounts to the idea that, not only should consumers be permitted to control with whom their data is shared, but that all other parties online should be proscribed from making their own independent observations about consumers. Such a rule would be akin to telling supermarkets that they are not entitled to observe traffic patterns in their stores in order to place particular products in relatively more advantageous places, for example. But the reality is that most data is noise; simply having more of it is not necessarily a boon, and predictive analytics is far from a panacea. In fact, the insights gained from extensive data collection are frequently useless when examining very large data sets, and are better employed by single firms answering particular questions about their users and products.

Our full reply comments are available here.

Thanks to Geoff for the introduction. I look forward to posting a few things over the summer.

I’d like to begin by discussing Geoff’s post on the pending legislative proposals designed to combat strategic abuse of drug safety regulations to prevent generic competition. Specifically, I’d like to address the economic incentive structure that is in effect in this highly regulated market.

Like many others, I first noticed the abuse of drug safety regulations to prevent competition when Turing Pharmaceuticals—then led by now-infamous CEO Martin Shkreli—acquired the manufacturing rights for the anti-parasitic drug Daraprim, and raised the price of the drug by over 5,000%. The result was a drug that cost $750 per tablet. Daraprim (pyrimethamine) is used to combat malaria and Toxoplasma gondii infections in immune-compromised patients, especially those with HIV. The World Health Organization includes Daraprim on its “List of Essential Medicines” as a medicine important to basic health systems. After the huge price hike, the drug was effectively out of reach for many insurance plans and uninsured patients who needed it for the six-to-eight-week course of treatment for Toxoplasma gondii infections.

It’s not unusual for drugs to sell at huge multiples above their manufacturing cost. Indeed, a primary purpose of patent law is to allow drug companies to earn sufficient profits to engage in the expensive and risky business of developing new drugs. But Daraprim was first sold in 1953 and thus has been off patent for decades. With no intellectual property protection, Daraprim should, theoretically, now be available from generic drug manufacturers for only a little above cost. Indeed, this is what we see in the rest of the world, where Daraprim sells for very low prices. The per-tablet price is 3 rupees (US$0.04) in India, R$0.07 (US$0.02) in Brazil, US$0.18 in Australia, and US$0.66 in the UK.

So what gives in the U.S.? Or rather, what does not give? What in our system of drug distribution has gotten stuck and is preventing generic competition from swooping in to compete down the high price of off-patent drugs like Daraprim? The answer is not market failure, but rather regulatory failure, as Geoff noted in his post. While generics would love to enter a market where a drug is currently selling for high profits, they cannot do so without getting FDA approval for their generic version of the drug at issue. To get approval, a generic simply has to file an Abbreviated New Drug Application (“ANDA”) that shows that its drug is equivalent to the branded drug with which it wants to compete. There’s no need for the generic to repeat the safety and efficacy tests that the brand manufacturer originally conducted. To test for equivalence, the generic needs samples of the brand drug. Without those samples, the generic cannot meet its burden of showing equivalence. This is where the strategic use of regulation can come into play.

Geoff’s post explains the potential abuse of Risk Evaluation and Mitigation Strategies (“REMS”). REMS are put in place to require certain safety steps (like testing a woman for pregnancy before prescribing a drug that can cause birth defects) or to restrict the distribution channels for dangerous or addictive drugs. As Geoff points out, there is evidence that a few brand name manufacturers have engaged in bad-faith refusals to provide samples using the excuse of REMS or restricted distribution programs to (1) deny requests for samples, (2) prevent generic manufacturers from buying samples from resellers, and (3) deny generics whose drugs have won approval access to the REMS system that is required for generics to distribute their drugs. Once the FDA has certified that a generic manufacturer can safely handle the drug at issue, there is no legitimate basis for the owners of brand name drugs to deny samples to the generic maker. Expressed worries about liability from entering joint REMS programs with generics also ring hollow, for the most part, and would be ameliorated by the pending legislation.

It’s important to note that this pricing situation is unique to drugs because of the regulatory framework surrounding drug manufacture and distribution. If a manufacturer of, say, an off-patent vacuum cleaner wants to prevent competitors from copying its vacuum cleaner design, it is unlikely to be successful. Even if the original manufacturer refuses to sell any vacuum cleaners to a competitor, and instructs its retailers not to sell either, this will be very difficult to monitor and enforce. Moreover, because of an unrestricted resale market, a competitor would inevitably be able to obtain samples of the vacuum cleaner it wishes to copy. Only patent law can successfully protect against the copying of a product sold to the general public, and when the patent expires, so too will the ability to prevent copying.

Drugs are different. The only way a consumer can resell prescription drugs is by breaking the law. Pills bought from an illegal secondary market would be useless to generics for purposes of FDA approval anyway, because the chain of custody would not exist to prove that the samples are the real thing. This means generics need to get samples from the authorized manufacturer or distribution company. When a drug is subject to a REMS-required restricted distribution program, it is even more difficult, if not impossible, for a generic maker to get samples of the drugs for which it wants to make generic versions. Restricted distribution programs, which are used for dangerous or addictive drugs, by design very tightly control the chain of distribution so that the drugs go only to patients with proper prescriptions from authorized doctors.

A troubling trend has arisen recently in which drug owners put their branded drugs into restricted distribution programs not because of any FDA REMS requirement, but instead as a method to prevent generics from obtaining samples and making generic versions of the drugs. This is the strategy that Turing used before it raised prices over 5,000% on Daraprim. And Turing isn’t the only company to use this strategy; it is being emulated by others, although perhaps not so conspicuously. For instance, in 2014 Valeant Pharmaceuticals, with the help of the hedge fund Pershing Square, attempted a hostile takeover of Allergan Pharmaceuticals. Although that bid ultimately failed, Valeant went on to acquire the rights to other off-patent drugs, adopted restricted distribution programs, and raised prices substantially, including raising the prices of two life-saving heart drugs by 212% and 525%, respectively. Others have followed suit.

A key component of the strategy to profit from hiking prices on off-patent drugs while avoiding competition from generics is to select drugs that do not currently have generic competitors. Sometimes this is because a drug has recently come off patent, and sometimes it is because the drug is for a small patient population, and thus generics haven’t bothered to enter the market given that brand name manufacturers generally drop their prices to close to cost after the drug comes off patent. But with the strategic control of samples and refusals to allow generics to enter REMS programs, the (often new) owners of the brand name drugs seek to prevent the generic competition that we count on to make products cheap and plentiful once their patent protection expires.

Most brand name drug makers do not engage in withholding samples from generics and abusing restricted distribution and REMS programs. But the few that do cost patients and insurers dearly for important medicines that should be much cheaper once they go off patent. More troubling still is the recent strategy of taking drugs that have been off patent and cheap for years, and abusing the regulatory regime to raise prices and block competition. This growing trend of abusing restricted distribution and REMS to facilitate rent extraction from drug purchasers needs to be corrected.

Two bills addressing this issue are pending in Congress. Both bills (1) require drug companies to provide samples to generics after the FDA has certified the generic, (2) require drug companies to enter into shared REMS programs with generics, (3) allow generics to set up their own REMS compliant systems, and (4) exempt drug companies from liability for sharing products and REMS-compliant systems with generic companies in accordance with the steps set out in the bills. When it comes to remedies, however, the Senate version is significantly better. The penalties provided in the House bill are both vague and overly broad. The bill provides for treble damages and costs against the drug company “of the kind described in section 4(a) of the Clayton Act.” Not only is the application of the Clayton Act unclear in the context of the heavily regulated market for drugs (see Trinko), but treble damages may over-deter reasonably restrictive behavior by drug companies when it comes to distributing dangerous drugs.

The remedies in the Senate version are very well crafted to deter rent-seeking behavior without overly deterring reasonable behavior. The remedial scheme is particularly good because it punishes most severely those companies that attempt to make exorbitant profits on drugs by denying generic entry. The Senate version provides, as a remedy for unreasonable delay, that the plaintiff shall be awarded attorneys’ fees, costs, and the defending drug company’s profits on the drug at issue during the period of unreasonable delay. This means that a brand name drug company that sells an old drug at a low price and delays sharing only because of honest concern about the safety standards of a particular generic company will not face terribly high damages if it is found to have acted unreasonably. On the other hand, a company that sends the price of an off-patent drug soaring and then attempts to block generic entry will know that it can lose all of its rent-seeking profits, plus the cost of the victorious generic company’s attorneys’ fees. This vastly reduces the incentive for the company owning the brand name drug to raise prices and keep competitors out. It likewise greatly increases the incentive of a generic company to enter the market and, if it is unreasonably blocked, to file a civil action that would transfer the excess profits to the generic. This is a rather elegant fix to the regulatory gaming that has become an increasing problem in this area. The balancing of interests and incentives in the Senate bill should leave many members of Congress comfortable supporting it.

Brand drug manufacturers are no strangers to antitrust accusations when it comes to their complicated relationship with generic competitors — most obviously with respect to reverse payment settlements. But the massive and massively complex regulatory scheme under which drugs are regulated has provided other opportunities for regulatory legerdemain with potentially anticompetitive effect, as well.

In particular, some FTC Commissioners have raised concerns that brand drug companies have been taking advantage of an FDA drug safety program — the Risk Evaluation and Mitigation Strategies program, or “REMS” — to delay or prevent generic entry.

Drugs subject to a REMS restricted distribution program are difficult to obtain through market channels and not otherwise readily available, even for would-be generic manufacturers that need samples in order to perform the tests required to receive FDA approval to market their products. REMS allows (requires, in fact) brand manufacturers to restrict the distribution of certain drugs that present safety or abuse risks, creating an opportunity for branded drug manufacturers to take advantage of imprecise regulatory requirements by inappropriately limiting access by generic manufacturers.

The FTC has not (yet) brought an enforcement action, but it has opened several investigations, and filed an amicus brief in a private-party litigation. Generic drug companies have filed several antitrust claims against branded drug companies and raised concerns with the FDA.

The problem, however, is that even if these companies are using REMS to delay generics, such a practice makes for a terrible antitrust case. Not only does the existence of a regulatory scheme arguably set Trinko squarely in the way of a successful antitrust case, but the sort of refusal to deal claims at issue here (as in Trinko) are rightly difficult to win because, as the DOJ’s Section 2 Report notes, “there likely are few circumstances where forced sharing would help consumers in the long run.”

But just because there isn’t a viable antitrust case doesn’t mean there isn’t still a competition problem. In this case, however, it’s a problem of regulatory failure. Companies rationally take advantage of poorly written federal laws and regulations in order to tilt the market to their own advantage. That is no less problematic for the market, but its solution is much more straightforward, if politically more difficult.

Thus it’s heartening to see that Senator Mike Lee (R-UT), along with three of his colleagues (Patrick Leahy (D-VT), Chuck Grassley (R-IA), and Amy Klobuchar (D-MN)), has proposed a novel but efficient way to correct these bureaucracy-generated distortions in the pharmaceutical market without resorting to the “blunt instrument” of antitrust law. As the bill notes:

While the antitrust laws may address actions by license holders who impede the prompt negotiation and development on commercially reasonable terms of a single, shared system of elements to assure safe use, a more tailored legal pathway would help ensure that license holders negotiate such agreements in good faith and in a timely manner, facilitating competition in the marketplace for drugs and biological products.

The legislative solution put forward by the Creating and Restoring Equal Access to Equivalent Samples (CREATES) Act of 2016 targets the right culprit: the poor regulatory drafting that permits possibly anticompetitive conduct to take place. Moreover, the bill refrains from creating a per se rule, instead implementing several features that should still enable brand manufacturers to legitimately restrict access to drug samples when appropriate.

In essence, Senator Lee’s bill introduces a third party (in this case, the Secretary of Health and Human Services) who is capable of determining whether an eligible generic manufacturer is able to comply with REMS restrictions — thus bypassing any bias on the part of the brand manufacturer. Where the Secretary determines that a generic firm meets the REMS requirements, the bill also creates a narrow cause of action for this narrow class of plaintiffs, allowing suits against certain brand manufacturers who — despite the prohibition on using REMS to delay generics — nevertheless misuse the process to delay competitive entry.

Background on REMS

The REMS program was introduced as part of the Food and Drug Administration Amendments Act of 2007 (FDAAA). Following the withdrawal of Vioxx, an arthritis pain reliever, from the market because of a post-approval linkage of the drug to heart attacks, the FDA was under considerable fire, and there was a serious risk that fewer and fewer net beneficial drugs would be approved. The REMS program was introduced by Congress as a mechanism to ensure that society could reap the benefits from particularly risky drugs and biologics — rather than the FDA preventing them from entering the market at all. It accomplishes this by ensuring (among other things) that brands and generics adopt appropriate safety protocols for distribution and use of drugs — particularly when a drug has the potential to cause serious side effects, or has an unusually high abuse profile.

The FDA-determined REMS protocols can range from the simple (e.g., requiring a medication guide or a package insert about potential risks) to the more burdensome (including restrictions on a drug’s sale and distribution, or what the FDA calls “Elements to Assure Safe Use” (“ETASU”)). Most relevant here, the REMS process seems to allow brands considerable leeway to determine whether generic manufacturers are compliant or able to comply with ETASUs. Given this discretion, it is no surprise that brand manufacturers may be tempted to block competition by citing “safety concerns.”

Although the FDA specifically forbids the use of REMS to block lower-cost, generic alternatives from entering the market (of course), almost immediately following the law’s enactment, certain less-scrupulous branded pharmaceutical companies began using REMS for just that purpose (also, of course).

REMS abuse

To enter pharmaceutical markets that no longer have any underlying IP protection, manufacturers must submit to the FDA an Abbreviated New Drug Application (ANDA) for a generic, or an Abbreviated Biologic License Application (ABLA) for a biosimilar, of the brand drug. The purpose is to prove to the FDA that the competing product is as safe and effective as the branded reference product. In order to perform the testing sufficient to prove efficacy and safety, generic and biosimilar drug manufacturers must acquire a sample (many samples, in fact) of the reference product they are trying to replicate.

For the narrow class of dangerous or highly abused drugs, generic manufacturers are forced to comply with any REMS restrictions placed upon the brand manufacturer — even when the terms require the brand manufacturer to tightly control the distribution of its product.

And therein lies the problem. Because the brand manufacturer controls access to its products, it can refuse to provide the needed samples, using REMS as an excuse. In theory, of course, a brand manufacturer may in certain cases be justified in refusing to distribute samples of its product; some would-be generic manufacturers may well not meet the requisite standards for safety and security.

But in practice it turns out that most of the (known) examples of brands refusing to provide samples happen across the board — they preclude essentially all generic competition, not just the few firms that might have insufficient safeguards. It’s extremely difficult to justify such refusals on the basis of a generic manufacturer’s suitability when all would-be generic competitors are denied access, including well-established, high-quality manufacturers.

But, for a few brand manufacturers, at least, that seems to be how the REMS program is implemented. Thus, for example, Jon Haas, director of patient access at Turing Pharmaceuticals, referred to the practice of denying generics samples this way:

Most likely I would block that purchase… We spent a lot of money for this drug. We would like to do our best to avoid generic competition. It’s inevitable. They seem to figure out a way [to make generics], no matter what. But I’m certainly not going to make it easier for them. We’re spending millions and millions in research to find a better Daraprim, if you will.

As currently drafted, the REMS program gives branded manufacturers the ability to limit competition by stringing along negotiations for product samples for months, if not years. Although access to a few samples for testing is seemingly such a small, trivial thing, the ability to block this access allows a brand manufacturer to limit competition (at least from bioequivalent and generic drugs; obviously competition between competing branded drugs remains).

And even if a generic competitor manages to get ahold of samples, the law creates an additional wrinkle by imposing a requirement that brand and generic manufacturers enter into a single shared REMS plan for bioequivalent and generic drugs. But negotiating the particulars of the single, shared program can drag on for years. Consequently, even when a generic manufacturer has received the necessary samples, performed the requisite testing, and been approved by the FDA to sell a competing drug, it still may effectively be barred from entering the marketplace because of REMS.

The number of drugs covered by REMS is small: fewer than 100 in a universe of several thousand FDA-approved drugs. And the number of these alleged to be subject to abuse is much smaller still. Nonetheless, abuse of this regulation by certain brand manufacturers has likely limited competition and increased prices.

Antitrust is not the answer

Whether the complex, underlying regulatory scheme that allocates the relative rights of brands and generics — and that balances safety against access — gets the balance correct or not is an open question, to be sure. But given the regulatory framework we have and the perceived need for some sort of safety controls around access to samples and for shared REMS plans, the law should at least work to do what it intends, without creating an opportunity for harmful manipulation. Yet it appears that the ambiguity of the current law has allowed some brand manufacturers to exploit these safety protections to limit competition.

As noted above, some are quite keen to make this an antitrust issue. But, as also noted, antitrust is a poor fit for handling such abuses.

First, antitrust law has an uneasy relationship with other regulatory schemes. Not least because of Trinko, it is a tough case to make that brand manufacturers are violating antitrust laws when they rely upon legal obligations under a safety program that is essentially designed to limit generic entry on safety grounds. The issue is all the more properly removed from the realm of antitrust enforcement given that the problem is actually one of regulatory failure, not market failure.

Second, antitrust law doesn’t impose a duty to deal with rivals except in very limited circumstances. In Trinko, for example, the Court rejected the invitation to extend a duty to deal to situations where an existing, voluntary economic relationship wasn’t terminated. By definition this is unlikely to be the case here where the alleged refusal to deal is what prevents the generic from entering the market in the first place. The logic behind Trinko (and a host of other cases that have limited competitors’ obligations to assist their rivals) was to restrict duty to deal cases to those rare circumstances where it reliably leads to long-term competitive harm — not where it amounts to a perfectly legitimate effort to compete without giving rivals a leg-up.

But antitrust is such a powerful tool and such a flexible “catch-all” regulation, that there are always efforts to thwart reasonable limits on its use. As several of us at TOTM have written about at length in the past, former FTC Commissioner Rosch and former FTC Chairman Leibowitz were vocal proponents of using Section 5 of the FTC Act to circumvent sensible judicial limits on making out and winning antitrust claims, arguing that the limits were meant only for private plaintiffs — not (implicitly infallible) government enforcers. Although no one at the FTC has yet (publicly) suggested bringing a REMS case as a standalone Section 5 case, such a case would be consistent with the sorts of theories that animated past standalone Section 5 cases.

Again, this approach serves as an end-run around the reasonable judicial constraints that evolved as a result of judges actually examining the facts of individual cases over time, and is a misguided way of dealing with what is, after all, fundamentally a regulatory design problem.

The CREATES Act

Senator Lee’s bill, on the other hand, aims to solve the problem with a more straightforward approach by improving the existing regulatory mechanism and by adding a limited judicial remedy to incentivize compliance under the amended regulatory scheme. In summary:

  • The bill creates a cause of action for a refusal to deal only where plaintiff can prove, by a preponderance of the evidence, that certain well-defined conditions are met.
  • For samples, if a drug is not covered by a REMS, or if the generic manufacturer is specifically authorized, then the generic can sue if it doesn’t receive sufficient quantities of samples on commercially reasonable terms. This is not a per se offense subject to outsized antitrust damages. Instead, the remedy is a limited injunction ensuring the sale of samples on commercially reasonable terms, reasonable attorneys’ fees, and a monetary fine limited to revenue earned from sale of the drug during the refusal period.
  • The bill also gives a brand manufacturer an affirmative defense if it can prove by a preponderance of the evidence that, regardless of its own refusal to supply them, samples were nevertheless available elsewhere on commercially reasonable terms, or where the brand manufacturer is unable to supply the samples because it does not actually produce or market the drug.
  • In order to deal with the REMS process problems, the bill creates similar rights with similar limitations when the license holders and generics cannot come to an agreement about a shared REMS on commercially reasonable terms within 120 days of first contact by an eligible developer.
  • The bill also explicitly limits brand manufacturers’ liability for claims “arising out of the failure of an [eligible generic manufacturer] to follow adequate safeguards,” thus removing one of the (perfectly legitimate) objections to the bill pressed by brand manufacturers.

The primary remedy is limited, injunctive relief to end the delay. And brands are protected from frivolous litigation by an affirmative defense under which they need only show that the product is available for purchase on reasonable terms elsewhere. Damages are similarly limited and are awarded only if a court finds that the brand manufacturer lacked a legitimate business justification for its conduct (which, under the drug safety regime, means essentially a reasonable belief that its own REMS plan would be violated by dealing with the generic entrant). And monetary damages do not include punitive damages.

Finally, the proposed bill completely avoids the question of whether antitrust laws are applicable, leaving that possibility open to determination by courts — as is appropriate. Moreover, by establishing even more clearly the comprehensive regulatory regime governing potential generic entrants’ access to dangerous drugs, the bill would, given the holding in Trinko, probably make application of antitrust laws here considerably less likely.

Ultimately Senator Lee’s bill is a well-thought-out and targeted fix to an imperfect regulation that seems to be facilitating anticompetitive conduct by a few bad actors. It does so without trampling on the courts’ well-established antitrust jurisprudence, and without imposing excessive cost or risk on the majority of brand manufacturers that behave perfectly appropriately under the law.