
Today ICLE released a white paper entitled A critical assessment of the latest charge of Google's anticompetitive bias from Yelp and Tim Wu.

The paper is a comprehensive response to a study by Michael Luca, Timothy Wu, Sebastian Couvidat, Daniel Frank, & William Seltzer, entitled Is Google degrading search? Consumer harm from Universal Search.

The Wu et al. paper will be one of the main topics of discussion at today's Capitol Forum and George Washington Institute of Public Policy event on Dominant Platforms Under the Microscope: Policy Approaches in the US and EU, at which I will be speaking — along with a host of luminaries including Josh Wright, Jonathan Kanter, Allen Grunes, Catherine Tucker, and Michael Luca, one of the authors of the Universal Search study.

Follow the link above to register — the event starts at noon today at the National Press Club.

Meanwhile, here’s a brief description of our paper:

Late last year, Tim Wu of Columbia Law School (and now the White House Office of Management and Budget), Michael Luca of Harvard Business School (and a consultant for Yelp), and a group of Yelp data scientists released a study claiming that Google has been purposefully degrading search results from its more-specialized competitors in the area of local search. The authors’ claim is that Google is leveraging its dominant position in general search to thwart competition from specialized search engines by favoring its own, less-popular, less-relevant results over those of its competitors:

To improve the popularity of its specialized search features, Google has used the power of its dominant general search engine. The primary means for doing so is what is called the “universal search” or the “OneBox.”

This is not a new claim, and researchers have been attempting (and failing) to prove Google’s “bias” for some time. Likewise, these critics have drawn consistent policy conclusions from their claims, asserting that antitrust violations lie at the heart of the perceived bias. But the studies are systematically marred by questionable methodology and bad economics.

This latest study by Tim Wu, along with a cadre of researchers employed by Yelp (one of Google's competitors and one of its chief antitrust provocateurs), fares no better, employing slightly different but equally questionable methodology, bad economics, and a smattering of new, but weak, social science. (For a thorough criticism of the inherent weaknesses of Wu et al.'s basic social science methodology, see Miguel de la Mano, Stephen Lewis, and Andrew Leyden, Focus on the Evidence: A Brief Rebuttal of Wu, Luca, et al. (2016), available here.)

The basic thesis of the study is that Google purposefully degrades its local searches (e.g., for restaurants, hotels, services, etc.) to the detriment of its specialized search competitors, local businesses, consumers, and even Google’s bottom line — and that this is an actionable antitrust violation.

But in fact the study shows nothing of the kind. Instead, the study is marred by methodological problems that, in the first instance, make it impossible to draw any reliable conclusions. Nor does the study show that Google’s conduct creates any antitrust-relevant problems. Rather, the construction of the study and the analysis of its results reflect a superficial and inherently biased conception of consumer welfare that completely undermines the study’s purported legal and economic conclusions.

Read the whole thing here.

Since the European Commission (EC) announced its first inquiry into Google's business practices in 2010, the company has been the subject of lengthy investigations by courts and competition agencies around the globe. Regulatory authorities in the United States, France, the United Kingdom, Canada, Brazil, and South Korea have all investigated, and ultimately rejected, similar antitrust claims.

And yet the EC marches on, bolstered by Google’s myriad competitors, who continue to agitate for further investigations and enforcement actions, even as we — companies and consumers alike — enjoy the benefits of an increasingly dynamic online marketplace.

Indeed, while the EC has spent more than half a decade casting about for some plausible antitrust claim, the online economy has thundered ahead. Since 2010, Facebook has tripled its active users and multiplied its revenue ninefold; the number of apps available in the Amazon app store has grown from fewer than 4,000 to over 400,000 today; and there are almost 1.5 billion more Internet users globally than there were in 2010. And consumers are increasingly using new and different ways to search for information: Amazon's Alexa, Apple's Siri, Microsoft's Cortana, and Facebook's Messenger are a few of the many new innovations challenging traditional search engines.

Advertisers have adapted to this evolution, moving increasingly online, and from search to display ads as mobile adoption has skyrocketed. Social networks like Twitter and Snapchat have come into their own, competing for the same (and ever-increasing) advertising dollars. For marketers, advertising on social networks is now just as important as advertising in search. No wonder e-commerce sales have more than doubled, to almost $2 trillion worldwide; for the first time, consumers purchased more online than in stores this past year.

To paraphrase Louis C.K.: Everything is amazing — and no one at the European Commission is happy.

The EC’s market definition is fatally flawed

Like its previous claims, the Commission’s most recent charges are rooted in the assertion that Google abuses its alleged dominance in “general search” advertising to unfairly benefit itself and to monopolize other markets. But European regulators continue to miss the critical paradigm shift among online advertisers and consumers that has upended this stale view of competition on the Internet. The reality is that Google’s competition may not, and need not, look exactly like Google itself, but it is competition nonetheless. And it’s happening in spades.

The key to understanding why the European Commission’s case is fundamentally flawed lies in an examination of how it defines the relevant market. Through a series of economically and factually unjustified assumptions, the Commission defines search as a distinct market in which Google faces limited competition and enjoys an 80% market share. In other words, for the EC, “general search” apparently means only nominal search providers like Google and Bing; it doesn’t mean companies like Amazon, Facebook and Twitter — Google’s biggest competitors.  

But the reality is that "general search" is just one technology among many for serving information and ads to consumers online. Defining the relevant market around the particular mechanism that Google happens to use to match consumers and advertisers ignores the substitutability of other mechanisms that do the same thing but simply aren't called "search."

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive.

Consumers today are increasingly using platforms like Amazon and Facebook as substitutes for the searches they might have run on Google or Bing. "Closed" platforms like the iTunes store and innumerable apps handle copious search traffic but also don't figure in the EC's market calculations. And so-called "dark social" interactions like email, text messages, and IMs drive huge amounts of some of the most valuable traffic on the Internet. This, in turn, has led to a competitive scramble to roll out completely new technologies like chatbots to meet consumers' informational (and merchants' advertising) needs.

Properly construed, Google’s market position is precarious

As with Facebook and Twitter (and practically every other Internet platform), advertising is Google's primary source of revenue. Instead of charging for fancy hardware or offering services to users for a fee, Google offers search, the Android operating system, and a near-endless array of other valuable services for free to users. The company's very existence relies on attracting Internet users and consumers to its properties in order to effectively connect them with advertisers.

But being an online matchmaker is a difficult and competitive enterprise. Among other things, the ability to generate revenue turns crucially on the quality of the match: All else equal, an advertiser interested in selling widgets will pay more for an ad viewed by a user who can be reliably identified as being interested in buying widgets.

Google’s primary mechanism for attracting users to match with advertisers — general search — is substantially about information, not commerce, and the distinction between product and informational searches is crucially important to understanding Google’s market and the surprisingly limited and tenuous market power it possesses.

General informational queries aren’t nearly as valuable to advertisers: Significantly, only about 30 percent of Google’s searches even trigger any advertising at all. Meanwhile, as of 2012, one-third of product searches started on Amazon while only 13% started on a general search engine.

As economist Hal Singer aptly noted in 2012,

[the data] suggest that Google lacks market power in a critical segment of search — namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”

While Google Search clearly offers substantial value to advertisers, its ability to continue to do so is precarious when confronted with the diverse array of competitors that, like Facebook, offer a level of granularity in audience targeting that general search can’t match, or that, like Amazon, systematically offer up the most valuable searchers.

In order to compete in this market — one properly defined to include actual competitors — Google has had to constantly innovate to maintain its position. Unlike a complacent monopolist, it has evolved to meet changing consumer demand, shifting technology and inventive competitors. Thus, Google’s search algorithm has changed substantially over the years to make more effective use of the information available to ensure relevance; search results have evolved to give consumers answers to queries rather than just links, and to provide more-direct access to products and services; and, as users have shifted more and more of their time and attention to mobile devices, search has incorporated more-localized results.

Competitors want a free lunch

Critics complain, nevertheless, that these developments have made it harder, in one way or another, for rivals to compete. And the EC has provided a willing ear. According to Commissioner Vestager last week:

Google has come up with many innovative products that have made a difference to our lives. But that doesn’t give Google the right to deny other companies the chance to compete and innovate. Today, we have further strengthened our case that Google has unduly favoured its own comparison shopping service in its general search result pages…. (Emphasis added).

Implicit in this statement is the remarkable assertion that by favoring its own comparison shopping services, Google “den[ies] other companies the chance to compete and innovate.” Even assuming Google does “favor” its own results, this is an astounding claim.

First, it is not a violation of competition law simply to treat competitors’ offerings differently than one’s own, even for a dominant firm. Instead, conduct must actually exclude competitors from the market, without offering countervailing advantages to consumers. But Google’s conduct is not exclusionary, and there are many benefits to consumers.

As it has from the start of its investigations of Google, the EC begins with a flawed assumption: that Google's competitors both require, and may be entitled to, unfettered access to Google's property in order to compete. But this is patently absurd. Google is not an essential facility: Billions of users reach millions of companies every day through direct browser navigation, apps, email links, review sites and blogs, and countless other means — all without once touching Google.com.

Google Search results do not exclude competitors, whether comparison shopping sites or others. For example, 72% of TripAdvisor’s U.S. traffic comes from search, and almost all of that from organic results; other specialized search sites see similar traffic volumes.

More important, however, in addition to continuing to reach rival sites through Google Search, billions of consumers access rival services directly through their mobile apps. In fact, for Yelp,

Approximately 21 million unique devices accessed Yelp via the mobile app on a monthly average basis in the first quarter of 2016, an increase of 32% compared to the same period in 2015. App users viewed approximately 70% of page views in the first quarter and were more than 10 times as engaged as website users, as measured by number of pages viewed. (Emphasis added).

And a staggering 40 percent of mobile browsing is now happening inside the Facebook app, competing with the browsers and search engines pre-loaded on smartphones.

Millions of consumers also directly navigate to Google’s rivals via their browser by simply typing, for example, “Yelp.com” in their address bar. And as noted above, consumers are increasingly using Google rivals’ new disruptive information engines like Alexa and Siri for their search needs. Even the traditional search engine space is competitive — in fact, according to Wired, as of July 2016:

Microsoft has now captured more than one-third of Internet searches. Microsoft’s transformation from a company that sells boxed software to one that sells services in the cloud is well underway. (Emphasis added).

With such numbers, it’s difficult to see how rivals are being foreclosed from reaching consumers in any meaningful way.

Meanwhile, the benefits to consumers are obvious: Google is directly answering questions for consumers rather than giving them a set of possible links to click through and further search. In some cases its results present entirely new and valuable forms of information (e.g., search trends and structured data); in others they serve to hone searches by suggesting further queries, or to help users determine which organic results (including those of its competitors) may be most useful. And, of course, consumers aren’t forced to endure these innovations if they don’t find them useful, as they can quickly switch to other providers.  

Nostalgia makes for bad regulatory policy

Google is not the unstoppable monopolist of the EU competition regulators' imagining. Rather, it is a continual innovator, forced to adapt to shifting consumer demand, changing technology, and competitive industry dynamics. And, instead of trying to hamstring Google, Google's competitors (and complainants) must innovate as well if they are to survive.

Dominance in technology markets — especially online — has always been ephemeral. Once upon a time, MySpace, AOL, and Yahoo were the dominant Internet platforms. Kodak, once practically synonymous with "instant camera," let the digital revolution pass it by. The invincible Sony Walkman was upended by mp3s and the iPod. Staid, keyboard-operated Blackberries and Nokias simply couldn't compete with app-driven, graphical platforms from Apple and Samsung. Even today, startups like Snapchat, Slack, and Spotify gain massive scale and upend entire industries with innovative new technology that can leave less-nimble incumbents in the dustbin of tech history.

Put differently, companies that innovate are able to thrive, while those that remain dependent on yesterday’s technology and outdated business models usually fail — and deservedly so. It should never be up to regulators to pick winners and losers in a highly dynamic and competitive market, particularly if doing so constrains the market’s very dynamism. As Alfonso Lamadrid has pointed out:

It is companies and not competition enforcers which will strive or fail in the adoption of their business models, and it is therefore companies and not competition enforcers who are to decide on what business models to use. Some will prove successful and others will not; some companies will thrive and some will disappear, but with experimentation with business models, success and failure are and have always been part of the game.

In other words, we should not forget that competition law is, or should be, business-model agnostic, and that regulators are – like anyone else – far from omniscient.

Like every other technology company before them, Google and its competitors must be willing and able to adapt in order to keep up with evolving markets — just as for Lewis Carroll’s Red Queen, “it takes all the running you can do, to keep in the same place.” Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters; companies that build their businesses around Google face a near-constantly evolving Google. In the face of such relentless market dynamism, neither consumers nor firms are well served by regulatory policy rooted in nostalgia.  

Yesterday, the International Center for Law & Economics filed reply comments in the docket of the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As in our initial comments, we drew on the economic scholarship of multi-sided platforms to argue that the FCC failed to consider the ways in which asymmetric regulation will ultimately have negative competitive effects and harm consumers. The FCC and some critics claimed that ISPs are gatekeepers deserving of special regulation — a case that both the FCC and the critics failed to make.

The NPRM fails adequately to address these issues, to make out a sufficient case for the proposed regulation, or to justify treating ISPs differently than other companies that collect and use data.

Perhaps most important, the NPRM also fails to acknowledge or adequately assess the actual market in which the use of consumer data arises: the advertising market. Whether intentionally or not, this NPRM is not primarily about regulating consumer privacy; it is about keeping ISPs out of the advertising business. But in this market, ISPs are upstarts challenging the dominant position of firms like Google and Facebook.

Placing onerous restrictions upon ISPs alone results in either under-regulation of edge providers or over-regulation of ISPs within the advertising market, without any clear justification as to why consumer privacy takes on different qualities for each type of advertising platform. But the proper method of regulating privacy is, in fact, the course that both the FTC and the FCC have historically taken, and which has yielded a stable, evenly administered regime: case-by-case examination of actual privacy harms and a minimalist approach to ex ante, proscriptive regulations.

We also responded to particular claims made by New America’s Open Technology Institute about the expectations of consumers regarding data collection online, the level of competitiveness in the marketplace, and the technical realities that differentiate ISPs from edge providers.

OTI attempts to substitute its own judgment of what consumers (should) believe about their data for that of consumers themselves. And in the process it posits a “context” that can and will never shift as new technology and new opportunities emerge. Such a view of consumer expectations is flatly anti-innovation and decidedly anti-consumer, consigning broadband users to yesterday’s technology and business models. The rule OTI supports could effectively forbid broadband providers from offering consumers the option to trade data for lower prices.

Our reply comments went on to point out that much of the basis upon which the NPRM relies — an alleged lack of adequate competition among ISPs — was actually a "manufactured scarcity" based upon the Commission's failure to properly analyze the relevant markets.

The Commission’s claim that ISPs, uniquely among companies in the modern data economy, face insufficient competition in the broadband market is… insufficiently supported. The flawed manner in which the Commission has defined the purported relevant market for broadband distorts the analysis upon which the proposed rules are based, and manufactures a false scarcity in order to justify unduly burdensome privacy regulations for ISPs. Even the Commission’s own data suggest that consumer choice is alive and well in broadband… The reality is that there is in fact enough competition in the broadband market to offer privacy-sensitive consumers options if they are ever faced with what they view as overly invasive broadband business practices. According to the Commission, as of December 2014, 74% of American homes had a choice of two or more wired ISPs delivering download speeds of at least 10 Mbps, and 88% had a choice of at least two providers of 3 Mbps service. Meanwhile, 93% of consumers have access to at least three mobile broadband providers. Looking forward, consumer choice at all download speeds is increasing at rapid rates due to extensive network upgrades and new entry in a highly dynamic market.

Finally, we rebutted the contention that predictive analytics was a magical tool that would enable ISPs to dominate information gathering and would, consequently, lead to consumer harms — even where ISPs had access only to seemingly trivial data about users.

Some comments in support of the proposed rules attempt to cast ISPs as all-powerful by virtue of their access to apparently trivial data — IP addresses, access timing, computer ports, etc. — because of the power of predictive analytics. These commenters assert that the possibility of predictive analytics coupled with a large data set undermines research demonstrating that ISPs, thanks to increasing encryption, do not have access to any better-quality data than edge providers themselves, and probably have lower-quality data.

But this is a curious bit of reasoning. It essentially amounts to the idea not only that consumers should be permitted to control with whom their data is shared, but that all other parties online should be proscribed from making their own independent observations about consumers. Such a rule would be akin, for example, to telling supermarkets that they are not entitled to observe traffic patterns in their stores in order to place particular products in relatively more advantageous places. But the reality is that most data is noise; simply having more of it is not necessarily a boon, and predictive analytics is far from a panacea. In fact, the insights gained from extensive data collection are frequently useless when examining very large data sets, and are better employed by single firms answering particular questions about their users and products.

Our full reply comments are available here.

Thanks to Geoff for the introduction. I look forward to posting a few things over the summer.

I’d like to begin by discussing Geoff’s post on the pending legislative proposals designed to combat strategic abuse of drug safety regulations to prevent generic competition. Specifically, I’d like to address the economic incentive structure that is in effect in this highly regulated market.

Like many others, I first noticed the abuse of drug safety regulations to prevent competition when Turing Pharmaceuticals—then led by now-infamous CEO Martin Shkreli—acquired the manufacturing rights for the anti-parasitic drug Daraprim, and raised the price of the drug by over 5,000%. The result was a drug that cost $750 per tablet. Daraprim (pyrimethamine) is used to combat malaria and Toxoplasma gondii infections in immune-compromised patients, especially those with HIV. The World Health Organization includes Daraprim on its "List of Essential Medicines" as a medicine important to basic health systems. After the huge price hike, the drug was effectively out of reach for many insurance plans and uninsured patients who needed it for the six- to eight-week course of treatment for Toxoplasma gondii infections.

It's not unusual for drugs to sell at huge multiples above their manufacturing cost. Indeed, a primary purpose of patent law is to allow drug companies to earn sufficient profits to engage in the expensive and risky business of developing new drugs. But Daraprim was first sold in 1953 and thus has been off patent for decades. With no intellectual property protection, Daraprim should, theoretically, now be available from generic drug manufacturers for only a little above cost. Indeed, this is what we see in the rest of the world. Daraprim is available all over the world for very cheap prices. The per-tablet price is 3 rupees (US$0.04) in India, R$0.07 (US$0.02) in Brazil, US$0.18 in Australia, and US$0.66 in the UK.

So what gives in the U.S.? Or rather, what does not give? What in our system of drug distribution has gotten stuck and is preventing generic competition from swooping in to compete down the high price of off-patent drugs like Daraprim? The answer is not market failure, but rather regulatory failure, as Geoff noted in his post. While generics would love to enter a market where a drug is currently selling for high profits, they cannot do so without getting FDA approval for their generic version of the drug at issue. To get approval, a generic simply has to file an Abbreviated New Drug Application (“ANDA”) that shows that its drug is equivalent to the branded drug with which it wants to compete. There’s no need for the generic to repeat the safety and efficacy tests that the brand manufacturer originally conducted. To test for equivalence, the generic needs samples of the brand drug. Without those samples, the generic cannot meet its burden of showing equivalence. This is where the strategic use of regulation can come into play.

Geoff's post explains the potential abuse of Risk Evaluation and Mitigation Strategies ("REMS"). REMS are put in place to require certain safety steps (like testing a woman for pregnancy before prescribing a drug that can cause birth defects) or to restrict the distribution channels for dangerous or addictive drugs. As Geoff points out, there is evidence that a few brand name manufacturers have engaged in bad-faith refusals to provide samples, using the excuse of REMS or restricted distribution programs to (1) deny requests for samples, (2) prevent generic manufacturers from buying samples from resellers, and (3) deny approved generics access to the REMS system that they need in order to distribute their drugs. Once the FDA has certified that a generic manufacturer can safely handle the drug at issue, there is no legitimate basis for the owners of brand name drugs to deny samples to the generic maker. Expressed worries about liability from entering joint REMS programs with generics also ring hollow, for the most part, and would be ameliorated by the pending legislation.

It’s important to note that this pricing situation is unique to drugs because of the regulatory framework surrounding drug manufacture and distribution. If a manufacturer of, say, an off-patent vacuum cleaner wants to prevent competitors from copying its vacuum cleaner design, it is unlikely to be successful. Even if the original manufacturer refuses to sell any vacuum cleaners to a competitor, and instructs its retailers not to sell either, this will be very difficult to monitor and enforce. Moreover, because of an unrestricted resale market, a competitor would inevitably be able to obtain samples of the vacuum cleaner it wishes to copy. Only patent law can successfully protect against the copying of a product sold to the general public, and when the patent expires, so too will the ability to prevent copying.

Drugs are different. The only way a consumer can resell prescription drugs is by breaking the law. Pills bought from an illegal secondary market would be useless to generics for purposes of FDA approval anyway, because the chain of custody would not exist to prove that the samples are the real thing. This means generics need to get samples from the authorized manufacturer or distribution company. When a drug is subject to a REMS-required restricted distribution program, it is even more difficult, if not impossible, for a generic maker to get samples of the drugs for which it wants to make generic versions. Restricted distribution programs, which are used for dangerous or addictive drugs, by design very tightly control the chain of distribution so that the drugs go only to patients with proper prescriptions from authorized doctors.

A troubling trend has arisen recently in which drug owners put their branded drugs into restricted distribution programs not because of any FDA REMS requirement, but instead as a method to prevent generics from obtaining samples and making generic versions of the drugs. This is the strategy that Turing used before it raised prices over 5,000% on Daraprim. And Turing isn't the only company to use this strategy. It is being emulated by others, although perhaps not so conspicuously. For instance, in 2015, Valeant Pharmaceuticals (which a year earlier had attempted a hostile takeover of Allergan Pharmaceuticals with the help of the hedge fund Pershing Square) acquired the rights to two life-saving heart drugs, adopted restricted distribution programs, and raised the drugs' prices by 212% and 525%, respectively. Others have followed suit.

A key component of the strategy to profit from hiking prices on off-patent drugs while avoiding competition from generics is to select drugs that do not currently have generic competitors. Sometimes this is because a drug has recently come off patent, and sometimes it is because the drug is for a small patient population, and thus generics haven’t bothered to enter the market given that brand name manufacturers generally drop their prices to close to cost after the drug comes off patent. But with the strategic control of samples and refusals to allow generics to enter REMS programs, the (often new) owners of the brand name drugs seek to prevent the generic competition that we count on to make products cheap and plentiful once their patent protection expires.

Most brand name drug makers do not engage in withholding samples from generics and abusing restricted distribution and REMS programs. But the few that do cost patients and insurers dearly for important medicines that should be much cheaper once they go off patent. More troubling still is the recent strategy of taking drugs that have been off patent and cheap for years, and abusing the regulatory regime to raise prices and block competition. This growing trend of abusing restricted distribution and REMS to facilitate rent extraction from drug purchasers needs to be corrected.

Two bills addressing this issue are pending in Congress. Both bills (1) require drug companies to provide samples to generics after the FDA has certified the generic, (2) require drug companies to enter into shared REMS programs with generics, (3) allow generics to set up their own REMS compliant systems, and (4) exempt drug companies from liability for sharing products and REMS-compliant systems with generic companies in accordance with the steps set out in the bills. When it comes to remedies, however, the Senate version is significantly better. The penalties provided in the House bill are both vague and overly broad. The bill provides for treble damages and costs against the drug company “of the kind described in section 4(a) of the Clayton Act.” Not only is the application of the Clayton Act unclear in the context of the heavily regulated market for drugs (see Trinko), but treble damages may over-deter reasonably restrictive behavior by drug companies when it comes to distributing dangerous drugs.

The remedies in the Senate version are very well crafted to deter rent-seeking behavior while not overly deterring reasonable behavior. The remedial scheme is particularly good because it punishes most heavily those companies that attempt to make exorbitant profits on drugs by denying generic entry. The Senate version provides as a remedy for unreasonable delay that the plaintiff shall be awarded attorneys' fees, costs, and the defending drug company's profits on the drug at issue during the time of the unreasonable delay. This means that a brand name drug company that sells an old drug for a low price and delays sharing only because of honest concern about the safety standards of a particular generic company will not face terribly high damages if it is found unreasonable. On the other hand, a company that sends the price of an off-patent drug soaring and then attempts to block generic entry will know that it can lose all of its rent-seeking profits, plus the cost of the victorious generic company's attorneys' fees. This vastly reduces the incentive for the company owning the brand name drug to raise prices and keep competitors out. It likewise greatly increases the incentive of a generic company to enter the market and, if it is unreasonably blocked, to file a civil action the result of which would be to transfer the excess profits to the generic. This provides a rather elegant fix to the regulatory gaming in this area that has become an increasing problem. The balancing of interests and incentives in the Senate bill should leave many congresspersons comfortable supporting the bill.

Brand drug manufacturers are no strangers to antitrust accusations when it comes to their complicated relationship with generic competitors — most obviously with respect to reverse payment settlements. But the massive and massively complex regulatory scheme under which drugs are regulated has provided other opportunities for regulatory legerdemain with potentially anticompetitive effect, as well.

In particular, some FTC Commissioners have raised concerns that brand drug companies have been taking advantage of an FDA drug safety program — the Risk Evaluation and Mitigation Strategies program, or “REMS” — to delay or prevent generic entry.

Drugs subject to a REMS restricted distribution program are difficult to obtain through market channels and not otherwise readily available, even for would-be generic manufacturers that need samples in order to perform the tests required to receive FDA approval to market their products. REMS allows (requires, in fact) brand manufacturers to restrict the distribution of certain drugs that present safety or abuse risks, creating an opportunity for branded drug manufacturers to take advantage of imprecise regulatory requirements by inappropriately limiting access by generic manufacturers.

The FTC has not (yet) brought an enforcement action, but it has opened several investigations, and filed an amicus brief in a private-party litigation. Generic drug companies have filed several antitrust claims against branded drug companies and raised concerns with the FDA.

The problem, however, is that even if these companies are using REMS to delay generics, such a practice makes for a terrible antitrust case. Not only does the existence of a regulatory scheme arguably set Trinko squarely in the way of a successful antitrust case, but the sort of refusal to deal claims at issue here (as in Trinko) are rightly difficult to win because, as the DOJ’s Section 2 Report notes, “there likely are few circumstances where forced sharing would help consumers in the long run.”

But just because there isn't a viable antitrust case doesn't mean there isn't still a competition problem. In this case, however, it's a problem of regulatory failure. Companies rationally take advantage of poorly written federal laws and regulations in order to tilt the market to their own advantage. That's no less problematic for the market, but its solution is much more straightforward, if politically more difficult.

Thus it’s heartening to see that Senator Mike Lee (R-UT), along with three of his colleagues (Patrick Leahy (D-VT), Chuck Grassley (R-IA), and Amy Klobuchar (D-MN)), has proposed a novel but efficient way to correct these bureaucracy-generated distortions in the pharmaceutical market without resorting to the “blunt instrument” of antitrust law. As the bill notes:

While the antitrust laws may address actions by license holders who impede the prompt negotiation and development on commercially reasonable terms of a single, shared system of elements to assure safe use, a more tailored legal pathway would help ensure that license holders negotiate such agreements in good faith and in a timely manner, facilitating competition in the marketplace for drugs and biological products.

The legislative solution put forward by the Creating and Restoring Equal Access to Equivalent Samples (CREATES) Act of 2016 targets the right culprit: the poor regulatory drafting that permits possibly anticompetitive conduct to take place. Moreover, the bill refrains from creating a per se rule, instead implementing several features that should still enable brand manufacturers to legitimately restrict access to drug samples when appropriate.

In essence, Senator Lee’s bill introduces a third party (in this case, the Secretary of Health and Human Services) who is capable of determining whether an eligible generic manufacturer is able to comply with REMS restrictions — thus bypassing any bias on the part of the brand manufacturer. Where the Secretary determines that a generic firm meets the REMS requirements, the bill also creates a narrow cause of action for this narrow class of plaintiffs, allowing suits against certain brand manufacturers who — despite the prohibition on using REMS to delay generics — nevertheless misuse the process to delay competitive entry.

Background on REMS

The REMS program was introduced as part of the Food and Drug Administration Amendments Act of 2007 (FDAAA). Following the withdrawal of Vioxx, an arthritis pain reliever, from the market because of a post-approval linkage of the drug to heart attacks, the FDA was under considerable fire, and there was a serious risk that fewer and fewer net beneficial drugs would be approved. The REMS program was introduced by Congress as a mechanism to ensure that society could reap the benefits from particularly risky drugs and biologics — rather than the FDA preventing them from entering the market at all. It accomplishes this by ensuring (among other things) that brands and generics adopt appropriate safety protocols for distribution and use of drugs — particularly when a drug has the potential to cause serious side effects, or has an unusually high abuse profile.

The FDA-determined REMS protocols can range from the simple (e.g., requiring a medication guide or a package insert about potential risks) to the more burdensome (including restrictions on a drug’s sale and distribution, or what the FDA calls “Elements to Assure Safe Use” (“ETASU”)). Most relevant here, the REMS process seems to allow brands considerable leeway to determine whether generic manufacturers are compliant or able to comply with ETASUs. Given this discretion, it is no surprise that brand manufacturers may be tempted to block competition by citing “safety concerns.”

Although the FDA specifically forbids the use of REMS to block lower-cost, generic alternatives from entering the market (of course), almost immediately following the law’s enactment, certain less-scrupulous branded pharmaceutical companies began using REMS for just that purpose (also, of course).

REMS abuse

To enter pharmaceutical markets that no longer have any underlying IP protections, manufacturers must submit to the FDA an Abbreviated New Drug Application (ANDA) for a generic, or an Abbreviated Biologic License Application (ABLA) for a biosimilar, of the brand drug. The purpose is to prove to the FDA that the competing product is as safe and effective as the branded reference product. In order to perform the testing sufficient to prove efficacy and safety, generic and biosimilar drug manufacturers must acquire a sample (many samples, in fact) of the reference product they are trying to replicate.

For the narrow class of dangerous or highly abused drugs, generic manufacturers are forced to comply with any REMS restrictions placed upon the brand manufacturer — even when the terms require the brand manufacturer to tightly control the distribution of its product.

And therein lies the problem. Because the brand manufacturer controls access to its products, it can refuse to provide the needed samples, using REMS as an excuse. It may, of course, be true in certain cases that a brand manufacturer is justified in refusing to distribute samples of its product; some would-be generic manufacturers certainly may not meet the requisite standards for safety and security.

But in practice it turns out that most of the (known) examples of brands refusing to provide samples happen across the board — they preclude essentially all generic competition, not just the few firms that might have insufficient safeguards. It’s extremely difficult to justify such refusals on the basis of a generic manufacturer’s suitability when all would-be generic competitors are denied access, including well-established, high-quality manufacturers.

But, for a few brand manufacturers, at least, that seems to be how the REMS program is implemented. Thus, for example, Jon Haas, director of patient access at Turing Pharmaceuticals, referred to the practice of denying generics samples this way:

Most likely I would block that purchase… We spent a lot of money for this drug. We would like to do our best to avoid generic competition. It’s inevitable. They seem to figure out a way [to make generics], no matter what. But I’m certainly not going to make it easier for them. We’re spending millions and millions in research to find a better Daraprim, if you will.

As currently drafted, the REMS program gives branded manufacturers the ability to limit competition by stringing along negotiations for product samples for months, if not years. Although access to a few samples for testing is seemingly such a small, trivial thing, the ability to block this access allows a brand manufacturer to limit competition (at least from bioequivalent and generic drugs; obviously competition between competing branded drugs remains).

And even if a generic competitor manages to get ahold of samples, the law creates an additional wrinkle by imposing a requirement that brand and generic manufacturers enter into a single shared REMS plan for bioequivalent and generic drugs. But negotiating the particulars of the single, shared program can drag on for years. Consequently, even when a generic manufacturer has received the necessary samples, performed the requisite testing, and been approved by the FDA to sell a competing drug, it still may effectively be barred from entering the marketplace because of REMS.

The number of drugs covered by REMS is small: fewer than 100 in a universe of several thousand FDA-approved drugs. And the number of these alleged to be subject to abuse is much smaller still. Nonetheless, abuse of this regulation by certain brand manufacturers has likely limited competition and increased prices.

Antitrust is not the answer

Whether the complex, underlying regulatory scheme that allocates the relative rights of brands and generics — and that balances safety against access — gets the balance correct or not is an open question, to be sure. But given the regulatory framework we have and the perceived need for some sort of safety controls around access to samples and for shared REMS plans, the law should at least work to do what it intends, without creating an opportunity for harmful manipulation. Yet it appears that the ambiguity of the current law has allowed some brand manufacturers to exploit these safety protections to limit competition.

As noted above, some are quite keen to make this an antitrust issue. But, as also noted, antitrust is a poor fit for handling such abuses.

First, antitrust law has an uneasy relationship with other regulatory schemes. Not least because of Trinko, it is a tough case to make that brand manufacturers are violating antitrust laws when they rely upon legal obligations under a safety program that is essentially designed to limit generic entry on safety grounds. The issue is all the more properly removed from the realm of antitrust enforcement given that the problem is actually one of regulatory failure, not market failure.

Second, antitrust law doesn’t impose a duty to deal with rivals except in very limited circumstances. In Trinko, for example, the Court rejected the invitation to extend a duty to deal to situations where an existing, voluntary economic relationship wasn’t terminated. By definition this is unlikely to be the case here where the alleged refusal to deal is what prevents the generic from entering the market in the first place. The logic behind Trinko (and a host of other cases that have limited competitors’ obligations to assist their rivals) was to restrict duty to deal cases to those rare circumstances where it reliably leads to long-term competitive harm — not where it amounts to a perfectly legitimate effort to compete without giving rivals a leg-up.

But antitrust is such a powerful tool and such a flexible "catch-all" regulation that there are always efforts to thwart reasonable limits on its use. As several of us at TOTM have written about at length in the past, former FTC Commissioner Rosch and former FTC Chairman Leibowitz were vocal proponents of using Section 5 of the FTC Act to circumvent sensible judicial limits on making out and winning antitrust claims, arguing that the limits were meant only for private plaintiffs — not (implicitly infallible) government enforcers. Although no one at the FTC has yet (publicly) suggested bringing a REMS case as a standalone Section 5 case, such a case would be consistent with the sorts of theories that animated past standalone Section 5 cases.

Again, this approach serves as an end-run around the reasonable judicial constraints that evolved as a result of judges actually examining the facts of individual cases over time, and is a misguided way of dealing with what is, after all, fundamentally a regulatory design problem.

The CREATES Act

Senator Lee’s bill, on the other hand, aims to solve the problem with a more straightforward approach by improving the existing regulatory mechanism and by adding a limited judicial remedy to incentivize compliance under the amended regulatory scheme. In summary:

  • The bill creates a cause of action for a refusal to deal only where plaintiff can prove, by a preponderance of the evidence, that certain well-defined conditions are met.
  • For samples, if a drug is not covered by a REMS, or if the generic manufacturer is specifically authorized, then the generic can sue if it doesn’t receive sufficient quantities of samples on commercially reasonable terms. This is not a per se offense subject to outsized antitrust damages. Instead, the remedy is a limited injunction ensuring the sale of samples on commercially reasonable terms, reasonable attorneys’ fees, and a monetary fine limited to revenue earned from sale of the drug during the refusal period.
  • The bill also gives a brand manufacturer an affirmative defense if it can prove by a preponderance of the evidence that, regardless of its own refusal to supply them, samples were nevertheless available elsewhere on commercially reasonable terms, or where the brand manufacturer is unable to supply the samples because it does not actually produce or market the drug.
  • In order to deal with the REMS process problems, the bill creates similar rights with similar limitations when the license holders and generics cannot come to an agreement about a shared REMS on commercially reasonable terms within 120 days of first contact by an eligible developer.
  • The bill also explicitly limits brand manufacturers’ liability for claims “arising out of the failure of an [eligible generic manufacturer] to follow adequate safeguards,” thus removing one of the (perfectly legitimate) objections to the bill pressed by brand manufacturers.

The primary remedy is limited, injunctive relief to end the delay. And brands are protected from frivolous litigation by an affirmative defense under which they need only show that the product is available for purchase on reasonable terms elsewhere. Damages are similarly limited and are awarded only if a court finds that the brand manufacturer lacked a legitimate business justification for its conduct (which, under the drug safety regime, means essentially a reasonable belief that its own REMS plan would be violated by dealing with the generic entrant). And monetary damages do not include punitive damages.

Finally, the proposed bill completely avoids the question of whether antitrust laws are applicable, leaving that possibility open to determination by courts — as is appropriate. Moreover, by establishing even more clearly the comprehensive regulatory regime governing potential generic entrants’ access to dangerous drugs, the bill would, given the holding in Trinko, probably make application of antitrust laws here considerably less likely.

Ultimately Senator Lee’s bill is a well-thought-out and targeted fix to an imperfect regulation that seems to be facilitating anticompetitive conduct by a few bad actors. It does so without trampling on the courts’ well-established antitrust jurisprudence, and without imposing excessive cost or risk on the majority of brand manufacturers that behave perfectly appropriately under the law.

Last week the International Center for Law & Economics filed comments on the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As we note in our comments:

The Commission’s NPRM would shoehorn the business models of a subset of new economy firms into a regime modeled on thirty-year-old CPNI rules designed to address fundamentally different concerns about a fundamentally different market. The Commission’s hurried and poorly supported NPRM demonstrates little understanding of the data markets it proposes to regulate and the position of ISPs within that market. And, what’s more, the resulting proposed rules diverge from analogous rules the Commission purports to emulate. Without mounting a convincing case for treating ISPs differently than the other data firms with which they do or could compete, the rules contemplate disparate regulatory treatment that would likely harm competition and innovation without evident corresponding benefit to consumers.

In particular, we focus on the FCC’s failure to justify treating ISPs differently than other competitors, and its failure to justify more stringent treatment for ISPs in general:

In short, the Commission has not made a convincing case that discrimination between ISPs and edge providers makes sense for the industry or for consumer welfare. The overwhelming body of evidence upon which other regulators have relied in addressing privacy concerns urges against a hard opt-in approach. That same evidence and analysis supports a consistent regulatory approach for all competitors, and nowhere advocates for a differential approach for ISPs when they are participating in the broader informatics and advertising markets.

With respect to the proposed opt-in regime, the NPRM ignores the weight of economic evidence on opt-in rules and fails to justify the specific rules it prescribes. Of most significance is the imposition of this opt-in requirement for the sharing of non-sensitive data.

On net, opt-in regimes may tend to favor the status quo, and to maintain or grow the position of a few dominant firms. Opt-in imposes additional costs on consumers and hurts competition — and it may not offer any additional protections over opt-out. In the absence of any meaningful evidence or rigorous economic analysis to the contrary, the Commission should eschew imposing such a potentially harmful regime on broadband and data markets.

Finally, we explain that, although the NPRM purports to embrace a regulatory regime consistent with the current “federal privacy regime,” and particularly the FTC’s approach to privacy regulation, it actually does no such thing — a sentiment echoed by a host of current and former FTC staff and commissioners, including the Bureau of Consumer Protection staff, Commissioner Maureen Ohlhausen, former Chairman Jon Leibowitz, former Commissioner Josh Wright, and former BCP Director Howard Beales.

Our full comments are available here.

Earlier this week I testified before the U.S. House Subcommittee on Commerce, Manufacturing, and Trade regarding several proposed FTC reform bills.

You can find my written testimony here. That testimony was drawn from a 100-page report, authored by Berin Szoka and me, entitled "The Federal Trade Commission: Restoring Congressional Oversight of the Second National Legislature — An Analysis of Proposed Legislation." In the report we assess 9 of the 17 proposed reform bills in great detail, and offer a host of suggested amendments or additional reform proposals that, we believe, would help make the FTC more accountable to the courts. As I discuss in my oral remarks, that judicial oversight was part of the original plan for the Commission, and an essential part of ensuring that its immense discretion is effectively directed toward protecting consumers as technology and society evolve around it.

The report is “Report 2.0” of the FTC: Technology & Reform Project, which was convened by the International Center for Law & Economics and TechFreedom with an inaugural conference in 2013. Report 1.0 lays out some background on the FTC and its institutional dynamics, identifies the areas of possible reform at the agency, and suggests the key questions/issues each of them raises.

The text of my oral remarks follows, or, if you prefer, you can watch them here:

Chairman Burgess, Ranking Member Schakowsky, and Members of the Subcommittee, thank you for the opportunity to appear before you today.

I’m Executive Director of the International Center for Law & Economics, a non-profit, non-partisan research center. I’m a former law professor, I used to work at Microsoft, and I had what a colleague once called the most illustrious FTC career ever — because, at approximately 2 weeks, it was probably the shortest.

I’m not typically one to advocate active engagement by Congress in anything (no offense). But the FTC is different.

Despite Congressional reforms, the FTC remains the closest thing we have to a second national legislature. Its jurisdiction covers nearly every company in America. Section 5, at its heart, runs just 20 words — leaving the Commission enormous discretion to make policy decisions that are essentially legislative.

The courts were supposed to keep the agency on course. But they haven't. As former Chairman Muris has written, "the agency has… traditionally been beyond judicial control."

So it’s up to Congress to monitor the FTC’s processes, and tweak them when the FTC goes off course, which is inevitable.

This isn’t a condemnation of the FTC’s dedicated staff. Rather, this one way ratchet of ever-expanding discretion is simply the nature of the beast.

Yet too many people lionize the status quo. They see any effort to change the agency from the outside as an affront. It’s as if Congress were struck by a bolt of lightning in 1914 and the Perfect Platonic Agency sprang forth.

But in the real world, an agency with massive scope and discretion needs oversight — and feedback on how its legal doctrines evolve.

So why don’t the courts play that role? Companies essentially always settle with the FTC because of its exceptionally broad investigatory powers, its relatively weak standard for voting out complaints, and the fact that those decisions effectively aren’t reviewable in federal court.

Then there’s the fact that the FTC sits in judgment of its own prosecutions. So even if a company doesn’t settle and actually wins before the ALJ, FTC staff still wins 100% of the time before the full Commission.

Able though FTC staffers are, this can’t be from sheer skill alone.

Whether by design or by neglect, the FTC has become, as Chairman Muris again described it, “a largely unconstrained agency.”

Please understand: I say this out of love. To paraphrase Churchill, the FTC is the “worst form of regulatory agency — except for all the others.”

Eventually Congress had to course-correct the agency — to fix the disconnect and to apply its own pressure to refocus Section 5 doctrine.

So a heavily Democratic Congress pressured the Commission to adopt the Unfairness Policy Statement in 1980. The FTC promised to restrain itself by balancing the perceived benefits of its unfairness actions against the costs, and not acting when injury is insignificant or consumers could have reasonably avoided injury on their own. It is, inherently, an economic calculus.

But while the Commission pays lip service to the test, you’d be hard-pressed to identify how (or whether) the Commission actually implements it in practice. Meanwhile, the agency has essentially nullified the “materiality” requirement that it volunteered in its 1983 Deception Policy Statement.

Worst of all, Congress failed to anticipate that the FTC would resume exercising its vast discretion through what it now proudly calls its “common law of consent decrees” in data security cases.

Combined with a flurry of recommended best practices in reports that function as quasi-rulemakings, these settlements have enabled the FTC to circumvent both Congressional rulemaking reforms and meaningful oversight by the courts.

The FTC’s data security settlements aren’t an evolving common law. They’re a static statement of “reasonable” practices, repeated about 55 times over the past 14 years. At this point, it’s reasonable to assume that they apply to all circumstances — much like a rule (which is, more or less, the opposite of the common law).

Congressman Pompeo’s SHIELD Act would help curtail this practice, especially if amended to include consent orders and reports. It would also help focus the Commission on the actual elements of the Unfairness Policy Statement — which should be codified through Congressman Mullins’ SURE Act.

Significantly, only one data security case has actually come before an Article III court. The FTC trumpets Wyndham as an out-and-out win. But it wasn’t. In fact, the court agreed with Wyndham on the crucial point that prior consent orders were of little use in trying to understand the requirements of Section 5.

More recently the FTC suffered another rebuke. While it won its product design suit against Amazon, the court rejected the Commission’s “fencing in” request to permanently hover over the company and micromanage practices that Amazon had already ended.

As the FTC grapples with such cutting-edge legal issues, it’s drifting away from the balance it promised Congress.

But Congress can’t fix these problems simply by telling the FTC to take its bedrock policy statements more seriously. Instead it must regularly reassess the process that’s allowed the FTC to avoid meaningful judicial scrutiny. The FTC requires significant course correction if its model is to move closer to a true “common law.”

Yesterday a federal district court in Washington state granted the FTC’s motion for summary judgment against Amazon in FTC v. Amazon — the case alleging unfair trade practices in Amazon’s design of the in-app purchases interface for apps available in its mobile app store. The headlines score the decision as a loss for Amazon, and the FTC, of course, claims victory. But the court also granted Amazon’s motion for partial summary judgment on a significant aspect of the case, and the Commission’s win may be decidedly Pyrrhic.

While the district court (very wrongly, in my view) essentially followed the FTC in deciding that a well-designed user experience doesn’t count as a consumer benefit for assessing substantial harm under the FTC Act, it rejected the Commission’s request for a permanent injunction against Amazon. It also called into question the FTC’s calculation of monetary damages. These last two may be huge. 

The FTC may have “won” the case, but it’s becoming increasingly apparent why it doesn’t want to take these cases to trial. First in Wyndham, and now in Amazon, courts have begun to chip away at the FTC’s expansive Section 5 discretion, even while handing the agency nominal victories.

The Good News

The FTC largely escapes judicial oversight in cases like these because its targets almost always settle (Amazon is a rare exception). These settlements — consent orders — typically impose detailed 20-year injunctions and give the FTC ongoing oversight of the companies’ conduct for the same period. The agency has wielded the threat of these consent orders as a powerful tool to micromanage tech companies, and it currently has at least one consent order in place with Twitter, Google, Apple, Facebook, and several other companies.

As I wrote in a WSJ op-ed on these troubling consent orders:

The FTC prefers consent orders because they extend the commission’s authority with little judicial oversight, but they are too blunt an instrument for regulating a technology company. For the next 20 years, if the FTC decides that Google’s product design or billing practices don’t provide “express, informed consent,” the FTC could declare Google in violation of the new consent decree. The FTC could then impose huge penalties—tens or even hundreds of millions of dollars—without establishing that any consumer had actually been harmed.

Yesterday’s decision makes that outcome less likely. Companies will be much less willing to succumb to the FTC’s 20-year oversight demands if they know that courts may refuse the FTC’s injunction request and accept companies’ own, independent and market-driven efforts to address consumer concerns — without any special regulatory micromanagement.

In the same vein, while the court did find that Amazon was liable for repayment of unauthorized charges made without “express, informed authorization,” it also found the FTC’s monetary damages calculation questionable and asked for further briefing on the appropriate amount. If, as seems likely, it ultimately refuses to simply accept the FTC’s damages claims, that, too, will take some of the wind out of the FTC’s sails. Other companies have settled with the FTC and agreed to 20-year consent decrees in part, presumably, because of the threat of excessive damages if they litigate. That, too, is now less likely to happen.

Collectively, these holdings should help to force the FTC to better target its complaints to cases of ongoing and truly harmful practices — the things the FTC Act was really meant to address, like actual fraud. Tech companies trying to navigate ever-changing competitive waters by carefully constructing their user interfaces and payment mechanisms (among other things) shouldn’t be treated the same way as fraudulent phishing scams.

The Bad News

The court’s other key holding is problematic, however. In essence, the court, like the FTC, seems to believe that regulators are better than companies’ product managers, designers and engineers at designing app-store user interfaces:

[A] clear and conspicuous disclaimer regarding in-app purchases and request for authorization on the front-end of a customer’s process could actually prove to… be more seamless than the somewhat unpredictable password prompt formulas rolled out by Amazon.

Never mind that Amazon has undoubtedly spent tremendous resources researching and designing the user experience in its app store. And never mind that — as Amazon is certainly aware — a consumer’s experience of a product is make-or-break in the cut-throat world of online commerce, advertising and search (just ask Jet).

Instead, for the court (and the FTC), the imagined mechanism of “affirmatively seeking a customer’s authorized consent to a charge” is all benefit and no cost. Whatever design decisions may have informed the way Amazon decided to seek consent are either irrelevant, or else the user-experience benefits they confer are negligible.

As I’ve written previously:

Amazon has built its entire business around the “1-click” concept — which consumers love — and implemented a host of notification and security processes hewing as much as possible to that design choice, but nevertheless taking account of the sorts of issues raised by in-app purchases. Moreover — and perhaps most significantly — it has implemented an innovative and comprehensive parental control regime (including the ability to turn off all in-app purchases) — Kindle Free Time — that arguably goes well beyond anything the FTC required in its Apple consent order.

Amazon is not abdicating its obligation to act fairly under the FTC Act and to ensure that users are protected from unauthorized charges. It’s just doing so in ways that also take account of the costs such protections may impose — particularly, in this case, on the majority of Amazon customers who didn’t and wouldn’t suffer such unauthorized charges.

Amazon began offering Kindle Free Time in 2012 as an innovative solution to a problem — children’s access to apps and in-app purchases — that affects only a small subset of Amazon’s customers. To dismiss that effort without considering that Amazon might have made a perfectly reasonable judgment that balanced consumer protection and product design disregards the cost-benefit balancing required by Section 5 of the FTC Act.

Moreover, the FTC Act imposes liability only for harms that are not “reasonably avoidable.” Kindle Free Time is an outstanding example of an innovative mechanism that allows consumers at risk of unauthorized purchases by children to “reasonably avoid” harm. The court’s and the FTC’s disregard for it is inconsistent with the statute.

Conclusion

The court’s willingness to reinforce the FTC’s blackboard design “expertise” (such as it is) to second guess user-interface and other design decisions made by firms competing in real markets is unfortunate. But there’s a significant silver lining. By reining in the FTC’s discretion to go after these companies as if they were common fraudsters, the court has given consumers an important victory. After all, it is consumers who otherwise bear the costs (both directly and as a result of reduced risk-taking and innovation) of the FTC’s largely unchecked ability to extract excessive concessions from its enforcement targets.

On Friday the International Center for Law & Economics filed comments with the FCC in response to Chairman Wheeler’s NPRM (proposed rules) to “unlock” the MVPD (i.e., cable and satellite subscription video, essentially) set-top box market. Plenty has been written on the proposed rulemaking—for a few quick hits (among many others) see, e.g., Richard Bennett, Glenn Manishin, Larry Downes, Stuart Brotman, Scott Wallsten, and me—so I’ll dispense with the background and focus on the key points we make in our comments.

Our comments explain that the proposal’s assertion that the MVPD set-top box market isn’t competitive is a product of its failure to appreciate the dynamics of the market (and its disregard for economics). Similarly, the proposal fails to acknowledge the complexity of the markets it intends to regulate, and, in particular, it ignores the harmful effects on content production and distribution the rules would likely bring about.

“Competition, competition, competition!” — Tom Wheeler

“Well, uh… just because I don’t know what it is, it doesn’t mean I’m lying.” — Claude Elsinore

At root, the proposal is aimed at improving competition in a market that is already hyper-competitive. As even Chairman Wheeler has admitted,

American consumers enjoy unprecedented choice in how they view entertainment, news and sports programming. You can pretty much watch what you want, where you want, when you want.

Of course, much of this competition comes from outside the MVPD market, strictly speaking—most notably from OVDs like Netflix. It’s indisputable that the statute directs the FCC to address the MVPD market and the MVPD set-top box market. But addressing competition in those markets doesn’t mean you simply disregard the world outside those markets.

The competitiveness of a market isn’t solely a function of the number of competitors in the market. Even relatively constrained markets like these can be “fully competitive” with only a few competing firms—as is the case in every market in which MVPDs operate (all of which are presumed by the Commission to be subject to “effective competition”).

The truly troubling thing, however, is that the FCC knows that MVPDs compete with OVDs, and thus that the competitiveness of the “MVPD market” (and the “MVPD set-top box market”) isn’t solely a matter of direct, head-to-head MVPD competition.

How do we know that? As I’ve recounted before, in a recent speech FCC General Counsel Jonathan Sallet approvingly explained that Commission staff recommended rejecting the Comcast/Time Warner Cable merger precisely because of the alleged threat it posed to OVD competitors. In essence, Sallet argued that Comcast sought to undertake a $45 billion merger primarily—if not solely—in order to ameliorate the competitive threat to its subscription video services from OVDs:

Simply put, the core concern came down to whether the merged firm would have an increased incentive and ability to safeguard its integrated Pay TV business model and video revenues by limiting the ability of OVDs to compete effectively.…

Thus, at least when it suits it, the Chairman’s office appears not only to believe that this competitive threat is real, but also that Comcast, once the largest MVPD in the country, believes so strongly that the OVD competitive threat is real that it was willing to pay $45 billion for a mere “increased ability” to limit it.

UPDATE 4/26/2016

And now the FCC has approved the Charter/Time Warner Cable merger, imposing conditions that, according to Wheeler,

focus on removing unfair barriers to video competition. First, New Charter will not be permitted to charge usage-based prices or impose data caps. Second, New Charter will be prohibited from charging interconnection fees, including to online video providers, which deliver large volumes of internet traffic to broadband customers. Additionally, the Department of Justice’s settlement with Charter both outlaws video programming terms that could harm OVDs and protects OVDs from retaliation—an outcome fully supported by the order I have circulated today.

If MVPDs and OVDs don’t compete, why would such terms be necessary? And even if the threat is merely potential competition, as we note in our comments (citing to this, among other things),

particularly in markets characterized by the sorts of technological change present in video markets, potential competition can operate as effectively as—or even more effectively than—actual competition to generate competitive market conditions.

/UPDATE

Moreover, the proposal asserts that the “market” for MVPD set-top boxes isn’t competitive because “consumers have few alternatives to leasing set-top boxes from their MVPDs, and the vast majority of MVPD subscribers lease boxes from their MVPD.”

But the MVPD set-top box market is an aftermarket—a secondary market: no one buys a set-top box without first buying MVPD service, and the two are almost always purchased at the same time. As Ben Klein and many others have shown, direct competition in the aftermarket need not be plentiful for the market nevertheless to be competitive.

Whether consumers are fully informed or uninformed, consumers will pay a competitive package price as long as sufficient competition exists among sellers in the [primary] market.

The competitiveness of the MVPD market in which the antecedent choice of provider is made incorporates consumers’ preferences regarding set-top boxes, and makes the secondary market competitive.
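Klein’s package-pricing logic can be put in stylized terms. (The notation here is mine, for illustration only; it isn’t drawn from Klein’s work or from our comments.) Let the competitive price of the service-plus-box package be

$$P^{*} = c_{\text{service}} + c_{\text{box}}.$$

A provider that marks up its box by $m > 0$, charging $p_{\text{box}} = c_{\text{box}} + m$, retains subscribers only if its total package price remains competitive:

$$p_{\text{service}} + p_{\text{box}} \leq P^{*} \quad \Longrightarrow \quad p_{\text{service}} \leq c_{\text{service}} - m.$$

Whatever is extracted through the box price must be given back in the service price, so competition over the package disciplines box pricing even without head-to-head competition among box sellers.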

The proposal’s superficial and erroneous claim that the set-top box market isn’t competitive thus reflects bad economics, not competitive reality.

But it gets worse. The NPRM doesn’t deny the importance of OVDs and app-based competitors wholesale — it denies their importance only when convenient. As we note in our Comments:

The irony is that the NPRM seeks to give a leg up to non-MVPD distribution services in order to promote competition with MVPDs, while simultaneously denying that such competition exists… In order to avoid triggering [Section 629’s sunset provision,] the Commission is forced to pretend that we still live in the world of Blockbuster rentals and analog cable. It must ignore the Netflix behind the curtain—ignore the utter wealth of video choices available to consumers—and focus on the fact that a consumer might have a remote for an Apple TV sitting next to her Xfinity remote.

“Yes, but you’re aware that there’s an invention called television, and on that invention they show shows?” — Jules Winnfield

The NPRM proposes to create a world in which all of the content that MVPDs license from programmers, and all of their own additional services, must be provided to third-party device manufacturers under a zero-rate compulsory license. Apart from the complete absence of statutory authority to mandate such a thing (or, I should say, apart from statutory language specifically prohibiting such a thing), the proposed rules run roughshod over the copyrights and negotiated contract rights of content providers:

The current rulemaking represents an overt assault on the web of contracts that makes content generation and distribution possible… The rules would create a new class of intermediaries lacking contractual privity with content providers (or MVPDs), and would therefore force MVPDs to bear the unpredictable consequences of providing licensed content to third-parties without actual contracts to govern those licenses…

Because such nullification of license terms interferes with content owners’ right “to do and to authorize” their distribution and performance rights, the rules may facially violate copyright law… [Moreover,] the web of contracts that support the creation and distribution of content are complicated, extensively negotiated, and subject to destabilization. Abrogating the parties’ use of the various control points that support the financing, creation, and distribution of content would very likely reduce the incentive to invest in new and better content, thereby rolling back the golden age of television that consumers currently enjoy.

You’ll be hard-pressed to find any serious acknowledgement in the NPRM that its rules could have any effect on content providers, apart from this gem:

We do not currently have evidence that regulations are needed to address concerns raised by MVPDs and content providers that competitive navigation solutions will disrupt elements of service presentation (such as agreed-upon channel lineups and neighborhoods), replace or alter advertising, or improperly manipulate content…. We also seek comment on the extent to which copyright law may protect against these concerns, and note that nothing in our proposal will change or affect content creators’ rights or remedies under copyright law.

The Commission can’t rely on copyright to protect against these concerns, at least not without admitting that the rules require MVPDs to violate copyright law and to breach their contracts. And in fact, although it doesn’t acknowledge it, the NPRM does require the abrogation of content owners’ rights embedded in licenses negotiated with MVPD distributors to the extent that they conflict with the terms of the rule (which many of them must).   

“You keep using that word. I do not think it means what you think it means.” — Inigo Montoya

Finally, the NPRM derives its claimed authority for these rules from an interpretation of the relevant statute (Section 629 of the Communications Act) that is absurdly unreasonable. That provision requires the FCC to enact rules to assure the “commercial availability” of set-top boxes from MVPD-unaffiliated vendors. According to the NPRM,

we cannot assure a commercial market for devices… unless companies unaffiliated with an MVPD are able to offer innovative user interfaces and functionality to consumers wishing to access that multichannel video programming.

This baldly misconstrues a term plainly meant to refer to the manner in which consumers obtain their navigation devices, not how those devices should function. It also contradicts the Commission’s own, prior readings of the statute:

As structured, the rules will place a regulatory thumb on the scale in favor of third-parties and to the detriment of MVPDs and programmers…. [But] Congress explicitly rejected language that would have required unbundling of MVPDs’ content and services in order to promote other distribution services…. Where Congress rejected language that would have favored non-MVPD services, the Commission selectively interprets the language Congress did employ in order to accomplish exactly what Congress rejected.

And despite the above-noted problems (and more), the Commission has failed to do even a cursory economic evaluation of the relative costs of the NPRM, instead focusing narrowly on a single benefit it believes might occur (wider distribution of set-top boxes from third parties) despite the consistent failure of similar FCC efforts in the past.

All of the foregoing leads to a final question: At what point do the costs of these rules finally outweigh the perceived benefits? On the one hand are legal questions of infringement, inducements to violate agreements, and disruptions of complex contractual ecosystems supporting content creation. On the other hand is the presence of more boxes and apps that allow users to choose who gets to draw the UI for their video content…. At some point the Commission needs to take seriously the costs of its actions, and determine whether the public interest is really served by the proposed rules.

Our full comments are available here.

Today’s Canadian Competition Bureau (CCB) Google decision marks yet another regulator joining the chorus of competition agencies around the world that have already dismissed similar complaints relating to Google’s Search or Android businesses (including the US FTC, the Korea FTC, the Taiwan FTC, and AG offices in Texas and Ohio).

A number of courts around the world have also rejected competition complaints against the company, including courts in the US, France, the UK, Germany, and Brazil.

After an extensive, three-year investigation into Google’s business practices in Canada, the CCB

did not find sufficient evidence that Google engaged in [search manipulation, preferential treatment of Google services, syndication agreements, distribution agreements, exclusion of competitors from its YouTube mobile app, or tying of mobile ads with those on PCs and tablets] for an anti-competitive purpose, and/or that the practices resulted in a substantial lessening or prevention of competition in any relevant market.

Like the US FTC, the CCB did find fault with Google’s restrictions on the use of its AdWords API — but Google had already revised those terms worldwide following the FTC investigation, and it has committed to the CCB to maintain the revised terms for at least another five years.

Other than a negative ruling from Russia’s competition agency last year in favor of Yandex — essentially “the Russian Google,” and one of only a handful of Russian tech companies of significance (surely a coincidence…) — no regulator has found against Google on the core claims brought against it.

True, investigations in a few jurisdictions, including the EU and India, are ongoing. And a Statement of Objections in the EU’s Android competition investigation appears imminent. But at some point, regulators are going to have to take a serious look at the motivations of the entities that bring complaints before wasting more investigatory resources on their behalf.

Competitor after competitor has filed complaints against Google that amount to, essentially, a claim that Google’s superior services make it too hard to compete. But competition law doesn’t require that Google or any other large firm make life easier for competitors. Without a finding of exclusionary harm/abuse of dominance (and, often, injury to consumers), this just isn’t anticompetitive conduct — it’s competition. And the overwhelming majority of competition authorities that have examined the company have agreed.

Exactly when will regulators be a little more skeptical of competitors trying to game the antitrust laws for their own advantage?

Canada joins the chorus

The Canadian decision mirrors the reasoning that regulators around the world have employed in reaching the decision that Google hasn’t engaged in anticompetitive conduct.

Two of the more important results in the CCB’s decision relate to preferential treatment of Google’s services (e.g., promotion of its own Maps or Shopping results, instead of links to third-party aggregators of the same services) — the tired “search bias” claim that started all of this — and the distribution agreements that Google enters into with device manufacturers requiring inclusion of Google search as a default installation on Google Android phones.

On these key issues the CCB was unequivocal in its conclusions.

On search bias:

The Bureau sought evidence of the harm allegedly caused to market participants in Canada as a result of any alleged preferential treatment of Google’s services. The Bureau did not find adequate evidence to support the conclusion that this conduct has had an exclusionary effect on rivals, or that it has resulted in a substantial lessening or prevention of competition in a market.

And on search distribution agreements:

Google competes with other search engines for the business of hardware manufacturers and software developers. Other search engines can and do compete for these agreements so they appear as the default search engine…. Consumers can and do change the default search engine on their desktop and mobile devices if they prefer a different one to the pre-loaded default…. Google’s distribution agreements have not resulted in a substantial lessening or prevention of competition in Canada.

And here is the crucial point of the CCB’s insight (which, so far, everyone but Russia seems to appreciate): Despite breathless claims from rivals alleging they can’t compete in the face of their placement in Google’s search results, data barriers to entry, or default Google search on mobile devices, Google does actually face significant competition. Both the search bias and Android distribution claims were dismissed essentially because, whatever competitors may prefer Google do, its conduct doesn’t actually preclude access to competing services.

The True North strong and free [of meritless competitor complaints]

Exclusionary conduct must, well, exclude. But surfacing Google’s own “subjective” search results, even if they aren’t as high quality, doesn’t exclude competitors, according to the CCB and the other regulatory agencies that have also dismissed such claims. Similarly, consumers’ ability to switch search engines (“competition is just a click away,” remember), as well as OEMs’ ability to ship devices with different search engine defaults, ensure that search competitors can access consumers.

Former FTC Commissioner Josh Wright’s analysis of “search bias” in Google’s results applies with equal force to these complaints:

It is critical to recognize that bias alone is not evidence of competitive harm and it must be evaluated in the appropriate antitrust economic context of competition and consumers, rather [than] individual competitors and websites… [but these results] are not useful from an antitrust policy perspective because they erroneously—and contrary to economic theory and evidence—presume natural and procompetitive product differentiation in search rankings to be inherently harmful.

The competitors that bring complaints to antitrust authorities seek to make a demand of Google that is rarely made of any company: that it must provide access to its competitors on equal terms. But one can hardly imagine a valid antitrust complaint arising because McDonald’s refuses to sell a Whopper. The law on duties to deal is heavily circumscribed for good reason, as Josh Wright and I have pointed out:

The [US Supreme] Court [in Trinko] warned that the imposition of a duty to deal would threaten to “lessen the incentive for the monopolist, the rival, or both to invest in… economically beneficial facilities.”… Because imposition of a duty to deal with rivals threatens to decrease the incentive to innovate by creating new ways of producing goods at lower costs, satisfying consumer demand, or creating new markets altogether, courts and antitrust agencies have been reluctant to expand the duty.

Requiring Google to link to other powerful and sophisticated online search companies, or to provide them with placement on Google Android mobile devices, on precisely the same terms as its own products would reduce everyone’s incentive to invest in their underlying businesses in the first place.

This is the real threat to competition. And kudos to the CCB for recognizing it.

The CCB’s investigation was certainly thorough, and its decision appears to be well-reasoned. Other regulators should take note before moving forward with yet more costly investigations.