
Today ICLE released a white paper entitled, A critical assessment of the latest charge of Google’s anticompetitive bias from Yelp and Tim Wu.

The paper is a comprehensive response to a study by Michael Luca, Timothy Wu, Sebastian Couvidat, Daniel Frank, & William Seltzer, entitled, Is Google degrading search? Consumer harm from Universal Search.

The Wu, et al. paper will be one of the main topics of discussion at today’s Capitol Forum and George Washington Institute of Public Policy event on Dominant Platforms Under the Microscope: Policy Approaches in the US and EU, at which I will be speaking — along with a host of luminaries including, inter alia, Josh Wright, Jonathan Kanter, Allen Grunes, Catherine Tucker, and Michael Luca — one of the authors of the Universal Search study.

Follow the link above to register — the event starts at noon today at the National Press Club.

Meanwhile, here’s a brief description of our paper:

Late last year, Tim Wu of Columbia Law School (and now the White House Office of Management and Budget), Michael Luca of Harvard Business School (and a consultant for Yelp), and a group of Yelp data scientists released a study claiming that Google has been purposefully degrading search results from its more-specialized competitors in the area of local search. The authors’ claim is that Google is leveraging its dominant position in general search to thwart competition from specialized search engines by favoring its own, less-popular, less-relevant results over those of its competitors:

To improve the popularity of its specialized search features, Google has used the power of its dominant general search engine. The primary means for doing so is what is called the “universal search” or the “OneBox.”

This is not a new claim, and researchers have been attempting (and failing) to prove Google’s “bias” for some time. Likewise, these critics have drawn consistent policy conclusions from their claims, asserting that antitrust violations lie at the heart of the perceived bias. But the studies are systematically marred by questionable methodology and bad economics.

This latest study by Tim Wu, along with a cadre of researchers employed by Yelp (one of Google’s competitors and one of its chief antitrust provocateurs), fares no better, employing slightly different but equally questionable methodology, bad economics, and a smattering of new, but weak, social science. (For a thorough criticism of the inherent weaknesses of Wu et al.’s basic social science methodology, see Miguel de la Mano, Stephen Lewis, and Andrew Leyden, Focus on the Evidence: A Brief Rebuttal of Wu, Luca, et al (2016), available here).

The basic thesis of the study is that Google purposefully degrades its local searches (e.g., for restaurants, hotels, services, etc.) to the detriment of its specialized search competitors, local businesses, consumers, and even Google’s bottom line — and that this is an actionable antitrust violation.

But in fact the study shows nothing of the kind. Instead, the study is marred by methodological problems that, in the first instance, make it impossible to draw any reliable conclusions. Nor does the study show that Google’s conduct creates any antitrust-relevant problems. Rather, the construction of the study and the analysis of its results reflect a superficial and inherently biased conception of consumer welfare that completely undermines the study’s purported legal and economic conclusions.

Read the whole thing here.

Since the European Commission (EC) announced its first inquiry into Google's business practices in 2010, the company has been the subject of lengthy investigations by courts and competition agencies around the globe. Regulatory authorities in the United States, France, the United Kingdom, Canada, Brazil, and South Korea have all opened, and ultimately rejected, similar antitrust claims.

And yet the EC marches on, bolstered by Google’s myriad competitors, who continue to agitate for further investigations and enforcement actions, even as we — companies and consumers alike — enjoy the benefits of an increasingly dynamic online marketplace.

Indeed, while the EC has spent more than half a decade casting about for some plausible antitrust claim, the online economy has thundered ahead. Since 2010, Facebook has tripled its active users and multiplied its revenue ninefold; the number of apps available in the Amazon app store has grown from fewer than 4,000 to over 400,000 today; and there are almost 1.5 billion more Internet users globally than there were in 2010. And consumers are increasingly using new and different ways to search for information: Amazon's Alexa, Apple's Siri, Microsoft's Cortana, and Facebook's Messenger are a few of the many new innovations challenging traditional search engines.

Advertisers have adapted to this evolution, moving increasingly online, and from search to display ads as mobile adoption has skyrocketed. Social networks like Twitter and Snapchat have come into their own, competing for the same (and ever-increasing) advertising dollars. For marketers, advertising on social networks is now just as important as advertising in search. No wonder e-commerce sales have more than doubled, to almost $2 trillion worldwide; for the first time, consumers purchased more online than in stores this past year.

To paraphrase Louis C.K.: Everything is amazing — and no one at the European Commission is happy.

The EC’s market definition is fatally flawed

Like its previous claims, the Commission’s most recent charges are rooted in the assertion that Google abuses its alleged dominance in “general search” advertising to unfairly benefit itself and to monopolize other markets. But European regulators continue to miss the critical paradigm shift among online advertisers and consumers that has upended this stale view of competition on the Internet. The reality is that Google’s competition may not, and need not, look exactly like Google itself, but it is competition nonetheless. And it’s happening in spades.

The key to understanding why the European Commission’s case is fundamentally flawed lies in an examination of how it defines the relevant market. Through a series of economically and factually unjustified assumptions, the Commission defines search as a distinct market in which Google faces limited competition and enjoys an 80% market share. In other words, for the EC, “general search” apparently means only nominal search providers like Google and Bing; it doesn’t mean companies like Amazon, Facebook and Twitter — Google’s biggest competitors.  

But the reality is that “general search” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google happens to use to match consumers and advertisers doesn’t reflect the substitutability of other mechanisms that do the same thing — merely because these mechanisms aren’t called “search.”

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive.

Consumers today are increasingly using platforms like Amazon and Facebook as substitutes for the searches they might have run on Google or Bing. "Closed" platforms like the iTunes store and innumerable apps handle copious search traffic but also don't figure in the EC's market calculations. And so-called "dark social" interactions like email, text messages, and IMs drive huge amounts of some of the most valuable traffic on the Internet. This, in turn, has led to a competitive scramble to roll out completely new technologies like chatbots to meet consumers' informational (and merchants' advertising) needs.

Properly construed, Google’s market position is precarious

Advertising is Google's primary source of revenue, as it is for Facebook and Twitter (and practically every other Internet platform). Instead of charging for fancy hardware or offering services to users for a fee, Google offers search, the Android operating system, and a near-endless array of other valuable services for free to users. The company's very existence relies on attracting Internet users and consumers to its properties in order to effectively connect them with advertisers.

But being an online matchmaker is a difficult and competitive enterprise. Among other things, the ability to generate revenue turns crucially on the quality of the match: All else equal, an advertiser interested in selling widgets will pay more for an ad viewed by a user who can be reliably identified as being interested in buying widgets.

Google’s primary mechanism for attracting users to match with advertisers — general search — is substantially about information, not commerce, and the distinction between product and informational searches is crucially important to understanding Google’s market and the surprisingly limited and tenuous market power it possesses.

General informational queries aren’t nearly as valuable to advertisers: Significantly, only about 30 percent of Google’s searches even trigger any advertising at all. Meanwhile, as of 2012, one-third of product searches started on Amazon while only 13% started on a general search engine.

As economist Hal Singer aptly noted in 2012,

[the data] suggest that Google lacks market power in a critical segment of search — namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”

While Google Search clearly offers substantial value to advertisers, its ability to continue to do so is precarious when confronted with the diverse array of competitors that, like Facebook, offer a level of granularity in audience targeting that general search can’t match, or that, like Amazon, systematically offer up the most valuable searchers.

In order to compete in this market — one properly defined to include actual competitors — Google has had to constantly innovate to maintain its position. Unlike a complacent monopolist, it has evolved to meet changing consumer demand, shifting technology and inventive competitors. Thus, Google’s search algorithm has changed substantially over the years to make more effective use of the information available to ensure relevance; search results have evolved to give consumers answers to queries rather than just links, and to provide more-direct access to products and services; and, as users have shifted more and more of their time and attention to mobile devices, search has incorporated more-localized results.

Competitors want a free lunch

Critics complain, nevertheless, that these developments have made it harder, in one way or another, for rivals to compete. And the EC has provided a willing ear. According to Commissioner Vestager last week:

Google has come up with many innovative products that have made a difference to our lives. But that doesn’t give Google the right to deny other companies the chance to compete and innovate. Today, we have further strengthened our case that Google has unduly favoured its own comparison shopping service in its general search result pages…. (Emphasis added).

Implicit in this statement is the remarkable assertion that by favoring its own comparison shopping services, Google “den[ies] other companies the chance to compete and innovate.” Even assuming Google does “favor” its own results, this is an astounding claim.

First, it is not a violation of competition law simply to treat competitors’ offerings differently than one’s own, even for a dominant firm. Instead, conduct must actually exclude competitors from the market, without offering countervailing advantages to consumers. But Google’s conduct is not exclusionary, and there are many benefits to consumers.

As it has from the start of its investigations of Google, the EC begins with a flawed assumption: that Google's competitors both require, and may be entitled to, unfettered access to Google's property in order to compete. But this is patently absurd. Google is not an essential facility: Billions of users reach millions of companies every day through direct browser navigation, apps, email links, review sites and blogs, and countless other means — all without once touching Google.

Google Search results do not exclude competitors, whether comparison shopping sites or others. For example, 72% of TripAdvisor’s U.S. traffic comes from search, and almost all of that from organic results; other specialized search sites see similar traffic volumes.

More important, however, in addition to continuing to reach rival sites through Google Search, billions of consumers access rival services directly through their mobile apps. In fact, for Yelp,

Approximately 21 million unique devices accessed Yelp via the mobile app on a monthly average basis in the first quarter of 2016, an increase of 32% compared to the same period in 2015. App users viewed approximately 70% of page views in the first quarter and were more than 10 times as engaged as website users, as measured by number of pages viewed. (Emphasis added).

And a staggering 40 percent of mobile browsing is now happening inside the Facebook app, competing with the browsers and search engines pre-loaded on smartphones.

Millions of consumers also navigate directly to Google's rivals by simply typing a rival's web address into their browser's address bar. And as noted above, consumers are increasingly using Google rivals' new disruptive information engines like Alexa and Siri for their search needs. Even the traditional search engine space is competitive — in fact, according to Wired, as of July 2016:

Microsoft has now captured more than one-third of Internet searches. Microsoft’s transformation from a company that sells boxed software to one that sells services in the cloud is well underway. (Emphasis added).

With such numbers, it’s difficult to see how rivals are being foreclosed from reaching consumers in any meaningful way.

Meanwhile, the benefits to consumers are obvious: Google is directly answering questions for consumers rather than giving them a set of possible links to click through and further search. In some cases its results present entirely new and valuable forms of information (e.g., search trends and structured data); in others they serve to hone searches by suggesting further queries, or to help users determine which organic results (including those of its competitors) may be most useful. And, of course, consumers aren’t forced to endure these innovations if they don’t find them useful, as they can quickly switch to other providers.  

Nostalgia makes for bad regulatory policy

Google is not the unstoppable monopolist of the EU competition regulators' imagining. Rather, it is a continual innovator, forced to adapt to shifting consumer demand, changing technology, and competitive industry dynamics. And if they are to survive, Google's competitors (and complainants) must innovate as well, rather than trying to hamstring Google.

Dominance in technology markets — especially online — has always been ephemeral. Once upon a time, MySpace, AOL, and Yahoo were the dominant Internet platforms. Kodak, once practically synonymous with "instant camera," let the digital revolution pass it by. The invincible Sony Walkman was upended by mp3s and the iPod. Staid, keyboard-operated Blackberries and Nokias simply couldn't compete with app-driven, graphical platforms from Apple and Samsung. Even today, startups like Snapchat, Slack, and Spotify gain massive scale and upend entire industries with innovative new technology that can leave less-nimble incumbents in the dustbin of tech history.

Put differently, companies that innovate are able to thrive, while those that remain dependent on yesterday’s technology and outdated business models usually fail — and deservedly so. It should never be up to regulators to pick winners and losers in a highly dynamic and competitive market, particularly if doing so constrains the market’s very dynamism. As Alfonso Lamadrid has pointed out:

It is companies and not competition enforcers which will strive or fail in the adoption of their business models, and it is therefore companies and not competition enforcers who are to decide on what business models to use. Some will prove successful and others will not; some companies will thrive and some will disappear, but with experimentation with business models, success and failure are and have always been part of the game.

In other words, we should not forget that competition law is, or should be, business-model agnostic, and that regulators are – like anyone else – far from omniscient.

Like every other technology company before them, Google and its competitors must be willing and able to adapt in order to keep up with evolving markets — just as for Lewis Carroll’s Red Queen, “it takes all the running you can do, to keep in the same place.” Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters; companies that build their businesses around Google face a near-constantly evolving Google. In the face of such relentless market dynamism, neither consumers nor firms are well served by regulatory policy rooted in nostalgia.  

Thanks to Geoff for the introduction. I look forward to posting a few things over the summer.

I’d like to begin by discussing Geoff’s post on the pending legislative proposals designed to combat strategic abuse of drug safety regulations to prevent generic competition. Specifically, I’d like to address the economic incentive structure that is in effect in this highly regulated market.

Like many others, I first noticed the abuse of drug safety regulations to prevent competition when Turing Pharmaceuticals—then led by now-infamous CEO Martin Shkreli—acquired the manufacturing rights for the anti-parasitic drug Daraprim, and raised the price of the drug by over 5,000%. The result was a drug that cost $750 per tablet. Daraprim (pyrimethamine) is used to combat malaria and Toxoplasma gondii infections in immune-compromised patients, especially those with HIV. The World Health Organization includes Daraprim on its "List of Essential Medicines" as a medicine important to basic health systems. After the huge price hike, the drug was effectively out of reach for many insurance plans and uninsured patients who needed it for the six- to eight-week course of treatment for Toxoplasma gondii infections.
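
For a sense of scale, here is the back-of-the-envelope arithmetic behind that figure. It assumes the widely reported pre-hike list price of roughly $13.50 per tablet, a number not stated above, so treat it as an illustrative input rather than a figure from this post:

```latex
% Implied magnitude of the Daraprim price increase (illustrative sketch; the $13.50
% pre-hike price is a widely reported figure, not one given in the text above).
\[
\frac{\$750 - \$13.50}{\$13.50} \times 100\% \;\approx\; 5{,}456\%,
\qquad \text{i.e., roughly a } \tfrac{\$750}{\$13.50} \approx 55\text{-fold increase per tablet.}
\]
```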

It's not unusual for drugs to sell at huge multiples above their manufacturing cost. Indeed, a primary purpose of patent law is to allow drug companies to earn sufficient profits to engage in the expensive and risky business of developing new drugs. But Daraprim was first sold in 1953 and thus has been off patent for decades. With no intellectual property protection, Daraprim should, theoretically, now be available from generic drug manufacturers for only a little above cost. Indeed, this is what we see in the rest of the world. Daraprim is available all over the world for very cheap prices. The per tablet price is 3 rupees (US$0.04) in India, R$0.07 (US$0.02) in Brazil, US$0.18 in Australia, and US$0.66 in the UK.

So what gives in the U.S.? Or rather, what does not give? What in our system of drug distribution has gotten stuck and is preventing generic competition from swooping in to compete down the high price of off-patent drugs like Daraprim? The answer is not market failure, but rather regulatory failure, as Geoff noted in his post. While generics would love to enter a market where a drug is currently selling for high profits, they cannot do so without getting FDA approval for their generic version of the drug at issue. To get approval, a generic simply has to file an Abbreviated New Drug Application (“ANDA”) that shows that its drug is equivalent to the branded drug with which it wants to compete. There’s no need for the generic to repeat the safety and efficacy tests that the brand manufacturer originally conducted. To test for equivalence, the generic needs samples of the brand drug. Without those samples, the generic cannot meet its burden of showing equivalence. This is where the strategic use of regulation can come into play.

Geoff's post explains the potential abuse of Risk Evaluation and Mitigation Strategies ("REMS"). REMS are put in place to require certain safety steps (like testing a woman for pregnancy before prescribing a drug that can cause birth defects) or to restrict the distribution channels for dangerous or addictive drugs. As Geoff points out, there is evidence that a few brand name manufacturers have engaged in bad-faith refusals to provide samples, using the excuse of REMS or restricted distribution programs to (1) deny requests for samples, (2) prevent generic manufacturers from buying samples from resellers, and (3) deny approved generics access to the REMS system that is required for them to distribute their drugs. Once the FDA has certified that a generic manufacturer can safely handle the drug at issue, there is no legitimate basis for the owners of brand name drugs to deny samples to the generic maker. Expressed worries about liability from entering joint REMS programs with generics also ring hollow, for the most part, and would be ameliorated by the pending legislation.

It’s important to note that this pricing situation is unique to drugs because of the regulatory framework surrounding drug manufacture and distribution. If a manufacturer of, say, an off-patent vacuum cleaner wants to prevent competitors from copying its vacuum cleaner design, it is unlikely to be successful. Even if the original manufacturer refuses to sell any vacuum cleaners to a competitor, and instructs its retailers not to sell either, this will be very difficult to monitor and enforce. Moreover, because of an unrestricted resale market, a competitor would inevitably be able to obtain samples of the vacuum cleaner it wishes to copy. Only patent law can successfully protect against the copying of a product sold to the general public, and when the patent expires, so too will the ability to prevent copying.

Drugs are different. The only way a consumer can resell prescription drugs is by breaking the law. Pills bought from an illegal secondary market would be useless to generics for purposes of FDA approval anyway, because the chain of custody would not exist to prove that the samples are the real thing. This means generics need to get samples from the authorized manufacturer or distribution company. When a drug is subject to a REMS-required restricted distribution program, it is even more difficult, if not impossible, for a generic maker to get samples of the drugs for which it wants to make generic versions. Restricted distribution programs, which are used for dangerous or addictive drugs, by design very tightly control the chain of distribution so that the drugs go only to patients with proper prescriptions from authorized doctors.

A troubling trend has arisen recently in which drug owners put their branded drugs into restricted distribution programs not because of any FDA REMS requirement, but instead as a method to prevent generics from obtaining samples and making generic versions of the drugs. This is the strategy that Turing used before it raised prices over 5,000% on Daraprim. And Turing isn't the only company to use this strategy. It is being emulated by others, although perhaps not so conspicuously. For instance, Valeant Pharmaceuticals (the company that, with the help of the hedge fund Pershing Square, mounted an unsuccessful hostile bid for Allergan in 2014) acquired the rights to a series of off-patent drugs in 2015, adopted restricted distribution programs, and raised prices substantially, including hiking the prices of two life-saving heart drugs by 212% and 525%, respectively. Others have followed suit.

A key component of the strategy to profit from hiking prices on off-patent drugs while avoiding competition from generics is to select drugs that do not currently have generic competitors. Sometimes this is because a drug has recently come off patent, and sometimes it is because the drug is for a small patient population, and thus generics haven’t bothered to enter the market given that brand name manufacturers generally drop their prices to close to cost after the drug comes off patent. But with the strategic control of samples and refusals to allow generics to enter REMS programs, the (often new) owners of the brand name drugs seek to prevent the generic competition that we count on to make products cheap and plentiful once their patent protection expires.

Most brand name drug makers do not engage in withholding samples from generics and abusing restricted distribution and REMS programs. But the few that do cost patients and insurers dearly for important medicines that should be much cheaper once they go off patent. More troubling still is the recent strategy of taking drugs that have been off patent and cheap for years, and abusing the regulatory regime to raise prices and block competition. This growing trend of abusing restricted distribution and REMS to facilitate rent extraction from drug purchasers needs to be corrected.

Two bills addressing this issue are pending in Congress. Both bills (1) require drug companies to provide samples to generics after the FDA has certified the generic, (2) require drug companies to enter into shared REMS programs with generics, (3) allow generics to set up their own REMS compliant systems, and (4) exempt drug companies from liability for sharing products and REMS-compliant systems with generic companies in accordance with the steps set out in the bills. When it comes to remedies, however, the Senate version is significantly better. The penalties provided in the House bill are both vague and overly broad. The bill provides for treble damages and costs against the drug company “of the kind described in section 4(a) of the Clayton Act.” Not only is the application of the Clayton Act unclear in the context of the heavily regulated market for drugs (see Trinko), but treble damages may over-deter reasonably restrictive behavior by drug companies when it comes to distributing dangerous drugs.

The remedies in the Senate version are very well crafted to deter rent-seeking behavior while not overly deterring reasonable behavior. The remedial scheme is particularly good, because it punishes most severely those companies that attempt to make exorbitant profits on drugs by denying generic entry. The Senate version provides as a remedy for unreasonable delay that the plaintiff shall be awarded attorneys' fees, costs, and the defending drug company's profits on the drug at issue during the time of the unreasonable delay. This means that a brand name drug company that sells an old drug for a low price and delays sharing only because of honest concern about the safety standards of a particular generic company will not face terribly high damages if it is found unreasonable. On the other hand, a company that sends the price of an off-patent drug soaring and then attempts to block generic entry will know that it can lose all of its rent-seeking profits, plus the cost of the victorious generic company's attorneys' fees. This vastly reduces the incentive for the company owning the brand name drug to raise prices and keep competitors out. It likewise greatly increases the incentive of a generic company to enter the market and–if it is unreasonably blocked–to file a civil action that would transfer those excess profits to the generic. This provides a rather elegant fix to the regulatory gaming in this area that has become an increasing problem. The balancing of interests and incentives in the Senate bill should leave many congresspersons feeling comfortable supporting the bill.

On Friday, the International Center for Law & Economics filed comments with the FCC in response to Chairman Wheeler's NPRM (proposed rules) to "unlock" the MVPD (i.e., cable and satellite subscription video, essentially) set-top box market. Plenty has been written on the proposed rulemaking—for a few quick hits (among many others) see, e.g., Richard Bennett, Glenn Manishin, Larry Downes, Stuart Brotman, Scott Wallsten, and me—so I'll dispense with the background and focus on the key points we make in our comments.

Our comments explain that the proposal’s assertion that the MVPD set-top box market isn’t competitive is a product of its failure to appreciate the dynamics of the market (and its disregard for economics). Similarly, the proposal fails to acknowledge the complexity of the markets it intends to regulate, and, in particular, it ignores the harmful effects on content production and distribution the rules would likely bring about.

“Competition, competition, competition!” — Tom Wheeler

“Well, uh… just because I don’t know what it is, it doesn’t mean I’m lying.” — Claude Elsinore

At root, the proposal is aimed at improving competition in a market that is already hyper-competitive. As even Chairman Wheeler has admitted,

American consumers enjoy unprecedented choice in how they view entertainment, news and sports programming. You can pretty much watch what you want, where you want, when you want.

Of course, much of this competition comes from outside the MVPD market, strictly speaking—most notably from OVDs like Netflix. It’s indisputable that the statute directs the FCC to address the MVPD market and the MVPD set-top box market. But addressing competition in those markets doesn’t mean you simply disregard the world outside those markets.

The competitiveness of a market isn’t solely a function of the number of competitors in the market. Even relatively constrained markets like these can be “fully competitive” with only a few competing firms—as is the case in every market in which MVPDs operate (all of which are presumed by the Commission to be subject to “effective competition”).

The truly troubling thing, however, is that the FCC knows that MVPDs compete with OVDs, and thus that the competitiveness of the “MVPD market” (and the “MVPD set-top box market”) isn’t solely a matter of direct, head-to-head MVPD competition.

How do we know that? As I’ve recounted before, in a recent speech FCC General Counsel Jonathan Sallet approvingly explained that Commission staff recommended rejecting the Comcast/Time Warner Cable merger precisely because of the alleged threat it posed to OVD competitors. In essence, Sallet argued that Comcast sought to undertake a $45 billion merger primarily—if not solely—in order to ameliorate the competitive threat to its subscription video services from OVDs:

Simply put, the core concern came down to whether the merged firm would have an increased incentive and ability to safeguard its integrated Pay TV business model and video revenues by limiting the ability of OVDs to compete effectively.…

Thus, at least when it suits it, the Chairman’s office appears not only to believe that this competitive threat is real, but also that Comcast, once the largest MVPD in the country, believes so strongly that the OVD competitive threat is real that it was willing to pay $45 billion for a mere “increased ability” to limit it.

UPDATE 4/26/2016

And now the FCC has approved the Charter/Time Warner Cable merger, imposing conditions that, according to Wheeler,

focus on removing unfair barriers to video competition. First, New Charter will not be permitted to charge usage-based prices or impose data caps. Second, New Charter will be prohibited from charging interconnection fees, including to online video providers, which deliver large volumes of internet traffic to broadband customers. Additionally, the Department of Justice’s settlement with Charter both outlaws video programming terms that could harm OVDs and protects OVDs from retaliation—an outcome fully supported by the order I have circulated today.

If MVPDs and OVDs don’t compete, why would such terms be necessary? And even if the threat is merely potential competition, as we note in our comments (citing to this, among other things),

particularly in markets characterized by the sorts of technological change present in video markets, potential competition can operate as effectively as—or even more effectively than—actual competition to generate competitive market conditions.


Moreover, the proposal asserts that the “market” for MVPD set-top boxes isn’t competitive because “consumers have few alternatives to leasing set-top boxes from their MVPDs, and the vast majority of MVPD subscribers lease boxes from their MVPD.”

But the MVPD set-top box market is an aftermarket—a secondary market; no one buys set-top boxes without first buying MVPD service—and always or almost always the two are purchased at the same time. As Ben Klein and many others have shown, direct competition in the aftermarket need not be plentiful for the market to nevertheless be competitive.

Whether consumers are fully informed or uninformed, consumers will pay a competitive package price as long as sufficient competition exists among sellers in the [primary] market.

The competitiveness of the MVPD market in which the antecedent choice of provider is made incorporates consumers’ preferences regarding set-top boxes, and makes the secondary market competitive.
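
To see the logic in a stylized way (this is my own illustrative sketch, not drawn from our comments; the symbols below are assumed notation), suppose a subscriber choosing among providers compares the all-in cost of the package:

```latex
% Illustrative aftermarket-pricing logic (a sketch, not taken from the ICLE comments).
% p^S_j = provider j's monthly programming price, p^B_j = its monthly box lease fee,
% T = the number of months the subscriber expects to keep the service, J = number of rivals.
\[
L_j \;=\; T\,\bigl(p^{S}_j + p^{B}_j\bigr), \qquad j = 1, \dots, J .
\]
% If competition among the J providers (MVPDs and, realistically, OVDs) disciplines the
% package price L_j, then an above-cost box fee p^B_j must be offset by a lower p^S_j or
% better service, or the subscriber defects. The box "aftermarket" need not be separately
% competitive for consumers to pay a competitive package price.
```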

The proposal’s superficial and erroneous claim that the set-top box market isn’t competitive thus reflects bad economics, not competitive reality.

But it gets worse. The NPRM doesn’t actually deny the importance of OVDs and app-based competitors wholesale — it only does so when convenient. As we note in our Comments:

The irony is that the NPRM seeks to give a leg up to non-MVPD distribution services in order to promote competition with MVPDs, while simultaneously denying that such competition exists… In order to avoid triggering [Section 629’s sunset provision,] the Commission is forced to pretend that we still live in the world of Blockbuster rentals and analog cable. It must ignore the Netflix behind the curtain—ignore the utter wealth of video choices available to consumers—and focus on the fact that a consumer might have a remote for an Apple TV sitting next to her Xfinity remote.

“Yes, but you’re aware that there’s an invention called television, and on that invention they show shows?” — Jules Winnfield

The NPRM proposes to create a world in which all of the content that MVPDs license from programmers, and all of their own additional services, must be provided to third-party device manufacturers under a zero-rate compulsory license. Apart from the complete absence of statutory authority to mandate such a thing (or, I should say, apart from statutory language specifically prohibiting such a thing), the proposed rules run roughshod over the copyrights and negotiated contract rights of content providers:

The current rulemaking represents an overt assault on the web of contracts that makes content generation and distribution possible… The rules would create a new class of intermediaries lacking contractual privity with content providers (or MVPDs), and would therefore force MVPDs to bear the unpredictable consequences of providing licensed content to third-parties without actual contracts to govern those licenses…

Because such nullification of license terms interferes with content owners’ right “to do and to authorize” their distribution and performance rights, the rules may facially violate copyright law… [Moreover,] the web of contracts that support the creation and distribution of content are complicated, extensively negotiated, and subject to destabilization. Abrogating the parties’ use of the various control points that support the financing, creation, and distribution of content would very likely reduce the incentive to invest in new and better content, thereby rolling back the golden age of television that consumers currently enjoy.

You’ll be hard-pressed to find any serious acknowledgement in the NPRM that its rules could have any effect on content providers, apart from this gem:

We do not currently have evidence that regulations are needed to address concerns raised by MVPDs and content providers that competitive navigation solutions will disrupt elements of service presentation (such as agreed-upon channel lineups and neighborhoods), replace or alter advertising, or improperly manipulate content…. We also seek comment on the extent to which copyright law may protect against these concerns, and note that nothing in our proposal will change or affect content creators’ rights or remedies under copyright law.

The Commission can’t rely on copyright to protect against these concerns, at least not without admitting that the rules require MVPDs to violate copyright law and to breach their contracts. And in fact, although it doesn’t acknowledge it, the NPRM does require the abrogation of content owners’ rights embedded in licenses negotiated with MVPD distributors to the extent that they conflict with the terms of the rule (which many of them must).   

“You keep using that word. I do not think it means what you think it means.” — Inigo Montoya

Finally, the NPRM derives its claimed authority for these rules from an interpretation of the relevant statute (Section 629 of the Communications Act) that is absurdly unreasonable. That provision requires the FCC to enact rules to assure the “commercial availability” of set-top boxes from MVPD-unaffiliated vendors. According to the NPRM,

we cannot assure a commercial market for devices… unless companies unaffiliated with an MVPD are able to offer innovative user interfaces and functionality to consumers wishing to access that multichannel video programming.

This baldly misconstrues a term plainly meant to refer to the manner in which consumers obtain their navigation devices, not how those devices should function. It also contradicts the Commission’s own, prior readings of the statute:

As structured, the rules will place a regulatory thumb on the scale in favor of third-parties and to the detriment of MVPDs and programmers…. [But] Congress explicitly rejected language that would have required unbundling of MVPDs’ content and services in order to promote other distribution services…. Where Congress rejected language that would have favored non-MVPD services, the Commission selectively interprets the language Congress did employ in order to accomplish exactly what Congress rejected.

And despite the above-noted problems (and more), the Commission has failed to do even a cursory economic evaluation of the relative costs of the NPRM, instead focusing narrowly on one single benefit it believes might occur (wider distribution of set-top boxes from third parties) despite the consistent failure of similar FCC efforts in the past.

All of the foregoing leads to a final question: At what point do the costs of these rules finally outweigh the perceived benefits? On the one hand are legal questions of infringement, inducements to violate agreements, and disruptions of complex contractual ecosystems supporting content creation. On the other hand is the presence of more boxes and apps that allow users to choose who gets to draw the UI for their video content…. At some point the Commission needs to take seriously the costs of its actions, and determine whether the public interest is really served by the proposed rules.

Our full comments are available here.

It appears that the White House's zeal for progressive-era legal theory has … progressed (or regressed?) further. Late last week President Obama signed an Executive Order that nominally claims to direct executive agencies (and "strongly encourages" independent agencies) to adopt "pro-competitive" policies. It's called Steps to Increase Competition and Better Inform Consumers and Workers to Support Continued Growth of the American Economy, and was produced alongside an issue brief from the Council of Economic Advisors titled Benefits of Competition and Indicators of Market Power.

TL;DR version: the Order and its brief do not appear so much aimed at protecting consumers or competition, as they are at providing justification for favored regulatory adventures.

In truth, it’s not exactly clear what problem the President is trying to solve. And there is language in both the Order and the brief that could be interpreted in a positive light, and, likewise, language that could be more of a shot across the bow of “unruly” corporate citizens who have not gotten in line with the President’s agenda. Most of the Order and the corresponding CEA brief read as a rote recital of basic antitrust principles: price fixing bad, collusion bad, competition good. That said, there were two items in the Order that particularly stood out.

The (Maybe) Good

Section 2 of the Order states that

Executive departments … with authorities that could be used to enhance competition (agencies) shall … use those authorities to promote competition, arm consumers and workers with the information they need to make informed choices, and eliminate regulations that restrict competition without corresponding benefits to the American public. (emphasis added)

Obviously this is music to the ears of anyone who has thought that agencies should be required to do a basic economic analysis before undertaking brave voyages of regulatory adventure. And this is what the Supreme Court was getting at in Michigan v. EPA when it examined the meaning of the phrase “appropriate” in connection with environmental regulations:

One would not say that it is even rational, never mind “appropriate,” to impose billions of dollars in economic costs in return for a few dollars in health or environmental benefits.

Thus, if this Order follows the direction of Michigan v. EPA, and it becomes the standard for agencies to conduct cost-benefit analyses before issuing regulation (and to review old regulations through such an analysis), then wonderful! Moreover, this mandate to agencies to reduce regulations that restrict competition could lead to an unexpected reformation of a variety of regulations – even outside of the agencies themselves. For instance, the FTC is laudable in its ongoing efforts both to correct anticompetitive state licensing laws and to resist state-protected incumbents, such as taxi-cab companies.

Still, I have trouble believing that the President — and this goes for any president, really, regardless of party — would truly intend for agencies under his control to actually cede regulatory ground when a little thing like economic reality points in a different direction than official policy. After all, there was ample information available that the Title II requirements on broadband providers would be both costly and result in reduced capital expenditures, and the White House nonetheless encouraged the FCC to go ahead with reclassification.

And this isn't the first time that the President has directed agencies to perform retrospective review of regulation (see the Identifying and Reducing Regulatory Burdens Order of 2012). To date, however, there appears to be little evidence that the burdens of the regulatory state have lessened. Last year set a record for the page count of the Federal Register (80k+ pages), and the data suggest that the cost of the regulatory state is only increasing. Thus, despite the pleasant noises the Order makes with regard to imposing economic discipline on agencies – and despite the good example Canada has set for us in this regard – I am not optimistic about the actual result.

And the (maybe) good builds an important bridge to the (probably) bad of the Order. It is well and good to direct agencies to engage in economic calculation when they write and administer regulations, but such calculation must be in earnest, and must be directed by the learning that was hard earned over the course of the development of antitrust jurisprudence in the US. As Geoffrey Manne and Josh Wright have noted:

Without a serious methodological commitment to economic science, the incorporation of economics into antitrust is merely a façade, allowing regulators and judges to select whichever economic model fits their earlier beliefs or policy preferences rather than the model that best fits the real‐world data. Still, economic theory remains essential to antitrust law. Economic analysis constrains and harnesses antitrust law so that it protects consumers rather than competitors.

Unfortunately, the brief does not indicate that it is interested in more than a façade of economic rigor. For instance, it relies on the outmoded 50-firm revenue concentration numbers gathered by the Census Bureau to support the proposition that the industries themselves are highly concentrated and, therefore, are anticompetitive. But, it's been fairly well understood since the 1970s that concentration says nothing directly about monopoly power and its exercise. In fact, concentration can often be seen as an indicator of superior efficiency that results in better outcomes for consumers (depending on the industry).
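
For reference, the Census figures at issue are simple n-firm concentration ratios; the standard definitions below are added here for illustration (the brief does not spell them out):

```latex
% Standard concentration measures (definitions added for reference; not from the CEA brief).
% s_i = firm i's share of industry revenue, with firms ordered from largest to smallest.
\[
CR_n \;=\; \sum_{i=1}^{n} s_i \quad \text{(the brief uses } CR_{50}\text{)},
\qquad
HHI \;=\; \sum_{i} s_i^{2} .
\]
% Both measures depend entirely on how the "industry" is defined, which is why a high CR_50
% does not, by itself, establish monopoly power or consumer harm.
```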

The (Probably) Bad

Apart from general concerns (such as having a host of federal agencies with no antitrust expertise now engaging in competition turf wars), there is one specific area that could have a dramatically bad result for long-term policy, and that moreover reflects either ignorance or willful blindness toward antitrust jurisprudence. Specifically, the Order directs agencies to

identify specific actions that they can take in their areas of responsibility to build upon efforts to detect abuses such as price fixing, anticompetitive behavior in labor and other input markets, exclusionary conduct, and blocking access to critical resources that are needed for competitive entry. (emphasis added).

It then goes on to say that

agencies shall submit … an initial list of … any specific practices, such as blocking access to critical resources, that potentially restrict meaningful consumer or worker choice or unduly stifle new market entrants (emphasis added)

The generally uncontroversial language regarding price fixing and exclusionary conduct is a bromide – after all, as the Order notes, we already have the FTC and DOJ very actively policing this sort of conduct. What's novel here, however, is that the highlighted language above seems to amount to a mandate to executive agencies (and a strong suggestion to independent agencies) that they begin to seek out "essential facilities" within their regulated industries.

But “critical resources … needed for competitive entry” could mean nearly anything, depending on how you define competition and relevant markets. And asking non-antitrust agencies to integrate one of the more esoteric (and controversial) parts of antitrust law into their mission is going to be a recipe for disaster.

In fact, this may be one of the reasons why the Supreme Court declined to recognize the essential facilities doctrine as a distinct rule in Trinko, where it instead characterized the exclusionary conduct in Aspen Skiing as ‘at or near the outer boundary’ of Sherman Act § 2 liability.

In short, the essential facilities doctrine is widely criticized, by pretty much everyone. In their respected treatise, Antitrust Law, Herbert Hovenkamp and Philip Areeda have said that “the essential facility doctrine is both harmful and unnecessary and should be abandoned”; Michael Boudin has noted that the doctrine is full of “embarrassing weaknesses”; and Gregory Werden has opined that “Courts should reject the doctrine.” One important reason for the broad criticism is because

At bottom, a plaintiff … is saying that the defendant has a valuable facility that it would be difficult to reproduce … But … the fact that the defendant has a highly valued facility is a reason to reject sharing, not to require it, since forced sharing “may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.” (quoting Trinko)

Further, it's really hard to say when one business is so critical to a particular market that its own internal functions need to be exposed for competitors' advantage. For instance, is Big Data – which the CEA brief specifically notes as a potential "critical resource" — an essential facility when one company serves so many consumers that it has effectively developed an entire market that it dominates? (In case you are wondering, it's actually not). When exactly does a firm so outcompete its rivals that access to its business infrastructure can be seen by regulators as "essential" to competition? And is this just a set-up for punishing success — which hardly promotes competition, innovation or consumer welfare?

And, let’s be honest here, when the CEA is considering Big Data as an essential facility they are at least partially focused on Google and its various search properties. Google is frequently the target for “essentialist” critics who argue, among other things, that Google’s prioritization of its own properties in its own search results violates antitrust rules. The story goes that Google search is so valuable that when Google publishes its own shopping results ahead of its various competitors, it is engaging in anticompetitive conduct. But this is a terribly myopic view of what the choices are for search services because, as Geoffrey Manne has so ably noted before, “competitors denied access to the top few search results at Google’s site are still able to advertise their existence and attract users through a wide range of other advertising outlets[.]”

Moreover, as more and more users migrate to specialized apps on their mobile devices for a variety of content, Google’s desktop search becomes just one choice among many for finding information. All of this leaves to one side, of course, the fact that for some categories, Google has incredibly stiff competition.

Thus it is that

to the extent that inclusion in Google search results is about “Stiglerian” search-cost reduction for websites (and it can hardly be anything else), the range of alternate facilities for this function is nearly limitless.

The troubling thing here is that, given the breezy analysis of the Order and the CEA brief, I don’t think the White House is really considering the long-term legal and economic implications of its command; the Order appears to be much more about political support for favored agency actions already under way.

Indeed, despite the length of the CEA brief and the variety of antitrust principles recited in the Order itself, an accompanying release points to what is really going on (at least in part). The White House, along with the FCC, seems to think that the embedded streams in a cable or satellite broadcast should be considered a form of essential facility that is an indispensable component of video consumers’ choice (which is laughable given the magnitude of choice in video consumption options that consumers enjoy today).

And, to the extent that courts might apply the (controversial) essential facilities doctrine, an “indispensable requirement … is the unavailability of access to the ‘essential facilities’[.]” This is clearly not the case with much of what the CEA brief points to as examples of ostensibly laudable pro-competitive regulation.

The doctrine wouldn't apply, for instance, to the FCC's Open Internet Order since edge providers have access to customers over networks, even where network providers want to zero-rate, employ usage-based billing or otherwise negotiate connection fees and prioritization. And it also doesn't apply to the set-top box kerfuffle; while third parties aren't able to access the video streams that make up a cable broadcast, the market for consuming those streams is a single part of the entire video ecosystem. What really matters there is access to viewers, and the ability to provide services to consumers and compete for their business.

Yet, according to the White House, "the set-top box is the mascot" for the administration's competition Order, because, apparently, cable boxes represent "what happens when you don't have the choice to go elsewhere." ("Elsewhere" to the White House, I assume, cannot include Roku, Apple TV, Hulu, Netflix, and a myriad of other video options that consumers can currently choose among.)

The set-top box is, according to the White House, a prime example of the problem that

[a]cross our economy, too many consumers are dealing with inferior or overpriced products, too many workers aren’t getting the wage increases they deserve, too many entrepreneurs and small businesses are getting squeezed out unfairly by their bigger competitors, and overall we are not seeing the level of innovative growth we would like to see.

This is, of course, nonsense. Consumers enjoy an incredible array of low-cost, high-quality goods (including video options) – far more than at any point in history. After all:

From cable to Netflix to Roku boxes to Apple TV to Amazon FireStick, we have more ways to find and watch TV than ever — and we can do so in our living rooms, on our phones and tablets, and on seat-back screens at 30,000 feet. Oddly enough, FCC Chairman Tom Wheeler … agrees: “American consumers enjoy unprecedented choice in how they view entertainment, news and sports programming. You can pretty much watch what you want, where you want, when you want.”

Thus, I suspect that the White House has its eye on a broader regulatory agenda.

For instance, the Department of Labor recently announced that it would be extending its reach in the financial services industry by changing the standard for when financial advice might give rise to a fiduciary relationship under ERISA. It seems obvious that the SEC or FINRA could have taken up the slack for any financial services regulatory issues – it’s certainly within their respective wheelhouses. But that’s not the direction the administration took, possibly because SEC and FINRA are independent agencies. Thus, the DOL – an agency with substantially less financial and consumer protection experience than either the SEC or FINRA — has expansive new authority.

And that’s where more of the language in the Order comes into focus. It directs agencies to “ensur[e] that consumers and workers have access to the information needed to make informed choices[.]” The text of the DOL rule develops for itself a basis in competition law as well:

The current proposal’s defined boundaries between fiduciary advice, education, and sales activity directed at large plans, may bring greater clarity to the IRA and plan services markets. Innovation in new advice business models, including technology-driven models, may be accelerated, and nudged away from conflicts and toward transparency, thereby promoting healthy competition in the fiduciary advice market.

Thus, it's hard to see what the White House is doing in the Order, other than laying the groundwork for expansive authority of non-independent executive agencies under the thin guise of promoting competition. Perhaps the President believes that couching this expansion in free-market terms (i.e., that it's "pro-competition") will somehow help the initiatives go through with minimal friction. But there is nothing in the Order or the CEA brief to provide any confidence that competition will, in fact, be promoted. And in the end I have trouble seeing how this sort of regulatory adventurism does not run afoul of separation of powers issues, as well as assorted other legal challenges.

Finally, conjuring up a regulatory version of the essential facilities doctrine as a support for this expansion is simply a terrible idea — one that smacks much more of industrial policy than of sound regulatory reform or consumer protection.

By William Kolasky

Jon Jacobson in his initial posting claims that it would be “hard to find an easier case” than Apple e-Books, and David Balto and Chris Sagers seem to agree. I suppose that would be true if, as Richard Epstein claims, “the general view is that horizontal arrangements are per se unlawful.”

That, however, is not the law, and has not been since William Howard Taft's 1898 opinion in Addyston Pipe. In his opinion, borrowing from an earlier dissenting opinion by Justice Edward Douglass White in Trans-Missouri Freight Ass'n, Taft surveyed the common law of restraints of trade. He showed that it was already well established in 1898 that even horizontal restraints of trade were not necessarily unlawful if they were ancillary to some legitimate business transaction or arrangement.

Building on that opinion, the Supreme Court, in what is now a long series of decisions beginning with BMI and continuing through Actavis, has made it perfectly clear that even a horizontal restraint cannot be condemned as per se unlawful unless it is a “naked” restraint that, on its face, could not serve any “plausible” procompetitive business purpose. That there are many horizontal arrangements that are not per se unlawful is shown by the DOJ’s own Competitor Collaboration Guidelines, which provide many examples, including joint sales agents.

As I suggested in my initial posting, Apple may have dug its own grave by devoting so much effort to denying the obvious—namely, that it had helped facilitate a horizontal agreement among the publishers, just as the lower courts found. Apple might have had more success had it instead spent more time explaining why it needed a horizontal agreement among the publishers as to the terms on which they would designate Apple as their common sales agent in order for it to successfully enter the e-book market, and why those terms did not amount to a naked horizontal price fixing agreement. Had it done so, Apple likely could have made a stronger case for why a rule of reason review was necessary than it did by trying to fit a square peg into a round hole by insisting that its agreements were purely vertical.

By Morgan Reed

In Philip K. Dick’s famous short story that inspired the Total Recall movies, a company called REKAL could implant “extra-factual memories” into the minds of anyone. That technology may be fictional, but the Apple eBooks case suggests that the ability to insert extra-factual memories into the courts already exists.

The Department of Justice, the Second Circuit majority, and even the Solicitor General’s most recent filing opposing cert. all assert that the large publishing houses invented a new “agency” business model as a way to provide leverage to raise prices, and then pushed it on Apple.

The basis of the government’s claim is that Apple had “just two months to develop a business model” once Steve Jobs had approved the “iBookstore” ebook marketplace. The government implies that Apple was a company so obviously old, inept, and out-of-ideas that it had to rely on the big publishers for an innovative business model to help it enter the market. And the court bought it “wholesale,” as it were. (Describing Apple’s “a-ha” moment when it decided to try the agency model, the court notes, “[n]otably, the possibility of an agency arrangement was first mentioned by Hachette and HarperCollins as a way ‘to fix Amazon pricing.'”)

The claim has no basis in reality, of course. Apple had embraced the agency model long before, as it sought to disrupt the way software was distributed. In just the year prior, Apple had successfully launched the App Store, a ground-breaking example of the agency model that started with only 500 apps but had grown to more than 100,000 in 12 months. This was an explosion of competition — remember, nearly all of those apps represented a new publisher: 100,000 new potential competitors.

So why would the government create such an absurd fiction?

Because without that fiction, Apple moves from “conspirator” to “competitor.” Instead of anticompetitive scourge, it becomes a disruptor, bringing new competition to an existing market with a single dominant player (Amazon Kindle), and shattering the control held by the existing publishing industry.

More than a decade before the App Store, software developers had observed that the wholesale model for distribution created tremendous barriers to entry, increased expense, and incredible delays in getting to market. Developers were beholden to a tiny number of physical stores that sold shelf space and required kickbacks (known as spiffs). Today, there are legions of developers producing app content, and developers have earned more than $10 billion in sales through Apple’s App Store. Anyone with an idea for an app, or for that matter a book, can take it straight to consumers rather than having to convince a publisher, wholesaler, or retailer that it is worth purchasing and marketing.

This disintermediation is of critical benefit to consumers — and yet the Second Circuit missed it. The court chose instead to focus on the claim that if the horizontal competitors conspired, then Apple, which had approached the publishers to ensure initial content would exist at time of launch, was complicit. Somehow Apple could be a horizontal competitor even though it wasn’t part of the publishing industry!

There was another significant consumer and competitive benefit from Apple’s entry into the market and the shift to the agency model. Prior to the Apple iPad, truly interactive books were mostly science fiction, and the few pilot projects that existed had little consumer traction. Amazon, which held 90% of the electronic books market, chose to focus on creating technology that mirrored the characteristics of reading on paper: a black and white screen and the barest of annotation capabilities.

When the iPad was released, Apple sent up a signal flag that interactivity would be a focal point of the technology by rolling out tools that would allow developers to access the iPad’s accelerometer and touch-sensitive screen to create an immersive experience. The result? Products that help children with learning disabilities, and competitors fighting back with improved products.

Finally, Apple’s impact on consumers and competition was profound. Amazon switched, as well, and the nascent world of self-publishing exploded. Books like Hugh Howey’s Wool series (soon to be a major motion picture) were released as smaller chunks for only 99 cents. And “The Martian,” which is up for several Academy Awards, found a home and an audience long before any major publisher came calling.

We all need to avoid the trip to REKAL and remember what life was like before the advent of the agency model. Because if the Second Circuit decision is allowed to stand, the implication for any outside competitor looking to disrupt a market is as grim and barren as the surface of Mars.

By Thomas Hazlett

The Apple e-books case is a throwback to Dr. Miles, the 1911 Supreme Court decision that managed to misinterpret the economics of competition and so thwart productive activity for over a century. The active debate here at TOTM reveals why.

The District Court and Second Circuit have employed a per se rule to find that the Apple e-books agreement with five major publishers constituted a violation of Section 1 of the Sherman Act. Given the active cooperation in contract negotiations involving multiple horizontal competitors (the publishers) and the Apple offer, which appears to have raised prices paid for e-books, the conclusion that this is a case of horizontal collusion appears to some to be a slam dunk. “Try as one may,” writes Jonathan Jacobson, “it is hard to find an easier antitrust case than United States v. Apple.”

I’m guessing that that is what Charles Evans Hughes thought about the Dr. Miles case in 1911.

Upon scrutiny, the apparent simplicity in either instance evaporates. Dr. Miles has since been revised by GTE Sylvania, Leegin, and (thanks, Keith Hylton) Business Electronics v. Sharp Electronics. Let us look here at the pending Apple dispute.

First, the Second Circuit verdict was not only a split decision on application of the per se rule; the dissent also ably stated a case for why the Apple e-books deal should be regarded as pro-competitive and, thus, legal.

Second, the price increase cited as determinative occurred in a two-sided market; the fact asserted does not establish a monopolistic restriction of output. Further analysis, as called for under the rule of reason, is needed to flesh out the totality of the circumstances and the net impact of the Apple-publisher agreement on consumer welfare. That includes evidence regarding what happens to total revenues as market structure and prices change.

Third, a new entrant emerged as a result of the actions undertaken — the agreements pointedly did not “lack … any redeeming virtue” (Northwest Wholesale Stationers, 1985), the justification for per se illegality. The fact that a new platform — Apple challenging Amazon’s e-book dominance — was both cause and effect of the alleged anti-competitive behavior is a textbook example of ancillarity. The “naked restraints” that publishers might have imposed, had Apple not brought new products and alternative content distribution channels into the mix, were thus dressed up. It is argued by some that the clothes were skimpy. But that fashion statement is what a rule of reason analysis is needed to determine.

Fourth, the successful market foray that came about in the two-sided e-book market is a competitive victory not to be trifled with. As the Supreme Court determined in Leegin: A “per se rule cannot be justified by the possibility of higher prices absent a further showing of anticompetitive conduct. The antitrust laws are designed to protect interbrand competition from which lower prices can later result.” The Supreme Court needs to overturn U.S. v. Apple, as decided by the Second Circuit, so that the “later result” can be reasonably examined.

Fifth, lock-in is avoided with a rule of reason. As the Supreme Court said in Leegin:

As courts gain experience considering the effects of these restraints by applying the rule of reason… they can establish the litigation structure to ensure the rule operates to eliminate anticompetitive restraints….

The lock-in, conversely, comes with per se rules that nip the analysis in the bud, assuming simplicity where complexity obtains.

Sixth, Judge Denise Cote, who issued the District Court ruling against Apple, shows why the rule of reason is needed to counter her per se approach:

Here we have every necessary component: with Apple’s active encouragement and assistance, the Publisher Defendants agreed to work together to eliminate retail price competition and raise e-book prices, and again with Apple’s knowing and active participation, they brought their scheme to fruition.

But that cannot be “every necessary component.” It is not in Apple’s interest to raise prices, but to lower the prices it pays. Something more has to be going on. Indeed, in pointing to the higher prices, the judge unwittingly cites an unarguable pro-competitive aspect of Apple’s foray: it is competing with Amazon and bidding resources away from a rival. And the rival is, arguably, an incumbent with market power. This cannot be the end of the analysis. That it is constitutes a throwback to the anti-competitive per se rule of Dr. Miles.

Seventh, in oral arguments at the Second Circuit, Judge Raymond J. Lohier, Jr. directed a question to Justice Department counsel, asking how Apple and the publishers “could have broken Amazon’s monopoly of the e-book market without violating antitrust laws.” The DOJ attorney responded, according to an article in The New Yorker, by advising that

Apple could have let the competition among companies play out naturally without pursuing explicit strategies to push prices higher—or it could have sued, or complained to the Justice Department and to federal regulatory authorities.

But the DOJ itself brought no complaint against Amazon — it, instead, sued Apple. And the admonition that an aggressive innovator should sit back and let things “play out naturally” is exactly what will kill efficiency-enhancing “creative destruction.” Moreover, the government’s view that Apple “pursued an explicit strategy to push prices higher” fails to acknowledge that Apple was the buyer. Such as it was, Apple’s effort was to compete, luring content suppliers from a rival. The response of the government is to recommend, on the one hand, litigation it will not itself pursue and, on the other, passive acceptance that avoids market disruption. It displays the error, as Judge Jacobs’ Second Circuit dissent puts it, that “antitrust law is offended by gloves off competition.” Why might innovation not be well served by this policy?

Eighth, the choice of rule of reason does not let Apple escape scrutiny, but applies it to both sides of the argument. It adds important policy symmetry. Dr. Miles impeded efficient market activity for nearly a century. The creation of new platforms in Internet markets ought not to have such handicaps. It should be recalled that, in introducing its iTunes platform and its vertically linked iPod music players, circa 2002, the innovative Apple likewise faced attack from competition policy makers (more in Europe, indeed, than the U.S.). Happily, progress in the law had loosened barriers to business model innovation, and the revolutionary ecosystem was allowed to launch. Key to that progressive step was the bulk bargain struck with music labels. Richard Epstein thinks that such industry-wide dealing now endangers Apple’s more recent platform launch. Perhaps. But there is no reason to jump to that conclusion, and much to find out before we embrace it.

By Chris Sagers

United States v. Apple has fascinated me continually ever since the instantly-sensational complaint was made public, more than three years ago. Just one small, recent manifestation of the larger theme that makes it so interesting is the improbable range of folks who apparently consider certiorari rather likely—not least some commenters here, and even SCOTUSblog, which listed the case on their “Petitions We’re Watching.” It seems improbable, I say, not because reasonable people couldn’t differ on the policy issues. In this day and age somebody pops up to doubt every antitrust case brought against anybody no matter what. Rather, on the traditional criteria, the case just seems really ill-suited for cert.[*]

But it is in keeping with the larger story that people might expect the Court to take this basically humdrum fact case in which there’s no circuit split. People have been savaging this case since its beginnings, despite the fact that to almost all antitrust lawyers it was such a legal slam dunk that, so long as the government could prove its facts, it couldn’t lose.

And so I’m left with questions I’ve been asking since the case came out. Why, given the straightforward facts, nicely fitting a per se standard generally thought to be well-settled, involving conduct that on the elaborate trial record had no plausible effect except a substantial price increase,[**] do so many people hate this case? Why, more specifically, do so many people think there is something special about it, such that it shouldn’t be subject to the same rules that would apply to anybody else who did what these defendants did?

To be clear, I think the case is interesting. Big time. But what is interesting is not its facts or the underlying conduct or anything about book publishing or technological change or any of that. In other words, I don’t think the case is special. Like Jonathan Jacobson, I think it is simple.  What is remarkable is the reactions it has generated, across the political spectrum.

In the years of its pendency, on any number of panels and teleconferences and brown-bags and so on we’ve heard BigLaw corporate defense lawyers talking about the case like they’re Louis Brandeis. The problem, you see, is not a naked horizontal producer cartel coordinated by a retail entrant with a strong incentive to discipline its retail rival. No, no, no. The problem was actually Amazon, and the problem with Amazon was that it is big. Moreover, this case is about entry, they say, and entry is what antitrust is all about. Entry must be good, because numerosity in and of itself is competition. Consider too the number of BigLaw antitrust partners who’ve publicly argued that Amazon is in fact a monopolist, and that it engaged in predatory pricing, of all things.

When has anyone ever heard this group of people talk like that?

For another example, consider how nearly identical have been the views of left-wing critics like the New America Foundation’s Barry Lynn to those of the Second Circuit dissenter in Apple, the genteel, conservative Bush appointee, Judge Dennis Jacobs. They both claim, as essentially their only argument, that Amazon is a powerful firm, which can be tamed only if publishers can set their own retail prices (even if they do so collusively).

And there are so many other examples. The government’s case was condemned by no less than a Democrat and normally pro-enforcement member of the Senate antitrust committee, as it was by two papers as otherwise divergent as the Wall Street Journal and the New York Times. Meanwhile, the damnedest thing about the case, as I’ll show in a second, is that it frequently causes me to talk like Robert Bork.

So what the hell is going on?

I have a theory.  We in America have almost as our defining character, almost uniquely among developed nations, a commitment to markets, competition, and individual enterprise. But we tend to forget until a case like Apple reminds us that markets, when they work as they are supposed to, are machines for producing pain. Firms fail, people lose jobs, valuable institutions—like, perhaps, the paper book—are sometimes lost. And it can be hard to believe that such a free, decentralized mess will somehow magically optimize organization, distribution, and innovation. I think the reason people find a case like Apple hard to support is that, because we find all that loss and anarchy so hard to swallow, we as a people do not actually believe in competition at all.

I think it helps in making this point to work through the individual arguments that the Apple defendants and their supporters have made, in court and out. For my money, what we find is not only that most of the arguments are not really that strong, but that they are the same arguments that all defendants make, all the time. As it turns out, there has never been an antitrust defendant that didn’t think its market was special.

Taking the arguments I’ve heard, roughly in increasing order of plausibility:

  • Should it matter that discipline of Amazon’s aggressive pricing might help keep the publisher defendants in business? Hardly. The lamentations of the publishers seem overblown—they may be forced to adapt, and it may not be painless, but that is much more likely at the moment than their insolvency. And if they are forced out because they cannot compete on price, then that is exactly what is supposed to happen. Econ 101.
  • Was Apple’s entry automatically good just because it was entry? Emphatically no. There is no rule in antitrust that entry is inherently good, and a number of strong rules to the contrary (consider, for example, the very foundation of the Brooke Group predation standard, which is that we should provide no legal protection to less efficient competitors, including entrants). That is for a simple reason: entry is good when it causes quality-adjusted price to go down. The opposite occurred in Apple.[***]
  • Is Amazon the real villain, so obviously that we should allow its suppliers to regulate its power through a horizontal cartel? I rather think not. While I have no doubt that Amazon is a dangerous entity that probably will merit scrutiny on any number of grounds now or in the future, it seems implausible that it priced e-books predatorily, and surely not under the legal standard that currently prevails in the United States. In fact, an illuminating theme in The Everything Store, Brad Stone’s comprehensive study of the company, was the ubiquity of supplier allegations of Amazon’s predation in all kinds of products, complaints that have gone on throughout the company’s two-decade existence. I don’t believe Amazon is any hero or that it poses no threats, but what it’s done in these cases is just charge lower prices. It’s been able to do so in a sustained manner mainly through innovation in distribution. And in any case, whether Amazon is big and bad or whatever, the right tool to constrain it is not a price-fixing cartel. No regulator cares less about the public interest.
  • Does it make the case special in some way that a technological change drove the defendants to their conspiracy? No. The technological change afoot was in effect just a change in costs. It is much cheaper to deliver content electronically than in hard copy, not least because, as things have unfolded, consumers have actually paid for and own most of the infrastructure. To that extent there’s nothing different about Apple than any case in which an innovation in production or distribution has given one player a cost advantage. In fact, the primary reason the publishers need to defend against pricing of e-books at some measure of their actual cost is that their whole structure is devoted to an expensive intermediating function that becomes largely irrelevant with digital distribution.
  • Is there reason to believe that a horizontal cartel orchestrated by a powerful distributor will achieve better quality-adjusted prices, which I take to be Geoff Manne’s overall theme? I mean, come on. This is essentially a species of destructive competition argument, that otherwise healthy markets can be so little trusted efficiently to supply products that customers want that we’ll put the government to a full rule of reason challenge to attack a horizontal cartel? Do we believe in competition at all?
  • Should it matter that valuable cultural institutions may be at risk, including the viability of paper books, independent bookstores, and perhaps the livelihoods of writers or even literature itself? This seems more troubling than the other points, but it is hardly unique to the case or a particularly good argument for self-help by cartel. Consider, if you will, another, much older case. The sailing ship industry was thousands of years old and of great cultural and human significance when it met its demise in the 1870s at the hands of the emerging steamship industry. Ships that must await the fickle winds cannot compete with those that can offer the reliable, regular departures that shipper customers desire. There followed a period of desperate price war, after which the sail industry was destroyed. That was sad, because tall-masted sailing ships are very swashbuckling and fun, and were entwined in our literature and culture. But should we have allowed the two industries to fix their prices, to preserve sailing ships as a living technology?

There are other arguments, and we could keep working through them one by one, but the end result is the same. The arguments mostly are weak, and even those with a bit more heft do nothing more than pose the problem inherent in that very last point. Healthy markets sometimes produce pain, with genuinely regrettable consequences.  But that just forces us to ask: do we believe in competition or don’t we?


[*] Except possibly for one narrow issue, Apple is at this point emphatically a fact case, and the facts were resolved on an extensive record by an esteemed trial judge, in a long and elaborate opinion, and left undisturbed on appeal (even in the strongly worded dissent). The one narrow issue that is actually a legal one, and that Apple mainly stresses in its petition—whether in the wake of Leegin the hub in a hub-and-spoke arrangement can face per se liability—is one on which I guess people could plausibly disagree. But even when that is the case this Court virtually never grants cert. in the absence of a significant circuit split, and here there isn’t one.

Apple points only to one other Circuit decision, the Third Circuit’s Toledo Mack. It is true, as Apple argues, that a passage in Toledo Mack seemed to read language from Leegin fairly broadly, and to apply even when there is horizontal conspiracy at the retail level. But Toledo Mack was not a hub-and-spoke case. While the plaintiff alleged a horizontal conspiracy among retailers of heavy trucks, and Mack Trucks’ later acquiescence in it, Mack played no role in coordinating the conspiracy. Separately, whether Toledo Mack really conflicts with Apple or not, the law supporting the old per se rule against hub-and-spoke conspiracies is pretty strong (take a look, for example, at pp. 17-18 of the Justice Department’s opposition brief).

So, I suppose one might think there is no distinction between a hub-and-spoke and a case like Toledo Mack, in which a manufacturer merely agreed after the fact to assist an existing retail conspiracy, and that there is therefore a circuit split, but that would be rather in contrast to a lot of Supreme Court authority. On the other hand, if there is some legal difference between a hub-and-spoke and the facts of Toledo Mack, then Toledo Mack is relevant only if it is understood to have read Leegin to apply to all “vertical” conduct, including true hub-and-spoke agreements. But that would be a broad reading indeed of both Leegin and Toledo Mack. It would require believing that Leegin reversed sub silentio a number of important decisions on an issue that was not before the Court in Leegin. It would also make a circuit split out of a point that would be only dicta in Toledo Mack. And yes, yes, yes, I know, Judge Jacobs in dissent below himself said that his panel’s decision created a circuit split with Toledo Mack. But I mean, come on. A circuit split means that two holdings are in conflict, not that one bit of dicta commented on some other bit of dicta.

A whole different reason cert. seems improbable is that the issue presented is whether per se treatment was appropriate. But the trial court specifically found the restraint to have been unreasonable under a rule of reason standard. Of course that wouldn’t preclude the Court from reversing the trial court’s holding that the per se rule applies, but it would render a reversal almost certainly academic in the case actually before the Court.

Don’t get me wrong. Nothing the courts do really surprises me anymore, and there are still four members of the Court, even in the wake of Justice Scalia’s passing, who harbor open animosity for antitrust and a strong fondness for Leegin. It is also plausible that those four will see the case Apple’s way, and favor reversing Interstate Circuit (though that seems unlikely to me; read a case like Ticor or North Carolina Dental Examiners if you want to know how Anthony Kennedy feels about naked cartel conduct). But the ideological affinities of the Justices, in and of themselves, just don’t usually turn an otherwise ordinary case into a cert-worthy one.

[**] Yes, yes, yes, Grasshopper, I know, Apple argued that in fact its entry increased quality and consumer choice, and also put on an argument that the output of e-books actually expanded during the period of the publishers’ conspiracy. But, a couple of things. First, as the government observed in some juicy briefing in the case, and Judge Cote found in specific findings, each of Apple’s purported quality enhancements turned out to involve either other firms’ innovations or technological enhancements that appeared in the iPad before Apple ever communicated with the publishers. As for the expanded output argument, it was fairly demolished by the government’s experts, a finding not disturbed even in Judge Jacobs’ dissent.

In any case, any benefit Apple did manage to supply came at the cost of a price increase of fifty freaking percent, across thousands of titles, sustained for the entire two years that the conspiracy survived.

[***] There have also been the usual squabbles over factual details that are said to be very important, but these points are especially uninteresting. E.g., the case involved “MFNs” and “agency contracts,” and there is supposed to be some magic in either their vertical nature or the great uncertainty of their consequences that count against per se treatment. There isn’t. Neither the government’s complaint, the district court, nor the Second Circuit attacked the bilateral agreements in and of themselves; on the contrary, both courts emphatically stressed that they only found illegal the horizontal price fixing conspiracy and Apple’s role in coordinating it.

Likewise, some stress that the publisher defendants in fact earned slightly less per price-fixed book under their agency agreements than they had under the prior wholesale model. Why would they do that, if there weren’t some pro-competitive reason? Simple. The real money in trade publishing was not then, and is not now, in the puny e-book sector, but in hard-cover, new-release best sellers, which publishers have long sold at very significant mark-ups over cost. Those margins were threatened by Amazon’s very low e-book prices, and the loss on agency sales was worth it to preserve the real money makers.
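To see the per-book trade-off being described, here is a minimal back-of-the-envelope sketch. The dollar figures and the 30% commission in it are illustrative assumptions for this post, not numbers taken from the trial record. The only point is the structure of the comparison: under wholesale the publisher’s per-copy take is the wholesale price regardless of the retail price, while under agency it is the retail price net of the platform’s commission.

    # Back-of-the-envelope comparison of per-book publisher revenue under the
    # wholesale and agency models. All dollar figures here are hypothetical.

    def wholesale_revenue(wholesale_price: float) -> float:
        """Wholesale model: the publisher is paid the wholesale price per copy,
        no matter what price the retailer then charges consumers."""
        return wholesale_price

    def agency_revenue(retail_price: float, commission: float = 0.30) -> float:
        """Agency model: the publisher sets the retail price and keeps it,
        minus the platform's commission (assumed here to be 30%)."""
        return retail_price * (1 - commission)

    if __name__ == "__main__":
        per_book_wholesale = wholesale_revenue(10.00)   # hypothetical wholesale price
        per_book_agency = agency_revenue(12.99)         # hypothetical agency list price
        print(f"Per-book revenue, wholesale model: ${per_book_wholesale:.2f}")
        print(f"Per-book revenue, agency model:    ${per_book_agency:.2f}")
        print(f"Difference:                        ${per_book_wholesale - per_book_agency:.2f}")

On assumptions like these, the publisher gives up a little under a dollar of e-book revenue per copy, which is precisely the trade described above: a modest sacrifice on e-books in exchange for protecting the much larger margins on new-release hardcovers.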

The Apple E-Books Antitrust Case: Implications for Antitrust Law and for the Economy — Day 2

February 16, 2016

We will have a few more posts today to round out the Apple e-books case symposium started yesterday.

You can find all of the current posts, and eventually all of the symposium posts, here. Yesterday’s posts, in order of posting:

Look for posts a little later today from:

  • Tom Hazlett
  • Morgan Reed
  • Chris Sagers

And possibly a follow-up post or two from some of yesterday’s participants.