TOTM is pleased to welcome guest blogger Nicolas Petit, Professor of Law & Economics at the University of Liege, Belgium.

Nicolas has also recently been named a (non-resident) Senior Scholar at ICLE (joining Joshua Wright, Joanna Shepherd, and Julian Morris).

Nicolas is also (as of March 2017) a Research Professor at the University of South Australia, co-director of the Liege Competition & Innovation Institute and director of the LL.M. program in EU Competition and Intellectual Property Law. He is also a part-time advisor to the Belgian competition authority.

Nicolas is a prolific scholar specializing in competition policy, IP law, and technology regulation. Nicolas Petit is the co-author (with Damien Geradin and Anne Layne-Farrar) of EU Competition Law and Economics (Oxford University Press, 2012) and the author of Droit européen de la concurrence (Domat Montchrestien, 2013), a monograph that was awarded the prize for the best law book of the year at the Constitutional Court in France.

One of his most recent papers, Significant Impediment to Industry Innovation: A Novel Theory of Harm in EU Merger Control?, was recently published as an ICLE Competition Research Program White Paper. His scholarship is available on SSRN and he tweets at @CompetitionProf.

Welcome, Nicolas!

My colleague, Neil Turkewitz, begins his fine post for Fair Use Week (read: crashing Fair Use Week) by noting that

Many of the organizations celebrating fair use would have you believe, because it suits their analysis, that copyright protection and the public interest are diametrically opposed. This is merely a rhetorical device, and is a complete fallacy.

If I weren’t a recovering law professor, I would just end there: that about sums it up, and “the rest is commentary,” as they say. Alas….  

All else equal, creators would like as many people to license their works as possible; there’s no inherent incompatibility between “incentives and access” (which is just another version of the fallacious “copyright protection versus the public interest” trope). Everybody wants as much access as possible. Sure, consumers want to pay as little as possible for it, and creators want to be paid as much as possible. That’s a conflict, and at the margin it can seem like a conflict between access and incentives. But it’s not a fundamental, philosophical, and irreconcilable difference — it’s the last 15 minutes of negotiation before the contract is signed.

Reframing what amounts to a fundamental agreement into a pitched battle for society’s soul is indeed a purely rhetorical device — and a mendacious one, at that.

The devil is in the details, of course, and there are still disputes on the margin, as I said. But it helps to know what they’re really about, and why they are so far from the fanciful debates the copyright scolds wish we were having.

First, price is, in fact, a big deal. For the creative industries it can be the difference between, say, making one movie or a hundred, and for artists it can be the difference between earning a livelihood writing songs or packing it in for a desk job.

But despite their occasional lip service to the existence of trade-offs, many “fair-users” see price — i.e., licensing agreements — as nothing less than a threat to social welfare. After all, the logic runs, if copies can be made at (essentially) zero marginal cost, a positive price is just extortion. They say, “more access!,” but they don’t mean, “more access at an agreed-upon price;” they mean “zero-price access, and nothing less.” These aren’t the same thing, and when “fair use” is a stand-in for “zero-price use,” fair-users are moving the goalposts — and being disingenuous about it.

The other, related problem, of course, is piracy. Sometimes rightsholders’ objections to the expansion of fair use are about limiting access. But typically that’s true only where fine-tuned contracting isn’t feasible, and where the only realistic choice they’re given is between no access for some people, and pervasive (and often unstoppable) piracy. There are any number of instances where rightsholders have no realistic prospect of efficiently negotiating licensing terms and receiving compensation, and would welcome greater access to their works even without a license — as long as the result isn’t also (or only) excessive piracy. The key thing is that, in such cases, opposition to fair use isn’t opposition to reasonable access, even free access. It’s opposition to piracy.

Time-shifting with VCRs and space-shifting with portable mp3 players (to take two contentious historical examples) fall into this category (even if they are held up — as they often are — by the fair-users as totems of their fanciful battle). At least at the time of the Sony and Diamond Rio cases, when there was really no feasible way to enforce licenses or charge differential prices for such uses, the choice rightsholders faced was effectively all-or-nothing, and they had to pick one. I’m pretty sure, all else equal, they would have supported such uses, even without licenses and differential compensation — except that the piracy risk was so significant that it swamped the likely benefits, tilting the scale toward “nothing” instead of “all.”

Again, the reality is that creators and rightsholders were confronted with a choice between two imperfect options; neither was likely “right,” and they went with the lesser evil. But one can’t infer from that constrained decision an inherent antipathy to fair use. Sadly, such decisions have to be made in the real world, not law reviews and EFF blog posts. As economists Benjamin Klein, Andres Lerner and Kevin Murphy put it regarding the Diamond Rio case:

[R]ather than representing an attempt by copyright-holders to increase their profits by controlling legally established “fair uses,”… the obvious record-company motivation is to reduce the illegal piracy that is encouraged by the technology. Eliminating a “fair use” [more accurately, “opposing an expansion of fair use” -ed.] is not a benefit to the record companies; it is an unfortunate cost they have to bear to solve the much larger problem of infringing uses. The record companies face competitive pressure to avoid these costs by developing technologies that distinguish infringing from non-infringing copying.

This last point is important, too. Fair-users don’t like technological protection measures, either, even if they actually facilitate licensing and broader access to copyrighted content. But that really just helps to reveal the poverty of their position. They should welcome technology that expands access, even if it also means that it enables rightsholders to fine-tune their licenses and charge a positive price. Put differently: Why do they hate Spotify!?

I’m just hazarding a guess here, but I suspect that the antipathy to technological solutions goes well beyond the short-term limits on some current use of content that copyright minimalists think shouldn’t be limited. If technology, instead of fair use, is truly determinative of the extent of zero-price access, then their ability to seriously influence (read: rein in) the scope of copyright is diminished. Fair use is amorphous. They can bring cases, they can lobby Congress, they can pen strongly worded blog posts, and they can stage protests. But they can’t do much to stop technological progress. Of course, technology does at least as much to limit the enforceability of licenses and create new situations where zero-price access is the norm. But still, R&D is a lot harder than PR.

What’s more, if technology were truly determinative, it would frequently mean that former fair uses could become infringing at some point (or vice versa, of course). Frankly, there’s no reason for time-shifting of TV content to continue to be considered a fair use today. We now have the technology to both enable time shifting and to efficiently license content for the purpose, charge a differential price for it, and enforce the terms. In fact, all of that is so pervasive today that most users do pay for time-shifting technologies, under license terms that presumably define the scope of their right to do so; they just may not have read the contract. Where time-shifting as a fair use rears its ugly head today is in debates over new, infringing technology where, in truth, the fair use argument is really a malleable pretext to advocate for a restriction on the scope of copyright (e.g., Aereo).

In any case, as the success of business models like Spotify and Netflix (to say nothing of Comcast’s X1 interface and new Xfinity Stream app) attest, technology has enabled users to legitimately engage in what once seemed conceivable only under fair use. Yes, at a price — one that millions of people are willing to pay. It is surely the case that rightsholders’ licensing of technologies like these has made content more accessible, to more people, and with higher-quality service, than a regime of expansive unlicensed use could ever have done.

At the same time, let’s not forget that, often, even when they could efficiently distribute content only at a positive price, creators offer up scads of content for free, in myriad ways. Sure, the objective is to maximize revenue overall by increasing exposure, price discriminating, or enhancing the quality of paid-for content in some way — but so what? More content is more content, and easier access is easier access. All of that uncompensated distribution isn’t rightsholders nodding toward the copyright scolds’ arguments; it’s perfectly consistent with licensing. Obviously, the vast majority of music, for example, is listened-to subject to license agreements, not because of fair use exceptions or rightsholders’ largesse.

For the vast majority of creators, users and uses, licensed access works, and gets us massive amounts of content and near ubiquitous access. The fair use disputes we do have aren’t really about ensuring broad access; that’s already happening. Rather, those disputes are either niggling over the relatively few ambiguous margins on the one hand, or, on the other, fighting the fair-users’ manufactured, existential fight over whether copyright exceptions will subsume the rule. The former is to be expected: Copyright boundaries will always be imperfect, and courts will always be asked to make the close calls. The latter, however, is simply a drain on resources that could be used to create more content, improve its quality, distribute it more broadly, or lower prices.

Copyright law has always been, and always will be, operating in the shadow of technology — technology both for distribution and novel uses, as well as for pirating content. The irony is that, as digital distribution expands, it has dramatically increased the risk of piracy, even as copyright minimalists argue that the low costs of digital access justify a more expansive interpretation of fair use — which would, in turn, further increase the risk of piracy.

Creators’ opposition to this expansion has nothing to do with opposition to broad access to content, and everything to do with ensuring that piracy doesn’t overwhelm their ability to get paid, and to produce content in the first place.

Even were fair use to somehow disappear tomorrow, there would be more and higher-quality content, available to more people in more places, than ever before. But creators have no interest in seeing fair use disappear. What they do have is an interest in licensing their content as broadly as possible when doing so is efficient, and in minimizing piracy. Sometimes legitimate fair-use questions get caught in the middle. We could and should have a reasonable debate over the precise contours of fair use in such cases. But the false dichotomy of creators against users makes that extremely difficult. Until the disingenuous rhetoric is clawed back, we’re stuck with needless fights that don’t benefit either users or creators — although they do benefit the policy scolds, academics, wonks and businesses that foment them.

I just posted a new ICLE white paper, co-authored with former ICLE Associate Director, Ben Sperry:

When Past Is Not Prologue: The Weakness of the Economic Evidence Against Health Insurance Mergers.

Yesterday the hearing in the DOJ’s challenge to stop the Aetna-Humana merger got underway, and last week phase 1 of the Cigna-Anthem merger trial came to a close.

The DOJ’s challenge in both cases is fundamentally rooted in a timeworn structural analysis: More consolidation in the market (where “the market” is a hotly-contested issue, of course) means less competition and higher premiums for consumers.

Following the traditional structural playbook, the DOJ argues that the Aetna-Humana merger (to pick one) would result in presumptively anticompetitive levels of concentration, and that neither new entry nor divestiture would suffice to introduce sufficient competition. It does not (in its pretrial brief, at least) consider other market dynamics (including especially the complex and evolving regulatory environment) that would constrain the firm’s ability to charge supracompetitive prices.

Aetna & Humana, for their part, contend that things are a bit more complicated than the government suggests, that the government defines the relevant market incorrectly, and that

the evidence will show that there is no correlation between the number of [Medicare Advantage organizations] in a county (or their shares) and Medicare Advantage pricing—a fundamental fact that the Government’s theories of harm cannot overcome.

The trial will, of course, feature expert economic evidence from both sides. But until we see that evidence, or read the inevitable papers derived from it, we are stuck evaluating the basic outlines of the economic arguments based on the existing literature.

A host of antitrust commentators, politicians, and other interested parties have determined that the literature condemns the mergers, based largely on a small set of papers purporting to demonstrate that an increase of premiums, without corresponding benefit, inexorably follows health insurance “consolidation.” In fact, virtually all of these critics base their claims on a 2012 case study of a 1999 merger (between Aetna and Prudential) by economists Leemore Dafny, Mark Duggan, and Subramaniam Ramanarayanan, Paying a Premium on Your Premium? Consolidation in the U.S. Health Insurance Industry, as well as associated testimony by Prof. Dafny, along with a small number of other papers by her (and a couple others).

Our paper challenges these claims. As we summarize:

This white paper counsels extreme caution in the use of past statistical studies of the purported effects of health insurance company mergers to infer that today’s proposed mergers—between Aetna/Humana and Anthem/Cigna—will likely have similar effects. Focusing on one influential study—Paying a Premium on Your Premium…—as a jumping off point, we highlight some of the many reasons that past is not prologue.

In short: extrapolated, long-term, cumulative, average effects drawn from 17-year-old data may grab headlines, but they really don’t tell us much of anything about the likely effects of a particular merger today, or about the effects of increased concentration in any particular product or geographic market.

While our analysis doesn’t necessarily undermine the paper’s limited, historical conclusions, it does counsel extreme caution for inferring the study’s applicability to today’s proposed mergers.

By way of reference, Dafny, et al. found average premium price increases from the 1999 Aetna/Prudential merger of only 0.25 percent per year for two years following the merger in the geographic markets they studied. “Health Insurance Mergers May Lead to 0.25 Percent Price Increases!” isn’t quite as compelling a claim as what critics have been saying, but it’s arguably more accurate (and more relevant) than the 7 percent price increase purportedly based on the paper that merger critics like to throw around.

Moreover, different markets and a changed regulatory environment aren’t the only things suggesting that past is not prologue. When we delve into the paper more closely we find even more significant limitations on the paper’s support for the claims made in its name, and its relevance to the current proposed mergers.

The full paper is available here.

Truth on the Market is delighted to welcome our newest blogger, Neil Turkewitz. Neil is the newly minted Senior Policy Counsel at the International Center for Law & Economics (so we welcome him to ICLE, as well!).

Prior to joining ICLE, Neil spent 30 years at the Recording Industry Association of America (RIAA), most recently as Executive Vice President, International.

Neil has spent most of his career working to expand economic opportunities for the music industry through modernization of copyright legislation and effective enforcement in global markets. He has worked closely with creative communities around the globe, with the US and foreign governments, and with international organizations (including WIPO and the WTO), to promote legal and enforcement reforms to respond to evolving technology, and to promote a balanced approach to digital trade and Internet governance premised upon the importance of regulatory coherence, elimination of inefficient barriers to global communications, and respect for Internet freedom and the rule of law.

Among other things, Neil was instrumental in the negotiation of the WTO TRIPS Agreement, worked closely with the US and foreign governments in the negotiation of free trade agreements, helped to develop the OECD’s Communique on Principles for Internet Policy Making, coordinated a global effort culminating in the production of the WIPO Internet Treaties, served as a formal advisor to the Secretary of Commerce and the USTR as Vice-Chairman of the Industry Trade Advisory Committee on Intellectual Property Rights, and served as a member of the Board of the Chamber of Commerce’s Global Intellectual Property Center.

You can read some of his thoughts on Internet governance, IP, and international trade here and here.

Welcome, Neil!

Next week the FCC is slated to vote on the second iteration of Chairman Wheeler’s proposed broadband privacy rules. Of course, as has become all too common, none of us outside the Commission has actually seen the proposal. But earlier this month Chairman Wheeler released a Fact Sheet that suggests some of the ways it would update the rules he initially proposed.

According to the Fact Sheet, the new proposed rules are

designed to evolve with changing technologies and encourage innovation, and are in harmony with other key privacy frameworks and principles — including those outlined by the Federal Trade Commission and the Administration’s Consumer Privacy Bill of Rights.

Unfortunately, the Chairman’s proposal appears to fall short of the mark on both counts.

As I discuss in detail in a letter filed with the Commission yesterday, despite the Chairman’s rhetoric, the rules described in the Fact Sheet fail to align with the FTC’s approach to privacy regulation embodied in its 2012 Privacy Report in at least two key ways:

  • First, the Fact Sheet significantly expands the scope of information that would be considered “sensitive” beyond that contemplated by the FTC. That, in turn, would impose onerous and unnecessary consumer consent obligations on commonplace uses of data, undermining consumer welfare, depriving consumers of information and access to new products and services, and restricting competition.
  • Second, unlike the FTC’s framework, the proposal described by the Fact Sheet ignores the crucial role of “context” in determining the appropriate level of consumer choice before affected companies may use consumer data. Instead, the Fact Sheet takes a rigid, acontextual approach that would stifle innovation and harm consumers.

The Chairman’s proposal moves far beyond the FTC’s definition of “sensitive” information requiring “opt-in” consent

The FTC’s privacy guidance is, in its design at least, appropriately flexible, aimed at balancing the immense benefits of information flows with sensible consumer protections. Thus it eschews an “inflexible list of specific practices” that would automatically trigger onerous consent obligations and “risk[] undermining companies’ incentives to innovate and develop new products and services….”

Under the FTC’s regime, depending on the context in which it is used (on which see the next section, below), the sensitivity of data delineates the difference between data uses that require “express affirmative” (opt-in) consent and those that do not (requiring only “other protections” short of opt-in consent — e.g., opt-out).

Because the distinction is so important — because opt-in consent is much more likely to staunch data flows — the FTC endeavors to provide guidance as to what data should be considered sensitive, and to cabin the scope of activities requiring opt-in consent. Thus, the FTC explains that “information about children, financial and health information, Social Security numbers, and precise geolocation data [should be treated as] sensitive.” But beyond those instances, the FTC doesn’t consider any other type of data as inherently sensitive.

By contrast, and without explanation, Chairman Wheeler’s Fact Sheet significantly expands what constitutes “sensitive” information requiring “opt-in” consent by adding “web browsing history,” “app usage history,” and “the content of communications” to the list of categories of data deemed sensitive in all cases.

By treating some of the most common and important categories of data as always “sensitive,” and by making the sensitivity of data the sole determinant for opt-in consent, the Chairman’s proposal would make it almost impossible for ISPs to make routine (to say nothing of innovative), appropriate, and productive uses of data comparable to those undertaken by virtually every major Internet company. This goes well beyond anything contemplated by the FTC — with no evidence of any corresponding benefit to consumers and with obvious harm to competition, innovation, and the overall economy online.

And because the Chairman’s proposal would impose these inappropriate and costly restrictions only on ISPs, it would create a barrier to competition by ISPs in other platform markets, without offering a defensible consumer protection rationale to justify either the disparate treatment or the restriction on competition.

As Fred Cate and Michael Staten have explained,

“Opt-in” offers no greater privacy protection than allowing consumers to “opt-out”…, yet it imposes significantly higher costs on consumers, businesses, and the economy.

Not surprisingly, these costs fall disproportionately on the relatively poor and the less technology-literate. In the former case, opt-in requirements may deter companies from offering services at all, even to people who would make a very different trade-off between privacy and monetary price. In the latter case, because an initial decision to opt-in must be taken in relative ignorance, users without much experience to guide their decisions will face effectively higher decision-making costs than more knowledgeable users.

The Chairman’s proposal ignores the central role of context in the FTC’s privacy framework

In part for these reasons, central to the FTC’s more flexible framework is the establishment of a sort of “safe harbor” for data uses where the benefits clearly exceed the costs and consumer consent may be inferred:

Companies do not need to provide choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer….

Thus for many straightforward uses of data, the “context of the transaction,” not the asserted “sensitivity” of the underlying data, is the threshold question in evaluating the need for consumer choice in the FTC’s framework.

Chairman Wheeler’s Fact Sheet, by contrast, ignores this central role of context in its analysis. Instead, it focuses solely on data sensitivity, claiming that doing so is “in line with customer expectations.”

But this is inconsistent with the FTC’s approach.

In fact, the FTC’s framework explicitly rejects a pure “consumer expectations” standard:

Rather than relying solely upon the inherently subjective test of consumer expectations, the… standard focuses on more objective factors related to the consumer’s relationship with a business.

And while everyone agrees that sensitivity is a key part of pegging privacy regulation to actual consumer and corporate relationships, the FTC also recognizes that the importance of the sensitivity of the underlying data varies with the context in which it is used. Or, in the words of the White House’s 2012 Consumer Data Privacy in a Networked World Report (introducing its Consumer Privacy Bill of Rights), “[c]ontext should shape the balance and relative emphasis of particular principles” guiding the regulation of privacy.

By contrast, Chairman Wheeler’s “sensitivity-determines-consumer-expectations” framing is a transparent attempt to claim fealty to the FTC’s (and the Administration’s) privacy standards while actually implementing a privacy regime that is flatly inconsistent with them.

The FTC’s approach isn’t perfect, but that’s no excuse to double down on its failings

The FTC’s privacy guidance, and even more so its privacy enforcement practices under Section 5, are far from perfect. The FTC should be commended for its acknowledgement that consumers’ privacy preferences and companies’ uses of data will change over time, and that there are trade-offs inherent in imposing any constraints on the flow of information. But even the FTC fails to actually assess the magnitude of the costs and benefits of, and the deep complexities involved in, the trade-off, and puts an unjustified thumb on the scale in favor of limiting data use.  

But that’s no excuse for Chairman Wheeler to ignore what the FTC gets right, and to double down on its failings. Based on the Fact Sheet (and the initial NPRM), it’s a virtual certainty that the Chairman’s proposal doesn’t heed the FTC’s refreshing call for humility and flexibility regarding the application of privacy rules to ISPs (and other Internet platforms):

These are complex and rapidly evolving areas, and more work should be done to learn about the practices of all large platform providers, their technical capabilities with respect to consumer data, and their current and expected uses of such data.

The rhetoric of the Chairman’s Fact Sheet is correct: the FCC should in fact conform its approach to privacy to the framework established by the FTC. Unfortunately, the reality of the Fact Sheet simply doesn’t comport with its rhetoric.

As the FCC’s vote on the Chairman’s proposal rapidly nears, and in light of its significant defects, we can only hope that the rest of the Commission refrains from reflexively adopting the proposed regime, and works to ensure that these problematic deviations from the FTC’s framework are addressed before moving forward.

Today ICLE released a white paper entitled, A critical assessment of the latest charge of Google’s anticompetitive bias from Yelp and Tim Wu.

The paper is a comprehensive response to a study by Michael Luca, Timothy Wu, Sebastian Couvidat, Daniel Frank, & William Seltzer, entitled, Is Google degrading search? Consumer harm from Universal Search.

The Wu, et al. paper will be one of the main topics of discussion at today’s Capitol Forum and George Washington Institute of Public Policy event on Dominant Platforms Under the Microscope: Policy Approaches in the US and EU, at which I will be speaking — along with a host of luminaries including, inter alia, Josh Wright, Jonathan Kanter, Allen Grunes, Catherine Tucker, and Michael Luca — one of the authors of the Universal Search study.

Follow the link above to register — the event starts at noon today at the National Press Club.

Meanwhile, here’s a brief description of our paper:

Late last year, Tim Wu of Columbia Law School (and now the White House Office of Management and Budget), Michael Luca of Harvard Business School (and a consultant for Yelp), and a group of Yelp data scientists released a study claiming that Google has been purposefully degrading search results from its more-specialized competitors in the area of local search. The authors’ claim is that Google is leveraging its dominant position in general search to thwart competition from specialized search engines by favoring its own, less-popular, less-relevant results over those of its competitors:

To improve the popularity of its specialized search features, Google has used the power of its dominant general search engine. The primary means for doing so is what is called the “universal search” or the “OneBox.”

This is not a new claim, and researchers have been attempting (and failing) to prove Google’s “bias” for some time. Likewise, these critics have drawn consistent policy conclusions from their claims, asserting that antitrust violations lie at the heart of the perceived bias. But the studies are systematically marred by questionable methodology and bad economics.

This latest study by Tim Wu, along with a cadre of researchers employed by Yelp (one of Google’s competitors and one of its chief antitrust provocateurs), fares no better, employing slightly different but equally questionable methodology, bad economics, and a smattering of new, but weak, social science. (For a thorough criticism of the inherent weaknesses of Wu et al.’s basic social science methodology, see Miguel de la Mano, Stephen Lewis, and Andrew Leyden, Focus on the Evidence: A Brief Rebuttal of Wu, Luca, et al (2016), available here).

The basic thesis of the study is that Google purposefully degrades its local searches (e.g., for restaurants, hotels, services, etc.) to the detriment of its specialized search competitors, local businesses, consumers, and even Google’s bottom line — and that this is an actionable antitrust violation.

But in fact the study shows nothing of the kind. Instead, the study is marred by methodological problems that, in the first instance, make it impossible to draw any reliable conclusions. Nor does the study show that Google’s conduct creates any antitrust-relevant problems. Rather, the construction of the study and the analysis of its results reflect a superficial and inherently biased conception of consumer welfare that completely undermines the study’s purported legal and economic conclusions.

Read the whole thing here.

Since the European Commission (EC) announced its first inquiry into Google’s business practices in 2010, the company has been the subject of lengthy investigations by courts and competition agencies around the globe. Regulatory authorities in the United States, France, the United Kingdom, Canada, Brazil, and South Korea have all investigated and rejected similar antitrust claims.

And yet the EC marches on, bolstered by Google’s myriad competitors, who continue to agitate for further investigations and enforcement actions, even as we — companies and consumers alike — enjoy the benefits of an increasingly dynamic online marketplace.

Indeed, while the EC has spent more than half a decade casting about for some plausible antitrust claim, the online economy has thundered ahead. Since 2010, Facebook has tripled its active users and multiplied its revenue ninefold; the number of apps available in the Amazon app store has grown from fewer than 4,000 to over 400,000 today; and there are almost 1.5 billion more Internet users globally than there were in 2010. And consumers are increasingly using new and different ways to search for information: Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Facebook’s Messenger are a few of the many new innovations challenging traditional search engines.

Advertisers have adapted to this evolution, moving increasingly online, and from search to display ads as mobile adoption has skyrocketed. Social networks like Twitter and Snapchat have come into their own, competing for the same (and ever-increasing) advertising dollars. For marketers, advertising on social networks is now just as important as advertising in search. No wonder e-commerce sales have more than doubled, to almost $2 trillion worldwide; for the first time, consumers purchased more online than in stores this past year.

To paraphrase Louis C.K.: Everything is amazing — and no one at the European Commission is happy.

The EC’s market definition is fatally flawed

Like its previous claims, the Commission’s most recent charges are rooted in the assertion that Google abuses its alleged dominance in “general search” advertising to unfairly benefit itself and to monopolize other markets. But European regulators continue to miss the critical paradigm shift among online advertisers and consumers that has upended this stale view of competition on the Internet. The reality is that Google’s competition may not, and need not, look exactly like Google itself, but it is competition nonetheless. And it’s happening in spades.

The key to understanding why the European Commission’s case is fundamentally flawed lies in an examination of how it defines the relevant market. Through a series of economically and factually unjustified assumptions, the Commission defines search as a distinct market in which Google faces limited competition and enjoys an 80% market share. In other words, for the EC, “general search” apparently means only nominal search providers like Google and Bing; it doesn’t mean companies like Amazon, Facebook and Twitter — Google’s biggest competitors.  

But the reality is that “general search” is just one technology among many for serving information and ads to consumers online. Defining the relevant market, or limiting the definition of competition, in terms of the particular mechanism that Google happens to use to match consumers and advertisers ignores the substitutability of other mechanisms that do the same thing merely because those mechanisms aren’t called “search.”

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive.

Consumers today are increasingly using platforms like Amazon and Facebook as substitutes for the searches they might have run on Google or Bing. “Closed” platforms like the iTunes store and innumerable apps handle copious search traffic but also don’t figure in the EC’s market calculations. And so-called “dark social” interactions like email, text messages, and IMs drive huge amounts of some of the most valuable traffic on the Internet. This, in turn, has led to a competitive scramble to roll out completely new technologies like chatbots to meet consumers’ informational (and merchants’ advertising) needs.

Properly construed, Google’s market position is precarious

Like Facebook and Twitter (and practically every other Internet platform), Google earns its revenue primarily from advertising. Instead of charging for fancy hardware or offering services to users for a fee, Google offers search, the Android operating system, and a near-endless array of other valuable services for free to users. The company’s very existence relies on attracting Internet users and consumers to its properties in order to effectively connect them with advertisers.

But being an online matchmaker is a difficult and competitive enterprise. Among other things, the ability to generate revenue turns crucially on the quality of the match: All else equal, an advertiser interested in selling widgets will pay more for an ad viewed by a user who can be reliably identified as being interested in buying widgets.

Google’s primary mechanism for attracting users to match with advertisers — general search — is substantially about information, not commerce, and the distinction between product and informational searches is crucially important to understanding Google’s market and the surprisingly limited and tenuous market power it possesses.

General informational queries aren’t nearly as valuable to advertisers: Significantly, only about 30 percent of Google’s searches even trigger any advertising at all. Meanwhile, as of 2012, one-third of product searches started on Amazon while only 13% started on a general search engine.

As economist Hal Singer aptly noted in 2012,

[the data] suggest that Google lacks market power in a critical segment of search — namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”

While Google Search clearly offers substantial value to advertisers, its ability to continue to do so is precarious when confronted with the diverse array of competitors that, like Facebook, offer a level of granularity in audience targeting that general search can’t match, or that, like Amazon, systematically offer up the most valuable searchers.

In order to compete in this market — one properly defined to include actual competitors — Google has had to constantly innovate to maintain its position. Unlike a complacent monopolist, it has evolved to meet changing consumer demand, shifting technology and inventive competitors. Thus, Google’s search algorithm has changed substantially over the years to make more effective use of the information available to ensure relevance; search results have evolved to give consumers answers to queries rather than just links, and to provide more-direct access to products and services; and, as users have shifted more and more of their time and attention to mobile devices, search has incorporated more-localized results.

Competitors want a free lunch

Critics complain, nevertheless, that these developments have made it harder, in one way or another, for rivals to compete. And the EC has provided a willing ear. According to Commissioner Vestager last week:

Google has come up with many innovative products that have made a difference to our lives. But that doesn’t give Google the right to deny other companies the chance to compete and innovate. Today, we have further strengthened our case that Google has unduly favoured its own comparison shopping service in its general search result pages…. (Emphasis added).

Implicit in this statement is the remarkable assertion that by favoring its own comparison shopping services, Google “den[ies] other companies the chance to compete and innovate.” Even assuming Google does “favor” its own results, this is an astounding claim.

First, it is not a violation of competition law simply to treat competitors’ offerings differently than one’s own, even for a dominant firm. Instead, conduct must actually exclude competitors from the market, without offering countervailing advantages to consumers. But Google’s conduct is not exclusionary, and there are many benefits to consumers.

As it has from the start of its investigations of Google, the EC begins with a flawed assumption: that Google’s competitors both require, and may be entitled to, unfettered access to Google’s property in order to compete. But this is patently absurd. Google is not an essential facility: Billions of users reach millions of companies every day through direct browser navigation, apps, email links, review sites and blogs, and countless other means — all without once touching Google.com.

Google Search results do not exclude competitors, whether comparison shopping sites or others. For example, 72% of TripAdvisor’s U.S. traffic comes from search, and almost all of that from organic results; other specialized search sites see similar traffic volumes.

More important, however, in addition to continuing to reach rival sites through Google Search, billions of consumers access rival services directly through their mobile apps. In fact, for Yelp,

Approximately 21 million unique devices accessed Yelp via the mobile app on a monthly average basis in the first quarter of 2016, an increase of 32% compared to the same period in 2015. App users viewed approximately 70% of page views in the first quarter and were more than 10 times as engaged as website users, as measured by number of pages viewed. (Emphasis added).

And a staggering 40 percent of mobile browsing is now happening inside the Facebook app, competing with the browsers and search engines pre-loaded on smartphones.

Millions of consumers also directly navigate to Google’s rivals via their browser by simply typing, for example, “Yelp.com” in their address bar. And as noted above, consumers are increasingly using Google rivals’ new disruptive information engines like Alexa and Siri for their search needs. Even the traditional search engine space is competitive — in fact, according to Wired, as of July 2016:

Microsoft has now captured more than one-third of Internet searches. Microsoft’s transformation from a company that sells boxed software to one that sells services in the cloud is well underway. (Emphasis added).

With such numbers, it’s difficult to see how rivals are being foreclosed from reaching consumers in any meaningful way.

Meanwhile, the benefits to consumers are obvious: Google is directly answering questions for consumers rather than giving them a set of possible links to click through and further search. In some cases its results present entirely new and valuable forms of information (e.g., search trends and structured data); in others they serve to hone searches by suggesting further queries, or to help users determine which organic results (including those of its competitors) may be most useful. And, of course, consumers aren’t forced to endure these innovations if they don’t find them useful, as they can quickly switch to other providers.  

Nostalgia makes for bad regulatory policy

Google is not the unstoppable monopolist of the EU competition regulators’ imagining. Rather, it is a continual innovator, forced to adapt to shifting consumer demand, changing technology, and competitive industry dynamics. And, instead of trying to hamstring Google, if they are to survive, Google’s competitors (and complainants) must innovate as well.

Dominance in technology markets — especially online — has always been ephemeral. Once upon a time, MySpace, AOL, and Yahoo were the dominant Internet platforms. Kodak, once practically synonymous with the “instant camera,” let the digital revolution pass it by. The invincible Sony Walkman was upended by mp3s and the iPod. Staid, keyboard-operated Blackberries and Nokias simply couldn’t compete with app-driven, graphical platforms from Apple and Samsung. Even today, startups like Snapchat, Slack, and Spotify gain massive scale and upend entire industries with innovative new technology that can leave less-nimble incumbents in the dustbin of tech history.

Put differently, companies that innovate are able to thrive, while those that remain dependent on yesterday’s technology and outdated business models usually fail — and deservedly so. It should never be up to regulators to pick winners and losers in a highly dynamic and competitive market, particularly if doing so constrains the market’s very dynamism. As Alfonso Lamadrid has pointed out:

It is companies and not competition enforcers which will strive or fail in the adoption of their business models, and it is therefore companies and not competition enforcers who are to decide on what business models to use. Some will prove successful and others will not; some companies will thrive and some will disappear, but with experimentation with business models, success and failure are and have always been part of the game.

In other words, we should not forget that competition law is, or should be, business-model agnostic, and that regulators are – like anyone else – far from omniscient.

Like every other technology company before them, Google and its competitors must be willing and able to adapt in order to keep up with evolving markets — just as for Lewis Carroll’s Red Queen, “it takes all the running you can do, to keep in the same place.” Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters; companies that build their businesses around Google face a near-constantly evolving Google. In the face of such relentless market dynamism, neither consumers nor firms are well served by regulatory policy rooted in nostalgia.  

Yesterday, the International Center for Law & Economics filed reply comments in the docket of the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As in our initial comments, we drew on the economic scholarship of multi-sided platforms to argue that the FCC failed to consider the ways in which asymmetric regulation will ultimately have negative competitive effects and harm consumers. The FCC and some critics claimed that ISPs are gatekeepers deserving of special regulation — a case that both the FCC and the critics failed to make.

The NPRM fails adequately to address these issues, to make out an adequate case for the proposed regulation, or to justify treating ISPs differently than other companies that collect and use data.

Perhaps most important, the NPRM also fails to acknowledge or adequately assess the actual market in which the use of consumer data arises: the advertising market. Whether intentionally or not, this NPRM is not primarily about regulating consumer privacy; it is about keeping ISPs out of the advertising business. But in this market, ISPs are upstarts challenging the dominant position of firms like Google and Facebook.

Placing onerous restrictions upon ISPs alone results in either under-regulation of edge providers or over-regulation of ISPs within the advertising market, without any clear justification as to why consumer privacy takes on different qualities for each type of advertising platform. But the proper method of regulating privacy is, in fact, the course that both the FTC and the FCC have historically taken, and which has yielded a stable, evenly administered regime: case-by-case examination of actual privacy harms and a minimalist approach to ex ante, proscriptive regulations.

We also responded to particular claims made by New America’s Open Technology Institute about the expectations of consumers regarding data collection online, the level of competitiveness in the marketplace, and the technical realities that differentiate ISPs from edge providers.

OTI attempts to substitute its own judgment of what consumers (should) believe about their data for that of consumers themselves. And in the process it posits a “context” that can and will never shift as new technology and new opportunities emerge. Such a view of consumer expectations is flatly anti-innovation and decidedly anti-consumer, consigning broadband users to yesterday’s technology and business models. The rule OTI supports could effectively forbid broadband providers from offering consumers the option to trade data for lower prices.

Our reply comments went on to point out that much of the basis upon which the NPRM relies — an alleged lack of adequate competition among ISPs — was actually a “manufactured scarcity” based upon the Commission’s failure to properly analyze the relevant markets.

The Commission’s claim that ISPs, uniquely among companies in the modern data economy, face insufficient competition in the broadband market is… insufficiently supported. The flawed manner in which the Commission has defined the purported relevant market for broadband distorts the analysis upon which the proposed rules are based, and manufactures a false scarcity in order to justify unduly burdensome privacy regulations for ISPs. Even the Commission’s own data suggest that consumer choice is alive and well in broadband… The reality is that there is in fact enough competition in the broadband market to offer privacy-sensitive consumers options if they are ever faced with what they view as overly invasive broadband business practices. According to the Commission, as of December 2014, 74% of American homes had a choice of two or more wired ISPs delivering download speeds of at least 10 Mbps, and 88% had a choice of at least two providers of 3 Mbps service. Meanwhile, 93% of consumers have access to at least three mobile broadband providers. Looking forward, consumer choice at all download speeds is increasing at rapid rates due to extensive network upgrades and new entry in a highly dynamic market.

Finally, we rebutted the contention that predictive analytics was a magical tool that would enable ISPs to dominate information gathering and would, consequently, lead to consumer harms — even where ISPs had access only to seemingly trivial data about users.

Some comments in support of the proposed rules attempt to cast ISPs as all powerful by virtue of their access to apparently trivial data — IP addresses, access timing, computer ports, etc. — because of the power of predictive analytics. These commenters assert that the possibility of predictive analytics coupled with a large data set undermines research that demonstrates that ISPs, thanks to increasing encryption, do not have access to any better quality data, and probably less quality data, than edge providers themselves have.

But this is a curious bit of reasoning. It essentially amounts to the idea that, not only should consumers be permitted to control with whom their data is shared, but that all other parties online should be proscribed from making their own independent observations about consumers. Such a rule would be akin to telling supermarkets that they are not entitled to observe traffic patterns in their stores in order to place particular products in relatively more advantageous places, for example. But the reality is that most data is noise; simply having more of it is not necessarily a boon, and predictive analytics is far from a panacea. In fact, the insights gained from extensive data collection are frequently useless when examining very large data sets, and are better employed by single firms answering particular questions about their users and products.

Our full reply comments are available here.

As regulatory review of the merger between Aetna and Humana hits the homestretch, merger critics have become increasingly vocal in their opposition to the deal. This is particularly true of a subset of healthcare providers concerned about losing bargaining power over insurers.

Fortunately for consumers, the merger appears to be well on its way to approval. California recently became the 16th of the 20 state insurance commissions that will eventually review the merger to approve it. The U.S. Department of Justice is currently reviewing the merger and may issue its determination as early as July.

Only Missouri has issued a preliminary opinion that the merger might lead to competitive harm. But Missouri is almost certain to remain an outlier, and its analysis simply doesn’t hold up to scrutiny.

The Missouri opinion echoed the Missouri Hospital Association’s (MHA) concerns about the effect of the merger on Medicare Advantage (MA) plans. It’s important to remember, however, that hospital associations like the MHA are not consumer advocacy groups. They are trade organizations whose primary function is to protect the interests of their member hospitals.

In fact, the American Hospital Association (AHA) has mounted continuous opposition to the deal. This is itself a good indication that the merger will benefit consumers, in part by reducing hospital reimbursement costs under MA plans.

More generally, critics have argued that history proves that health insurance mergers lead to higher premiums, without any countervailing benefits. Merger opponents place great stock in a study by economist Leemore Dafny and co-authors that purports to show that insurance mergers have historically led to seven percent higher premiums.

But that study, which looked at a pre-Affordable Care Act (ACA) deal and assessed its effects only on premiums for traditional employer-provided plans, has little relevance today.

The Dafny study first performed a straightforward statistical analysis of overall changes in concentration (that is, the number of insurers in a given market) and price, and concluded that “there is no significant association between concentration levels and premium growth.” Critics never mention this finding.

The study’s secondary, more speculative, analysis took the observed effects of a single merger — the 1999 merger between Prudential and Aetna — and extrapolated for all changes in concentration and price over an eight-year period. It concluded that, on average, seven percent of the cumulative increase in premium prices between 1998 and 2006 was the result of a reduction in the number of insurers.

But what critics fail to mention is that when the authors looked at the actual consequences of the 1999 Prudential/Aetna merger, they found effects lasting only two years — and an average price increase of only one half of one percent. And these negligible effects were restricted to premiums paid under plans purchased by large employers, a critical limitation of the study’s relevance to today’s proposed mergers.

Moreover, as the study notes in passing, over the same eight-year period, average premium prices increased in total by 54 percent. Yet the study offers no insights into what was driving the vast bulk of premium price increases — or whether those factors are still present today.  

Few sectors of the economy have changed more radically in the past few decades than healthcare has. While extrapolated effects drawn from 17-year-old data may grab headlines, they really don’t tell us much of anything about the likely effects of a particular merger today.

Indeed, the ACA and current trends in healthcare policy have dramatically altered the way health insurance markets work. Among other things, the advent of new technologies and the move to “value-based” care are redefining the relationship between insurers and healthcare providers. Nowhere is this more evident than in the Medicare and Medicare Advantage market at the heart of the Aetna/Humana merger.

In an effort to stop the merger on antitrust grounds, critics claim that Medicare and MA are distinct products, in distinct markets. But it is simply incorrect to claim that Medicare Advantage and traditional Medicare aren’t “genuine alternatives.”

In fact, as the Office of Insurance Regulation in Florida — a bellwether state for healthcare policy — concluded in approving the merger: “Medicare Advantage, the private market product, competes directly with Traditional Medicare.”

Consumers who search for plans at Medicare.gov are presented with a direct comparison between traditional Medicare and available MA plans. And the evidence suggests that they regularly switch between the two. Today, almost a third of eligible Medicare recipients choose MA plans, and the majority of current MA enrollees switched to MA from traditional Medicare.

True, Medicare and MA plans are not identical. But for antitrust purposes, substitutes need not be perfect to exert pricing discipline on each other. Take HMOs and PPOs, for example. No one disputes that they are substitutes, and that prices for one constrain prices for the other. But as anyone who has considered switching between an HMO and a PPO knows, price is not the only variable that influences consumers’ decisions.

The same is true for MA and traditional Medicare. For many consumers, Medicare’s standard benefits, more-expensive supplemental benefits, and wider range of provider options present a viable alternative to MA’s lower-cost expanded benefits and narrower, managed provider network.

The move away from a traditional fee-for-service model changes how insurers do business. It requires larger investments in technology, better tracking of preventive care and health outcomes, and more-holistic supervision of patient care by insurers. Arguably, all of this may be accomplished most efficiently by larger insurers with more resources and a greater ability to work with larger, more integrated providers.

This is exactly why many hospitals, which continue to profit from traditional, fee-for-service systems, are opposed to a merger that promises to expand these value-based plans. Significantly, healthcare providers like Encompass Medical Group, which have done the most to transition their services to the value-based care model, have offered letters of support for the merger.

Regardless of their rhetoric — whether about market definition or historic precedent — the most vocal merger critics are opposed to the deal for a very simple reason: They stand to lose money if the merger is approved. That may be a good reason for some hospitals to wish the merger would go away, but it is a terrible reason to actually stop it.

[This post was first published on June 27, 2016 in The Hill as “Don’t believe the critics, Aetna-Humana merger a good deal for consumers“]

I’m delighted to announce that David Olson will be guest blogging at Truth on the Market this summer.

David is an Associate Professor at Boston College Law School. He teaches antitrust, patents, and intellectual property law. Professor Olson’s writing has been cited in Supreme Court and other legal opinions. Olson came to Boston College from Stanford Law School’s Center for Internet and Society, where he researched patent law and litigated copyright fair use impact cases. Before entering academia, Professor Olson practiced as a patent litigator. He has published scholarly articles on patent law and antitrust, copyright law, and First Amendment copyright issues. He has been quoted in stories by the Wall Street Journal, Associated Press and Reuters, and has appeared as a guest panelist on WBUR’s Radio Boston, WAMU’s Kojo Nnamdi Show, and on Public Radio Canada. His scholarly papers are available here.

Perhaps foremost among his many deserved claims to fame, David (along with Hofstra law professor Irina Manta) is the co-author of an excellent paper with musician Aloe Blacc (of Wake Me Up fame).

Welcome David!