
Next week the FCC is slated to vote on the second iteration of Chairman Wheeler’s proposed broadband privacy rules. Of course, as has become all too common, none of us outside the Commission has actually seen the proposal. But earlier this month Chairman Wheeler released a Fact Sheet that suggests some of the ways it would update the rules he initially proposed.

According to the Fact Sheet, the new proposed rules are

designed to evolve with changing technologies and encourage innovation, and are in harmony with other key privacy frameworks and principles — including those outlined by the Federal Trade Commission and the Administration’s Consumer Privacy Bill of Rights.

Unfortunately, the Chairman’s proposal appears to fall short of the mark on both counts.

As I discuss in detail in a letter filed with the Commission yesterday, despite the Chairman’s rhetoric, the rules described in the Fact Sheet fail to align with the FTC’s approach to privacy regulation embodied in its 2012 Privacy Report in at least two key ways:

  • First, the Fact Sheet significantly expands the scope of information that would be considered “sensitive” beyond that contemplated by the FTC. That, in turn, would impose onerous and unnecessary consumer consent obligations on commonplace uses of data, undermining consumer welfare, depriving consumers of information and access to new products and services, and restricting competition.
  • Second, unlike the FTC’s framework, the proposal described by the Fact Sheet ignores the crucial role of “context” in determining the appropriate level of consumer choice before affected companies may use consumer data. Instead, the Fact Sheet takes a rigid, acontextual approach that would stifle innovation and harm consumers.

The Chairman’s proposal moves far beyond the FTC’s definition of “sensitive” information requiring “opt-in” consent

The FTC’s privacy guidance is, in its design at least, appropriately flexible, aimed at balancing the immense benefits of information flows with sensible consumer protections. Thus it eschews an “inflexible list of specific practices” that would automatically trigger onerous consent obligations and “risk[] undermining companies’ incentives to innovate and develop new products and services….”

Under the FTC’s regime, depending on the context in which it is used (on which see the next section, below), the sensitivity of data delineates the difference between data uses that require “express affirmative” (opt-in) consent and those that do not (requiring only “other protections” short of opt-in consent — e.g., opt-out).

Because the distinction is so important — because opt-in consent is much more likely to staunch data flows — the FTC endeavors to provide guidance as to what data should be considered sensitive, and to cabin the scope of activities requiring opt-in consent. Thus, the FTC explains that “information about children, financial and health information, Social Security numbers, and precise geolocation data [should be treated as] sensitive.” But beyond those instances, the FTC doesn’t consider any other type of data as inherently sensitive.

By contrast, and without explanation, Chairman Wheeler’s Fact Sheet significantly expands what constitutes “sensitive” information requiring “opt-in” consent by adding “web browsing history,” “app usage history,” and “the content of communications” to the list of categories of data deemed sensitive in all cases.

By treating some of the most common and important categories of data as always “sensitive,” and by making the sensitivity of data the sole determinant for opt-in consent, the Chairman’s proposal would make it almost impossible for ISPs to make routine (to say nothing of innovative), appropriate, and productive uses of data comparable to those undertaken by virtually every major Internet company.  This goes well beyond anything contemplated by the FTC — with no evidence of any corresponding benefit to consumers and with obvious harm to competition, innovation, and the overall economy online.

And because the Chairman’s proposal would impose these inappropriate and costly restrictions only on ISPs, it would create a barrier to competition by ISPs in other platform markets, without offering a defensible consumer protection rationale to justify either the disparate treatment or the restriction on competition.

As Fred Cate and Michael Staten have explained,

“Opt-in” offers no greater privacy protection than allowing consumers to “opt-out”…, yet it imposes significantly higher costs on consumers, businesses, and the economy.

Not surprisingly, these costs fall disproportionately on the relatively poor and the less technology-literate. In the former case, opt-in requirements may deter companies from offering services at all, even to people who would make a very different trade-off between privacy and monetary price. In the latter case, because an initial decision to opt in must be taken in relative ignorance, users without much experience to guide their decisions will face effectively higher decision-making costs than more knowledgeable users.

The Chairman’s proposal ignores the central role of context in the FTC’s privacy framework

In part for these reasons, central to the FTC’s more flexible framework is the establishment of a sort of “safe harbor” for data uses where the benefits clearly exceed the costs and consumer consent may be inferred:

Companies do not need to provide choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer….

Thus for many straightforward uses of data, the “context of the transaction,” not the asserted “sensitivity” of the underlying data, is the threshold question in evaluating the need for consumer choice in the FTC’s framework.

Chairman Wheeler’s Fact Sheet, by contrast, ignores this central role of context in its analysis. Instead, it focuses solely on data sensitivity, claiming that doing so is “in line with customer expectations.”

But this is inconsistent with the FTC’s approach.

In fact, the FTC’s framework explicitly rejects a pure “consumer expectations” standard:

Rather than relying solely upon the inherently subjective test of consumer expectations, the… standard focuses on more objective factors related to the consumer’s relationship with a business.

And while everyone agrees that sensitivity is a key part of pegging privacy regulation to actual consumer and corporate relationships, the FTC also recognizes that the importance of the sensitivity of the underlying data varies with the context in which it is used. Or, in the words of the White House’s 2012 Consumer Data Privacy in a Networked World Report (introducing its Consumer Privacy Bill of Rights), “[c]ontext should shape the balance and relative emphasis of particular principles” guiding the regulation of privacy.

By contrast, Chairman Wheeler’s “sensitivity-determines-consumer-expectations” framing is a transparent attempt to claim fealty to the FTC’s (and the Administration’s) privacy standards while actually implementing a privacy regime that is flatly inconsistent with them.

The FTC’s approach isn’t perfect, but that’s no excuse to double down on its failings

The FTC’s privacy guidance, and even more so its privacy enforcement practices under Section 5, are far from perfect. The FTC should be commended for its acknowledgement that consumers’ privacy preferences and companies’ uses of data will change over time, and that there are trade-offs inherent in imposing any constraints on the flow of information. But even the FTC fails to actually assess the magnitude of the costs and benefits of, and the deep complexities involved in, the trade-off, and puts an unjustified thumb on the scale in favor of limiting data use.  

But that’s no excuse for Chairman Wheeler to ignore what the FTC gets right, and to double down on its failings. Based on the Fact Sheet (and the initial NPRM), it’s a virtual certainty that the Chairman’s proposal doesn’t heed the FTC’s refreshing call for humility and flexibility regarding the application of privacy rules to ISPs (and other Internet platforms):

These are complex and rapidly evolving areas, and more work should be done to learn about the practices of all large platform providers, their technical capabilities with respect to consumer data, and their current and expected uses of such data.

The rhetoric of the Chairman’s Fact Sheet is correct: the FCC should in fact conform its approach to privacy to the framework established by the FTC. Unfortunately, the reality of the Fact Sheet simply doesn’t comport with its rhetoric.

As the FCC’s vote on the Chairman’s proposal rapidly nears, and in light of its significant defects, we can only hope that the rest of the Commission refrains from reflexively adopting the proposed regime, and works to ensure that these problematic deviations from the FTC’s framework are addressed before moving forward.

On September 28, the American Antitrust Institute released a report (“AAI Report”) on the state of U.S. antitrust policy, provocatively entitled “A National Competition Policy:  Unpacking the Problem of Declining Competition and Setting Priorities for Moving Forward.”  Although the AAI Report contains some valuable suggestions, in important ways it reminds one of the drunkard who seeks his (or her) lost key under the nearest lamppost.  What is needed instead is greater sobriety and a broader vision of the problems that beset the American economy.

The AAI Report begins by asserting that “[n]ot since the first federal antitrust law was enacted over 120 years ago has there been the level of public concern over the concentration of economic and political power that we see today.”  Well, maybe, although I for one am not convinced.  The paper then states that “competition is now on the front pages, as concerns over rising concentration, extraordinary profits accruing to the top slice of corporations, slowing innovation, and widening income and wealth inequality have galvanized attention.”  It then goes on to call for a more aggressive federal antitrust enforcement policy, with particular attention paid to concentrated markets.  The implicit message is that dedicated antitrust enforcers during the Obama Administration, led by Federal Trade Commission Chairs Jonathan Leibowitz and Edith Ramirez, and Antitrust Division chiefs Christine Varney, Bill Baer, and Renata Hesse (Acting), have been laggard or asleep at the switch.  But where is the evidence for this?  I am unaware of any, and the AAI doesn’t say.  Indeed, federal antitrust officials in the Obama Administration consistently have called for tough enforcement, and they have actively pursued vertical as well as horizontal conduct cases and novel theories of IP-antitrust liability.  Thus, the AAI Report’s contention that antitrust needs to be “reinvigorated” is unconvincing.

The AAI Report highlights three “symptoms” of declining competition:  (1) rising concentration, (2) higher profits to the few and slowing rates of start-up activity, and (3) widening income and wealth inequality.  But these are not concerns that antitrust policy is designed to address.  Mergers that threaten to harm competition are within the purview of antitrust, but modern antitrust rightly focuses on the likely effects of such mergers, not on the mere fact that they may increase concentration.  Furthermore, antitrust assesses the effects of business agreements on the competitive process.  Antitrust does not ask whether business arrangements yield “unacceptably” high profits, or “overly low” rates of business formation, or “unacceptable” wealth and income inequality.  Indeed, antitrust is not well equipped to address such questions, nor does it possess the tools to “solve” them (even assuming they need to be solved).

In short, if American competition is indeed declining based on the symptoms flagged by the AAI Report, the key to the solution will not be found by searching under the antitrust policy lamppost for illumination.  Rather, a more thorough search, with the help of “common sense” flashlights, is warranted.

The search outside the antitrust spotlight is not, however, a difficult one.  Finding the explanation for lagging competitive conditions in the United States requires no great policy legerdemain, because sound published research already provides the answer.  And that answer centers on government failures, not private sector abuses.

Consider overregulation.  In its annual Red Tape Rising reports (see here for the latest one), the Heritage Foundation has documented the growing burden of federal regulation on the American economy.  Overregulation acts like an implicit tax on businesses and disincentivizes business start-ups.  Moreover, as regulatory requirements grow in complexity and burdensomeness, they increasingly place a premium on large size – larger businesses are better able than their smaller rivals to afford the fixed costs of establishing regulatory compliance departments.  Heritage Foundation Scholar Norbert Michel summarizes this phenomenon in his article Dodd-Frank and Glass-Steagall – ‘Consumer Protection for Billionaires’:

Even when it’s not by nefarious design, we end up with rules that favor the largest/best-funded firms over their smaller/less-well-funded competitors. Put differently, our massive regulatory state ends up keeping large firms’ competitors at bay.  The more detailed regulators try to be, the more complex the rules become. And the more complex the rules become, the smaller the number of people who really care. Hence, more complicated rules and regulations serve to protect existing firms from competition more than simple ones. All of this means consumers lose. They pay higher prices, they have fewer choices of financial products and services, and they pretty much end up with the same level of protection they’d have with a smaller regulatory state.

What’s worse, some of the most onerous regulatory schemes are explicitly designed to favor large competitors over small ones.  A prime example is financial services regulation, and, in particular, the rules adopted pursuant to the 2010 Dodd-Frank Act (other examples could readily be provided).  As a Heritage Foundation report explains (footnote citations omitted):

The [Dodd-Frank] act was largely intended to reduce the risk of a major bank failure, but the regulatory burden is crippling community banks (which played little role in the financial crisis). According to Harvard University researchers Marshall Lux and Robert Greene, small banks’ share of U.S. commercial banking assets declined nearly twice as much since the second quarter of 2010—around the time of Dodd–Frank’s passage—as occurred between 2006 and 2010. Their share currently stands at just 22 percent, down from 41 percent in 1994.

The increased consolidation rate is driven by regulatory economies of scale—larger banks are better suited to handle increased regulatory burdens than are smaller banks, causing the average costs of community banks to rise. The decline in small bank assets spells trouble for their primary customer base—small business loans and those seeking residential mortgages.

Ironically, Dodd–Frank proponents pushed for the law as necessary to rein in the big banks and Wall Street. In fact, the regulations are giving the largest companies a competitive advantage over smaller enterprises—the opposite outcome sought by Senator Christopher Dodd (D–CT), Representative Barney Frank (D–MA), and their allies. As Goldman Sachs CEO Lloyd Blankfein recently explained: “More intense regulatory and technology requirements have raised the barriers to entry higher than at any other time in modern history. This is an expensive business to be in, if you don’t have the market share in scale.”

In sum, as Dodd-Frank and other regulatory programs illustrate, large government rulemaking schemes often are designed to favor large, wealthy, and well-connected rent-seekers at the expense of smaller and more dynamic competitors.

More generally, as Heritage Foundation President Jim DeMint and Heritage Action for America CEO Mike Needham have emphasized, well-connected businesses use lobbying and inside influence to benefit themselves by having government enact special subsidies, bailouts and complex regulations, including special tax preferences. Those special preferences undermine competition on the merits by firms that lack insider status, to the public detriment.  Relatedly, the hideously complex system of American business taxation, which features the highest corporate tax rates in the developed world (which can better be manipulated by very large corporate players), depresses wages and is a serious drag on the American economy, as shown by Heritage Foundation scholars Curtis Dubay and David Burton.  In a similar vein, David Burton testified before Congress in 2015 on how the various excesses of the American regulatory state (including bad tax, health care, immigration, and other regulatory policies, combined with an overly costly legal system) undermine U.S. entrepreneurship (see here).

In other words, special subsidies, regulations, and tax and regulatory programs for the well-connected are part and parcel of crony capitalism, which (1) favors large businesses, tending to raise concentration; (2) confers higher profits on the well-connected while discouraging small business entrepreneurship; and (3) promotes income and wealth inequality, with the greatest returns going to the wealthiest government cronies who know best how to play the Washington “rent seeking game.”  Unfortunately, crony capitalism has grown like Topsy during the Obama Administration.

Accordingly, I would counsel AAI to turn its scholarly gaze away from antitrust and toward the true source of the American competitive ailments it spotlights:  crony capitalism enabled by the growth of big government special interest programs and increasingly costly regulatory schemes.  Let’s see if AAI takes my advice.

This week, the International Center for Law & Economics filed comments on the proposed revision to the joint U.S. Federal Trade Commission (FTC) – U.S. Department of Justice (DOJ) Antitrust-IP Licensing Guidelines. Overall, the guidelines present a commendable framework for the IP-antitrust intersection, in particular as they broadly recognize the value of IP and licensing in spurring both innovation and commercialization.

Although our assessment of the proposed guidelines is generally positive, we do go on to offer some constructive criticism. In particular, we believe, first, that the proposed guidelines should more strongly recognize that a refusal to license does not deserve special scrutiny; and, second, that traditional antitrust analysis is largely inappropriate for the examination of innovation or R&D markets.

On refusals to license,

Many of the product innovation cases that have come before the courts rely upon what amounts to an implicit essential facilities argument. The theories that drive such cases, although not explicitly relying upon the essential facilities doctrine, encourage claims based on variants of arguments about interoperability and access to intellectual property (or products protected by intellectual property). But, the problem with such arguments is that they assume, incorrectly, that there is no opportunity for meaningful competition with a strong incumbent in the face of innovation, or that the absence of competitors in these markets indicates inefficiency … Thanks to the very elements of IP that help them to obtain market dominance, firms in New Economy technology markets are also vulnerable to smaller, more nimble new entrants that can quickly enter and supplant incumbents by leveraging their own technological innovation.

Further, since a right to exclude is a fundamental component of IP rights, a refusal to license IP should continue to be generally considered as outside the scope of antitrust inquiries.

And, with respect to conducting antitrust analysis of R&D or innovation “markets,” we note first that “it is the effects on consumer welfare against which antitrust analysis and remedies are measured” before going on to note that the nature of R&D makes its effects on consumer welfare very difficult to measure. Thus, we recommend that the agencies continue to focus on actual goods and services markets:

[C]ompetition among research and development departments is not necessarily a reliable driver of innovation … R&D “markets” are inevitably driven by a desire to innovate with no way of knowing exactly what form or route such an effort will take. R&D is an inherently speculative endeavor, and standard antitrust analysis applied to R&D will be inherently flawed because “[a] challenge for any standard applied to innovation is that antitrust analysis is likely to occur after the innovation, but ex post outcomes reveal little about whether the innovation was a good decision ex ante, when the decision was made.”

Public comments on the proposed revision to the joint U.S. Federal Trade Commission (FTC) – U.S. Department of Justice (DOJ) Antitrust-IP Licensing Guidelines have, not surprisingly, focused primarily on fine points of antitrust analysis carried out by those two federal agencies (see, for example, the thoughtful recommendations by the Global Antitrust Institute, here).  In a September 23 submission to the FTC and the DOJ, however, U.S. International Trade Commissioner F. Scott Kieff focused on a broader theme – that patent-antitrust assessments should keep in mind the indirect effects on commercialization that stem from IP (and, in particular, patents).  Kieff argues that antitrust enforcers have employed a public law “rules-based” approach that balances the “incentive to innovate” created when patents prevent copying against the goals of competition.  In contrast, Kieff characterizes the commercialization approach as rooted in the property rights nature of patents and the use of private contracting to bring together complementary assets and facilitate coordination.  As Kieff explains (in italics, footnote citations deleted):

A commercialization approach to IP views IP more in the tradition of private law, rather than public law. It does so by placing greater emphasis on viewing IP as property rights, which in turn is accomplished by greater reliance on interactions among private parties over or around those property rights, including via contracts. Centered on the relationships among private parties, this approach to IP emphasizes a different target and a different mechanism by which IP can operate. Rather than target particular individuals who are likely to respond to IP as incentives to create or invent in particular, this approach targets a broad, diverse set of market actors in general; and it does so indirectly. This broad set of indirectly targeted actors encompasses the creator or inventor of the underlying IP asset as well as all those complementary users of a creation or an invention who can help bring it to market, such as investors (including venture capitalists), entrepreneurs, managers, marketers, developers, laborers, and owners of other key assets, tangible and intangible, including other creations or inventions. Another key difference in this approach to IP lies in the mechanism by which these private actors interact over and around IP assets. This approach sees IP rights as tools for facilitating coordination among these diverse private actors, in furtherance of their own private interests in commercializing the creation or invention.

This commercialization approach sees property rights in IP serving a role akin to beacons in the dark, drawing to themselves all of those potential complementary users of the IP-protected-asset to interact with the IP owner and each other. This helps them each explore through the bargaining process the possibility of striking contracts with each other.

Several payoffs can flow from using this commercialization approach. Focusing on such a beacon-and-bargain effect can relieve the governmental side of the IP system of the need to amass the detailed information required to reasonably tailor a direct targeted incentive, such as each actor’s relative interests and contributions, needs, skills, or the like. Not only is amassing all of that information hard for the government to do, but large, established market actors may be better able than smaller market entrants to wield the political influence needed to get the government to act, increasing risk of concerns about political economy, public choice, and fairness. Instead, when governmental bodies closely adhere to a commercialization approach, each private party can bring its own expertise and other assets to the negotiating table while knowing—without necessarily having to reveal to other parties or the government—enough about its own level of interest and capability when it decides whether to strike a deal or not.            

Such successful coordination may help bring new business models, products, and services to market, thereby decreasing anticompetitive concentration of market power. It also can allow IP owners and their contracting parties to appropriate the returns to any of the rival inputs they invested towards developing and commercializing creations or inventions—labor, lab space, capital, and the like. At the same time, the government can avoid having to then go back to evaluate and trace the actual relative contributions that each participant brought to a creation’s or an invention’s successful commercialization—including, again, the cost of obtaining and using that information and the associated risks of political influence—by enforcing the terms of the contracts these parties strike with each other to allocate any value resulting from the creation’s or invention’s commercialization. In addition, significant economic theory and empirical evidence suggests this can all happen while the quality-adjusted prices paid by many end users actually decline and public access is high. In keeping with this commercialization approach, patents can be important antimonopoly devices, helping a smaller “David” come to market and compete against a larger “Goliath.”

A commercialization approach thereby mitigates many of the challenges raised by the tension that is a focus of the other intellectual approaches to IP, as well as by the responses these other approaches have offered to that tension, including some – but not all – types of AT regulation and enforcement. Many of the alternatives to IP that are often suggested by other approaches to IP, such as rewards, tax credits, or detailed rate regulation of royalties by AT enforcers can face significant challenges in facilitating the private sector coordination benefits envisioned by the commercialization approach to IP. While such approaches often are motivated by concerns about rising prices paid by consumers and direct benefits paid to creators and inventors, they may not account for the important cases in which IP rights are associated with declines in quality-adjusted prices paid by consumers and other forms of commercial benefits accrued to the entire IP production team as well as to consumers and third parties, which are emphasized in a commercialization approach. In addition, a commercialization approach can embrace many of the practical checks on the market power of an IP right that are often suggested by other approaches to IP, such as AT review, government takings, and compulsory licensing. At the same time this approach can show the importance of maintaining self-limiting principles within each such check to maintain commercialization benefits and mitigate concerns about dynamic efficiency, public choice, fairness, and the like.

To be sure, a focus on commercialization does not ignore creators or inventors or creations or inventions themselves. For example, a system successful in commercializing inventions can have the collateral benefit of providing positive incentives to those who do invent through the possibility of sharing in the many rewards associated with successful commercialization. Nor does a focus on commercialization guarantee that IP rights cause more help than harm. Significant theoretical and empirical questions remain open about benefits and costs of each approach to IP. And significant room to operate can remain for AT enforcers pursuing their important public mission, including at the IP-AT interface.

Commissioner Kieff’s evaluation is in harmony with other recent scholarly work, including Professor Dan Spulber’s explanation that the actual nature of long-term private contracting arrangements among patent licensors and licensees avoids alleged competitive “imperfections,” such as harmful “patent hold-ups,” “patent thickets,” and “royalty stacking” (see my discussion here).  More generally, Commissioner Kieff’s latest pronouncement is part of a broader and growing theoretical and empirical literature that demonstrates close associations between strong patent systems and economic growth and innovation (see, for example, here).

There is a major lesson here for U.S. (and foreign) antitrust enforcement agencies.  As I have previously pointed out (see, for example, here), in recent years, antitrust enforcers here and abroad have taken positions that tend to weaken patent rights.  Those positions typically are justified by the existence of “patent policy deficiencies” such as those that Professor Spulber’s paper debunks, as well as an alleged epidemic of low quality “probabilistic patents” (see, for example, here) – justifications that ignore the substantial economic benefits patents confer on society through contracting and commercialization.  It is high time for antitrust to accommodate the insights drawn from this new learning.  Specifically, government enforcers should change their approach and begin incorporating private law/contracting/commercialization considerations into patent-antitrust analysis, in order to advance the core goals of antitrust – the promotion of consumer welfare and efficiency.  Better yet, if the FTC and DOJ truly want to maximize the net welfare benefits of antitrust, they should undertake a more general “policy reboot” and adopt a “decision-theoretic” error cost approach to enforcement policy, rooted in cost-benefit analysis (see here) and consistent with the general thrust of Roberts Court antitrust jurisprudence (see here).

The Global Antitrust Institute (GAI) at George Mason University’s Antonin Scalia Law School released today a set of comments on the joint U.S. Department of Justice (DOJ) – Federal Trade Commission (FTC) August 12 Proposed Update to their 1995 Antitrust Guidelines for the Licensing of Intellectual Property (Proposed Update).  As has been the case with previous GAI filings (see here, for example), today’s GAI Comments are thoughtful and on the mark.

For those of you who are pressed for time, the latest GAI comments make these major recommendations (summary in italics):

Standard Essential Patents (SEPs):  The GAI Comments commended the DOJ and the FTC for preserving the principle that the antitrust framework is sufficient to address potential competition issues involving all IPRs—including both SEPs and non-SEPs.  In doing so, the DOJ and the FTC correctly rejected the invitation to adopt a special brand of antitrust analysis for SEPs in which effects-based analysis was replaced with unique presumptions and burdens of proof. 

o   The GAI Comments noted that, as FTC Chairman Edith Ramirez has explained, “the same key enforcement principles [found in the 1995 IP Guidelines] also guide our analysis when standard essential patents are involved.”

o   This is true because SEP holders, like other IP holders, do not necessarily possess market power in the antitrust sense, and conduct by SEP holders, including breach of a voluntary assurance to license its SEP on fair, reasonable, and nondiscriminatory (FRAND) terms, does not necessarily result in harm to the competitive process or to consumers. 

• Again, as Chairwoman Ramirez has stated, “it is important to recognize that a contractual dispute over royalty terms, whether the rate or the base used, does not in itself raise antitrust concerns.”

Refusals to License:  The GAI Comments expressed concern that the statements regarding refusals to license in Sections 2.1 and 3 of the Proposed Update seem to depart from the general enforcement approach set forth in the 2007 DOJ-FTC IP Report in which those two agencies stated that “[a]ntitrust liability for mere unilateral, unconditional refusals to license patents will not play a meaningful part in the interface between patent rights and antitrust protections.”  The GAI recommended that the DOJ and the FTC incorporate this approach into the final version of their updated IP Guidelines.

“Unreasonable Conduct”:  The GAI Comments recommended that Section 2.2 of the Proposed Update be revised to replace the phrase “unreasonable conduct” with a clear statement that the agencies will only condemn licensing restraints when anticompetitive effects outweigh procompetitive benefits.

R&D Markets:  The GAI Comments urged the DOJ and the FTC to reconsider the inclusion of research and development (R&D) markets (or, at the very least, to substantially limit their use) because: (1) the process of innovation is often highly speculative and decentralized, making it impossible to identify all potential market participants; (2) the optimal relationship between R&D and innovation is unknown; (3) the market structure most conducive to innovation is unknown; (4) the capacity to innovate is hard to monopolize given that the components of modern R&D—research scientists, engineers, software developers, laboratories, computer centers, etc.—are continuously available on the market; and (5) anticompetitive conduct can be challenged under the actual potential competition theory or at a later time.

While the GAI Comments are entirely on point, even if their recommendations are all adopted, much more needs to be done.  The Proposed Update, while relatively sound, should be viewed in the larger context of the Obama Administration’s unfortunate use of antitrust policy to weaken patent rights (see my article here, for example).  In addition to strengthening the revised Guidelines, as suggested by the GAI, the DOJ and the FTC should work with other component agencies of the next Administration – including the Patent Office and the White House – to signal enhanced respect for IP rights in general.  In short, a general turnaround in IP policy is called for, in order to spur American innovation, which has been all too lacking in recent years.

Today ICLE released a white paper entitled, A critical assessment of the latest charge of Google’s anticompetitive bias from Yelp and Tim Wu.

The paper is a comprehensive response to a study by Michael Luca, Timothy Wu, Sebastian Couvidat, Daniel Frank, & William Seltzer, entitled, Is Google degrading search? Consumer harm from Universal Search.

The Wu, et al. paper will be one of the main topics of discussion at today’s Capitol Forum and George Washington Institute of Public Policy event on Dominant Platforms Under the Microscope: Policy Approaches in the US and EU, at which I will be speaking — along with a host of luminaries including, inter alia, Josh Wright, Jonathan Kanter, Allen Grunes, Catherine Tucker, and Michael Luca — one of the authors of the Universal Search study.

Follow the link above to register — the event starts at noon today at the National Press Club.

Meanwhile, here’s a brief description of our paper:

Late last year, Tim Wu of Columbia Law School (and now the White House Office of Management and Budget), Michael Luca of Harvard Business School (and a consultant for Yelp), and a group of Yelp data scientists released a study claiming that Google has been purposefully degrading search results from its more-specialized competitors in the area of local search. The authors’ claim is that Google is leveraging its dominant position in general search to thwart competition from specialized search engines by favoring its own, less-popular, less-relevant results over those of its competitors:

To improve the popularity of its specialized search features, Google has used the power of its dominant general search engine. The primary means for doing so is what is called the “universal search” or the “OneBox.”

This is not a new claim, and researchers have been attempting (and failing) to prove Google’s “bias” for some time. Likewise, these critics have drawn consistent policy conclusions from their claims, asserting that antitrust violations lie at the heart of the perceived bias. But the studies are systematically marred by questionable methodology and bad economics.

This latest study by Tim Wu, along with a cadre of researchers employed by Yelp (one of Google’s competitors and one of its chief antitrust provocateurs), fares no better, employing slightly different but equally questionable methodology, bad economics, and a smattering of new, but weak, social science. (For a thorough criticism of the inherent weaknesses of Wu et al.’s basic social science methodology, see Miguel de la Mano, Stephen Lewis, and Andrew Leyden, Focus on the Evidence: A Brief Rebuttal of Wu, Luca, et al (2016), available here).

The basic thesis of the study is that Google purposefully degrades its local searches (e.g., for restaurants, hotels, services, etc.) to the detriment of its specialized search competitors, local businesses, consumers, and even Google’s bottom line — and that this is an actionable antitrust violation.

But in fact the study shows nothing of the kind. Instead, the study is marred by methodological problems that, in the first instance, make it impossible to draw any reliable conclusions. Nor does the study show that Google’s conduct creates any antitrust-relevant problems. Rather, the construction of the study and the analysis of its results reflect a superficial and inherently biased conception of consumer welfare that completely undermines the study’s purported legal and economic conclusions.

Read the whole thing here.

The Antitrust Division of the U.S. Department of Justice (DOJ) ignored sound law and economics principles in its August 4 decision announcing a new interpretation of seventy-five-year-old music licensing consent decrees it had entered into separately with the two major American “performing rights organizations” (PROs)  —  the American Society of Composers, Authors, and Publishers (see ASCAP) and Broadcast Music, Inc. (see BMI).  It also acted in a manner at odds with international practice.  DOJ should promptly rescind its new interpretation and restore the welfare-enhancing licensing flexibility that ASCAP and BMI previously enjoyed.  If DOJ fails to do this, the court overseeing the decrees or Congress should be prepared to act.

ASCAP and BMI contract with music copyright holders to act as intermediaries that provide “blanket” licenses to music users (e.g., television and radio stations, bars, and internet music distributors) for use of their full copyrighted musical repertoires, without the need for song-specific licensing negotiations.  This greatly reduces the transactions costs of arranging for the playing of musical works, benefiting music users, the listening public, and copyright owners (all of whom are assured of at least some compensation for their endeavors).  ASCAP and BMI are big businesses, with each PRO holding licenses to over ten million works and accounting for roughly 45 percent of the domestic music licensing market (ninety percent combined).

Because both ASCAP and BMI pool copyrighted songs that could otherwise compete with each other, and both grant users a single-price “blanket license” conveying the rights to play their full set of copyrighted works, the two organizations could be seen as restricting competition among copyrighted works and fixing the prices of copyrighted substitutes – raising serious questions under Section 1 of the Sherman Antitrust Act, which condemns contracts that unreasonably restrain trade.  This led the DOJ to bring antitrust suits against ASCAP and BMI over eighty years ago, which were settled by separate judicially-filed consent decrees in 1941.  The decrees imposed a variety of limitations on the two PROs’ licensing practices, aimed at preventing ASCAP and BMI from exercising anticompetitive market power (such as the setting of excessive licensing rates).  The decrees were amended twice over the years, most recently in 2001, to take account of changing market conditions.

The U.S. Supreme Court noted the constraining effect of the decrees in BMI v. CBS (1979), in ruling that the BMI and ASCAP blanket licenses did not constitute per se illegal price fixing.
The Court held, rather, that the licenses should be evaluated on a case-by-case basis under the antitrust “rule of reason,” since the licenses inherently generated great efficiency benefits (“the immediate use of covered compositions, without the delay of prior individual negotiations”) that had to be weighed against potential anticompetitive harms.

The August 4, 2016 DOJ Consent Decree Interpretation

Fast forward to 2014, when DOJ undertook a new review of the ASCAP and BMI decrees, and requested the submission of public comments to aid it in its deliberations.  This review came to an official conclusion two years later, on August 4, 2016, when DOJ decided not to amend the decrees – but announced a decree interpretation that limits ASCAP’s and BMI’s flexibility.  Specifically, DOJ stated that the decrees needed to be “more consistently applied.”  By this, the DOJ meant that BMI and ASCAP should only grant blanket licenses that cover all of the rights to 100 percent of the works in the PROs’ respective catalogs, not licenses that cover only partial interests in those works.  DOJ stated:

Only full-work licensing can yield the substantial procompetitive benefits associated with blanket licenses that distinguish ASCAP’s and BMI’s activities from other agreements among competitors that present serious issues under the antitrust laws.

The New DOJ Interpretation Is Bad as a Matter of Policy

DOJ’s August 4 interpretation rejects industry practice.  Under it, ASCAP and BMI will only be able to offer a license covering all of the copyright interests in a musical composition, even if the license covers a joint work.  For example, consider a band of five composer-musicians, each of whom has a fractional interest in the copyright covering the band’s new album, which is a joint work.  Previously, each musician was able to offer a partial interest in the joint work to a performance rights organization, reflecting the relative shares of the total copyright interest covering the work.  The organization could offer a partial license, and a user could aggregate different partial licenses in order to cover the whole joint work.

Now, however, under DOJ’s new interpretation, BMI and ASCAP will be prevented from offering partial licenses to that work to users. This may deny the band’s individual members the opportunity to deal profitably with BMI and ASCAP, thereby undermining their ability to receive fair compensation.  As the two PROs have noted, this approach “will cause unnecessary chaos in the marketplace and place unfair financial burdens and creative constraints on songwriters and composers.”  According to ASCAP President Paul Williams, “It is as if the DOJ saw songwriters struggling to stay afloat in a sea of outdated regulations and decided to hand us an anchor, in the form of 100 percent licensing, instead of a life preserver.”  Furthermore, the president and CEO of BMI, Mike O’Neill, stated:  “We believe the DOJ’s interpretation benefits no one – not BMI or ASCAP, not the music publishers, and not the music users – but we are most sensitive to the impact this could have on you, our songwriters and composers.”  These views are bolstered by a January 2016 U.S. Copyright Office report, which concluded that “an interpretation of the consent decrees that would require 100-percent licensing or removal of a work from the ASCAP or BMI repertoire would appear to be fraught with legal and logistical problems, and might well result in a sharp decrease in repertoire available through these [performance rights organizations’] blanket licenses.”  Regrettably, during the decree review period, DOJ ignored the expert opinion of the Copyright Office, as well as the public record comments of numerous publishers and artists (see here, for example) indicating that a 100 percent licensing requirement would depress returns to copyright owners and undermine the creative music industry.

Most fundamentally, DOJ’s new interpretation of the BMI and ASCAP consent decrees involves an abridgment of economic freedom.  It further limits the flexibility of music copyright holders and music users to contract with intermediaries to promote the efficient distribution of music performance rights, in a manner that benefits the listening public while allowing creative artists sufficient compensation for their efforts.  DOJ made no compelling showing that a new consent decree constraint (a 100 percent licensing requirement) is needed to promote competition.  Far from promoting competition, DOJ’s new interpretation undermines it.  Indeed, DOJ micromanagement of copyright licensing by consent decree reinterpretation is a costly new regulatory initiative that reflects a lack of appreciation for intellectual property rights, which incentivize innovation.  In short, DOJ’s latest interpretation of the ASCAP and BMI decrees is terrible policy.

The New DOJ Interpretation Is Bad as a Matter of Law

DOJ’s new interpretation not only is bad policy, it is inconsistent with sound textual construction of the decrees themselves.  As counsel for BMI explained in an August 4 federal court filing (in the Southern District of New York, which oversees the decrees), the BMI decree (and therefore the analogous ASCAP decree as well) does not expressly require 100 percent licensing and does not unambiguously prohibit fractional licensing.  Accordingly, since a consent decree is an injunction, and any activity not expressly required or prohibited thereunder is permitted, fractional shares licensing should be authorized.  DOJ’s new interpretation ignores this principle.  It also is at odds with a report of the U.S. Copyright Office that concluded the BMI consent decree “must be understood to include partial interests in musical works.”  Furthermore, the new interpretation is belied by the fact that the PRO licensing market has developed and functioned efficiently for decades by pricing, collecting, and distributing fees for royalties on a fractional basis.  Courts view such evidence of trade practice and custom as relevant in determining the meaning of a consent decree.

The New DOJ Interpretation Runs Counter to International Norms

Finally, according to Gadi Oron, Director General of the International Confederation of Societies of Authors and Composers (CISAC), a Paris-based organization that brings together 239 rights societies from 123 countries, including ASCAP, BMI, and SESAC, adoption of the new interpretation would depart from international norms in the music licensing industry and have disruptive international effects:

It is clear that the DoJ’s decisions have been made without taking the interests of creators, neither American nor international, into account. It is also clear that they were made with total disregard for the international framework, where fractional licensing is practiced, even if it’s less of a factor because many countries only have one performance rights organization representing songwriters in their territory. International copyright laws grant songwriters exclusive rights, giving them the power to decide who will license their rights in each territory and it is these rights that underpin the landscape in which authors’ societies operate. The international system of collective management of rights, which is based on reciprocal representation agreements and founded on the freedom of choice of the rights holder, would be negatively affected by such level of government intervention, at a time when it needs support more than ever.

In sum, DOJ should take account of these concerns and retract its new interpretation of the ASCAP and BMI consent decrees, restoring the status quo ante.  If it fails to do so, a federal court should be prepared to act, and, if necessary, Congress should seriously consider appropriate corrective legislation.

On August 6, the Global Antitrust Institute (the GAI, a division of the Antonin Scalia Law School at George Mason University) submitted a filing (GAI filing or filing) in response to the Japan Fair Trade Commission’s (JFTC’s) consultation on reforms to the Japanese system of administrative surcharges assessed for competition law violations (see here for a link to the GAI’s filing).  The GAI’s outstanding filing was authored by GAI Director Koren Wong-Ervin and Professors Douglas Ginsburg, Joshua Wright, and Bruce Kobayashi of the Scalia Law School.

The GAI filing’s three sets of major recommendations, set forth in italics, are as follows:

(1)   Due Process

While the filing recognizes that the process may vary depending on the jurisdiction, it strongly urges the JFTC to adopt the core features of a fair and transparent process, including:

(a)        Legal representation for parties under investigation, allowing the participation of local and foreign counsel of the parties’ choosing;

(b)        Notifying the parties of the legal and factual bases of an investigation and sharing the evidence on which the agency relies, including any exculpatory evidence and excluding only confidential business information;

(c)        Direct and meaningful engagement between the parties and the agency’s investigative staff and decision-makers;

(d)        Allowing the parties to present their defense to the ultimate decision-makers; and

(e)        Ensuring checks and balances on agency decision-making, including meaningful access to independent courts.

(2)   Calculation of Surcharges

The filing agrees with the JFTC that Japan’s current inflexible system of surcharges is unlikely to accurately reflect the degree of economic harm caused by anticompetitive practices.  As a general matter, the filing recommends that surcharges imposed under Japan’s new system rely upon economic analysis, rather than sales volume as a proxy, to determine the harm caused by violations of Japan’s Antimonopoly Act.

In that light, and more specifically, the filing recommends that the JFTC limit punitive surcharges to matters in which:

(a)          the antitrust violation is clear (i.e., if considered at the time the conduct is undertaken, and based on existing laws, rules, and regulations, a reasonable party should expect the conduct at issue would likely be illegal) and is without any plausible efficiency justification;

(b)          it is feasible to articulate and calculate the harm caused by the violation;

(c)           the measure of harm calculated is the basis for any fines or penalties imposed; and

(d)          there are no alternative remedies that would adequately deter future violations of the law. 

In the alternative, and at the very least, the filing urges the JFTC to expand the circumstances under which it will not seek punitive surcharges to include two types of conduct that are widely recognized as having efficiency justifications:

  • unilateral conduct, such as refusals to deal and discriminatory dealing; and
  • vertical restraints, such as exclusive dealing, tying and bundling, and resale price maintenance.

(3)   Settlement Process

The filing recommends that the JFTC consider incorporating safeguards that prevent settlement provisions unrelated to the violation and limit the use of extended monitoring programs.  The filing notes that consent decrees and commitments extracted to settle a case too often end up imposing abusive remedies that undermine the welfare-enhancing goals of competition policy.  An agency’s ability to obtain in terrorem concessions reflects a party’s weighing of the costs and benefits of litigating versus the costs and benefits of acquiescing in the terms sought by the agency.  When firms settle merely to avoid the high relative costs of litigation and regulatory procedures, an agency may be able to extract more restrictive terms on firm behavior by entering into an agreement than by litigating its accusations in a court.  In addition, while settlements may be a more efficient use of scarce agency resources, the savings may come at the cost of potentially stunting the development of the common law arising through adjudication.

In sum, the latest filing maintains the GAI’s practice of employing law and economics analysis to recommend reforms in the imposition of competition law remedies (see here, here, and here for summaries of prior GAI filings that are in the same vein).  The GAI’s dispassionate analysis highlights principles of universal application – principles that may someday point the way toward greater economically sensible convergence among national antitrust remedial systems.

In addition to reforming substantive antitrust doctrine, the Supreme Court in recent decades succeeded in curbing the unwarranted costs of antitrust litigation by erecting new procedural barriers to highly questionable antitrust suits.  It did this principally through three key “gatekeeper” decisions, Monsanto (1984), Matsushita (1986), and Twombly (2007).

Prior to those holdings, bare allegations in a complaint typically were sufficient to avoid dismissal.  Furthermore, summary judgment was very hard to obtain, given the Supreme Court’s pronouncement in Poller v. CBS (1962) that “summary procedures should be used sparingly in complex antitrust litigation.”  Thus, plaintiffs had a strong incentive to file dubious (if not meritless) antitrust suits, in the hope of coercing unwarranted settlements from defendants faced with the prospect of burdensome, extended antitrust litigation – litigation that could impose serious business reputational costs over time, in addition to direct and indirect litigation costs.

This all changed starting in 1984.  Monsanto required that a plaintiff show a “conscious commitment to a common scheme designed to achieve an unlawful objective” to support a Sherman Act Section 1 (Section 1) antitrust conspiracy allegation.  Building on Monsanto, Matsushita held that “conduct as consistent with permissible competition as with illegal conspiracy does not, standing alone, support an inference of antitrust conspiracy.”  In Twombly, the Supreme Court made it easier to succeed on a motion to dismiss a Section 1 complaint, holding that mere evidence of parallel conduct does not establish a conspiracy.  Rather, under Twombly, a plaintiff seeking relief under Section 1 must allege, at a minimum, the general contours of when an agreement was made and must support those allegations with a context that tends to make such an agreement plausible.  (The Twombly Court’s approval of motions to dismiss as a tool to rein in excessive antitrust litigation costs was implicit in its admonition not to “forget that proceeding to antitrust discovery can be expensive.”)

In sum, as Professor Herbert Hovenkamp has put it, “[t]he effects of Twombly and Matsushita has [sic] been a far-reaching shift in the way antitrust cases proceed, and today a likely majority are dismissed on the pleadings or summary judgment before going to trial.”

Visa v. Osborn

So far, so good.  Trial lawyers never rest, however, and old lessons sometimes need to be relearned, as demonstrated by the D.C. Circuit’s strange opinion in Visa v. Osborn (2015).

Visa v. Osborn involves a putative class action filed against Visa, MasterCard, and three banks, essentially involving a bare bones complaint alleging that similar automatic teller machine pricing rules imposed by Visa and MasterCard were part of a price-fixing conspiracy among the banks and the credit card companies.  As I explained in my recent Competition Policy International article discussing this case, plaintiffs neither alleged any facts indicating communications among defendants nor suggested anything to undermine the very real possibility that the credit card firms separately adopted the rules as being in their independent self-interest.  In short, there is nothing in the complaint indicating that allegations of an anticompetitive agreement are plausible, and, as such, Twombly dictates that the complaint must be dismissed.  Amazingly, however, a D.C. Circuit panel held that the mere allegation “that the member banks used the bankcard associations to adopt and enforce” the purportedly anticompetitive access fee rule was “enough to satisfy the plausibility standard” required to survive a motion to dismiss.

Fortunately, the D.C. Circuit’s Osborn holding (which, in addition to being ill-reasoned, is inconsistent with Third, Fourth, and Ninth Circuit precedents) attracted the eye of the Supreme Court, which granted certiorari on June 28.  Specifically, the Supreme Court agreed to resolve the question “[w]hether allegations that members of a business association agreed to adhere to the association’s rules and possess governance rights in the association, without more, are sufficient to plead the element of conspiracy in violation of Section 1 of the Sherman Act, . . . or are insufficient, as the Third, Fourth, and Ninth Circuits have held.”

As I concluded in my Competition Policy International article:

Business associations bestow economic benefits on society through association rules that enable efficient cooperative activities.  Subjecting association members to potential antitrust liability merely for signing on to such rules and participating in association governance would substantially chill participation in associations and undermine the development of new and efficient forms of collaboration among businesses.  Such a development would reduce economic dynamism and harm both producers and consumers.  By decisively overruling the D.C. Circuit’s flawed decision in Osborn, the Supreme Court would preclude a harmful form of antitrust risk and establish an environment in which fruitful business association decision-making is granted greater freedom, to the benefit of the business community, consumers, and the overall economy.  

In addition, and more generally, the Court may wish to remind litigants that the antitrust litigation gatekeeper function laid out in Monsanto, Matsushita, and Twombly remains as strong and as vital as ever.  In so doing, the Court would reaffirm that motions to dismiss and summary judgment motions remain critically important tools needed to curb socially costly abusive antitrust litigation.

Since the European Commission (EC) announced its first inquiry into Google’s business practices in 2010, the company has been the subject of lengthy investigations by courts and competition agencies around the globe. Regulatory authorities in the United States, France, the United Kingdom, Canada, Brazil, and South Korea have all opened investigations into similar antitrust claims and rejected them.

And yet the EC marches on, bolstered by Google’s myriad competitors, who continue to agitate for further investigations and enforcement actions, even as we — companies and consumers alike — enjoy the benefits of an increasingly dynamic online marketplace.

Indeed, while the EC has spent more than half a decade casting about for some plausible antitrust claim, the online economy has thundered ahead. Since 2010, Facebook has tripled its active users and multiplied its revenue ninefold; the number of apps available in the Amazon app store has grown from fewer than 4,000 to over 400,000 today; and there are almost 1.5 billion more Internet users globally than there were in 2010. And consumers are increasingly using new and different ways to search for information: Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Facebook’s Messenger are a few of the many new innovations challenging traditional search engines.

Advertisers have adapted to this evolution, moving increasingly online, and from search to display ads as mobile adoption has skyrocketed. Social networks like Twitter and Snapchat have come into their own, competing for the same (and ever-increasing) advertising dollars. For marketers, advertising on social networks is now just as important as advertising in search. No wonder e-commerce sales have more than doubled, to almost $2 trillion worldwide; for the first time, consumers purchased more online than in stores this past year.

To paraphrase Louis C.K.: Everything is amazing — and no one at the European Commission is happy.

The EC’s market definition is fatally flawed

Like its previous claims, the Commission’s most recent charges are rooted in the assertion that Google abuses its alleged dominance in “general search” advertising to unfairly benefit itself and to monopolize other markets. But European regulators continue to miss the critical paradigm shift among online advertisers and consumers that has upended this stale view of competition on the Internet. The reality is that Google’s competition may not, and need not, look exactly like Google itself, but it is competition nonetheless. And it’s happening in spades.

The key to understanding why the European Commission’s case is fundamentally flawed lies in an examination of how it defines the relevant market. Through a series of economically and factually unjustified assumptions, the Commission defines search as a distinct market in which Google faces limited competition and enjoys an 80% market share. In other words, for the EC, “general search” apparently means only nominal search providers like Google and Bing; it doesn’t mean companies like Amazon, Facebook and Twitter — Google’s biggest competitors.  

But the reality is that “general search” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google happens to use to match consumers and advertisers doesn’t reflect the substitutability of other mechanisms that do the same thing — merely because these mechanisms aren’t called “search.”

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive.

Consumers today are increasingly using platforms like Amazon and Facebook as substitutes for the searches they might have run on Google or Bing. “Closed” platforms like the iTunes store and innumerable apps handle copious search traffic but also don’t figure in the EC’s market calculations. And so-called “dark social” interactions like email, text messages, and IMs drive huge amounts of some of the most valuable traffic on the Internet. This, in turn, has led to a competitive scramble to roll out completely new technologies like chatbots to meet consumers’ informational (and merchants’ advertising) needs.

Properly construed, Google’s market position is precarious

Like Facebook and Twitter (and practically every other Internet platform), Google derives its primary revenue from advertising. Instead of charging for fancy hardware or offering services to users for a fee, Google offers search, the Android operating system, and a near-endless array of other valuable services for free to users. The company’s very existence relies on attracting Internet users and consumers to its properties in order to effectively connect them with advertisers.

But being an online matchmaker is a difficult and competitive enterprise. Among other things, the ability to generate revenue turns crucially on the quality of the match: All else equal, an advertiser interested in selling widgets will pay more for an ad viewed by a user who can be reliably identified as being interested in buying widgets.

Google’s primary mechanism for attracting users to match with advertisers — general search — is substantially about information, not commerce, and the distinction between product and informational searches is crucially important to understanding Google’s market and the surprisingly limited and tenuous market power it possesses.

General informational queries aren’t nearly as valuable to advertisers: Significantly, only about 30 percent of Google’s searches even trigger any advertising at all. Meanwhile, as of 2012, one-third of product searches started on Amazon while only 13 percent started on a general search engine.

As economist Hal Singer aptly noted in 2012,

[the data] suggest that Google lacks market power in a critical segment of search — namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”

While Google Search clearly offers substantial value to advertisers, its ability to continue to do so is precarious when confronted with the diverse array of competitors that, like Facebook, offer a level of granularity in audience targeting that general search can’t match, or that, like Amazon, systematically offer up the most valuable searchers.

In order to compete in this market — one properly defined to include actual competitors — Google has had to constantly innovate to maintain its position. Unlike a complacent monopolist, it has evolved to meet changing consumer demand, shifting technology and inventive competitors. Thus, Google’s search algorithm has changed substantially over the years to make more effective use of the information available to ensure relevance; search results have evolved to give consumers answers to queries rather than just links, and to provide more-direct access to products and services; and, as users have shifted more and more of their time and attention to mobile devices, search has incorporated more-localized results.

Competitors want a free lunch

Critics complain, nevertheless, that these developments have made it harder, in one way or another, for rivals to compete. And the EC has provided a willing ear. According to Commissioner Vestager last week:

Google has come up with many innovative products that have made a difference to our lives. But that doesn’t give Google the right to deny other companies the chance to compete and innovate. Today, we have further strengthened our case that Google has unduly favoured its own comparison shopping service in its general search result pages…. (Emphasis added).

Implicit in this statement is the remarkable assertion that by favoring its own comparison shopping services, Google “den[ies] other companies the chance to compete and innovate.” Even assuming Google does “favor” its own results, this is an astounding claim.

First, it is not a violation of competition law simply to treat competitors’ offerings differently than one’s own, even for a dominant firm. Instead, conduct must actually exclude competitors from the market, without offering countervailing advantages to consumers. But Google’s conduct is not exclusionary, and there are many benefits to consumers.

As it has from the start of its investigations of Google, the EC begins with a flawed assumption: that Google’s competitors both require, and may be entitled to, unfettered access to Google’s property in order to compete. But this is patently absurd. Google is not an essential facility: Billions of users reach millions of companies every day through direct browser navigation, apps, email links, review sites and blogs, and countless other means — all without once touching Google.

Nor do Google Search results exclude competitors, whether comparison shopping sites or others. For example, 72% of TripAdvisor’s U.S. traffic comes from search, and almost all of that from organic results; other specialized search sites see similar traffic volumes.

More important, however, in addition to continuing to reach rival sites through Google Search, billions of consumers access rival services directly through their mobile apps. In fact, for Yelp,

Approximately 21 million unique devices accessed Yelp via the mobile app on a monthly average basis in the first quarter of 2016, an increase of 32% compared to the same period in 2015. App users viewed approximately 70% of page views in the first quarter and were more than 10 times as engaged as website users, as measured by number of pages viewed. (Emphasis added).

And a staggering 40 percent of mobile browsing is now happening inside the Facebook app, competing with the browsers and search engines pre-loaded on smartphones.

Millions of consumers also directly navigate to Google’s rivals by simply typing a rival’s address into their browser’s address bar. And as noted above, consumers are increasingly using Google rivals’ new disruptive information engines like Alexa and Siri for their search needs. Even the traditional search engine space is competitive — in fact, according to Wired, as of July 2016:

Microsoft has now captured more than one-third of Internet searches. Microsoft’s transformation from a company that sells boxed software to one that sells services in the cloud is well underway. (Emphasis added).

With such numbers, it’s difficult to see how rivals are being foreclosed from reaching consumers in any meaningful way.

Meanwhile, the benefits to consumers are obvious: Google is directly answering questions for consumers rather than giving them a set of possible links to click through and further search. In some cases its results present entirely new and valuable forms of information (e.g., search trends and structured data); in others they serve to hone searches by suggesting further queries, or to help users determine which organic results (including those of its competitors) may be most useful. And, of course, consumers aren’t forced to endure these innovations if they don’t find them useful, as they can quickly switch to other providers.

Nostalgia makes for bad regulatory policy

Google is not the unstoppable monopolist of the EU competition regulators’ imagining. Rather, it is a continual innovator, forced to adapt to shifting consumer demand, changing technology, and competitive industry dynamics. And, if they are to survive, Google’s competitors (and complainants) must innovate as well, rather than trying to hamstring Google.

Dominance in technology markets — especially online — has always been ephemeral. Once upon a time, MySpace, AOL, and Yahoo were the dominant Internet platforms. Kodak, once practically synonymous with photography itself, let the digital revolution pass it by. The invincible Sony Walkman was upended by mp3s and the iPod. Staid, keyboard-operated Blackberries and Nokias simply couldn’t compete with app-driven, graphical platforms from Apple and Samsung. Even today, startups like Snapchat, Slack, and Spotify gain massive scale and upend entire industries with innovative new technology that can leave less-nimble incumbents in the dustbin of tech history.

Put differently, companies that innovate are able to thrive, while those that remain dependent on yesterday’s technology and outdated business models usually fail — and deservedly so. It should never be up to regulators to pick winners and losers in a highly dynamic and competitive market, particularly if doing so constrains the market’s very dynamism. As Alfonso Lamadrid has pointed out:

It is companies and not competition enforcers which will strive or fail in the adoption of their business models, and it is therefore companies and not competition enforcers who are to decide on what business models to use. Some will prove successful and others will not; some companies will thrive and some will disappear, but with experimentation with business models, success and failure are and have always been part of the game.

In other words, we should not forget that competition law is, or should be, business-model agnostic, and that regulators are – like anyone else – far from omniscient.

Like every other technology company before them, Google and its competitors must be willing and able to adapt in order to keep up with evolving markets — just as for Lewis Carroll’s Red Queen, “it takes all the running you can do, to keep in the same place.” Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters; companies that build their businesses around Google face a near-constantly evolving Google. In the face of such relentless market dynamism, neither consumers nor firms are well served by regulatory policy rooted in nostalgia.