Archives For error costs

This blurb published yesterday by Competition Policy International nicely illustrates the problem with the growing focus on unilateral conduct investigations by the European Commission (EC) and other leading competition agencies:

EU: Qualcomm to face antitrust complaint on predatory pricing

Dec 03, 2015

The European Union is preparing an antitrust complaint against Qualcomm Inc. over suspected predatory pricing tactics that could hobble smaller rivals, according to three people familiar with the probe.

Regulators are in the final stages of preparing a so-called statement of objections, based on a complaint by a unit of Nvidia Corp., that asked the EU to act against predatory pricing for mobile-phone chips, the people said. Qualcomm designs chipsets that power most of the world’s smartphones, licensing its technology across the industry.

Qualcomm would add to a growing list of U.S. technology companies to face EU antitrust action, following probes into Google, Microsoft Corp. and Intel Corp. A statement of objections may lead to fines, capped at 10 percent of yearly global revenue, which can be avoided if a company agrees to make changes to business behavior.

Regulators are less advanced with another probe into whether the company grants payments, rebates or other financial incentives to customers in return for buying Qualcomm chipsets. Another case that focused on complaints that the company was charging excessive royalties on patents was dropped in 2009.

“Predatory pricing” complaints by competitors of successful innovators are typically aimed at hobbling efficient rivals and reducing aggressive competition.  If and when successful, such rent-seeking complaints attenuate competitive vigor (thereby disincentivizing innovation) and tend to raise prices to consumers – a result inimical to antitrust’s overarching goal, consumer welfare promotion.  Although I admittedly am not privy to the facts at issue in the Qualcomm predatory pricing investigation, Nvidia is not a firm that fits the model of a rival being decimated by economic predation (given its overall success and its rapid growth and high profitability in smartchip markets).  In this competitive and dynamic industry, the likelihood that Qualcomm could recoup short-term losses from predation through sustainable monopoly pricing following Nvidia’s exit from the market would seem to be infinitesimally small or non-existent (even assuming pricing below average variable cost or average avoidable cost could be shown).  Thus, there is good reason to doubt the wisdom of the EC’s apparent decision to issue a statement of objections to Qualcomm regarding predatory pricing for mobile phone chips.
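The recoupment logic can be made concrete with a stylized calculation. The sketch below is purely illustrative (every figure in it is invented, not drawn from the Qualcomm matter): a rational predator must recover its below-cost losses through later monopoly profits, discounted to present value and weighted by the probability that exclusion actually sticks in a dynamic market.

```python
# Stylized recoupment test for a predatory pricing claim.
# All numbers are hypothetical, chosen only to illustrate the logic.

def recoupment_npv(predation_loss, annual_monopoly_profit, years,
                   p_exclusion_sticks, discount_rate):
    """Net present value of a predation strategy: certain losses now
    versus expected monopoly profits later, weighted by the chance
    that rivals actually exit and stay out."""
    pv_profits = sum(annual_monopoly_profit / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
    return p_exclusion_sticks * pv_profits - predation_loss

# In a dynamic chip market, the probability that exclusion sticks
# (no re-entry, no leapfrogging entrant) is plausibly very low.
npv = recoupment_npv(predation_loss=500e6,
                     annual_monopoly_profit=300e6,
                     years=5,
                     p_exclusion_sticks=0.05,
                     discount_rate=0.10)
print(npv < 0)  # prints True: predation is irrational here
```

Under these (assumed) parameters the strategy destroys value for the alleged predator, which is the intuition behind doubting recoupment in markets with vigorous entry.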

The investigation of (presumably loyalty) payments and rebates to buyers of Qualcomm chipsets also is unlikely to enhance consumer welfare.  As a general matter, such financial incentives lower costs to loyal customers, and may promote efficiencies such as guaranteed purchase volumes under favorable terms.  Although theoretically loyalty payments might be structured to effectuate anticompetitive exclusion of competitors under very special circumstances, as a general matter such payments – which like alleged “predatory” pricing typically benefit consumers – should not be a high priority for investigation by competition agencies.  This conclusion applies in spades to chipset markets, which are characterized by vigorous competition among successful firms.  Rebate schemes in dynamic markets of this sort are almost certainly a symptom of creative, welfare-enhancing competitive vigor, rather than inefficient exclusionary behavior.

A pattern of investigating price reductions and discounting plans in highly dynamic and innovative industries, exemplified by the EC’s Qualcomm investigations summarized above, is troubling in at least two respects.

First, it creates regulatory disincentives to aggressive welfare-enhancing competition aimed at capturing the customer’s favor.  Companies like Qualcomm, after being suitably chastised, may well “take the cue” and decide to avoid future trouble by “playing nice” and avoiding innovative discounting, to the detriment of future consumers and industry efficiency.

Second, the dedication of enforcement resources to investigating discounting practices by successful firms that (based on first principles and industry conditions) are highly likely to be procompetitive points to a severe misallocation of resources by the responsible competition agencies.  Such agencies should seek to optimize the use of their scarce resources by allocating them to the highest-valued targets in welfare terms, such as anticompetitive government restraints on competition and hard-core cartel conduct.  Spending any resources on chasing down what is almost certainly efficient unilateral pricing conduct not only sends a bad signal to industry (see point one), it suggests that agency priorities are badly misplaced.  (Admittedly, a problem faced by the EC and many other competition authorities is that they are required to respond to third party complaints, but the nature of that response and the resources allocated could be better calibrated to the likely merit of such complaints.  Whether the law should be changed to grant such competition authorities broad prosecutorial discretion to ignore clearly non-meritorious complaints (such as the wide discretion enjoyed by U.S. antitrust enforcers) is beyond the scope of this commentary, and merits separate treatment.)

A proper application of decision theory and its error cost approach could help the EC and other competition enforcers avoid the problem of inefficiently chasing down procompetitive unilateral conduct.  Such an approach would focus intensively on highly welfare inimical conduct that lacks credible efficiencies (thus minimizing false positives in enforcement) that can be pursued with a relatively low expenditure of administrative costs (given the lack of credible efficiency justifications that need to be evaluated).  As indicated above, a substantial allocation of resources to hard core cartel conduct, bid rigging, and anticompetitive government-imposed market distortions (including poorly designed regulations and state aids) would be consistent with such an approach.  Relatedly, investigations of single firm conduct should be deemphasized, since such conduct is central to spurring a dynamic competitive process and is often misdiagnosed as anticompetitive (thereby imposing false positive costs).  (Obviously, even under a decision-theoretic framework, certain agency resources would continue to be devoted to mandatory merger reviews and other core legally required agency functions.)
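The decision-theoretic point can be put in numbers. In the toy comparison below (every probability and cost figure is invented for illustration), an agency minimizes expected error cost by enforcing against conduct categories where harm is near-certain and forbearing where most conduct is procompetitive and false positives are costly.

```python
# Toy error-cost comparison of enforcement priorities.
# All probabilities and costs are hypothetical illustrations.

def expected_error_cost(p_anticompetitive, harm_if_ignored,
                        cost_false_positive, admin_cost, enforce):
    """Expected social cost of enforcing (or not) against a
    given category of conduct."""
    if enforce:
        # False positives: condemning procompetitive conduct.
        return (1 - p_anticompetitive) * cost_false_positive + admin_cost
    # False negatives: letting anticompetitive conduct continue.
    return p_anticompetitive * harm_if_ignored

# Hard-core cartels: almost always harmful, no efficiency defense.
cartel_enforce = expected_error_cost(0.95, 100, 50, 5, enforce=True)
cartel_ignore  = expected_error_cost(0.95, 100, 50, 5, enforce=False)

# Unilateral discounting: usually procompetitive, costly to condemn.
discount_enforce = expected_error_cost(0.10, 100, 50, 20, enforce=True)
discount_ignore  = expected_error_cost(0.10, 100, 50, 20, enforce=False)

print(cartel_enforce < cartel_ignore)      # prints True: enforce
print(discount_enforce > discount_ignore)  # prints True: forbear
```

The ranking flips between the two categories purely because of the assumed base rate of anticompetitive conduct, which is the core of the resource-allocation argument above.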

My article with Thom Lambert, arguing that the Supreme Court – but not the Obama Administration – has substantially adopted an error cost approach to antitrust enforcement, appears in the newly released September 2015 issue of the Journal of Competition Law and Economics.  To whet your appetite, I am providing the abstract:

In his seminal 1984 article, The Limits of Antitrust, Judge Frank Easterbrook proposed that courts and enforcers adopt a simple set of screening rules for application in antitrust cases, in order to minimize error and decision costs and thereby maximize antitrust’s social value. Over time, federal courts in general—and the U.S. Supreme Court in particular, under Chief Justice Roberts—have in substantial part adopted Easterbrook’s “limits of antitrust” approach, thereby helping to reduce costly antitrust uncertainty. Recently, however, antitrust enforcers in the Obama Administration (unlike their predecessors in the Reagan, Bush, and Clinton Administrations) have been less attuned to this approach, and have undertaken initiatives that reduce clarity and predictability in antitrust enforcement. Regardless of the cause of the diverging stances on the limits of antitrust, two things are clear. First, recent enforcement agency policies are severely at odds with the philosophy that informs Supreme Court antitrust jurisprudence. Second, if the agencies do not reverse course, acknowledge antitrust’s limits, and seek to optimize the law in light of those limits, consumers will suffer.

Let us hope that error cost considerations figure more prominently in antitrust enforcement under the next Administration.

As the organizer of this retrospective on Josh Wright’s tenure as FTC Commissioner, I have the (self-conferred) honor of closing out the symposium.

When Josh was confirmed I wrote that:

The FTC will benefit enormously from Josh’s expertise and his error cost approach to antitrust and consumer protection law will be a tremendous asset to the Commission — particularly as it delves further into the regulation of data and privacy. His work is rigorous, empirically grounded, and ever-mindful of the complexities of both business and regulation…. The Commissioners and staff at the FTC will surely… profit from his time there.

Whether others at the Commission have really learned from Josh is an open question, but there’s no doubt that Josh offered an enormous amount from which they could learn. As Tim Muris said, Josh “did not disappoint, having one of the most important and memorable tenures of any non-Chair” at the agency.

Within a month of his arrival at the Commission, in fact, Josh “laid down the cost-benefit-analysis gauntlet” in a little-noticed concurring statement regarding a proposed amendment to the Hart-Scott-Rodino Rules. The technical details of the proposed rule don’t matter for these purposes, but, as Josh noted in his statement, the situation intended to be avoided by the rule had never arisen:

The proposed rulemaking appears to be a solution in search of a problem. The Federal Register notice states that the proposed rules are necessary to prevent the FTC and DOJ from “expend[ing] scarce resources on hypothetical transactions.” Yet, I have not to date been presented with evidence that any of the over 68,000 transactions notified under the HSR rules have required Commission resources to be allocated to a truly hypothetical transaction.

What Josh asked for in his statement was not that the rule be scrapped, but simply that, before adopting the rule, the FTC weigh its costs and benefits.

As I noted at the time:

[I]t is the Commission’s responsibility to ensure that the rules it enacts will actually be beneficial (it is a consumer protection agency, after all). The staff, presumably, did a perfectly fine job writing the rule they were asked to write. Josh’s point is simply that it isn’t clear the rule should be adopted because it isn’t clear that the benefits of doing so would outweigh the costs.

As essentially everyone who has contributed to this symposium has noted, Josh was singularly focused on the rigorous application of the deceptively simple concept that the FTC should ensure that the benefits of any rule or enforcement action it adopts outweigh the costs. The rest, as they say, is commentary.

For Josh, this basic principle should permeate every aspect of the agency, and permeate the way it thinks about everything it does. Only an entirely new mindset can ensure that outcomes, from the most significant enforcement actions to the most trivial rule amendments, actually serve consumers.

While the FTC has a strong tradition of incorporating economic analysis in its antitrust decision-making, its record in using economics in other areas is decidedly mixed, as Berin points out. But even in competition policy, the Commission frequently uses economics — but it’s not clear it entirely understands economics. The approach that others have lauded Josh for is powerful, but it’s also subtle.

Inherent limitations on anyone’s knowledge about the future of technology, business, and social norms counsel skepticism when regulators attempt to predict whether any given business conduct will, on net, improve or harm consumer welfare. In fact, a host of factors suggests that even the best-intentioned regulators tend toward overconfidence and the erroneous condemnation of novel conduct that benefits consumers in ways that are difficult for regulators to understand. Coase’s famous admonition in a 1972 paper has been quoted here before (frequently), but bears quoting again:

If an economist finds something – a business practice of one sort or another – that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be very large, and the reliance on a monopoly explanation, frequent.

Simply “knowing” economics, and knowing that it is important to antitrust enforcement, aren’t enough. Reliance on economic formulae and theoretical models alone — to say nothing of “evidence-based” analysis that doesn’t or can’t differentiate between probative and prejudicial facts — doesn’t resolve the key limitations on regulatory decisionmaking that threaten consumer welfare, particularly when it comes to the modern, innovative economy.

As Josh and I have written:

[O]ur theoretical knowledge cannot yet confidently predict the direction of the impact of additional product market competition on innovation, much less the magnitude. Additionally, the multi-dimensional nature of competition implies that the magnitude of these impacts will be important as innovation and other forms of competition will frequently be inversely correlated as they relate to consumer welfare. Thus, weighing the magnitudes of opposing effects will be essential to most policy decisions relating to innovation. Again, at this stage, economic theory does not provide a reliable basis for predicting the conditions under which welfare gains associated with greater product market competition resulting from some regulatory intervention will outweigh losses associated with reduced innovation.

* * *

In sum, the theoretical and empirical literature reveals an undeniably complex interaction between product market competition, patent rules, innovation, and consumer welfare. While these complexities are well understood, in our view, their implications for the debate about the appropriate scale and form of regulation of innovation are not.

Along the most important dimensions, while our knowledge has expanded since 1972, the problem has not disappeared — and it may only have magnified. As Tim Muris noted in 2005,

[A] visitor from Mars who reads only the mathematical IO literature could mistakenly conclude that the U.S. economy is rife with monopoly power…. [Meanwhile, Section 2’s] history has mostly been one of mistaken enforcement.

It may not sound like much, but what is needed, what Josh brought to the agency, and what turns out to be absolutely essential to getting it right, is unflagging awareness of and attention to the institutional, political and microeconomic relationships that shape regulatory institutions and regulatory outcomes.

Regulators must do their best to constantly grapple with uncertainty, problems of operationalizing useful theory, and, perhaps most important, the social losses associated with error costs. It is not (just) technicians that the FTC needs; it’s regulators imbued with the “Economic Way of Thinking.” In short, what is needed, and what Josh brought to the Commission, is humility — the belief that, as Coase also wrote, sometimes the best answer is to “do nothing at all.”

The technocratic model of regulation is inconsistent with the regulatory humility required in the face of fast-changing, unexpected — and immeasurably valuable — technological advance. As Virginia Postrel warns in The Future and Its Enemies:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation…. By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

For Josh, the first JD/Econ PhD appointed to the FTC,

economics provides a framework to organize the way I think about issues beyond analyzing the competitive effects in a particular case, including, for example, rulemaking, the various policy issues facing the Commission, and how I weigh evidence relative to the burdens of proof and production. Almost all the decisions I make as a Commissioner are made through the lens of economics and marginal analysis because that is the way I have been taught to think.

A representative example will serve to illuminate the distinction between merely using economics and evidence and understanding them — and their limitations.

In his Nielsen/Arbitron dissent Josh wrote:

The Commission thus challenges the proposed transaction based upon what must be acknowledged as a novel theory—that is, that the merger will substantially lessen competition in a market that does not today exist.

[W]e… do not know how the market will evolve, what other potential competitors might exist, and whether and to what extent these competitors might impose competitive constraints upon the parties.

Josh’s straightforward statement of the basis for restraint stands in marked contrast to the majority’s decision to impose antitrust-based limits on economic activity that hasn’t even yet been contemplated. Such conduct is directly at odds with a sensible, evidence-based approach to enforcement, and the economic problems with it are considerable, as Josh also notes:

[I]t is an exceedingly difficult task to predict the competitive effects of a transaction where there is insufficient evidence to reliably answer the[] basic questions upon which proper merger analysis is based.

When the Commission’s antitrust analysis comes unmoored from such fact-based inquiry, tethered tightly to robust economic theory, there is a more significant risk that non-economic considerations, intuition, and policy preferences influence the outcome of cases.

Compare in this regard Josh’s words about Nielsen with Deborah Feinstein’s defense of the majority from such charges:

The Commission based its decision not on crystal-ball gazing about what might happen, but on evidence from the merging firms about what they were doing and from customers about their expectations of those development plans. From this fact-based analysis, the Commission concluded that each company could be considered a likely future entrant, and that the elimination of the future offering of one would likely result in a lessening of competition.

Instead of requiring rigorous economic analysis of the facts, couched in an acute awareness of our necessary ignorance about the future, for Feinstein the FTC fulfilled its obligation in Nielsen by considering the “facts” alone (not economic evidence, mind you, but customer statements and expressions of intent by the parties) and then, at best, casually applying to them the simplistic, outdated structural presumption – the conclusion that increased concentration would lead inexorably to anticompetitive harm. Her implicit claim is that all the Commission needed to know about the future was what the parties thought about what they were doing and what (hardly disinterested) customers thought they were doing. This shouldn’t be nearly enough.

Worst of all, Nielsen was “decided” with a consent order. As Josh wrote, strongly reflecting the essential awareness of the broader institutional environment that he brought to the Commission:

[w]here the Commission has endorsed by way of consent a willingness to challenge transactions where it might not be able to meet its burden of proving harm to competition, and which therefore at best are competitively innocuous, the Commission’s actions may alter private parties’ behavior in a manner that does not enhance consumer welfare.

Obviously in this regard his successful effort to get the Commission to adopt a UMC enforcement policy statement is a most welcome development.

In short, Josh is to be applauded not because he brought economics to the Commission, but because he brought the economic way of thinking. Such a thing is entirely too rare in the modern administrative state. Josh’s tenure at the FTC was relatively short, but he used every moment of it to assiduously advance his singular, and essential, mission. And, to paraphrase the last line of the movie The Right Stuff (it helps to have the rousing film score playing in the background as you read this): “for a brief moment, [Josh Wright] became the greatest [regulator] anyone had ever seen.”

I would like to extend my thanks to everyone who participated in this symposium. The contributions here will stand as a fitting and lasting tribute to Josh and his legacy at the Commission. And, of course, I’d also like to thank Josh for a tenure at the FTC very much worth honoring.

The Wall Street Journal reported yesterday that the FTC Bureau of Competition staff report to the commissioners in the Google antitrust investigation recommended that the Commission approve an antitrust suit against the company.

While this is excellent fodder for a few hours of Twitter hysteria, it takes more than 140 characters to delve into the nuances of a 20-month federal investigation. And the bottom line is, frankly, pretty ho-hum.

As I said recently,

One of life’s unfortunate certainties, as predictable as death and taxes, is this: regulators regulate.

The Bureau of Competition staff is made up of professional lawyers — many of them litigators, whose existence is predicated on there being actual, you know, litigation. If you believe in human fallibility at all, you have to expect that, when they err, FTC staff errs on the side of too much, rather than too little, enforcement.

So is it shocking that the FTC staff might recommend that the Commission undertake what would undoubtedly have been one of the agency’s most significant antitrust cases? Hardly.

Nor is it surprising that the commissioners might not always agree with staff. In fact, staff recommendations are ignored all the time, for better or worse. Here are just a few examples: the R.J. Reynolds/Brown & Williamson merger, POM Wonderful, the Home Shopping Network/QVC merger, cigarette advertising. No doubt there are many, many more.

Regardless, it also bears pointing out that the staff did not recommend the FTC bring suit on the central issue of search bias “because of the strong procompetitive justifications Google has set forth”:

Complainants allege that Google’s conduct is anticompetitive because it forecloses alternative search platforms that might operate to constrain Google’s dominance in search and search advertising. Although it is a close call, we do not recommend that the Commission issue a complaint against Google for this conduct.

But this caveat is enormous. To report this as the FTC staff recommending a case is seriously misleading. Here they are forbearing from bringing 99% of the case against Google, and recommending suit on the marginal 1% issues. It would be more accurate to say, “FTC staff recommends no case against Google, except on a couple of minor issues which will be immediately settled.”

And in fact it was on just these minor issues that Google agreed to voluntary commitments to curtail some conduct when the FTC announced it was not bringing suit against the company.

The Wall Street Journal quotes some other language from the staff report bolstering the conclusion that this is a complex market, the conduct at issue was ambiguous (at worst), and supporting the central recommendation not to sue:

We are faced with a set of facts that can most plausibly be accounted for by a narrative of mixed motives: one in which Google’s course of conduct was premised on its desire to innovate and to produce a high quality search product in the face of competition, blended with the desire to direct users to its own vertical offerings (instead of those of rivals) so as to increase its own revenues. Indeed, the evidence paints a complex portrait of a company working toward an overall goal of maintaining its market share by providing the best user experience, while simultaneously engaging in tactics that resulted in harm to many vertical competitors, and likely helped to entrench Google’s monopoly power over search and search advertising.

On a global level, the record will permit Google to show substantial innovation, intense competition from Microsoft and others, and speculative long-run harm.

This is exactly when you want antitrust enforcers to forbear. Predicting anticompetitive effects is difficult, and conduct that might look problematic is often indistinguishable from vigorous competition.

That the staff concluded that some of what Google was doing “harmed competitors” isn’t surprising — there were lots of competitors parading through the FTC on a daily basis claiming Google harmed them. But antitrust is about protecting consumers, not competitors. Far more important is the staff finding of “substantial innovation, intense competition from Microsoft and others, and speculative long-run harm.”

Indeed, the combination of “substantial innovation,” “intense competition from Microsoft and others,” and “Google’s strong procompetitive justifications” suggests a well-functioning market. It similarly suggests an antitrust case that the FTC would likely have lost. The FTC’s litigators should probably be grateful that the commissioners had the good sense to vote to close the investigation.

Meanwhile, the Wall Street Journal also reports that the FTC’s Bureau of Economics simultaneously recommended that the Commission not bring suit at all against Google. It is not uncommon for the lawyers and the economists at the Commission to disagree. And as a general (though not inviolable) rule, we should be happy when the Commissioners side with the economists.

While the press, professional Google critics, and the company’s competitors may want to make this sound like a big deal, the actual facts of the case and a pretty simple error-cost analysis suggest that not bringing a case was the correct course.

Does anyone really still believe that the threat of antitrust enforcement doesn’t lead to undesirable caution on the part of potential defendants?

Whatever you may think of the merits of the Google/ITA merger (and obviously I suspect the merits cut in favor of the merger), there can be no doubt that restraining Google’s (and other large companies’) ability to acquire other firms will hurt those other firms (in ITA’s case, for example, they stand to lose $700 million).  There should also be no doubt that this restraint will exceed whatever level even supporters of aggressive antitrust enforcement would regard as efficient.  And the follow-on effect from that will be less venture funding and thus less innovation.  Perhaps we have too much innovation in the economy right now?

Reuters fleshes out the point in an article titled, “Google’s M&A Machine Stuck in Antitrust Limbo.”  That about sums it up.

Here are the most salient bits:

Not long ago, selling to Google offered one of the best alternatives to an initial public offering for up-and-coming technology startups. . . . But Google’s M&A machine looks to be gumming up.

* * *

The problem is antitrust limbo.

* * *

Ironically that may make it less appealing to sell to Google. The company has announced just $200 million of acquisitions in 2011 — the smallest sum since the panic of 2008.

* * *

The ITA acquisition has sent a warning signal to the venture capital and startup communities. Patents may still be available. But no fast-moving entrepreneur wants to get stuck the way ITA has since agreeing to be sold last July 1.

* * *

For a small, growing business the risks are huge.

* * *

That doesn’t exclude Google as an exit option. But the regulatory risk needs to be hedged with a huge breakup fee. . . . With Google’s rising antitrust issues, however, the fee needs to be as big as the purchase price.

Like Mike, we also have a short article in the latest issue of the CPI Antitrust Chronicle.  Also available on SSRN, for those without a CPI subscription.

Here’s our stab at an abstract:

There are very few industries that can attract the attention of Congress, multiple federal and state agencies, consumer groups, economists, antitrust lawyers, the business community, farmers, ranchers, and academics as the agriculture workshops have.  Of course, with intense interest from stakeholders comes intense pressure from potential winners and losers in the political process, heated disagreement over how gains from trade should be distributed among various stakeholders, and certainly a variety of competing views over the correct approach to competition policy in agriculture markets.  These pressures have the potential to distract antitrust analysis from its core mission: protecting competition and consumer welfare.  While imperfect, the economic approach to antitrust that has generated remarkable improvements in outcomes over the last fifty years has rejected simplistic and misleading notions that antitrust is meant to protect “small dealers and worthy men” or to fulfill non-economic objectives; that market concentration is a predictor of market performance; or that competition policy and intellectual property cannot peacefully co-exist.  Unfortunately, in the run-up to and during the workshops much of the policy rhetoric encouraged adopting these outdated antitrust approaches, especially ones that would favor one group of stakeholders over another rather than protecting the competitive process. In this essay, we argue that a first principles approach to antitrust analysis is required to guarantee the benefits of competition in the agricultural sector, and discuss three fundamental principles of modern antitrust that, at times, appear to be given short shrift in the recent debate.

We have just uploaded to SSRN a draft of our article assessing the economics and the law of the antitrust case directed at the core of Google’s business:  Its search and search advertising platform.  The article is Google and the Limits of Antitrust: The Case Against the Antitrust Case Against Google.  This is really the first systematic attempt to address both the amorphous and the concrete (as in the TradeComet complaint) claims about Google’s business and its legal and economic importance in its primary market.  It’s giving nothing away to say we’re skeptical of the claims, and, moreover, that an approach to the issues appropriately sensitive to the potential error costs would be extremely deferential.  As we discuss, the economics of search and search advertising are indeterminate and subtle, and the risk of error is high (claims of network effects, for example, are greatly exaggerated, and the pro-competitive justifications for Google’s use of a quality score are legion, despite frequent claims to the contrary).  We welcome comments on the article, and we look forward to the debate.  The abstract is here:

The antitrust landscape has changed dramatically in the last decade.  Within the last two years alone, the United States Department of Justice has held hearings on the appropriate scope of Section 2, issued a comprehensive Report, and then repudiated it; and the European Commission has risen as an aggressive leader in single firm conduct enforcement by bringing abuse of dominance actions and assessing heavy fines against firms including Qualcomm, Intel, and Microsoft.  In the United States, two of the most significant characteristics of the “new” antitrust approach have been a more intense focus on innovative companies in high-tech industries and a weakening of longstanding concerns that erroneous antitrust interventions will hinder economic growth.  But this focus is dangerous, and these concerns should not be dismissed so lightly.  In this article we offer a comprehensive cautionary tale in the context of a detailed factual, legal and economic analysis of the next Microsoft: the theoretical, but perhaps imminent, enforcement action against Google.  Close scrutiny of the complex economics of Google’s technology, market and business practices reveals a range of real but subtle, pro-competitive explanations for features that have been held out instead as anticompetitive.  Application of the relevant case law then reveals a set of concerns where economic complexity and ambiguity, coupled with an insufficiently-deferential approach to innovative technology and pricing practices in the most relevant precedent (the D.C. Circuit’s decision in Microsoft), portend a potentially erroneous—and costly—result.  
Our analysis, by contrast, embraces the cautious and evidence-based approach to uncertainty, complexity and dynamic innovation contained within the well-established “error cost framework.”  As we demonstrate, while there is an abundance of error-cost concern in the Supreme Court precedent, there is a real risk that the current, aggressive approach to antitrust error, coupled with the uncertain economics of Google’s innovative conduct, will nevertheless yield a costly intervention.  The point is not that we know that Google’s conduct is procompetitive, but rather that the very uncertainty surrounding it counsels caution, not aggression.