
As the organizer of this retrospective on Josh Wright’s tenure as FTC Commissioner, I have the (self-conferred) honor of closing out the symposium.

When Josh was confirmed I wrote that:

The FTC will benefit enormously from Josh’s expertise and his error cost approach to antitrust and consumer protection law will be a tremendous asset to the Commission — particularly as it delves further into the regulation of data and privacy. His work is rigorous, empirically grounded, and ever-mindful of the complexities of both business and regulation…. The Commissioners and staff at the FTC will surely… profit from his time there.

Whether others at the Commission have really learned from Josh is an open question, but there’s no doubt that Josh offered an enormous amount from which they could learn. As Tim Muris said, Josh “did not disappoint, having one of the most important and memorable tenures of any non-Chair” at the agency.

Within a month of his arrival at the Commission, in fact, Josh “laid down the cost-benefit-analysis gauntlet” in a little-noticed concurring statement regarding a proposed amendment to the Hart-Scott-Rodino Rules. The technical details of the proposed rule don’t matter for these purposes, but, as Josh noted in his statement, the situation intended to be avoided by the rule had never arisen:

The proposed rulemaking appears to be a solution in search of a problem. The Federal Register notice states that the proposed rules are necessary to prevent the FTC and DOJ from “expend[ing] scarce resources on hypothetical transactions.” Yet, I have not to date been presented with evidence that any of the over 68,000 transactions notified under the HSR rules have required Commission resources to be allocated to a truly hypothetical transaction.

What Josh asked for in his statement was not that the rule be scrapped, but simply that, before adopting the rule, the FTC weigh its costs and benefits.

As I noted at the time:

[I]t is the Commission’s responsibility to ensure that the rules it enacts will actually be beneficial (it is a consumer protection agency, after all). The staff, presumably, did a perfectly fine job writing the rule they were asked to write. Josh’s point is simply that it isn’t clear the rule should be adopted because it isn’t clear that the benefits of doing so would outweigh the costs.

As essentially everyone who has contributed to this symposium has noted, Josh was singularly focused on the rigorous application of the deceptively simple concept that the FTC should ensure that the benefits of any rule or enforcement action it adopts outweigh the costs. The rest, as they say, is commentary.

For Josh, this basic principle should permeate every aspect of the agency, and permeate the way it thinks about everything it does. Only an entirely new mindset can ensure that outcomes, from the most significant enforcement actions to the most trivial rule amendments, actually serve consumers.

While the FTC has a strong tradition of incorporating economic analysis into its antitrust decision-making, its record in using economics in other areas is decidedly mixed, as Berin points out. And even in competition policy, where the Commission frequently uses economics, it’s not clear that the Commission entirely understands economics. The approach that others have lauded Josh for is powerful, but it’s also subtle.

Inherent limitations on anyone’s knowledge about the future of technology, business, and social norms counsel skepticism as regulators attempt to predict whether any given business conduct will, on net, improve or harm consumer welfare. In fact, a host of factors suggests that even the best-intentioned regulators tend toward overconfidence and the erroneous condemnation of novel conduct that benefits consumers in ways that are difficult for regulators to understand. Coase’s famous admonition in a 1972 paper has been quoted here before (frequently), but bears quoting again:

If an economist finds something – a business practice of one sort or another – that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be very large, and the reliance on a monopoly explanation, frequent.

Simply “knowing” economics, and knowing that it is important to antitrust enforcement, aren’t enough. Reliance on economic formulae and theoretical models alone — to say nothing of “evidence-based” analysis that doesn’t or can’t differentiate between probative and prejudicial facts — doesn’t resolve the key limitations on regulatory decisionmaking that threaten consumer welfare, particularly when it comes to the modern, innovative economy.

As Josh and I have written:

[O]ur theoretical knowledge cannot yet confidently predict the direction of the impact of additional product market competition on innovation, much less the magnitude. Additionally, the multi-dimensional nature of competition implies that the magnitude of these impacts will be important as innovation and other forms of competition will frequently be inversely correlated as they relate to consumer welfare. Thus, weighing the magnitudes of opposing effects will be essential to most policy decisions relating to innovation. Again, at this stage, economic theory does not provide a reliable basis for predicting the conditions under which welfare gains associated with greater product market competition resulting from some regulatory intervention will outweigh losses associated with reduced innovation.

* * *

In sum, the theoretical and empirical literature reveals an undeniably complex interaction between product market competition, patent rules, innovation, and consumer welfare. While these complexities are well understood, in our view, their implications for the debate about the appropriate scale and form of regulation of innovation are not.

Along the most important dimensions, while our knowledge has expanded since 1972, the problem has not disappeared — and it may only have magnified. As Tim Muris noted in 2005,

[A] visitor from Mars who reads only the mathematical IO literature could mistakenly conclude that the U.S. economy is rife with monopoly power…. [Meanwhile, Section 2’s] history has mostly been one of mistaken enforcement.

It may not sound like much, but what is needed, what Josh brought to the agency, and what turns out to be absolutely essential to getting it right, is unflagging awareness of and attention to the institutional, political and microeconomic relationships that shape regulatory institutions and regulatory outcomes.

Regulators must do their best to constantly grapple with uncertainty, problems of operationalizing useful theory, and, perhaps most important, the social losses associated with error costs. It is not (just) technicians that the FTC needs; it’s regulators imbued with the “Economic Way of Thinking.” In short, what is needed, and what Josh brought to the Commission, is humility — the belief that, as Coase also wrote, sometimes the best answer is to “do nothing at all.”

The technocratic model of regulation is inconsistent with the regulatory humility required in the face of fast-changing, unexpected — and immeasurably valuable — technological advance. As Virginia Postrel warns in The Future and Its Enemies:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation…. By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

For Josh, the first JD/Econ PhD appointed to the FTC,

economics provides a framework to organize the way I think about issues beyond analyzing the competitive effects in a particular case, including, for example, rulemaking, the various policy issues facing the Commission, and how I weigh evidence relative to the burdens of proof and production. Almost all the decisions I make as a Commissioner are made through the lens of economics and marginal analysis because that is the way I have been taught to think.

A representative example will serve to illuminate the distinction between merely using economics and evidence and understanding them — and their limitations.

In his Nielsen/Arbitron dissent, Josh wrote:

The Commission thus challenges the proposed transaction based upon what must be acknowledged as a novel theory—that is, that the merger will substantially lessen competition in a market that does not today exist.

[W]e… do not know how the market will evolve, what other potential competitors might exist, and whether and to what extent these competitors might impose competitive constraints upon the parties.

Josh’s straightforward statement of the basis for restraint stands in marked contrast to the majority’s decision to impose antitrust-based limits on economic activity that hasn’t yet even been contemplated. Such an approach is directly at odds with sensible, evidence-based enforcement, and the economic problems with it are considerable, as Josh also notes:

[I]t is an exceedingly difficult task to predict the competitive effects of a transaction where there is insufficient evidence to reliably answer the[] basic questions upon which proper merger analysis is based.

When the Commission’s antitrust analysis comes unmoored from such fact-based inquiry, tethered tightly to robust economic theory, there is a more significant risk that non-economic considerations, intuition, and policy preferences influence the outcome of cases.

Compare in this regard Josh’s words about Nielsen with Deborah Feinstein’s defense of the majority from such charges:

The Commission based its decision not on crystal-ball gazing about what might happen, but on evidence from the merging firms about what they were doing and from customers about their expectations of those development plans. From this fact-based analysis, the Commission concluded that each company could be considered a likely future entrant, and that the elimination of the future offering of one would likely result in a lessening of competition.

Instead of requiring rigorous economic analysis of the facts, couched in an acute awareness of our necessary ignorance about the future, for Feinstein the FTC fulfilled its obligation in Nielsen by considering the “facts” alone (not economic evidence, mind you, but customer statements and expressions of intent by the parties) and then, at best, casually applying to them the simplistic, outdated structural presumption – the conclusion that increased concentration would lead inexorably to anticompetitive harm. Her implicit claim is that all the Commission needed to know about the future was what the parties thought about what they were doing and what (hardly disinterested) customers thought they were doing. This shouldn’t be nearly enough.

Worst of all, Nielsen was “decided” with a consent order. As Josh wrote, strongly reflecting the essential awareness of the broader institutional environment that he brought to the Commission:

[w]here the Commission has endorsed by way of consent a willingness to challenge transactions where it might not be able to meet its burden of proving harm to competition, and which therefore at best are competitively innocuous, the Commission’s actions may alter private parties’ behavior in a manner that does not enhance consumer welfare.

Obviously in this regard his successful effort to get the Commission to adopt a UMC enforcement policy statement is a most welcome development.

In short, Josh is to be applauded not because he brought economics to the Commission, but because he brought the economic way of thinking. Such a thing is entirely too rare in the modern administrative state. Josh’s tenure at the FTC was relatively short, but he used every moment of it to assiduously advance his singular, and essential, mission. And, to paraphrase the last line of the movie The Right Stuff (it helps to have the rousing film score playing in the background as you read this): “for a brief moment, [Josh Wright] became the greatest [regulator] anyone had ever seen.”

I would like to extend my thanks to everyone who participated in this symposium. The contributions here will stand as a fitting and lasting tribute to Josh and his legacy at the Commission. And, of course, I’d also like to thank Josh for a tenure at the FTC very much worth honoring.

Alden Abbott and I recently co-authored an article, forthcoming in the Journal of Competition Law and Economics, in which we examined the degree to which the Supreme Court and the federal enforcement agencies have recognized the inherent limits of antitrust law. We concluded that the Roberts Court has admirably acknowledged those limits and has for the most part crafted liability rules that will maximize antitrust’s social value. The enforcement agencies, by contrast, have largely ignored antitrust’s intrinsic limits. In a number of areas, they have sought to expand antitrust’s reach in ways likely to reduce consumer welfare.

The bright spot in federal antitrust enforcement in the last few years has been Josh Wright. Time and again, he has bucked the antitrust establishment, reminding the mandarins that their goal should not be to stop every instance of anticompetitive behavior but instead to optimize antitrust by minimizing the sum of error costs (from both false negatives and false positives) and decision costs. As Judge Easterbrook famously explained, and as Josh Wright has emphasized more than anyone I know, inevitable mistakes (error costs) and heavy information requirements (decision costs) constrain what antitrust can do. Every liability rule, every defense, every immunity doctrine should be crafted with those limits in mind.

Josh will no doubt be remembered, and justifiably so, for spearheading the effort to provide guidance on how the Federal Trade Commission will exercise its amorphous authority to police “unfair methods of competition.” Several others have lauded Josh’s fine contribution on that matter (as have I), so I won’t gild that lily here. Instead, let me briefly highlight two other areas in which Josh has properly pushed for a recognition of antitrust’s inherent limits.

Vertical Restraints

Vertical restraints—both intrabrand restraints like resale price maintenance (RPM) and interbrand restraints like exclusive dealing—are a competitive mixed bag. Under certain conditions, such restraints may reduce overall market output, causing anticompetitive harm. Under other, more commonly occurring conditions, vertical restraints may enhance market output. Empirical evidence suggests that most vertical restraints are output-enhancing rather than output-reducing. Enforcers taking an optimizing, limits of antitrust approach will therefore exercise caution in condemning or discouraging vertical restraints.

That’s exactly what Josh Wright has done. In an early post-Leegin RPM order predating Josh’s tenure, the FTC endorsed a liability rule that placed an inappropriately heavy burden on RPM defendants. Josh later laid the groundwork for correcting that mistake, advocating a much more evidence-based (and defendant-friendly) RPM rule. In the McWane case, the Commission condemned an exclusive dealing arrangement that had been in place for long enough to cause anticompetitive harm but hadn’t done so. Josh rightly called out the majority for elevating theoretical harm over actual market evidence. (Adopting a highly deferential stance, the Eleventh Circuit affirmed the Commission majority, but Josh was right to criticize the majority’s implicit hostility toward exclusive dealing.) In settling the Graco case, the Commission again went beyond the evidence, requiring the defendant to cease exclusive dealing and to stop giving loyalty rebates even though there was no evidence that either sort of vertical restraint contributed to the anticompetitive harm giving rise to the action at issue. Josh rightly took the Commission to task for reflexively treating vertical restraints as suspect when they’re usually procompetitive and had an obvious procompetitive justification (avoidance of interbrand free-riding) in the case at hand.

Horizontal Mergers

Horizontal mergers, like vertical restraints, are competitive mixed bags. Any particular merger of competitors may impose some consumer harm by reducing the competition facing the merged firm. The same merger, though, may provide some consumer benefit by lowering the merged firm’s costs and thereby allowing it to compete more vigorously (most notably, by lowering its prices). A merger policy committed to minimizing the consumer welfare losses from unwarranted condemnations of net beneficial mergers and improper acquittals of net harmful ones would afford equal treatment to claims of anticompetitive harm and procompetitive benefit, requiring each to be established by the same quantum of proof.

The federal enforcement agencies’ new Horizontal Merger Guidelines, however, may put a thumb on the scale, tilting the balance toward a finding of anticompetitive harm. The Guidelines make it easier for the agencies to establish likely anticompetitive harm. Enforcers may now avoid defining a market if they point to adverse unilateral effects using the gross upward pricing pressure index (GUPPI). The merging parties, by contrast, bear a heavy burden when they seek to show that their contemplated merger will occasion efficiencies. They must: (1) prove that any claimed efficiencies are “merger-specific” (i.e., incapable of being achieved absent the merger); (2) “substantiate” asserted efficiencies; and (3) show that such efficiencies will result in the very markets in which the agencies have established likely anticompetitive effects.

In an important dissent (Ardagh), Josh observed that the agencies’ practice has evolved such that there are asymmetric burdens in establishing competitive effects, and he cautioned that this asymmetry will enhance error costs. (Geoff praised that dissent here.) In another dissent (Family Dollar/Dollar Tree), Josh acknowledged some potential problems with the promising but empirically unverified GUPPI, and he wisely advocated the creation of safe harbors for mergers generating very low GUPPI scores. (I praised that dissent here.)

I could go on and on, but these examples suffice to illustrate what has been, in my opinion, Josh’s most important contribution as an FTC commissioner: his constant effort to strengthen antitrust’s effectiveness by acknowledging its inevitable and inexorable limits. Coming on the heels of the FTC’s and DOJ’s rejection of the Section 2 Report—a document that was highly attuned to antitrust’s limits—Josh was just what antitrust needed.

by Jonathan Jacobson, partner & Ryan Maddock, associate, Wilson Sonsini Goodrich & Rosati

Excluding the much-talked-about Section 5 policy statement, Commissioner Wright’s tenure at the FTC was highlighted by his numerous dissents. If there is one unifying theme in those dissents it is his insistence that rigorous economic analysis be at the very core of all the Commission’s decisions. This theme was perhaps most evident in his decision to dissent in the Ardagh/Saint-Gobain and Sysco/US Foods mergers, two cases that presented interesting questions about how the Commission and courts should balance a merger’s likely anticompetitive effects with its procompetitive efficiencies.

In April of 2014 the Commission announced that it had accepted a consent decree in Ardagh/Saint-Gobain that remedied its competitive concerns related to the merger of the second and third largest firms in the market for “glass containers sold to beer and wine distributors in the United States.” The majority, which consisted of Commissioners Ramirez, Ohlhausen, and Brill, argued that the merger would lead to both coordinated and unilateral anticompetitive effects in the market and further stated that “the parties put forward insufficient evidence showing that the level of synergies that could be substantiated and verified would outweigh the clear evidence of consumer harm.” Commissioner Wright, who was the lone dissenter, strongly disagreed with the majority’s conclusions and found that the merger’s cognizable efficiencies were “up to six times greater than any likely unilateral price effect,” and thus the merger should have been approved without requiring a remedy.

Commissioner Wright also used his Ardagh dissent to discuss whether the merging parties and the Commission face asymmetric burdens of proof regarding competitive effects. Specifically, Commissioner Wright asked whether the “merging parties [must] overcome a greater burden of proof on efficiencies in practice than does the FTC to satisfy its prima facie burden of establishing anticompetitive effects?” Commissioner Wright stated that the Commission has acknowledged that in theory the burdens of proof should be uniform; however, he argued that the only way the majority could have found that the Ardagh/Saint-Gobain merger would generate almost no cognizable efficiencies is by applying asymmetric burdens. He explained that the majority’s approach “embraces probabilistic prediction, estimation, presumption, and simulation of anticompetitive effects on the one hand but requires efficiencies to be proven on the other.”

Commissioner Wright, who was joined by Commissioner Ohlhausen, also dissented from the Commission’s decision to challenge the Sysco/US Foods merger. While the Commissioners did not issue a formal dissent because of the FTC’s then-pending litigation, Commissioner Wright tweeted that he had “no reason to believe the proposed Sysco/US Foods transaction violated the Clayton Act.” The lack of a formal dissent makes it challenging to ascertain all of Commissioner Wright’s objections, but a reading of the Commission’s administrative complaint provides insight on his likely positions. For example, Commissioner Wright undoubtedly disagreed with the complaint’s treatment of the parties’ proffered efficiencies:

Extraordinary Merger-specific efficiencies are necessary to outweigh the Merger’s likely significant harm to competition in the relevant markets. Respondents cannot demonstrate cognizable efficiencies that would be sufficient to rebut the strong presumption and evidence that the Merger likely would substantially lessen competition in the relevant markets.

Commissioner Wright’s Ardagh dissent makes it clear that he does not believe that the balancing of anticompetitive effects and efficiencies should be an afterthought to the agency’s merger analysis, which is how the majority’s complaint appears to treat it. This case likely represents another instance where Commissioner Wright believed that the majority of commissioners applied asymmetric burdens of proof when balancing the merger’s competitive effects.

Commissioner Wright is not the first person to ask whether current merger analysis favors anticompetitive effects over efficiencies; however, that does not detract from the question’s importance. His views reflect a belief shared by others that antitrust policy should be based on an aggregate welfare standard, rather than the consumer welfare standard that the agencies and the courts have for the most part applied over the past few decades. In Commissioner Wright’s view, applying asymmetric burdens–which is functionally the same as discounting efficiencies–could harm both total welfare and consumers by increasing the chance that a procompetitive merger might be blocked. This stands in contrast to the majority view that a merger that raises prices requires efficiencies, specific to the merger, of a magnitude sufficient to defeat any increase in consumer prices–and that, because the efficiency information is in the hands of the proponents, shifting the burden to them is appropriate.

While his tenure at the FTC has come to an end, expect to continue to see Commissioner Wright at the front and center of this and many other important antitrust issues.

FTC Commissioner Josh Wright has some wise thoughts on how to handle a small GUPPI. I don’t mean the fish. Dissenting in part in the Commission’s disposition of the Family Dollar/Dollar Tree merger, Commissioner Wright calls for creating a safe harbor for mergers where the competitive concern is unilateral effects and the merger generates a low score on the “Gross Upward Pricing Pressure Index,” or “GUPPI.”

Before explaining why Wright is right on this one, some quick background on the GUPPI. In 2010, the DOJ and FTC revised their Horizontal Merger Guidelines to reflect better the actual practices the agencies follow in conducting pre-merger investigations. Perhaps the most notable new emphasis in the revised guidelines was a move away from market definition, the traditional starting point for merger analysis, and toward consideration of potentially adverse “unilateral” effects—i.e., anticompetitive harms that, unlike collusion or even non-collusive oligopolistic pricing, need not involve participation of any non-merging firms in the market. The primary unilateral effect emphasized by the new guidelines is that the merger may put “upward pricing pressure” on brand-differentiated but otherwise similar products sold by the merging firms. The guidelines maintain that when upward pricing pressure seems significant, it may be unnecessary to define the relevant market before concluding that an anticompetitive effect is likely.

The logic of upward pricing pressure is straightforward. Suppose five firms sell competing products (Products A-E) that, while largely substitutable, are differentiated by brand. Given the brand differentiation, some of the products are closer substitutes than others. If the closest substitute to Product A is Product B and vice-versa, then a merger between Producer A and Producer B may result in higher prices even if the remaining producers (C, D, and E) neither raise their prices nor reduce their output. The merged firm will know that if it raises the price of Product A, most of the lost sales will be diverted to Product B, which that firm also produces. Similarly, sales diverted from Product B will largely flow to Product A. Thus, the merged company, seeking to maximize its profits, may face pressure to raise the prices of Products A and/or B.

The GUPPI seeks to assess the likelihood, absent countervailing efficiencies, that the merged firm (e.g., Producer A combined with Producer B) would raise the price of one of its competing products (e.g., Product A), causing some of the lost sales on that product to be diverted to its substitute (e.g., Product B). The GUPPI on Product A would thus consist of:

(The Value of Sales Diverted to Product B) ÷ (Foregone Revenues on Lost Product A Sales)

The value of sales diverted to Product B, the numerator, is equal to the number of units diverted from Product A to Product B times the profit margin (price minus marginal cost) on Product B. The foregone revenues on lost Product A sales, the denominator, is equal to the number of lost Product A sales times the price of Product A. Thus, the fraction set forth above is equal to:

(Number of A Sales Diverted to B × Unit Margin on B) ÷ (Number of A Sales Lost × Price of A)
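To make the arithmetic concrete, the calculation just described can be sketched in a few lines of Python. The numbers below are purely hypothetical, chosen only to illustrate how the index is computed and how it might be compared against a safe-harbor threshold:

```python
def guppi_a(units_diverted_to_b, unit_margin_b, units_a_lost, price_a):
    """GUPPI on Product A: the value of sales diverted to Product B,
    divided by foregone revenues on lost Product A sales."""
    value_of_diverted_sales = units_diverted_to_b * unit_margin_b
    foregone_revenues = units_a_lost * price_a
    return value_of_diverted_sales / foregone_revenues

# Hypothetical illustration: a price increase on Product A loses 100 unit
# sales, 30 of which divert to Product B; B carries a $4 unit margin; A
# sells at $10. GUPPI = (30 * 4) / (100 * 10) = 0.12.
score = guppi_a(units_diverted_to_b=30, unit_margin_b=4.0,
                units_a_lost=100, price_a=10.0)
print(f"GUPPI on A: {score:.0%}")  # 12%, well above a 5% safe harbor
```

Note that any positive diversion combined with a positive margin yields a positive GUPPI, which is why the index alone, without some threshold, cannot separate benign mergers from troubling ones.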

The Guidelines do not specify how high the GUPPI for a particular product must be before competitive concerns are raised, but they do suggest that at some point, the GUPPI is so small that adverse unilateral effects are unlikely. (“If the value of diverted sales is proportionately small, significant unilateral price effects are unlikely.”) Consistent with this observation, DOJ’s Antitrust Division has concluded that a GUPPI of less than 5% will not give rise to a merger challenge.

Commissioner Wright has split with his fellow commissioners over whether the FTC should similarly adopt a safe harbor for horizontal mergers where the adverse competitive concern is unilateral effects and the GUPPIs are less than 5%. Of the 330 markets in which the Commission is requiring divestiture of stores, 27 involve GUPPIs of less than 5%. Commissioner Wright’s position is that the combinations in those markets should be deemed to fall within a safe harbor. At the very least, he says, there should be some safe harbor for very small GUPPIs, even if it kicks in somewhere below the 5% level. The Commission has taken the position that there should be no safe harbor for mergers where the competitive concern is unilateral effects, no matter how low the GUPPI. Instead, the Commission majority says, GUPPI is just a starting point; once the GUPPIs are calculated, each market should be assessed in light of qualitative factors, and a gestalt-like, “all things considered” determination should be made.

The Commission majority purports to have taken this approach in the Family Dollar/Dollar Tree case. It claims that having used GUPPI to identify some markets that were presumptively troubling (markets where GUPPIs were above a certain level) and others that were presumptively not troubling (low-GUPPI markets), it went back and considered qualitative evidence for each, allowing the presumption to be rebutted where appropriate. As Commissioner Wright observes, though, the actual outcome of this purported process is curious: almost none of the “presumptively anticompetitive” markets were cleared based on qualitative evidence, whereas 27 of the “presumptively competitive” markets were slated for a divestiture despite the low GUPPI. In practice, the Commission seems to be using high GUPPIs to condemn unilateral effects mergers, while not allowing low GUPPIs to acquit them. Wright, by contrast, contends that a low-enough GUPPI should be sufficient to acquit a merger where the only plausible competitive concern is adverse unilateral effects.

He’s right on this, for at least five reasons.

  1. Virtually every merger involves a positive GUPPI. As long as any sales would be diverted from one merging firm to the other and the firms are pricing above cost (so that there is some profit margin on their products), a merger will involve a positive GUPPI. (Recall that the numerator in the GUPPI is “number of diverted sales * profit margin on the product to which sales are diverted.”) If qualitative evidence must be considered and a gestalt-like decision made in even low-GUPPI cases, then that’s the approach that will always be taken and GUPPI data will be essentially irrelevant.
  2. Calculating GUPPIs is hard. Figuring the GUPPI requires the agencies to make some difficult determinations. Calculating the “diversion ratio” (the percentage of lost A sales that are diverted to B when the price of A is raised) requires determinations of A’s “own-price elasticity of demand” as well as the “cross-price elasticity of demand” between A and B. Calculating the profit margin on B requires determining B’s marginal cost. Assessing elasticity of demand and marginal cost is notoriously difficult. This difficulty matters here for a couple of reasons:
    • First, why go through the difficult task of calculating GUPPIs if they won’t simplify the process of evaluating a merger? Under the Commission’s purported approach, once GUPPI is calculated, enforcers still have to consider all the other evidence and make an “all things considered” judgment. A better approach would be to cut off the additional analysis if the GUPPI is sufficiently small.
    • Second, given the difficulty of assessing marginal cost (which is necessary to determine the profit margin on the product to which sales are diverted), enforcers are likely to use a proxy, and the most commonly used proxy for marginal cost is average variable cost (i.e., the total non-fixed costs of producing the products at issue divided by the number of units produced). Average variable cost, though, tends to be smaller than marginal cost over the relevant range of output, which will cause the profit margin (price – “marginal” cost) on the product to which sales are diverted to appear higher than it actually is. And that will tend to overstate the GUPPI. Thus, at some point, a positive but low GUPPI should be deemed insignificant.
  3. The GUPPI is biased toward an indication of anticompetitive effect. GUPPI attempts to assess gross upward pricing pressure. It takes no account of factors that tend to prevent prices from rising. In particular, it ignores entry and repositioning by other product-differentiated firms, factors that constrain the merged firm’s ability to raise prices. It also ignores merger-induced efficiencies, which tend to put downward pressure on the merged firm’s prices. (Granted, the merger guidelines call for these factors to be considered eventually, but the factors are generally subject to higher proof standards. Efficiencies, in particular, are pretty difficult to establish under the guidelines.) The upshot is that the GUPPI is inherently biased toward an indication of anticompetitive harm. A safe harbor for mergers involving low GUPPIs would help counterbalance this built-in bias.
  4. Divergence from DOJ’s approach will create an arbitrary result. The FTC and DOJ’s Antitrust Division share responsibility for assessing proposed mergers. Having the two enforcement agencies use different standards in their evaluations injects a measure of arbitrariness into the law. In the interest of consistency, predictability, and other basic rule of law values, the agencies should get on the same page. (And, for reasons set forth above, DOJ’s approach is the better one.)
  5. A safe harbor is consistent with the Supreme Court’s decision-theoretic antitrust jurisprudence. In recent years, the Supreme Court has generally crafted antitrust rules to optimize the costs of errors and of making liability judgments (or, put differently, to “minimize the sum of error and decision costs”). On a number of occasions, the Court has explicitly observed that it is better to adopt a rule that will allow the occasional false acquittal if doing so will prevent greater costs from false convictions and administration. The Brooke Group rule that there can be no predatory pricing liability absent below-cost pricing, for example, is expressly not premised on the belief that low, but above-cost, pricing can never be anticompetitive; rather, the rule is justified on the ground that the false negatives it allows are less costly than the false positives and administrative difficulties a more “theoretically perfect” rule would generate. Indeed, the Supreme Court’s antitrust jurisprudence seems to have wholeheartedly endorsed Voltaire’s prudent aphorism, “The perfect is the enemy of the good.” It is thus no answer for the Commission to observe that adverse unilateral effects can sometimes occur when a combination involves a low (<5%) GUPPI. Low but above-cost pricing can sometimes be anticompetitive, but Brooke Group’s safe harbor is sensible and representative of the approach the Supreme Court thinks antitrust should take. The FTC should get on board.
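To make the mechanics in points 1 and 2 concrete, here is a minimal sketch of the GUPPI arithmetic using invented numbers. The `guppi` helper and all of the figures are illustrative assumptions for exposition only, not drawn from any actual matter:

```python
# Hypothetical illustration of the GUPPI calculation discussed above.
# All figures are invented for illustration; nothing here comes from an actual case.

def guppi(diversion_ratio, price_b, cost_b, price_a):
    """Gross Upward Pricing Pressure Index on product A from merging with B.

    GUPPI_A = diversion ratio (A -> B) * B's percentage margin * (P_B / P_A),
    i.e., the value of sales diverted to B per dollar of A's lost sales.
    """
    margin_b = (price_b - cost_b) / price_b  # B's percentage profit margin
    return diversion_ratio * margin_b * (price_b / price_a)

# Suppose 20% of A's lost sales divert to B, both products sell for $10,
# and B's true marginal cost is $7:
print(round(guppi(0.20, 10.0, 7.0, 10.0), 4))  # 0.06 -> a GUPPI of 6%

# Substituting average variable cost (say $6, below marginal cost) inflates
# B's apparent margin and therefore overstates the GUPPI:
print(round(guppi(0.20, 10.0, 6.0, 10.0), 4))  # 0.08 -> a GUPPI of 8%
```

Point 1 falls out immediately from the formula: whenever any sales divert and B earns a positive margin, the GUPPI is positive. The second call shows the proxy bias described in point 2, which is one reason a positive but low GUPPI should be treated as insignificant.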

One final point. It is important to note that Commissioner Wright is not saying—and would be wrong to say—that a high GUPPI should be sufficient to condemn a merger. The GUPPI has never been empirically verified as a means of identifying anticompetitive mergers. As Dennis Carlton observed, “[T]he use of UPP as a merger screen is untested; to my knowledge, there has been no empirical analysis that has been performed to validate its predictive value in assessing the competitive effects of mergers.” Dennis W. Carlton, Revising the Horizontal Merger Guidelines, 10 J. Competition L. & Econ. 1, 24 (2010). This dearth of empirical evidence seems especially problematic in light of the enforcement agencies’ spotty track record in predicting the effects of mergers. Craig Peters, for example, found that the agencies’ merger simulations produced wildly inaccurate predictions about the price effects of airline mergers. See Craig Peters, Evaluating the Performance of Merger Simulation: Evidence from the U.S. Airline Industry, 49 J.L. & Econ. 627 (2006). Professor Carlton thus warns (Carlton, supra, at 32):

UPP is effectively a simplified version of merger simulation. As such, Peters’s findings tell a cautionary tale—more such studies should be conducted before one treats UPP, or any other potential merger review method, as a consistently reliable methodology by which to identify anticompetitive mergers.

The Commission majority claims to agree that a high GUPPI alone should be insufficient to condemn a merger. But the actual outcome of the analysis in the case at hand—i.e., finding almost all combinations involving high GUPPIs to be anticompetitive, while deeming the procompetitive presumption to be rebutted in 27 low-GUPPI cases—suggests that the Commission is really allowing high GUPPIs to “prove” that anticompetitive harm is likely.

The point of dispute between Wright and the other commissioners, though, is about how to handle low GUPPIs. On that question, the Commission should either join the DOJ in recognizing a safe harbor for low-GUPPI mergers or play it straight with the public and delete the Horizontal Merger Guidelines’ observation that “[i]f the value of diverted sales is proportionately small, significant unilateral price effects are unlikely.” The better approach would be to affirm the Guidelines and recognize a safe harbor.

The FTC recently required divestitures in two merger investigations (here and here), based largely on the majority’s conclusion that

[when] a proposed merger significantly increases concentration in an already highly concentrated market, a presumption of competitive harm is justified under both the Guidelines and well-established case law. (Emphasis added.)

Commissioner Wright dissented in both matters (here and here), contending that

[the majority’s] reliance upon such shorthand structural presumptions untethered from empirical evidence subsidize a shift away from the more rigorous and reliable economic tools embraced by the Merger Guidelines in favor of convenient but obsolete and less reliable economic analysis.

Josh has the better argument, of course. In both cases the majority relied upon its structural presumption rather than actual economic evidence to make out its case. But as Josh notes in his dissent in In the Matter of ZF Friedrichshafen and TRW Automotive (quoting his 2013 dissent in In the Matter of Fidelity National Financial, Inc. and Lender Processing Services):

there is no basis in modern economics to conclude with any modicum of reliability that increased concentration—without more—will increase post-merger incentives to coordinate. Thus, the Merger Guidelines require the federal antitrust agencies to develop additional evidence that supports the theory of coordination and, in particular, an inference that the merger increases incentives to coordinate.

Or as he points out in his dissent in In the Matter of Holcim Ltd. and Lafarge S.A.:

The unifying theme of the unilateral effects analysis contemplated by the Merger Guidelines is that a particularized showing that post-merger competitive constraints are weakened or eliminated by the merger is superior to relying solely upon inferences of competitive effects drawn from changes in market structure.

It is unobjectionable (and uninteresting) that increased concentration may, all else equal, make coordination easier, or enhance unilateral effects in the case of merger to monopoly. There are even cases (as in generic pharmaceutical markets) where rigorous, targeted research exists, sufficient to support a presumption that a reduction in the number of firms would likely lessen competition. But generally (as in these cases), absent actual evidence, market shares might be helpful as an initial screen (and may suggest greater need for a thorough investigation), but they are not analytically probative in themselves. As Josh notes in his TRW dissent:

The relevant question is not whether the number of firms matters but how much it matters.

The majority in these cases asserts that it did find evidence sufficient to support its conclusions, but — and this is where the rubber meets the road — the question remains whether its limited evidentiary claims are sufficient, particularly given analyses that repeatedly come back to the structural presumption. As Josh says in his Holcim dissent:

it is my view that the investigation failed to adduce particularized evidence to elevate the anticipated likelihood of competitive effects from “possible” to “likely” under any of these theories. Without this necessary evidence, the only remaining factual basis upon which the Commission rests its decision is the fact that the merger will reduce the number of competitors from four to three or three to two. This is simply not enough evidence to support a reason to believe the proposed transaction will violate the Clayton Act in these Relevant Markets.

Looking at the majority’s statements, I see a few references to the kinds of market characteristics that could indicate competitive concerns — but very little actual analysis of whether these characteristics are sufficient to meet the Clayton Act standard in these particular markets. The question is — how much analysis is enough? I agree with Josh that the answer must be “more than is offered here,” but it’s an important question to explore more deeply.

Presumably that’s exactly what the ABA’s upcoming program will do, and I highly recommend interested readers attend or listen in. The program details are below.

The Use of Structural Presumptions in Merger Analysis

June 26, 2015, 12:00 PM – 1:15 PM ET

Moderator:

  • Brendan Coffman, Wilson Sonsini Goodrich & Rosati LLP

Speakers:

  • Angela Diveley, Office of Commissioner Joshua D. Wright, Federal Trade Commission
  • Abbott (Tad) Lipsky, Latham & Watkins LLP
  • Janusz Ordover, Compass Lexecon
  • Henry Su, Office of Chairwoman Edith Ramirez, Federal Trade Commission

In-person location:

Latham & Watkins
555 11th Street, NW
Ste 1000
Washington, DC 20004

Register here.

Recently, Commissioner Pai praised the introduction of bipartisan legislation to protect joint sales agreements (“JSAs”) between local television stations. He explained that

JSAs are contractual agreements that allow broadcasters to cut down on costs by using the same advertising sales force. The efficiencies created by JSAs have helped broadcasters to offer services that benefit consumers, especially in smaller markets…. JSAs have served communities well and have promoted localism and diversity in broadcasting. Unfortunately, the FCC’s new restrictions on JSAs have already caused some stations to go off the air and other stations to carry less local news.

The “new restrictions” to which Commissioner Pai refers were recently challenged in court by the National Association of Broadcasters (NAB), et al., and on April 20, the International Center for Law & Economics and a group of law and economics scholars filed an amicus brief with the D.C. Circuit Court of Appeals in support of the petition, asking the court to review the FCC’s local media ownership duopoly rule restricting JSAs.

Much as it did with net neutrality, the FCC is looking to extend another set of rules with no basis in sound economic theory or established facts.

At issue is the FCC’s decision both to retain the duopoly rule and to extend that rule to certain JSAs, all without completing a legally mandated review of the local media ownership rules, due since 2010 (but last completed in 2007).

The duopoly rule is at odds with sound competition policy because it fails to account for drastic changes in the media market that necessitate redefinition of the market for television advertising. Moreover, its extension will bring a halt to JSAs currently operating (and operating well) in nearly 100 markets.  As the evidence on the FCC rulemaking record shows, many of these JSAs offer public interest benefits and actually foster, rather than stifle, competition in broadcast television markets.

In the world of media mergers generally, competition law hasn’t yet caught up to the obvious truth that new media is competing with old media for eyeballs and advertising dollars in basically every marketplace.

For instance, the FTC has relied on very narrow market definitions to challenge newspaper mergers without recognizing competition from television and the Internet. Similarly, the generally accepted market in which Google’s search conduct has been investigated is something like “online search advertising” — a market definition that excludes traditional marketing channels, despite the fact that advertisers shift their spending between these channels on a regular basis.

But the FCC fares even worse here. The FCC’s duopoly rule is premised on an “eight voices” test for local broadcast stations regardless of the market shares of the merging stations. In other words, one entity cannot own FCC licenses to two or more TV stations in the same local market unless there are at least eight independently owned stations in that market, even if the stations’ combined share of the audience or of advertising is below the level that could conceivably give rise to any inference of market power.

Such a rule is completely unjustifiable under any sensible understanding of competition law.

Can you even imagine the FTC or DOJ bringing an 8-to-7 merger challenge in any marketplace? The rule is also inconsistent with the contemporary economic learning incorporated into the 2010 Merger Guidelines, which look at competitive effects rather than simply counting competitors.

Not only did the FCC fail to analyze the marketplace to understand how much competition there is between local broadcasters, cable, and online video, but, on top of that, the FCC applied this outdated duopoly rule to JSAs without considering their benefits.

The Commission offers no explanation as to why it now believes that extending the duopoly rule to JSAs, many of which it had previously approved, is suddenly necessary to protect competition or otherwise serve the public interest. Nor does the FCC cite any evidence to support its position. In fact, the record evidence actually points overwhelmingly in the opposite direction.

As a matter of sound regulatory practice, this is bad enough. But Congress directed the FCC in Section 202(h) of the Telecommunications Act of 1996 to review all of its local ownership rules every four years to determine whether they were still “necessary in the public interest as the result of competition,” and to repeal or modify those that weren’t. During this review, the FCC must examine the relevant data and articulate a satisfactory explanation for its decision.

So what did the Commission do? It announced that, instead of completing its statutorily mandated 2010 quadrennial review of its local ownership rules, it would roll that review into a new 2014 quadrennial review (which it has yet to perform). Meanwhile, the Commission decided to retain its duopoly rule pending completion of that review because it had “tentatively” concluded that it was still necessary.

In other words, the FCC hasn’t conducted its mandatory quadrennial review in more than seven years, and won’t, under the new rules, conduct one for another year and a half (at least). Oh, and, as if nothing of relevance has changed in the market since then, it “tentatively” maintains its already suspect duopoly rule in the meantime.

In short, because the FCC didn’t conduct the review mandated by statute, there is no factual support for the 2014 Order. By relying on the outdated findings from its earlier review, the 2014 Order fails to examine the significant changes both in competition policy and in the market for video programming that have occurred since the current form of the rule was first adopted, rendering the rulemaking arbitrary and capricious under well-established case law.

Had the FCC examined the record of the current rulemaking, it would have found substantial evidence that undermines, rather than supports, the FCC’s rule.

Economic studies have shown that JSAs can help small broadcasters compete more effectively with cable and online video in a world where their advertising revenues are drying up and where temporary economies of scale (through limited contractual arrangements like JSAs) can help smaller, local advertising outlets better implement giant, national advertising campaigns. A ban on JSAs will actually make it less likely that competition among local broadcasters can survive, not more.

Commissioner Pai, in his dissenting statement to the 2014 Order, offered a number of examples of the benefits of JSAs (all of them studiously ignored by the Commission in its Order). In one of these, a JSA enabled two stations in Joplin, Missouri to use their $3.5 million of cost savings from a JSA to upgrade their Doppler radar system, which helped save lives when a devastating tornado hit the town in 2011. But such benefits figure nowhere in the FCC’s “analysis.”

Several econometric studies also provide empirical support for the (also neglected) contention that duopolies and JSAs enable stations to improve the quality and prices of their programming.

One study, by Jeff Eisenach and Kevin Caves, shows that stations operating under these agreements are likely to carry significantly more news, public affairs, and current affairs programming than other stations in their markets. The same study found an 11 percent increase in audience shares for stations acquired through a duopoly. Meanwhile, a study by Hal Singer and Kevin Caves shows that markets with JSAs have advertising prices that are, on average, roughly 16 percent lower than in non-duopoly markets — not higher, as would be expected if JSAs harmed competition.

And again, Commissioner Pai provides several examples of these benefits in his dissenting statement. In one of these, a JSA in Wichita, Kansas enabled one of the two stations to provide Spanish-language HD programming, including news, weather, emergency and community information, in a market where that Spanish-language programming had not previously been available. Again — benefit ignored.

Moreover, in retaining its duopoly rule on the basis of woefully outdated evidence, the FCC completely ignores the continuing evolution in the market for video programming.

In reality, competition from non-broadcast sources of programming has increased dramatically since 1999. Among other things:

  • Today, over 85 percent of American households watch TV over cable or satellite. Most households now have access to nearly 200 cable channels that compete with broadcast TV for programming content and viewers.
  • In 2014, these cable channels attracted twice as many viewers as broadcast channels.
  • Online video services such as Netflix, Amazon Prime, and Hulu have begun to emerge as major new competitors for video programming, leading 179,000 households to “cut the cord” and cancel their cable subscriptions in the third quarter of 2014 alone.
  • Today, 40 percent of U.S. households subscribe to an online streaming service; as a result, cable ratings among adults fell by nine percent in 2014.
  • At the end of 2007, when the FCC completed its last quadrennial review, the iPhone had just been introduced, and the launch of the iPad was still more than two years away. Today, two-thirds of Americans have a smartphone or tablet over which they can receive video content, using technology that didn’t even exist when the FCC last amended its duopoly rule.

In the face of this evidence, and without any contrary evidence of its own, the Commission’s action in reversing 25 years of agency practice and extending its duopoly rule to most JSAs is arbitrary and capricious.

The law is pretty clear that the extent of support adduced by the FCC in its 2014 Rule is insufficient. Among other relevant precedent (and there is a lot of it):

The Supreme Court has held that an agency

must examine the relevant data and articulate a satisfactory explanation for its action, including a rational connection between the facts found and the choice made.

In the DC Circuit:

the agency must explain why it decided to act as it did. The agency’s statement must be one of ‘reasoning’; it must not be just a ‘conclusion’; it must ‘articulate a satisfactory explanation’ for its action.

And:

[A]n agency acts arbitrarily and capriciously when it abruptly departs from a position it previously held without satisfactorily explaining its reason for doing so.

Also:

The FCC ‘cannot silently depart from previous policies or ignore precedent’ . . . .

And most recently in Judge Silberman’s concurrence/dissent in the 2010 Verizon v. FCC Open Internet Order case:

factual determinations that underly [sic] regulations must still be premised on demonstrated — and reasonable — evidential support

None of these standards is met in this case.

It will be interesting to see what the DC Circuit does with these arguments given the pending Petitions for Review of the latest Open Internet Order. There, too, the FCC acted without sufficient evidentiary support for its actions. The NAB/Stirk Holdings case may well turn out to be a bellwether for how the court views the FCC’s evidentiary failings in that case as well.

The scholars joining ICLE on the brief are:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Henry N. Butler, George Mason University Foundation Professor of Law and Executive Director of the Law & Economics Center, George Mason University School of Law (and newly appointed dean)
  • Richard Epstein, Laurence A. Tisch Professor of Law, Classical Liberal Institute, New York University School of Law
  • Stan Liebowitz, Ashbel Smith Professor of Economics, University of Texas at Dallas
  • Fred McChesney, de la Cruz-Mentschikoff Endowed Chair in Law and Economics, University of Miami School of Law
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University
  • Michael E. Sykuta, Associate Professor in the Division of Applied Social Sciences and Director of the Contracting and Organizations Research Institute, University of Missouri

The full amicus brief is available here.

Earlier this week the International Center for Law & Economics, along with a group of prominent professors and scholars of law and economics, filed an amicus brief with the Ninth Circuit seeking rehearing en banc of the court’s FTC, et al. v. St. Luke’s case.

ICLE, joined by the Medicaid Defense Fund, also filed an amicus brief with the Ninth Circuit panel that originally heard the case.

The case involves the purchase by St. Luke’s Hospital of the Saltzer Medical Group, a multi-specialty physician group in Nampa, Idaho. The FTC and the State of Idaho sought to permanently enjoin the transaction under the Clayton Act, arguing that

[T]he combination of St. Luke’s and Saltzer would give it the market power to demand higher rates for health care services provided by primary care physicians (PCPs) in Nampa, Idaho and surrounding areas, ultimately leading to higher costs for health care consumers.

The district court agreed and its decision was affirmed by the Ninth Circuit panel.

Unfortunately, in affirming the district court’s decision, the Ninth Circuit made several errors in its treatment of the efficiencies offered by St. Luke’s in defense of the merger. Most importantly:

  • The court refused to recognize St. Luke’s proffered quality efficiencies, stating that “[i]t is not enough to show that the merger would allow St. Luke’s to better serve patients.”
  • The panel also applied the “less restrictive alternative” analysis in such a way that any theoretically possible alternative to a merger would discount those claimed efficiencies.
  • Finally, the Ninth Circuit panel imposed a much higher burden of proof for St. Luke’s to prove efficiencies than it did for the FTC to make out its prima facie case.

As we note in our brief:

If permitted to stand, the Panel’s decision will signal to market participants that the efficiencies defense is essentially unavailable in the Ninth Circuit, especially if those efficiencies go towards improving quality. Companies contemplating a merger designed to make each party more efficient will be unable to rely on an efficiencies defense and will therefore abandon transactions that promote consumer welfare lest they fall victim to the sort of reasoning employed by the panel in this case.

The following excerpts from the brief elaborate on the errors committed by the court and highlight their significance, particularly in the health care context:

The Panel implied that only price effects can be cognizable efficiencies, noting that the District Court “did not find that the merger would increase competition or decrease prices.” But price divorced from product characteristics is an irrelevant concept. The relevant concept is quality-adjusted price, and a showing that a merger would result in higher product quality at the same price would certainly establish cognizable efficiencies.

* * *

By placing the ultimate burden of proving efficiencies on the defendants and by applying a narrow, impractical view of merger specificity, the Panel has wrongfully denied application of known procompetitive efficiencies. In fact, under the Panel’s ruling, it will be nearly impossible for merging parties to disprove all alternatives when the burden is on the merging party to address any and every untested, theoretical less-restrictive structural alternative.

* * *

Significantly, the Panel failed to consider the proffered significant advantages that health care acquisitions may have over contractual alternatives or how these advantages impact the feasibility of contracting as a less restrictive alternative. In a complex integration of assets, “the costs of contracting will generally increase more than the costs of vertical integration.” (Benjamin Klein, Robert G. Crawford, and Armen A. Alchian, Vertical Integration, Appropriable Rents, and the Competitive Contracting Process, 21 J. L. & ECON. 297, 298 (1978)). In health care in particular, complexity is a given. Health care is characterized by dramatically imperfect information, and myriad specialized and differentiated products whose attributes are often difficult to measure. Realigning incentives through contract is imperfect and often unsuccessful. Moreover, the health care market is one of the most fickle, plagued by constantly changing market conditions arising from technological evolution, ever-changing regulations, and heterogeneous (and shifting) consumer demand. Such uncertainty frequently creates too many contingencies for parties to address in either writing or enforcing contracts, making acquisition a more appropriate substitute.

* * *

Sound antitrust policy and law do not permit the theoretical to triumph over the practical. One can always envision ways that firms could function to achieve potential efficiencies…. But this approach would harm consumers and fail to further the aims of the antitrust laws.

* * *

The Panel’s approach to efficiencies in this case demonstrates a problematic asymmetry in merger analysis. As FTC Commissioner Wright has cautioned:

Merger analysis is by its nature a predictive enterprise. Thinking rigorously about probabilistic assessment of competitive harms is an appropriate approach from an economic perspective. However, there is some reason for concern that the approach applied to efficiencies is deterministic in practice. In other words, there is a potentially dangerous asymmetry from a consumer welfare perspective of an approach that embraces probabilistic prediction, estimation, presumption, and simulation of anticompetitive effects on the one hand but requires efficiencies to be proven on the other. (Dissenting Statement of Commissioner Joshua D. Wright at 5, In the Matter of Ardagh Group S.A., and Saint-Gobain Containers, Inc., and Compagnie de Saint-Gobain)

* * *

In this case, the Panel effectively presumed competitive harm and then imposed unduly high evidentiary burdens on the merging parties to demonstrate actual procompetitive effects. The differential treatment and evidentiary burdens placed on St. Luke’s to prove competitive benefits is “unjustified and counterproductive.” (Daniel A. Crane, Rethinking Merger Efficiencies, 110 MICH. L. REV. 347, 390 (2011)). Such asymmetry between the government’s and St. Luke’s burdens is “inconsistent with a merger policy designed to promote consumer welfare.” (Dissenting Statement of Commissioner Joshua D. Wright at 7, In the Matter of Ardagh Group S.A., and Saint-Gobain Containers, Inc., and Compagnie de Saint-Gobain).

* * *

In reaching its decision, the Panel dismissed these very sorts of procompetitive and quality-enhancing efficiencies associated with the merger that were recognized by the district court. Instead, the Panel simply decided that it would not consider the “laudable goal” of improving health care as a procompetitive efficiency in the St. Luke’s case – or in any other health care provider merger moving forward. The Panel stated that “[i]t is not enough to show that the merger would allow St. Luke’s to better serve patients.” Such a broad, blanket conclusion can serve only to harm consumers.

* * *

By creating a barrier to considering quality-enhancing efficiencies associated with better care, the approach taken by the Panel will deter future provider realignment and create a “chilling” effect on vital provider integration and collaboration. If the Panel’s decision is upheld, providers will be considerably less likely to engage in realignment aimed at improving care and lowering long-term costs. As a result, both patients and payors will suffer in the form of higher costs and lower quality of care. This can’t be – and isn’t – the outcome to which appropriate antitrust law and policy aspires.

The scholars joining ICLE on the brief are:

  • George Bittlingmayer, Wagnon Distinguished Professor of Finance and Otto Distinguished Professor of Austrian Economics, University of Kansas
  • Henry Butler, George Mason University Foundation Professor of Law and Executive Director of the Law & Economics Center, George Mason University
  • Daniel A. Crane, Associate Dean for Faculty and Research and Professor of Law, University of Michigan
  • Harold Demsetz, UCLA Emeritus Chair Professor of Business Economics, University of California, Los Angeles
  • Bernard Ganglmair, Assistant Professor, University of Texas at Dallas
  • Gus Hurwitz, Assistant Professor of Law, University of Nebraska-Lincoln
  • Keith Hylton, William Fairfield Warren Distinguished Professor of Law, Boston University
  • Thom Lambert, Wall Chair in Corporate Law and Governance, University of Missouri
  • John Lopatka, A. Robert Noll Distinguished Professor of Law, Pennsylvania State University
  • Geoffrey Manne, Founder and Executive Director of the International Center for Law and Economics and Senior Fellow at TechFreedom
  • Stephen Margolis, Alumni Distinguished Undergraduate Professor, North Carolina State University
  • Fred McChesney, de la Cruz-Mentschikoff Endowed Chair in Law and Economics, University of Miami
  • Tom Morgan, Oppenheim Professor Emeritus of Antitrust and Trade Regulation Law, George Washington University
  • David Olson, Associate Professor of Law, Boston College
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University
  • D. Daniel Sokol, Professor of Law, University of Florida
  • Mike Sykuta, Associate Professor and Director of the Contracting and Organizations Research Institute, University of Missouri

The amicus brief is available here.

There is always a temptation for antitrust agencies and plaintiffs to center a case around so-called “hot” documents — typically company documents with a snippet or sound bites extracted, sometimes out of context. Some practitioners argue that “[h]ot document[s] can be crucial to the outcome of any antitrust matter.” Although “hot” documents can help catch the interest of the public, a busy judge, or an unsophisticated jury, they often lead to misleading results. More often than not, antitrust cases are resolved on economics and what John Adams called “hard facts,” not snippets from emails or other corporate documents. Antitrust casebooks are littered with cases that initially looked promising based on supposed hot documents but ultimately failed because the foundations of a sound antitrust case were missing.

As discussed below, this is especially true of a recent case brought by the FTC, FTC v. St. Luke’s, currently pending before the Ninth Circuit Court of Appeals, in which the FTC has relied on “hot” documents at every stage of its pleadings.

The crafting and prosecution of civil antitrust cases by federal regulators is a delicate balancing act. Regulators must, on the one hand, adhere to well-defined principles of antitrust enforcement and, on the other, appeal to the interests of a busy judge. The simplest way to do both is to use snippets of documents in an attempt to show that the defendants knew they were violating the law.

After all, if federal regulators merely had to properly define the geographic and product markets, articulate a coherent model of anticompetitive harm, and demonstrate that any anticipated harm would outweigh any procompetitive benefits, where would be the fun in that? The reality is that antitrust cases typically turn on economic analysis, not snippets of hot documents. Antitrust regulators routinely include internal company documents in their cases to supplement the dry, mechanical nature of antitrust analysis. In isolation, however, these documents can suggest competitive concerns where none exist.

With this in mind, it is vital that antitrust regulators do not build an entire case around what seem to be inflammatory documents. Quotes from executives, internal memoranda about competitors, and customer presentations are the icing on the cake after a proper antitrust analysis. As the International Center for Law and Economics’ Geoff Manne once explained,

[t]he problem is that these documents are easily misunderstood, and thus, while the economic significance of such documents is often quite limited, their persuasive value is quite substantial.

Herein lies the problem illustrated by the Federal Trade Commission’s use of provocative documents in its suit against the vertical acquisition of Saltzer Medical Group, an independent physician group comprising 41 doctors, by St. Luke’s Health System. The FTC seeks to stop the acquisition involving these two Idaho-based health care providers, a $16 million transaction that is small compared to other health care mergers investigated by the antitrust agencies. The transaction would give St. Luke’s a total of 24 primary care physicians operating in and around Nampa, Idaho.

In St. Luke’s the FTC has used “hot” documents at each stage of its pleadings, from its complaint through its merits brief on appeal. Some of the statements pulled from executives’ emails, notes, and memoranda seem inflammatory, suggesting that St. Luke’s intended to increase prices and to control market share in order to strengthen its position in payer contracting. These statements, however, have little grounding in the reality of health care competition.

The FTC’s reliance on these so-called hot documents is problematic for several reasons. First, the selective quoting of internal documents paints the merger as intended solely to increase St. Luke’s profits at the expense of payers, when in reality the merger is premised on the integration of health care services and the move from the traditional fee-for-service model to a patient-centric model. St. Luke’s intention of incorporating primary care into its system is in line with the goals of the Affordable Care Act to promote overall well-being through integration. The district court in this case recognized that the purpose of the merger was “primarily to improve patient outcomes.” And, in fact, underserved and uninsured patients are already benefitting from the transaction.

Second, the selective quoting suggested a narrow geographic market, and therefore an artificially high level of concentration, in Nampa, Idaho. That suggestion contradicts reality: nearly one-third of Nampa residents seek primary care physician services outside of Nampa. The geographic market advanced by the FTC is not a proper market, regardless of whether selected documents appear to support it. Without a properly defined geographic market, it is impossible to determine market share and therefore to prove a violation of the Clayton Act.

The DOJ Antitrust Division and the FTC have themselves acknowledged that markets cannot properly be defined solely on the basis of hot documents. Writing in their 2006 commentary on the Horizontal Merger Guidelines, the agencies noted that

[t]he Agencies are careful, however, not to assume that a ‘market’ identified for business purposes is the same as a relevant market defined in the context of a merger analysis. … It is unremarkable that ‘markets’ in common business usage do not always coincide with ‘markets’ in an antitrust context, inasmuch as the terms are used for different purposes.

Third, even if St. Luke’s had intended to increase prices, the fact that a firm wants to do something — such as raise prices above a competitive level or scale back research and development — and even genuinely believes it is able, does not mean that it can. Merger analysis is not a question of mens rea (or subjective intent). Rather, the analysis must show that such behavior is likely as a result of diminished competition. Regulators must not look at evidence of subjective intent and then conclude that the behavior must be possible and that a merger is therefore likely to substantially lessen competition. That would be the tail wagging the dog. Instead, regulators must first determine whether, as a matter of economic principle, a merger is likely to have a particular effect. Then, once the analytical tests have been run, documents can support those theories. But without sound support for the underlying theories, documents (however damning) cannot carry the case across the goal line.

Surely, though, documents suggesting an intent to raise prices should carry an antitrust plaintiff across the goal line? Not so, as Seventh Circuit Judge Frank Easterbrook has explained:

Almost all evidence bearing on “intent” tends to show both greed and desire to succeed and glee at a rival’s predicament. … [B]ut drive to succeed lies at the core of a rivalrous economy. Firms need not like their competitors; they need not cheer them on to success; a desire to extinguish one’s rivals is entirely consistent with, often is the motive behind, competition.

As Harvard Law Professor Phil Areeda observed, relying on documents describing intent is inherently risky because

(1) the businessperson often uses a colorful and combative vocabulary far removed from the lawyer’s linguistic niceties, and (2) juries and judges may fail to distinguish a lawful competitive intent from a predatory state of mind. (7 Phillip E. Areeda & Herbert Hovenkamp, Antitrust Law § 1506 (2d ed. 2003).)

So-called “hot” documents may help guide merger analysis, but, served up as a main course, they make a paltry meal. Merger cases rise or fall on hard facts and economics, and next week we will see whether the Ninth Circuit recognizes this as St. Luke’s and the FTC argue their cases.

A century ago Congress enacted the Clayton Act, which prohibits acquisitions that may substantially lessen competition. For years, the antitrust enforcement agencies looked at only one side of the ledger – the potential for price increases – and did not take into account potential efficiencies in cost savings, better products and services, and innovation. One of the major reforms of the Clinton Administration was to fully incorporate efficiencies into merger analysis, helping to develop sound enforcement standards for the 21st century.

But the current approach of the Federal Trade Commission (“FTC”), especially in hospital mergers, appears to be taking a major step backwards by failing to fully consider efficiencies and arguing for legal thresholds inconsistent with sound competition policy. The FTC’s approach used primarily in hospital mergers seems uniquely misguided since there is a tremendous need for smart hospital consolidation to help bend the cost curve and improve healthcare delivery.

The FTC’s backwards analysis of efficiencies is illustrated by the juxtaposition of two recent hospital-physician alliances.

As I discussed in my last post, no one would doubt the need for greater integration between hospitals and physicians – the debate during the enactment of the Affordable Care Act (“ACA”) detailed how the current siloed approach to healthcare is the worst of all worlds, leading to escalating costs and inferior care. In FTC v. St. Luke’s Health System, Ltd., the FTC challenged Boise-based St. Luke’s acquisition of a physician practice in neighboring Nampa, Idaho.

In the case, St. Luke’s presented a compelling case for efficiencies.

As the St. Luke’s court noted, one of the leading factors in rising health care costs is the ineffective fee-for-service system. In their attempt to control costs and abandon fee-for-service payment, the merging parties effectively demonstrated to the court that the combined entity would offer a high level of coordinated, patient-centered care. Along with integrating electronic records and increasing access for underprivileged patients, the merged entity could also successfully manage population health and offer risk-based payment initiatives to all employed physicians. Indeed, the transaction, consummated several months ago, has already shown significant cost savings and consumer benefits, especially for underserved patients. The court recognized

[t]he Acquisition was intended by St. Luke’s and Saltzer primarily to improve patient outcomes. The Court believes that it would have that effect if left intact.

(Appellants’ Reply Brief at 22, FTC v. St. Luke’s Health Sys., No. 14-35173 (9th Cir. Sept. 2, 2014).)

But the court gave no weight to the efficiencies primarily because the FTC set forward the wrong legal roadmap.

Under the FTC’s current roadmap for efficiencies, the FTC may prove antitrust harm via prediction and presumption while defendants are required to decisively prove countervailing procompetitive efficiencies. Such asymmetric burdens of proof greatly favor the FTC and eliminate a court’s ability to properly weigh the procompetitive efficiencies against the supposed antitrust harm.

Moreover, the FTC effectively claims that efficiencies can be considered “merger-specific” only if the parties demonstrate there are no less anticompetitive means of achieving them. It is not enough that the efficiencies result directly from the merger.

In St. Luke’s, the court determined that the defendants’ efficiencies would “improve the quality of medical care” in Nampa, Idaho, but were not merger-specific. The court relied on the FTC’s experts to find that efficiencies such as the “elimination of fee-for-service reimbursement” and the movement “to risk-based reimbursement” were not merger-specific because other entities had potentially achieved similar efficiencies within different provider “structures.” The FTC and its experts did not establish the success of those other models, nor did they dispute that St. Luke’s would achieve its stated efficiencies. Instead, the mere possibility of alternative structures was enough to overcome merger efficiencies intended to “move the focus of health care back to the patient.” (The case is currently on appeal, and hopefully the Ninth Circuit will correct the lower court’s error.)

In contrast to the St. Luke’s case stands the recent FTC advisory letter to the Norman Physician Hospital Organization (“Norman PHO”). The Norman PHO proposed a collaboration to integrate care between the Norman Physician Association’s 280 physicians and Norman Regional Health System, the largest health system in Norman, Oklahoma. In its analysis, the FTC found that the groups could not “quantify… the likely overall efficiency benefits of its proposed program” nor “provide direct evidence of actual efficiencies or competitive effects,” and that the arrangement had the potential to “exercise market power.” Nonetheless, the FTC permitted the collaboration, resting its decision instead on Norman PHO’s non-exclusive physician contracting provisions.

It seems difficult, if not impossible, to reconcile the FTC’s approaches in Boise and Norman. In Norman the FTC relied on only theoretical efficiencies to permit an alliance with significant market power, proving more than willing to accept Norman PHO’s “potential to… generate significant efficiencies.” No such even-handed treatment of efficiencies was applied in analyzing the St. Luke’s merger.

The starting point for understanding the FTC’s misguided analysis of efficiencies in St. Luke’s and other merger cases stems from the 2010 Horizontal Merger Guidelines (“Guidelines”).

A recent dissent by FTC Commissioner Joshua Wright outlines the problem: asymmetric burdens are placed on the plaintiff and the defendant. Under the Guidelines, the FTC’s merger analysis

embraces probabilistic prediction, estimation, presumption, and simulation of anticompetitive effects on the one hand but requires efficiencies to be proven on the other.

Relying on the structural presumption established in United States v. Philadelphia Nat’l Bank, the FTC need only show that a merger will substantially lessen competition, typically demonstrated through a showing of undue concentration in a relevant market, not actual anticompetitive effects. If this low burden is met, the burden then shifts to the defendants to rebut the presumption of competitive harm.
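To see how mechanical that initial showing can be, consider that the “undue concentration” screen in the 2010 Guidelines is typically run with the Herfindahl-Hirschman Index (HHI). The sketch below is illustrative only — the market shares are hypothetical, and the code is a simplified gloss on the Guidelines’ screening thresholds, not part of any party’s actual analysis:

```python
# Illustrative HHI screen under the 2010 Horizontal Merger Guidelines.
# All market shares are hypothetical.

def hhi(shares):
    """HHI: sum of squared market shares, shares expressed in percentage points."""
    return sum(s ** 2 for s in shares)

def merger_delta(shares, a, b):
    """Increase in HHI from merging the firms at indices a and b: 2 * s_a * s_b."""
    return 2 * shares[a] * shares[b]

# Hypothetical pre-merger shares (percent) in a narrowly drawn market.
shares = [40, 30, 20, 10]

pre = hhi(shares)                   # 1600 + 900 + 400 + 100 = 3000
delta = merger_delta(shares, 1, 2)  # 2 * 30 * 20 = 1200
post = pre + delta                  # 4200

# The 2010 Guidelines presume enhanced market power when the post-merger
# HHI exceeds 2500 and the merger increases the HHI by more than 200.
presumed_harmful = post > 2500 and delta > 200
print(pre, delta, post, presumed_harmful)  # 3000 1200 4200 True
```

The point of the sketch is how little this screen asks of the plaintiff: a market definition and a share table yield a presumption, with no showing of actual anticompetitive effects — which is precisely the asymmetry Commissioner Wright’s dissent identifies.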

As part of their defense, defendants must then prove that any proposed efficiencies are cognizable, meaning “merger-specific,” and have been “verified and do not arise from anticompetitive reductions in output or service.” Furthermore, merging parties must demonstrate “by reasonable means the likelihood and magnitude of each asserted efficiency, how and when each would be achieved…, how each would enhance the merged firm’s ability and incentive to compete, and why each would be merger-specific.”

As stated in a recent speech by FTC Commissioner Joshua Wright,

the critical lesson of the modern economic approach to mergers is that post-merger changes in pricing incentives and competitive effects are what matter.

The FTC’s merger policy “has long been dominated by a focus on only one side of the ledger—anticompetitive effects.” In other words, defendants must demonstrate efficiencies with certainty, while the government can condemn a merger based on a prediction. This asymmetric enforcement policy favors the FTC while requiring that defendants meet stringent, unyielding standards.

As the ICLE amicus brief in St. Luke’s discusses, not satisfied with this asymmetric advantage, the plaintiffs in St. Luke’s attempt to “gild the lily” by claiming that efficiencies can only be considered in cases where there is a presumption of competitive harm, perhaps based solely on “first order” evidence, such as increased market shares. Of course, nothing in the law, the Guidelines, or sound competition policy limits the defense in that fashion.

The court should consider efficiencies regardless of the level of economic harm. The question is whether the efficiencies will outweigh that harm. As Geoff recently pointed out:

There is no economic basis for demanding more proof of claimed efficiencies than of claimed anticompetitive harms. And the Guidelines since 1997 were (ostensibly) drafted in part precisely to ensure that efficiencies were appropriately considered by the agencies (and the courts) in their enforcement decisions.

With presumptions that strongly favor the FTC, it is clear that efficiencies are often overlooked or ignored. From 1997 to 2007, the FTC’s Bureau of Competition staff deliberated on a total of 342 efficiency claims. Of those 342 claims, only 29 were accepted by FTC staff, whereas 109 were rejected and 204 received “no decision.” The most common concerns among FTC staff were that stated efficiencies were not verifiable or were not merger-specific.

Both “concerns” come directly from the Guidelines, which require merging parties to provide significant, and oftentimes impossible, foresight and information to overcome the evidentiary burdens. As former FTC Chairman Tim Muris observed,

too often, the [FTC] found no cognizable efficiencies when anticompetitive effects were determined to be likely and seemed to recognize efficiency only when no adverse effects were predicted.

Thus, in situations in which the FTC believes the dominant issue is market concentration, defendants’ attempts to demonstrate procompetitive justifications are dismissed outright.

The FTC’s efficiency arguments are also not grounded in legal precedent. Courts have recognized that asymmetric burdens are inconsistent with the intent of the Act. As then D.C. Circuit Judge Clarence Thomas observed,

[i]mposing a heavy burden of production on a defendant would be particularly anomalous where … it is easy to establish a prima facie case.

Courts have recognized that efficiencies can be “speculative” or be “based on a prediction backed by sound business judgment.” And in Sherman Act cases the law places the burden on the plaintiff to demonstrate that there are less restrictive alternatives to a potentially illegal restraint – unlike the requirement applied by the FTC that the defendant prove there are no less restrictive alternatives to a merger to achieve efficiencies.

The FTC and the courts should credit efficiencies where there is a reasonable likelihood that procompetitive effects will take place post-merger. Furthermore, the courts should not look at efficiencies in a vacuum. In healthcare, policies and laws, such as the effects of the ACA, must be taken into account. The ACA promotes coordination among providers and incentivizes entities that can move away from fee-for-service payment. In the past, courts weighing the role of health policy in merger analysis have found that efficiencies leading to integrated medicine and “better medical care” are relevant.

In St. Luke’s the court observed that “the existing law seemed to hinder innovation and resist creative solutions” and that “flexibility and experimentation” are “two virtues that are not emphasized in the antitrust law.” Undoubtedly, the current approach makes it nearly impossible for providers to demonstrate efficiencies.

As Commissioner Wright has observed, these asymmetric evidentiary burdens

do not make economic sense and are inconsistent with a merger policy designed to promote consumer welfare.

In the context of St. Luke’s and other healthcare provider mergers, appropriate efficiency analysis is a keystone of determining a merger’s total effects. Dismissing efficiencies on the basis of a rigid, incorrect procedural structure is not aligned with current economic thinking or with a sound approach to incorporating competition analysis into the drive for healthcare reform. It is time for the FTC to set efficiency analysis in the right direction.

There is a consensus in America that we need to control health care costs and improve the delivery of health care. After a long debate on health care reform and careful scrutiny of health care markets, there seems to be agreement that the unintegrated, “siloed approach” to health care is inefficient, costly, and contrary to the goal of improving care. But some antitrust enforcers — most notably the FTC — are standing in the way.

Enlightened health care providers are responding to this consensus by entering into transactions that will lead to greater clinical and financial integration, facilitating a movement from volume-based to value-based delivery of care. And many aspects of the Affordable Care Act encourage this path to integration. Yet when the market seeks to address these critical concerns about our health care system, the FTC and some state Attorneys General take positions diametrically opposed to sound national health care policy as adopted by Congress and implemented by the Department of Health and Human Services.

To be sure, not all state antitrust enforcers stand in the way of health care reform. For example, many states, including New York, Pennsylvania, and Massachusetts, seem willing to permit hospital mergers even in concentrated markets when accompanied by an agreement for continued regulation. At the same time, however, the FTC has been aggressively challenging integration, taking the stance that hospital mergers will raise prices by giving those hospitals greater leverage in negotiations.

The distance between HHS and the FTC in DC is about six blocks, but on health care policy they seem to be miles apart.

The FTC’s skepticism about integration is an old story. As I have discussed previously, during the last decade the agency challenged more than 30 physician collaborations even though those cases lacked any evidence that the collaborations led to higher prices. And, when physicians asked for advice on collaborations, it took the Commission on average more than 436 days to respond to those requests (about as long as it took Congress to debate and enact the Affordable Care Act).

The FTC is on a recent winning streak in challenging hospital mergers. But those were primarily simple cases with direct competition between hospitals in the same market with very high levels of concentration. The courts did not struggle long in these cases, because the competitive harm appeared straightforward.

Far more controversial is a hospital’s acquisition of a physician practice. This type of vertical integration seems to be precisely what the advocates of health care reform are crying out for: the lack of integration between physicians and hospitals is at the core of the problems in health care delivery. And antitrust law has generally been permissive of these types of vertical mergers. No vertical merger has been successfully challenged in the courts since 1980 – the days of reruns of the TV show Dr. Kildare. Even the supposedly pro-enforcement Obama Administration has not gone to court to challenge a vertical merger, and the Obama FTC has not even secured a merger consent under a vertical theory.

The case in which the FTC has decided to “bet the house” is its challenge to St. Luke’s Health System’s acquisition of Saltzer Medical Group in Nampa, Idaho.

St. Luke’s operates the largest hospital in Boise, and Saltzer is the largest physician practice in Nampa, roughly 20 miles away. But rather than recognizing this as a vertical affiliation designed to integrate care and to promote a transition to a system in which the provider takes on the risk of overutilization, the FTC characterized the transaction as purely horizontal – no different from a merger of two hospitals. In that manner, the FTC could paint concentration levels designed to assure victory.

But back to the reasons why integration is essential. It is undisputed that provider integration is the key to improving American health care. Americans pay substantially more than any other industrialized nation for health care services – 17.2 percent of gross domestic product. Furthermore, these higher costs are not associated with better overall care or greater access for patients. As noted during the debate on the Affordable Care Act, the American health care system’s higher costs and lower quality and access are largely associated with the use of a fee-for-service system, which pays for each individual medical service, and the “siloed approach” to medicine, in which providers work autonomously and do not coordinate to improve patient outcomes.

In order to lower health care costs and improve care, many providers have sought to transform health care into a value-based, patient-centered system. To institute such an initiative, medical staff, physicians, and hospitals must clinically integrate and align their financial incentives. Integrated providers share financial risk, electronic records, and data, and implement quality measures in order to provide the best patient care.

The most effective means of ensuring full-scale integration is through a tight affiliation, most often achieved through a merger. Unlike contractual arrangements that are costly, time-sensitive, and complicated by an outdated health care regulatory structure, integrated affiliations ensure that entities can effectively combine and promote structural change throughout the newly formed organization.

Over nearly five weeks of trial in Boise, St. Luke’s and the FTC fought over these conflicting visions of integration and health care policy. Ultimately, the court decided that the supposed Nampa primary care physician market posited by the FTC would become far more concentrated, and that the merger would substantially lessen competition for “Adult Primary Care Services” by raising prices in Nampa. As such, the district court ordered an immediate divestiture.

Rarely, however, has an antitrust court expressed such anguish at its decision. The district court readily “applauded [St. Luke’s] for its efforts to improve the delivery of healthcare.” It acknowledged the positive impact the merger would have on health care within the region. The court further noted that Saltzer had attempted to coordinate with other providers via loose affiliations but had failed to reap any benefits. Due to Saltzer’s lack of integration, Saltzer physicians had limited “the number of Medicaid or uninsured patients they could accept.”

According to the district court, the combination of St. Luke’s and Saltzer would “improve the quality of medical care.” Along with utilizing the same electronic medical records system and giving the Saltzer physicians access to sophisticated quality metrics designed to improve their practices, the parties would improve care by abandoning fee-for-service payment for all employed physicians and instituting population health management, reimbursing the physicians via risk-based payment initiatives.

As noted by the district court, these stated efficiencies would improve patient outcomes “if left intact.” Along with improving coordination and quality of care, the merger, as noted by an amicus brief submitted by the International Center for Law & Economics and the Medicaid Defense Fund to the Ninth Circuit, has also already expanded access to Medicaid and uninsured patients by ensuring previously constrained Saltzer physicians can offer services to the most needy.

The court ultimately was not persuaded by the demonstrated procompetitive benefits. Instead, the district court relied on the FTC’s misguided arguments and determined that the stated efficiencies were not “merger-specific,” because such efficiencies could potentially be achieved via other organizational structures. The district court did not analyze the potential success of substitute structures in achieving the stated efficiencies; instead, it relied on the mere existence of alternative provider structures. As a result, as ICLE and the Medicaid Defense Fund point out:

By placing the ultimate burden of proving efficiencies on the Appellants and applying a narrow, impractical view of merger specificity, the court has wrongfully denied application of known procompetitive efficiencies. In fact, under the court’s ruling, it will be nearly impossible for merging parties to disprove all alternatives when the burden is on the merging party to oppose untested, theoretical less restrictive structural alternatives.

Notably, the district court’s divestiture order has been stayed by the Ninth Circuit. The appeal on the merits is expected to be heard sometime this autumn. Along with reviewing the relevant geographic market and the use of divestiture as a remedy, the Ninth Circuit will also analyze the lower court’s treatment of the merger’s procompetitive efficiencies. For now, the stay is a limited victory for underserved patients and the merging defendants. While such a ruling is not determinative of the Ninth Circuit’s decision on the merits, it does demonstrate that the merging parties have at least a reasonable possibility of success.

As one might imagine, the Ninth Circuit’s decision is of great importance to the antitrust and health care reform communities. If the district court’s ruling is upheld, it could deter health care providers from further integrating via mergers, a precedent antithetical to the very goals of health care reform. If, however, the Ninth Circuit finds the merger does not substantially lessen competition, then procompetitive vertical integration is less likely to be derailed by misapplication of the antitrust laws. The importance of such a decision for American patients cannot be overstated.