Archives For merger guidelines

Last week concluded round 3 of Congressional hearings on mergers in the healthcare provider and health insurance markets. Much like the previous rounds, the hearing saw predictable representatives, of predictable constituencies, saying predictable things.

The pattern is pretty clear: The American Hospital Association (AHA) makes the case that mergers in the provider market are good for consumers, while mergers in the health insurance market are bad. A scholar or two decries all consolidation in both markets. Another interested group, like maybe the American Medical Association (AMA), also criticizes the mergers. And it’s usually left to a representative of the insurance industry, typically one or more of the merging parties themselves, or perhaps a scholar from a free market think tank, to defend the merger.

Lurking behind the public and politicized airings of these mergers, and especially the pending Anthem/Cigna and Aetna/Humana health insurance mergers, is the Affordable Care Act (ACA). Unfortunately, the partisan politics surrounding the ACA, particularly during this election season, may be trumping the sensible economic analysis of the competitive effects of these mergers.

In particular, the partisan assessments of the ACA’s effect on the marketplace have greatly colored the Congressional (mis-)understandings of the competitive consequences of the mergers.  

Witness testimony and questions from members of Congress at the hearings suggest that there is widespread agreement that the ACA is encouraging increased consolidation in healthcare provider markets, for example, but there is nothing approaching unanimity of opinion in Congress or among interested parties regarding what, if anything, to do about it. Congressional Democrats, for their part, have insisted that stepped-up vigilance, particularly of health insurance mergers, is required to ensure that continued competition in health insurance markets isn’t undermined, and that the realization of the ACA’s objectives in the provider market isn’t frustrated by insurance companies engaging in anticompetitive conduct. Meanwhile, Congressional Republicans have generally been inclined to imply (or outright state) that increased concentration is bad, so that they can blame increasing concentration and any lack of competition on the increased regulatory costs or other effects of the ACA. Both sides appear to be missing the greater complexities of the story, however.

While the ACA may be creating certain impediments in the health insurance market, it’s also creating some opportunities for increased health insurance competition, and implementing provisions that should serve to hold down prices. Furthermore, even if the ACA is encouraging more concentration, those increases in concentration can’t be assumed to be anticompetitive. Mergers may very well be the best way for insurers to provide benefits to consumers in a post-ACA world — that is, the world we live in. The ACA may have plenty of negative outcomes, and there may be reasons to attack the ACA itself, but there is no reason to assume that any increased concentration it may bring about is a bad thing.

Asking the right questions about the ACA

We don’t need more self-serving and/or politicized testimony. We need instead to apply an economic framework to the competition issues arising from these mergers in order to understand their actual, likely effects on the health insurance marketplace we have. This framework has to answer questions like:

  • How do we understand the effects of the ACA on the marketplace?
    • In what ways does the ACA require us to alter our understanding of the competitive environment in which health insurance and healthcare are offered?
    • Does the ACA promote concentration in health insurance markets?
    • If so, is that a bad thing?
  • Do efficiencies arise from increased integration in the healthcare provider market?
  • Do efficiencies arise from increased integration in the health insurance market?
  • How do state regulatory regimes affect the understanding of what markets are at issue, and what competitive effects are likely, for antitrust analysis?
  • What are the potential competitive effects of increased concentration in the health care markets?
  • Does increased health insurance market concentration exacerbate or counteract those effects?

Beginning with this post, at least a few of us here at TOTM will take on some of these issues, as part of a blog series aimed at better understanding the antitrust law and economics of the pending health insurance mergers.

Today, we will focus on the ambiguous competitive implications of the ACA. Although not a comprehensive analysis, in this post we will discuss some key insights into how the ACA’s regulations and subsidies should inform our assessment of the competitiveness of the healthcare industry as a whole, and the antitrust review of health insurance mergers in particular.

The ambiguous effects of the ACA

It’s an understatement to say that the ACA is an issue of great political controversy. While many Democrats argue that it has been nothing but a boon to consumers, Republicans usually have nothing good to say about the law’s effects. But both sides miss important but ambiguous effects of the law on the healthcare industry. And because they miss (or disregard) this ambiguity for political reasons, they risk seriously misunderstanding the legal and economic implications of the ACA for healthcare industry mergers.

To begin with, there are substantial negative effects, of course. Requiring insurance companies to accept patients with pre-existing conditions reduces the ability of insurance companies to manage risk. This has led to upward pricing pressure for premiums. While the mandate to buy insurance was supposed to help bring more young, healthy people into the risk pool, so far the projected signups haven’t been realized.

The ACA’s redefinition of what is an acceptable insurance policy has also caused many consumers to lose the policy of their choice. And the ACA’s many regulations, such as the minimum medical loss ratio (MLR) rule requiring insurance companies to spend at least 80% of premiums on healthcare, have squeezed the profit margins of many insurance companies, leading, in some cases, to exit from the marketplace altogether and, in others, to a reduction of new marketplace entry or competition in other submarkets.

On the other hand, there may be benefits from the ACA. While many insurers participated in private exchanges even before the ACA-mandated health insurance exchanges, the increased consumer education from the government’s efforts may have helped enrollment even in private exchanges, and may also have helped to keep premiums from increasing as much as they would have otherwise. At the same time, the increased subsidies for individuals have helped lower-income people afford those premiums. Some have even argued that increased participation in the on-demand economy can be linked to the ability of individuals to buy health insurance directly. On top of that, there has been some entry into certain health insurance submarkets due to lower barriers to entry (because there is less need for agents to sell in a new market with the online exchanges). And the changes in how Medicare pays, with a greater focus on outcomes rather than services provided, have led to the adoption of value-based pricing by both healthcare providers and health insurance companies.

Further, some of the ACA’s effects have decidedly ambiguous consequences for healthcare and health insurance markets. On the one hand, for example, the ACA’s compensation rules have encouraged consolidation among healthcare providers, as noted. One reason for this is that the government pays more for Medicare services delivered by a hospital than for the same services delivered by an independent doctor. Similarly, increased regulatory burdens have led to higher compliance costs and more consolidation as providers attempt to economize on those costs. All of this has happened perhaps to the detriment of doctors (and/or patients) who wanted to remain independent from hospitals and larger health network systems, and, as a result, has generally raised costs for payors like insurers and governments.

But much of this consolidation has also arguably led to increased efficiency and greater benefits for consumers. For instance, the integration of healthcare networks leads to increased sharing of health information and better analytics, better care for patients, reduced overhead costs, and other efficiencies. Ultimately these should translate into higher quality care for patients. And to the extent that they do, they should also translate into lower costs for insurers and lower premiums — provided health insurers are not prevented from obtaining sufficient bargaining power to impose pricing discipline on healthcare providers.

In other words, both the AHA and AMA could be right as to different aspects of the ACA’s effects.

Understanding mergers within the regulatory environment

But what they can’t say is that increased consolidation per se is clearly problematic, nor that, even if it is correlated with sub-optimal outcomes, it is consolidation causing those outcomes, rather than something else (like the ACA) that is causing both the sub-optimal outcomes and the consolidation.

In fact, it may well be the case that increased consolidation improves overall outcomes in healthcare provider and health insurance markets relative to what would happen under the ACA absent consolidation. For Congressional Democrats and others interested in bolstering the ACA and offering the best possible outcomes for consumers, reflexively challenging health insurance mergers because consolidation is “bad” may be undermining both of these objectives.

Meanwhile, and for the same reasons, Congressional Republicans who decry Obamacare should be careful that they do not likewise condemn mergers under what amounts to a “big is bad” theory that is inconsistent with the rigorous law and economics approach that they otherwise generally support. To the extent that the true target is not health insurance industry consolidation, but rather underlying regulatory changes that have encouraged that consolidation, scoring political points by impugning mergers threatens both health insurance consumers in the short run, as well as consumers throughout the economy in the long run (by undermining the well-established economic critiques of a reflexive “big is bad” response).

It is simply not clear that ACA-induced health insurance mergers are likely to be anticompetitive. In fact, because the ACA builds on state regulation of insurance providers, requiring greater transparency and regulatory review of pricing and coverage terms, it seems unlikely that health insurers would be free to engage in anticompetitive price increases or reduced coverage that could harm consumers.

On the contrary, the managerial and transactional efficiencies from the proposed mergers, combined with greater bargaining power against now-larger providers, are likely to lead to both better quality care and cost savings passed on to consumers. Increased entry, at least in part due to the ACA, in most of the markets in which the merging companies will compete, along with integrated health networks themselves entering and threatening entry into insurance markets, will almost certainly lead to more consumer cost savings. In the current regulatory environment created by the ACA, in other words, insurance mergers have considerable upside potential, with little downside risk.


In sum, regardless of what one thinks about the ACA and its likely effects on consumers, it is not clear that health insurance mergers, especially in a post-ACA world, will be harmful.

Rather, assessing the likely competitive effects of health insurance mergers entails consideration of many complicated (and, unfortunately, politicized) issues. In future blog posts we will discuss (among other things): the proper treatment of efficiencies arising from health insurance mergers, the appropriate geographic and product markets for health insurance merger reviews, the role of state regulations in assessing likely competitive effects, and the strengths and weaknesses of arguments for potential competitive harms arising from the mergers.

Last week, FCC General Counsel Jonathan Sallet pulled back the curtain on the FCC staff’s analysis behind its decision to block Comcast’s acquisition of Time Warner Cable. As the FCC staff sets out on its reported Rainbow Tour to reassure regulated companies that it’s not “hostile to the industries it regulates,” Sallet’s remarks suggest it will have an uphill climb. Unfortunately, the staff’s analysis appears to have been unduly speculative, disconnected from critical market realities, and decidedly biased — not characteristics in a regulator that tend to offer much reassurance.

Merger analysis is inherently speculative, but, as courts have repeatedly had occasion to find, the FCC has a penchant for stretching speculation beyond the breaking point, adopting theories of harm that are vaguely possible, even if unlikely and inconsistent with past practice, and poorly supported by empirical evidence. The FCC’s approach here seems to fit this description.

The FCC’s fundamental theory of anticompetitive harm

To begin with, as he must, Sallet acknowledged that there was no direct competitive overlap in the areas served by Comcast and Time Warner Cable, and no consumer would have seen the number of providers available to her changed by the deal.

But the FCC staff viewed this critical fact as “not outcome determinative.” Instead, Sallet explained that the staff’s opposition was based primarily on a concern that the deal might enable Comcast to harm “nascent” OVD competitors in order to protect its video (MVPD) business:

Simply put, the core concern came down to whether the merged firm would have an increased incentive and ability to safeguard its integrated Pay TV business model and video revenues by limiting the ability of OVDs to compete effectively, especially through the use of new business models.

The justification for the concern boiled down to an assumption that the addition of TWC’s subscriber base would be sufficient to render an otherwise too-costly anticompetitive campaign against OVDs worthwhile:

Without the merger, a company taking action against OVDs for the benefit of the Pay TV system as a whole would incur costs but gain additional sales – or protect existing sales — only within its footprint. But the combined entity, having a larger footprint, would internalize more of the external “benefits” provided to other industry members.

The FCC theorized that, by acquiring a larger footprint, Comcast would gain enough bargaining power and leverage, as well as the means to profit from an exclusionary strategy, leading it to employ a range of harmful tactics — such as impairing the quality/speed of OVD streams, imposing data caps, limiting OVD access to TV-connected devices, imposing higher interconnection fees, and saddling OVDs with higher programming costs. It’s difficult to see how such conduct would be permitted under the FCC’s Open Internet Order/Title II regime, but, nevertheless, the staff apparently believed that Comcast would possess a powerful “toolkit” with which to harm OVDs post-transaction.

Comcast’s share of the MVPD market wouldn’t have changed enough to justify the FCC’s purported fears

First, the analysis turned on what Comcast could and would do if it were larger. But Comcast was already the largest ISP and MVPD (now second largest MVPD, post AT&T/DIRECTV) in the nation, and presumably it has approximately the same incentives and ability to disadvantage OVDs today.

In fact, there’s no reason to believe that the growth of Comcast’s MVPD business would cause any material change in its incentives with respect to OVDs. Whatever nefarious incentives the merger allegedly would have created by increasing Comcast’s share of the MVPD market (which is where the purported benefits in the FCC staff’s anticompetitive story would be realized), those incentives would be proportional to the size of the increase in Comcast’s national MVPD market share — which, here, would be about eight percentage points: from 22% to under 30% of the national market.

It’s difficult to believe that Comcast would gain the wherewithal to engage in this costly strategy by adding such a relatively small fraction of the MVPD market (which would still leave other MVPDs serving fully 70% of the market to reap the purported benefits instead of Comcast), but wouldn’t have it at its current size – and there’s no evidence that it has ever employed such strategies with its current market share.

It bears highlighting that the D.C. Circuit has already twice rejected FCC efforts to impose a 30% market cap on MVPDs, based on the Commission’s inability to demonstrate that a greater-than-30% share would create competitive problems, especially given the highly dynamic nature of the MVPD market. In vacating the FCC’s most recent effort to do so in 2009, the D.C. Circuit was resolute in its condemnation of the agency, noting:

In sum, the Commission has failed to demonstrate that allowing a cable operator to serve more than 30% of all [MVPD] subscribers would threaten to reduce either competition or diversity in programming.

The extent of competition and the amount of available programming (including original programming distributed by OVDs themselves) has increased substantially since 2009; this makes the FCC’s competitive claims even less sustainable today.

It’s damning enough to the FCC’s case that there is no marketplace evidence of such conduct or its anticompetitive effects in today’s market. But it’s truly impossible to square the FCC’s assertions about Comcast’s anticompetitive incentives with the fact that, over the past decade, Comcast has made massive investments in broadband, steadily increased broadband speeds, and freely licensed its programming, among other things that have served to enhance OVDs’ long-term viability and growth. Chalk it up to the threat of regulatory intervention or corporate incompetence if you can’t believe that competition alone could be responsible for this largesse, but, whatever the reason, the FCC staff’s fears appear completely unfounded in a marketplace not significantly different than the landscape that would have existed post-merger.

OVDs aren’t vulnerable, and don’t need the FCC’s “help”

After describing the “new entrants” in the market — such unfamiliar and powerless players as Dish, Sony, HBO, and CBS — Sallet claimed that the staff was principally animated by the understanding that

Entrants are particularly vulnerable when competition is nascent. Thus, staff was particularly concerned that this transaction could damage competition in the video distribution industry.

Sallet’s description of OVDs makes them sound like struggling entrepreneurs working in garages. But, in fact, OVDs have radically reshaped the media business and wield enormous clout in the marketplace.

Netflix, for example, describes itself as “the world’s leading Internet television network with over 65 million members in over 50 countries.” New services like Sony’s PlayStation Vue and Sling TV are affiliated with giant, well-established media conglomerates. And whatever new offerings emerge from the FCC-approved AT&T/DIRECTV merger will be as well-positioned as any in the market.

In fact, we already know that the concerns of the FCC are off-base because they are of a piece with the misguided assumptions that underlie the Chairman’s recent NPRM to rewrite the MVPD rules to “protect” just these sorts of companies. But the OVDs themselves — the ones with real money and their competitive futures on the line — don’t see the world the way the FCC does, and they’ve resolutely rejected the Chairman’s proposal. Notably, the proposed rules would “protect” these services from exactly the sort of conduct that Sallet claims would have been a consequence of the Comcast-TWC merger.

If they don’t want or need broad protection from such “harms” in the form of revised industry-wide rules, there is surely no justification for the FCC to throttle a merger based on speculation that the same conduct could conceivably arise in the future.

The realities of the broadband market post-merger wouldn’t have supported the FCC’s argument, either

While a larger Comcast might be in a position to realize more of the benefits from the exclusionary strategy Sallet described, it would also incur more of the costs — likely in direct proportion to the increased size of its subscriber base.

Think of it this way: To the extent that an MVPD can possibly constrain an OVD’s scope of distribution for programming, doing so also necessarily makes the MVPD’s own broadband offering less attractive, forcing it to incur a cost that would increase in proportion to the size of the distributor’s broadband market. In this case, as noted, Comcast would have gained MVPD subscribers — but it would have also gained broadband subscribers. In a world where cable is consistently losing video subscribers (as Sallet acknowledged), and where broadband offers higher margins and faster growth, it makes no economic sense that Comcast would have valued the trade-off the way the FCC claims it would have.

Moreover, in light of the existing conditions imposed on Comcast under the Comcast/NBCU merger order from 2011 (which last for a few more years) and the restrictions adopted in the Open Internet Order, Comcast’s ability to engage in the sort of exclusionary conduct described by Sallet would be severely limited, if not non-existent. Nor, of course, is there any guarantee that former or would-be OVD subscribers would choose to subscribe to, or pay more for, any MVPD in lieu of OVDs. Meanwhile, many of the relevant substitutes in the MVPD market (like AT&T and Verizon FiOS) also offer broadband services – thereby increasing the costs that would be incurred in the broadband market even more, as many subscribers would shift not only their MVPD, but also their broadband service, in response to Comcast degrading OVDs.

And speaking of the Open Internet Order — wasn’t that supposed to prevent ISPs like Comcast from acting on their alleged incentives to impede the quality of, or access to, edge providers like OVDs? Why is merger enforcement necessary to accomplish the same thing once Title II and the rest of the Open Internet Order are in place? And if the argument is that the Open Internet Order might be defeated, aside from the completely speculative nature of such a claim, why wouldn’t a merger condition that imposed the same constraints on Comcast – as was done in the Comcast/NBCU merger order by imposing the former net neutrality rules on Comcast – be perfectly sufficient?

While the FCC staff analysis accepted as true (again, contrary to current marketplace evidence) that a bigger Comcast would have more incentive to harm OVDs post-merger, it rejected arguments that there could be countervailing benefits to OVDs and others from this same increase in scale. Thus, things like incremental broadband investments and speed increases, a larger Wi-Fi network, and greater business services market competition – things that Comcast is already doing and would have done on a greater and more-accelerated scale in the acquired territories post-transaction – were deemed insufficient to outweigh the expected costs of the staff’s entirely speculative anticompetitive theory.

In reality, however, not only OVDs, but consumers – and especially TWC subscribers – would have benefited from the merger through access to Comcast’s faster broadband speeds, its new investments, and its superior video offerings on the X1 platform, among other things. Many low-income families would have benefited from expansion of Comcast’s Internet Essentials program, and many businesses would have benefited from the addition of a more effective competitor to the incumbent providers that currently dominate the business services market. Yet these and other verifiable benefits were given short shrift in the agency’s analysis because they “were viewed by staff as incapable of outweighing the potential harms.”

The assumptions underlying the FCC staff’s analysis of the broadband market are arbitrary and unsupportable

Sallet’s claim that the combined firm would have 60% of all high-speed broadband subscribers in the U.S. necessarily assumes a national broadband market measured at 25 Mbps or higher, which is a red herring.

The FCC has not explained why 25 Mbps is a meaningful benchmark for antitrust analysis. The FCC itself endorsed a 10 Mbps baseline for its Connect America fund last December, noting that over 70% of current broadband users subscribe to speeds less than 25 Mbps, even in areas where faster speeds are available. And streaming online video, the most oft-cited reason for needing high bandwidth, doesn’t require 25 Mbps: Netflix says that 5 Mbps is the most that’s required for an HD stream, and the same goes for Amazon (3.5 Mbps) and Hulu (1.5 Mbps).

What’s more, by choosing an arbitrary, faster speed to define the scope of the broadband market (in an effort to assert the non-competitiveness of the market, and thereby justify its broadband regulations), the agency has – without proper analysis or grounding, in my view – unjustifiably shrunk the size of the relevant market. But, as it happens, doing so also shrinks the size of the increase in “national market share” that the merger would have brought about.

Recall that the staff’s theory was premised on the idea that the merger would give Comcast control over enough of the broadband market that it could unilaterally impose costs on OVDs sufficient to impair their ability to reach or sustain minimum viable scale. But Comcast would have added only one percent of this invented “market” as a result of the merger. It strains credulity to assert that there could be any transaction-specific harm from an increase in market share equivalent to a rounding error.

In any case, basing its rejection of the merger on a manufactured 25 Mbps relevant market creates perverse incentives and will likely do far more to harm OVDs than realization of even the staff’s worst fears about the merger ever could have.

The FCC says it wants higher speeds, and it wants firms to invest in faster broadband. But here Comcast did just that, and then was punished for it. Rather than acknowledging Comcast’s ongoing broadband investments as strong indication that the FCC staff’s analysis might be on the wrong track, the FCC leadership simply sidestepped that inconvenient truth by redefining the market.

The lesson is that if you make your product too good, you’ll end up with an impermissibly high share of the market you create and be punished for it. This can’t possibly promote the public interest.

Furthermore, the staff’s analysis of competitive effects even in this ersatz market isn’t likely supportable. As noted, most subscribers access OVDs on connections that deliver content at speeds well below the invented 25 Mbps benchmark, and they pay the same prices for OVD subscriptions as subscribers who receive their content at 25 Mbps. Confronted with the choice to consume content at 25 Mbps or 10 Mbps (or less), the majority of consumers voluntarily opt for slower speeds — and they purchase service from Netflix and other OVDs in droves, nonetheless.

The upshot? Contrary to the implications on which the staff’s analysis rests, if Comcast were to somehow “degrade” OVD content on the 25 Mbps networks so that it was delivered with characteristics of video content delivered over a 10-Mbps network, real-world, observed consumer preferences suggest it wouldn’t harm OVDs’ access to consumers at all. This is especially true given that OVDs often have a global focus and reach (again, Netflix has 65 million subscribers in over 50 countries), making any claims that Comcast could successfully foreclose them from the relevant market even more suspect.

At the same time, while the staff apparently viewed the broadband alternatives as “limited,” the reality is that Comcast, as well as other broadband providers, is surrounded by capable competitors, including, among others, AT&T, Verizon, CenturyLink, Google Fiber, many advanced VDSL and fiber-based Internet service providers, and high-speed mobile wireless providers. The FCC understated the complex impact of this robust, dynamic, and ever-increasing competition, and its analysis entirely ignored rapidly growing mobile wireless broadband competition.

Finally, as noted, Sallet claimed that the staff determined that merger conditions would be insufficient to remedy its concerns, without any further explanation. Yet the Commission identified similar concerns about OVDs in both the Comcast/NBCUniversal and AT&T/DIRECTV transactions, and adopted remedies to address those concerns. We know the agency is capable of drafting behavioral conditions, and we know they have teeth, as demonstrated by prior FCC enforcement actions. It’s hard to understand why similar, adequate conditions could not have been fashioned for this transaction.

In the end, while I appreciate Sallet’s attempt to explain the FCC’s decision to reject the Comcast/TWC merger, based on the foregoing I’m not sure that Comcast could have made any argument or showing that would have dissuaded the FCC from challenging the merger. Comcast presented a strong economic analysis answering the staff’s concerns discussed above, all to no avail. It’s difficult to escape the conclusion that this was a politically-driven result, and not one rigorously based on the facts or marketplace reality.

As the organizer of this retrospective on Josh Wright’s tenure as FTC Commissioner, I have the (self-conferred) honor of closing out the symposium.

When Josh was confirmed I wrote that:

The FTC will benefit enormously from Josh’s expertise and his error cost approach to antitrust and consumer protection law will be a tremendous asset to the Commission — particularly as it delves further into the regulation of data and privacy. His work is rigorous, empirically grounded, and ever-mindful of the complexities of both business and regulation…. The Commissioners and staff at the FTC will surely… profit from his time there.

Whether others at the Commission have really learned from Josh is an open question, but there’s no doubt that Josh offered an enormous amount from which they could learn. As Tim Muris said, Josh “did not disappoint, having one of the most important and memorable tenures of any non-Chair” at the agency.

Within a month of his arrival at the Commission, in fact, Josh “laid down the cost-benefit-analysis gauntlet” in a little-noticed concurring statement regarding a proposed amendment to the Hart-Scott-Rodino Rules. The technical details of the proposed rule don’t matter for these purposes, but, as Josh noted in his statement, the situation intended to be avoided by the rule had never arisen:

The proposed rulemaking appears to be a solution in search of a problem. The Federal Register notice states that the proposed rules are necessary to prevent the FTC and DOJ from “expend[ing] scarce resources on hypothetical transactions.” Yet, I have not to date been presented with evidence that any of the over 68,000 transactions notified under the HSR rules have required Commission resources to be allocated to a truly hypothetical transaction.

What Josh asked for in his statement was not that the rule be scrapped, but simply that, before adopting the rule, the FTC weigh its costs and benefits.

As I noted at the time:

[I]t is the Commission’s responsibility to ensure that the rules it enacts will actually be beneficial (it is a consumer protection agency, after all). The staff, presumably, did a perfectly fine job writing the rule they were asked to write. Josh’s point is simply that it isn’t clear the rule should be adopted because it isn’t clear that the benefits of doing so would outweigh the costs.

As essentially everyone who has contributed to this symposium has noted, Josh was singularly focused on the rigorous application of the deceptively simple concept that the FTC should ensure that the benefits of any rule or enforcement action it adopts outweigh the costs. The rest, as they say, is commentary.

For Josh, this basic principle should permeate every aspect of the agency, and permeate the way it thinks about everything it does. Only an entirely new mindset can ensure that outcomes, from the most significant enforcement actions to the most trivial rule amendments, actually serve consumers.

While the FTC has a strong tradition of incorporating economic analysis in its antitrust decision-making, its record in using economics in other areas is decidedly mixed, as Berin points out. And even in competition policy, where the Commission frequently uses economics, it’s not clear that it entirely understands economics. The approach that others have lauded Josh for is powerful, but it’s also subtle.

Inherent limitations on anyone’s knowledge about the future of technology, business and social norms caution skepticism, as regulators attempt to predict whether any given business conduct will, on net, improve or harm consumer welfare. In fact, a host of factors suggests that even the best-intentioned regulators tend toward overconfidence and the erroneous condemnation of novel conduct that benefits consumers in ways that are difficult for regulators to understand. Coase’s famous admonition in a 1972 paper has been quoted here before (frequently), but bears quoting again:

If an economist finds something – a business practice of one sort or another – that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be very large, and the reliance on a monopoly explanation, frequent.

Simply “knowing” economics, and knowing that it is important to antitrust enforcement, aren’t enough. Reliance on economic formulae and theoretical models alone — to say nothing of “evidence-based” analysis that doesn’t or can’t differentiate between probative and prejudicial facts — doesn’t resolve the key limitations on regulatory decisionmaking that threaten consumer welfare, particularly when it comes to the modern, innovative economy.

As Josh and I have written:

[O]ur theoretical knowledge cannot yet confidently predict the direction of the impact of additional product market competition on innovation, much less the magnitude. Additionally, the multi-dimensional nature of competition implies that the magnitude of these impacts will be important as innovation and other forms of competition will frequently be inversely correlated as they relate to consumer welfare. Thus, weighing the magnitudes of opposing effects will be essential to most policy decisions relating to innovation. Again, at this stage, economic theory does not provide a reliable basis for predicting the conditions under which welfare gains associated with greater product market competition resulting from some regulatory intervention will outweigh losses associated with reduced innovation.

* * *

In sum, the theoretical and empirical literature reveals an undeniably complex interaction between product market competition, patent rules, innovation, and consumer welfare. While these complexities are well understood, in our view, their implications for the debate about the appropriate scale and form of regulation of innovation are not.

Along the most important dimensions, while our knowledge has expanded since 1972, the problem has not disappeared — and it may only have magnified. As Tim Muris noted in 2005,

[A] visitor from Mars who reads only the mathematical IO literature could mistakenly conclude that the U.S. economy is rife with monopoly power…. [Meanwhile, Section 2’s] history has mostly been one of mistaken enforcement.

It may not sound like much, but what is needed, what Josh brought to the agency, and what turns out to be absolutely essential to getting it right, is unflagging awareness of and attention to the institutional, political and microeconomic relationships that shape regulatory institutions and regulatory outcomes.

Regulators must do their best to constantly grapple with uncertainty, problems of operationalizing useful theory, and, perhaps most important, the social losses associated with error costs. It is not (just) technicians that the FTC needs; it’s regulators imbued with the “Economic Way of Thinking.” In short, what is needed, and what Josh brought to the Commission, is humility — the belief that, as Coase also wrote, sometimes the best answer is to “do nothing at all.”

The technocratic model of regulation is inconsistent with the regulatory humility required in the face of fast-changing, unexpected — and immeasurably valuable — technological advance. As Virginia Postrel warns in The Future and Its Enemies:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation…. By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

For Josh, the first JD/Econ PhD appointed to the FTC,

economics provides a framework to organize the way I think about issues beyond analyzing the competitive effects in a particular case, including, for example, rulemaking, the various policy issues facing the Commission, and how I weigh evidence relative to the burdens of proof and production. Almost all the decisions I make as a Commissioner are made through the lens of economics and marginal analysis because that is the way I have been taught to think.

A representative example will serve to illuminate the distinction between merely using economics and evidence and understanding them — and their limitations.

In his Nielsen/Arbitron dissent, Josh wrote:

The Commission thus challenges the proposed transaction based upon what must be acknowledged as a novel theory—that is, that the merger will substantially lessen competition in a market that does not today exist.

[W]e… do not know how the market will evolve, what other potential competitors might exist, and whether and to what extent these competitors might impose competitive constraints upon the parties.

Josh’s straightforward statement of the basis for restraint stands in marked contrast to the majority’s decision to impose antitrust-based limits on economic activity that hasn’t even yet been contemplated. Such conduct is directly at odds with a sensible, evidence-based approach to enforcement, and the economic problems with it are considerable, as Josh also notes:

[I]t is an exceedingly difficult task to predict the competitive effects of a transaction where there is insufficient evidence to reliably answer the[] basic questions upon which proper merger analysis is based.

When the Commission’s antitrust analysis comes unmoored from such fact-based inquiry, tethered tightly to robust economic theory, there is a more significant risk that non-economic considerations, intuition, and policy preferences influence the outcome of cases.

Compare in this regard Josh’s words about Nielsen with Deborah Feinstein’s defense of the majority from such charges:

The Commission based its decision not on crystal-ball gazing about what might happen, but on evidence from the merging firms about what they were doing and from customers about their expectations of those development plans. From this fact-based analysis, the Commission concluded that each company could be considered a likely future entrant, and that the elimination of the future offering of one would likely result in a lessening of competition.

Instead of requiring rigorous economic analysis of the facts, couched in an acute awareness of our necessary ignorance about the future, for Feinstein the FTC fulfilled its obligation in Nielsen by considering the “facts” alone (not economic evidence, mind you, but customer statements and expressions of intent by the parties) and then, at best, casually applying to them the simplistic, outdated structural presumption – the conclusion that increased concentration would lead inexorably to anticompetitive harm. Her implicit claim is that all the Commission needed to know about the future was what the parties thought about what they were doing and what (hardly disinterested) customers thought they were doing. This shouldn’t be nearly enough.

Worst of all, Nielsen was “decided” with a consent order. As Josh wrote, strongly reflecting the essential awareness of the broader institutional environment that he brought to the Commission:

[w]here the Commission has endorsed by way of consent a willingness to challenge transactions where it might not be able to meet its burden of proving harm to competition, and which therefore at best are competitively innocuous, the Commission’s actions may alter private parties’ behavior in a manner that does not enhance consumer welfare.

Obviously in this regard his successful effort to get the Commission to adopt a UMC enforcement policy statement is a most welcome development.

In short, Josh is to be applauded not because he brought economics to the Commission, but because he brought the economic way of thinking. Such a thing is entirely too rare in the modern administrative state. Josh’s tenure at the FTC was relatively short, but he used every moment of it to assiduously advance his singular, and essential, mission. And, to paraphrase the last line of the movie The Right Stuff (it helps to have the rousing film score playing in the background as you read this): “for a brief moment, [Josh Wright] became the greatest [regulator] anyone had ever seen.”

I would like to extend my thanks to everyone who participated in this symposium. The contributions here will stand as a fitting and lasting tribute to Josh and his legacy at the Commission. And, of course, I’d also like to thank Josh for a tenure at the FTC very much worth honoring.

Alden Abbott and I recently co-authored an article, forthcoming in the Journal of Competition Law and Economics, in which we examined the degree to which the Supreme Court and the federal enforcement agencies have recognized the inherent limits of antitrust law. We concluded that the Roberts Court has admirably acknowledged those limits and has for the most part crafted liability rules that will maximize antitrust’s social value. The enforcement agencies, by contrast, have largely ignored antitrust’s intrinsic limits. In a number of areas, they have sought to expand antitrust’s reach in ways likely to reduce consumer welfare.

The bright spot in federal antitrust enforcement in the last few years has been Josh Wright. Time and again, he has bucked the antitrust establishment, reminding the mandarins that their goal should not be to stop every instance of anticompetitive behavior but instead to optimize antitrust by minimizing the sum of error costs (from both false negatives and false positives) and decision costs. As Judge Easterbrook famously explained, and as Josh Wright has emphasized more than anyone I know, inevitable mistakes (error costs) and heavy information requirements (decision costs) constrain what antitrust can do. Every liability rule, every defense, every immunity doctrine should be crafted with those limits in mind.

Josh will no doubt be remembered, and justifiably so, for spearheading the effort to provide guidance on how the Federal Trade Commission will exercise its amorphous authority to police “unfair methods of competition.” Several others have lauded Josh’s fine contribution on that matter (as have I), so I won’t gild that lily here. Instead, let me briefly highlight two other areas in which Josh has properly pushed for a recognition of antitrust’s inherent limits.

Vertical Restraints

Vertical restraints—both intrabrand restraints like resale price maintenance (RPM) and interbrand restraints like exclusive dealing—are a competitive mixed bag. Under certain conditions, such restraints may reduce overall market output, causing anticompetitive harm. Under other, more commonly occurring conditions, vertical restraints may enhance market output. Empirical evidence suggests that most vertical restraints are output-enhancing rather than output-reducing. Enforcers taking an optimizing, limits of antitrust approach will therefore exercise caution in condemning or discouraging vertical restraints.

That’s exactly what Josh Wright has done. In an early post-Leegin RPM order predating Josh’s tenure, the FTC endorsed a liability rule that placed an inappropriately heavy burden on RPM defendants. Josh later laid the groundwork for correcting that mistake, advocating a much more evidence-based (and defendant-friendly) RPM rule. In the McWane case, the Commission condemned an exclusive dealing arrangement that had been in place for long enough to cause anticompetitive harm but hadn’t done so. Josh rightly called out the majority for elevating theoretical harm over actual market evidence. (Adopting a highly deferential stance, the Eleventh Circuit affirmed the Commission majority, but Josh was right to criticize the majority’s implicit hostility toward exclusive dealing.) In settling the Graco case, the Commission again went beyond the evidence, requiring the defendant to cease exclusive dealing and to stop giving loyalty rebates even though there was no evidence that either sort of vertical restraint contributed to the anticompetitive harm giving rise to the action at issue. Josh rightly took the Commission to task for reflexively treating vertical restraints as suspect when they’re usually procompetitive and had an obvious procompetitive justification (avoidance of interbrand free-riding) in the case at hand.

Horizontal Mergers

Horizontal mergers, like vertical restraints, are competitive mixed bags. Any particular merger of competitors may impose some consumer harm by reducing the competition facing the merged firm. The same merger, though, may provide some consumer benefit by lowering the merged firm’s costs and thereby allowing it to compete more vigorously (most notably, by lowering its prices). A merger policy committed to minimizing the consumer welfare losses from unwarranted condemnations of net beneficial mergers and improper acquittals of net harmful ones would afford equal treatment to claims of anticompetitive harm and procompetitive benefit, requiring each to be established by the same quantum of proof.

The federal enforcement agencies’ new Horizontal Merger Guidelines, however, may put a thumb on the scale, tilting the balance toward a finding of anticompetitive harm. The Guidelines make it easier for the agencies to establish likely anticompetitive harm. Enforcers may now avoid defining a market if they point to adverse unilateral effects using the gross upward pricing pressure index (GUPPI). The merging parties, by contrast, bear a heavy burden when they seek to show that their contemplated merger will occasion efficiencies. They must: (1) prove that any claimed efficiencies are “merger-specific” (i.e., incapable of being achieved absent the merger); (2) “substantiate” asserted efficiencies; and (3) show that such efficiencies will result in the very markets in which the agencies have established likely anticompetitive effects.

In an important dissent (Ardagh), Josh observed that the agencies’ practice has evolved such that there are asymmetric burdens in establishing competitive effects, and he cautioned that this asymmetry will enhance error costs. (Geoff praised that dissent here.) In another dissent (Family Dollar/Dollar Tree), Josh acknowledged some potential problems with the promising but empirically unverified GUPPI, and he wisely advocated the creation of safe harbors for mergers generating very low GUPPI scores. (I praised that dissent here.)

I could go on and on, but these examples suffice to illustrate what has been, in my opinion, Josh’s most important contribution as an FTC commissioner: his constant effort to strengthen antitrust’s effectiveness by acknowledging its inevitable and inexorable limits. Coming on the heels of the FTC’s and DOJ’s rejection of the Section 2 Report—a document that was highly attuned to antitrust’s limits—Josh was just what antitrust needed.

by Jonathan Jacobson, partner & Ryan Maddock, associate, Wilson Sonsini Goodrich & Rosati

Excluding the much-talked-about Section 5 policy statement, Commissioner Wright’s tenure at the FTC was highlighted by his numerous dissents. If there is one unifying theme in those dissents, it is his insistence that rigorous economic analysis be at the very core of all the Commission’s decisions. This theme was perhaps most evident in his decision to dissent in the Ardagh/Saint-Gobain and Sysco/US Foods mergers, two cases that presented interesting questions about how the Commission and courts should balance a merger’s likely anticompetitive effects with its procompetitive efficiencies.

In April of 2014, the Commission announced that it had accepted a consent decree in Ardagh/Saint-Gobain that remedied its competitive concerns related to the merger of the second- and third-largest firms in the market for “glass containers sold to beer and wine distributors in the United States.” The majority, which consisted of Commissioners Ramirez, Ohlhausen, and Brill, argued that the merger would lead to both coordinated and unilateral anticompetitive effects in the market and further stated that “the parties put forward insufficient evidence showing that the level of synergies that could be substantiated and verified would outweigh the clear evidence of consumer harm.” Commissioner Wright, who was the lone dissenter, strongly disagreed with the majority’s conclusions and found that the merger’s cognizable efficiencies were “up to six times greater than any likely unilateral price effect,” and thus the merger should have been approved without requiring a remedy.

Commissioner Wright also used his Ardagh dissent to discuss whether the merging parties and Commission face asymmetric burdens of proof regarding competitive effects. Specifically, Commissioner Wright asked whether the “merging parties [must] overcome a greater burden of proof on efficiencies in practice than does the FTC to satisfy its prima facie burden of establishing anticompetitive effects?” Commissioner Wright stated that the Commission has acknowledged that in theory the burdens of proof should be uniform; however, he argued that the only way the majority could have found that the Ardagh/Saint-Gobain merger would generate almost no cognizable efficiencies is by applying asymmetric burdens. He explained that the majority’s approach “embraces probabilistic prediction, estimation, presumption, and simulation of anticompetitive effects on the one hand but requires efficiencies to be proven on the other.”

Commissioner Wright, who was joined by Commissioner Ohlhausen, also dissented from the Commission’s decision to challenge the Sysco/US Foods merger. While the Commissioners did not issue a formal dissent because of the FTC’s then-pending litigation, Commissioner Wright tweeted that he had “no reason to believe the proposed Sysco/US Foods transaction violated the Clayton Act.” The lack of a formal dissent makes it challenging to ascertain all of Commissioner Wright’s objections, but a reading of the Commission’s administrative complaint provides insight into his likely positions. For example, Commissioner Wright undoubtedly disagreed with the complaint’s treatment of the parties’ proffered efficiencies:

Extraordinary Merger-specific efficiencies are necessary to outweigh the Merger’s likely significant harm to competition in the relevant markets. Respondents cannot demonstrate cognizable efficiencies that would be sufficient to rebut the strong presumption and evidence that the Merger likely would substantially lessen competition in the relevant markets.

Commissioner Wright’s Ardagh dissent makes it clear that he does not believe that the balancing of anticompetitive effects and efficiencies should be an afterthought to the agency’s merger analysis, which is how the majority’s complaint appears to treat it. This case likely represents another instance where Commissioner Wright believed that the majority of commissioners applied asymmetric burdens of proof when balancing the merger’s competitive effects.

Commissioner Wright is not the first person to ask whether current merger analysis favors anticompetitive effects over efficiencies; however, that does not detract from the question’s importance. His views reflect a belief shared by others that antitrust policy should be based on an aggregate welfare standard, rather than the consumer welfare standard that the agencies and the courts have for the most part applied over the past few decades. In Commissioner Wright’s view, by applying asymmetric burdens – which is functionally the same as discounting efficiencies – antitrust agencies could harm both total welfare and consumers by increasing the chance that a procompetitive merger might be blocked. It stands in contrast to the majority view that a merger that raises prices requires efficiencies, specific to the merger, of a magnitude sufficient to defeat any increase in consumer prices – and that, because the efficiency information is in the hands of the proponents, shifting the burden to them is appropriate.

While his tenure at the FTC has come to an end, expect to continue to see Commissioner Wright at the front and center of this and many other important antitrust issues.

FTC Commissioner Josh Wright has some wise thoughts on how to handle a small GUPPI. I don’t mean the fish. Dissenting in part in the Commission’s disposition of the Family Dollar/Dollar Tree merger, Commissioner Wright calls for creating a safe harbor for mergers where the competitive concern is unilateral effects and the merger generates a low score on the “Gross Upward Pricing Pressure Index,” or “GUPPI.”

Before explaining why Wright is right on this one, some quick background on the GUPPI. In 2010, the DOJ and FTC revised their Horizontal Merger Guidelines to reflect better the actual practices the agencies follow in conducting pre-merger investigations. Perhaps the most notable new emphasis in the revised guidelines was a move away from market definition, the traditional starting point for merger analysis, and toward consideration of potentially adverse “unilateral” effects—i.e., anticompetitive harms that, unlike collusion or even non-collusive oligopolistic pricing, need not involve participation of any non-merging firms in the market. The primary unilateral effect emphasized by the new guidelines is that the merger may put “upward pricing pressure” on brand-differentiated but otherwise similar products sold by the merging firms. The guidelines maintain that when upward pricing pressure seems significant, it may be unnecessary to define the relevant market before concluding that an anticompetitive effect is likely.

The logic of upward pricing pressure is straightforward. Suppose five firms sell competing products (Products A-E) that, while largely substitutable, are differentiated by brand. Given the brand differentiation, some of the products are closer substitutes than others. If the closest substitute to Product A is Product B and vice-versa, then a merger between Producer A and Producer B may result in higher prices even if the remaining producers (C, D, and E) neither raise their prices nor reduce their output. The merged firm will know that if it raises the price of Product A, most of the lost sales will be diverted to Product B, which that firm also produces. Similarly, sales diverted from Product B will largely flow to Product A. Thus, the merged company, seeking to maximize its profits, may face pressure to raise the prices of Products A and/or B.

The GUPPI seeks to assess the likelihood, absent countervailing efficiencies, that the merged firm (e.g., Producer A combined with Producer B) would raise the price of one of its competing products (e.g., Product A), causing some of the lost sales on that product to be diverted to its substitute (e.g., Product B). The GUPPI on Product A would thus consist of:

(The Value of Sales Diverted to Product B) / (Foregone Revenues on Lost Product A Sales).

The value of sales diverted to Product B, the numerator, is equal to the number of units diverted from Product A to Product B times the profit margin (price minus marginal cost) on Product B. The foregone revenues on lost Product A sales, the denominator, is equal to the number of lost Product A sales times the price of Product A. Thus, the fraction set forth above is equal to:

(Number of A Sales Diverted to B * Unit Margin on B) / (Number of A Sales Lost * Price of A)

The Guidelines do not specify how high the GUPPI for a particular product must be before competitive concerns are raised, but they do suggest that at some point, the GUPPI is so small that adverse unilateral effects are unlikely. (“If the value of diverted sales is proportionately small, significant unilateral price effects are unlikely.”) Consistent with this observation, DOJ’s Antitrust Division has concluded that a GUPPI of less than 5% will not give rise to a merger challenge.
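
To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The function and the numbers are my own illustrative inventions, not figures from the Family Dollar/Dollar Tree record; the point is simply to show how the diversion ratio, the unit margin on the diverted-to product, and the price of the diverted-from product combine, and how the result compares to DOJ's 5% threshold.

```python
def guppi(diversion_ratio, price_b, marginal_cost_b, price_a):
    """Gross Upward Pricing Pressure Index on Product A.

    diversion_ratio: share of lost Product A sales diverted to Product B
    price_b, marginal_cost_b: price and marginal cost of Product B
    price_a: price of Product A
    """
    unit_margin_b = price_b - marginal_cost_b          # profit earned on each diverted unit
    return diversion_ratio * unit_margin_b / price_a   # value of diverted sales / forgone revenue per lost sale

# Purely illustrative numbers (not drawn from any case record):
g = guppi(diversion_ratio=0.15, price_b=10.00, marginal_cost_b=7.50, price_a=10.00)
print(f"GUPPI on Product A: {g:.2%}")   # prints 3.75%, below DOJ's 5% threshold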

Commissioner Wright has split with his fellow commissioners over whether the FTC should similarly adopt a safe harbor for horizontal mergers where the adverse competitive concern is unilateral effects and the GUPPIs are less than 5%. Of the 330 markets in which the Commission is requiring divestiture of stores, 27 involve GUPPIs of less than 5%. Commissioner Wright’s position is that the combinations in those markets should be deemed to fall within a safe harbor. At the very least, he says, there should be some safe harbor for very small GUPPIs, even if it kicks in somewhere below the 5% level. The Commission has taken the position that there should be no safe harbor for mergers where the competitive concern is unilateral effects, no matter how low the GUPPI. Instead, the Commission majority says, GUPPI is just a starting point; once the GUPPIs are calculated, each market should be assessed in light of qualitative factors, and a gestalt-like, “all things considered” determination should be made.

The Commission majority purports to have taken this approach in the Family Dollar/Dollar Tree case. It claims that having used GUPPI to identify some markets that were presumptively troubling (markets where GUPPIs were above a certain level) and others that were presumptively not troubling (low-GUPPI markets), it went back and considered qualitative evidence for each, allowing the presumption to be rebutted where appropriate. As Commissioner Wright observes, though, the actual outcome of this purported process is curious: almost none of the “presumptively anticompetitive” markets were cleared based on qualitative evidence, whereas 27 of the “presumptively competitive” markets were slated for a divestiture despite the low GUPPI. In practice, the Commission seems to be using high GUPPIs to condemn unilateral effects mergers, while not allowing low GUPPIs to acquit them. Wright, by contrast, contends that a low-enough GUPPI should be sufficient to acquit a merger where the only plausible competitive concern is adverse unilateral effects.

He’s right on this, for at least five reasons.

  1. Virtually every merger involves a positive GUPPI. As long as any sales would be diverted from one merging firm to the other and the firms are pricing above cost (so that there is some profit margin on their products), a merger will involve a positive GUPPI. (Recall that the numerator in the GUPPI is “number of diverted sales * profit margin on the product to which sales are diverted.”) If qualitative evidence must be considered and a gestalt-like decision made in even low-GUPPI cases, then that’s the approach that will always be taken and GUPPI data will be essentially irrelevant.
  2. Calculating GUPPIs is hard. Figuring the GUPPI requires the agencies to make some difficult determinations. Calculating the “diversion ratio” (the percentage of lost A sales that are diverted to B when the price of A is raised) requires determinations of A’s “own-price elasticity of demand” as well as the “cross-price elasticity of demand” between A and B. Calculating the profit margin on B requires determining B’s marginal cost. Assessing elasticity of demand and marginal cost is notoriously difficult. This difficulty matters here for a couple of reasons:
    • First, why go through the difficult task of calculating GUPPIs if they won’t simplify the process of evaluating a merger? Under the Commission’s purported approach, once GUPPI is calculated, enforcers still have to consider all the other evidence and make an “all things considered” judgment. A better approach would be to cut off the additional analysis if the GUPPI is sufficiently small.
    • Second, given the difficulty of assessing marginal cost (which is necessary to determine the profit margin on the product to which sales are diverted), enforcers are likely to use a proxy, and the most commonly used proxy for marginal cost is average variable cost (i.e., the total non-fixed costs of producing the products at issue divided by the number of units produced). Average variable cost, though, tends to be smaller than marginal cost over the relevant range of output, which will cause the profit margin (price – “marginal” cost) on the product to which sales are diverted to appear higher than it actually is. And that will tend to overstate the GUPPI. Thus, at some point, a positive but low GUPPI should be deemed insignificant. (A numeric illustration of this overstatement follows the list below.)
  3. The GUPPI is biased toward an indication of anticompetitive effect. GUPPI attempts to assess gross upward pricing pressure. It takes no account of factors that tend to prevent prices from rising. In particular, it ignores entry and repositioning by other product-differentiated firms, factors that constrain the merged firm’s ability to raise prices. It also ignores merger-induced efficiencies, which tend to put downward pressure on the merged firm’s prices. (Granted, the merger guidelines call for these factors to be considered eventually, but the factors are generally subject to higher proof standards. Efficiencies, in particular, are pretty difficult to establish under the guidelines.) The upshot is that the GUPPI is inherently biased toward an indication of anticompetitive harm. A safe harbor for mergers involving low GUPPIs would help counterbalance this built-in bias.
  4. Divergence from DOJ’s approach will create an arbitrary result. The FTC and DOJ’s Antitrust Division share responsibility for assessing proposed mergers. Having the two enforcement agencies use different standards in their evaluations injects a measure of arbitrariness into the law. In the interest of consistency, predictability, and other basic rule of law values, the agencies should get on the same page. (And, for reasons set forth above, DOJ’s approach is the better one.)
  5. A safe harbor is consistent with the Supreme Court’s decision-theoretic antitrust jurisprudence. In recent years, the Supreme Court has generally crafted antitrust rules to optimize the costs of errors and of making liability judgments (or, put differently, to “minimize the sum of error and decision costs”). On a number of occasions, the Court has explicitly observed that it is better to adopt a rule that will allow the occasional false acquittal if doing so will prevent greater costs from false convictions and administration. The Brooke Group rule that there can be no predatory pricing liability absent below-cost pricing, for example, is expressly not premised on the belief that low, but above-cost, pricing can never be anticompetitive; rather, the rule is justified on the ground that the false negatives it allows are less costly than the false positives and administrative difficulties a more “theoretically perfect” rule would generate. Indeed, the Supreme Court’s antitrust jurisprudence seems to have wholeheartedly endorsed Voltaire’s prudent aphorism, “The perfect is the enemy of the good.” It is thus no answer for the Commission to observe that adverse unilateral effects can sometimes occur when a combination involves a low (<5%) GUPPI. Low but above-cost pricing can sometimes be anticompetitive, but Brooke Group’s safe harbor is sensible and representative of the approach the Supreme Court thinks antitrust should take. The FTC should get on board.
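
As flagged in point 2 above, here is a hypothetical illustration, again in Python with invented numbers, of how substituting average variable cost for marginal cost inflates the measured margin on the diverted-to product and, with it, the GUPPI.

```python
# Hypothetical illustration (invented numbers): using average variable cost (AVC)
# as a proxy for marginal cost (MC) overstates the unit margin on Product B,
# which in turn overstates the GUPPI on Product A.
diversion_ratio = 0.15      # share of lost A sales diverted to B
price_a = price_b = 10.00

true_mc_b = 8.00            # assumed true marginal cost of B
avc_b = 7.00                # AVC proxy, assumed below MC over the relevant range of output

guppi_true = diversion_ratio * (price_b - true_mc_b) / price_a    # uses the true margin
guppi_proxy = diversion_ratio * (price_b - avc_b) / price_a       # uses the overstated margin

print(f"GUPPI with true MC:   {guppi_true:.1%}")    # 3.0%
print(f"GUPPI with AVC proxy: {guppi_proxy:.1%}")   # 4.5%, overstated by the AVC proxy
```

On these assumed figures, the proxy-based GUPPI overstates the true figure by fifty percent, which is the sort of measurement error that, in Wright's view, counsels treating very small GUPPIs as presumptively benign.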

One final point. It is important to note that Commissioner Wright is not saying—and would be wrong to say—that a high GUPPI should be sufficient to condemn a merger. The GUPPI has never been empirically verified as a means of identifying anticompetitive mergers. As Dennis Carlton observed, “[T]he use of UPP as a merger screen is untested; to my knowledge, there has been no empirical analysis that has been performed to validate its predictive value in assessing the competitive effects of mergers.” Dennis W. Carlton, Revising the Horizontal Merger Guidelines, 10 J. Competition L. & Econ. 1, 24 (2010). This dearth of empirical evidence seems especially problematic in light of the enforcement agencies’ spotty track record in predicting the effects of mergers. Craig Peters, for example, found that the agencies’ merger simulations produced wildly inaccurate predictions about the price effects of airline mergers. See Craig Peters, Evaluating the Performance of Merger Simulation: Evidence from the U.S. Airline Industry, 49 J.L. & Econ. 627 (2006). Professor Carlton thus warns (Carlton, supra, at 32):

UPP is effectively a simplified version of merger simulation. As such, Peters’s findings tell a cautionary tale—more such studies should be conducted before one treats UPP, or any other potential merger review method, as a consistently reliable methodology by which to identify anticompetitive mergers.

The Commission majority claims to agree that a high GUPPI alone should be insufficient to condemn a merger. But the actual outcome of the analysis in the case at hand—i.e., finding almost all combinations involving high GUPPIs to be anticompetitive, while deeming the procompetitive presumption to be rebutted in 27 low-GUPPI cases—suggests that the Commission is really allowing high GUPPIs to “prove” that anticompetitive harm is likely.

The point of dispute between Wright and the other commissioners, though, is about how to handle low GUPPIs. On that question, the Commission should either join the DOJ in recognizing a safe harbor for low-GUPPI mergers or play it straight with the public and delete the Horizontal Merger Guidelines’ observation that “[i]f the value of diverted sales is proportionately small, significant unilateral price effects are unlikely.” The better approach would be to affirm the Guidelines and recognize a safe harbor.

The FTC recently required divestitures in two merger investigations (here and here), based largely on the majority’s conclusion that

[when] a proposed merger significantly increases concentration in an already highly concentrated market, a presumption of competitive harm is justified under both the Guidelines and well-established case law. (Emphasis added).
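
For context, the “highly concentrated market” language tracks the structural screen in the 2010 Horizontal Merger Guidelines: a market with a post-merger Herfindahl-Hirschman Index (HHI) above 2,500 is treated as highly concentrated, and a merger that raises the HHI by more than 200 points in such a market is presumed likely to enhance market power. A minimal sketch of that arithmetic, using hypothetical market shares of my own choosing:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percentage points)."""
    return sum(s ** 2 for s in shares)

# Hypothetical five-firm market (shares in percent); the first two firms propose to merge.
pre_merger = [30, 25, 20, 15, 10]
post_merger = [30 + 25, 20, 15, 10]               # merged firm treated as a single entity

delta = hhi(post_merger) - hhi(pre_merger)
print(hhi(pre_merger), hhi(post_merger), delta)   # 2250 3750 1500

# Post-merger HHI above 2500 and an increase above 200 trigger the
# Guidelines' structural presumption the majority invokes above.
presumption_applies = hhi(post_merger) > 2500 and delta > 200
print(presumption_applies)                        # True
```

Wright's point, of course, is that clearing this screen should begin, not end, the analysis.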

Commissioner Wright dissented in both matters (here and here), contending that

[the majority’s] reliance upon such shorthand structural presumptions untethered from empirical evidence subsidize a shift away from the more rigorous and reliable economic tools embraced by the Merger Guidelines in favor of convenient but obsolete and less reliable economic analysis.

Josh has the better argument, of course. In both cases the majority relied upon its structural presumption rather than actual economic evidence to make out its case. But as Josh notes in his dissent in In the Matter of ZF Friedrichshafen and TRW Automotive (quoting his 2013 dissent in In the Matter of Fidelity National Financial, Inc. and Lender Processing Services):

there is no basis in modern economics to conclude with any modicum of reliability that increased concentration—without more—will increase post-merger incentives to coordinate. Thus, the Merger Guidelines require the federal antitrust agencies to develop additional evidence that supports the theory of coordination and, in particular, an inference that the merger increases incentives to coordinate.

Or as he points out in his dissent in In the Matter of Holcim Ltd. and Lafarge S.A.:

The unifying theme of the unilateral effects analysis contemplated by the Merger Guidelines is that a particularized showing that post-merger competitive constraints are weakened or eliminated by the merger is superior to relying solely upon inferences of competitive effects drawn from changes in market structure.

It is unobjectionable (and uninteresting) that increased concentration may, all else equal, make coordination easier, or enhance unilateral effects in the case of merger to monopoly. There are even cases (as in generic pharmaceutical markets) where rigorous, targeted research exists, sufficient to support a presumption that a reduction in the number of firms would likely lessen competition. But generally (as in these cases), absent actual evidence, market shares might be helpful as an initial screen (and may suggest greater need for a thorough investigation), but they are not analytically probative in themselves. As Josh notes in his TRW dissent:

The relevant question is not whether the number of firms matters but how much it matters.

The majority in these cases asserts that it did find evidence sufficient to support its conclusions, but — and this is where the rubber meets the road — the question remains whether its limited evidentiary claims are sufficient, particularly given analyses that repeatedly come back to the structural presumption. As Josh says in his Holcim dissent:

it is my view that the investigation failed to adduce particularized evidence to elevate the anticipated likelihood of competitive effects from “possible” to “likely” under any of these theories. Without this necessary evidence, the only remaining factual basis upon which the Commission rests its decision is the fact that the merger will reduce the number of competitors from four to three or three to two. This is simply not enough evidence to support a reason to believe the proposed transaction will violate the Clayton Act in these Relevant Markets.

Looking at the majority’s statements, I see a few references to the kinds of market characteristics that could indicate competitive concerns — but very little actual analysis of whether these characteristics are sufficient to meet the Clayton Act standard in these particular markets. The question is — how much analysis is enough? I agree with Josh that the answer must be “more than is offered here,” but it’s an important question to explore more deeply.

Presumably that’s exactly what the ABA’s upcoming program will do, and I highly recommend interested readers attend or listen in. The program details are below.

The Use of Structural Presumptions in Merger Analysis

June 26, 2015, 12:00 PM – 1:15 PM ET

  • Brendan Coffman, Wilson Sonsini Goodrich & Rosati LLP
  • Angela Diveley, Office of Commissioner Joshua D. Wright, Federal Trade Commission
  • Abbott (Tad) Lipsky, Latham & Watkins LLP
  • Janusz Ordover, Compass Lexecon
  • Henry Su, Office of Chairwoman Edith Ramirez, Federal Trade Commission

In-person location:

Latham & Watkins
555 11th Street, NW
Ste 1000
Washington, DC 20004

Register here.

Recently, Commissioner Pai praised the introduction of bipartisan legislation to protect joint sales agreements (“JSAs”) between local television stations. He explained that

JSAs are contractual agreements that allow broadcasters to cut down on costs by using the same advertising sales force. The efficiencies created by JSAs have helped broadcasters to offer services that benefit consumers, especially in smaller markets…. JSAs have served communities well and have promoted localism and diversity in broadcasting. Unfortunately, the FCC’s new restrictions on JSAs have already caused some stations to go off the air and other stations to carry less local news.

The “new restrictions” to which Commissioner Pai refers were recently challenged in court by the National Association of Broadcasters (NAB), et al., and on April 20, the International Center for Law & Economics and a group of law and economics scholars filed an amicus brief with the D.C. Circuit Court of Appeals in support of the petition, asking the court to review the FCC’s local media ownership duopoly rule restricting JSAs.

Much as it did with net neutrality, the FCC is looking to extend another set of rules with no basis in sound economic theory or established facts.

At issue is the FCC’s decision both to retain the duopoly rule and to extend that rule to certain JSAs, all without completing a legally mandated review of the local media ownership rules, due since 2010 (but last completed in 2007).

The duopoly rule is at odds with sound competition policy because it fails to account for drastic changes in the media market that necessitate redefinition of the market for television advertising. Moreover, its extension will bring a halt to JSAs currently operating (and operating well) in nearly 100 markets.  As the evidence on the FCC rulemaking record shows, many of these JSAs offer public interest benefits and actually foster, rather than stifle, competition in broadcast television markets.

In the world of media mergers generally, competition law hasn’t yet caught up to the obvious truth that new media is competing with old media for eyeballs and advertising dollars in basically every marketplace.

For instance, the FTC has relied on very narrow market definitions to challenge newspaper mergers without recognizing competition from television and the Internet. Similarly, the generally accepted market in which Google’s search conduct has been investigated is something like “online search advertising” — a market definition that excludes traditional marketing channels, despite the fact that advertisers shift their spending between these channels on a regular basis.

But the FCC fares even worse here. The FCC’s duopoly rule is premised on an “eight voices” test for local broadcast stations regardless of the market shares of the merging stations. In other words, one entity cannot own FCC licenses to two or more TV stations in the same local market unless there are at least eight independently owned stations in that market, even if their combined share of the audience or of advertising is below the level that could conceivably give rise to any inference of market power.

Such a rule is completely unjustifiable under any sensible understanding of competition law.

Can you even imagine the FTC or DOJ bringing an 8 to 7 merger challenge in any marketplace? The rule is also inconsistent with the contemporary economic learning incorporated into the 2010 Merger Guidelines, which looks at competitive effects rather than just counting competitors.

Not only did the FCC fail to analyze the marketplace to understand how much competition there is between local broadcasters, cable, and online video, but, on top of that, the FCC applied this outdated duopoly rule to JSAs without considering their benefits.

The Commission offers no explanation as to why it now believes that extending the duopoly rule to JSAs, many of which it had previously approved, is suddenly necessary to protect competition or otherwise serve the public interest. Nor does the FCC cite any evidence to support its position. In fact, the record evidence actually points overwhelmingly in the opposite direction.

As a matter of sound regulatory practice, this is bad enough. But Congress directed the FCC in Section 202(h) of the Telecommunications Act of 1996 to review all of its local ownership rules every four years to determine whether they were still “necessary in the public interest as the result of competition,” and to repeal or modify those that weren’t. During this review, the FCC must examine the relevant data and articulate a satisfactory explanation for its decision.

So what did the Commission do? It announced that, instead of completing its statutorily mandated 2010 quadrennial review of its local ownership rules, it would roll that review into a new 2014 quadrennial review (which it has yet to perform). Meanwhile, the Commission decided to retain its duopoly rule pending completion of that review because it had “tentatively” concluded that it was still necessary.

In other words, the FCC hasn’t conducted its mandatory quadrennial review in more than seven years, and won’t, under the new rules, conduct one for another year and a half (at least). Oh, and, as if nothing of relevance has changed in the market since then, it “tentatively” maintains its already suspect duopoly rule in the meantime.

In short, because the FCC didn’t conduct the review mandated by statute, there is no factual support for the 2014 Order. By relying on the outdated findings from its earlier review, the 2014 Order fails to examine the significant changes both in competition policy and in the market for video programming that have occurred since the current form of the rule was first adopted, rendering the rulemaking arbitrary and capricious under well-established case law.

Had the FCC examined the record of the current rulemaking, it would have found substantial evidence that undermines, rather than supports, the FCC’s rule.

Economic studies have shown that JSAs can help small broadcasters compete more effectively with cable and online video in a world where their advertising revenues are drying up and where temporary economies of scale (through limited contractual arrangements like JSAs) can help smaller, local advertising outlets better implement giant, national advertising campaigns. A ban on JSAs will actually make it less likely that competition among local broadcasters can survive, not more.

Commissioner Pai, in his dissenting statement to the 2014 Order, offered a number of examples of the benefits of JSAs (all of them studiously ignored by the Commission in its Order). In one of these, a JSA enabled two stations in Joplin, Missouri to use the $3.5 million in cost savings it generated to upgrade their Doppler radar system, which helped save lives when a devastating tornado hit the town in 2011. But such benefits figure nowhere in the FCC’s “analysis.”

Several econometric studies also provide empirical support for the (also neglected) contention that duopolies and JSAs enable stations to improve the quality and prices of their programming.

One study, by Jeff Eisenach and Kevin Caves, shows that stations operating under these agreements are likely to carry significantly more news, public affairs, and current affairs programming than other stations in their markets. The same study found an 11 percent increase in audience shares for stations acquired through a duopoly. Meanwhile, a study by Hal Singer and Kevin Caves shows that markets with JSAs have advertising prices that are, on average, roughly 16 percent lower than in non-duopoly markets — not higher, as would be expected if JSAs harmed competition.

And again, Commissioner Pai provides several examples of these benefits in his dissenting statement. In one of these, a JSA in Wichita, Kansas enabled one of the two stations to provide Spanish-language HD programming, including news, weather, emergency and community information, in a market where that Spanish-language programming had not previously been available. Again — benefit ignored.

Moreover, in retaining its duopoly rule on the basis of woefully outdated evidence, the FCC completely ignores the continuing evolution in the market for video programming.

In reality, competition from non-broadcast sources of programming has increased dramatically since 1999. Among other things:

  • Today, over 85 percent of American households watch TV over cable or satellite. Most households now have access to nearly 200 cable channels that compete with broadcast TV for programming content and viewers.
  • In 2014, these cable channels attracted twice as many viewers as broadcast channels.
  • Online video services such as Netflix, Amazon Prime, and Hulu have begun to emerge as major new competitors for video programming, leading 179,000 households to “cut the cord” and cancel their cable subscriptions in the third quarter of 2014 alone.
  • Today, 40 percent of U.S. households subscribe to an online streaming service; as a result, cable ratings among adults fell by nine percent in 2014.
  • At the end of 2007, when the FCC completed its last quadrennial review, the iPhone had just been introduced, and the launch of the iPad was still more than two years away. Today, two-thirds of Americans have a smartphone or tablet over which they can receive video content, using technology that didn’t even exist when the FCC last amended its duopoly rule.

In the face of this evidence, and without any contrary evidence of its own, the Commission’s action in reversing 25 years of agency practice and extending its duopoly rule to most JSAs is arbitrary and capricious.

The law is pretty clear that the extent of support adduced by the FCC in its 2014 Rule is insufficient. Among other relevant precedent (and there is a lot of it):

The Supreme Court has held that an agency

must examine the relevant data and articulate a satisfactory explanation for its action, including a rational connection between the facts found and the choice made.

In the DC Circuit:

the agency must explain why it decided to act as it did. The agency’s statement must be one of ‘reasoning’; it must not be just a ‘conclusion’; it must ‘articulate a satisfactory explanation’ for its action.


[A]n agency acts arbitrarily and capriciously when it abruptly departs from a position it previously held without satisfactorily explaining its reason for doing so.


The FCC ‘cannot silently depart from previous policies or ignore precedent’ . . . .

And most recently in Judge Silberman’s concurrence/dissent in Verizon v. FCC, the case reviewing the FCC’s 2010 Open Internet Order:

factual determinations that underly [sic] regulations must still be premised on demonstrated — and reasonable — evidential support

None of these standards is met in this case.

It will be noteworthy to see what the DC Circuit does with these arguments given the pending Petitions for Review of the latest Open Internet Order. There, too, the FCC acted without sufficient evidentiary support for its actions. The NAB/Stirk Holdings case may well turn out to be a bellwether for how the court views the FCC’s evidentiary failings in that case, as well.

The scholars joining ICLE on the brief are:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Henry N. Butler, George Mason University Foundation Professor of Law and Executive Director of the Law & Economics Center, George Mason University School of Law (and newly appointed dean).
  • Richard Epstein, Laurence A. Tisch Professor of Law, Classical Liberal Institute, New York University School of Law
  • Stan Liebowitz, Ashbel Smith Professor of Economics, University of Texas at Dallas
  • Fred McChesney, de la Cruz-Mentschikoff Endowed Chair in Law and Economics, University of Miami School of Law
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University
  • Michael E. Sykuta, Associate Professor in the Division of Applied Social Sciences and Director of the Contracting and Organizations Research Institute, University of Missouri

The full amicus brief is available here.

Earlier this week the International Center for Law & Economics, along with a group of prominent professors and scholars of law and economics, filed an amicus brief with the Ninth Circuit seeking rehearing en banc of the court’s FTC, et al. v. St. Luke’s case.

ICLE, joined by the Medicaid Defense Fund, also filed an amicus brief with the Ninth Circuit panel that originally heard the case.

The case involves the purchase by St. Luke’s Hospital of the Saltzer Medical Group, a multi-specialty physician group in Nampa, Idaho. The FTC and the State of Idaho sought to permanently enjoin the transaction under the Clayton Act, arguing that

[T]he combination of St. Luke’s and Saltzer would give it the market power to demand higher rates for health care services provided by primary care physicians (PCPs) in Nampa, Idaho and surrounding areas, ultimately leading to higher costs for health care consumers.

The district court agreed and its decision was affirmed by the Ninth Circuit panel.

Unfortunately, in affirming the district court’s decision, the Ninth Circuit made several errors in its treatment of the efficiencies offered by St. Luke’s in defense of the merger. Most importantly:

  • The court refused to recognize St. Luke’s proffered quality efficiencies, stating that “[i]t is not enough to show that the merger would allow St. Luke’s to better serve patients.”
  • The panel also applied the “less restrictive alternative” analysis in such a way that the mere existence of any theoretically possible alternative to a merger would suffice to discount the claimed efficiencies.
  • Finally, the Ninth Circuit panel imposed a much higher burden of proof for St. Luke’s to prove efficiencies than it did for the FTC to make out its prima facie case.

As we note in our brief:

If permitted to stand, the Panel’s decision will signal to market participants that the efficiencies defense is essentially unavailable in the Ninth Circuit, especially if those efficiencies go towards improving quality. Companies contemplating a merger designed to make each party more efficient will be unable to rely on an efficiencies defense and will therefore abandon transactions that promote consumer welfare lest they fall victim to the sort of reasoning employed by the panel in this case.

The following excerpts from the brief elaborate on the errors committed by the court and highlight their significance, particularly in the health care context:

The Panel implied that only price effects can be cognizable efficiencies, noting that the District Court “did not find that the merger would increase competition or decrease prices.” But price divorced from product characteristics is an irrelevant concept. The relevant concept is quality-adjusted price, and a showing that a merger would result in higher product quality at the same price would certainly establish cognizable efficiencies.

* * *

By placing the ultimate burden of proving efficiencies on the defendants and by applying a narrow, impractical view of merger specificity, the Panel has wrongfully denied application of known procompetitive efficiencies. In fact, under the Panel’s ruling, it will be nearly impossible for merging parties to disprove all alternatives when the burden is on the merging party to address any and every untested, theoretical less-restrictive structural alternative.

* * *

Significantly, the Panel failed to consider the proffered significant advantages that health care acquisitions may have over contractual alternatives or how these advantages impact the feasibility of contracting as a less restrictive alternative. In a complex integration of assets, “the costs of contracting will generally increase more than the costs of vertical integration.” (Benjamin Klein, Robert G. Crawford, and Armen A. Alchian, Vertical Integration, Appropriable Rents, and the Competitive Contracting Process, 21 J. L. & ECON. 297, 298 (1978)). In health care in particular, complexity is a given. Health care is characterized by dramatically imperfect information, and myriad specialized and differentiated products whose attributes are often difficult to measure. Realigning incentives through contract is imperfect and often unsuccessful. Moreover, the health care market is one of the most fickle, plagued by constantly changing market conditions arising from technological evolution, ever-changing regulations, and heterogeneous (and shifting) consumer demand. Such uncertainty frequently creates too many contingencies for parties to address in either writing or enforcing contracts, making acquisition a more appropriate substitute.

* * *

Sound antitrust policy and law do not permit the theoretical to triumph over the practical. One can always envision ways that firms could function to achieve potential efficiencies…. But this approach would harm consumers and fail to further the aims of the antitrust laws.

* * *

The Panel’s approach to efficiencies in this case demonstrates a problematic asymmetry in merger analysis. As FTC Commissioner Wright has cautioned:

Merger analysis is by its nature a predictive enterprise. Thinking rigorously about probabilistic assessment of competitive harms is an appropriate approach from an economic perspective. However, there is some reason for concern that the approach applied to efficiencies is deterministic in practice. In other words, there is a potentially dangerous asymmetry from a consumer welfare perspective of an approach that embraces probabilistic prediction, estimation, presumption, and simulation of anticompetitive effects on the one hand but requires efficiencies to be proven on the other. (Dissenting Statement of Commissioner Joshua D. Wright at 5, In the Matter of Ardagh Group S.A., and Saint-Gobain Containers, Inc., and Compagnie de Saint-Gobain)

* * *

In this case, the Panel effectively presumed competitive harm and then imposed unduly high evidentiary burdens on the merging parties to demonstrate actual procompetitive effects. The differential treatment and evidentiary burdens placed on St. Luke’s to prove competitive benefits is “unjustified and counterproductive.” (Daniel A. Crane, Rethinking Merger Efficiencies, 110 MICH. L. REV. 347, 390 (2011)). Such asymmetry between the government’s and St. Luke’s burdens is “inconsistent with a merger policy designed to promote consumer welfare.” (Dissenting Statement of Commissioner Joshua D. Wright at 7, In the Matter of Ardagh Group S.A., and Saint-Gobain Containers, Inc., and Compagnie de Saint-Gobain).

* * *

In reaching its decision, the Panel dismissed these very sorts of procompetitive and quality-enhancing efficiencies associated with the merger that were recognized by the district court. Instead, the Panel simply decided that it would not consider the “laudable goal” of improving health care as a procompetitive efficiency in the St. Luke’s case – or in any other health care provider merger moving forward. The Panel stated that “[i]t is not enough to show that the merger would allow St. Luke’s to better serve patients.” Such a broad, blanket conclusion can serve only to harm consumers.

* * *

By creating a barrier to considering quality-enhancing efficiencies associated with better care, the approach taken by the Panel will deter future provider realignment and create a “chilling” effect on vital provider integration and collaboration. If the Panel’s decision is upheld, providers will be considerably less likely to engage in realignment aimed at improving care and lowering long-term costs. As a result, both patients and payors will suffer in the form of higher costs and lower quality of care. This can’t be – and isn’t – the outcome to which appropriate antitrust law and policy aspires.

The scholars joining ICLE on the brief are:

  • George Bittlingmayer, Wagnon Distinguished Professor of Finance and Otto Distinguished Professor of Austrian Economics, University of Kansas
  • Henry Butler, George Mason University Foundation Professor of Law and Executive Director of the Law & Economics Center, George Mason University
  • Daniel A. Crane, Associate Dean for Faculty and Research and Professor of Law, University of Michigan
  • Harold Demsetz, UCLA Emeritus Chair Professor of Business Economics, University of California, Los Angeles
  • Bernard Ganglmair, Assistant Professor, University of Texas at Dallas
  • Gus Hurwitz, Assistant Professor of Law, University of Nebraska-Lincoln
  • Keith Hylton, William Fairfield Warren Distinguished Professor of Law, Boston University
  • Thom Lambert, Wall Chair in Corporate Law and Governance, University of Missouri
  • John Lopatka, A. Robert Noll Distinguished Professor of Law, Pennsylvania State University
  • Geoffrey Manne, Founder and Executive Director of the International Center for Law and Economics and Senior Fellow at TechFreedom
  • Stephen Margolis, Alumni Distinguished Undergraduate Professor, North Carolina State University
  • Fred McChesney, de la Cruz-Mentschikoff Endowed Chair in Law and Economics, University of Miami
  • Tom Morgan, Oppenheim Professor Emeritus of Antitrust and Trade Regulation Law, George Washington University
  • David Olson, Associate Professor of Law, Boston College
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University
  • D. Daniel Sokol, Professor of Law, University of Florida
  • Mike Sykuta, Associate Professor and Director of the Contracting and Organizations Research Institute, University of Missouri

The amicus brief is available here.

There is always a temptation for antitrust agencies and plaintiffs to center a case around so-called “hot” documents — typically company documents with a snippet or sound bites extracted, sometimes out of context. Some practitioners argue that “[h]ot document can be crucial to the outcome of any antitrust matter.” Although “hot” documents can help catch the interest of the public, a busy judge, or an unsophisticated jury, they often can lead to misleading results. But more often than not, antitrust cases are resolved on economics and what John Adams called “hard facts,” not snippets from emails or other corporate documents. Antitrust case books are littered with cases that initially looked promising based on some supposed hot documents, but ultimately failed because the foundations of a sound antitrust case were missing.

As discussed below, this is especially true of a recent case brought by the FTC, FTC v. St. Luke’s, currently pending before the Ninth Circuit Court of Appeals, in which the FTC has relied on “hot” documents at each pleading stage to make its case.

The crafting and prosecution of civil antitrust cases by federal regulators is a delicate balancing act. Regulators must, on the one hand, adhere to well-defined principles of antitrust enforcement and, on the other, appeal to the interests of a busy judge. The simple way of doing this is using snippets of documents to attempt to show that the defendants knew they were violating the law.

After all, if federal regulators merely had to properly define geographic and relevant product markets, show a coherent model of anticompetitive harm, and demonstrate that any anticipated harm would outweigh any procompetitive benefits, where is the fun in that? The reality is that antitrust cases typically rely on economic analysis, not snippets of hot documents. Antitrust regulators routinely include internal company documents in their cases to supplement the dry mechanical nature of antitrust analysis. In isolation, however, these documents can suggest competitive concerns where none actually exist.

With this in mind, it is vital that antitrust regulators do not build an entire case around what seem to be inflammatory documents. Quotes from executives, internal memoranda about competitors, and customer presentations are the icing on the cake after a proper antitrust analysis. As the International Center for Law and Economics’ Geoff Manne once explained,

[t]he problem is that these documents are easily misunderstood, and thus, while the economic significance of such documents is often quite limited, their persuasive value is quite substantial.

Herein lies the problem illustrated by the Federal Trade Commission’s use of provocative documents in its suit against the vertical acquisition of Saltzer Medical Group, an independent physician group comprising 41 doctors, by St. Luke’s Health System. The FTC seeks to stop the acquisition involving these two Idaho-based health care providers, a $16 million transaction that is small in comparison to other health care mergers the antitrust agencies have investigated. The transaction would give St. Luke’s a total of 24 primary care physicians operating in and around Nampa, Idaho.

In St. Luke’s, the FTC used “hot” documents at each stage of its pleadings, from its complaint through its merits brief on appeal. Some of the statements pulled from executives’ emails, notes, and memoranda seem inflammatory, suggesting that St. Luke’s intended to increase prices and to control market share in order to strengthen its position in payer contracting. These statements, however, have little grounding in the reality of health care competition.

The reliance by the FTC on these so-called hot documents is problematic for several reasons. First, the selective quoting of internal documents paints the merger as intended solely to increase St. Luke’s profits at the expense of payers, when the reality is that the merger is premised on the integration of health care services and the move from the traditional fee-for-service model to a patient-centric model. St. Luke’s intention of incorporating primary care into its system is in line with the goals of the Affordable Care Act to promote overall well-being through integration. The District Court in this case recognized that the purpose of the merger was “primarily to improve patient outcomes.” And, in fact, underserved and uninsured patients are already benefitting from the transaction.

Second, the selective quoting suggested a narrow geographic market, and therefore an artificially high level of concentration in Nampa, Idaho. That suggestion is contradicted by the reality that nearly one-third of Nampa residents seek primary care physician services outside of Nampa. The geographic market advanced by the FTC is not a proper market, regardless of whether selected documents appear to support it. Without a properly defined geographic market, it is impossible to determine market share and therefore prove a violation of the Clayton Antitrust Act.

The DOJ Antitrust Division and the FTC have acknowledged that markets cannot properly be defined solely on the basis of spicy documents. Writing in their 2006 commentary on the Horizontal Merger Guidelines, the agencies noted that

[t]he Agencies are careful, however, not to assume that a ‘market’ identified for business purposes is the same as a relevant market defined in the context of a merger analysis. … It is unremarkable that ‘markets’ in common business usage do not always coincide with ‘markets’ in an antitrust context, inasmuch as the terms are used for different purposes.

Third, even if St. Luke’s had the intention of increasing prices, the fact that a firm wants to do something — raise prices above a competitive level, say, or scale back research and development expenses — and even genuinely believes it is able to, does not mean that it can. Merger analysis is not a question of mens rea (or subjective intent). Rather, the analysis must show that such behavior will be likely as a result of diminished competition. Regulators must not look at evidence of this subjective intent and then conclude that the behavior must be possible and that a merger is therefore likely to substantially lessen competition. This would be the tail wagging the dog. Instead, regulators must first determine whether, as a matter of economic principle, a merger is likely to have a particular effect. Then, once the analytical tests have been run, documents can support these theories. But without sound support for the underlying theories, documents (however condemning) cannot bring the case across the goal line.

Surely documents suggesting an intent to raise prices should be enough to bring an antitrust plaintiff across the goal line? Not so, as Seventh Circuit Judge Frank Easterbrook has explained:

Almost all evidence bearing on “intent” tends to show both greed and desire to succeed and glee at a rival’s predicament. … [B]ut drive to succeed lies at the core of a rivalrous economy. Firms need not like their competitors; they need not cheer them on to success; a desire to extinguish one’s rivals is entirely consistent with, often is the motive behind competition.

As Harvard Law Professor Phil Areeda observed, relying on documents describing intent is inherently risky because

(1) the businessperson often uses a colorful and combative vocabulary far removed from the lawyer’s linguistic niceties, and (2) juries and judges may fail to distinguish a lawful competitive intent from a predatory state of mind. (7 Phillip E. Areeda & Herbert Hovenkamp, Antitrust Law § 1506 (2d ed. 2003).)

So-called “hot” documents may help guide merger analysis, but served up as a main course make a paltry meal. Merger cases rise or fall on hard facts and economics, and next week we will see if the Ninth Circuit recognizes this as both St. Luke’s and the FTC argue their cases.