
In the latest congressional hearing, purportedly analyzing Google’s “stacking the deck” in the online advertising marketplace, much of the opening statement and questioning by Senator Mike Lee, and later questioning by Senator Josh Hawley, focused on an episode of alleged anti-conservative bias: Google threatening to demonetize The Federalist, a conservative publisher, unless it exercised greater control over its comments section. The senators connected this to Google’s “dominance,” arguing that it is only because Google’s ad services are essential that Google can dictate terms to a conservative website. A similar impulse motivates Section 230 reform efforts as well: allegedly anti-conservative online platforms wield their dominance to censor conservative speech, either through deplatforming or demonetization.

Before even getting into the analysis of how to incorporate political bias into antitrust analysis, though, it should be noted that there likely is no viable antitrust remedy. Even aside from the Section 230 debate, online platforms like Google are First Amendment speakers who have editorial discretion over their sites and apps, much like newspapers. An antitrust remedy compelling these companies to carry speech they disagree with would almost certainly violate the First Amendment.

But even aside from the First Amendment aspect of this debate, there is no easy way to incorporate concerns about political bias into antitrust. Perhaps the best way to understand this argument in the antitrust sense is as a non-price effects analysis. 

Political bias could be seen by end consumers as an important aspect of product quality. Conservatives have made the case that not only Google, but also Facebook and Twitter, have discriminated against conservative voices. The argument would then be that consumer welfare is harmed when these dominant platforms leverage their control of the social media marketplace into the marketplace of ideas by censoring voices with whom they disagree.

While this has theoretical plausibility, there are real practical difficulties. As Geoffrey Manne and I have written previously, in the context of incorporating privacy into antitrust analysis:

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application. 

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist. 

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

Just as with privacy and other product qualities, the analysis becomes increasingly complex first when tradeoffs between price and quality are introduced, and then even more so when tradeoffs between what different consumer groups perceive as quality are added. In fact, political bias is even more complex than privacy. All but the most exhibitionistic would prefer more privacy to less, all other things being equal. But with political media consumption, most would prefer to have more of what they want to read available, even if it comes at the expense of what others may want. There is no easy way to understand what consumer welfare means in a situation where one group’s preferences must come at the expense of another’s in moderation decisions.

Consider the case of The Federalist again. The allegation is that Google is imposing its anti-conservative bias by “forcing” the website to clean up its comments section. The argument is that since The Federalist needs Google’s advertising money, it must play by Google’s rules. And since it did so, there is now one less avenue for conservative speech.

What this argument misses is the balance Google and other online services must strike as multi-sided platforms. The goal is to connect advertisers on one side of the platform to users on the other. If a site wants to take advantage of the ad network, it seems inevitable that intermediaries like Google will need to create rules about what can and can’t be shown, or they run the risk of losing advertisers who don’t want to be associated with certain speech or conduct. For instance, most companies don’t want to be associated with racist commentary. Thus, they will take great pains to make sure they don’t sponsor or place ads in venues associated with racism. Online platforms connecting advertisers to potential consumers must take that into consideration.

Users, like those who frequent The Federalist, have unpriced access to content across those sites and apps which are part of ad networks like Google’s. Other models, like paid subscriptions (which The Federalist also has available), are also possible. But it isn’t clear that conservative voices or conservative consumers have been harmed overall by the option of unpriced access on one side of the platform, with advertisers paying on the other side. If anything, it seems the opposite is the case since conservatives long complained about legacy media having a bias and lauded the Internet as an opportunity to gain a foothold in the marketplace of ideas.

Online platforms like Google must balance the interests of users from across the political spectrum. If their moderation practices are too politically biased in one direction or another, users could switch to another online platform with one click or swipe. Assuming online platforms wish to maximize revenue, they will have a strong incentive to limit political bias in their moderation practices. The ease of switching to another platform that markets itself as more free-speech-friendly, like Parler, shows entrepreneurs can take advantage of market opportunities if Google and other online platforms go too far with political bias.

While one could perhaps argue that the major online platforms are colluding to keep out conservative voices, this is difficult to square with the different moderation practices each employs, as well as with data suggesting that conservative voices are consistently among the most shared on Facebook.

Antitrust is not a cure-all law. Conservatives who normally understand this need to reconsider whether antitrust is really well-suited for litigating concerns about anti-conservative bias online. 

Last Thursday and Friday, Truth on the Market hosted a symposium analyzing the Draft Vertical Merger Guidelines from the FTC and DOJ. The relatively short draft guidelines provided ample opportunity for discussion, as evidenced by the stellar roster of authors thoughtfully weighing in on the topic. 

We want to thank all of the participants for their excellent contributions. All of the posts are collected here, and below I briefly summarize each in turn. 

Symposium Day 1

Herbert Hovenkamp on the important advance of economic analysis in the draft guidelines

Hovenkamp views the draft guidelines as a largely positive development for the state of antitrust enforcement. Beginning with an observation — as was common among participants in the symposium — that the existing guidelines are outdated, Hovenkamp believes that the inclusion of 20% thresholds for market share and related product use represents a reasonable middle position between the extremes of zealous antitrust enforcement and non-enforcement.

Hovenkamp also observes that, despite their relative brevity, the draft guidelines contain much by way of reference to the 2010 Horizontal Merger Guidelines. Ultimately Hovenkamp believes that, despite the relative lack of detail in some respects, the draft guidelines are an important step in elaborating the “economic approaches that the agencies take toward merger analysis, one in which direct estimates play a larger role, with a comparatively reduced role for more traditional approaches depending on market definition and market share.”

Finally, he notes that, while the draft guidelines leave the current burden of proof in the hands of challengers, the presumption that vertical mergers are “invariably benign, particularly in highly concentrated markets or where the products in question are differentiated” has been weakened.

Full post.

Jonathan E. Nuechterlein on the lack of guidance in the draft vertical merger guidelines

Nuechterlein finds it hard to square elements of the draft vertical merger guidelines with both the past forty years of US enforcement policy and the empirical work confirming the largely beneficial nature of vertical mergers. Relatedly, the draft guidelines lack genuine limiting principles when describing speculative theories of harm. Without better specificity, the draft guidelines will do little as a source of practical guidance.

One criticism from Nuechterlein is that the draft guidelines blur the distinction between “harm to competition” and “harm to competitors” by, for example, focusing on changes to rivals’ access to inputs and lost sales.

Nuechterlein also takes issue with what he characterizes as the “arbitrarily low” 20 percent thresholds. In particular, he finds that linking the two separate 20 percent thresholds (relevant market and related product) produces too small a set of situations in which firms might qualify for the safe harbor. In his view, the linked thresholds do more to preserve the agencies’ discretion than to provide clarity to firms and consumers.

Full post.

William J. Kolasky and Philip A. Giordano discuss the need to look to the EU for a better model for the draft guidelines

While Kolasky and Giordano believe that the 1984 guidelines are badly outdated, they also believe that the draft guidelines fail to recognize important efficiencies, and fail to give sufficiently clear standards for challenging vertical mergers.

By contrast, Kolasky and Giordano believe that the 2008 EU vertical merger guidelines provide much greater specificity and, in some respects, that the 1984 US guidelines were better aligned with the 2008 EU guidelines than the new draft is. Losing that specificity in the new draft guidelines sets the standards back. As such, they recommend that the DOJ and FTC adopt the EU vertical merger guidelines as a model for the US.

To take one example, the draft guidelines lose some of the important economic distinctions between vertical and horizontal mergers and need to be clarified, in particular with respect to burdens of proof related to efficiencies. The EU guidelines also provide superior guidance on how to distinguish between a firm’s ability and its incentive to raise rivals’ costs.

Full post.

Margaret Slade believes that the draft guidelines are a step in the right direction, but uneven on critical issues

Slade welcomes the new draft guidelines and finds them to be a good effort, if in need of some refinement. She believes the agencies were correct to defer to the 2010 Horizontal Merger Guidelines for the conceptual foundations of market definition and concentration, but believes that the 20 percent thresholds don’t reveal enough information. She believes that it would be helpful “to have a list of factors that could be used to determine which mergers that fall below those thresholds are more likely to be investigated, and vice versa.”

Slade also takes issue with the way the draft guidelines deal with the elimination of double marginalization (EDM). Although she does not believe that EDM should always be automatically assumed, the guidelines do not offer enough detail to determine the cases where it should not be assumed.

For Slade, the guidelines also fail to include a wide range of efficiencies that can arise from vertical integration. For instance “organizational efficiencies, such as mitigating contracting, holdup, and renegotiation costs, facilitating specific investments in physical and human capital, and providing appropriate incentives within firms” are important considerations that the draft guidelines should acknowledge.

Slade also advises caution when simulating vertical mergers. They are much more complex than horizontal simulations, which means that “vertical merger simulations have to be carefully crafted to fit the markets that are susceptible to foreclosure and that a one-size-fits-all model can be very misleading.”

Full post.

Joshua D. Wright, Douglas H. Ginsburg, Tad Lipsky, and John M. Yun on how to extend the economic principles present in the draft vertical merger guidelines

Wright et al. commend the agencies for highlighting important analytical factors while avoiding “untested merger assessment tools or theories of harm.”

They do, however, offer some points for improvement. First, EDM should be clearly incorporated into the unilateral effects analysis. The way the draft guidelines are currently structured improperly leaves the role of EDM in a sort of “limbo” between effects analysis and efficiencies analysis that could confuse courts and lead to an incomplete and unbalanced assessment of unilateral effects.

Second, Wright et al. also argue that the 20 percent thresholds in the draft guidelines do not have any basis in evidence or theory, nor are they of “any particular importance to predicting competitive effects.”

Third, by abandoning the 1984 guidelines’ acknowledgement of the generally beneficial effects of vertical mergers, the draft guidelines reject the weight of modern antitrust literature and fail to recognize “the empirical reality that vertical relationships are generally procompetitive or neutral.”

Finally, the draft guidelines should be more specific in recognizing that there are transaction costs associated with integration via contract. Properly conceived, the guidelines should more readily recognize that efficiencies arising from integration via merger are cognizable and merger specific.

Full post.

Gregory J. Werden and Luke M. Froeb on the conspicuous silences of the proposed vertical merger guidelines

A key criticism offered by Werden and Froeb in their post is that “the proposed Guidelines do not set out conditions necessary or sufficient for the agencies to conclude that a merger likely would substantially lessen competition.” The draft guidelines refer to factors the agencies may consider as part of their deliberation, but ultimately do not give an indication as to how those different factors will be weighed. 

Further, Werden and Froeb believe that the draft guidelines fail even to communicate how the agencies generally view the competitive process — in particular, how the agencies view the critical differences between horizontal and vertical mergers.

Full post.

Jonathan M. Jacobson and Kenneth Edelson on the missed opportunity to clarify merger analysis in the draft guidelines

Jacobson and Edelson begin with an acknowledgement that the guidelines are outdated and that there is a dearth of useful case law, thus leading to a need for clarified rules. Unfortunately, they do not feel that the current draft guidelines do nearly enough to satisfy this need for clarification. 

Generally positive about the 20% thresholds in the draft guidelines, Jacobson and Edelson nonetheless feel that this “loose safe harbor” leaves some problematic ambiguity. For example, the draft guidelines endorse a unilateral foreclosure theory of harm, but leave unspecified what actually qualifies as a harm. Also, while the Baker Hughes burden shifting framework is widely accepted, the guidelines fail to specify how burdens should be allocated in vertical merger cases. 

The draft guidelines also miss an important opportunity to specify whether or not EDM should be presumed to exist in vertical mergers, and whether it should be presumptively credited as merger-specific.

Full post.

Symposium Day 2

Timothy Brennan on the complexities of enforcement for “pure” vertical mergers

Brennan’s post focuses on what he refers to as “pure” vertical mergers, which do not include concerns about expansion into upstream or downstream markets. Brennan notes the highly complex nature of the speculative theories of harm that can arise from vertical mergers. Consequently, he concludes that, with respect to blocking pure vertical mergers,

“[I]t is not clear that we are better off expending the resources to see whether something is bad, rather than accepting the cost of error from adopting imperfect rules — even rules that imply strict enforcement. Pure vertical merger may be an example of something that we might just want to leave be.”

Full post.

Steven J. Cernak on the burden of proof for EDM

Cernak’s post examines the absences and ambiguities in the draft guidelines as compared to the 1984 guidelines. He notes the absence of some theories of harm — for instance, the threat of regulatory evasion — and then moves on to point out the ambiguity in how the draft guidelines deal with pleading and proving EDM.

Specifically, the draft guidelines are unclear as to how EDM should be treated. Is EDM an affirmative defense, or is it a factor that agencies are required to include as part of their own analysis? In Cernak’s opinion, the agencies should be clearer on the point. 

Full post.

Eric Fruits on messy mergers and muddled guidelines

Fruits observes that the draft guidelines’ attempt to clarify how the Agencies think about mergers and competition actually demonstrates how complex markets, related products, and dynamic competition are.

Fruits goes on to describe how the assumptions necessary to support the speculative theories of harm on which the draft guidelines may rely are vulnerable to change. Ultimately, relying on such theories and strong assumptions may make market definition of even “obvious” markets and products a fraught exercise that devolves into a battle of experts.

Full post.

Pozen, Cornell, Concklin, and Van Arsdall on the missed opportunity to harmonize with international law

Pozen et al. believe that the draft guidelines inadvisably move the US away from accepted international standards. The 20 percent threshold in the draft guidelines is “arbitrarily low” given the generally procompetitive nature of vertical combinations.

Instead, the DOJ and the FTC should consider following the approaches taken by the EU, Japan, and Chile by favoring a 30 percent threshold for challenges along with a post-merger HHI measure below 2000.

Full post.

Scott Sher and Matthew McDonald write about the implications of the Draft Vertical Merger Guidelines for vertical mergers involving technology start-ups

Sher and McDonald describe how the draft vertical merger guidelines miss a valuable opportunity to clarify speculative theories of harm based on “potential competition.”

In particular, the draft guidelines should address the literature demonstrating that vertical acquisition of small tech firms by large tech firms is largely complementary and procompetitive. Large tech firms are good at process innovation, while smaller firms are good at product innovation, leading to specialization and the realization of efficiencies through acquisition.

Further, innovation in tech markets is driven by commercialization and exit strategy. Acquisition has become an important way for investors and startups to profit from their innovation. Vertical merger policy that is biased against vertical acquisition threatens this ecosystem and the draft guidelines should be updated to reflect this reality.

Full post.

Rybnicek on how the draft vertical merger guidelines might do more harm than good

Rybnicek notes the common calls to withdraw the 1984 Non-Horizontal Merger Guidelines, but is skeptical that replacing them will be beneficial. In particular, he believes there are major flaws in the draft guidelines that would lead to suboptimal merger policy at the Agencies.

One concern is that the draft guidelines could easily lead to the impression that vertical mergers are as likely to lead to harm as horizontal mergers. But that is false and easily refuted by economic evidence and logic. By focusing on vertical transactions more than the evidence suggests is necessary, the Agencies will waste resources and spend less time pursuing enforcement of actually anticompetitive transactions.

Rybnicek also notes that, in addition to being economically unsound, the 20 percent threshold “safe harbor” will likely create a problematic “sufficient condition” for enforcement.

Rybnicek believes that the draft guidelines minimize the significant role of EDM and efficiencies by pointing to the 2010 Horizontal Merger Guidelines for analytical guidance. In the horizontal context, efficiencies are exceedingly difficult to prove, and it is unwarranted to apply the same skeptical treatment of efficiencies in the vertical merger context.

Ultimately, Rybnicek concludes that the draft guidelines do little to advance an understanding of how the agencies will look at a vertical transaction, while also undermining the economics and theory that have guided antitrust law. 

Full post.

Lawrence J. White on the missing market definition standard in the draft vertical guidelines

White believes that there is a gaping absence in the draft guidelines insofar as they lack an adequate market definition paradigm. White notes that markets need to be defined in a way that permits a determination of market power (or not) post-merger, but the guidelines refrain from recommending a vertical-specific method for defining markets.

Instead, the draft guidelines point to the 2010 Horizontal Merger Guidelines for a market definition paradigm. Unfortunately, that paradigm is inapplicable in the vertical merger context. The way that markets are defined in the horizontal and vertical contexts is very different. There is a significant chance that an improperly drawn market definition based on the Horizontal Guidelines could understate the risk of harm from a given vertical merger.

Full post.

Manne & Stout 1 on the important differences between integration via contract and integration via merger

Manne & Stout believe that there is a great deal of ambiguity in the proposed guidelines that could lead either to uncertainty as to how the agencies will exercise their discretion, or, more troublingly, could lead courts to take seriously speculative theories of harm. 

Among these, Manne & Stout believe that the Agencies should specifically address the alleged equivalence of integration via contract and integration via merger. They need to either repudiate this theory, or else more fully explain the extremely complex considerations that factor into different integration decisions for different firms.

In particular, there is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. It would be a categorical mistake for the draft guidelines to permit an inference that simply because an integration could be achieved by contract, it follows that integration by merger deserves greater scrutiny per se.

A whole host of efficiency and non-efficiency related goals are involved in the choice of integration method. But adopting a presumption against integration via merger necessarily leads to (1) an erroneous assumption that efficiencies are functionally achievable in both situations and (2) a more concerning creation of discretion in the hands of enforcers to discount the non-efficiency reasons for integration.

Therefore, the agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.

Full post.

Manne & Stout 2 on the problematic implication of incorporating a contract/merger equivalency assumption into the draft guidelines

Manne & Stout begin by observing that, while the Agencies have the opportunity to enforce in either the case of merger or contract, defendants can frequently only realize efficiencies in the case of merger. Therefore, calling for a contract/merger equivalency amounts to a preference for more enforcement per se, and is less solicitous of concerns about the loss of procompetitive arrangements. Moreover, Manne & Stout point out that there is currently no empirical basis for weighting enforcement so heavily against vertical mergers.

Manne & Stout further observe that vertical merger enforcement is more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante because we lack fundamental knowledge about the effects of market structure and firm organization on innovation and dynamic competition. 

Instead, the draft guidelines should adopt Williamson’s view of economic organizations: eschew the formal orthodox neoclassical economic lens in favor of organizational theory that focuses on complex contracts (including vertical mergers). Without this view, “We are more likely to miss it when mergers solve market inefficiencies, and more likely to see it when they impose static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.”

Critically, Manne & Stout argue that the guidelines’ focus on market share thresholds leads to an overly narrow view of competition. Instead of looking at static market analyses, the Agencies should include a richer set of observations, including those that involve “organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets.”

Ultimately Manne & Stout suggest that the draft guidelines should be clarified to guide the Agencies and courts away from applying inflexible, formalistic logic that will lead to suboptimal enforcement.

Full post.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Steven J. Cernak (Partner, Bona Law; Adjunct Professor, University of Michigan Law School and Western Michigan University Thomas M. Cooley Law School; former antitrust counsel, GM).] 

[Cernak: This paper represents the current views of the author alone and not necessarily the views of any past, present, or future employer or client.]

What should we make of Cmr. Chopra’s and Cmr. Slaughter’s dissents?

When I first heard that the FTC and DOJ Antitrust Division issued the draft Vertical Merger Guidelines late on Friday January 10, I did not rush out and review them to form an opinion, antitrust geek though I am. The issuance was not a surprise, given that the 1984 Guidelines were more than 35 years old and described as outdated by all observers, including those at an FTC hearing more than a year earlier. So I was surprised when I saw that some pundits, especially on Twitter, immediately found the new draft controversial, and I learned that two of the FTC Commissioners had not supported the release. Surely nobody was a big 1984 supporter other than fans of Orwell, Bowie, and Morris, right?

Some of my confusion dissipated as I had a chance to read and analyze the draft guidelines and the accompanying statements of Commissioners Wilson, Slaughter, and Chopra. First, Commissioners Slaughter and Chopra only abstained from the decision to release the draft for public comment. In their statements, they explained that their abstentions were necessary to register their disagreement with the terms of this particular draft, but that they too joined the chorus calling for repudiation of the 1984 Guidelines.

But some of my confusion remained as I went over Commissioner Chopra’s statement again. Instead of objections to particular provisions of the draft guidelines, the statement is more of a litany of complaints on all that is wrong with today’s economy and antitrust policy’s role in it. Those complaints are ones we have heard from Commissioner Chopra before. They certainly should be part of the general policy debate; however, they seem to go well beyond competitive issues that might be raised by vertical mergers and that should be part of a set of guidelines. 

As the first sentence and footnote of the draft guidelines make clear, the draft guidelines are meant to “outline the principal analytical techniques, practices and enforcement policy of … the Agencies” and “reflect the ongoing accumulation of experience at the Agencies.” They are written to provide some guidance to potential merging parties and their advisers as to how the Agencies are likely to analyze a merger and, so, provide some greater level of certainty. That does not mean that the guidelines are meant to capture the techniques of the Agencies in amber forever – or even 35 years. As that same first footnote makes clear, the guidelines may be revised to “reflect significant changes in enforcement policy…or to reflect new learning.” But guidelines designed to provide some clarity on how vertical mergers have been and will be reviewed are not the forum for a broad exchange of views on antitrust policy. Those comments are more helpful in FTC hearings, speeches, or enforcement actions that the Commissioners might participate in, not guidelines for practitioners. 

Commissioner Slaughter’s statement, on the other hand, stays focused on vertical mergers and the issues that she has with these draft guidelines. She and other early commentators raise at least some questions about the current draft that I hope will be addressed in the final version. For instance, the 1984 version of the guidelines included as potential anticompetitive effects from vertical mergers 1) regulatory evasion and 2) the creation of the need for potential entrants to enter at multiple stages of the market. As Commissioner Slaughter points out, the current draft guidelines drop those two and instead focus on 1) foreclosure; 2) raising rivals’ costs; and 3) the exchange of competitively sensitive information. 

Should we take the absence of the two 1984 harms as an indication that those types of harms are no longer important to the Agencies? Or that they have not been important in recent Agency action, and so did not make this draft, but would still be considered if the correct facts were found? Some other option? While the new guidelines would become too long and unwieldy if they recited and rejected all potential theories of harm, I join Commissioner Slaughter in thinking it would be helpful to include an explanation regarding these particular changes from the prior guidance. 

Who bears the burden on elimination of double marginalization?

Finally, both Commissioner Wilson’s and Commissioner Slaughter’s statements specifically request public comments regarding certain features of the draft guidelines’ handling of the elimination of double marginalization (“EDM”). While they raise good questions, I want to focus on a more fundamental question raised by the draft guidelines and a recent speech by Assistant Attorney General Makan Delrahim. 

The draft guidelines provide a concise, cogent description of EDM, the usual analysis of it during vertical mergers, and some special factors that might make it less likely to occur. Some commentators have pointed out that EDM gets its own section of the draft guidelines, signaling its importance. Perhaps even more significant, that separate section is placed between the sections on unilateral and coordinated competitive effects. Does that placement signal that the analysis of EDM is part of the Agencies’ analysis of the overall predicted competitive effects of the merger? That hypothesis also is supported by this statement at the end of the EDM section: “The Agencies will not challenge a merger if the net effect of elimination of double marginalization means that the merger is unlikely to be anticompetitive in any relevant market.”

Because the Agencies would have the ultimate burden of showing in court that the effect of the proposed merger “may be substantially to lessen competition, or tend to create a monopoly,” it seems to follow that the Agencies would have the burden to factor EDM into the rest of their competitive analysis to show what the potential overall net effect of the merger would be. 

Unfortunately, earlier in the EDM section of the draft guidelines, the Agencies state that they “generally rely on the parties to identify and demonstrate whether and how the merger eliminates double marginalization.” (emphasis added) Does that statement merely mean that the parties must cooperate with the Agencies and provide relevant information, as required on all points under Hart-Scott-Rodino? Or is it an attempt to shift to the parties the ultimate burden of proving this part of the competitive analysis? That is, is it a signal that, despite the separate section placed in the middle of the discussion of competitive effects analysis, the Agencies are skeptical of EDM and plan to treat it more like a defense as they treat certain cognizable efficiencies? 

That latter position is supported by comments by AAG Delrahim in a recent speech: “as the law requires for the advancement of any affirmative defense, the burden is on the parties in a vertical merger to put forward evidence to support and quantify EDM as a defense.” So is EDM a defense to an otherwise anticompetitive vertical merger or just part of the overall analysis of competitive effects? Before getting to the pertinent but more detailed questions posed by Commissioners Wilson and Slaughter, these draft guidelines would further their goal of providing clarity by answering that more basic EDM question. 

Despite those concerns, the draft guidelines seem consistent with the antitrust community’s consensus today on the proper analysis of vertical mergers. As such, they would seem to be consistent with how the Agencies evaluate such mergers today and so provide helpful guidance to parties considering such a merger. I hope the final version considers all the comments and remains helpful – and is released on a Monday so we can all more easily and intelligently start commenting. 

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.]

This post is authored by Joshua D. Wright (University Professor of Law, George Mason University and former Commissioner, FTC); Douglas H. Ginsburg (Senior Circuit Judge, US Court of Appeals for the DC Circuit; Professor of Law, George Mason University; and former Assistant Attorney General, DOJ Antitrust Division); Tad Lipsky (Assistant Professor of Law, George Mason University; former Acting Director, FTC Bureau of Competition; former chief antitrust counsel, Coca-Cola; former Deputy Assistant Attorney General, DOJ Antitrust Division); and John M. Yun (Associate Professor of Law, George Mason University; former Acting Deputy Assistant Director, FTC Bureau of Economics).

After much anticipation, the Department of Justice Antitrust Division and the Federal Trade Commission released a draft of the Vertical Merger Guidelines (VMGs) on January 10, 2020. The Global Antitrust Institute (GAI) will be submitting formal comments to the agencies regarding the VMGs and this post summarizes our main points.

The Draft VMGs supersede the 1984 Merger Guidelines, which represent the last guidance from the agencies on the treatment of vertical mergers. The VMGs provide valuable guidance and greater clarity in terms of how the agencies will review vertical mergers going forward. While the proposed VMGs generally articulate an analytical framework based upon sound economic principles, there are several ways that the VMGs could more deeply integrate sound economics and our empirical understanding of the competitive consequences of vertical integration.

In this post, we discuss four issues: (1) incorporating the elimination of double marginalization (EDM) into the analysis of the likelihood of a unilateral price effect; (2) eliminating the role of market shares and structural analysis; (3) highlighting that the weight of empirical evidence supports the proposition that vertical mergers are less likely to generate competitive concerns than horizontal mergers; and (4) recognizing the importance of transaction cost-based efficiencies.

Elimination of double marginalization is a unilateral price effect

EDM is discussed separately from both unilateral price effects, in Section 5, and efficiencies, in Section 8, of the draft VMGs. This is notable because the structure of the VMGs obfuscates the relevant economics of internalizing pricing externalities and may encourage the misguided view that EDM is a special form of efficiency.

When separate upstream and downstream entities price their products, they do not fully take into account the impact of their pricing decision on each other — even though they are ultimately part of the same value chain for a given product. Vertical mergers eliminate a pricing externality since the post-merger upstream and downstream units are fully aligned in terms of their pricing incentives. In this sense, EDM is indistinguishable from the unilateral effects discussed in Section 5 of the VMGs that cause upward pricing pressure. Specifically, in the context of mergers, just as there is a greater incentive, under certain conditions, to foreclose or raise rivals’ costs (RRC) post-merger (although this does not mean there is an ability to engage in these behaviors), there is also an incentive to lower prices due to the elimination of a markup along the supply chain. Consequently, we cannot properly assess unilateral effects without accounting for the full set of incentives that could move prices in either direction.
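The textbook logic behind this pricing externality can be sketched numerically. The following is an illustrative example only (the parameter values are hypothetical, not drawn from the VMGs): with linear consumer demand P = a − bQ and upstream marginal cost c, successive monopolists each add a markup, while an integrated firm prices off true marginal cost.

```python
# Hypothetical sketch of double marginalization with linear demand
# P = a - b*Q and upstream marginal cost c. All numbers illustrative.

def separate_firms(a, b, c):
    """Successive monopolies: upstream sets wholesale price w,
    then the downstream firm marks up again."""
    w = (a + c) / 2            # upstream's profit-maximizing wholesale price
    q = (a - w) / (2 * b)      # downstream's profit-maximizing quantity
    p = a - b * q              # resulting retail price
    return p, q

def integrated_firm(a, b, c):
    """A vertically integrated monopolist faces only the true cost c."""
    q = (a - c) / (2 * b)
    p = a - b * q
    return p, q

a, b, c = 100.0, 1.0, 20.0
p_sep, q_sep = separate_firms(a, b, c)    # p = 80.0, q = 20.0
p_int, q_int = integrated_firm(a, b, c)   # p = 60.0, q = 40.0
# The merged firm charges a lower price and sells more output: a
# unilateral *downward* pricing incentive, the mirror image of the
# upward pressure that foreclosure or RRC incentives can create.
```

The point of the sketch is the post's argument in miniature: the price decrease from internalizing the double markup arises from the same post-merger incentive analysis as RRC, so it belongs in the unilateral-effects calculus, not in a separate efficiencies-style weighing.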

Further, it is improper to consider EDM in the context of a “net effect” given that this phrase has strong connotations with weighing efficiencies against findings of anticompetitive harm. Rather, “unilateral price effects” actually includes EDM — just as a finding that a merger will induce entry properly belongs in a unilateral effects analysis. For these reasons, we suggest incorporating the discussion of EDM into the discussion of unilateral effects contained in Section 5 of the VMGs and eliminating Section 6. Otherwise, by separating EDM into its own section, the agencies are creating a type of “limbo” between unilateral effects and efficiencies — which creates confusion, particularly for courts. It is also important to emphasize that the mere existence of alternative contracting mechanisms to mitigate double marginalization does not tell us about their relative efficacy compared to vertical integration as there are costs to contracting.

Role of market shares and structural analysis

In Section 3 (“Market Participants, Market Shares, and Market Concentration”), there are two notable statements. First,

[t]he Agencies…do not rely on changes in concentration as a screen for or indicator of competitive effects from vertical theories of harm.

This statement, without further explanation, is puzzling as there are no changes in concentration for vertical mergers. Second, the VMGs then go on to state that 

[t]he Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market.

The very next sentence reads:

In some circumstances, mergers with shares below the thresholds can give rise to competitive concerns.

From this, we conclude that the VMGs are adopting a prior belief that, if both the relevant product and the related product have a less than 20 percent share in the relevant market, the acquisition is either competitively neutral or benign. The VMGs make clear, however, they do not offer a safe harbor. With these statements, the agencies run the risk that the 20 percent figure will be interpreted as a trigger for competitive concern. There is no sound economic reason to believe 20 percent share in the relevant market or the related market is of any particular importance to predicting competitive effects. The VMGs should eliminate the discussion of market shares altogether. At a minimum, the final guidelines would benefit from some explanation for this threshold if it is retained.

Empirical evidence on the welfare impact of vertical mergers

In contrast to vertical mergers, horizontal mergers inherently involve a degree of competitive overlap and an associated loss of at least some degree of rivalry between actual and/or potential competitors. The price effect for vertical mergers, however, is generally theoretically ambiguous — even before accounting for efficiencies — due to EDM and the uncertainty regarding whether the integrated firm has an incentive to raise rivals’ costs or foreclose. Thus, for vertical mergers, empirically evaluating the welfare effects of consummated mergers has been and remains an important area of research to guide antitrust policy.

Consequently, what is noticeably absent from the draft guidelines is an empirical grounding. Consistent empirical findings should inform agency decision-making priors. With few exceptions, the literature does not support the view that these practices are used for anticompetitive reasons — see Lafontaine & Slade (2007) and Cooper et al. (2005). (For an update on the empirical literature from 2009 through 2018, which confirms the conclusions of the prior literature, see the GAI’s Comment on Vertical Mergers submitted during the recent FTC Hearings.) Thus, the modern antitrust approach to vertical mergers, as reflected in the antitrust literature, should reflect the empirical reality that vertical relationships are generally procompetitive or neutral.

The bottom line is that how often vertical mergers are anticompetitive should influence our framework and priors. Given the strong empirical evidence that vertical mergers do not tend to result in welfare losses for consumers, we believe the agencies should consider at least the modest statement that vertical mergers are more often than not procompetitive or, alternatively, that vertical mergers tend to be more procompetitive or neutral than horizontal ones. Thus, we believe the final VMGs would benefit from language similar to the 1984 VMGs: “Although nonhorizontal mergers are less likely than horizontal mergers to create competitive problems, they are not invariably innocuous.”

Transaction cost efficiencies and merger specificity

The VMGs address efficiencies in Section 8. Under the VMGs, the Agencies will evaluate efficiency claims by the parties using the approach set forth in Section 10 of the 2010 Horizontal Merger Guidelines. Thus, efficiencies must be both cognizable and merger specific to be considered by the agencies.

In general, the VMGs also adopt an approach that is consistent with the teachings of the robust literature on transaction cost economics, which recognizes the costs of using the price system to explain the boundaries of economic organizations, and the importance of incorporating such considerations into any antitrust analyses. In particular, this literature has demonstrated, both theoretically and empirically, that the decision to contract or vertically integrate is often driven by the relatively high costs of contracting as well as concerns regarding the enforcement of contracts and opportunistic behavior. This literature suggests that such transaction cost efficiencies in the vertical merger context often will be both cognizable and merger-specific and rejects an approach that would presume such efficiencies are not merger specific because they can be theoretically achieved via contract.

While we agree with the overall approach set out in the VMGs, we are concerned that the application of Section 8, in practice, without more specificity and guidance, will be carried out in a way that is inconsistent with the approach set out in Section 10 of the 2010 HMGs.

Conclusion

Overall, the agencies deserve credit for highlighting the relevant factors in assessing vertical mergers and for not attempting to be overly aggressive in advancing untested merger assessment tools or theories of harm.

The agencies should seriously consider, however, refinements in a number of critical areas:

  • First, discussion of EDM should be integrated into the larger unilateral effects analysis in Section 5 of the VMGs. 
  • Second, the agencies should eliminate the role of market shares and structural analysis in the VMGs. 
  • Third, the final VMGs should acknowledge that vertical mergers are less likely to generate competitive concerns than horizontal mergers. 
  • Finally, the final VMGs should recognize the importance of transaction cost-based efficiencies. 

We believe incorporating these changes will result in guidelines that are more in conformity with sound economics and the empirical evidence.

FTC v. Qualcomm

Last week the International Center for Law & Economics (ICLE) and twelve noted law and economics scholars filed an amicus brief in the Ninth Circuit in FTC v. Qualcomm, in support of appellant (Qualcomm) and urging reversal of the district court’s decision. The brief was authored by Geoffrey A. Manne, President & founder of ICLE, and Ben Sperry, Associate Director, Legal Research of ICLE. Jarod M. Bona and Aaron R. Gott of Bona Law PC collaborated in drafting the brief and they and their team provided invaluable pro bono legal assistance, for which we are enormously grateful. Signatories on the brief are listed at the end of this post.

We’ve written about the case several times on Truth on the Market, as have a number of guest bloggers, in our ongoing blog series on the case here.   

The ICLE amicus brief focuses on the ways that the district court exceeded the “error cost” guardrails erected by the Supreme Court to minimize the risk and cost of mistaken antitrust decisions, particularly those that wrongly condemn procompetitive behavior. As the brief notes at the outset:

The district court’s decision is disconnected from the underlying economics of the case. It improperly applied antitrust doctrine to the facts, and the result subverts the economic rationale guiding monopolization jurisprudence. The decision—if it stands—will undercut the competitive values antitrust law was designed to protect.  

The antitrust error cost framework was most famously elaborated by Frank Easterbrook in his seminal article, The Limits of Antitrust (1984). It has since been squarely adopted by the Supreme Court—most significantly in Brooke Group (1993), Trinko (2004), and linkLine (2009).  

In essence, the Court’s monopolization case law implements the error cost framework by (among other things) obliging courts to operate under certain decision rules that limit the use of inferences about the consequences of a defendant’s conduct except when the circumstances create what game theorists call a “separating equilibrium.” A separating equilibrium is a 

solution to a game in which players of different types adopt different strategies and thereby allow an uninformed player to draw inferences about an informed player’s type from that player’s actions.

Baird, Gertner & Picker, Game Theory and the Law

The key problem in antitrust is that while the consequence of complained-of conduct for competition (i.e., consumers) is often ambiguous, its deleterious effect on competitors is typically quite evident—whether it is actually anticompetitive or not. The question is whether (and when) it is appropriate to infer anticompetitive effect from discernible harm to competitors. 

Except in the narrowly circumscribed (by Trinko) instance of a unilateral refusal to deal, anticompetitive harm under the rule of reason must be proven. It may not be inferred from harm to competitors, because such an inference is too likely to be mistaken—and “mistaken inferences are especially costly, because they chill the very conduct the antitrust laws are designed to protect.” (Brooke Group, quoting yet another key Supreme Court antitrust error cost case, Matsushita (1986).) 

Yet, as the brief discusses, in finding Qualcomm liable the district court did not demand or find proof of harm to competition. Instead, the court’s opinion relies on impermissible inferences from ambiguous evidence to find that Qualcomm had (and violated) an antitrust duty to deal with rival chip makers and that its conduct resulted in anticompetitive foreclosure of competition. 

We urge you to read the brief (it’s pretty short—maybe the length of three blog posts) to get the whole argument. Below we draw attention to a few points we make in the brief that are especially significant. 

The district court bases its approach entirely on Microsoft — which it misinterprets in clear contravention of Supreme Court case law

The district court doesn’t stay within the strictures of the Supreme Court’s monopolization case law. In fact, although it obligingly recites some of the error cost language from Trinko, it quickly moves away from Supreme Court precedent and bases its approach entirely on its reading of the D.C. Circuit’s Microsoft (2001) decision. 

Unfortunately, the district court’s reading of Microsoft is mistaken and impermissible under Supreme Court precedent. Indeed, both the Supreme Court and the D.C. Circuit make clear that a finding of illegal monopolization may not rest on an inference of anticompetitive harm.

The district court cites Microsoft for the proposition that

Where a government agency seeks injunctive relief, the Court need only conclude that Qualcomm’s conduct made a “significant contribution” to Qualcomm’s maintenance of monopoly power. The plaintiff is not required to “present direct proof that a defendant’s continued monopoly power is precisely attributable to its anticompetitive conduct.”

It’s true Microsoft held that, in government actions seeking injunctions, “courts [may] infer ‘causation’ from the fact that a defendant has engaged in anticompetitive conduct that ‘reasonably appears capable of making a significant contribution to maintaining monopoly power.’” (Emphasis added). 

But Microsoft never suggested that anticompetitiveness itself may be inferred.

“Causation” and “anticompetitive effect” are not the same thing. Indeed, Microsoft addresses “anticompetitive conduct” and “causation” in separate sections of its decision. And whereas Microsoft allows that courts may infer “causation” in certain government actions, it makes no such allowance with respect to “anticompetitive effect.” In fact, it explicitly rules it out:

[T]he plaintiff… must demonstrate that the monopolist’s conduct indeed has the requisite anticompetitive effect…; no less in a case brought by the Government, it must demonstrate that the monopolist’s conduct harmed competition, not just a competitor.

The D.C. Circuit subsequently reinforced this clear conclusion of its Microsoft holding in Rambus: 

Deceptive conduct—like any other kind—must have an anticompetitive effect in order to form the basis of a monopolization claim…. In Microsoft… [t]he focus of our antitrust scrutiny was properly placed on the resulting harms to competition.

Finding causation entails connecting evidentiary dots, while finding anticompetitive effect requires an economic assessment. Without such analysis it’s impossible to distinguish procompetitive from anticompetitive conduct, and basing liability on such an inference effectively writes “anticompetitive” out of the law.

Thus, the district court is correct when it holds that it “need not conclude that Qualcomm’s conduct is the sole reason for its rivals’ exits or impaired status.” But it is simply wrong to hold—in the same sentence—that it can thus “conclude that Qualcomm’s practices harmed competition and consumers.” The former claim is consistent with Microsoft; the latter is emphatically not.

Under Trinko and Aspen Skiing the district court’s finding of an antitrust duty to deal is impermissible 

Because finding that a company operates under a duty to deal essentially permits a court to infer anticompetitive harm without proof, such a finding “comes dangerously close to being a form of ‘no-fault’ monopolization,” as Herbert Hovenkamp has written. It is also thus seriously disfavored by the Court’s error cost jurisprudence.

In Trinko the Supreme Court interprets its holding in Aspen Skiing to identify essentially a single scenario from which it may plausibly be inferred that a monopolist’s refusal to deal with rivals harms consumers: the existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for the monopolist.

In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.”

But it’s not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. In a word, what the Court requires is that the defendant exhibit behavior that, but-for the expectation of future, anticompetitive returns, is irrational.

It should be noted, as John Lopatka (here) and Alan Meese (here) (both of whom joined the amicus brief) have written, that even the Supreme Court’s approach is likely insufficient to permit a court to distinguish between procompetitive and anticompetitive conduct. 

But what is certain is that the district court’s approach in no way permits such an inference.

“Evasion of a competitive constraint” is not an antitrust-relevant refusal to deal

In order to infer anticompetitive effect, it’s not enough that a firm may have a “duty” to deal, as that term is colloquially used, based on some obligation other than an antitrust duty, because it can in no way be inferred from the evasion of that obligation that conduct is anticompetitive.

The district court bases its determination that Qualcomm’s conduct is anticompetitive on the fact that it enables the company to avoid patent exhaustion, FRAND commitments, and thus price competition in the chip market. But this conclusion is directly precluded by the Supreme Court’s holding in NYNEX. 

Indeed, in Rambus, the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.”

As Josh Wright has noted:

[T]he objection to the “evasion” of any constraint approach is… that it opens the door to enforcement actions applied to business conduct that is not likely to harm competition and might be welfare increasing.

Thus NYNEX and Rambus (and linkLine) reinforce the Court’s repeated holding that an inference of harm to competition is permissible only where conduct points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” does not suffice.

The district court’s elaborate theory of harm rests fundamentally on the claim that Qualcomm injures rivals—and the record is devoid of evidence demonstrating actual harm to competition. Instead, the court infers it from what it labels “unreasonably high” royalty rates, enabled by Qualcomm’s evasion of competition from rivals. In turn, the court finds that that evasion of competition can be the source of liability if what Qualcomm evaded was an antitrust duty to deal. And, in impermissibly circular fashion, the court finds that Qualcomm indeed evaded an antitrust duty to deal—because its conduct allowed it to sustain “unreasonably high” prices. 

The Court’s antitrust error cost jurisprudence—from Brooke Group to NYNEX to Trinko & linkLine—stands for the proposition that no such circular inferences are permitted.

The district court’s foreclosure analysis also improperly relies on inferences in lieu of economic evidence

Because the district court doesn’t perform a competitive effects analysis, it fails to demonstrate the requisite “substantial” foreclosure of competition required to sustain a claim of anticompetitive exclusion. Instead the court once again infers anticompetitive harm from harm to competitors. 

The district court makes no effort to establish the quantity of competition foreclosed as required by the Supreme Court. Nor does the court demonstrate that the alleged foreclosure harms competition, as opposed to just rivals. Foreclosure per se is not impermissible and may be perfectly consistent with procompetitive conduct.

Again citing Microsoft, the district court asserts that a quantitative finding is not required. Yet, as the court’s citation to Microsoft should have made clear, in its stead a court must find actual anticompetitive effect; it may not simply assert it. As Microsoft held: 

It is clear that in all cases the plaintiff must… prove the degree of foreclosure. This is a prudential requirement; exclusivity provisions in contracts may serve many useful purposes. 

The court essentially infers substantiality from the fact that Qualcomm entered into exclusive deals with Apple (actually, volume discounts), from which the court concludes that Qualcomm foreclosed rivals’ access to a key customer. But its inference that this led to substantial foreclosure is based on internal business statements—so-called “hot docs”—characterizing the importance of Apple as a customer. Yet, as Geoffrey Manne and Marc Williamson explain, such documentary evidence is unreliable as a guide to economic significance or legal effect: 

Business people will often characterize information from a business perspective, and these characterizations may seem to have economic implications. However, business actors are subject to numerous forces that influence the rhetoric they use and the conclusions they draw….

There are perfectly good reasons to expect to see “bad” documents in business settings when there is no antitrust violation lurking behind them.

Assuming such language has the requisite economic or legal significance is unsupportable—especially when, as here, the requisite standard demands a particular quantitative significance.

Moreover, the court’s “surcharge” theory of exclusionary harm rests on assumptions regarding the mechanism by which the alleged surcharge excludes rivals and harms consumers. But the court incorrectly asserts that only one mechanism operates—and it makes no effort to quantify it. 

The court cites “basic economics” via Mankiw’s Principles of Microeconomics text for its conclusion:

The surcharge affects demand for rivals’ chips because as a matter of basic economics, regardless of whether a surcharge is imposed on OEMs or directly on Qualcomm’s rivals, “the price paid by buyers rises, and the price received by sellers falls.” Thus, the surcharge “places a wedge between the price that buyers pay and the price that sellers receive,” and demand for such transactions decreases. Rivals see lower sales volumes and lower margins, and consumers see less advanced features as competition decreases.

But even assuming the court is correct that Qualcomm’s conduct entails such a surcharge, basic economics does not hold that decreased demand for rivals’ chips is the only possible outcome. 

In actuality, an increase in the cost of an input for OEMs can have three possible effects:

  1. OEMs can pass all or some of the cost increase on to consumers in the form of higher phone prices. Assuming some elasticity of demand, this would mean fewer phone sales and thus less demand by OEMs for chips, as the court asserts. But the extent of that effect would depend on consumers’ demand elasticity and the magnitude of the cost increase as a percentage of the phone price. If demand is highly inelastic at this price (i.e., relatively insensitive to the relevant price change), it may have a tiny effect on the number of phones sold and thus the number of chips purchased—approaching zero as price insensitivity increases.
  2. OEMs can absorb the cost increase and realize lower profits but continue to sell the same number of phones and purchase the same number of chips. This would not directly affect demand for chips or their prices.
  3. OEMs can respond to a price increase by purchasing fewer chips from rivals and more chips from Qualcomm. While this would affect rivals’ chip sales, it would not necessarily affect consumer prices, the total number of phones sold, or OEMs’ margins—that result would depend on whether Qualcomm’s chips cost more or less than its rivals’. If the latter, it would even increase OEMs’ margins and/or lower consumer prices and increase output.

Alternatively, of course, the effect could be some combination of these.
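The first of those outcomes can be made concrete with a small numerical sketch. This is purely illustrative—the baseline quantities, phone price, surcharge, and elasticity values below are hypothetical and not drawn from the record—but it shows why, as the brief argues, the size of any lost chip sales turns on consumer demand elasticity and on the surcharge's size relative to the phone price.

```python
# Hypothetical sketch of outcome (1): a per-chip surcharge t is passed
# through to phone prices, and chip demand falls with phone demand
# (assuming one chip per phone). All numbers are illustrative.

def chip_sales_after_surcharge(q0, p_phone, t, pass_through, elasticity):
    """q0: baseline chips sold; elasticity: (positive) own-price
    elasticity of consumer phone demand; pass_through in [0, 1]."""
    dp = pass_through * t                      # rise in the phone price
    pct_price_rise = dp / p_phone
    pct_quantity_drop = elasticity * pct_price_rise
    return q0 * (1 - pct_quantity_drop)

q0, p_phone, t = 1_000_000, 800.0, 10.0        # hypothetical baseline

# Highly inelastic demand: the surcharge barely dents chip sales.
inelastic = chip_sales_after_surcharge(q0, p_phone, t,
                                       pass_through=1.0, elasticity=0.1)
# More elastic demand: a noticeably larger drop.
elastic = chip_sales_after_surcharge(q0, p_phone, t,
                                     pass_through=1.0, elasticity=2.0)
# inelastic ≈ 998,750 chips sold; elastic ≈ 975,000 chips sold
```

Even under full pass-through, a $10 surcharge on an $800 phone moves the price only 1.25 percent, so with inelastic demand the foreclosure effect approaches zero—which is why asserting a single mechanism without quantifying it cannot establish substantial foreclosure.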

Whether any of these outcomes would substantially exclude rivals is inherently uncertain to begin with. But demonstrating a reduction in rivals’ chip sales is a necessary but not sufficient condition for proving anticompetitive foreclosure. The FTC didn’t even demonstrate that rivals were substantially harmed, let alone that there was any effect on consumers—nor did the district court make such findings. 

Doing so would entail consideration of whether decreased demand for rivals’ chips flows from reduced consumer demand or OEMs’ switching to Qualcomm for supply, how consumer demand elasticity affects rivals’ chip sales, and whether Qualcomm’s chips were actually less or more expensive than rivals’. Yet the court determined none of these. 

Conclusion

Contrary to established Supreme Court precedent, the district court’s decision relies on mere inferences to establish anticompetitive effect. The decision, if it stands, would render a wide range of potentially procompetitive conduct presumptively illegal and thus harm consumer welfare. It should be reversed by the Ninth Circuit.

Joining ICLE on the brief are:

  • Donald J. Boudreaux, Professor of Economics, George Mason University
  • Kenneth G. Elzinga, Robert C. Taylor Professor of Economics, University of Virginia
  • Janice Hauge, Professor of Economics, University of North Texas
  • Justin (Gus) Hurwitz, Associate Professor of Law, University of Nebraska College of Law; Director of Law & Economics Programs, ICLE
  • Thomas A. Lambert, Wall Chair in Corporate Law and Governance, University of Missouri Law School
  • John E. Lopatka, A. Robert Noll Distinguished Professor of Law, Penn State University Law School
  • Daniel Lyons, Professor of Law, Boston College Law School
  • Geoffrey A. Manne, President and Founder, International Center for Law & Economics; Distinguished Fellow, Northwestern University Center on Law, Business & Economics
  • Alan J. Meese, Ball Professor of Law, William & Mary Law School
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics Emeritus, Emory University
  • Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics, Chapman University School of Business; Nobel Laureate in Economics, 2002
  • Michael Sykuta, Associate Professor of Economics, University of Missouri


Today, for the first time in its 100-year history, the FTC issued enforcement guidelines for cases brought by the agency under the Unfair Methods of Competition (“UMC”) provisions of Section 5 of the FTC Act.

The Statement of Enforcement Principles represents a significant victory for Commissioner Joshua Wright, who has been a tireless advocate for defining and limiting the scope of the Commission’s UMC authority since before his appointment to the FTC in 2013.

As we’ve noted many times before here at TOTM (including in our UMC Guidelines Blog Symposium), FTC enforcement principles for UMC actions have been in desperate need of clarification. Without any UMC standards, the FTC has been free to leverage its costly adjudication process into settlements (or short-term victories) and businesses have been left in the dark as to what sorts of conduct might trigger enforcement. Through a series of unadjudicated settlements, UMC unfairness doctrine (such as it is) has remained largely within the province of FTC discretion and without judicial oversight. As a result, and either by design or by accident, UMC never developed a body of law encompassing well-defined goals or principles like antitrust’s consumer welfare standard.

Commissioner Wright has long been at the forefront of the battle to rein in the FTC’s discretion in this area and to promote the rule of law. Soon after joining the Commission, he called for Section 5 guidelines that would constrain UMC enforcement to further consumer welfare, tied to the economically informed analysis of competitive effects developed in antitrust law.

Today’s UMC Statement embodies the essential elements of Commissioner Wright’s proposal. Under the new guidelines:

  1. The Commission will make UMC enforcement decisions based on traditional antitrust principles, including the consumer welfare standard;
  2. Only conduct that would violate the antitrust rule of reason will give rise to enforcement, and the Commission will not bring UMC cases without evidence demonstrating that harm to competition outweighs any efficiency or business justifications for the conduct at issue; and
  3. The Commission commits to the principle that it is more appropriate to bring cases under the antitrust laws than under Section 5 when the conduct at issue could give rise to a cause of action under the antitrust laws. Notably, this doesn’t mean that the agency gets to use UMC when it thinks it might lose under the Sherman or Clayton Acts; rather, it means UMC is meant only to be a gap-filler, to be used when the antitrust statutes don’t apply at all.

Yes, the Statement is a compromise. For instance, there is no safe harbor from UMC enforcement if any cognizable efficiencies are demonstrated, as Commissioner Wright initially proposed.

But by enshrining antitrust law’s consumer welfare standard in future UMC caselaw, by obligating the Commission to assess conduct within the framework of the well-established antitrust rule of reason, and by prioritizing antitrust over UMC when both might apply, the Statement brings UMC law into the world of modern antitrust analysis. This is a huge achievement.

It’s also a huge achievement that a Statement like this one would be introduced by Chairwoman Ramirez. As recently as last year, Ramirez had resisted efforts to impose constraints on the FTC’s UMC enforcement discretion. In a 2014 speech Ramirez said:

I have expressed concern about recent proposals to formulate guidance to try to codify our unfair methods principles for the first time in the Commission’s 100 year history. While I don’t object to guidance in theory, I am less interested in prescribing our future enforcement actions than in describing our broad enforcement principles revealed in our recent precedent.

The “recent precedent” that Ramirez referred to is precisely the set of cases applying UMC to reach antitrust-relevant conduct that led to Commissioner Wright’s efforts. The common law of consent decrees that makes up the precedent Ramirez refers to, of course, is not legally binding and provides little more than regurgitated causes of action.

But today, under Congressional pressure and pressure from within the agency led by Commissioner Wright, Chairwoman Ramirez and the other two Democratic commissioners voted for the Statement.

Competitive Effects Analysis Under the Statement

As Commissioner Ohlhausen argues in her dissenting statement, the UMC Statement doesn’t remove all enforcement discretion from the Commission — after all, enforcement principles, like standards in law generally, have fuzzy boundaries.

But what Commissioner Ohlhausen seems to miss is that, by invoking antitrust principles, the rule of reason and competitive effects analysis, the Statement incorporates by reference 125 years of antitrust law and economics. The Statement itself need not go into excessive detail when, with only a few words, it brings modern antitrust jurisprudence embodied in cases like Trinko, Leegin, and Brooke Group into UMC law.

Under the new rule of reason approach for UMC, the FTC will condemn conduct only when it causes or is likely to cause “harm to competition or the competitive process, taking into account any associated cognizable efficiencies and business justifications.” In other words, the evidence must demonstrate net harm to consumers before the FTC can take action. That’s a significant constraint.

As noted above, Commissioner Wright originally proposed a safe harbor from FTC UMC enforcement whenever cognizable efficiencies are present. The Statement’s balancing test is thus a compromise. But it’s not really a big move from Commissioner Wright’s initial position.

Commissioner Wright’s original proposal tied the safe harbor to “cognizable” efficiencies, which is an exacting standard. As Commissioner Wright noted in his Blog Symposium post on the subject:

[T]he efficiencies screen I offer intentionally leverages the Commission’s considerable expertise in identifying the presence of cognizable efficiencies in the merger context and explicitly ties the analysis to the well-developed framework offered in the Horizontal Merger Guidelines. As any antitrust practitioner can attest, the Commission does not credit “cognizable efficiencies” lightly and requires a rigorous showing that the claimed efficiencies are merger-specific, verifiable, and not derived from an anticompetitive reduction in output or service. Fears that the efficiencies screen in the Section 5 context would immunize patently anticompetitive conduct because a firm nakedly asserts cost savings arising from the conduct without evidence supporting its claim are unwarranted. Under this strict standard, the FTC would almost certainly have no trouble demonstrating no cognizable efficiencies exist in Dan’s “blowing up of the competitor’s factory” example because the very act of sabotage amounts to an anticompetitive reduction in output.

The difference between the safe harbor approach and the balancing approach embodied in the Statement is largely a function of administrative economy. Under the original proposal, the FTC would have erred on the side of false negatives, possibly forbearing from bringing some number of welfare-enhancing cases in exchange for a more certain reduction in false positives. Under the Statement’s balancing test, there is a greater chance of false positives.

But the real effect is that more cases will be litigated because, in the end, both versions would require some degree of antitrust-like competitive effects analysis. Under the Statement, if procompetitive efficiencies outweigh anticompetitive harms, the defendant still wins (and the FTC is to avoid enforcement). Under the original proposal fewer actions might be brought, but those that are brought would surely settle. So one likely outcome of choosing a balancing test over the safe harbor is that more close cases will go to court to be sorted out. Whether this is a net improvement over the safe harbor depends on whether the social costs of increased litigation and error are offset by a reduction in false negatives — as well as the more robust development of the public good of legal case law.  

Reduced FTC Discretion Under the Statement

The other important benefit of the Statement is that it commits the FTC to a regime that reduces its discretion.

Chairwoman Ramirez and former Chairman Leibowitz — among others — have embraced a broader role for Section 5, particularly in order to avoid the judicial limits on antitrust actions arising out of recent Supreme Court cases like Trinko, Leegin, Brooke Group, Linkline, Weyerhaeuser and Credit Suisse.

For instance, as former Chairman Leibowitz said in 2008:

[T]he Commission should not be tied to the more technical definitions of consumer harm that limit applications of the Sherman Act when we are looking at pure Section 5 violations.

And this was no idle threat. Recent FTC cases, including Intel, N-Data, Google (Motorola), and Bosch, could all have been brought under the Sherman Act, but were brought — and settled — as Section 5 cases instead. Under the new Statement, all four would likely be Sherman Act cases.

There’s little doubt that, left unfettered, Section 5 UMC actions would only have grown in scope. Former Chairman Leibowitz, in his concurring opinion in Rambus, described UMC as

a flexible and powerful Congressional mandate to protect competition from unreasonable restraints, whether long-since recognized or newly discovered, that violate the antitrust laws, constitute incipient violations of those laws, or contravene those laws’ fundamental policies.

Both Leibowitz and former Commissioner Tom Rosch (again, among others) often repeated their views that Section 5 permitted much the same actions as were available under Section 2 — but without the annoyance of those pesky, economically sensible, judicial limitations. (Although, in fairness, Leibowitz also once commented that it would not “be wise to use the broader [Section 5] authority whenever we think we can’t win an antitrust case, as a sort of ‘fallback.’”)

In fact, there is a long and unfortunate trend of FTC commissioners and other officials asserting some sort of “public enforcement exception” to the judicial limits on Sherman Act cases. As then Deputy Director for Antitrust in the Bureau of Economics, Howard Shelanski, told Congress in 2010:

The Commission believes that its authority to prevent “unfair methods of competition” through Section 5 of the Federal Trade Commission Act enables the agency to pursue conduct that it cannot reach under the Sherman Act, and thus avoid the potential strictures of Trinko.

In this instance, and from the context (followed as it is by a request for Congress to actually exempt the agency from Trinko and Credit Suisse!), it seems that “reach” means “win.”

Still others have gone even further. Tom Rosch, for example, has suggested that the FTC should challenge Patent Assertion Entities under Section 5 merely because “we have a gut feeling” that the conduct violates the Act and it may not be actionable under Section 2.

Even more egregious, Steve Salop and Jon Baker advocate using Section 5 to implement their preferred social policies — in this case to reduce income inequality. Such expansionist views, as Joe Sims recently reminded TOTM readers, hearken back to the troubled FTC of the 1970s:  

Remember [former FTC Chairman] Mike Pertschuck saying that Section 5 could possibly be used to enforce compliance with desirable energy policies or environmental requirements, or to attack actions that, in the opinion of the FTC majority, impeded desirable employment programs or were inconsistent with the nation’s “democratic, political and social ideals.” The two speeches he delivered on this subject in 1977 were the beginning of the end for increased Section 5 enforcement in that era, since virtually everyone who heard or read them said:  “Whoa! Is this really what we want the FTC to be doing?”

Apparently, for some, it is — even today. But don’t forget: This was the era in which Congress actually briefly shuttered the FTC for refusing to recognize limits on its discretion, as Howard Beales reminds us:

The breadth, overreaching, and lack of focus in the FTC’s ambitious rulemaking agenda outraged many in business, Congress, and the media. Even the Washington Post editorialized that the FTC had become the “National Nanny.” Most significantly, these concerns reverberated in Congress. At one point, Congress refused to provide the necessary funding, and simply shut down the FTC for several days…. So great were the concerns that Congress did not reauthorize the FTC for fourteen years. Thus chastened, the Commission abandoned most of its rulemaking initiatives, and began to re-examine unfairness to develop a focused, injury-based test to evaluate practices that were allegedly unfair.

A truly significant effect of the Policy Statement will be to neutralize the effort to use UMC to make an end-run around antitrust jurisprudence in order to pursue non-economic goals. It will now be a necessary condition of a UMC enforcement action to prove a contravention of fundamental antitrust policies (i.e., consumer welfare), rather than whatever three commissioners happen to agree is a desirable goal. And the Statement puts the brakes on efforts to pursue antitrust cases under Section 5 by expressing a clear policy preference at the FTC to bring such cases under the antitrust laws.

Commissioner Ohlhausen objects that

the fact that this policy statement requires some harm to competition does little to constrain the Commission, as every Section 5 theory pursued in the last 45 years, no matter how controversial or convoluted, can be and has been couched in terms of protecting competition and/or consumers.

That may be true, but the same could be said of every Section 2 case, as well. Commissioner Ohlhausen seems to be dismissing the fact that the Statement effectively incorporates by reference the last 45 years of antitrust law, too. Nothing will incentivize enforcement targets to challenge the FTC in court — or incentivize the FTC itself to forbear from enforcement — like the ability to argue Trinko, Leegin and their ilk. Antitrust law isn’t perfect, of course, but making UMC law coextensive with modern antitrust law is about as much as we could ever reasonably hope for. And the Statement basically just gave UMC defendants blanket license to add a string of “See Areeda & Hovenkamp” cites to every case the FTC brings. We should count that as a huge win.

Commissioner Ohlhausen also laments the brevity and purported vagueness of the Statement, claiming that

No interpretation of the policy statement by a single Commissioner, no matter how thoughtful, will bind this or any future Commission to greater limits on Section 5 UMC enforcement than what is in this exceedingly brief, highly general statement.

But, in the end, it isn’t necessarily the Commissioners’ self-restraint upon which the Statement relies; it’s the courts’ (and defendants’) ability to take the obvious implications of the Statement seriously and read current antitrust precedent into future UMC cases. If every future UMC case is adjudicated like a Sherman or Clayton Act case, the Statement will have been a resounding success.

Arguably no FTC commissioner has been as successful in influencing FTC policy as a minority commissioner — over sustained opposition, and in a way that constrains the agency so significantly — as has Commissioner Wright today.

The CPI Antitrust Chronicle published Geoffrey Manne’s and my recent paper, The Problems and Perils of Bootstrapping Privacy and Data into an Antitrust Framework, as part of a symposium on Big Data in the May 2015 issue. All of the papers are worth reading and pondering, but of course ours is the best ;).

In it, we analyze two of the most prominent theories of antitrust harm arising from data collection: privacy as a factor of non-price competition, and price discrimination facilitated by data collection. We also analyze whether data is serving as a barrier to entry and effectively preventing competition. We argue that, in the current marketplace, there are no plausible harms to competition arising from either non-price effects or price discrimination due to data collection online and that there is no data barrier to entry preventing effective competition.

The issues of how to regulate privacy and what role competition authorities should play in doing so are only likely to increase in importance as the Internet marketplace continues to grow and evolve. The European Commission and the FTC have been called on by scholars and advocates to take greater consideration of privacy concerns during merger review, and have even been encouraged to bring monopolization claims based upon data dominance. These calls should be rejected unless these theories can satisfy the rigorous economic review of antitrust law. In our humble opinion, they cannot do so at this time.

Excerpts:

PRIVACY AS AN ELEMENT OF NON-PRICE COMPETITION

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application.

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist.

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.
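To make the disentangling problem concrete, here is a minimal sketch of a quality-adjusted price comparison; the prices, quality indices, and consumer weights below are all invented for illustration:

```python
# Quality-adjusted price: a common shorthand is price divided by a quality index.
# All numbers below are invented to illustrate the disentangling problem.
def quality_adjusted_price(price, quality_index):
    return price / quality_index

# Period 1: the watch sells for $100, with both quality dimensions indexed at 1.0.
p1 = quality_adjusted_price(100, (1.0 + 1.0) / 2)   # 100.0

# Period 2: price falls to $90; a smaller battery improves aesthetics (1.2)
# but reduces reliability (0.8).
# A consumer weighting the two dimensions equally sees a genuine price cut:
p2_equal_weights = quality_adjusted_price(90, 0.5 * 1.2 + 0.5 * 0.8)   # 90.0

# A consumer who cares mostly about reliability sees the cut roughly offset:
p2_reliability = quality_adjusted_price(90, 0.25 * 1.2 + 0.75 * 0.8)

# Whether quality-adjusted price rose or fell thus depends on how consumers
# weight the dimensions -- exactly the magnitude comparison the text calls
# imprecise at best.
```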

PRICE DISCRIMINATION AS A PRIVACY HARM

If non-price effects cannot be relied upon to establish competitive injury (as explained above), then what can be the basis for incorporating privacy concerns into antitrust? One argument is that major data collectors (e.g., Google and Facebook) facilitate price discrimination.

The argument can be summed up as follows: Price discrimination could be a harm to consumers that antitrust law takes into consideration. Because companies like Google and Facebook are able to collect a great deal of data about their users for analysis, businesses could segment groups based on certain characteristics and offer them different deals. The resulting price discrimination could lead to many consumers paying more than they would in the absence of the data collection. Therefore, the data collection by these major online companies facilitates price discrimination that harms consumer welfare.

This argument misses a large part of the story, however. The flip side is that price discrimination could have benefits to those who receive lower prices from the scheme than they would have in the absence of the data collection, a possibility explored by the recent White House Report on Big Data and Differential Pricing.

While privacy advocates have focused on the possible negative effects of price discrimination on one subset of consumers, they generally ignore the positive effects of businesses being able to expand output by serving previously underserved consumers. It is inconsistent with basic economic logic to suggest that a business relying on these metrics would charge lower prices to those most able to pay while charging higher prices to those least able to afford them. If anything, price discrimination would likely promote more egalitarian outcomes by allowing companies to offer lower prices to poorer segments of the population—segments that can be identified by data collection and analysis.

If this group favored by “personalized pricing” is as big as—or bigger than—the group that pays higher prices, then it is difficult to state that the practice leads to a reduction in consumer welfare, even if this can be divorced from total welfare. Again, the question becomes one of magnitudes that has yet to be considered in detail by privacy advocates.
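A stylized two-segment example shows why the magnitudes matter; the demand figures are pure assumptions (with zero marginal cost for simplicity), not data about any actual market:

```python
# Two consumer segments with hypothetical willingness to pay (WTP);
# marginal cost is zero for simplicity. All figures are assumptions.
segments = [
    {"n": 100, "wtp": 10},  # 100 consumers willing to pay up to $10
    {"n": 100, "wtp": 4},   # 100 previously underserved consumers, WTP $4
]

def profit_at(price):
    """Seller's profit at a single uniform price."""
    buyers = sum(seg["n"] for seg in segments if seg["wtp"] >= price)
    return price * buyers

# Uniform pricing: the profit-maximizing single price excludes the low-WTP group
# ($10 earns 1000 from 100 buyers; $4 earns only 800 from 200 buyers).
best_uniform = max(range(1, 11), key=profit_at)
uniform_output = sum(seg["n"] for seg in segments if seg["wtp"] >= best_uniform)

# "Personalized" pricing: charge each segment its WTP. Output doubles, and the
# low-WTP segment -- entirely excluded under uniform pricing -- is now served.
discrim_output = sum(seg["n"] for seg in segments)
```

In this sketch the group favored by personalized pricing is exactly as large as the group paying more, so condemning the practice as a net consumer harm requires the magnitude comparison the text describes.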

DATA BARRIER TO ENTRY

Either of these theories of harm is predicated on the inability or difficulty of competitors to develop alternative products in the marketplace—the so-called “data barrier to entry.” The argument is that upstarts do not have sufficient data to compete with established players like Google and Facebook, which in turn employ their data to both attract online advertisers as well as foreclose their competitors from this crucial source of revenue. There are at least four reasons to be dubious of such arguments:

  1. Data is useful to all industries, not just online companies;
  2. It’s not the amount of data, but how you use it;
  3. Competition online is one click or swipe away; and
  4. Access to data is not exclusive.

CONCLUSION

Privacy advocates have thus far failed to make their case. Even in their most plausible forms, the arguments for incorporating privacy and data concerns into antitrust analysis do not survive legal and economic scrutiny. In the absence of strong arguments suggesting likely anticompetitive effects, and in the face of enormous analytical problems (and thus a high risk of error cost), privacy should remain a matter of consumer protection, not of antitrust.

Louis Kaplow’s Why (Ever) Define Markets? in the Harvard Law Review was one of the most provocative papers in the antitrust literature over the past few years.  We’ve discussed it here.  I wrote:

Kaplow provocatively argues that the entire “market definition/ market share” paradigm of antitrust is misguided and beyond repair.  Kaplow describes the exclusive role of market definition in that paradigm as generating inferences about market power, argues that market definition is incapable of generating reasonable inferences for that purpose as a matter of basic economic principles primarily because one must have a “best estimate” of market power previous to market definition, and concludes that antitrust ought to do away with market definition entirely.  As my description of the paper suggests, and Kaplow recognizes, it is certainly an “immodest” claim.  But it is a paper that has evoked much discussion in antitrust circles, especially in light of the recent shift in the 2010 HMGs toward analysis of competitive effects and away from market definition.

Many economists were inclined to agree with the basic conceptual shift toward direct analysis of competitive effects.  Much of that agreement was had on the basis that the market definition exercise aimed to do a number of things directed toward identifying the potential competitive effects of a merger (identifying market power is certainly one of those things), and that if we had tools allowing for direct inferences we ought to use those instead.  Kaplow’s attack on market definition, however, was by far the most aggressive critique.

Kaplow’s analysis prompted responses from antitrust scholars, including most notably Greg Werden (DOJ).  I discuss Werden’s critique here.

In my view, the debate over the proper scope and function of market definition in antitrust – and in particular the proper relationship between the market definition inquiry and competitive effects analysis – is ongoing.  Thus, it was interesting for me to see Richard Markovits’ (Texas) latest entry on SSRN (HT: Danny Sokol), which appears to attempt to shift the debate from whether market definition should be killed to whom credit for its execution should be attributed.  Markovits’ piece, Why One Should Never Define Markets or Use Market-Oriented Approaches to Analyze the Legality of Business Conduct Under U.S. Antitrust Law, argues – well – what the title says.   And in particular, he defends his earlier work against Kaplow’s dismissal of it in a footnote – claiming it is his own analysis, not Kaplow’s, that should be credited with the rejection of market definition based approaches (in fact, Markovits’ claim is much broader).  From the abstract:

In 2010, Professor Louis Kaplow published an article Why Ever Define Markets? that argues for the proposition that one should never define markets for the purpose of measuring a firm’s economic power, which is a corollary of the conclusion that I established in 1978. Kaplow’s article includes a lengthy footnote that — after stating that my 1978 article constitutes a “particularly harsh attack on market definition” — denigrates it on a number of accounts. The article I am posting (1) delineates slightly-improved versions of my 1978 arguments against the use of market-oriented approaches to analyzing the legality of business practices under U.S. antitrust law, (2) explains why those arguments and the “idiosyncratic” (Kaplow’s accurate if pejorative characterization) conceptual systems and competition theories they employ imply that Kaplow’s more limited conclusion is correct, (3) delineates and criticizes Kaplow’s “arguments” for his conclusion (the most relevant of which is a correct assertion of a proposition that is an analog to the conclusion of my second argument for the claim my 1978 article establishes — an assertion he does not and cannot justify because he does not develop and use any counterpart to my idiosyncratic conceptual systems and theories, which play a critical role in the justificatory argument), (4) demonstrates that all of Kaplow’s criticisms of my 1978 article are either incorrect or unjustified, and (5) asserts that at least some of the errors Kaplow makes when criticizing my article are important because they are made by others as well and militate against the correct analysis of the legality of various types of business conduct under U.S. antitrust law.

It is an interesting debate.  And I certainly do not fault Professor Markovits for defending his claims against Kaplow’s dismissal.  The piece is very long and dense, and frankly, was a difficult read (at least for me).  But it is a provocative read.  However, my reaction to reading it was that I couldn’t escape thinking about one problem with arguments largely about the intellectual credit for eliminating market definition: market definition isn’t even close to dead.  Perhaps it will be in 20 years.  But it isn’t now, and it’s entirely unclear that antitrust jurisprudence in the courts is even moving that way – the agencies may be a different story.  Further, there isn’t much evidence that the move within the 2010 Guidelines to reduce the importance of – but not eliminate the need for – market definition was part of a broader movement toward rejecting what Markovits describes as “market-oriented approaches” to antitrust analysis.  In any event, perhaps we will eventually be citing Markovits-Kaplow, or will it be Kaplow-Markovits, for the death of market definition.  But for now, market definition appears to be alive and kicking.

As everyone knows by now, AT&T’s proposed merger with T-Mobile has hit a bureaucratic snag at the FCC.  The remarkable decision to refer the merger to the Commission’s Administrative Law Judge (in an effort to derail the deal) and the public release of the FCC staff’s internal, draft report are problematic and poorly considered.  But far worse is the content of the report on which the decision to attempt to kill the deal was based.

With this report the FCC staff joins the exalted company of AT&T’s complaining competitors (surely the least reliable judges of the desirability of the proposed merger if ever there were any) and the antitrust policy scolds and consumer “advocates” who, quite literally, have never met a merger of which they approved.

In this post I’m going to hit a few of the most glaring problems in the staff’s report, and I hope to return again soon with further analysis.

As it happens, AT&T’s own response to the report is actually very good and it effectively highlights many of the key problems with the staff’s report.  While it might make sense to take AT&T’s own reply with a grain of salt, in this case the reply is, if anything, too tame.  No doubt the company wants to keep in the Commission’s good graces (it is the very definition of a repeat player at the agency, after all).  But I am not so constrained.  Using the company’s reply as a jumping off point, let me discuss a few of the problems with the staff report.

First, as the blog post (written by Jim Cicconi, Senior Vice President of External & Legislative Affairs) notes,

We expected that the AT&T-T-Mobile transaction would receive careful, considered, and fair analysis.   Unfortunately, the preliminary FCC Staff Analysis offers none of that.  The document is so obviously one-sided that any fair-minded person reading it is left with the clear impression that it is an advocacy piece, and not a considered analysis.

In our view, the report raises questions as to whether its authors were predisposed.  The report cherry-picks facts to support its views, and ignores facts that don’t.  Where facts were lacking, the report speculates, with no basis, and then treats its own speculations as if they were fact.  This is clearly not the fair and objective analysis to which any party is entitled, and which we have every right to expect.

OK, maybe they aren’t pulling punches.  The fact that this reply was written with such scathing language despite AT&T’s expectation to have to go right back to the FCC to get approval for this deal in some form or another itself speaks volumes about the undeniable shoddiness of the report.

Cicconi goes on to detail five areas where AT&T thinks the report went seriously awry:  “Expanding LTE to 97% of the U.S. Population,” “Job Gains Versus Losses,” “Deutsche Telekom, T-Mobile’s Parent, Has Serious Investment Constraints,” “Spectrum” and “Competition.”  I have dealt with a few of these issues at some length elsewhere, including most notably here (noting how the FCC’s own wireless competition report “supports what everyone already knows: falling prices, improved quality, dynamic competition and unflagging innovation have led to a golden age of mobile services”), and here (“It is troubling that critics–particularly those with little if any business experience–are so certain that even with no obvious source of additional spectrum suitable for LTE coming from the government any time soon, and even with exponential growth in broadband (including mobile) data use, AT&T’s current spectrum holdings are sufficient to satisfy its business plans”).

What is really galling about the staff report—and, frankly, the basic posture of the agency—is that its criticisms really boil down to one thing:  “We believe there is another way to accomplish (something like) what AT&T wants to do here, and we’d just prefer they do it that way.”  This is central planning at its most repugnant.  Both what is assumed and what is lacking in this basic posture are beyond the pale for an allegedly independent government agency—and as Larry Downes notes in the linked article, the agency’s hubris and its politics may have real, costly consequences for all of us.

Competition

But procedure must be followed, and the staff thus musters a technical defense to support its basic position, starting with the claim that the merger will result in too much concentration.  Blinded by its new-found love for HHIs, the staff commits a few blunders.  First, it claims that concentration levels like those in this case “trigger a presumption of harm” to competition, citing the DOJ/FTC Merger Guidelines.  Alas, as even the report’s own footnotes reveal, the Merger Guidelines actually say that highly concentrated markets with HHI increases of 200 or more trigger a presumption that the merger will “enhance market power.”  This is not, in fact, the same thing as harm to competition.  Elsewhere the staff calls this—a merger that increases concentration and gives one firm an “undue” share of the market—“presumptively illegal.”  Perhaps the staff could use an antitrust refresher course.  I’d be happy to come teach it.
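Since the staff leans so heavily on HHIs, it is worth being precise about what the numbers actually mean. The sketch below uses purely hypothetical shares (not actual 2011 wireless figures) to show the calculation and the Guidelines thresholds; note that what the thresholds trigger is a presumption of enhanced market power, not of harm to competition.

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares in percentage points (so a pure monopoly scores 10000)."""
    return sum(s ** 2 for s in shares)

# Hypothetical pre-merger shares (percent); illustrative only.
pre_merger = [32, 27, 16, 11]
# Suppose the 32% firm acquires the 11% firm.
post_merger = [32 + 11, 27, 16]

delta = hhi(post_merger) - hhi(pre_merger)

# Under the 2010 Merger Guidelines, a post-merger HHI above 2500 combined
# with an increase of more than 200 points triggers the presumption that
# the merger is "likely to enhance market power."
presumption_triggered = hhi(post_merger) > 2500 and delta > 200
```

With these hypothetical shares the post-merger HHI is 2834 and the increase is 704 points, so the presumption applies; the report’s error lies in treating that presumption as if it were a presumption of harm, or of illegality.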

Not only is there no actual evidence of consumer harm resulting from the sort of increases in concentration the merger might produce, but the staff reaches its negative conclusions despite the damning fact that, over the last decade, wireless markets have seen considerable increases in concentration alongside considerable decreases in prices, not harm to competition.  While high and increasing HHIs might indicate a need for further investigation, when actual evidence refutes the connection between concentration and price, they simply lose their relevance.  Someone should tell the FCC staff.

This is a different Wireless Bureau than the one that wrote so much sensible material in the 15th Annual Wireless Competition Report.  That Bureau described a complex, dynamic, robust mobile “ecosystem” driven not by carrier market power and industrial structure, but by rapid evolution and technological disruptors.  The analysis here wishes away the important factors that every consumer knows to be the real drivers of price and innovation in the mobile marketplace, including, among other things:

  1. Local markets, where there are five, six, or more carriers to choose from;
  2. Non-contract/pre-paid providers, whose strength is rapidly growing;
  3. Technology that is making more bands of available spectrum useful for competitive offerings;
  4. The fact that LTE will make inter-modal competition a reality; and
  5. The reality that churn is rampant and consumer decision-making is driven today by devices, operating systems, applications and content – not networks.

The resulting analysis is stilted and stale, and describes a wireless industry that exists only in the agency’s collective imagination.

There is considerably more to say about the report’s tortured unilateral effects analysis, but it will have to wait for my next post.  Here I want to quickly touch on two of the other issues called out by Cicconi’s blog post.

Smoothing Demand Kinks

Steve Salop —  4 April 2011

One criticism of the unilateral effects analysis in the 2010 Merger Guidelines is that demand curves are kinked at the current price.  A small increase in price will dramatically reduce the quantity demanded.  One rationale for the kink is that people over-react to small price changes and dramatically reduce demand.  As a result of this behavioral economics deviation from standard rational behavior, it is claimed, merging firms will not raise prices when the merger increases the opportunity cost of increasing output.  (The opportunity cost increases because some of the increased output now comes from the new merger partner.)  It has been argued that such kinks are ubiquitous, whatever the current price is.  For some recent views on this issue, see the recent anti-kink article by Werden and the pro-kink reply by Scheffman and Simons.

A story in today’s New York Times nicely illustrates one of the problems with the kinked demand story.  Instead of raising prices, consumer products firms can and commonly do raise per unit prices by reducing package sizes.  Changes in package sizes do not create a disproportionate reaction, perhaps because they are less visible to busy shoppers.  Whatever the reason, the smaller package size raises the effective price per unit while avoiding the behavioral economics kink.  Of course, this is not to say that firms never raise prices; they do.  Moreover, even if a kink did exist for reasons grounded in behavioral economics or menu costs, any kink likely is just temporary.  In contrast, a merger is permanent.
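The downsizing move is simple arithmetic, sketched below with made-up numbers (the Times story does not report these particular prices): holding the shelf price fixed while shrinking the package raises the effective per-unit price without tripping any behavioral kink at the posted price.

```python
def unit_price(shelf_price, package_size):
    """Effective price per unit (e.g., dollars per ounce)."""
    return shelf_price / package_size

# Hypothetical: a $3.00 item shrinks from 16 oz to 14 oz, same shelf price.
old = unit_price(3.00, 16.0)
new = unit_price(3.00, 14.0)

# The posted price never moved, yet the per-unit price rose about 14%.
effective_increase = new / old - 1
```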

It is for these reasons that this kinked economics has not gotten much traction in the current debate.  But, these presumptions do not mean that kinked economics arguments can never be raised in a merger.  If there were evidence of a low pass-through rate of variable cost into higher prices over a significant period of time, that evidence would be relevant to a more refined analysis of upward pricing pressure.
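The upward pricing pressure analysis referenced here can be made concrete with the standard GUPPI formulation. The numbers below are hypothetical (none come from the post): the index values the sales diverted to the merger partner, and evidence of a low pass-through rate would shrink the price increase the index suggests.

```python
def guppi(diversion_ratio, partner_margin, partner_price, own_price):
    """Gross upward pricing pressure index for product 1: the value of
    sales diverted to merger partner 2, relative to product 1's price."""
    return diversion_ratio * partner_margin * (partner_price / own_price)

# Hypothetical inputs: 25% of product 1's lost sales divert to product 2,
# which earns a 40% margin; both products sell for $10.
g = guppi(diversion_ratio=0.25, partner_margin=0.40,
          partner_price=10.0, own_price=10.0)

# A crude first-order prediction scales the GUPPI by the pass-through
# rate: low observed pass-through implies a small predicted price rise.
pass_through = 0.5
predicted_rise = pass_through * g * 10.0  # in dollars, for a $10 product
```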


Along with co-author Judd Stone, I’ve posted to SSRN our contribution to the Review of Industrial Organization‘s symposium on the 2010 Horizontal Merger Guidelines — The Sound of One Hand Clapping: The 2010 Horizontal Merger Guidelines and the Challenge of Judicial Adoption.

The paper focuses on the Guidelines’ efficiencies analysis.  We argue that while the 2010 HMGs “update” the Guidelines’ analytical approach in generally desirable ways, these updates are largely asymmetrical in nature: while the new Guidelines update economic thinking on one “side” of the ledger (changes that make the plaintiff’s prima facie burden easier to satisfy, ceteris paribus), they do not do so with respect to efficiencies analysis on the other side of the ledger.  These asymmetrical changes thereby undermine the new Guidelines’ institutional credibility.

In particular, we focus on the Guidelines’ treatment of so-called “out-of-market” efficiencies as well as fixed cost savings.  In both cases we argue that updates were appropriate and consistent with the Agencies’ expressed preference to more accurately reflect economic thinking and shift from proxies to direct assessment of competitive effects.   If anything, the Guidelines appear to be more skeptical of efficiencies arguments than the previous version, adding “the Agencies are mindful that the antitrust laws give competition, not internal operational efficiency, primacy in protecting customers.”  We then turn to discussing the implications of this “asymmetrical update” for judicial adoption of the Guidelines.  Some have discussed the possibility that these Guidelines will be less successful with federal courts because they downplay market definition.  As I’ve said here many times, I do not think the Agencies (if only out of self-interest) will avoid market definition.  However, we argue that the asymmetrical updating problem is a more serious one, and that widespread and wholesale adoption of the HMGs should not be taken for granted.

Here is the abstract:

There is ample justification for the consensus view that the Horizontal Merger Guidelines have proven one of antitrust law’s great successes in the grounding of antitrust doctrine within economic learning. The foundation of the Guidelines’ success has been its widespread adoption by federal courts, which have embraced its rigorous underlying economic logic and analytical approach to merger analysis under the Clayton Act. While some have suggested that the Guidelines’ most recent iteration might jeopardize this record of judicial adoption by downplaying the role of market definition and updating its unilateral effects analysis, we believe these updates are generally beneficial and include long-overdue shifts away from antiquated structural presumptions in favor of analyzing competitive effects directly where possible. However, this article explores a different reason to be concerned that the 2010 Guidelines may not enjoy widespread judicial adoption: the 2010 Guidelines asymmetrically update economic insights underlying merger analysis. While the 2010 Guidelines’ updated economic thinking on market definition and unilateral effects will likely render the prima facie burden facing plaintiffs easier to satisfy in merger analysis moving forward, and thus have significant practical impact, the Guidelines do not correspondingly update efficiencies analysis, leaving it largely as it first appeared 13 years earlier. We discuss two well-qualified candidates for “economic updates” of efficiencies analysis under the Guidelines: (1) out-of-market efficiencies and (2) fixed cost savings. We conclude with some thoughts about the implications of the asymmetric updates for judicial adoption of the 2010 Guidelines.

Download and read the whole thing.

Leah Brannon and co-author Kathleen Bradish, both of Cleary Gottlieb Steen & Hamilton, offer a skeptical view:

In the half-century since du Pont, lower courts have continued to view market definition as a predicate to Section 7 claims. For example, the D.C. Circuit in FTC v. Cardinal Health, Inc. stated that “[d]efining the relevant market is the starting point for any merger analysis.”19 Similarly, the Eighth Circuit in FTC v. Tenet Health Care Corp. held that it is “essential that the FTC identify a credible relevant market before a preliminary injunction may properly issue” because “[w]ithout a well defined relevant market, a merger’s effect on competition cannot be evaluated.”20 And, just weeks before the final version of the 2010 Guidelines was published, the district court in United States v. Dean Foods Co. reaffirmed that “[i]n determining the likely anti-competitive effects of an acquisition, courts look to the relevant product market, as well as the relevant geographic market.”21

Given the weight of this precedent, courts may be reluctant to embrace the 2010 Guidelines as readily as they have accepted past versions. Courts have, in other contexts, resisted adopting significant changes in agency guidance documents when the changes appeared to conflict with existing case law. In United States v. Kinder, for example, a prisoner defendant requested reconsideration of his sentencing for drug charges based on an amendment to the Sentencing Guidelines that changed the definition of how carrier weight would be calculated.22 The Second Circuit rejected the request, finding that the Sentencing Guidelines amendment could not trump binding precedent that adopted the older method of carrier weight calculation. The dissent, in contrast, specifically pointed to the Merger Guidelines, and argued that with the Merger Guidelines courts had in fact altered prior precedent “by voluntarily accepting uncompelled guidance from a constructive administrative interpretation.”23

The article concludes:

[T]he 2010 Guidelines ask more of the courts than previous versions have, and if recent court decisions are any indication, courts may not be willing to forgo market definitions in Section 7 cases.

I share some of these concerns and agree with Brannon & Bradish that one certainly should not take for granted the notion that the courts will embrace these HMGs.  However, I suspect that the Agencies will not put federal courts in the position Brannon & Bradish are most concerned with — that is, Agencies will not ask judges to rule on a Section 7 case in which they have not defined a market.  As I’ve written:

With respect to whether or not market definition is a necessary condition under Section 7, if one adopts the view of Professor Crane and myself that in their current form the 2010 HMG are at best equivocal as to whether the agencies must define a market, then the opinion may preview impending hostility to such an approach in federal courts.  But while I’ve blogged that I do not think the 2010 HMGs are clear enough on the necessity of market definition, I do not think that those with genuine concerns about the new HMG approach are really concerned that the Agencies will bring cases in which they do not define a relevant market.

Indeed, the real problem with the 2010 HMGs is not that the Agencies will avoid defining markets at all.  The Agencies want to win cases.  And to the extent that federal courts expect markets to be defined, you can bet the Agencies will do so as part of their case in chief.  It is true that diagnostics for unilateral effects are based on the value of diverted sales and can be done without defining a market, but so long as this is part of an analysis that also defines a market at some stage, the Agencies can comply with the requirement that markets be defined under Section 7 of the Clayton Act.

If Brannon & Bradish are correct that the HMGs will be used in support of Section 7 analyses that omit market definition, there is no doubt that there will be a great risk of their rejection.  But while I agree that the Guidelines are not as clear as they should or could be with regard to the Agencies’ commitment to market definition, they are certainly flexible enough to allow the Agencies to define markets at some point.  And because the Agencies don’t want to lose cases, I suspect they will do so.  That does not mean that federal courts will automatically embrace the new HMGs as they have in the past, but it does mean that any such rejection will have to be on more nuanced analytical grounds, e.g. the Agencies bring a case in which market definition is conducted as an afterthought to a competitive effects analysis the court finds unpersuasive.  Brannon & Bradish highlight a very interesting issue, one that will be worth watching unfold over the next several years.