
The goal of US antitrust law is to ensure that competition continues to produce positive results for consumers and the economy in general. To exactly that effect, we published a letter co-signed by twenty-three of the U.S.’s leading economists, legal scholars, and practitioners, including one winner of the Nobel Prize in economics (full list of signatories here), urging the House Judiciary Committee, in its inquiry into the state of antitrust law, to reject calls for a radical upheaval of antitrust law that would, among other things, undermine its independence and neutrality. 

A critical part of maintaining independence and neutrality in the administration of antitrust is ensuring that it is insulated from politics. Unfortunately, this view is under attack from all sides. The President sees widespread misconduct among US tech firms that he believes are controlled by the “radical left” and is, apparently, happy to use whatever tools are at hand to chasten them. 

Meanwhile, Senator Klobuchar has claimed, without any real evidence, that the mooted Uber/Grubhub merger is simply about monopolization of the market, and not, for example, related to the huge changes that businesses like this are facing because of the Covid shutdown.

Both statements challenge the principle that the rule of law, including antitrust law, depends on political neutrality. 

Our letter, contrary to the claims made by President Trump, Sen. Klobuchar, and some of the submissions to the Committee, asserts that the evidence and economic theory are clear: existing antitrust law is doing a good job of promoting competition and consumer welfare in digital markets and the economy more broadly. It concludes that the Committee should focus on reforms that improve antitrust at the margin, not changes that throw out decades of practice and precedent.

The letter argues that:

  1. The American economy—including the digital sector—is competitive, innovative, and serves consumers well, contrary to how it is sometimes portrayed in the public debate. 
  2. Structural changes in the economy have resulted from increased competition, and increases in national concentration have generally happened because competition at the local level has intensified and local concentration has fallen.
  3. Lax antitrust enforcement has not allowed systematic increases in market power, and the evidence simply does not support the idea that antitrust enforcement has weakened in recent decades.
  4. Existing antitrust law, built up through years of careful case-by-case scrutiny, is adequate for protecting competition in the modern economy. Calls to throw out decades of precedent to achieve an antitrust “Year Zero” would discard a huge body of learning and deliberation.
  5. History teaches that discarding the modern approach to antitrust would harm consumers and return us to a situation where per se rules prohibited the use of economic analysis and fact-based defenses of business practices.
  6. Common sense reforms should be pursued to improve antitrust enforcement, and the reforms proposed in the letter could help to improve competition and consumer outcomes in the United States without overturning the whole system.

The reforms suggested include measures to increase transparency of the DoJ and FTC, greater scope for antitrust challenges against state-sponsored monopolies, stronger penalties for criminal cartel conduct, and more agency resources being made available to protect workers from anti-competitive wage-fixing agreements between businesses. These are suggestions for the House Committee to consider and are not supported by all the letter’s signatories.

Some of the arguments in the letter are set out in greater detail in ICLE’s own submission to the Committee, which discusses the nature of competition in modern digital markets and in traditional markets that have been transformed by the adoption of digital technologies. 

The full letter is here.

In our first post, we discussed the weaknesses of an important theoretical underpinning of efforts to expand vertical merger enforcement (including, possibly, the proposed guidelines): the contract/merger equivalency assumption.

In this post we discuss the implications of that assumption and some of the errors it leads to — including some incorporated into the proposed guidelines.

There is no theoretical or empirical justification for more vertical enforcement

Tim Brennan makes a fantastic and regularly overlooked point in his post: If it’s true, as many claim (see, e.g., Steve Salop), that firms can generally realize vertical efficiencies by contracting instead of merging, then it’s also true that they can realize anticompetitive outcomes the same way. While efficiencies have to be merger-specific in order to be relevant to the analysis, so too do harms. But where the assumption is that the outcomes of integration can generally be achieved by the “less-restrictive” means of contracting, that would apply as well to any potential harms, thus negating the transaction-specificity required for enforcement. As Dennis Carlton notes:

There is a symmetry between an evaluation of the harms and benefits of vertical integration. Each must be merger-specific to matter in an evaluation of the merger’s effects…. If transaction costs are low, then vertical integration creates neither benefits nor harms, since everything can be achieved by contract. If transaction costs exist to prevent the achievement of a benefit but not a harm (or vice-versa), then that must be accounted for in a calculation of the overall effect of a vertical merger. (Dennis Carlton, Transaction Costs and Competition Policy)

Of course, this also means that those (like us) who believe that it is not so easy to accomplish by contract what may be accomplished by merger must also consider the possibility that a proposed merger may be anticompetitive because it overcomes an impediment to achieving anticompetitive goals via contract.

There’s one important caveat, though: The potential harms that could arise from a vertical merger are the same as those that would be cognizable under Section 2 of the Sherman Act. Indeed, for a vertical merger to cause harm, it must be expected to result in conduct that would otherwise be illegal under Section 2. This means there is always the possibility of a second bite at the apple when it comes to thwarting anticompetitive conduct. 

The same cannot be said of procompetitive conduct that can arise only through merger: if a merger is erroneously prohibited, that conduct never happens at all.

Interestingly, Salop himself — the foremost advocate today for enhanced vertical merger enforcement — recognizes the issue raised by Brennan: 

Exclusionary harms and certain efficiency benefits also might be achieved with vertical contracts and agreements without the need for a vertical merger…. It [] might be argued that the absence of premerger exclusionary contracts implies that the merging firms lack the incentive to engage in conduct that would lead to harmful exclusionary effects. But anticompetitive vertical contracts may face the same types of impediments as procompetitive ones, and may also be deterred by potential Section 1 enforcement. Neither of these arguments thus justify a more or less intrusive vertical merger policy generally. Rather, they are factors that should be considered in analyzing individual mergers. (Salop & Culley, Potential Competitive Effects of Vertical Mergers)

In the same article, however, Salop also points to the reasons why he considers it insufficient to leave enforcement to Sections 1 and 2, rather than addressing potential harms at their incipiency under Section 7 of the Clayton Act:

While relying solely on post-merger enforcement might have appealing simplicity, it obscures several key facts that favor immediate enforcement under Section 7.

  • The benefit of HSR review is to prevent the delays and remedial issues inherent in after-the-fact enforcement….
  • There may be severe problems in remedying the concern….
  • Section 1 and Section 2 legal standards are more permissive than Section 7 standards….
  • The agencies might well argue that anticompetitive post-merger conduct was caused by the merger agreement, so that it would be covered by Section 7….

All in all, failure to address these kinds of issues in the context of merger review could lead to significant consumer harm and underdeterrence.

The points are (mostly) well-taken. But they also essentially amount to a preference for more and tougher enforcement against vertical restraints than the judicial interpretations of Sections 1 & 2 currently countenance — a preference, in other words, for the use of Section 7 to bolster enforcement against vertical restraints of any sort (whether contractual or structural).

The problem with that, as others have pointed out in this symposium (see, e.g., Nuechterlein; Werden & Froeb; Wright, et al.), is that there’s simply no empirical basis for adopting a tougher stance against vertical restraints in the first place. Over and over again the empirical research shows that vertical restraints and vertical mergers are unlikely to cause anticompetitive harm: 

In reviewing this literature, two features immediately stand out: First, there is a paucity of support for the proposition that vertical restraints/vertical integration are likely to harm consumers. . . . Second, a far greater number of studies found that the use of vertical restraints in the particular context studied improved welfare unambiguously. (Cooper, et al, Vertical Restrictions and Antitrust Policy: What About the Evidence?)

[W]e did not have a particular conclusion in mind when we began to collect the evidence, and we… are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing, vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view…. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. (Francine Lafontaine & Margaret Slade, Vertical Integration and Firm Boundaries: The Evidence)

[Table 1 in this paper] indicates that voluntarily adopted restraints are associated with lower costs, greater consumption, higher stock returns, and better chances of survival. (Daniel O’Brien, The Antitrust Treatment of Vertical Restraints: Beyond the Possibility Theorems)

In sum, these papers from 2009-2018 continue to support the conclusions from Lafontaine & Slade (2007) and Cooper et al. (2005) that consumers mostly benefit from vertical integration. While vertical integration can certainly foreclose rivals in theory, there is only limited empirical evidence supporting that finding in real markets. (GAI Comment on Vertical Mergers)

To the extent that the proposed guidelines countenance heightened enforcement relative to the status quo, they fall prey to the same defect. And while it is unclear from the fairly terse guidelines whether this is animating them, the removal of language present in the 1984 Non-Horizontal Merger Guidelines acknowledging the relative lack of harm from vertical mergers (“[a]lthough non-horizontal mergers are less likely than horizontal mergers to create competitive problems…”) is concerning.  

The shortcomings of orthodox economics and static formal analysis

There is a further reason to think that vertical merger enforcement may be more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante (i.e., where arrangements among vertically related firms are by contract): our lack of knowledge about the effects of market structure and firm organization on innovation and dynamic competition, and the relative hostility to nonstandard contracting, including vertical integration:

[T]he literature addressing how market structure affects innovation (and vice versa) in the end reveals an ambiguous relationship in which factors unrelated to competition play an important role. (Katz & Shelanski, Mergers and Innovation)

The fixation on the equivalency of the form of vertical integration (i.e., merger versus contract) is likely to lead enforcers to focus on static price and cost effects, and miss the dynamic organizational and informational effects that lead to unexpected, increased innovation across and within firms. 

In the hands of Oliver Williamson, this means that understanding firms in the real world entails taking an organization theory approach, in contrast to the “orthodox” economic perspective:

The lens of contract approach to the study of economic organization is partly complementary but also partly rival to the orthodox [neoclassical economic] lens of choice. Specifically, whereas the latter focuses on simple market exchange, the lens of contract is predominantly concerned with the complex contracts. Among the major differences is that non‐standard and unfamiliar contractual practices and organizational structures that orthodoxy interprets as manifestations of monopoly are often perceived to serve economizing purposes under the lens of contract. A major reason for these and other differences is that orthodoxy is dismissive of organization theory whereas organization theory provides conceptual foundations for the lens of contract. (emphasis added)

Under the orthodox lens, we are more likely to miss the benefits when mergers solve market inefficiencies, and more likely to see harm when they impose static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.

The competition that takes place in the real world and between various groups ultimately depends upon the institution of private contracts, many of which, including the firm itself, are nonstandard. Innovation includes the discovery of new organizational forms and the application of old forms to new contexts. Such contracts prevent or attenuate market failure, moving the market toward what economists would deem a more competitive result. Indeed, as Professor Coase pointed out, many markets deemed “perfectly competitive” are in fact the end result of complex contracts limiting rivalry between competitors. This contractual competition cannot produce perfect results — no human institution ever can. Nonetheless, the result is superior to that which would obtain in a (real) world without nonstandard contracting. These contracts do not depend upon the creation or enhancement of market power and thus do not produce the evils against which antitrust law is directed. (Alan Meese, Price Theory, Competition, and the Rule of Reason)

Or, as Oliver Williamson more succinctly puts it:

[There is a] rebuttable presumption that nonstandard forms of contracting have efficiency purposes. (Oliver Williamson, The Economic Institutions of Capitalism)

The pinched focus of the guidelines on narrow market definition misses the bigger picture of dynamic competition over time

The proposed guidelines (and the theories of harm undergirding them) focus upon indicia of market power that may not be accurate if assessed in more realistic markets or over more relevant timeframes, and, if applied too literally, may bias enforcement against mergers with dynamic-innovation benefits but static-competition costs.  

Similarly, the proposed guidelines’ enumeration of potential efficiencies doesn’t really begin to cover the categories implicated by the organization of enterprise around dynamic considerations.

The proposed guidelines’ efficiencies section notes that:

Vertical mergers bring together assets used at different levels in the supply chain to make a final product. A single firm able to coordinate how these assets are used may be able to streamline production, inventory management, or distribution, or create innovative products in ways that would have been hard to achieve through arm’s length contracts. (emphasis added)

But it is not clear that any of these categories encompasses organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets.

As Thomas Jorde and David Teece write:

For innovations to be commercialized, the economic system must somehow assemble all the relevant complementary assets and create a dynamically-efficient interactive system of learning and information exchange. The necessary complementary assets can conceivably be assembled by either administrative or market processes, as when the innovator simply licenses the technology to firms that already own or are willing to create the relevant assets. These organizational choices have received scant attention in the context of innovation. Indeed, the serial model relies on an implicit belief that arm’s-length contracts between unaffiliated firms in the vertical chain from research to customer will suffice to commercialize technology. In particular, there has been little consideration of how complex contractual arrangements among firms can assist commercialization — that is, translating R&D capability into profitable new products and processes….

* * *

But in reality, the market for know-how is riddled with imperfections. Simple unilateral contracts where technology is sold for cash are unlikely to be efficient. Complex bilateral and multilateral contracts, internal organization, or various hybrid structures are often required to shore up obvious market failures and create procompetitive efficiencies. (Jorde & Teece, Rule of Reason Analysis of Horizontal Arrangements: Agreements Designed to Advance Innovation and Commercialize Technology) (emphasis added)

When IP protection for a given set of valuable pieces of “know-how” is strong — easily defendable, unique patents, for example — firms can rely on property rights to contract efficiently with vertical buyers and sellers. But in cases where the valuable “know-how” is less easily defended as IP — e.g., business process innovation, managerial experience, distributed knowledge, corporate culture, and the like — the ability to partially vertically integrate through contract becomes more difficult, if not impossible. 

Perhaps employing these assets is part of what is meant in the draft guidelines by “streamline.” But the very mention of innovation only in the technological context of product innovation is at least some indication that organizational innovation is not clearly contemplated.  

This is a significant lacuna. The impact of each organizational form on knowledge transfers creates a particularly strong division between integration and contract. As Enghin Atalay, Ali Hortaçsu & Chad Syverson point out:

That vertical integration is often about transfers of intangible inputs rather than physical ones may seem unusual at first glance. However, as observed by Arrow (1975) and Teece (1982), it is precisely in the transfer of nonphysical knowledge inputs that the market, with its associated contractual framework, is most likely to fail to be a viable substitute for the firm. Moreover, many theories of the firm, including the four “elemental” theories as identified by Gibbons (2005), do not explicitly invoke physical input transfers in their explanations for vertical integration. (Enghin Atalay, et al., Vertical Integration and Input Flows) (emphasis added)

There is a large economics and organization theory literature discussing how organizations are structured with respect to these sorts of intangible assets. And the upshot is that, while we start — not end, as some would have it — with the Coasian insight that firm boundaries are necessarily a function of production processes and not a hard limit, we quickly come to realize that it is emphatically not the case that integration-via-contract and integration-via-merger are always, or perhaps even often, viable substitutes.

Conclusion

The contract/merger equivalency assumption, coupled with a “least-restrictive alternative” logic that favors contract over merger, puts a thumb on the scale against vertical mergers. While the proposed guidelines as currently drafted do not necessarily portend the inflexible, formalistic application of this logic, they offer little to guide enforcers or courts away from the assumption in the important (and perhaps numerous) cases where it is unwarranted.   

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Jonathan M. Jacobson (Partner, Wilson Sonsini Goodrich & Rosati), and Kenneth Edelson (Associate, Wilson Sonsini Goodrich & Rosati).]

So we now have 21st Century Vertical Merger Guidelines, at least in draft. Yay. Do they tell us anything? Yes! Do they tell us much? No. But at least it’s a start.

* * * * *

In November 2018, the FTC held hearings on vertical merger analysis devoted to the questions of whether the agencies should issue new guidelines, and what guidance those guidelines should provide. And, indeed, on January 10, 2020, the DOJ and FTC issued their new Draft Vertical Merger Guidelines (“Draft Guidelines”). That this new guidance has finally been issued is a welcome development. The antitrust community has been calling for new vertical merger guidelines for some time. The last vertical merger guidelines were issued in 1984, and there is broad consensus in the antitrust community – despite vigorous debate on the correct legal treatment of vertical mergers – that the ’84 Guidelines are outdated and should be withdrawn. Despite disagreement on the best enforcement policy, there is general recognition that the legal rules applicable to vertical mergers need clarification. New guidelines are especially important in light of recent high-visibility merger challenges, including the government’s challenge to the AT&T/Time Warner merger, the first vertical merger case litigated since the 1970s. These merger challenges have occurred in an environment in which there is little up-to-date case law to guide courts or agencies and the ’84 Guidelines have been rendered obsolete by subsequent developments in economics. 

The discussion here focuses on what the new Draft Guidelines do say, key issues on which they do not weigh in, and where additional guidance would be desirable.

What the Draft Guidelines do say

The Draft Guidelines start with a relevant market requirement – making clear that the agencies will identify at least one relevant market in which a vertical merger may foreclose competition. However, the Draft Guidelines do not require a market definition for the vertically related upstream or downstream market(s) in the merger. Rather, the agencies’ proposed policy is to identify one or more “related products.” The Draft Guidelines define a related product as

a product or service that is supplied by the merged firm, is vertically related to the products and services in the relevant market, and to which access by the merged firm’s rivals affects competition in the relevant market.

The Draft Guidelines’ most significant (and most concrete) proposal is a loose safe harbor based on market share and the percentage of use of the related product in the relevant market of interest. The Draft Guidelines suggest that agencies are not likely to challenge mergers if two conditions are met: (1) the merging company has less than 20% market share in the relevant market, and (2) less than 20% of the relevant market uses the related product identified by the agencies. 

This proposed safe harbor is welcome. Generally, in order for a vertical merger to have anticompetitive effects, both the upstream and downstream markets involved need to be concentrated, and the merging firms’ shares of both markets have to be substantial – although the Draft Guidelines do not contain any such requirements. Mergers in which the merging company has less than a 20% market share of the relevant market, and in which less than 20% of the market uses the vertically related product are unlikely to have serious anticompetitive effects.

However, the proposed safe harbor does not provide much certainty. After describing the safe harbor, the Draft Guidelines offer a caveat: meeting the proposed 20% thresholds will not serve as a “rigid screen” for the agencies to separate out mergers that are unlikely to have anticompetitive effects. Accordingly, the guidelines as currently drafted do not guarantee that vertical mergers in which market share and related product use fall below 20% would be immune from agency scrutiny. So, while the proposed safe harbor is a welcome statement of good policy that may guide agency staff and courts in analyzing market share and share of relevant product use, it is not a true safe harbor. This ambiguity limits the safe harbor’s utility for the purpose of counseling clients on market share issues.

The Draft Guidelines also identify a number of specific unilateral anticompetitive effects that, in the agencies’ view, may result from vertical mergers (the Draft Guidelines note that coordinated effects will be evaluated consistent with the Horizontal Merger Guidelines). Most importantly, the guidelines name raising rivals’ costs, foreclosure, and access to competitively sensitive information as potential unilateral effects of vertical mergers. The Draft Guidelines indicate that the agency may consider the following issues: would foreclosure or raising rivals’ costs (1) cause rivals to lose sales; (2) benefit the post-merger firm’s business in the relevant market; (3) be profitable to the firm; and (4) be beyond a de minimis level, such that it could substantially lessen competition? Mergers where all four conditions are met, the Draft Guidelines say, often warrant competitive scrutiny. While the big picture guidance about what agencies find concerning is helpful, the Draft Guidelines are short on details that would make this a useful statement of enforcement policy, or sufficiently reliable to guide practitioners in counseling clients. Most importantly, the Draft guidelines give no indication of what the agencies will consider a de minimis level of foreclosure.

The Draft Guidelines also articulate a concern with access to competitively sensitive information, as in the recent Staples/Essendant enforcement action. There, the FTC permitted the merger after imposing a firewall that blocked Staples from accessing certain information about its rivals held by Essendant. This contrasts with the current DOJ approach of hostility to behavioral remedies.

What the Draft Guidelines don’t say

The Draft Guidelines also decline to weigh in on a number of important issues in the debates over vertical mergers. Two points are particularly noteworthy.

First, the Draft Guidelines decline to allocate the parties’ proof burdens on key issues. The burden-shifting framework established in U.S. v. Baker Hughes is regularly used in horizontal merger cases, and was recently adopted in AT&T/Time-Warner in a vertical context. The framework has three phases: (1) the plaintiff bears the burden of establishing a prima facie case that the merger will substantially lessen competition in the relevant market; (2) the defendant bears the burden of producing evidence to demonstrate that the merger’s procompetitive effects outweigh the alleged anticompetitive effects; and (3) the plaintiff bears the burden of countering the defendant’s rebuttal, and bears the ultimate burden of persuasion. Virtually everyone agrees that this or some similar structure should be used. However, the Draft Guidelines’ silence on the appropriate burden is consistent with the agencies’ historical practice: The 2010 Horizontal Merger Guidelines allocate no burdens and the 1997 Merger Guidelines explicitly decline to assign the burden of proof or production on any issue.

Second, the Draft Guidelines take an unclear approach to elimination of double marginalization (EDM). The appropriate treatment of EDM has been one of the key topics in the debates on the law and economics of vertical mergers, but the Draft Guidelines take no position on the key issues in the conversation about EDM: whether it should be presumed in a vertical merger, and whether it should be presumed to be merger-specific.

EDM may occur if two vertically related firms merge and the new firm captures the margins of both the upstream and downstream firms. After the merger, the downstream firm gets its input at cost, allowing the merged firm to eliminate one party’s markup. This makes price reduction profitable for the merged firm where it would not have been for either firm before the merger. 
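
To make the mechanism concrete, here is a stylized numeric illustration (the numbers are our own hypothetical example, not drawn from the Draft Guidelines), assuming linear retail demand Q = 10 − P, an upstream marginal cost of 2, no other downstream costs, and each firm setting its price to maximize its own profit:

% Hypothetical successive-monopoly example: demand Q = 10 - P,
% upstream marginal cost 2, downstream resale cost 0.
\[
\begin{aligned}
\text{Separate firms:}\quad & w = 6,\ P = 8,\ Q = 2, &\quad
\text{combined profit} &= (6-2)\cdot 2 + (8-6)\cdot 2 = 12,\\
\text{Merged firm:}\quad & \text{input at cost } 2,\ P = 6,\ Q = 4, &\quad
\text{profit} &= (6-2)\cdot 4 = 16.
\end{aligned}
\]

Eliminating the second markup lowers the retail price from 8 to 6 while raising combined profit from 12 to 16, which is the sense in which the price cut is profitable for the merged firm where it would not have been for either firm alone.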

The Draft Guidelines state that the agencies will not challenge vertical mergers where EDM means that the merger is unlikely to be anticompetitive. OK. Duh. However, they also claim that in some situations, EDM may not occur, or its benefits may be offset by other incentives for the merged firm to raise prices. The Draft Guidelines do not weigh in on whether it should be presumed that vertical mergers will result in EDM, or whether it should be presumed that EDM is merger-specific. 

These are the most important questions in the debate over EDM. Some economists take the position that EDM is not guaranteed, and not necessarily merger-specific. Others take the position that EDM is basically inevitable in a vertical merger, and is unlikely to be achieved without a merger. That is: if there is EDM, it should be presumed to be merger-specific. Those who take the former view would put the burden on the merging parties to establish pricing benefits of EDM and its merger-specificity. 

Our own view is that this efficiency is pervasive and significant in vertical mergers. The defense should therefore bear only a burden of producing evidence, and the agencies should bear the burden of disproving the significance of EDM where shown to exist. This would depart from the typical standard in a merger case, under which defendants must prove the reality, magnitude, and merger-specific character of the claimed efficiencies (the Draft Guidelines adopt this standard along with the approach of the 2010 Horizontal Merger Guidelines on efficiencies). However, it would more closely reflect the economic reality of most vertical mergers. 

Conclusion

While the Draft Guidelines are a welcome step forward in the debates around the law and economics of vertical mergers, they do not guide very much. The fact that the Draft Guidelines highlight certain issues is a useful indicator of what the agencies find important, but not a meaningful statement of enforcement policy. 

On a positive note, the Draft Guidelines’ explanations of certain economic concepts important to vertical mergers may serve to illuminate these issues for courts.

However, the agencies’ proposals are not specific enough to create predictability for business or the antitrust bar or provide meaningful guidance for enforcers to develop a consistent enforcement policy. This result is not surprising given the lack of consensus on the law and economics of vertical mergers and the best approach to enforcement. But the antitrust community — and all of its participants — would be better served by a more detailed document that commits to positions on key issues in the relevant debates. 

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.]

This post is authored by Joshua D. Wright (University Professor of Law, George Mason University and former Commissioner, FTC); Douglas H. Ginsburg (Senior Circuit Judge, US Court of Appeals for the DC Circuit; Professor of Law, George Mason University; and former Assistant Attorney General, DOJ Antitrust Division); Tad Lipsky (Assistant Professor of Law, George Mason University; former Acting Director, FTC Bureau of Competition; former chief antitrust counsel, Coca-Cola; former Deputy Assistant Attorney General, DOJ Antitrust Division); and John M. Yun (Associate Professor of Law, George Mason University; former Acting Deputy Assistant Director, FTC Bureau of Economics).]

After much anticipation, the Department of Justice Antitrust Division and the Federal Trade Commission released a draft of the Vertical Merger Guidelines (VMGs) on January 10, 2020. The Global Antitrust Institute (GAI) will be submitting formal comments to the agencies regarding the VMGs and this post summarizes our main points.

The Draft VMGs supersede the 1984 Merger Guidelines, which represent the last guidance from the agencies on the treatment of vertical mergers. The VMGs provide valuable guidance and greater clarity in terms of how the agencies will review vertical mergers going forward. While the proposed VMGs generally articulate an analytical framework based upon sound economic principles, there are several ways that the VMGs could more deeply integrate sound economics and our empirical understanding of the competitive consequences of vertical integration.

In this post, we discuss four issues: (1) incorporating the elimination of double marginalization (EDM) into the analysis of the likelihood of a unilateral price effect; (2) eliminating the role of market shares and structural analysis; (3) highlighting that the weight of empirical evidence supports the proposition that vertical mergers are less likely to generate competitive concerns than horizontal mergers; and (4) recognizing the importance of transaction cost-based efficiencies.

Elimination of double marginalization is a unilateral price effect

EDM is discussed separately from both unilateral price effects, in Section 5, and efficiencies, in Section 9, of the draft VMGs. This is notable because the structure of the VMGs obfuscates the relevant economics of internalizing pricing externalities and may encourage the misguided view that EDM is a special form of efficiency.

When separate upstream and downstream entities price their products, they do not fully take into account the impact of their pricing decision on each other — even though they are ultimately part of the same value chain for a given product. Vertical mergers eliminate a pricing externality since the post-merger upstream and downstream units are fully aligned in terms of their pricing incentives. In this sense, EDM is indistinguishable from the unilateral effects discussed in Section 5 of the VMGs that cause upward pricing pressure. Specifically, in the context of mergers, just as there is a greater incentive, under certain conditions, to foreclose or raise rivals’ costs (RRC) post-merger (although, this does not mean there is an ability to engage in these behaviors), there is also an incentive to lower prices due to the elimination of a markup along the supply chain. Consequently, we really cannot assess unilateral effects without accounting for the full set of incentives that could move prices in either direction.

Further, it is improper to consider EDM in the context of a “net effect” given that this phrase has strong connotations with weighing efficiencies against findings of anticompetitive harm. Rather, “unilateral price effects” actually includes EDM — just as a finding that a merger will induce entry properly belongs in a unilateral effects analysis. For these reasons, we suggest incorporating the discussion of EDM into the discussion of unilateral effects contained in Section 5 of the VMGs and eliminating Section 6. Otherwise, by separating EDM into its own section, the agencies are creating a type of “limbo” between unilateral effects and efficiencies — which creates confusion, particularly for courts. It is also important to emphasize that the mere existence of alternative contracting mechanisms to mitigate double marginalization does not tell us about their relative efficacy compared to vertical integration as there are costs to contracting.

Role of market shares and structural analysis

In Section 3 (“Market Participants, Market Shares, and Market Concentration”), there are two notable statements. First,

[t]he Agencies…do not rely on changes in concentration as a screen for or indicator of competitive effects from vertical theories of harm.

This statement, without further explanation, is puzzling as there are no changes in concentration for vertical mergers. Second, the VMGs then go on to state that 

[t]he Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market.

The very next sentence reads:

In some circumstances, mergers with shares below the thresholds can give rise to competitive concerns.

From this, we conclude that the VMGs are adopting a prior belief that, if both the relevant product and the related product have a less than 20 percent share in the relevant market, the acquisition is either competitively neutral or benign. The VMGs make clear, however, that they do not offer a safe harbor. With these statements, the agencies run the risk that the 20 percent figure will be interpreted as a trigger for competitive concern. There is no sound economic reason to believe that a 20 percent share in the relevant market or the related market is of any particular importance in predicting competitive effects. The VMGs should eliminate the discussion of market shares altogether. At a minimum, the final guidelines would benefit from some explanation for this threshold if it is retained.

Empirical evidence on the welfare impact of vertical mergers

In contrast to vertical mergers, horizontal mergers inherently involve a degree of competitive overlap and an associated loss of at least some degree of rivalry between actual and/or potential competitors. The price effect for vertical mergers, however, is generally theoretically ambiguous — even before accounting for efficiencies — due to EDM and the uncertainty regarding whether the integrated firm has an incentive to raise rivals’ costs or foreclose. Thus, for vertical mergers, empirically evaluating the welfare effects of consummated mergers has been and remains an important area of research to guide antitrust policy.

Consequently, what is noticeably absent from the draft guidelines is an empirical grounding. Consistent empirical findings should inform agency decision-making priors. With few exceptions, the literature does not support the view that these practices are used for anticompetitive reasons — see Lafontaine & Slade (2007) and Cooper et al. (2005). (For an update on the empirical literature from 2009 through 2018, which confirms the conclusions of the prior literature, see the GAI’s Comment on Vertical Mergers submitted during the recent FTC Hearings.) Thus, the modern antitrust approach to vertical mergers, as reflected in the antitrust literature, should reflect the empirical reality that vertical relationships are generally procompetitive or neutral.

The bottom line is that how often vertical mergers are anticompetitive should influence our framework and priors. Given the strong empirical evidence that vertical mergers do not tend to result in welfare losses for consumers, we believe the agencies should consider at least the modest statement that vertical mergers are more often than not procompetitive or, alternatively, vertical mergers tend to be more procompetitive or neutral than horizontal ones. Thus, we believe the final VMGs would benefit from language similar to the 1984 VMGs: “Although nonhorizontal mergers are less likely than horizontal mergers to create competitive problems, they are not invariably innocuous.”

Transaction cost efficiencies and merger specificity

The VMGs address efficiencies in Section 8. Under the VMGs, the Agencies will evaluate efficiency claims by the parties using the approach set forth in Section 10 of the 2010 Horizontal Merger Guidelines. Thus, efficiencies must be both cognizable and merger specific to be considered by the agencies.

In general, the VMGs also adopt an approach that is consistent with the teachings of the robust literature on transaction cost economics, which recognizes the costs of using the price system to explain the boundaries of economic organizations, and the importance of incorporating such considerations into any antitrust analyses. In particular, this literature has demonstrated, both theoretically and empirically, that the decision to contract or vertically integrate is often driven by the relatively high costs of contracting as well as concerns regarding the enforcement of contracts and opportunistic behavior. This literature suggests that such transactions cost efficiencies in the vertical merger context often will be both cognizable and merger-specific and rejects an approach that would presume such efficiencies are not merger specific because they can be theoretically achieved via contract.

While we agree with the overall approach set out in the VMGs, we are concerned that the application of Section 8, in practice, without more specificity and guidance, will be carried out in a way that is inconsistent with the approach set out in Section 10 of the 2010 HMGs.

Conclusion

Overall, the agencies deserve credit for highlighting the relevant factors in assessing vertical mergers and for not attempting to be overly aggressive in advancing untested merger assessment tools or theories of harm.

The agencies should seriously consider, however, refinements in a number of critical areas:

  • First, discussion of EDM should be integrated into the larger unilateral effects analysis in Section 5 of the VMGs. 
  • Second, the agencies should eliminate the role of market shares and structural analysis in the VMGs. 
  • Third, the final VMGs should acknowledge that vertical mergers are less likely to generate competitive concerns than horizontal mergers. 
  • Finally, the final VMGs should recognize the importance of transaction cost-based efficiencies. 

We believe incorporating these changes will result in guidelines that are more in conformity with sound economics and the empirical evidence.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Margaret E. Slade (Professor Emeritus, Vancouver School of Economics, The University of British Columbia).]

A revision of the DOJ’s Non-Horizontal Merger Guidelines is long overdue, and the Draft Vertical Merger Guidelines (“Guidelines”) take steps in the right direction. However, the treatment of important issues is uneven. For example, the discussions of market definition and shares are relatively thorough, whereas the discussions of anticompetitive harm and procompetitive efficiencies are more vague.

Market definition, market shares, and concentration

The Guidelines are correct in deferring to the Horizontal Merger Guidelines for most aspects of market definition, market shares, and market concentration. The relevant sections of the Horizontal Guidelines are not without problems. However, it would make no sense to use different methods and concepts to delineate horizontal markets that are involved in vertical mergers compared to those that are involved in horizontal mergers.  

One aspect of market definition, however, is new: the notion of a related product, which is a product that links the upstream and downstream firms. Such products might be inputs, distribution systems, or sets of customers. The Guidelines set thresholds of 20% for the related product’s share, as well as for the parties’ shares, in the relevant market. 

Those thresholds are, of course, only indicative, and mergers can be investigated even when shares are smaller. In addition, mergers that fail to meet the share tests need not be challenged. It would therefore be helpful to have a list of factors that could be used to determine which mergers falling below those thresholds are more likely to be investigated, and vice versa. For example, the EU Vertical Merger Guidelines list circumstances, such as the existence of significant cross-shareholding relationships, the fact that one of the firms is considered to be a maverick, and suspicion that coordination is ongoing, under which mergers that fall into the safety zones are more apt to be investigated.

Elimination of double marginalization and other efficiencies

Although the elimination of double marginalization (EDM) is a pricing externality that does not change unit costs, the Guidelines discuss EDM as the principal “efficiency,” or at least they have more to say about that factor. Furthermore, after discussing EDM, the Guidelines note that the full EDM benefit might not occur if the downstream firm cannot use the product or if the parties are already engaged in contracting. The first factor is obvious and the second implies that the efficiency is not merger specific. In practice, however, antitrust and regulatory policy has tended to apply the EDM argument uncritically, ignoring several key assumptions and issues.

The simple model of EDM relies on a setting in which there are two monopolists, one upstream and one downstream, each producing a single product, with production subject to fixed proportions. This model predicts that welfare will increase after a vertical merger. If these assumptions are violated, however, the predictions change (as John Kwoka and I discuss in more detail here). For example, under variable proportions the unintegrated downstream firm can avoid some of the adverse effects of the inflated wholesale price by substituting away from use of that product, and the welfare implications are ambiguous. Moreover, managerial considerations such as independent pricing by divisions can lead to less-than-full elimination of double marginalization.  
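
For readers who want the mechanics, here is a minimal algebraic sketch of that benchmark (standard successive-monopoly algebra, not taken from the Guidelines themselves), assuming linear demand P = a − Q, upstream marginal cost c, and no other downstream costs:

% Benchmark successive-monopoly model under the assumptions stated above.
\[
\begin{aligned}
\text{Downstream:}\quad & \max_{Q}\ (a - Q - w)\,Q \ \Rightarrow\ Q(w) = \tfrac{a - w}{2},\\
\text{Upstream:}\quad & \max_{w}\ (w - c)\,\tfrac{a - w}{2} \ \Rightarrow\ w^{*} = \tfrac{a + c}{2},
\qquad P_{\text{pre}} = \tfrac{3a + c}{4},\\
\text{Integrated:}\quad & \max_{Q}\ (a - Q - c)\,Q \ \Rightarrow\
P_{\text{post}} = \tfrac{a + c}{2} \,<\, P_{\text{pre}} \quad (\text{for } c < a).
\end{aligned}
\]

The price fall from (3a + c)/4 to (a + c)/2 and the accompanying output expansion are what drive the predicted welfare gain; relaxing the single-product, fixed-proportions, bilateral-monopoly assumptions is what can weaken or reverse that prediction, as the examples that follow illustrate.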

With multi-product firms, the integrated firm’s average downstream prices need not fall and can even rise when double marginalization is eliminated. To illustrate, after EDM the products with eliminated margins become relatively more profitable to sell. This gives the integrated firm incentives to divert demand towards those products by increasing the prices of its products for which double marginalization was not eliminated. Moreover, under some circumstances, the integrated downstream price can also rise.

Since violations of the simple model are present in almost all cases, it would be helpful to include a more complete list of factors that cause the simple model — the one that predicts that EDM is always welfare improving — to fail.

Unlike in the case of horizontal mergers, real productive efficiencies on the supply side are often given less attention in vertical mergers. Those efficiencies, which include economies of scope, the ability to coordinate other aspects of the vertical chain such as inventories and distribution, and the expectation of productivity growth due to knowledge transfers, can be important.

Moreover, organizational efficiencies, such as mitigating contracting, holdup, and renegotiation costs, facilitating specific investments in physical and human capital, and providing appropriate incentives within firms, are usually ignored. Those efficiencies can be difficult to evaluate. Nevertheless, they should not be excluded from consideration on that basis.

Equilibrium effects

On page 4, the Guidelines suggest that merger simulations might be used to quantify unilateral price effects of vertical mergers. However, they have nothing to say about the pitfalls. Unfortunately, compared to horizontal merger simulations, there are many more assumptions that are required to construct vertical simulation models and thus many more places where they can go wrong. In particular, one must decide on the number and identity of the rivals; the related products that are potentially disadvantaged; the geographic markets in which foreclosure or raising rivals’ costs are likely to occur; the timing of moves: whether upstream and downstream prices are set simultaneously or the upstream firm is a first mover; the link between the upstream and downstream firms: whether bargaining occurs or the upstream firm makes take-it-or-leave-it offers; and, as I discuss below, the need to evaluate the raising rivals’ costs (RRC) and elimination of double marginalization (EDM) effects simultaneously.

These choices can be crucial in determining model predictions. Indeed, as William Rogerson notes (in an unpublished 2019 draft paper, Modeling and Predicting the Competitive Effects of Vertical Mergers Due to Changes in Bargaining Leverage: The Bargaining Leverage Over Rivals (BLR) Effect), when moves are simultaneous, there is no RRC effect. This is true because, when negotiating over input prices, firms take downstream prices as given. 

On the other hand, bargaining introduces a new competitive effect — the bargaining leverage effect — which arises because, after a vertical merger, the disagreement payoff is higher. Indeed, the merged firm recognizes the increased profit that its downstream integrated division will earn if the input is withheld from the rival. In contrast, the upstream firm’s disagreement payoff is irrelevant when it has all of the bargaining power.

Finally, on page 5, the Guidelines describe something that sounds like a vertical upward pricing pressure (UPP) index, analogous to the GUPPI that has been successfully employed in evaluating horizontal mergers. However, extending the GUPPI to a vertical context is not straightforward.

To illustrate, Das Varma and Di Stefano show that a sequential process can be very misleading, where a sequential process consists of first calculating the RRC effect and, if that effect is substantial, evaluating the EDM effect and comparing the two. The problem is that the two effects are not independent of one another. Moreover, when the two are determined simultaneously, compared to the sequential RRC, the equilibrium RRC can increase or decrease and can even change sign (i.e., lowering rivals’ costs).

What these considerations mean is that vertical merger simulations have to be carefully crafted to fit the markets that are susceptible to foreclosure and that a one-size-fits-all model can be very misleading. Furthermore, if a simpler sequential screening process is used, careful consideration must be given to whether the markets of interest satisfy the assumptions under which that process will yield approximately reasonable results.

FTC v. Qualcomm

Last week the International Center for Law & Economics (ICLE) and twelve noted law and economics scholars filed an amicus brief in the Ninth Circuit in FTC v. Qualcomm, in support of appellant (Qualcomm) and urging reversal of the district court’s decision. The brief was authored by Geoffrey A. Manne, President & founder of ICLE, and Ben Sperry, Associate Director, Legal Research of ICLE. Jarod M. Bona and Aaron R. Gott of Bona Law PC collaborated in drafting the brief and they and their team provided invaluable pro bono legal assistance, for which we are enormously grateful. Signatories on the brief are listed at the end of this post.

We’ve written about the case several times on Truth on the Market, as have a number of guest bloggers, in our ongoing blog series on the case here.   

The ICLE amicus brief focuses on the ways that the district court exceeded the “error cost” guardrails erected by the Supreme Court to minimize the risk and cost of mistaken antitrust decisions, particularly those that wrongly condemn procompetitive behavior. As the brief notes at the outset:

The district court’s decision is disconnected from the underlying economics of the case. It improperly applied antitrust doctrine to the facts, and the result subverts the economic rationale guiding monopolization jurisprudence. The decision—if it stands—will undercut the competitive values antitrust law was designed to protect.  

The antitrust error cost framework was most famously elaborated by Frank Easterbrook in his seminal article, The Limits of Antitrust (1984). It has since been squarely adopted by the Supreme Court—most significantly in Brooke Group (1993), Trinko (2004), and linkLine (2009).  

In essence, the Court’s monopolization case law implements the error cost framework by (among other things) obliging courts to operate under certain decision rules that limit the use of inferences about the consequences of a defendant’s conduct except when the circumstances create what game theorists call a “separating equilibrium.” A separating equilibrium is a 

solution to a game in which players of different types adopt different strategies and thereby allow an uninformed player to draw inferences about an informed player’s type from that player’s actions.

Baird, Gertner & Picker, Game Theory and the Law

The key problem in antitrust is that while the consequence of complained-of conduct for competition (i.e., consumers) is often ambiguous, its deleterious effect on competitors is typically quite evident—whether it is actually anticompetitive or not. The question is whether (and when) it is appropriate to infer anticompetitive effect from discernible harm to competitors. 

Except in the narrowly circumscribed (by Trinko) instance of a unilateral refusal to deal, anticompetitive harm under the rule of reason must be proven. It may not be inferred from harm to competitors, because such an inference is too likely to be mistaken—and “mistaken inferences are especially costly, because they chill the very conduct the antitrust laws are designed to protect.” (Brooke Group, quoting yet another key Supreme Court antitrust error cost case, Matsushita (1986)). 

Yet, as the brief discusses, in finding Qualcomm liable the district court did not demand or find proof of harm to competition. Instead, the court’s opinion relies on impermissible inferences from ambiguous evidence to find that Qualcomm had (and violated) an antitrust duty to deal with rival chip makers and that its conduct resulted in anticompetitive foreclosure of competition. 

We urge you to read the brief (it’s pretty short—maybe the length of three blog posts) to get the whole argument. Below we draw attention to a few points we make in the brief that are especially significant. 

The district court bases its approach entirely on Microsoft — which it misinterprets in clear contravention of Supreme Court case law

The district court doesn’t stay within the strictures of the Supreme Court’s monopolization case law. In fact, although it obligingly recites some of the error cost language from Trinko, it quickly moves away from Supreme Court precedent and bases its approach entirely on its reading of the D.C. Circuit’s Microsoft (2001) decision. 

Unfortunately, the district court’s reading of Microsoft is mistaken and impermissible under Supreme Court precedent. Indeed, both the Supreme Court and the D.C. Circuit make clear that a finding of illegal monopolization may not rest on an inference of anticompetitive harm.

The district court cites Microsoft for the proposition that

Where a government agency seeks injunctive relief, the Court need only conclude that Qualcomm’s conduct made a “significant contribution” to Qualcomm’s maintenance of monopoly power. The plaintiff is not required to “present direct proof that a defendant’s continued monopoly power is precisely attributable to its anticompetitive conduct.”

It’s true Microsoft held that, in government actions seeking injunctions, “courts [may] infer ‘causation’ from the fact that a defendant has engaged in anticompetitive conduct that ‘reasonably appears capable of making a significant contribution to maintaining monopoly power.’” (Emphasis added). 

But Microsoft never suggested that anticompetitiveness itself may be inferred.

“Causation” and “anticompetitive effect” are not the same thing. Indeed, Microsoft addresses “anticompetitive conduct” and “causation” in separate sections of its decision. And whereas Microsoft allows that courts may infer “causation” in certain government actions, it makes no such allowance with respect to “anticompetitive effect.” In fact, it explicitly rules it out:

[T]he plaintiff… must demonstrate that the monopolist’s conduct indeed has the requisite anticompetitive effect…; no less in a case brought by the Government, it must demonstrate that the monopolist’s conduct harmed competition, not just a competitor.

The D.C. Circuit subsequently reinforced this clear conclusion of its Microsoft holding in Rambus:

Deceptive conduct—like any other kind—must have an anticompetitive effect in order to form the basis of a monopolization claim…. In Microsoft… [t]he focus of our antitrust scrutiny was properly placed on the resulting harms to competition.

Finding causation entails connecting evidentiary dots, while finding anticompetitive effect requires an economic assessment. Without such analysis it’s impossible to distinguish procompetitive from anticompetitive conduct, and basing liability on such an inference effectively writes “anticompetitive” out of the law.

Thus, the district court is correct when it holds that it “need not conclude that Qualcomm’s conduct is the sole reason for its rivals’ exits or impaired status.” But it is simply wrong to hold—in the same sentence—that it can thus “conclude that Qualcomm’s practices harmed competition and consumers.” The former claim is consistent with Microsoft; the latter is emphatically not.

Under Trinko and Aspen Skiing the district court’s finding of an antitrust duty to deal is impermissible 

Because finding that a company operates under a duty to deal essentially permits a court to infer anticompetitive harm without proof, such a finding “comes dangerously close to being a form of ‘no-fault’ monopolization,” as Herbert Hovenkamp has written. It is also thus seriously disfavored by the Court’s error cost jurisprudence.

In Trinko the Supreme Court interprets its holding in Aspen Skiing to identify essentially a single scenario from which it may plausibly be inferred that a monopolist’s refusal to deal with rivals harms consumers: the existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for the monopolist.

In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.”

But it’s not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. In a word, what the Court requires is that the defendant exhibit behavior that, but-for the expectation of future, anticompetitive returns, is irrational.
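To make that comparison concrete, here is a stylized numerical sketch of the Trinko/Aspen Skiing inquiry. The profit figures are purely hypothetical and are not drawn from the record; the point is only the structure of the test.

```python
# Stylized illustration of the Aspen Skiing/Trinko refusal-to-deal comparison.
# All profit figures are hypothetical; only the structure of the test matters.

profit_from_terminated_deal = 100  # the old, now-terminated arrangement was profitable
profit_from_current_offer = 60     # short-run profit if the monopolist dealt on the terms now offered
profit_from_refusing = 80          # short-run profit from refusing to deal at all

# That the old arrangement was profitable (100 > 0) is not what matters.
# An inference of anticompetitive exclusion is plausible only if the refusal
# sacrifices short-run profit, i.e., is irrational but for the expectation of
# future, anticompetitive returns.
sacrifices_short_run_profit = profit_from_current_offer > profit_from_refusing
print("refusal sacrifices short-run profit:", sacrifices_short_run_profit)  # False on these numbers
```

On these invented numbers the refusal is not a short-run sacrifice, so no inference of exclusion would follow, however profitable the terminated deal once was.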

It should be noted, as John Lopatka (here) and Alan Meese (here) (both of whom joined the amicus brief) have written, that even the Supreme Court’s approach is likely insufficient to permit a court to distinguish between procompetitive and anticompetitive conduct. 

But what is certain is that the district court’s approach in no way permits such an inference.

“Evasion of a competitive constraint” is not an antitrust-relevant refusal to deal

In order to infer anticompetitive effect, it is not enough that a firm has a “duty” to deal in the colloquial sense, grounded in some obligation other than an antitrust duty. The evasion of that sort of obligation supports no inference that the underlying conduct is anticompetitive.

The district court bases its determination that Qualcomm’s conduct is anticompetitive on the fact that it enables the company to avoid patent exhaustion, FRAND commitments, and thus price competition in the chip market. But this conclusion is directly precluded by the Supreme Court’s holding in NYNEX.

Indeed, in Rambus, the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.”

As Josh Wright has noted:

[T]he objection to the “evasion” of any constraint approach is… that it opens the door to enforcement actions applied to business conduct that is not likely to harm competition and might be welfare increasing.

Thus NYNEX and Rambus (and linkLine) reinforce the Court’s repeated holding that an inference of harm to competition is permissible only where conduct points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” does not suffice.

The district court’s elaborate theory of harm rests fundamentally on the claim that Qualcomm injures rivals—and the record is devoid of evidence demonstrating actual harm to competition. Instead, the court infers it from what it labels “unreasonably high” royalty rates, enabled by Qualcomm’s evasion of competition from rivals. In turn, the court finds that that evasion of competition can be the source of liability if what Qualcomm evaded was an antitrust duty to deal. And, in impermissibly circular fashion, the court finds that Qualcomm indeed evaded an antitrust duty to deal—because its conduct allowed it to sustain “unreasonably high” prices. 

The Court’s antitrust error cost jurisprudence—from Brooke Group to NYNEX to Trinko & linkLine—stands for the proposition that no such circular inferences are permitted.

The district court’s foreclosure analysis also improperly relies on inferences in lieu of economic evidence

Because the district court doesn’t perform a competitive effects analysis, it fails to demonstrate the requisite “substantial” foreclosure of competition required to sustain a claim of anticompetitive exclusion. Instead the court once again infers anticompetitive harm from harm to competitors. 

The district court makes no effort to establish the quantity of competition foreclosed as required by the Supreme Court. Nor does the court demonstrate that the alleged foreclosure harms competition, as opposed to just rivals. Foreclosure per se is not impermissible and may be perfectly consistent with procompetitive conduct.

Again citing Microsoft, the district court asserts that a quantitative finding is not required. Yet, as the court’s citation to Microsoft should have made clear, in its stead a court must find actual anticompetitive effect; it may not simply assert it. As Microsoft held: 

It is clear that in all cases the plaintiff must… prove the degree of foreclosure. This is a prudential requirement; exclusivity provisions in contracts may serve many useful purposes. 

The court essentially infers substantiality from the fact that Qualcomm entered into exclusive deals with Apple (actually, volume discounts), from which the court concludes that Qualcomm foreclosed rivals’ access to a key customer. But its inference that this led to substantial foreclosure is based on internal business statements—so-called “hot docs”—characterizing the importance of Apple as a customer. Yet, as Geoffrey Manne and Marc Williamson explain, such documentary evidence is unreliable as a guide to economic significance or legal effect: 

Business people will often characterize information from a business perspective, and these characterizations may seem to have economic implications. However, business actors are subject to numerous forces that influence the rhetoric they use and the conclusions they draw….

There are perfectly good reasons to expect to see “bad” documents in business settings when there is no antitrust violation lurking behind them.

Assuming such language has the requisite economic or legal significance is unsupportable—especially when, as here, the requisite standard demands a particular quantitative significance.

Moreover, the court’s “surcharge” theory of exclusionary harm rests on assumptions regarding the mechanism by which the alleged surcharge excludes rivals and harms consumers. But the court incorrectly asserts that only one mechanism operates—and it makes no effort to quantify it. 

The court cites “basic economics” via Mankiw’s Principles of Microeconomics text for its conclusion:

The surcharge affects demand for rivals’ chips because as a matter of basic economics, regardless of whether a surcharge is imposed on OEMs or directly on Qualcomm’s rivals, “the price paid by buyers rises, and the price received by sellers falls.” Thus, the surcharge “places a wedge between the price that buyers pay and the price that sellers receive,” and demand for such transactions decreases. Rivals see lower sales volumes and lower margins, and consumers see less advanced features as competition decreases.

But even assuming the court is correct that Qualcomm’s conduct entails such a surcharge, basic economics does not hold that decreased demand for rivals’ chips is the only possible outcome. 

In actuality, an increase in the cost of an input for OEMs can have three possible effects:

  1. OEMs can pass all or some of the cost increase on to consumers in the form of higher phone prices. Assuming some elasticity of demand, this would mean fewer phone sales and thus less demand by OEMs for chips, as the court asserts. But the extent of that effect would depend on consumers’ demand elasticity and the magnitude of the cost increase as a percentage of the phone price. If demand is highly inelastic at this price (i.e., relatively insensitive to the relevant price change), it may have a tiny effect on the number of phones sold and thus the number of chips purchased—approaching zero as price insensitivity increases.
  2. OEMs can absorb the cost increase and realize lower profits but continue to sell the same number of phones and purchase the same number of chips. This would not directly affect demand for chips or their prices.
  3. OEMs can respond to a price increase by purchasing fewer chips from rivals and more chips from Qualcomm. While this would affect rivals’ chip sales, it would not necessarily affect consumer prices, the total number of phones sold, or OEMs’ margins—that result would depend on whether Qualcomm’s chips cost more or less than its rivals’. If the latter, it would even increase OEMs’ margins and/or lower consumer prices and increase output.

Alternatively, of course, the effect could be some combination of these.

Whether any of these outcomes would substantially exclude rivals is inherently uncertain to begin with. But demonstrating a reduction in rivals’ chip sales is a necessary but not sufficient condition for proving anticompetitive foreclosure. The FTC didn’t even demonstrate that rivals were substantially harmed, let alone that there was any effect on consumers—nor did the district court make such findings. 

Doing so would entail consideration of whether decreased demand for rivals’ chips flows from reduced consumer demand or OEMs’ switching to Qualcomm for supply, how consumer demand elasticity affects rivals’ chip sales, and whether Qualcomm’s chips were actually less or more expensive than rivals’. Yet the court determined none of these. 
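To see how much these unexamined questions matter, the following sketch works through the three scenarios above under hypothetical assumptions: the constant-elasticity demand function, the $10 surcharge, the elasticities, and rivals’ 40 percent share of chip purchases are all invented for illustration and are not figures from the record.

```python
# Hypothetical illustration of how a per-phone "surcharge" maps onto rivals'
# chip sales under the three scenarios discussed above. All parameters are
# assumptions for illustration, not record evidence.

def phones_sold(price, base_qty=100.0, base_price=500.0, elasticity=-0.5):
    """Constant-elasticity handset demand: Q = Q0 * (P / P0) ** elasticity."""
    return base_qty * (price / base_price) ** elasticity

def rival_chip_sales(surcharge, pass_through, rival_share_change,
                     rival_share=0.4, base_price=500.0, elasticity=-0.5):
    """Rivals' chip sales given how OEMs respond to the surcharge.

    pass_through:       share of the surcharge passed on to consumers (scenarios 1 and 2)
    rival_share_change: change in rivals' share of OEM chip purchases (scenario 3)
    """
    phone_price = base_price + pass_through * surcharge
    total_chips = phones_sold(phone_price, base_price=base_price, elasticity=elasticity)
    return total_chips * (rival_share + rival_share_change)

baseline = rival_chip_sales(surcharge=0, pass_through=0, rival_share_change=0)

scenarios = {
    "1a. full pass-through, inelastic demand": dict(surcharge=10, pass_through=1.0,
                                                    rival_share_change=0.0, elasticity=-0.2),
    "1b. full pass-through, elastic demand":   dict(surcharge=10, pass_through=1.0,
                                                    rival_share_change=0.0, elasticity=-2.0),
    "2.  OEMs absorb the surcharge":           dict(surcharge=10, pass_through=0.0,
                                                    rival_share_change=0.0),
    "3.  OEMs shift some volume to Qualcomm":  dict(surcharge=10, pass_through=0.0,
                                                    rival_share_change=-0.05),
}

for name, params in scenarios.items():
    sales = rival_chip_sales(**params)
    print(f"{name:42s} rival chip sales: {sales:6.2f} (baseline {baseline:.2f})")
```

On these invented numbers, the effect on rivals ranges from essentially nil to modest, which is the point: without estimating the actual elasticities, pass-through, and relative chip prices, neither the existence nor the magnitude of any foreclosure can simply be inferred.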

Conclusion

Contrary to established Supreme Court precedent, the district court’s decision relies on mere inferences to establish anticompetitive effect. The decision, if it stands, would render a wide range of potentially procompetitive conduct presumptively illegal and thus harm consumer welfare. It should be reversed by the Ninth Circuit.

Joining ICLE on the brief are:

  • Donald J. Boudreaux, Professor of Economics, George Mason University
  • Kenneth G. Elzinga, Robert C. Taylor Professor of Economics, University of Virginia
  • Janice Hauge, Professor of Economics, University of North Texas
  • Justin (Gus) Hurwitz, Associate Professor of Law, University of Nebraska College of Law; Director of Law & Economics Programs, ICLE
  • Thomas A. Lambert, Wall Chair in Corporate Law and Governance, University of Missouri Law School
  • John E. Lopatka, A. Robert Noll Distinguished Professor of Law, Penn State University Law School
  • Daniel Lyons, Professor of Law, Boston College Law School
  • Geoffrey A. Manne, President and Founder, International Center for Law & Economics; Distinguished Fellow, Northwestern University Center on Law, Business & Economics
  • Alan J. Meese, Ball Professor of Law, William & Mary Law School
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics Emeritus, Emory University
  • Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics, Chapman University School of Business; Nobel Laureate in Economics, 2002
  • Michael Sykuta, Associate Professor of Economics, University of Missouri


[TOTM: The following is the fourth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]

The courtroom trial in the Federal Trade Commission’s (FTC’s) antitrust case against Qualcomm ended in January with a promise from the judge in the case, Judge Lucy Koh, to issue a ruling as quickly as possible — caveated by her acknowledgement that the case is complicated and the evidence voluminous. Well, things have only gotten more complicated since the end of the trial. Not only did Apple and Qualcomm reach a settlement in the antitrust case against Qualcomm that Apple filed just three days after the FTC brought its suit, but the abbreviated trial in that case saw the presentation by Qualcomm of some damning evidence that, if accurate, seriously calls into (further) question the merits of the FTC’s case.

Apple v. Qualcomm settles — and the DOJ takes notice

The Apple v. Qualcomm case, which was based on substantially the same arguments brought by the FTC in its case, ended abruptly last month after only a day and a half of trial — just enough time for the parties to make their opening statements — when Apple and Qualcomm reached an out-of-court settlement. The settlement includes a six-year global patent licensing deal, a multi-year chip supplier agreement, an end to all of the patent disputes around the world between the two companies, and a $4.5 billion settlement payment from Apple to Qualcomm.

That alone complicates the economic environment into which Judge Koh will issue her ruling. But the Apple v. Qualcomm trial also appears to have induced the Department of Justice Antitrust Division (DOJ) to weigh in on the FTC’s case with a Statement of Interest requesting Judge Koh to use caution in fashioning a remedy in the case should she side with the FTC, followed by a somewhat snarky Reply from the FTC arguing the DOJ’s filing was untimely (and, reading the not-so-hidden subtext, unwelcome).

But buried in the DOJ’s Statement is an important indication of why it filed its Statement when it did, just about a week after the end of the Apple v. Qualcomm case, and a pointer to a much larger issue that calls the FTC’s case against Qualcomm even further into question (I previously wrote about the lack of theoretical and evidentiary merit in the FTC’s case here).

Footnote 6 of the DOJ’s Statement reads:

Internal Apple documents that recently became public describe how, in an effort to “[r]educe Apple’s net royalty to Qualcomm,” Apple planned to “[h]urt Qualcomm financially” and “[p]ut Qualcomm’s licensing model at risk,” including by filing lawsuits raising claims similar to the FTC’s claims in this case …. One commentator has observed that these documents “potentially reveal[] that Apple was engaging in a bad faith argument both in front of antitrust enforcers as well as the legal courts about the actual value and nature of Qualcomm’s patented innovation.” (Emphasis added).

Indeed, the slides presented by Qualcomm during that single day of trial in Apple v. Qualcomm are significant, not only for what they say about Apple’s conduct, but, more importantly, for what they say about the evidentiary basis for the FTC’s claims against the company.

The evidence presented by Qualcomm in its opening statement suggests some troubling conduct by Apple

Others have pointed to Qualcomm’s opening slides and the Apple internal documents they present to note Apple’s apparent bad conduct. As one commentator sums it up:

Although we really only managed to get a small glimpse of Qualcomm’s evidence demonstrating the extent of Apple’s coordinated strategy to manipulate the FRAND license rate, that glimpse was particularly enlightening. It demonstrated a decade-long coordinated effort within Apple to systematically engage in what can only fairly be described as manipulation (if not creation of evidence) and classic holdout.

Qualcomm showed during opening arguments that, dating back to at least 2009, Apple had been laying the foundation for challenging its longstanding relationship with Qualcomm. (Emphasis added).

The internal Apple documents presented by Qualcomm to corroborate this claim appear quite damning. Of course, absent explanation and cross-examination, it’s impossible to know for certain what the documents mean. But on their face they suggest Apple knowingly undertook a deliberate scheme (and knowingly took upon itself significant legal risk in doing so) to devalue patent portfolios comparable to Qualcomm’s:

The apparent purpose of this scheme was to devalue comparable patent licensing agreements where Apple had the power to do so (through litigation or the threat of litigation) in order to then use those agreements to argue that Qualcomm’s royalty rates were above the allowable, FRAND level, and to undermine the royalties Qualcomm would be awarded in courts adjudicating its FRAND disputes with the company. As one commentator put it:

Apple embarked upon a coordinated scheme to challenge weaker patents in order to beat down licensing prices. Once the challenges to those weaker patents were successful, and the licensing rates paid to those with weaker patent portfolios were minimized, Apple would use the lower prices paid for weaker patent portfolios as proof that Qualcomm was charging a super-competitive licensing price; a licensing price that violated Qualcomm’s FRAND obligations. (Emphasis added).

That alone is a startling revelation, if accurate, and one that would seem to undermine claims that patent holdout isn’t a real problem. It also would undermine Apple’s claims that it is a “willing licensee,” engaging with SEP licensors in good faith. (Indeed, this has been called into question before, and one Federal Circuit judge has noted in dissent that “[t]he record in this case shows evidence that Apple may have been a hold out.”). If the implications drawn from the Apple documents shown in Qualcomm’s opening statement are accurate, there is good reason to doubt that Apple has been acting in good faith.

Even more troubling is what it means for the strength of the FTC’s case

But the evidence offered in Qualcomm’s opening argument points to another, more troubling implication, as well. We know that Apple has been coordinating with the FTC and was likely an important impetus for the FTC’s decision to bring an action in the first place. It seems reasonable to assume that Apple used these “manipulated” agreements to help make its case.

But what is most troubling is the extent to which it appears to have worked.

The FTC’s action against Qualcomm rested in substantial part on arguments that Qualcomm’s rates were too high (even though the FTC constructed its case without coming right out and saying this, at least until trial). In its opening statement the FTC said:

Qualcomm’s practices, including no license, no chips, skewed negotiations towards the outcomes that favor Qualcomm and lead to higher royalties. Qualcomm is committed to license its standard essential patents on fair, reasonable, and non-discriminatory terms. But even before doing market comparison, we know that the license rates charged by Qualcomm are too high and above FRAND because Qualcomm uses its chip power to require a license.

* * *

Mr. Michael Lasinski [the FTC’s patent valuation expert] compared the royalty rates received by Qualcomm to … the range of FRAND rates that ordinarily would form the boundaries of a negotiation … Mr. Lasinski’s expert opinion … is that Qualcomm’s royalty rates are far above any indicators of fair and reasonable rates. (Emphasis added).

The key question is what constitutes the “range of FRAND rates that ordinarily would form the boundaries of a negotiation”?

Because they were discussed under seal, we don’t know the precise agreements that the FTC’s expert, Mr. Lasinski, used for his analysis. But we do know something about them: His analysis entailed a study of only eight licensing agreements; in six of them, the licensee was either Apple or Samsung; and in all of them the licensor was either InterDigital, Nokia, or Ericsson. We also know that Mr. Lasinski’s valuation study did not include any Qualcomm licenses, and that the eight agreements he looked at were all executed after the district court’s decision in Microsoft v. Motorola in 2013.

A curiously small number of agreements

Right off the bat there is a curiosity in the FTC’s valuation analysis. Even though there are hundreds of SEP license agreements involving the relevant standards, the FTC’s analysis relied on only eight, and in three-quarters of those the licensee was one of just two companies: Apple or Samsung.

Indeed, even since 2013 (a date to which we will return) there have been scads of licenses (see, e.g., here, here, and here). Apple and Samsung are not the only makers of CDMA and LTE devices; there are — quite literally — hundreds of other manufacturers out there, all of them licensing essentially the same technology — including global giants like LG, Huawei, HTC, Oppo, Lenovo, and Xiaomi. Why were none of their licenses included in the analysis? 

At the same time, while Interdigital, Nokia, and Ericsson are among the largest holders of CDMA and LTE SEPs, several dozen companies have declared such patents, including Motorola (Alphabet), NEC, Huawei, Samsung, ZTE, NTT DOCOMO, etc. Again — why were none of their licenses included in the analysis?

All else equal, more data yields better results. This is particularly true where the data are complex license agreements which are often embedded in larger, even-more-complex commercial agreements and which incorporate widely varying patent portfolios, patent implementers, and terms.

Yet the FTC relied on just eight agreements in its comparability study, covering a tiny fraction of the industry’s licensors and licensees, and, notably, including primarily licenses taken by the two companies (Samsung and Apple) that have most aggressively litigated their way to lower royalty rates.
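For illustration only, the sketch below uses an entirely invented distribution of royalty rates (the actual agreements are under seal) to show why a benchmark built from eight agreements drawn from the low end of the market can be both noisy and biased:

```python
# Illustration with invented data: a small, skewed sample of license agreements
# versus a larger random sample. None of these rates comes from the case record.
import random
import statistics

random.seed(1)

# Hypothetical industry of 400 comparable agreements, royalty rates in percent.
population = [random.gauss(3.0, 0.8) for _ in range(400)]

# Eight agreements drawn from the lowest-rate end of the market (e.g., licensees
# that litigated their way to unusually favorable terms).
skewed_sample = sorted(population)[:8]

# One hundred agreements drawn at random from the same industry.
random_sample = random.sample(population, 100)

print(f"population mean rate:      {statistics.mean(population):.2f}%")
print(f"eight skewed agreements:   {statistics.mean(skewed_sample):.2f}%")
print(f"hundred random agreements: {statistics.mean(random_sample):.2f}%")
```

The skewed eight dramatically understate the benchmark; the larger random sample does not. Whether the FTC’s eight agreements were in fact skewed is the open question, but the sensitivity of any benchmark to sample selection is exactly why such a small, non-random sample is problematic.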

A curiously crabbed selection of licensors

And it is not just that the selected licensees represent a weirdly small and biased sample; it is also not necessarily even a particularly comparable sample.

One thing we can be fairly confident of, given what we know of the agreements used, is that at least one of the license agreements involved Nokia licensing to Apple, and another involved InterDigital licensing to Apple. But these companies’ patent portfolios are not exactly comparable to Qualcomm’s. About Nokia’s patents, Apple said:

And about InterDigital’s:

Meanwhile, Apple’s view of Qualcomm’s patent portfolio (despite its public comments to the contrary) was that it was considerably better than the others’:

The FTC’s choice of such a limited range of comparable license agreements is curious for another reason, as well: It includes no Qualcomm agreements. Qualcomm is certainly one of the biggest players in the cellular licensing space, and no doubt more than a few license agreements involve Qualcomm. While it might not make sense to include Qualcomm licenses that the FTC claims incorporate anticompetitive terms, that doesn’t describe the huge range of Qualcomm licenses with which the FTC has no quarrel. Among other things, Qualcomm licenses from before it began selling chips would not have been affected by its alleged “no license, no chips” scheme, nor would licenses granted to companies that didn’t also purchase Qualcomm chips. Furthermore, its licenses for technology reading on the WCDMA standard are not claimed to be anticompetitive by the FTC.

And yet none of these licenses were deemed “comparable” by the FTC’s expert, even though, on many dimensions — most notably, with respect to the underlying patent portfolio being valued — they would have been the most comparable (i.e., identical).

A curiously circumscribed timeframe

That the FTC’s expert should use the 2013 cut-off date is also questionable. According to Lasinski, he chose to use agreements after 2013 because it was in 2013 that the U.S. District Court for the Western District of Washington decided the Microsoft v. Motorola case. Among other things, the court in Microsoft v. Motorola held that the proper value of a SEP is its “intrinsic” patent value, including its value to the standard, but not including the additional value it derives from being incorporated into a widely used standard.

According to the FTC’s expert,

prior to [Microsoft v. Motorola], people were trying to value … the standard and the license based on the value of the standard, not the value of the patents ….

Asked by Qualcomm’s counsel if his concern was that the “royalty rates derived in license agreements for cellular SEPs [before Microsoft v. Motorola] could very well have been above FRAND,” Mr. Lasinski concurred.

The problem with this approach is that it’s little better than arbitrary. The Motorola decision was an important one, to be sure, but the notion that sophisticated parties in a multi-billion dollar industry were systematically agreeing to improper terms until a single court in Washington suggested otherwise is absurd. To be sure, such agreements are negotiated in “the shadow of the law,” and judicial decisions like the one in Washington (later upheld by the Ninth Circuit) can affect the parties’ bargaining positions.

But even if it were true that the court’s decision had some effect on licensing rates, the decision would still have been only one of myriad factors determining parties’ relative bargaining power and their assessment of the proper valuation of SEPs. There is no basis to support the assertion that the Motorola decision marked a sea-change between “improper” and “proper” patent valuations. And, even if it did, it was certainly not alone in doing so, and the FTC’s expert offers no justification for determining that agreements reached before, say, the European Commission’s decision against Qualcomm in 2018 were “proper,” or that the Korea FTC’s decision against Qualcomm in 2009 didn’t have the same sort of corrective effect as the Motorola court’s decision in 2013. 

At the same time, a review of a wider range of agreements suggested that Qualcomm’s licensing royalties weren’t inflated

Meanwhile, one of Qualcomm’s experts in the FTC case, former DOJ Chief Economist Aviv Nevo, looked at whether the FTC’s theory of anticompetitive harm was borne out by the data by looking at Qualcomm’s royalty rates across time periods and standards, and using a much larger set of agreements. Although his remit was different than Mr. Lasinski’s, and although he analyzed only Qualcomm licenses, his analysis still sheds light on Mr. Lasinski’s conclusions:

[S]pecifically what I looked at was the predictions from the theory to see if they’re actually borne in the data….

[O]ne of the clear predictions from the theory is that during periods of alleged market power, the theory predicts that we should see higher royalty rates.

So that’s a very clear prediction that you can take to data. You can look at the alleged market power period, you can look at the royalty rates and the agreements that were signed during that period and compare to other periods to see whether we actually see a difference in the rates.

Dr. Nevo’s analysis, which looked at royalty rates in Qualcomm’s SEP license agreements for CDMA, WCDMA, and LTE ranging from 1990 to 2017, found no differences in rates between periods when Qualcomm was alleged to have market power and when it was not alleged to have market power (or could not have market power, on the FTC’s theory, because it did not sell corresponding chips).

The reason this is relevant is that Mr. Lasinski’s assessment implies that Qualcomm’s higher royalty rates weren’t attributable to its superior patent portfolio, leaving either anticompetitive conduct or non-anticompetitive, superior bargaining ability as the explanation. No one thinks Qualcomm has cornered the market on exceptional negotiators, so really the only proffered explanation for the results of Mr. Lasinski’s analysis is anticompetitive conduct. But this assumes that his analysis is actually reliable. Prof. Nevo’s analysis offers some reason to think that it is not.

All of the agreements studied by Mr. Lasinski were drawn from the period when Qualcomm is alleged to have employed anticompetitive conduct to elevate its royalty rates above FRAND. But when the actual royalties charged by Qualcomm during its alleged exercise of market power are compared to those charged when and where it did not have market power, the evidence shows it received identical rates. Mr. Lasinski’s results, then, would imply that Qualcomm’s royalties were “too high” not only while it was allegedly acting anticompetitively, but also when it was not. That simple fact suggests on its face that Mr. Lasinski’s analysis may have been flawed, and that it systematically under-valued Qualcomm’s patents.
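The form of Dr. Nevo’s comparison can be sketched simply. The agreements and rates below are hypothetical stand-ins (the actual licenses are sealed), and the alleged market-power window is likewise an assumed placeholder:

```python
# Sketch of the period-by-period comparison described above, with invented data.
import statistics

# (year signed, royalty rate as % of handset price), all hypothetical
agreements = [
    (1995, 5.0), (1999, 4.9), (2003, 5.1), (2007, 5.0),
    (2010, 5.0), (2012, 5.1), (2014, 4.9), (2016, 5.0),
]

POWER_START, POWER_END = 2006, 2016  # assumed alleged market-power window

in_window = [rate for year, rate in agreements if POWER_START <= year <= POWER_END]
out_window = [rate for year, rate in agreements if not (POWER_START <= year <= POWER_END)]

print(f"mean rate during alleged market power: {statistics.mean(in_window):.2f}%")
print(f"mean rate in other periods:            {statistics.mean(out_window):.2f}%")
# If conduct during the alleged market-power period elevated royalties above
# FRAND, the first figure should be materially higher; equal rates cut the
# other way.
```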

Connecting the dots and calling into question the strength of the FTC’s case

In its closing argument, the FTC pulled together the implications of its allegations of anticompetitive conduct by pointing to Mr. Lasinski’s testimony:

Now, looking at the effect of all of this conduct, Qualcomm’s own documents show that it earned many times the licensing revenue of other major licensors, like Ericsson.

* * *

Mr. Lasinski analyzed whether this enormous difference in royalties could be explained by the relative quality and size of Qualcomm’s portfolio, but that massive disparity was not explained.

Qualcomm’s royalties are disproportionate to those of other SEP licensors and many times higher than any plausible calculation of a FRAND rate.

* * *

The overwhelming direct evidence, some of which is cited here, shows that Qualcomm’s conduct led licensees to pay higher royalties than they would have in fair negotiations.

It is possible, of course, that Lasinski’s methodology was flawed; indeed, at trial Qualcomm argued exactly this in challenging his testimony. But it is also possible that, whether his methodology was flawed or not, his underlying data was flawed.

It is impossible to draw this conclusion definitively from the publicly available evidence, but the subsequent revelation that Apple may well have manipulated a significant share of the eight agreements that constituted Mr. Lasinski’s data certainly increases its plausibility: We now know, following Qualcomm’s opening statement in Apple v. Qualcomm, that the stilted set of comparable agreements studied by the FTC’s expert happens to be dominated by agreements that Apple may have manipulated to reflect lower-than-FRAND rates.

What is most concerning is that the FTC may have built its case on such questionable evidence, whether intentionally, by cherry-picking the evidence upon which it relied, or inadvertently, because it rested on a needlessly limited range of data, some of which may have been tainted.

Intentionally or not, the FTC appears to have performed its valuation analysis using a needlessly circumscribed range of comparable agreements and justified its decision to do so using questionable assumptions. This seriously calls into question the strength of the FTC’s case.

(The following is adapted from a recent ICLE Issue Brief on the flawed essential facilities arguments undergirding the EU competition investigations into Amazon’s marketplace that I wrote with Geoffrey Manne. The full brief is available here.)

Amazon has largely avoided the crosshairs of antitrust enforcers to date. The reasons seem obvious: in the US it handles a mere 5% of all retail sales (with lower shares worldwide), and it consistently provides access to a wide array of affordable goods. Yet, even with Amazon’s obvious lack of dominance in the general retail market, the EU and some of its member states are opening investigations.

Commissioner Margrethe Vestager’s probe into Amazon, which came to light in September, centers on whether Amazon is illegally using its dominant position vis-à-vis third-party merchants on its platforms in order to obtain data that it then uses either to promote its own direct sales, or else to develop competing products under its private label brands. More recently, Austria and Germany have launched separate investigations of Amazon rooted in many of the same concerns as those of the European Commission. The German investigation also focuses on whether the contractual relationships that third-party sellers enter into with Amazon are unfair because these sellers are “dependent” on the platform.

One of the fundamental, erroneous assumptions upon which these cases are built is the alleged “essentiality” of the underlying platform or input. In truth, these sorts of cases are more often based on stories of firms that chose to build their businesses in a way that relies on a specific platform. In other words, their own decisions — from which they substantially benefited, of course — made their investments highly “asset specific” and thus vulnerable to otherwise avoidable risks. When a platform on which these businesses rely makes a disruptive move, the third parties cry foul, even though the platform was not — nor should have been — under any obligation to preserve the status quo on behalf of third parties.

Essential or not, that is the question

All three investigations are effectively premised on a version of an “essential facilities” theory — the claim that Amazon is essential to these companies’ ability to do business.

There are good reasons that the US has tightly circumscribed the scope of permissible claims invoking the essential facilities doctrine. Such “duty to deal” claims are “at or near the outer boundary” of US antitrust law. And there are good reasons why the EU and its member states should be similarly skeptical.

Characterizing one firm as essential to the operation of other firms is tricky because “[c]ompelling [innovative] firms to share the source of their advantage… may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.” Further, the classification requires “courts to act as central planners, identifying the proper price, quantity, and other terms of dealing—a role for which they are ill-suited.”

The key difficulty is that alleged “essentiality” actually falls on a spectrum. On one end is something like a true monopoly utility that is actually essential to all firms that use its service as a necessary input; on the other is a firm that offers highly convenient services that make it much easier for firms to operate. This latter definition of “essentiality” describes firms like Google and Amazon, but it is not accurate to characterize such highly efficient and effective firms as truly “essential.” Instead, companies that choose to take advantage of the benefits such platforms offer, and to tailor their business models around them, suffer from an asset specificity problem.

Geoffrey Manne noted this problem in the context of the EU’s Google Shopping case:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

Third-party sellers that rely upon Amazon without a contingency plan are taking a calculated risk that, as business owners, they would typically be expected to manage. The investigations by European authorities are based on the notion that antitrust law might require Amazon to remove that risk by prohibiting it from undertaking certain conduct that might raise costs for its third-party sellers.

Implications and extensions

In the full issue brief, we consider the tensions in EU law between seeking to promote innovation and protect the competitive process, on the one hand, and the propensity of EU enforcers to rely on essential facilities-style arguments on the other. One of the fundamental errors that leads EU enforcers in this direction is that they confuse the distribution channel of the Internet with an antitrust-relevant market definition.

A claim based on some flavor of Amazon-as-essential-facility should be untenable given today’s market realities because Amazon is, in fact, just one mode of distribution among many. Commerce on the Internet is still just commerce. The only thing preventing a merchant from operating a viable business using any of a number of different mechanisms is the transaction costs it would incur adjusting to a different mode of doing business. Casting Amazon’s marketplace as an essential facility insulates third-party firms from the consequences of their own decisions — from business model selection to marketing and distribution choices. Commerce is nothing new and offline distribution channels and retail outlets — which compete perfectly capably with online — are well developed. Granting retailers access to Amazon’s platform on artificially favorable terms is no more justifiable than granting them access to a supermarket end cap, or a particular unit at a shopping mall. There is, in other words, no business or economic justification for granting retailers in the time-tested and massive retail market an entitlement to use a particular mode of marketing and distribution just because they find it more convenient.

[TOTM: The following is the first in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]

Just days before leaving office, the outgoing Obama FTC left what should have been an unwelcome parting gift for the incoming Commission: an antitrust suit against Qualcomm. This week the FTC — under a new Chairman and with an entirely new set of Commissioners — finished unwrapping its present, and rested its case in the trial begun earlier this month in FTC v. Qualcomm.

This complex case is about an overreaching federal agency seeking to set prices and dictate the business model of one of the world’s most innovative technology companies. As soon-to-be Acting FTC Chairwoman, Maureen Ohlhausen, noted in her dissent from the FTC’s decision to bring the case, it is “an enforcement action based on a flawed legal theory… that lacks economic and evidentiary support…, and that, by its mere issuance, will undermine U.S. intellectual property rights… worldwide.”

Implicit in the FTC’s case is the assumption that Qualcomm charges smartphone makers “too much” for its wireless communications patents — patents that are essential to many smartphones. But, as former FTC and DOJ chief economist, Luke Froeb, puts it, “[n]othing is more alien to antitrust than enquiring into the reasonableness of prices.” Even if Qualcomm’s royalty rates could somehow be deemed “too high” (according to whom?), excessive pricing on its own is not an antitrust violation under U.S. law.

Knowing this, the FTC “dances around that essential element” (in Ohlhausen’s words) and offers instead a convoluted argument that Qualcomm’s business model is anticompetitive. Qualcomm both sells wireless communications chipsets used in mobile phones, as well as licenses the technology on which those chips rely. According to the complaint, by licensing its patents only to end-users (mobile device makers) instead of to chip makers further up the supply chain, Qualcomm is able to threaten to withhold the supply of its chipsets to its licensees and thereby extract onerous terms in its patent license agreements.

There are numerous problems with the FTC’s case. Most fundamental among them is the “no duh” problem: Of course Qualcomm conditions the purchase of its chips on the licensing of its intellectual property; how could it be any other way? The alternative would require Qualcomm to actually facilitate the violation of its property rights by forcing it to sell its chips to device makers even if they refuse its patent license terms. In that world, what device maker would ever agree to pay more than a pittance for a patent license? The likely outcome is that Qualcomm charges more for its chips to compensate (or simply stops making them). Great, the FTC says; then competitors can fill the gap and — voila: the market is more competitive, prices will actually fall, and consumers will reap the benefits.

Except it doesn’t work that way. As many economists, including both the current and a prominent former chief economist of the FTC, have demonstrated, forcing royalty rates lower in such situations is at least as likely to harm competition as to benefit it. There is no sound theoretical or empirical basis for concluding that using antitrust to move royalty rates closer to some theoretical ideal will actually increase consumer welfare. All it does for certain is undermine patent holders’ property rights, virtually ensuring there will be less innovation.

In fact, given this inescapable reality, it is unclear why the current Commission is continuing to pursue the case at all. The bottom line is that, if it wins the case, the current FTC will have done more to undermine intellectual property rights than any other administration’s Commission has been able to accomplish.

It is not difficult to identify the frailties of the case that would readily support the agency backing away from pursuing it further. To begin with, the claim that device makers cannot refuse Qualcomm’s terms because the company effectively controls the market’s supply of mobile broadband modem chips is fanciful. While it’s true that Qualcomm is the largest supplier of these chipsets, it’s an absurdity to claim that device makers have no alternatives. In fact, Qualcomm has faced stiff competition from some of the world’s other most successful companies since well before the FTC brought its case. Samsung — the largest maker of Android phones — developed its own chip to replace Qualcomm’s in 2015, for example. More recently, Intel has provided Apple with all of the chips for its 2018 iPhones, and Apple is rumored to be developing its own 5G cellular chips in-house. In any case, the fact that most device makers have preferred to use Qualcomm’s chips in the past says nothing about the ability of other firms to take business from it.

The possibility (and actuality) of entry from competitors like Intel ensures that sophisticated purchasers like Apple have bargaining leverage. Yet, ironically, the FTC points to Apple’s claim that Qualcomm “forced” it to use Intel modems in its latest iPhones as evidence of Qualcomm’s dominance. Think about that: Qualcomm “forced” a company worth many times its own value to use a competitor’s chips in its new iPhones — and that shows Qualcomm has a stranglehold on the market?

The FTC implies that Qualcomm’s refusal to license its patents to competing chip makers means that competitors cannot reliably supply the market. Yet Qualcomm has never asserted its patents against a competing chip maker, every one of which uses Qualcomm’s technology without paying any royalties to do so. The FTC nevertheless paints the decision to license only to device makers as the aberrant choice of an exploitative, dominant firm. The reality, however, is that device-level licensing is the norm practiced by every company in the industry — and has been since the 1980s.

Not only that, but Qualcomm has not altered its licensing terms or practices since it was decidedly an upstart challenger in the market — indeed, since before it even started producing chips, and thus before it even had the supposed means to leverage its chip sales to extract anticompetitive licensing terms. It would be a remarkable coincidence if precisely the same licensing structure and the exact same royalty rate served the company’s interests both as a struggling startup and as an alleged rapacious monopolist. Yet that is the implication of the FTC’s theory.

When Qualcomm introduced CDMA technology to the mobile phone industry in 1989, it was a promising but unproven new technology in an industry dominated by different standards. Qualcomm happily encouraged chip makers to promote the standard by enabling them to produce compliant components without paying any royalties; and it willingly licensed its patents to device makers based on a percentage of sales of the handsets that incorporated CDMA chips. Qualcomm thus shared both the financial benefits and the financial risk associated with the development and sales of devices implementing its new technology.

Qualcomm’s favorable (to handset makers) licensing terms may have helped CDMA become one of the industry standards for 2G and 3G devices. But it’s an unsupportable assertion to say that those identical terms are suddenly the source of anticompetitive power, particularly as 2G and 3G are rapidly disappearing from the market and as competing patent holders gain prominence with each successive cellular technology standard.

To be sure, successful handset makers like Apple that sell their devices at a significant premium would prefer to share less of their revenue with Qualcomm. But their success was built in large part on Qualcomm’s technology. They may regret the terms of the deal that propelled CDMA technology to prominence, but Apple’s regret is not the basis of a sound antitrust case.

And although it’s unsurprising that manufacturers of premium handsets would like to use antitrust law to extract better terms from their negotiations with standard-essential patent holders, it is astonishing that the current FTC is carrying on the Obama FTC’s willingness to do it for them.

None of this means that Qualcomm is free to charge an unlimited price: standard-essential patents must be licensed on “FRAND” terms, meaning they must be fair, reasonable, and nondiscriminatory. It is difficult to assess what constitutes FRAND, but the most restrictive method is to estimate what negotiated terms would look like before a patent was incorporated into a standard. “[R]oyalties that are or would be negotiated ex ante with full information are a market bench-mark reflecting legitimate return to innovation,” writes Carl Shapiro, the FTC’s own economic expert in the case.

And that is precisely what happened here: We don’t have to guess what the pre-standard terms of trade would look like; we know them, because they are the same terms that Qualcomm offers now.

We don’t know exactly what the consequence would be for consumers, device makers, and competitors if Qualcomm were forced to accede to the FTC’s benighted vision of how the market should operate. But we do know that the market we actually have is thriving, with new entry at every level, enormous investment in R&D, and continuous technological advance. These aren’t generally the characteristics of a typical monopoly market. While the FTC’s effort to “fix” the market may help Apple and Samsung reap a larger share of the benefits, it will undoubtedly end up only hurting consumers.

Last week, I objected to Senator Warner relying on the flawed AOL/Time Warner merger conditions as a template for tech regulatory policy, but there is a much deeper problem contained in his proposals.  Although he does not explicitly say “big is bad” when discussing competition issues, the thrust of much of what he recommends would serve to erode the power of larger firms in favor of smaller firms without offering a justification for why this would result in a superior state of affairs. And he makes these recommendations without respect to whether those firms actually engage in conduct that is harmful to consumers.

In the Data Portability section, Warner says that “As platforms grow in size and scope, network effects and lock-in effects increase; consumers face diminished incentives to contract with new providers, particularly if they have to once again provide a full set of data to access desired functions.” Thus, he recommends a data portability mandate, which would theoretically serve to benefit startups by providing them with the data that large firms possess. The necessary implication here is that it is a per se good that small firms be benefited and large firms diminished, as the proposal is not grounded in any evaluation of the competitive behavior of the firms to which such a mandate would apply.

Warner also proposes an “interoperability” requirement on “dominant platforms” (which I criticized previously) in situations where “data portability alone will not produce procompetitive outcomes.” Again, the necessary implication is that it is a per se good that established platforms share their services with startups without respect to any competitive analysis of how those firms are behaving. The goal is preemptively to “blunt their ability to leverage their dominance over one market or feature into complementary or adjacent markets or products.”

Perhaps most perniciously, Warner recommends treating large platforms as essential facilities in some circumstances. To this end he states that:

Legislation could define thresholds – for instance, user base size, market share, or level of dependence of wider ecosystems – beyond which certain core functions/platforms/apps would constitute ‘essential facilities’, requiring a platform to provide third party access on fair, reasonable and non-discriminatory (FRAND) terms and preventing platforms from engaging in self-dealing or preferential conduct.

But, as I’ve previously noted with respect to imposing “essential facilities” requirements on tech platforms,

[T]he essential facilities doctrine is widely criticized, by pretty much everyone. In their respected treatise, Antitrust Law, Herbert Hovenkamp and Philip Areeda have said that “the essential facility doctrine is both harmful and unnecessary and should be abandoned”; Michael Boudin has noted that the doctrine is full of “embarrassing weaknesses”; and Gregory Werden has opined that “Courts should reject the doctrine.”

Indeed, as I also noted, “the Supreme Court declined to recognize the essential facilities doctrine as a distinct rule in Trinko, where it instead characterized the exclusionary conduct in Aspen Skiing as ‘at or near the outer boundary’ of Sherman Act § 2 liability.”

In short, it’s very difficult to know when access to a firm’s internal functions might be critical to the facilitation of a market. It simply cannot be true that a firm becomes bound under onerous essential facilities requirements (or classification as a public utility) simply because other firms find it more convenient to use its services than to develop their own.

The truth of what is actually happening in these cases, however, is that third-party firms are choosing to anchor their business to the processes of another firm which generates an “asset specificity” problem that they then seek the government to remedy:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

This is naturally a calculated risk that a firm may choose to make, but it is a risk. To pry open Google or Facebook for the benefit of competitors that choose to play to Google and Facebook’s user base, rather than opening markets of their own, punishes the large players for being successful while also rewarding behavior that shies away from innovation. Further, such a policy would punish the large platforms whenever they innovate with their services in any way that might frustrate third-party “integrators” (see, e.g., Foundem’s claims that Google’s algorithm updates meant to improve search quality for users harmed Foundem’s search rankings).  

Rather than encouraging innovation, blessing this form of asset specificity would have the perverse result of entrenching the status quo.

In all of these recommendations from Senator Warner, there is no claim that any of the targeted firms will have behaved anticompetitively, but merely that they are above a certain size. This is to say that, in some cases, big is bad.

Senator Warner’s policies would harm competition and innovation

As Geoffrey Manne and Gus Hurwitz have recently noted, these views run completely counter to the last half-century or more of economic and legal learning that has occurred in antitrust law. From its murky, politically motivated origins through the early 1960s, when the Structure-Conduct-Performance (“SCP”) interpretive framework was ascendant, antitrust law was more or less guided by the gut feeling of regulators that big business necessarily harmed the competitive process.

Thus, at its height with SCP, “big is bad” antitrust relied on presumptions that large firms over a certain arbitrary threshold were harmful and should be subjected to more searching judicial scrutiny when merging or conducting business.

A paradigmatic example of this approach can be found in Von’s Grocery, where the Supreme Court prevented the merger of two relatively small grocery chains. Combined, the two chains would have constituted a mere 9 percent of the market, yet the Supreme Court, relying on the SCP framework’s aversion to concentration in itself, blocked the merger notwithstanding procompetitive justifications that would have allowed the combined entity to compete more effectively in a market that was coming to be dominated by large supermarkets.

As Manne and Hurwitz observe: “this decision meant breaking up a merger that did not harm consumers, on the one hand, while preventing firms from remaining competitive in an evolving market by achieving efficient scale, on the other.” And this gets to the central defect of Senator Warner’s proposals. He ties his decisions to interfere in the operations of large tech firms to their size without respect to any demonstrable harm to consumers.

To approach antitrust this way — that is, to roll the clock back to a period before there was a well-defined and administrable standard for antitrust — is to open the door for regulation by political whim. But the value of the contemporary consumer welfare test is that it provides knowable guidance that limits both the undemocratic conduct of politically motivated enforcers as well as the opportunities for private firms to engage in regulatory capture. As Manne and Hurwitz observe:

Perhaps the greatest virtue of the consumer welfare standard is not that it is the best antitrust standard (although it is) — it’s simply that it is a standard. The story of antitrust law for most of the 20th century was one of standard-less enforcement for political ends. It was a tool by which any entrenched industry could harness the force of the state to maintain power or stifle competition.

While it is unlikely that Senator Warner intends to entrench politically powerful incumbents, or enable regulation by whim, those are the likely effects of his proposals.

Antitrust law has a rich set of tools for dealing with competitive harm. Introducing legislation to define arbitrary thresholds for limiting the potential power of firms will ultimately undermine the power of those tools and erode the welfare of consumers.

 

By Pinar Akman, Professor of Law, University of Leeds*

The European Commission’s decision in Google Android cuts a fine line between punishing a company for its success and punishing a company for falling afoul of the rules of the game. Which side of the line it actually falls on cannot be fully understood until the Commission publishes its full decision. Much depends on the intricate facts of the case. As the full decision may take months to come, this post offers merely the author’s initial thoughts on the decision on the basis of the publicly available information.

The eye-watering fine of $5.1 billion — which together with the fine of $2.7 billion in the Google Shopping decision from last year would (according to one estimate) suffice to fund for almost one year the additional yearly public spending necessary to eradicate world hunger by 2030 — will not be further discussed in this post. This is because the fine is assumed to have been duly calculated on the basis of the Commission’s relevant Guidelines, and, from a legal and commercial point of view, the absolute size of the fine is not as important as the infringing conduct and the remedy Google will need to adopt to comply with the decision.

First things first. This post proceeds on the premise that the aim of competition law is to prevent the exclusion of competitors that are (at least) as efficient as the dominant incumbent, whose exclusion would ultimately harm consumers.

Next, it needs to be noted that the Google Android case is a more conventional antitrust case than Google Shopping in the sense that one can at least envisage a potentially robust antitrust theory of harm in the former case. If a dominant undertaking ties its products together to exclude effective competition in some of these markets or if it pays off customers to exclude access by its efficient competitors to consumers, competition law intervention may be justified.

The central question in Google Android is whether on the available facts this appears to have happened.

What we know and market definition

The premise of the case is that Google used its dominance in the Google Play Store (which enables users to download apps onto their Android phones) to “cement Google’s dominant position in general internet search.”

It is interesting that the case appears to concern a dominant undertaking leveraging its dominance from a market in which it is dominant (Google Play Store) into another market in which it is also dominant (internet search). As far as this author is aware, most (if not all?) EU tying cases to date have involved a dominant undertaking leveraging its dominance in one market to distort or eliminate competition in an otherwise competitive market.

Thus, for example, in Microsoft (Windows Operating System —> media players), Hilti (patented cartridge strips —> nails), and Tetra Pak II (packaging machines —> non-aseptic cartons), the tied market was actually or potentially competitive, and this was why the tying was alleged to have eliminated competition. It will be interesting to see which case the Commission uses as precedent in its decision — more on that later.

Also noteworthy is that the Commission does not appear to have defined a separate mobile search market that would have been competitive but for Google’s alleged leveraging. The market has been defined as the general internet search market. So, according to the Commission, the Google Search App and Google Search engine appear to be one and the same thing, and desktop and mobile devices are equivalent (or substitutable).

Finding mobile and desktop devices to be equivalent to one another may have implications for other cases including the ongoing appeal in Google Shopping where, for example, the Commission found that “[m]obile [apps] are not a viable alternative for replacing generic search traffic from Google’s general search results pages” for comparison shopping services. The argument that mobile apps and mobile traffic are fundamental in Google Android but trivial in Google Shopping may not play out favourably for the Commission before the Court of Justice of the EU.

Another interesting market definition point is that the Commission has found Apple not to be a competitor to Google in the relevant market defined by the Commission: the market for “licensable smart mobile operating systems.” Apple does not fall within that market because Apple does not license its mobile operating system to anyone: Apple’s model eliminates all possibility of competition from the start and is by definition exclusive.

Although there is some internal logic in the Commission’s exclusion of Apple from the upstream market that it has defined, is this not a bit of a definitional stop? How can Apple compete with Google in the market as defined by the Commission when Apple allows its operating system to be used only on devices that Apple itself manufactures?

To be fair, the Commission does consider there to be some competition between Apple and Android devices at the level of consumers — just not sufficient to constrain Google at the upstream, manufacturer level.

Nevertheless, the implication of the Commission’s assessment, which separates the upstream and downstream markets in this way, is akin to saying that the world’s two largest corn producers do not compete with one another in the market for corn because one of them uses its corn exclusively in its own-brand corn flakes.

Although the Commission cabins the use of supply-side substitutability in market definition, its own guidance on the topic notes that

Supply-side substitutability may also be taken into account when defining markets in those situations in which its effects are equivalent to those of demand substitution in terms of effectiveness and immediacy. This means that suppliers are able to switch production to the relevant products and market them in the short term….

Apple could — presumably — rather immediately and at minimal cost produce and market a version of iOS for use on third-party device makers’ devices. By the Commission’s own definition, it would seem to make sense to include Apple in the relevant market. Nevertheless, it has apparently not done so here.

The message that the Commission sends with this finding is that if Android had not been open source and freely available, and if Google had competed with Apple with its own walled garden built around exclusivity, it is possible that none of its practices would have raised any concerns. Or should Apple be expecting a Statement of Objections from the EU Commission next?

Is Microsoft really the relevant precedent?

Given that Google Android appears to revolve around the idea of tying and leveraging, the EU Commission’s infringement decision against Microsoft, which found an abusive tie in Microsoft’s tying of Windows Operating System with Windows Media Player, appears to be the most obvious precedent, at least for the tying part of the case.

There are, however, potentially important factual differences between the two cases. To take just a few examples:

  • Microsoft charged for the Windows Operating System, whereas Google does not;
  • Microsoft tied the setting of Windows Media Player as the default to OEMs’ licensing of the operating system (Windows), whereas Google ties the setting of Search as the default to device makers’ use of other Google apps, while allowing them to use the operating system (Android) without any Google apps; and
  • Downloading competing media players was difficult at the time due to download speeds and lack of user familiarity, whereas it is now trivial and commonplace for users to download apps that compete with Google’s.

Moreover, there are also some conceptual hurdles in finding the conduct to be that of tying.

First, the difference between “pre-installed,” “default,” and “exclusive” matters a lot in establishing whether effective competition has been foreclosed. The Commission’s Press Release notes that to pre-install Google Play, manufacturers have to also pre-install Google Search App and Google Chrome. It also states that Google Search is the default search engine on Google Chrome. The Press Release does not indicate that Google Search App has to be the exclusive or default search app. (It is worth noting, however, that the Statement of Objections in Google Android did allege that Google violated EU competition rules by requiring Search to be installed as the default. We will have to await the decision itself to see if this was dropped from the case or simply not mentioned in the Press Release).

In fact, that the other infringement found is Google’s making payments to manufacturers in return for exclusively pre-installing the Google Search App indirectly suggests that not every manufacturer pre-installs the Google Search App as the exclusive, pre-installed search app. This means that any other search app (provider) can also (request to) be pre-installed on these devices. The same goes for the browser app.

Of course, regardless, even if the manufacturer does not pre-install competing apps, the consumer is free to download any other app — for search or browsing — as they wish, and can do so in seconds.

In short, pre-installation on its own does not necessarily foreclose competition, and thus may not constitute an illegal tie under EU competition law. This is particularly so when download speeds are fast (unlike the case at the time of Microsoft) and consumers regularly do download numerous apps.

What may, however, potentially foreclose effective competition is where a dominant undertaking makes payments to stop its customers, as a practical matter, from selling its rivals’ products. Intel, for example, was found to have abused its dominant position through payments to a computer retailer in return for its not selling computers with its competitor AMD’s chips, and to computer manufacturers in return for delaying the launch of computers with AMD chips.

In Google Android, the exclusivity provision that would require manufacturers to pre-install Google Search App exclusively in return for financial incentives may be deemed to be similar to this.

Having said that, unlike in Intel where a given computer can have a CPU from only one given manufacturer, even the exclusive pre-installation of the Google Search App would not have prevented consumers from downloading competing apps. So, again, in theory effective competition from other search apps need not have been foreclosed.

It must also be noted that just because a Google app is pre-installed does not mean that it generates any revenue for Google — consumers have to actually choose to use that app, as opposed to another one that they might prefer, in order for Google to earn any revenue from it. The Commission seems to place substantial weight on pre-installation, which it alleges creates “a status quo bias.”

The concern with this approach is that it is not possible to know whether those consumers who do not download competing apps refrain from doing so out of a preference for Google’s apps or, instead, for other reasons that might indicate that competition is not working. Indeed, one hurdle as regards conceptualising the infringement as tying is that it would require establishing that a significant number of phone users would actually prefer to use Google Play Store (the tying product) without Google Search App (the tied product).

This is because, according to the Commission’s Guidance Paper, establishing tying starts with identifying two distinct products, and

[t]wo products are distinct if, in the absence of tying or bundling, a substantial number of customers would purchase or would have purchased the tying product without also buying the tied product from the same supplier.

Thus, if a substantial number of customers would not want to use Google Play Store without also preferring to use Google Search App, this would cause a conceptual problem for making out a tying claim.

In fact, the conduct at issue in Google Android may be closer to a refusal to supply type of abuse.

A refusal-to-supply framing also seems to make more sense of the finding that Google’s prevention of the development of Android forks is an abuse. In this context, it will be interesting to see how the Commission overcomes the argument that Android forks can be developed freely and that Google may have legitimate business reasons for wanting to associate its own, proprietary apps only with a certain, standardised-quality version of the operating system.

More importantly, the possible underlying theory in this part of the case is that the Google apps — and perhaps even the licensed version of Android — are a “must-have,” which is close to an argument that they are an essential facility in the context of Android phones. But that would indeed require a refusal to supply type of abuse to be established, which does not appear to be the case.

What will happen next?

To answer the question raised in the title of this post — whether the Google Android decision will benefit consumers — one needs to consider what Google may do in order to terminate the infringing conduct as required by the Commission, whilst also still generating revenue from Android.

This is because unbundling Google Play Store, Google Search App and Google Chrome (to allow manufacturers to pre-install Google Play Store without the latter two) will disrupt Google’s main revenue stream (i.e., ad revenue generated through the use of Google Search App or Google Search within the Chrome app) which funds the free operating system. This could lead Google to start charging for the operating system, and limiting to whom it licenses the operating system under the Commission’s required, less-restrictive terms.

As the Commission does not seem to think that Apple constrains Google when it comes to dealings with device manufacturers, in theory, Google should be able to charge up to the monopoly level licensing fee to device manufacturers. If that happens, the price of Android smartphones may go up. It is possible that there is a new competitor lurking in the woods that will grow and constrain that exercise of market power, but how this will all play out for consumers — as well as app developers who may face increasing costs due to the forking of Android — really remains to be seen.

 

* Pinar Akman is Professor of Law, Director of Centre for Business Law and Practice, University of Leeds, UK. This piece has not been commissioned or funded by any entity. The author has not been involved in the Google Android case in any capacity. In the past, the author wrote a piece on the Commission’s Google Shopping case, ‘The Theory of Abuse in Google Search: A Positive and Normative Assessment under EU Competition Law,’ supported by a research grant from Google. The author would like to thank Peter Whelan, Konstantinos Stylianou, and Geoffrey Manne for helpful comments. All errors remain her own. The author can be contacted here.

What happened

Today, following a six-year investigation into Google’s business practices in India, the Competition Commission of India (CCI) issued its ruling.

Two things, in particular, are remarkable about the decision. First, while the CCI’s staff recommended a finding of liability on a litany of claims (the exact number is difficult to infer from the Commission’s decision, but it appears to be somewhere in the double digits), the Commission accepted its staff’s recommendation on only three — and two of those involve conduct no longer employed by Google.

Second, nothing in the Commission’s finding of liability or in the remedy it imposes suggests it approaches the issue as the EU does. To be sure, the CCI employs rhetoric suggesting that “search bias” can be anticompetitive. But its focus remains unwaveringly on the welfare of the consumer, not on the hyperbolic claims of Google’s competitors.

What didn’t happen

In finding liability on only a single claim involving ongoing practices — the claim arising from Google’s “unfair” placement of its specialized flight search (Google Flights) results — the Commission also roundly rejected a host of other claims (more than once with strong words directed at its staff for proposing such woefully unsupported arguments). Among these are several that have been raised (and unanimously rejected) by competition regulators elsewhere in the world. These claims related to a host of Google’s practices, including:

  • Search bias involving the treatment of specialized Google content (like Google Maps, YouTube, Google Reviews, etc.) other than Google Flights
  • Search bias involving the display of Universal Search results (including local search, news search, image search, etc.), except where these results are fixed to a specific position on every results page (as was the case in India before 2010), instead of being inserted wherever most appropriate in context
  • Search bias involving OneBox results (instant answers to certain queries that are placed at the top of search results pages), even where answers are drawn from Google’s own content and specific, licensed sources (rather than from crawling the web)
  • Search bias involving sponsored, vertical search results (e.g., Google Shopping results) other than Google Flights. These results are not determined by the same algorithm that returns organic results, but are instead more like typical paid search advertising results that sometimes appear at the top of search results pages. The Commission did find that Google’s treatment of its Google Flight results (another form of sponsored result) violated India’s competition laws
  • The operation of Google’s advertising platform (AdWords), including the use of a “Quality Score” in its determination of an ad’s relevance (something Josh Wright and I discuss at length here)
  • Google’s practice of allowing advertisers to bid on trademarked keywords
  • Restrictions placed by Google upon the portability of advertising campaign data to other advertising platforms through its AdWords API
  • Distribution agreements that set Google as the default (but not exclusive) search engine on certain browsers
  • Certain restrictions in syndication agreements with publishers (websites) through which Google provides search and/or advertising (Google’s AdSense offering). The Commission found that negotiated search agreements that require Google to be the exclusive search provider on certain sites did violate India’s competition laws. It should be noted, however, that Google has very few of these agreements, and no longer enters into them, so the finding is largely historical. All of the other assertions regarding these agreements (and there were numerous claims involving a number of clauses in a range of different agreements) were rejected by the Commission.

Just like the competition authorities in the US, Canada, and Taiwan that have properly focused on consumer welfare in their Google investigations, the CCI found important consumer benefits from these practices that outweigh any inconveniences they may impose on competitors. And, just as in those jurisdictions, all of these claims were rejected by the Commission.

Still improperly assessing Google’s dominance

The biggest problem with the CCI’s decision is its acceptance — albeit moderated in important ways — of the notion that Google owes a special duty to competitors given its position as an alleged “gateway” to the Internet:

In the present case, since Google is the gateway to the internet for a vast majority of internet users, due to its dominance in the online web search market, it is under an obligation to discharge its special responsibility. As Google has the ability and the incentive to abuse its dominant position, its “special responsibility” is critical in ensuring not only the fairness of the online web search and search advertising markets, but also the fairness of all online markets given that these are primarily accessed through search engines. (para 202)

As I’ve discussed before, a proper analysis of the relevant markets in which Google operates would make clear that Google is beset by actual and potential competitors at every turn. Access to consumers by advertisers, competing search services, other competing services, mobile app developers, and the like is readily available. The lines between markets drawn by the CCI are based on superficial distinctions that are of little importance to the actual relevant market.

Consider, for example: Users seeking product information can get it via search, but also via Amazon and Facebook; advertisers can place ad copy and links in front of millions of people on search results pages, and they can also place them in front of millions of people on Facebook and Twitter. Meanwhile, many specialized search competitors like Yelp receive most of their traffic from direct navigation and from their mobile apps. In short, the assumption of market dominance made by the CCI (and so many others these days) is based on a stilted conception of the relevant market, as Google is far from the only channel through which competitors can reach consumers.

The importance of innovation in the CCI’s decision

Of course, it’s undeniable that Google is an important mechanism by which competitors reach consumers. And, crucially, nowhere did the CCI adopt Google’s critics’ and competitors’ frequently asserted position that Google is, in effect, an “essential facility” requiring extremely demanding limitations on its ability to control its product when doing so might impede its rivals.

So, while the CCI defines the relevant markets and adopts legal conclusions that confer special importance on Google’s operation of its general search results pages, it stops short of demanding that Google treat competitors’ offerings on equal terms with its own, as would typically be required of essential facilities (or their close cousin, public utilities).

Significantly, the Commission weighs the imposition of even these “special responsibilities” against the effects of such duties on innovation, particularly with respect to product design.

The CCI should be commended for recognizing that any obligation imposed by antitrust law on a dominant company to refrain from impeding its competitors’ access to markets must stop short of requiring the company to stop innovating, even when its product innovations might make life difficult for its competitors.

Of course, some product design choices can be, on net, anticompetitive. But innovation generally benefits consumers, and it should be impeded only where the innovation clearly results in net consumer harm. Thus:

[T]he Commission is cognizant of the fact that any intervention in technology markets has to be carefully crafted lest it stifles innovation and denies consumers the benefits that such innovation can offer. This can have a detrimental effect on economic welfare and economic growth, particularly in countries relying on high growth such as India…. [P]roduct design is an important and integral dimension of competition and any undue intervention in designs of SERP [Search Engine Results Pages] may affect legitimate product improvements resulting in consumer harm. (paras 203-04).

As a consequence of this cautious approach, the CCI refused to accede to its staff’s findings of liability based on Google’s treatment of its vertical search results without considering how Google’s incorporation of these specialized results improved its product for consumers. Thus, for example:

The Commission is of opinion that requiring Google to show third-party maps may cause a delay in response time (“latency”) because these maps reside on third-party servers…. Further, requiring Google to show third-party maps may break the connection between Google’s local results and the map…. That being so, the Commission is of the view that no case of contravention of the provisions of the Act is made out in Google showing its own maps along with local search results. The Commission also holds that the same consideration would apply for not showing any other specialised result designs from third parties. (para 224 (emphasis added))

The CCI’s laudable and refreshing focus on consumer welfare

Even where the CCI determined that Google’s current practices violate India’s antitrust laws (essentially only with respect to Google Flights), it imposed a remedy that does not demand alteration of the overall structure of Google’s search results, nor its algorithmic placement of those results. In fact, the most telling indication that India’s treatment of product design innovation embodies a consumer-centric approach markedly different from that pushed by Google’s competitors (and adopted by the EU) is its remedy.

Following its finding that

[p]rominent display and placement of Commercial Flight Unit with link to Google’s specialised search options/ services (Flight) amounts to an unfair imposition upon users of search services as it deprives them of additional choices (para 420),

the CCI determined that the appropriate remedy for this defect was:

So far as the contravention noted by the Commission in respect of Flight Commercial Unit is concerned, the Commission directs Google to display a disclaimer in the commercial flight unit box indicating clearly that the “search flights” link placed at the bottom leads to Google’s Flights page, and not the results aggregated by any other third party service provider, so that users are not misled. (para 422 (emphasis added))

Indeed, what is most notable — and laudable — about the CCI’s decision is that both the alleged problem, as well as the proposed remedy, are laser-focused on the effect on consumers — not the welfare of competitors.

Where the EU’s recent Google Shopping decision considers that this sort of non-neutral presentation of Google search results harms competitors and demands equal treatment by Google of rivals seeking access to Google’s search results page, the CCI sees instead that non-neutral presentation of results could be confusing to consumers. It does not demand that Google open its doors to competitors, but rather that it more clearly identify when its product design prioritizes Google’s own content rather than determine priority based on its familiar organic search results algorithm.

This distinction is significant. For all the language in the decision asserting Google’s dominance and suggesting possible impediments to competition, the CCI does not, in fact, view Google’s design of its search results pages as a contrivance intended to exclude competitors from accessing markets.

The CCI’s remedy suggests that it has no problem with Google maintaining control over its search results pages and determining what results, and in what order, to serve to consumers. Its sole concern, rather, is that Google not get a leg up at the expense of consumers by misleading them into thinking that its product design is something that it is not.

Rather than dictate how Google should innovate or force it to perpetuate an outdated design in the name of preserving access by competitors bent on maintaining the status quo, the Commission embraces the consumer benefits of Google’s evolving products, and seeks to impose only a narrowly targeted tweak aimed directly at the quality of consumers’ interactions with Google’s products.

Conclusion

As some press accounts of the CCI’s decision trumpet, the Commission did impose liability on Google for abuse of a dominant position. But its similarity with the EU’s abuse of dominance finding ends there. The CCI rejected many more claims than it adopted, and it carefully tailored its remedy to the welfare of consumers, not the lamentations of competitors. Unlike the EU, the CCI’s finding of a violation is tempered by its concern for avoiding harmful constraints on innovation and product design, and its remedy makes this clear. Whatever the defects of India’s decision, it offers a welcome return to consumer-centric antitrust.