
In the battle of ideas, it is quite useful to be able to brandish clear and concise debating points in support of a proposition, backed by solid analysis. Toward that end, in a recent primer about antitrust law published by the Mercatus Center, I advance four reasons to reject neo-Brandeisian critiques of the consensus (at least, until very recently) consumer-welfare-centric approach to antitrust enforcement. My four points, drawn from the primer (with citations deleted and hyperlinks added), are as follows:

First, the underlying assumptions of rising concentration and declining competition on which the neo-Brandeisian critique is largely based (and which are reflected in the introductory legislative findings of the Competition and Antitrust Law Enforcement Reform Act [of 2021, introduced by Senator Klobuchar on February 4]) lack merit. Chapter 6 of the 2020 Economic Report of the President, dealing with competition policy, summarizes research debunking those assumptions. To begin with, it shows that studies complaining that competition is in decline are fatally flawed. Studies such as one in 2016 by the Council of Economic Advisers rely on overbroad market definitions that say nothing about competition in specific markets, let alone across the entire economy. Indeed, in 2018, Professor Carl Shapiro, chief DOJ antitrust economist in the Obama administration, admitted that a key summary chart in the 2016 study “is not informative regarding overall trends in concentration in well-defined relevant markets that are used by antitrust economists to assess market power, much less trends in concentration in the U.S. economy.” Furthermore, as the 2020 report points out, other literature claiming that competition is in decline rests on a problematic assumption that increases in concentration (even assuming such increases exist) beget softer competition. Problems with this assumption have been understood since at least the 1970s. The most fundamental problem is that there are alternative explanations (such as exploitation of scale economies) for why a market might demonstrate both high concentration and high markups—explanations that are still consistent with procompetitive behavior by firms. (In a related vein, research by other prominent economists has exposed flaws in studies that purport to show a weakening of merger enforcement standards in recent years.) Finally, the 2020 report notes that the real solution to perceived economic problems may be less government, not more: “As historic regulatory reform across American industries has shown, cutting government-imposed barriers to innovation leads to increased competition, strong economic growth, and a revitalized private sector.”

Second, quite apart from the flawed premises that inform the neo-Brandeisian critique, specific neo-Brandeisian reforms appear highly problematic on economic grounds. Breakups of dominant firms or near prohibitions on dominant firm acquisitions would sacrifice major economies of scale and potential efficiencies of integration, harming consumers without offering any proof that the new market structures in reshaped industries would yield consumer or producer benefits. Furthermore, a requirement that merging parties prove a negative (that the merger will not harm competition) would limit the ability of entrepreneurs and market makers to act on information about misused or underutilized assets through the merger process. This limitation would reduce economic efficiency. After-the-fact studies indicating that a large percentage of mergers do not add wealth and do not otherwise succeed as much as projected miss this point entirely. They ignore what the world would be like if mergers were much more difficult to enter into: a world of lower efficiency and weaker dynamic economic growth, because there would be less incentive to seek out market-improving opportunities.

Third, one aspect of the neo-Brandeisian approach to antitrust policy is at odds with fundamental notions of fair notice of wrongdoing and equal treatment under neutral principles, notions that are central to the rule of law. In particular, the neo-Brandeisian call for considering a multiplicity of new factors such as fairness, labor, and the environment when enforcing policy is troublesome. There is no neutral principle for assigning weights to such divergent interests, and (even if weights could be assigned) there are no economic tools for accurately measuring how a transaction under review would affect those interests. It follows that abandoning antitrust law’s consumer-welfare standard in favor of an ill-defined multifactor approach would spawn confusion in the private sector and promote arbitrariness in enforcement decisions, undermining the transparency that is a key aspect of the rule of law. Whereas concerns other than consumer welfare may of course be validly considered in setting public policy, they are best dealt with under other statutory schemes, not under antitrust law.

Fourth, and finally, neo-Brandeisian antitrust proposals are not a solution to widely expressed concerns that big companies in general, and large digital platforms in particular, are undermining free speech by censoring content of which they disapprove. Antitrust law is designed to prevent businesses from creating impediments to market competition that reduce economic welfare; it is not well-suited to policing companies’ determinations regarding speech. To the extent that policymakers wish to address speech censorship on large platforms, they should consider other regulatory institutions that would be better suited to the task (such as communications law), while keeping in mind First Amendment limitations on the ability of government to control private speech.

In light of these four points, the primer concludes that the neo-Brandeisian-inspired antitrust “reform” proposals being considered by Congress should be rejected:

[E]fforts to totally reshape antitrust policy into a quasi-regulatory system that arbitrarily blocks and disincentivizes (1) welfare-enhancing mergers and (2) an array of actions by dominant firms are highly troubling. Such interventionist proposals ignore the lack of evidence of serious competitive problems in the American economy and appear arbitrary compared to the existing consumer-welfare-centric antitrust enforcement regime. To use a metaphor, Congress and public officials should avoid a drastic new antitrust cure for an anticompetitive disease that can be handled effectively with existing antitrust medications.

Let us hope that the serious harm associated with neo-Brandeisian legislative “deformation” (a more apt term than reformation) of the antitrust laws is given a full legislative airing before Congress acts.

This guest post is by Patrick Todd, an England-qualified solicitor and author on competition law/policy in digital markets.

The above quote is not about Democrat-nominee hopeful Elizabeth Warren’s policy views on sport. It is in fact an analogy to her proposal of splitting Google, Amazon, Facebook and Apple (“GAFA”) apart from their respective ancillary lines of business, a solution to one of the current hot topics in antitrust law, namely the alleged practice of GAFA exploiting the popularity of their platforms to gain competitive advantages in neighboring markets. Can a “referee” favor its own “players” in the digital platform game? Can we blame the “referee” if one “player” knocks out another? Should the “referee” be forced to intervene to protect said “player”? The analogy reflects a growing concern that platform owners’ entry into adjacent markets that are, or theoretically could be, served by independent firms creates an irreconcilable misalignment between the interests of users, independent companies and platform owners. As Margrethe Vestager, European Competition Commissioner and Vice-President of the European Commission (“EC”), has said:

[O]ne of the biggest issues we face is with platform businesses that also compete in other markets, with companies that depend on the platform. That means that the very same business becomes both player and referee, competing with others that rely on the platform, but also setting the rules that govern that competition.

Whether and to what extent successful firms in digital markets can enter and compete in neighboring markets, utilizing their existing expertise, has matured into an existential question that plagues and polarizes the antitrust community. Perhaps the most famous and debated case is the EC’s 2017 decision in Google Shopping, where it concluded that Google’s preferential placement of its comparison shopping results in a special box at the top of its search pages constituted an abuse of a dominant position under Article 102 TFEU. The EC found that such prominent placement, coupled with the denial of access to the box for rival price comparison websites, had the effect of driving traffic to Google’s own shopping site, depriving Google’s rivals of user-traffic. Google is strongly contesting both the facts and theory underpinning this decision in its appeal, the hearing for which took place in February. Meanwhile, complaints in relation to Google’s similar treatment of its other ancillary services, such as vacation rentals, have followed suit. Similar allegations have been made against Apple (see e.g. here), Amazon (see e.g. here) and Facebook (see e.g. here) for the way they design their platforms and organize their search results.

What links these cases, investigations and accusations is the doctrine of leverage, i.e. the practice of exploiting one’s market power in one market in order to extend that power to an adjacent market. Importantly, leveraging is not a standalone theory of harm in antitrust law: it is more appropriately regarded as a category of conduct where competitive effects are felt in a neighboring market (think tying, refusals to deal, margin squeeze, etc.). Examples of such conduct in the platform context could include platform owners: promoting their own adjacent products/services in search result pages; bundling, tying or pre-installing their adjacent products/services with platform software code; shutting off access to Application Programming Interfaces or data to third parties to decrease the relative interoperability of their rivals’ products/services; or generally reducing the compatibility of third-party products/services with the platform as a means of distribution.

This post examines various proposals that have been put forward to solve the alleged prevalence of anticompetitive leveraging in digital platform markets, namely:

  1. blocking platform owners from also owning adjacent products/services;
  2. prohibiting “favoring” or “self-preferencing” behavior (i.e. enforcing a non-discrimination standard); and
  3. reversing the burden of proof so that dominant platform firms bear the burden of showing that such conduct does not harm competition.

Each of these proposals would abrogate the “consumer welfare standard” baked into antitrust law, which permits exclusionary behavior as long as it constitutes “competition on the merits”, i.e. conduct that ultimately benefits consumers. As Judge Frank Easterbrook has mercilessly held, “injuries to rivals are byproducts of vigorous competition, and the antitrust laws are not balm for rivals’ wounds.” Antitrust law maintains a distinction between pro- and anti-competitive leveraging because consumers frequently benefit from the conduct outlined above. Conversely, implementing any of the above proposals would decrease or negate entirely the ability of platform owners to show that such conduct benefits consumers.

This post then examines whether protecting competition in adjacent markets is important enough to sacrifice the consumer benefits that flow from pro-competitive leveraging. Empirical criteria that have been present in comparable instances of such intervention, such as bottleneck power over distribution, evidence of widespread harm to competition in neighboring markets, static product boundaries, and a lack or unimportance of integrative efficiencies, are not satisfied in the current context. Absent some proof that they are, the consumer welfare framework under antitrust law should prevail without recourse to more intrusive intervention.

Proposals to regulate the activities of digital platform owners in neighboring markets

1.      Structural separation

Some scholars, such as Lina Khan, propose to implement “[s]tructural remedies and prophylactic bans [to] limit the ability of dominant platforms to enter certain distinct lines of business.” Senator Warren has echoed this proposal, calling for “large tech platforms to be designated as ‘Platform Utilities’ and broken apart from any participant on that platform.” Under this proposal, Amazon would be unable to act both as an online marketplace and a seller on its own marketplace, Google would be unable to act as both a search engine and a mapping provider, and Apple and Google would be unable to act as both producers of mobile operating systems and apps that run on those operating systems. Meanwhile, Facebook would be unable to operate both its core social media platform and separate services, such as dating, local buy-and-sell, and other businesses like Instagram and WhatsApp. Khan posits that such separation is the primary method of “prevent[ing] leveraging and eliminat[ing] a core conflict of interest currently embedded in the business model of dominant platforms.”

A rule that prohibits entry into neighboring markets will certainly catch all instances of harmful leveraging, but it will inevitably also condemn all instances of leveraging that are in fact beneficial to consumers (see below for examples). Moreover, structural separation would also condemn efficiencies stemming from vertical integration that do not depend on leveraging behavior, e.g. elimination of double marginalization. As Bruce Owen sums up, such intervention “is not necessary, and may well reduce welfare by deterring efficient investments,” in circumstances where “[e]mpirical evidence that vertical integration or vertical restraints are harmful is weak, compared to evidence that vertical integration is beneficial.”
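To make the double-marginalization point concrete, consider a minimal textbook sketch (linear demand and a single constant marginal cost are assumptions chosen for illustration; they are not drawn from the post or from any particular case):

$$
\begin{aligned}
&\text{Inverse demand: } p = a - q; \quad \text{upstream marginal cost } c; \quad \text{no other costs.}\\
&\text{Separate firms: the downstream buys at wholesale price } w \text{ and sets } q = \tfrac{a-w}{2};\\
&\text{anticipating this, the upstream sets } w = \tfrac{a+c}{2}, \text{ so } q_{\mathrm{sep}} = \tfrac{a-c}{4} \text{ and } p_{\mathrm{sep}} = \tfrac{3a+c}{4}.\\
&\text{Integrated firm: it maximizes } (a-q-c)\,q, \text{ so } q_{\mathrm{int}} = \tfrac{a-c}{2} \text{ and } p_{\mathrm{int}} = \tfrac{a+c}{2} < p_{\mathrm{sep}}.
\end{aligned}
$$

On these assumptions the integrated firm produces twice as much and charges a lower retail price, because the two successive markups collapse into one. That consumer-facing gain exists independently of any leveraging, which is why a structural-separation rule would forgo it even where no exclusionary conduct occurs.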

2.      Non-discrimination principles

Other scholars seek a prohibition on leveraging, i.e. a “non-discrimination” or “platform neutrality” standard whereby platform owners cannot treat third-party products/services differently to how they treat their own. Though framed as a regulatory regime operating in parallel to antitrust law, such regulation would have the effect of supplanting antitrust in favor of a standard that blocks all leveraging behavior, whether pro- or anti-competitive. For example, Apple and Google could still produce both apps and software platforms, but they would be unable to bundle them together, even if doing so improves the user experience (or benefits app developers).

This proposal also disregards the distinction between pro- and anti-competitive leveraging (albeit in a less intrusive manner than structural separation). It would, however, appear to maintain efficiencies stemming solely from vertical integration, as long as said benefits do not result in preferential treatment of the platform owner’s products/services.

3.      Reversing the burden of proof

Though not regulatory in nature, the proposal in the EC’s expert report on “competition in the digital era” is also worth including: it would recalibrate the legal analysis of leveraging conduct by “err[ing] on the side of disallowing potentially anti-competitive conducts, and impos[ing] on the incumbent the burden of proof for showing the pro-competitiveness of its conduct.” Under this proposal, once a plaintiff establishes that leveraging conduct exists (without having to establish that it satisfies pre-existing legal criteria), the defendant would bear the burden of showing that its conduct did not have long-run anti-competitive effects or that the conduct had an overriding efficiency rationale.

As Dolmans and Pesch point out, proving that conduct does not have a long-term impact on competition may be nigh on impossible, as it involves proving a negative. This proposal would therefore bring non-discrimination in by the back door and return antitrust law to form-based rules that neglect the actual effects of conduct on competition or consumers. Moreover, making it unduly difficult for dominant firms to show that their conduct is in fact pro-competitive, despite any exclusionary effects, would similarly collapse an effects-based model for leveraging conduct into blanket non-discrimination. The report’s authors admit as much, citing for their proposal a report by the French telecoms regulator which advocates “a principle of ‘net neutrality’ for smartphones, tablets and voice assistants” (i.e. a non-discrimination standard).

Consumer vs. small-business welfare: which should prevail in digital markets?

Each of the above proposals would, to varying degrees, dissolve the distinctions between pro- and anti-competitive leveraging and (in the case of structural separation) pro- and anti-competitive vertical integration, significantly curtailing the ability of firms to legitimately out-compete their rivals in neighboring markets. All leveraging would be presumptively harmful to competition and, by extension, consumers.

The question then becomes whether we should ignore the incentive to innovate and compete in platform markets and turn our societal interests to competition within platforms. It has long been the case that “the primary purpose of the antitrust laws is to protect interbrand competition.” However, in certain circumstances, it can be preferable to shift the focus from inter- to intra-brand competition (often through legislation). Take, for example, must-carry provisions imposed on cable operators in the US or net neutrality regulation (repealed in the US, but prevailing in the EU). In circumstances such as these, society willingly foregoes benefits of continued innovation and competition in the inter-brand market because it has concluded, for one reason or another, that bolstering competition in the intra-brand market is more important. This can entail tolerating counterfactually higher prices or reduced quality as a byproduct of protecting interests deemed to be more important, such as maintaining a pluralistic downstream market. In line with the above proposals, there is a growing belief that such an inversion of the goals of competition policy is exactly what is needed in digital markets. This section examines the empirical criteria that one would expect to be verified before shifting the focal point of competition policy from inter-platform competition to intra-platform competition.

1.      Strategic bottleneck power over distribution

In past instances of “access regulation” or structural separation of vertically integrated firms, there have been concerns that the targeted firms had strategic “bottleneck” power over the distribution of some downstream product or service, i.e. the firms sat between a set of suppliers and consumers and, through the control of some vital input or method of distribution, controlled access between the two. Strategic bottleneck power was present in the must-carry provisions imposed on cable operators in the US, the non-discrimination principles enshrined in § 616 of the US Communications Act, and net neutrality regulation. The same applies to structural separation: when the District Court approved the consent decree structurally separating AT&T’s long-distance arm from its local operating companies, it was motivated by the fact that “the principal means by which AT&T has maintained monopoly power in telecommunications has been its control of the Operating Companies with their strategic bottleneck position.”

Do GAFA possess strategic bottleneck power? Take Google Search, for example. In the EC’s decision in Google Shopping, it found that referrals from Google Search accounted for a large proportion of traffic to rival comparison shopping websites and that traffic could not be effectively replaced by other sources. However, firms operating in neighboring markets have many more routes to consumers that do not depend on discovery through a search engine. As John Temple Lang observes, they can access consumers through “direct navigation, specialized search services, social networks such as Facebook and Pinterest, partnerships with PC and mobile device makers, agreements with other publishers to refer traffic to each other, and so on.” Apple’s iOS and Google’s Android, on the other hand, compete against each other and thus neither firm, by definition, can possess the degree of strategic bottleneck power required to consider abandoning their respective incentives to innovate. As for Amazon: in 2019 it was estimated that Amazon accounted for 38% of all online sales in the US. This may seem like a staggering volume, but it in fact shows that distributors can – and do – bypass Amazon’s platform to reach consumers, with great success.

Insofar as GAFA possess strategic bottleneck power over particular categories of goods (i.e. in particular neighboring markets), this would not justify shifting the focus to intra-platform competition across all product categories. The market power element of traditional antitrust analyses serves to guard competition in these circumstances by carving a remedy around conduct that illegitimately hampers the ability of competitors in neighboring markets to compete.

2.      Widespread harm in adjacent markets

To ban platform owners from leveraging anti- and pro-competitively, one would expect there to be cogent evidence of harm to competition across a multitude of adjacent markets that depend on the platforms for access to consumers. However, as Feng Zhu and Qihong Liu note, there is a dearth of empirical evidence on the effects of platform owners’ entry into complementary markets. Even studies that support the proposition that such entry dampens or skews innovation incentives of firms in adjacent markets conclude that the welfare effects are ambiguous, and that consumers may actually be better off (see e.g. here and here). Other studies show that third-party producers can benefit from platform entry into adjacent markets (see e.g. here and here). It is therefore clear that this criterion, which should also be a prerequisite to imposing blanket regulation to control the behavior of platform owners, has not been satisfied.

3.      Discernible and static product boundaries

In prior cases of access regulation, the input that firms in neighboring markets have depended on to access consumers has been clearly distinguishable from their own products. Although antitrust literature commonly refers to “platforms” and “applications” as if these are perceptibly different products, the reality is much different: both platforms and their complementary applications are composed of individual components and, as Carl Shapiro notes, “the boundary between the ‘platform’ and services running on that platform can be fuzzy and can change over time.” Any attempt to freeze the definitional boundary of a platform would negate platform owners’ incentive to build upon and improve their platforms, to the detriment of consumers and (in the case of software platforms) app developers. If Apple were prevented from vertically integrating, what would iOS look like? Could it even have a voice-call function? Alternatively, under non-discrimination regulation, what would a new iOS device look like? Would it just be a blank screen where the user is then forced to choose between various alternatives?

The problem with proposing to separate platforms from adjacent products is that any platform component can theoretically be modularized and opened to competition from third-parties. Because integration of complementary components is an essential part of inter-platform competition, imposing the proposed interventions could destroy the very ecosystems on which the competitors that critics seek to protect depend, and prevent the next popular digital platform from emerging.

4.      Lack or unimportance of integrative efficiencies

Critics may counter that efficiencies stemming from leveraging are unimportant or non-existent, or do not depend on conduct that has exclusionary effects, and thus nothing is lost by shifting antitrust’s focus to the protection of competitors in adjacent markets. However, any iPhone user will testify to the consumer benefits flowing from technically integrating multiple platform components and features into a single package (e.g. voice-assistant technology and mapping functionality). In a similar vein, the UK CMA, in approving Google’s acquisition of mapping software company Waze, was prompted in part by the fact that “[i]ntegration of a map application into the operating system creates opportunities for operating system developers to use their own or affiliated services (for example search engines and social networks) to improve the experience of users.” Integrating Product A (the platform) and Product B (a component or the software code of an ancillary product/service) can facilitate the creation of some new functionality or feature in the form of Product C that users value and, crucially, could not achieve by combining Products A and B themselves (from one or multiple firms). Another potential consumer benefit flowing from leveraging is a reduction in consumer search costs, i.e. providing users with the functionality or end results that they seek more quickly and efficiently. Even though anti-competitive concerns can theoretically arise, it remains the case that, empirically, integration of software code is predominantly motivated by efficiency justifications and occurs in both competitive and concentrated markets.

Conclusion

Much of the impetus to enact the above proposals stems from the perception that antitrust law in its current form does not act quickly enough to restore competition in the market. Indeed, it can take over a decade for the dust to settle in big ticket antitrust cases, by which time antitrust remedies may be too little too late in those cases where the authorities get it right. To the extent that authorities can think of innovative ways to enforce existing standards more quickly and accurately, this would be met with widespread enthusiasm (but may be idealistic).

However, introducing more intrusive measures to protect competition in neighboring markets, and undermining the consumer welfare standard that protects the ability of dominant firms to legitimately enter neighboring markets and compete on the merits, is not warranted. Intervention should remain targeted and evidence-based. If a complainant can adduce evidence that a platform owner is leveraging into a neighboring market and raising the complainant’s cost of doing business, and if the platform owner cannot show a pro-competitive justification for the behavior, antitrust law will intervene to restore competition under existing standards. For this, no regulatory intervention or other change to existing rules is necessary.

For a more detailed version of this post, see: Patrick F. Todd, Digital Platforms and the Leverage Problem, 98 Neb. L. Rev. 486 (2019).

Available at: https://digitalcommons.unl.edu/nlr/vol98/iss2/12.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics); and Kristian Stout (Associate Director, ICLE).]

As many in the symposium have noted — and as was repeatedly noted during the FTC’s Hearings on Competition and Consumer Protection in the 21st Century — there is widespread dissatisfaction with the 1984 Non-Horizontal Merger Guidelines.

Although it is doubtless correct that the 1984 guidelines don’t reflect the latest economic knowledge, it is by no means clear that this has actually been a problem — or that a new set of guidelines wouldn’t create even greater problems. Indeed, as others have noted in this symposium, there is a great deal of ambiguity in the proposed guidelines that could lead either to uncertainty as to how the agencies will exercise their discretion, or, more troublingly, could lead courts to take seriously speculative theories of harm.

We can do little better in expressing our reservations about whether new guidelines are needed than did the current Chairman of the FTC, Joe Simons, writing on this very blog in a symposium on what became the 2010 Horizontal Merger Guidelines. In a post entitled Revisions to the Merger Guidelines: Above All, Do No Harm, Simons writes:

My sense is that there is no need to revise the DOJ/FTC Horizontal Merger Guidelines, with one exception…. The current guidelines lay out the general framework quite well and any change in language relative to that framework are likely to create more confusion rather than less. Based on my own experience, the business community has had a good sense of how the agencies conduct merger analysis…. If, however, the current administration intends to materially change the way merger analysis is conducted at the agencies, then perhaps greater revision makes more sense. But even then, perhaps the best approach is to try out some of the contemplated changes (i.e. in actual investigations) and publicize them in speeches and the like before memorializing them in a document that is likely to have some substantial permanence to it.

Wise words. Unless, of course, “the current [FTC] intends to materially change the way [vertical] merger analysis is conducted.” But the draft guidelines don’t really appear to portend a substantial change, and in several ways they pretty accurately reflect agency practice.

What we want to draw attention to, however, is an implicit underpinning of the draft guidelines that we believe the agencies should clearly disavow (or at least explain more clearly the complexity surrounding): the extent and implications of the presumed functional equivalence of vertical integration by contract and by merger — the contract/merger equivalency assumption.   

Vertical mergers and their discontents

The contract/merger equivalency assumption has been gaining traction with antitrust scholars, but it is perhaps most clearly represented in some of Steve Salop’s work. Salop generally believes that vertical merger enforcement should be heightened. Among his criticisms of current enforcement is his contention that efficiencies that can be realized by merger can often also be achieved by contract. As he discussed during his keynote presentation at last year’s FTC hearing on vertical mergers:

And, finally, the key policy issue is the issue is not about whether or not there are efficiencies; the issue is whether the efficiencies are merger-specific. As I pointed out before, Coase stressed that you can get vertical integration by contract. Very often, you can achieve the vertical efficiencies if they occur, but with contracts rather than having to merge.

And later, in the discussion following his talk:

If there is vertical integration by contract… it meant you could get all the efficiencies from vertical integration with a contract. You did not actually need the vertical integration. 

Salop thus argues that because the existence of a “contract solution” to firm problems can often generate the same sorts of efficiencies as when firms opt to merge, enforcers and courts should generally adopt a presumption against vertical mergers relative to contracting:

Coase’s door swings both ways: Efficiencies often can be achieved by vertical contracts, without the potential anticompetitive harms from merger

In that vertical restraints are characterized as “just” vertical integration “by contract,” then claimed efficiencies in problematical mergers might be achieved with non-merger contracts that do not raise the same anticompetitive concerns. (emphasis in original)

(Salop isn’t alone in drawing such a conclusion, of course; Carl Shapiro, for example, has made a similar point (as have others)).

In our next post we explore the policy errors implicated by this contract/merger equivalency assumption. But here we want to consider whether it makes logical sense in the first place.

The logic of vertical integration is not commutative 

It is true that, where contracts are observed, they are likely as efficient as (or, actually, more efficient than) merger. But, by the same token, it is also true that where mergers are observed they are likely more efficient than contracts. Indeed, the entire reason for integration is efficiency relative to what could be done by contract — this is the essence of the so-called “make-or-buy” decision.

For example, a firm that decides to buy its own warehouse has determined that doing so is more efficient than renting warehouse space. Some of these efficiencies can be measured and quantified (e.g., carrying costs of ownership vs. the cost of rent), but many efficiencies cannot be easily measured or quantified (e.g., layout of the facility or site security). Under the contract/merger equivalency assumption, the benefits of owning a warehouse can be achieved “very often” by renting warehouse space. But the fact that many firms using warehouses own some space and rent some space indicates that the make-or-buy decision is often unique to each firm’s idiosyncratic situation. Moreover, the distinctions driving those differences will not always be readily apparent, and whether contracting or integrating is preferable in any given situation may not be inferred from the existence of one or the other elsewhere in the market — or even in the same firm!
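As a purely illustrative sketch of the measurable side of that make-or-buy comparison (every number below is hypothetical, and the calculation deliberately omits exactly the hard-to-quantify factors, such as layout and site security, that often drive the decision):

```python
# Hypothetical buy-vs-rent comparison for warehouse space.
# All figures are made up for illustration; only the easily
# quantifiable cash flows are modeled.

def present_value(cashflows, rate):
    """Discount a list of annual cash flows (year 0 first) at a flat rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

rate = 0.05            # hypothetical discount rate
years = 10             # hypothetical planning horizon

# Owning: upfront purchase, annual carrying costs (taxes, maintenance,
# insurance), and partial recovery of value on resale at the horizon.
purchase_price = 2_000_000
carrying_cost = 60_000
resale_value = 1_500_000
own_flows = [purchase_price] + [carrying_cost] * years
own_flows[-1] -= resale_value   # sell the building in the final year

# Renting: annual rent for equivalent space, paid in years 1 through 10.
annual_rent = 180_000
rent_flows = [0] + [annual_rent] * years

print(f"PV of owning:  ${present_value(own_flows, rate):,.0f}")
print(f"PV of renting: ${present_value(rent_flows, rate):,.0f}")
```

Two firms facing an identical spreadsheet like this can still rationally choose differently once the unmeasured factors are added back in, which is why observing a contract elsewhere in the market says little about whether integration is efficient for any particular firm.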

There is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. The two are, quite simply, different bargaining environments, each with a different risk and cost allocation; accounting treatment; effect on employees, customers, and investors; tax consequence, etc. Even if the parties accomplished nominally “identical” outcomes, they would not, in fact, be identical.

Meanwhile, what if the reason for failure to contract, or the reason to prefer merger, has nothing to do with efficiency? What if there were no anticompetitive aim but there were a tax advantage? What if one of the parties just wanted a larger firm in order to satisfy the CEO’s ego? That these are not cognizable efficiencies under antitrust law is clear. But the adoption of a presumption of equivalence between contract and merger would — ironically — entail their incorporation into antitrust law just the same — by virtue of their effective prohibition under antitrust law.

In other words, if the assumption is that contract and merger are equally efficient unless proven otherwise, but the law adopts a suspicion (or, even worse, a presumption) that vertical mergers are anticompetitive which can be rebutted only with highly burdensome evidence of net efficiency gain, this effectively deputizes antitrust law to enforce a preconceived notion of “merger appropriateness” that does not necessarily turn on efficiencies. There may (or may not) be sensible policy reasons for adopting such a stance, but they aren’t antitrust reasons.

More fundamentally, however, while there are surely some situations in which contractual restraints might be able to achieve similar organizational and efficiency gains as a merger, the practical realities of achieving not just greater efficiency, but a whole host of non-efficiency-related, yet nonetheless valid, goals, are rarely equivalent between the two.

It may be that the parties don’t know what they don’t know to such an extent that a contract would be too costly because it would be too incomplete, for example. But incomplete contracts and ambiguous control and ownership rights aren’t (as much of) an issue on an ongoing basis after a merger. 

As noted, there is no basis for assuming that the structure of a merger and a contract would be identical. In the same way, there is no basis for assuming that the knowledge transfer that would result from a merger would be the same as that which would result from a contract — and in ways that the parties could even specify or reliably calculate in advance. Knowing that the prospect for knowledge “synergies” would be higher with a merger than a contract might be sufficient to induce the merger outcome. But asked to provide evidence that the parties could not engage in the same conduct via contract, the parties would be unable to do so. The consequence, then, would be the loss of potential gains from closer integration.

At the same time, the cavalier assumption that parties would be able — legally — to enter into an analogous contract in lieu of a merger is problematic, given that it would likely be precisely the form of contract (foreclosing downstream or upstream access) that is alleged to create problems with the merger in the first place.

At the FTC hearings last year, Francine LaFontaine highlighted this exact concern:

I want to reemphasize that there are also rules against vertical restraints in antitrust laws, and so to say that the firms could achieve the mergers outcome by using vertical restraints is kind of putting them in a circular motion where we are telling them you cannot merge because you could do it by contract, and then we say, but these contract terms are not acceptable.

Indeed, legal risk is one of the reasons why a merger might be preferable to a contract, and because the relevant markets here are oligopoly markets, the possibility of impermissible vertical restraints between large firms with significant market share is quite real.

More important, the assumptions underlying the contention that contracts and mergers are functionally equivalent legal devices fail to appreciate the importance of varied institutional environments. Consider that one reason some takeovers are hostile is because incumbent managers don’t want to merge, and often believe that they are running a company as well as it can be run — that a change of corporate control would not improve efficiency. The same presumptions may also underlie refusals to contract and, even more likely, may explain why, to the other firm, a contract would be ineffective.

But, while there is no way to contract without bilateral agreement, there is a corporate control mechanism to force a takeover. In this institutional environment a merger may be easier to realize than a contract (and that applies even to a consensual merger, of course, given the hostile outside option). In this case, again, the assumption that contract should be the relevant baseline and the preferred mechanism for coordination is misplaced — even if other firms in the industry are successfully accomplishing the same thing via contract, and even if a contract would be more “efficient” in the abstract.

Conclusion

Properly understood, the choice of whether to contract or merge derives from a host of complicated factors, many of which are difficult to observe and/or quantify. The contract/merger equivalency assumption — and the species of “least-restrictive alternative” reasoning that would demand onerous efficiency arguments to permit a merger when a contract was notionally possible — too readily glosses over these complications and unjustifiably embraces a relative hostility to vertical mergers at odds with both theory and evidence.

Rather, as has long been broadly recognized, there can be no legally relevant presumption drawn against a company when it chooses one method of vertical integration over another in the general case. The agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.

There’s always a reason to block a merger:

  • If a firm is too big, it will be because it is “a merger for monopoly”;
  • If the firms aren’t that big, it will be for “coordinated effects”;
  • If a firm is small, then it will be because it will “eliminate a maverick”.

It’s a version of Ronald Coase’s complaint about antitrust, as related by William Landes:

Ronald said he had gotten tired of antitrust because when the prices went up the judges said it was monopoly, when the prices went down, they said it was predatory pricing, and when they stayed the same, they said it was tacit collusion.

Of all the reasons to block a merger, the maverick notion is the weakest, and it’s well past time to ditch it.

The Horizontal Merger Guidelines define a “maverick” as “a firm that plays a disruptive role in the market to the benefit of customers.” According to the Guidelines, this includes firms:

  1. With a new technology or business model that threatens to disrupt market conditions;
  2. With an incentive to take the lead in price cutting or other competitive conduct or to resist increases in industry prices;
  3. That resist otherwise prevailing industry norms to cooperate on price setting or other terms of competition; and/or
  4. With an ability and incentive to expand production rapidly using available capacity to “discipline prices.”

There appears to be no formal model of maverick behavior that does not rely on some a priori assumption that the firm is a maverick.

For example, John Kwoka’s 1989 model assumes the maverick firm has different beliefs about how competing firms would react if the maverick varies its output or price. Louis Kaplow and Carl Shapiro developed a simple model in which the firm with the smallest market share may play the role of a maverick. They note, however, that this raises the question—in a model in which every firm faces the same cost and demand conditions—why would there be any variation in market shares? The common solution, according to Kaplow and Shapiro, is cost asymmetries among firms. If that is the case, then “maverick” activity is merely a function of cost, rather than some uniquely maverick-like behavior.
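A standard two-firm Cournot example with asymmetric costs (a textbook illustration of that point, not Kaplow and Shapiro's own specification) makes the mechanics explicit:

$$
p = a - (q_1 + q_2), \qquad
q_i^{*} = \frac{a - 2c_i + c_j}{3}, \qquad
s_i = \frac{q_i^{*}}{q_1^{*} + q_2^{*}} = \frac{a - 2c_i + c_j}{2a - c_1 - c_2}.
$$

Every difference in equilibrium market share here is pinned down by the cost parameters; nothing in the model singles out the smaller (higher-cost) firm as behaviorally "disruptive" beyond what its costs already imply, which is precisely the sense in which "maverick" activity collapses into a statement about costs.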

The idea of the maverick firm requires that the firm play a critical role in the market. The maverick must be the firm that outflanks coordinated action or acts as a bulwark against unilateral action. By this loosey-goosey definition of maverick, a single firm can make the difference between success and failure of anticompetitive behavior by its competitors. Thus, the ability and incentive to expand production rapidly is a necessary condition for a firm to be considered a maverick. For example, Kaplow and Shapiro explain:

Of particular note is the temptation of one relatively small firm to decline to participate in the collusive arrangement or secretly to cut prices to serve, say, 4% rather than 2% of the market. As long as price cuts by a small firm are less likely to be accurately observed or inferred by the other firms than are price cuts by larger firms, the presence of small firms that are capable of expanding significantly is especially disruptive to effective collusion.

A “maverick” firm’s ability to “discipline prices” depends crucially on its ability to expand output in the face of increased demand for its products. Similarly, the other non-maverick firms can be “disciplined” by the maverick only in the face of a credible threat of (1) a noticeable drop in market share that (2) leads to lower profits.

The government’s complaint against the proposed 2011 AT&T/T-Mobile merger alleges:

Relying on its disruptive pricing plans, its improved high-speed HSPA+ network, and a variety of other initiatives, T-Mobile aimed to grow its nationwide share to 17 percent within the next several years, and to substantially increase its presence in the enterprise and government market. AT&T’s acquisition of T-Mobile would eliminate the important price, quality, product variety, and innovation competition that an independent T-Mobile brings to the marketplace.

At the time of the proposed merger, T-Mobile accounted for 11% of U.S. wireless subscribers. At the end of 2016, its market share had hit 17%. About half of the increase can be attributed to its 2012 merger with MetroPCS. Over the same period, Verizon’s market share increased from 33% to 35% and AT&T’s market share remained stable at 32%. It appears that T-Mobile’s so-called maverick behavior did more to disrupt the market shares of smaller competitors Sprint and Leap (which was acquired by AT&T). Thus, it is not clear, ex post, that T-Mobile posed any threat to AT&T or Verizon’s market shares.

Geoffrey Manne raised some questions about the government’s maverick theory that also highlight a fundamental problem with the willy-nilly way in which firms are given the maverick label:

. . . it’s just not enough that a firm may be offering products at a lower price—there is nothing “maverick-y” about a firm that offers a different, less valuable product at a lower price. I have seen no evidence to suggest that T-Mobile offered the kind of pricing constraint on AT&T that would be required to make it out to be a maverick.

While T-Mobile had a reputation for lower mobile prices, in 2011, the firm was lagging behind Verizon, Sprint, and AT&T in the rollout of 4G technology. In other words, T-Mobile was offering an inferior product at a lower price. That’s not a maverick, that’s product differentiation with hedonic pricing.

More recently, in his opposition to the proposed T-Mobile/Sprint merger, Gene Kimmelman from Public Knowledge asserts that both firms are mavericks and their combination would cause their maverick magic to disappear:

Sprint, also, can be seen as a maverick. It has offered “unlimited” plans and simplified its rate plans, for instance, driving the rest of the industry forward to more consumer-friendly options. As Sprint CEO Marcelo Claure stated, “Sprint and T-Mobile have similar DNA and have eliminated confusing rate plans, converging into one rate plan: Unlimited.” Whether both or just one of the companies can be seen as a “maverick” today, in either case the newly combined company would simply have the same structural incentives as the larger carriers both Sprint and T-Mobile today work so hard to differentiate themselves from.

Kimmelman provides no mechanism by which the magic would go missing, but instead offers a version of an adversity-builds-character argument:

Allowing T-Mobile to grow to approximately the same size as AT&T, rather than forcing it to fight for customers, will eliminate the combined company’s need to disrupt the market and create an incentive to maintain the existing market structure.

For 30 years, the notion of the maverick firm has been a concept in search of a model. If the concept cannot be modeled decades after being introduced, maybe the maverick can’t be modeled.

What’s left are ad hoc assertions mixed with speculative projections in hopes that some sympathetic judge can be swayed. However, some judges seem to be more skeptical than sympathetic, as in H&R Block/TaxACT:

The parties have spilled substantial ink debating TaxACT’s maverick status. The arguments over whether TaxACT is or is not a “maverick” — or whether perhaps it once was a maverick but has not been a maverick recently — have not been particularly helpful to the Court’s analysis. The government even put forward as supposed evidence a TaxACT promotional press release in which the company described itself as a “maverick.” This type of evidence amounts to little more than a game of semantic gotcha. Here, the record is clear that while TaxACT has been an aggressive and innovative competitor in the market, as defendants admit, TaxACT is not unique in this role. Other competitors, including HRB and Intuit, have also been aggressive and innovative in forcing companies in the DDIY market to respond to new product offerings to the benefit of consumers.

It’s time to send the maverick out of town and into the sunset.

 

[TOTM: The following is the fifth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.

This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]

[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]

Introduction

In a recent article, Joe Kattan and Tim Muris (K&M) criticize our article on the predictive power of bargaining models in antitrust, in which we used two recent applications to explore implications for uses of bargaining models in courts and antitrust agencies moving forward.  Like other theoretical models used to predict competitive effects, complex bargaining models require courts and agencies rigorously to test their predictions against data from the real-world markets and institutions to which they are being applied.  Where the “real-world evidence,” as Judge Leon described such data in AT&T/Time Warner, is inconsistent with the predictions of a complex bargaining model, then the tribunal should reject the model rather than reality.

K&M, who represent Intel Corporation in connection with the FTC v. Qualcomm case now pending in the Northern District of California, focus exclusively upon, and take particular issue with, one aspect of our prior article:  We argued that, as in AT&T/Time Warner, the market realities at issue in FTC v. Qualcomm are inconsistent with the use of Dr. Carl Shapiro’s bargaining model to predict competitive effects in the relevant market.  K&M—no doubt confident in their superior knowledge of the underlying facts due to their representation in the matter—criticize our analysis for our purported failure to get our hands sufficiently dirty with the facts.  They criticize our broader analysis of bargaining models and their application for our failure to discuss specific pieces of evidence presented at trial, and offer up several quotations from Qualcomm’s customers as support for Shapiro’s economic analysis.  K&M concede that, as we argue, the antitrust laws should not condemn a business practice in the absence of robust economic evidence of actual or likely harm to competition; yet, they do not see any conflict between that concession and their position that the FTC need not, through its expert, quantify the royalty surcharge imposed by Qualcomm because the “exact size of the overcharge was not relevant to the issue of Qualcomm’s liability.” [Kattan and Muris miss the point that within the context of economic modeling, the failure to identify the magnitude of an effect with any certainty when data are available, including whether the effect is statistically different than zero, calls into question the model’s robustness more generally.]

Though our prior article was a broad one, not limited to FTC v. Qualcomm or intended to cover record evidence in detail, we welcome K&M’s critique and are happy to accept their invitation to engage further on the facts of that particular case.  We agree that accounting for market realities is very important when complex economic models are at play.  Unfortunately, K&M’s position that the evidence “supports Shapiro’s testimony overwhelmingly” ignores the sound empirical evidence employed by Dr. Aviv Nevo during trial and has not aged well in light of the internal Apple documents made public in Qualcomm’s Opening Statement following the companies’ decision to settle the case, which Apple had initiated in January 2017.

Qualcomm’s Opening Statement in the Apple litigation revealed a number of new facts that are problematic, to say the least, for K&M’s position and, even more troublesome for Shapiro’s model and the FTC’s case.  Of course, as counsel to an interested party in the FTC case, it is entirely possible that K&M were aware of the internal Apple documents cited in Qualcomm’s Opening Statement (or similar documents) and simply disagree about their significance.  On the other hand, it is quite clear the Department of Justice Antitrust Division found them to be significantly damaging; it took the rare step of filing a Statement of Interest of the United States with the district court citing the documents and imploring the court to call for additional briefing and hold a hearing on issues related to a remedy in the event that it finds Qualcomm liable on any of the FTC’s claims. The internal Apple documents cited in Qualcomm’s Opening Statement leave no doubt as to several critical market realities that call into question the FTC’s theory of harm and Shapiro’s attempts to substantiate it.

(For more on the implications of these documents, see Geoffrey Manne’s post in this series, here).

First, the documents laying out Apple’s litigation strategy clearly establish that Apple has a high regard for Qualcomm’s technology and patent portfolio and that it strategized for several years about how to reduce its net royalties and to hurt Qualcomm financially.

Second, the documents undermine Apple’s public complaints about Qualcomm and call into question the validity of the underlying theory of harm in the FTC’s case.  In particular, the documents plainly debunk Apple’s claims that Qualcomm’s patents weakened over time as a result of a decline in the quality of the technology and that Qualcomm devised an anticompetitive strategy in order to extract value from a weakening portfolio.  The documents illustrate that in fact, Apple adopted a deliberate strategy of trying to manipulate the value of Qualcomm’s portfolio.  The company planned to “creat[e] evidence” by leveraging its purchasing power to methodically license less expensive patents in hope of making Qualcomm’s royalties appear artificially inflated. In other words, if Apple’s made-for-litigation position were correct, then it would be only because of Apple’s attempt to manipulate and devalue Qualcomm’s patent portfolio, not because there had been any real change in its value. 

Third, the documents directly refute some of the arguments K&M put forth in their critique of our prior article, in which we invoked Dr. Nevo’s empirical analysis of royalty rates over time as important evidence of historical facts that contradict Dr. Shapiro’s model.  For example, K&M attempt to discredit Nevo’s analysis by claiming he did not control for changes in the strength of Qualcomm’s patent portfolio which, they claim, had weakened over time. According to internal Apple documents, however, “Qualcomm holds a stronger position in . . . , and particularly with respect to cellular and Wi-Fi SEPs” than do Huawei, Nokia, Ericsson, IDCC, and Apple. Another document states that “Qualcomm is widely considered the owner of the strongest patent portfolio for essential and relevant patents for wireless standards.” Indeed, Apple’s documents show that Apple sought artificially to “devalue SEPs” in the industry by “build[ing] favorable, arms-length ‘comp’ licenses” in an attempt to reduce what FRAND means. The ultimate goal of this pursuit was stated frankly by Apple: To “reduce Apple’s net royalty to Qualcomm” despite conceding that Qualcomm’s chips “engineering wise . . . have been the best.”

As new facts relevant to the FTC’s case and contrary to its theory of harm come to light, it is important to re-emphasize the fundamental point of our prior article: Model predictions that are inconsistent with actual market evidence should give fact finders serious pause before accepting the results as reliable.  This advice is particularly salient in a case like FTC v. Qualcomm, where intellectual property and innovation are critical components of the industry and its competitiveness, because condemning behavior that is not truly anticompetitive may have serious, unintended consequences. (See Douglas H. Ginsburg & Joshua D. Wright, Dynamic Analysis and the Limits of Antitrust Institutions, 78 Antitrust L.J. 1 (2012); Geoffrey A. Manne & Joshua D. Wright, Innovation and the Limits of Antitrust, 6 J. Competition L. & Econ. 153 (2010)).

The serious consequences of a false positive, that is, the erroneous condemnation of a procompetitive or competitively neutral business practice, are undoubtedly what caused the Antitrust Division to file its Statement of Interest in the FTC’s case against Qualcomm.  That Statement correctly highlights the Apple documents as support for the Government’s concern that “an overly broad remedy in this case could reduce competition and innovation in markets for 5G technology and downstream applications that rely on that technology.”

In this reply, we examine closely the market realities that conflict with, and hence undermine, both Dr. Shapiro’s bargaining model and the FTC’s theory of harm in its case against Qualcomm.  We believe the “large body of evidence” offered by K&M supporting Shapiro’s theoretical analysis is insufficient to sustain his conclusions under standard antitrust analysis, including the requirement that a plaintiff alleging monopolization or attempted monopolization provide evidence of actual or likely anticompetitive effects.  We will also discuss the implications of the newly public internal Apple documents for the FTC’s case, which remains pending at the time of this writing, and for future government investigations involving allegedly anticompetitive licensing of intellectual property.

I. Kattan and Muris Rely Upon Inconsequential Testimony and Mischaracterize Dr. Nevo’s Empirical Analysis

K&M march through a series of statements from Qualcomm’s customers asserting that the threat of Qualcomm discontinuing the supply of modem chips forced them to agree to unreasonable licensing demands.  This testimony, however, is reminiscent of Dr. Shapiro’s testimony in AT&T/Time Warner concerning the threat of a long-term blackout of CNN and other Turner channels: the threatened harm remained hypothetical, for Qualcomm has never cut off any customer’s supply of chips.  The assertion that companies negotiating with Qualcomm either had to “agree to the license or basically go out of business” ignores the reality that even if Qualcomm discontinued supplying chips to a customer, the customer could obtain chips from one of four rival sources.  This was not a theoretical possibility.  Indeed, Apple has been sourcing chips from Intel since 2016 and made the decision to switch to Intel specifically in order, in its own words, to exert “commercial pressure against Qualcomm.”

Further, as Dr. Nevo pointed out at trial, SEP license agreements are typically long term (e.g., 10- or 15-year agreements) and are negotiated far less frequently than chip prices, which are typically negotiated annually.  In other words, Qualcomm’s royalty rate is set prior to, and independently of, chip sale negotiations.

K&M raise a number of theoretical objections to Nevo’s empirical analysis.  For example, K&M accuse Nevo of “cherry picking” the licenses he included in his empirical analysis to show that royalty rates remained constant over time, stating that he “excluded from consideration any license that had non-standard terms.” They mischaracterize Nevo’s testimony on this point.  Nevo excluded from his analysis agreements that, according to the FTC’s own theory of harm, would be unaffected (e.g., agreements that were signed subject to government supervision or agreements that have substantially different risk splitting provisions).  In any event, Nevo testified that modifying his analysis to account for Shapiro’s criticism regarding the excluded agreements would have no material effect on his conclusions.  To our knowledge, Nevo’s testimony is the only record evidence providing any empirical analysis of the effects of Qualcomm’s licensing agreements.

As previously mentioned, K&M also claim that Dr. Nevo’s analysis failed to account for the alleged weakening of Qualcomm’s patent portfolio over time.  Apple’s internal documents, however, are fatal to that claim.  K&M also pinpoint the failure to control for differences among customers and changes in the composition of handsets over time as critical errors in Nevo’s analysis.  Their assertion that Nevo should have controlled for differences among customers is puzzling.  They do not elaborate upon that criticism, but they seem to believe different customers are entitled to different FRAND rates for the same license.  But Qualcomm’s standard practice—due to the enormous size of its patent portfolio—is and has always been to charge all licensees the same rate for the entire portfolio.

As to changes in the composition of handsets over time, no doubt a smartphone today has many more features than a first-generation handset that only made and received calls; those new features, however, would be meaningless without Qualcomm’s SEPs, which are implemented by mobile chips that enable cellular communication.  One must wonder why Qualcomm should have reduced the royalty rate on licenses for patents that are just as fundamental to the functioning of mobile phones today as they were to the functioning of a first-generation handset.  K&M ignore the fundamental importance of Qualcomm’s SEPs in claiming that royalty rates should have declined along with the declining quality-adjusted prices of mobile phones.  They also, conveniently, ignore the evidence that the industry has been characterized by increasing output and quality—increases which can certainly be attributed at least in part to Qualcomm’s chips being “engineering wise . . . the best.”

II. Apple’s Internal Documents Eviscerate the FTC’s Theory of Harm

The FTC’s theory of harm is premised upon Qualcomm’s allegedly charging a supra-FRAND rate for its SEPs (the “royalty surcharge”), which squeezes the margins of OEMs and consequently prevents rival chipset suppliers from obtaining a sufficient return when negotiating with those OEMs. (See Luke Froeb et al.’s criticism of the FTC’s theory of harm on these and related grounds, here). To predict the effects of Qualcomm’s allegedly anticompetitive conduct, Dr. Shapiro compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs.  Shapiro testified that he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences” for competition and for consumers, though his bargaining model did not quantify the effects of Qualcomm’s practice.

The premise of the FTC’s theory requires a belief that FRAND is a meaningful, objective competitive benchmark that Qualcomm was able to evade as a result of its market power in chipsets.  But Apple manipulated negotiations as a tactic to reshape FRAND itself.  The closer look at the facts invited by K&M does nothing to improve one’s view of the FTC’s claims.  The Apple documents exposed at trial make it clear that Apple deliberately manipulated negotiations with other suppliers in order to make it appear to courts and antitrust agencies that something other than the quality of Qualcomm’s technology was driving royalty rates.  For example, Apple’s own documents show it sought artificially to “devalue SEPs” by “build[ing] favorable, arms-length ‘comp’ licenses” in an attempt to reshape what FRAND means in this industry. Simply put, Apple’s strategy was to negotiate cheap supposedly “comparable” licenses with other chipset suppliers as part of a plan to reduce its net royalties to Qualcomm.

As part of the same strategy, Apple spent years arguing to regulators and courts that Qualcomm’s patents were no better than those of its competitors.  But Apple’s internal documents tell a very different story:

  • “Nokia’s patent portfolio is significantly weaker than Qualcomm’s.”
  • “[InterDigital] makes minimal contributions to [the 4G/LTE] standard”
  • “Compared to [Huawei, Nokia, Ericsson, IDCC, and Apple], Qualcomm holds a stronger position in . . . , and particularly with respect to cellular and Wi-Fi SEPs.”
  • “Compared to other licensors, Qualcomm has more significant holdings in key areas such as media processing, non-cellular communications and hardware.  Likewise, using patent citation analysis as a measure of thorough prosecution within the US PTO, Qualcomm patents (SEPs and non-SEPs both) on average score higher compared to the other, largely non-US based licensors.”

One internal document that is particularly troubling states that Apple’s plan was to “create leverage by building pressure” in order to (i) hurt Qualcomm financially and (ii) put Qualcomm’s licensing model at risk. What better way to harm Qualcomm financially and put its licensing model at risk than to complain to regulators that the business model is anticompetitive and tie the company up in multiple costly litigations?  That businesses make strategic plans to harm one another is no surprise.  But it underscores the importance of antitrust institutions – with their procedural and evidentiary requirements – in separating meritorious claims from fabricated ones. They failed to do so here.

III. Lessons Learned

So what should we make of evidence suggesting one of the FTC’s key informants during its investigation of Qualcomm didn’t believe the arguments it was selling?  The exposure of Apple’s internal documents is a sobering reminder that the FTC is not immune from the risk of being hoodwinked by rent-seeking antitrust plaintiffs.  That a firm might try to persuade antitrust agencies to investigate and sue its rivals is nothing new (see, e.g., William J. Baumol & Janusz A. Ordover, Use of Antitrust to Subvert Competition, 28 J.L. & Econ. 247 (1985)), but it is a particularly high-stakes game in modern technology markets. 

Lesson number one: Requiring proof of actual anticompetitive effects rather than relying upon a model that is not robust to market realities is an important safeguard to ensure that Section 2 protects competition and not merely an individual competitor.  Yet in AT&T/Time Warner and FTC v. Qualcomm the agencies staked their cases on bargaining models that fell short of proving anticompetitive effects.  An agency convinced by one firm or firms to pursue an action against a rival for conduct that does not actually harm competition could have a significant and lasting anticompetitive effect on the market.  Modern antitrust analysis requires plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.  That safeguard is particularly important when an agency is pursuing an enforcement action against a company in a market where the risks of regulatory capture and false positives are high.  With calls to move away from the consumer welfare standard, which would exacerbate both the risks and consequences of false positives, it is imperative to embrace rather than reject the requirement of proof in monopolization cases. (See Elyse Dorsey, Jan Rybnicek & Joshua D. Wright, Hipster Antitrust Meets Public Choice Economics: The Consumer Welfare Standard, Rule of Law, and Rent-Seeking, CPI Antitrust Chron. (Apr. 2018); see also Joshua D. Wright et al., Requiem For a Paradox: The Dubious Rise and Inevitable Fall of Hipster Antitrust, 51 Ariz. St. L.J. 293 (2019).) The DOJ’s Statement of Interest is a reminder of this basic tenet.

Lesson number two: Antitrust should have a limited role in adjudicating disputes arising between sophisticated parties in bilateral negotiations of patent licenses.  Overzealous claims of harm from patent holdup and anticompetitive licensing can deter the lawful exercise of patent rights, good faith modifications of existing contracts, and more generally interfere with the outcome of arms-length negotiations (See Bruce H. Kobayashi & Joshua D. Wright, The Limits of Antitrust and Patent Holdup: A Reply To Cary et al., 78 Antitrust L.J. 701 (2012)). It is also a difficult task for an antitrust regulator or court to identify and distinguish anticompetitive patent licenses from neutral or welfare-increasing behavior.  An antitrust agency’s willingness to cast the shadow of antitrust remedies over one side of the bargaining table inevitably places the agency in the position of encouraging further rent-seeking by licensees seeking similar intervention on their behalf.

Finally, an antitrust agency that intervenes in patent holdup and licensing disputes on behalf of one party to a patent licensing agreement risks transforming itself into a price regulator.  Apple’s fundamental complaint in its own litigation, and the core of the similar FTC allegation against Qualcomm, is that royalty rates are too high.  The risks to competition and consumers of antitrust courts and agencies playing the role of central planner for the innovation economy are well known, and they are at their peak when the antitrust enterprise is used to set prices, mandate a particular organizational structure for the firm, or intervene in garden-variety contract and patent disputes in high-tech markets.

The current Commission did not vote out the Complaint now being litigated in the Northern District of California.  That case was initiated by an entirely different set of Commissioners.  It is difficult to imagine the new Commissioners having no reaction to the Apple documents, and in particular to the perception they create that Apple was successful in manipulating the agency in its strategy to bolster its negotiating position against Qualcomm.  A thorough reevaluation of the evidence here might well lead the current Commission to reconsider the merits of the agency’s position in the litigation and whether continuing is in the public interest.  The Apple documents, should they enter the record, may affect significantly the Ninth Circuit’s or Supreme Court’s understanding of the FTC’s theory of harm.

Zoom, one of Silicon Valley’s lesser-known unicorns, has just gone public. At the time of writing, its shares are trading at about $65.70, placing the company’s value at $16.84 billion. There are good reasons for this success. According to its Form S-1, Zoom’s revenue rose from about $60 million in 2017 to a projected $330 million in 2019, and the company has already surpassed break-even. This growth was notably fueled by a thriving community of users who collectively spend approximately 5 billion minutes per month in Zoom meetings.

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects. For instance, the value of Skype to one user depends – at least to some extent – on the number of other people that might be willing to use the network. In these settings, it is often said that positive feedback loops may cause the market to tip in favor of a single firm that is then left with an unassailable market position. Although Zoom still faces significant competitive challenges, it has nonetheless established a strong position in a market previously dominated by powerful incumbents who could theoretically count on network effects to stymie its growth.

Further complicating matters, Zoom chose to compete head-on with these incumbents. It did not create a new market or a highly differentiated product. Zoom’s Form S-1 is quite revealing. The company cites the quality of its product as its most important competitive strength. Similarly, when listing the main benefits of its platform, Zoom emphasizes that its software is “easy to use”, “easy to deploy and manage”, “reliable”, etc. In its own words, Zoom has thus gained a foothold by offering an existing service that works better than that of its competitors.

And yet, this is precisely the type of story that a literal reading of the network effects literature would suggest is impossible, or at least highly unlikely. For instance, the foundational papers on network effects often cite the example of the DVORAK keyboard (David, 1985; and Farrell & Saloner, 1985). These early scholars argued that, despite it being the superior standard, the DVORAK layout failed to gain traction because of the network effects protecting the QWERTY standard. In other words, consumers failed to adopt the superior DVORAK layout because they were unable to coordinate on their preferred option. It must be noted, however, that the conventional telling of this story was forcefully criticized by Liebowitz & Margolis in their classic 1995 article, The Fable of the Keys.

Despite Liebowitz & Margolis’ critique, the underlying network effects story remains dominant in many quarters. In that respect, the emergence of Zoom is something of a cautionary tale. As influential as it may be, the network effects literature has tended to overlook a number of factors that may mitigate, or even eliminate, the likelihood of problematic outcomes. Zoom is yet another illustration that policymakers should be careful when they make normative inferences from positive economics.

A Coasian perspective

It is now widely accepted that multi-homing and the absence of switching costs can significantly curtail the potentially undesirable outcomes that are sometimes associated with network effects. But other possibilities are often overlooked. For instance, almost none of the foundational network effects papers pay any notice to the application of the Coase theorem (though it has been well-recognized in the two-sided markets literature).

Take a purported market failure that is commonly associated with network effects: an installed base of users prevents the market from switching towards a new standard, even if it is superior (this is broadly referred to as “excess inertia,” while the opposite scenario is referred to as “excess momentum”). DVORAK’s failure is often cited as an example.

Astute readers will quickly recognize that this externality problem is not fundamentally different from those discussed in Ronald Coase’s masterpiece, “The Problem of Social Cost,” or Steven Cheung’s “The Fable of the Bees” (to which Liebowitz & Margolis paid homage in their article’s title). In the case at hand, there are at least two sets of externalities at play. First, early adopters of the new technology impose a negative externality on the old network’s installed base (by reducing its network effects), and a positive externality on other early adopters (by growing the new network). Conversely, installed base users impose a negative externality on early adopters and a positive externality on other remaining users.

Describing these situations (with a haughty confidence reminiscent of Paul Samuelson and Arthur Cecil Pigou), Joseph Farrell and Garth Saloner conclude that:

In general, he or she [i.e. the user exerting these externalities] does not appropriately take this into account.

Similarly, Michael Katz and Carl Shapiro assert that:

In terms of the Coase theorem, it is very difficult to design a contract where, say, the (potential) future users of HDTV agree to subsidize today’s buyers of television sets to stop buying NTSC sets and start buying HDTV sets, thereby stimulating the supply of HDTV programming.

And yet it is far from clear that consumers and firms can never come up with solutions that mitigate these problems. As Daniel Spulber has suggested, referral programs offer a case in point. These programs usually allow early adopters to receive rewards in exchange for bringing new users to a network. One salient feature of these programs is that they do not simply charge a lower price to early adopters; instead, in order to obtain a referral fee, there must be some agreement between the early adopter and the user who is referred to the platform. This leaves ample room for the reallocation of rewards. Users might, for instance, choose to split the referral fee. Alternatively, the early adopter might invest time to familiarize the switching user with the new platform, hoping to earn money when the user jumps ship. Both of these arrangements may reduce switching costs and mitigate externalities.

Daniel Spulber also argues that users may coordinate spontaneously. For instance, social groups often decide upon the medium they will use to communicate. Families might choose to stay on the same mobile phone network. And larger groups (such as an incoming class of students) may agree upon a social network to share necessary information, etc. In these contexts, there is at least some room to pressure peers into adopting a new platform.

Finally, firms and other forms of governance may also play a significant role. For instance, employees are routinely required to use a series of networked goods. Common examples include office suites, email clients, social media platforms (such as Slack), or video communications applications (Zoom, Skype, Google Hangouts, etc.). In doing so, firms presumably act as islands of top-down decision-making and impose those products that maximize the collective preferences of employers and employees. Similarly, a single firm choosing to join a network (notably by adopting a standard) may generate enough momentum for a network to gain critical mass. Apple’s decisions to adopt USB-C connectors on its laptops and to ditch headphone jacks on its iPhones both spring to mind. Likewise, it has been suggested that distributed ledger technology and initial coin offerings may facilitate the creation of new networks. The intuition is that so-called “utility tokens” may incentivize early adopters to join a platform, despite initially weak network effects, because they expect these tokens to increase in value as the network expands.

A combination of these arrangements might explain how Zoom managed to grow so rapidly, despite the presence of powerful incumbents. In its own words:

Our rapid adoption is driven by a virtuous cycle of positive user experiences. Individuals typically begin using our platform when a colleague or associate invites them to a Zoom meeting. When attendees experience our platform and realize the benefits, they often become paying customers to unlock additional functionality.

All of this is not to say that network effects will always be internalized through private arrangements, but rather that it is equally wrong to assume that transaction costs systematically prevent efficient coordination among users.

Misguided regulatory responses

Over the past couple of months, several antitrust authorities around the globe have released reports concerning competition in digital markets (UK, EU, Australia), or held hearings on this topic (US). A recurring theme throughout their published reports is that network effects almost inevitably weaken competition in digital markets.

For instance, the report commissioned by the European Commission mentions that:

Because of very strong network externalities (especially in multi-sided platforms), incumbency advantage is important and strict scrutiny is appropriate. We believe that any practice aimed at protecting the investment of a dominant platform should be minimal and well targeted.

The Australian Competition & Consumer Commission concludes that:

There are considerable barriers to entry and expansion for search platforms and social media platforms that reinforce and entrench Google and Facebook’s market power. These include barriers arising from same-side and cross-side network effects, branding, consumer inertia and switching costs, economies of scale and sunk costs.

Finally, a panel of experts in the United Kingdom found that:

Today, network effects and returns to scale of data appear to be even more entrenched and the market seems to have stabilised quickly compared to the much larger degree of churn in the early days of the World Wide Web.

To address these issues, these reports suggest far-reaching policy changes. These include shifting the burden of proof in competition cases from authorities to defendants, establishing specialized units to oversee digital markets, and imposing special obligations upon digital platforms.

The story of Zoom’s emergence and the important insights that can be derived from the Coase theorem both suggest that these fears may be somewhat overblown.

Rivals do indeed find ways to overthrow entrenched incumbents with some regularity, even when these incumbents are shielded by network effects. Of course, critics may retort that this is not enough, that competition may sometimes arrive too late (excess inertia, i.e., “a socially excessive reluctance to switch to a superior new standard”) or too fast (excess momentum, i.e., “the inefficient adoption of a new technology”), and that the problem is not just one of network effects, but also one of economies of scale, information asymmetry, etc. But this comes dangerously close to the Nirvana fallacy. To begin, it assumes that regulators are able to reliably navigate markets toward these optimal outcomes — which is questionable, at best. Moreover, the regulatory cost of imposing perfect competition in every digital market (even if it were possible) may well outweigh the benefits that this achieves. Mandating far-reaching policy changes in order to address sporadic and heterogeneous problems is thus unlikely to be the best solution.

Instead, the optimal policy notably depends on whether, in a given case, users and firms can coordinate their decisions without intervention in order to avoid problematic outcomes. A case-by-case approach thus seems by far the best solution.

And competition authorities need look no further than their own decisional practice. The European Commission’s decision in the Facebook/WhatsApp merger offers a good example (this was before Margrethe Vestager’s appointment at DG Competition). In its decision, the Commission concluded that the fast-moving nature of the social network industry, widespread multi-homing, and the fact that neither Facebook nor WhatsApp controlled any essential infrastructure prevented network effects from acting as a barrier to entry. Regardless of whether its ultimate conclusion was correct, this seems like a vastly superior approach to competition issues in digital markets. The Commission adopted similar reasoning in the Microsoft/Skype merger. Unfortunately, the Commission seems to have departed from this measured attitude in more recent decisions. In the Google Search case, for example, the Commission assumes that the mere existence of network effects necessarily increases barriers to entry:

The existence of positive feedback effects on both sides of the two-sided platform formed by general search services and online search advertising creates an additional barrier to entry.

A better way forward

Although the positive economics of network effects are generally correct and most definitely useful, some of the normative implications that have been derived from them are deeply flawed. Too often, policymakers and commentators conclude that these potential externalities inevitably lead to stagnant markets where competition is unable to flourish. But this does not have to be the case. The emergence of Zoom shows that superior products may prosper despite the presence of strong incumbents and network effects.

Basing antitrust policies on sweeping presumptions about digital competition – such as the idea that network effects are rampant or the suggestion that online platforms necessarily imply “extreme returns to scale” – is thus likely to do more harm than good. Instead, antitrust authorities should take a leaf out of Ronald Coase’s book and avoid blackboard economics in favor of a more granular approach.

[TOTM: The following is the third in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.

This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]

[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]

The Department of Justice Antitrust Division (DOJ) and Federal Trade Commission (FTC) have spent a significant amount of time in federal court litigating major cases premised upon an anticompetitive foreclosure theory of harm. Bargaining models, a tool used commonly in foreclosure cases, have been essential to the government’s theory of harm in these cases. In vertical merger or conduct cases, the core theory of harm is usually a variant of the claim that the transaction (or conduct) strengthens the firm’s incentives to engage in anticompetitive strategies that depend on negotiations with input suppliers. Bargaining models are a key element of the agency’s attempt to establish those claims and to predict whether and how firm incentives will affect negotiations with input suppliers, and, ultimately, the impact on equilibrium prices and output. Application of bargaining models played a key role in evaluating the anticompetitive foreclosure theories in the DOJ’s litigation to block the proposed merger of AT&T and Time Warner. A similar model is at the center of the FTC’s antitrust claims against Qualcomm and its patent licensing business model.

Modern antitrust analysis does not condemn business practices as anticompetitive without solid economic evidence of an actual or likely harm to competition. This cautious approach was developed in the courts for two reasons. The first is that the difficulty of distinguishing between procompetitive and anticompetitive explanations for the same conduct suggests there is a high risk of error. The second is that those errors are more likely to be false positives than false negatives because empirical evidence and judicial learning have established that unilateral conduct is usually either procompetitive or competitively neutral. In other words, while the risk of anticompetitive foreclosure is real, courts have sensibly responded by requiring plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.

An economic model can help establish the likelihood and/or magnitude of competitive harm when the model carefully captures the key institutional features of the competition it attempts to explain. Naturally, this tends to mean that the economic theories and models proffered by dueling economic experts to predict competitive effects take center stage in antitrust disputes. The persuasiveness of an economic model turns on the robustness of its assumptions about the underlying market. Model predictions that are inconsistent with actual market evidence give one serious pause before accepting the results as reliable.

For example, many industries are characterized by bargaining between providers and distributors. The Nash bargaining framework can be used to predict the outcomes of bilateral negotiations based upon each party’s bargaining leverage. The model assumes that both parties are better off if an agreement is reached, but that as the utility of one party’s outside option increases relative to the bargain, it will capture an increasing share of the surplus. Courts have had to reconcile these seemingly complicated economic models with prior case law and, in some cases, with direct evidence that is apparently inconsistent with the results of the model.
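To make the framework’s mechanics concrete, here is a minimal sketch of a transferable-utility Nash bargaining split. The numbers are entirely hypothetical and are not drawn from either case; the point is only that improving one party’s outside option shifts the division of surplus in its favor.

```python
# Minimal sketch of a Nash bargaining split with outside options.
# All figures are hypothetical and purely illustrative.

def nash_split(joint_surplus, outside_a, outside_b, bargaining_power_a=0.5):
    """Split the gains from trade between parties A and B.

    Each party first receives its outside option (disagreement payoff);
    the remaining gains from trade are divided according to bargaining power.
    """
    gains_from_trade = joint_surplus - outside_a - outside_b
    if gains_from_trade <= 0:
        return outside_a, outside_b  # no deal is struck
    payoff_a = outside_a + bargaining_power_a * gains_from_trade
    payoff_b = outside_b + (1 - bargaining_power_a) * gains_from_trade
    return payoff_a, payoff_b

# A better outside option for A shifts the split toward A, even though the
# joint surplus and bargaining power are unchanged.
print(nash_split(joint_surplus=100, outside_a=10, outside_b=20))  # (45.0, 55.0)
print(nash_split(joint_surplus=100, outside_a=40, outside_b=20))  # (60.0, 40.0)
```

In both runs the parties split the same pie; what changes is how much of it party A can claim once walking away becomes less costly.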

Indeed, Professor Carl Shapiro recently used bargaining models to analyze harm to competition in two prominent cases alleging anticompetitive foreclosure—one initiated by the DOJ and one by the FTC—in which he served as the government’s expert economist. In United States v. AT&T Inc., Dr. Shapiro testified that the proposed transaction between AT&T and Time Warner would give the vertically integrated company leverage to extract higher prices for content from AT&T’s rival, Dish Network. Soon after, Dr. Shapiro presented a similar bargaining model in FTC v. Qualcomm Inc. He testified that Qualcomm leveraged its monopoly power over chipsets to extract higher royalty rates from smartphone OEMs, such as Apple, wishing to license its standard essential patents (SEPs). In each case, Dr. Shapiro’s models were criticized heavily by the defendants’ expert economists for ignoring market realities that play an important role in determining whether the challenged conduct was likely to harm competition.

Judge Leon’s opinion in AT&T/Time Warner—recently upheld on appeal—concluded that Dr. Shapiro’s application of the bargaining model was significantly flawed, based upon unreliable inputs, and undermined by evidence about actual market performance presented by defendant’s expert, Dr. Dennis Carlton. Dr. Shapiro’s theory of harm posited that the combined company would increase its bargaining leverage and extract greater affiliate fees for Turner content from AT&T’s distributor rivals. The increase in bargaining leverage was made possible by the threat of a post-merger blackout of Turner content for AT&T’s rivals. This theory rested on the assumption that the combined firm would have reduced financial exposure from a long-term blackout of Turner content and would therefore have more leverage to threaten a blackout in content negotiations. The purpose of his bargaining model was to quantify how much AT&T could extract from competitors subjected to a long-term blackout of Turner content.

Judge Leon highlighted a number of reasons for rejecting the DOJ’s argument. First, Dr. Shapiro’s model failed to account for existing long-term affiliate contracts, post-litigation offers of arbitration agreements, and the increasing competitiveness of the video programming and distribution industry. Second, Dr. Carlton had demonstrated persuasively that previous vertical integration in the video programming and distribution industry did not have a significant effect on content prices. Finally, Dr. Shapiro’s model primarily relied upon three inputs: (1) the total number of subscribers the unaffiliated distributor would lose in the event of a long-term blackout of Turner content, (2) the percentage of the distributor’s lost subscribers who would switch to AT&T as a result of the blackout, and (3) the profit margin AT&T would derive from the subscribers it gained from the blackout. Many of Dr. Shapiro’s inputs necessarily relied on critical assumptions and/or third-party sources. Judge Leon considered and discredited each input in turn. 
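To see how inputs of that kind interact, the following back-of-envelope sketch multiplies the three quantities to approximate the value of a blackout threat to the merged firm. This is not Dr. Shapiro’s actual model, and every figure below is hypothetical.

```python
# Stylized back-of-envelope illustration of how the three inputs interact.
# This is not Dr. Shapiro's model; all figures are hypothetical.

def blackout_leverage(rival_subscribers, share_lost, share_switching_to_att,
                      att_monthly_margin_per_sub, months):
    """Rough value to the merged firm of a long-term blackout threat."""
    subscribers_lost_by_rival = rival_subscribers * share_lost
    subscribers_gained_by_att = subscribers_lost_by_rival * share_switching_to_att
    return subscribers_gained_by_att * att_monthly_margin_per_sub * months

# Example: a rival with 10 million subscribers, 9% lost in a blackout,
# one third of those switching to AT&T, at a $30 monthly margin, over a year.
print(round(blackout_leverage(10_000_000, 0.09, 1/3, 30.0, 12)))  # 108000000, i.e., roughly $108 million
```

Because the output is simply the product of the inputs, small errors in any one of them (the subscriber-loss rate, the diversion share, or the margin) scale directly into the predicted harm, which is why Judge Leon’s scrutiny of each input mattered so much.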

The parties in Qualcomm are, as of the time of this posting, still awaiting a ruling. Dr. Shapiro’s model in that case attempts to predict the effect of Qualcomm’s alleged “no license, no chips” policy. He compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs. In other words, the FTC’s theory of harm is based upon the premise that Qualcomm is charging a supra-FRAND rate for its SEPs (the “royalty surcharge”) that squeezes the margins of OEMs. That margin squeeze, the FTC alleges, prevents rival chipset suppliers from obtaining a sufficient return when negotiating with OEMs. The FTC predicts the end result is a reduction in competition and an increase in the price of devices to consumers.
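The mechanics of the alleged squeeze can be shown with a stylized sketch. The figures below are hypothetical and are not taken from the FTC’s or Dr. Shapiro’s analysis; the only point is that a surcharge layered on top of a rival’s chip purchase reduces the OEM’s gains from trade with that rival.

```python
# Stylized sketch of the margin-squeeze comparison described above.
# Hypothetical figures only; this is not the FTC's or Dr. Shapiro's model.

def oem_gain(device_price, chip_price, frand_royalty, royalty_surcharge=0.0):
    """OEM gains from trade on one device under a given chip/royalty combination."""
    return device_price - chip_price - frand_royalty - royalty_surcharge

# Buying Qualcomm's chip and paying only a FRAND royalty...
with_qualcomm_chip = oem_gain(device_price=700, chip_price=30, frand_royalty=10)
# ...versus buying a rival's chip while allegedly paying a surcharge to Qualcomm.
with_rival_chip = oem_gain(device_price=700, chip_price=28, frand_royalty=10,
                           royalty_surcharge=5)

# Per the FTC's theory, the surcharge shrinks the OEM's gain from choosing the
# rival's chip, weakening the rival's ability to win the sale.
print(with_qualcomm_chip, with_rival_chip)  # 660 657
```

The dispute, of course, is over whether any such surcharge exists and is substantial, which is precisely the question the empirical record addresses.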

Qualcomm, like Judge Leon in AT&T, questioned the robustness of Dr. Shapiro’s model and its predictions in light of conflicting market realities. For example, Dr. Shapiro argued that the

leverage that Qualcomm brought to bear on the chips shifted the licensing negotiations substantially in Qualcomm’s favor and led to a significantly higher royalty than Qualcomm would otherwise have been able to achieve.

Yet, on cross-examination, Dr. Shapiro declined to move from theory to empirics when asked if he had quantified the effects of Qualcomm’s practice on any other chip makers. Instead, Dr. Shapiro responded that he had not, but he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences.” Under Dr. Shapiro’s theory, one would predict that royalty rates were higher after Qualcomm obtained market power.

As with Dr. Carlton’s testimony inviting Judge Leon to square the DOJ’s theory with conflicting historical facts in the industry, Qualcomm’s economic expert, Dr. Aviv Nevo, provided an analysis of Qualcomm’s royalty agreements from 1990-2017, confirming that there was no economically meaningful difference between the royalty rates during the time frame when Qualcomm was alleged to have market power and the royalty rates outside of that time frame. He also presented evidence that ex ante royalty rates did not increase upon implementation of the CDMA standard or the LTE standard. Moreover, Dr. Nevo testified that the industry itself was characterized by declining prices and increasing output and quality.

Dr. Shapiro’s model in Qualcomm appears to suffer from many of the same flaws that ultimately discredited his model in AT&T/Time Warner: It is based upon assumptions that are contrary to real-world evidence and it does not robustly or persuasively identify anticompetitive effects. Some observers, including our Scalia Law School colleague and former FTC Chairman, Tim Muris, would apparently find it sufficient merely to allege a theoretical “ability to manipulate the marketplace.” But antitrust cases require actual evidence of harm. We think Professor Muris instead captured the appropriate standard in his important article rejecting attempts by the FTC to shortcut its requirement of proof in monopolization cases:

This article does reject, however, the FTC’s attempt to make it easier for the government to prevail in Section 2 litigation. Although the case law is hardly a model of clarity, one point that is settled is that injury to competitors by itself is not a sufficient basis to assume injury to competition …. Inferences of competitive injury are, of course, the heart of per se condemnation under the rule of reason. Although long a staple of Section 1, such truncation has never been a part of Section 2. In an economy as dynamic as ours, now is hardly the time to short-circuit Section 2 cases. The long, and often sorry, history of monopolization in the courts reveals far too many mistakes even without truncation.

Timothy J. Muris, The FTC and the Law of Monopolization, 67 Antitrust L. J. 693 (2000)

We agree. Proof of actual anticompetitive effects, rather than speculation derived from models that are not robust to market realities, is an important safeguard to ensure that Section 2 protects competition and not merely individual competitors.

The future of bargaining models in antitrust remains to be seen. Judge Leon certainly did not question the proposition that they could play an important role in other cases. He closely dissected the testimony and models presented by both experts in AT&T/Time Warner, and his opinion serves as an important reminder: as complex economic evidence like bargaining models becomes more common in antitrust litigation, judges must carefully engage with the experts on both sides to determine whether there is direct evidence on the likely competitive effects of the challenged conduct. Where “real-world evidence,” as Judge Leon called it, contradicts the predictions of a bargaining model, judges should reject the model rather than the reality. Bargaining models have many potentially important antitrust applications, including horizontal mergers involving a bargaining component (such as hospital mergers), vertical mergers, and licensing disputes. The analysis of those models by the Ninth and D.C. Circuits will have important implications for how they will be deployed by the agencies and parties moving forward.

[TOTM: The following is the first in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]

Just days before leaving office, the outgoing Obama FTC left what should have been an unwelcome parting gift for the incoming Commission: an antitrust suit against Qualcomm. This week the FTC — under a new Chairman and with an entirely new set of Commissioners — finished unwrapping its present and rested its case in the trial begun earlier this month in FTC v. Qualcomm.

This complex case is about an overreaching federal agency seeking to set prices and dictate the business model of one of the world’s most innovative technology companies. As soon-to-be Acting FTC Chairwoman Maureen Ohlhausen noted in her dissent from the FTC’s decision to bring the case, it is “an enforcement action based on a flawed legal theory… that lacks economic and evidentiary support…, and that, by its mere issuance, will undermine U.S. intellectual property rights… worldwide.”

Implicit in the FTC’s case is the assumption that Qualcomm charges smartphone makers “too much” for its wireless communications patents — patents that are essential to many smartphones. But, as former FTC and DOJ chief economist, Luke Froeb, puts it, “[n]othing is more alien to antitrust than enquiring into the reasonableness of prices.” Even if Qualcomm’s royalty rates could somehow be deemed “too high” (according to whom?), excessive pricing on its own is not an antitrust violation under U.S. law.

Knowing this, the FTC “dances around that essential element” (in Ohlhausen’s words) and offers instead a convoluted argument that Qualcomm’s business model is anticompetitive. Qualcomm both sells wireless communications chipsets used in mobile phones, as well as licenses the technology on which those chips rely. According to the complaint, by licensing its patents only to end-users (mobile device makers) instead of to chip makers further up the supply chain, Qualcomm is able to threaten to withhold the supply of its chipsets to its licensees and thereby extract onerous terms in its patent license agreements.

There are numerous problems with the FTC’s case. Most fundamental among them is the “no duh” problem: Of course Qualcomm conditions the purchase of its chips on the licensing of its intellectual property; how could it be any other way? The alternative would require Qualcomm to actually facilitate the violation of its property rights by forcing it to sell its chips to device makers even if they refuse its patent license terms. In that world, what device maker would ever agree to pay more than a pittance for a patent license? The likely outcome is that Qualcomm charges more for its chips to compensate (or simply stops making them). Great, the FTC says; then competitors can fill the gap and — voila: the market is more competitive, prices will actually fall, and consumers will reap the benefits.

Except it doesn’t work that way. As many economists, including both the current and a prominent former chief economist of the FTC, have demonstrated, forcing royalty rates lower in such situations is at least as likely to harm competition as to benefit it. There is no sound theoretical or empirical basis for concluding that using antitrust to move royalty rates closer to some theoretical ideal will actually increase consumer welfare. All it does for certain is undermine patent holders’ property rights, virtually ensuring there will be less innovation.

In fact, given this inescapable reality, it is unclear why the current Commission is continuing to pursue the case at all. The bottom line is that, if it wins the case, the current FTC will have done more to undermine intellectual property rights than any other administration’s Commission has been able to accomplish.

It is not difficult to identify the frailties of the case that would readily support the agency backing away from pursuing it further. To begin with, the claim that device makers cannot refuse Qualcomm’s terms because the company effectively controls the market’s supply of mobile broadband modem chips is fanciful. While it’s true that Qualcomm is the largest supplier of these chipsets, it’s an absurdity to claim that device makers have no alternatives. In fact, Qualcomm has faced stiff competition from some of the world’s other most successful companies since well before the FTC brought its case. Samsung — the largest maker of Android phones — developed its own chip to replace Qualcomm’s in 2015, for example. More recently, Intel has provided Apple with all of the chips for its 2018 iPhones, and Apple is rumored to be developing its own 5G cellular chips in-house. In any case, the fact that most device makers have preferred to use Qualcomm’s chips in the past says nothing about the ability of other firms to take business from it.

The possibility (and actuality) of entry from competitors like Intel ensures that sophisticated purchasers like Apple have bargaining leverage. Yet, ironically, the FTC points to Apple’s claim that Qualcomm “forced” it to use Intel modems in its latest iPhones as evidence of Qualcomm’s dominance. Think about that: Qualcomm “forced” a company worth many times its own value to use a competitor’s chips in its new iPhones — and that shows Qualcomm has a stranglehold on the market?

The FTC implies that Qualcomm’s refusal to license its patents to competing chip makers means that competitors cannot reliably supply the market. Yet Qualcomm has never asserted its patents against a competing chip maker, every one of which uses Qualcomm’s technology without paying any royalties to do so. The FTC nevertheless paints the decision to license only to device makers as the aberrant choice of an exploitative, dominant firm. The reality, however, is that device-level licensing is the norm practiced by every company in the industry — and has been since the 1980s.

Not only that, but Qualcomm has not altered its licensing terms or practices since it was decidedly an upstart challenger in the market — indeed, since before it even started producing chips, and thus before it even had the supposed means to leverage its chip sales to extract anticompetitive licensing terms. It would be a remarkable coincidence if precisely the same licensing structure and the exact same royalty rate served the company’s interests both as a struggling startup and as an alleged rapacious monopolist. Yet that is the implication of the FTC’s theory.

When Qualcomm introduced CDMA technology to the mobile phone industry in 1989, it was a promising but unproven new technology in an industry dominated by different standards. Qualcomm happily encouraged chip makers to promote the standard by enabling them to produce compliant components without paying any royalties; and it willingly licensed its patents to device makers based on a percentage of sales of the handsets that incorporated CDMA chips. Qualcomm thus shared both the financial benefits and the financial risk associated with the development and sales of devices implementing its new technology.

Qualcomm’s favorable (to handset makers) licensing terms may have helped CDMA become one of the industry standards for 2G and 3G devices. But it’s an unsupportable assertion to say that those identical terms are suddenly the source of anticompetitive power, particularly as 2G and 3G are rapidly disappearing from the market and as competing patent holders gain prominence with each successive cellular technology standard.

To be sure, successful handset makers like Apple that sell their devices at a significant premium would prefer to share less of their revenue with Qualcomm. But their success was built in large part on Qualcomm’s technology. They may regret the terms of the deal that propelled CDMA technology to prominence, but Apple’s regret is not the basis of a sound antitrust case.

And although it’s unsurprising that manufacturers of premium handsets would like to use antitrust law to extract better terms from their negotiations with standard-essential patent holders, it is astonishing that the current FTC is carrying on the Obama FTC’s willingness to do it for them.

None of this means that Qualcomm is free to charge an unlimited price: standard-essential patents must be licensed on “FRAND” terms, meaning they must be fair, reasonable, and nondiscriminatory. It is difficult to assess what constitutes FRAND, but the most restrictive method is to estimate what negotiated terms would look like before a patent was incorporated into a standard. “[R]oyalties that are or would be negotiated ex ante with full information are a market bench-mark reflecting legitimate return to innovation,” writes Carl Shapiro, the FTC’s own economic expert in the case.

And that is precisely what happened here: We don’t have to guess what the pre-standard terms of trade would look like; we know them, because they are the same terms that Qualcomm offers now.

We don’t know exactly what the consequence would be for consumers, device makers, and competitors if Qualcomm were forced to accede to the FTC’s benighted vision of how the market should operate. But we do know that the market we actually have is thriving, with new entry at every level, enormous investment in R&D, and continuous technological advance. These aren’t generally the characteristics of a typical monopoly market. While the FTC’s effort to “fix” the market may help Apple and Samsung reap a larger share of the benefits, it will undoubtedly end up only hurting consumers.

Amazon offers Prime discounts to Whole Foods customers and offers free delivery for Prime members. Those are certainly consumer benefits. But with those benefits comes a cost, which may or may not be significant. By bundling its products with collective discounts, Amazon makes it more attractive for shoppers to shift their buying practices from local stores to the internet giant. Will this mean that local stores become less efficient as their volumes fall, and eventually close? Do most Americans care about the potential loss of local supermarkets and specialty grocers? No one, including antitrust enforcers, seems to have asked them.


Carl Shapiro, the government’s economics expert opposing the AT&T-Time Warner merger, seems skeptical of much of the antitrust populists’ Amazon rhetoric: “Simply saying that Amazon has grown like a weed, charges very low prices, and has driven many smaller retailers out of business is not sufficient. Where is the consumer harm?”

On its face, there was nothing about the Amazon/Whole Foods merger that should have raised any antitrust concerns. And while one year is too soon to fully judge the merger’s competitive impacts, much of the populist antitrust movement’s speculation that it would destroy competition and competitors and impoverish workers has so far failed to materialize.


AT&T’s merger with Time Warner has led to one of the most important, but least interesting, antitrust trials in recent history.

The merger itself is somewhat unimportant to consumers. It’s about as close to a “pure” vertical merger as we can get in today’s world and would not lead to a measurable increase in prices paid by consumers. At the same time, Richard J. Leon’s decision to approve the merger may have sent a signal regarding how the anticipated Fox-Disney (or Comcast), CVS-Aetna, and Cigna-Express Scripts mergers might proceed.

Judge Leon of the United States District Court in Washington said the U.S. Department of Justice had not proved that AT&T’s acquisition of Time Warner would lead to fewer choices for consumers and higher prices for television and internet services.

As shown in the figure below, there is virtually no overlap in services provided by Time Warner (content creation and broadcasting) and AT&T (content distribution). We say “virtually” because, through its ownership of DirecTV, AT&T has an ownership stake in several channels such as the Game Show Network, the MLB Network, and Root Sports. So, not a “pure” vertical merger, but pretty close. Besides, no one seems to really care about GSN, MLB, or Root.

[Figure: What's at Stake in the Proposed AT&T - Time Warner Merger (Statista infographic)]

The merger trial was one of the least interesting because the government’s case opposing the merger was so weak.

The Justice Department’s economic expert, University of California, Berkeley, professor Carl Shapiro, argued the merger would harm consumers and competition in three ways:

  1. AT&T would raise the price of content to other cable companies, driving up their costs, which would be passed on to consumers.
  2. Across more than 1,000 subscription television markets, AT&T could benefit by drawing customers away from rival content distributors in the event of a “blackout,” in which the distributor chooses not to carry Time Warner content over a pricing dispute. In addition, AT&T could also use its control over Time Warner content to retain customers by discouraging consumers from switching to providers that don’t carry the Time Warner content. Those two factors, according to Shapiro, could cause rival cable companies to lose between 9 and 14 percent of their subscribers over the long term.
  3. AT&T and competitor Comcast could coordinate to restrict access to popular Time Warner and NBC content in ways that could stifle competition from online cable alternatives such as Dish Network’s Sling TV or Sony’s PlayStation Vue. Even tacit coordination of this type would impair consumer choices, Shapiro opined.

Price increases and blackouts

Shapiro initially indicated the merger would cause consumers to pay an additional $436 million a year, which amounts to an average of 45 cents a month per customer, or a 0.4 percent increase. At trial, he testified the amount might be closer to 27 cents a month and conceded it could be as low as 13 cents a month.
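As a back-of-envelope check on how those figures fit together: the subscriber count and average bill below are values implied by the cited numbers, not figures from the testimony.

```python
# Back-of-envelope check of how the cited figures relate to one another.
# The subscriber count and average bill are implied values, not testimony.

annual_harm = 436_000_000      # dollars per year, per the initial estimate
monthly_increase = 0.45        # dollars per customer per month
percent_increase = 0.004       # the cited 0.4 percent

implied_subscribers = annual_harm / 12 / monthly_increase
implied_monthly_bill = monthly_increase / percent_increase

print(round(implied_subscribers / 1e6, 1))  # ~80.7 million affected customers
print(round(implied_monthly_bill, 2))       # ~112.5 dollars implied average monthly bill
```

The point of the exercise is simply that the headline dollar figure, the per-customer figure, and the percentage figure are all restatements of the same estimate, so the later concessions at trial shrink all of them together.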

The government’s “blackout” arguments seemed to get lost in the shifting sands of the survey results. Blackouts mattered, according to Shapiro, because “Even though they don’t happen very much, that’s the key to leverage.” His testimony on the potential for price hikes relied heavily on a study commissioned by Charter Communications Inc., which opposes the merger. Stefan Bewley, a director at consulting firm Altman Vilandrie & Co., which produced the study, testified the report predicted Charter would lose 9 percent of its subscribers if it lost access to Turner programming.

Under cross-examination by AT&T’s lawyer, Bewley acknowledged what was described as a “final” version of the study presented to Charter in April last year put the subscriber loss estimate at 5 percent. When confronted with his own emails about the change to 9 percent, Bewley said he agreed to the update after meeting with Charter. At the time of the change from 5 percent to 9 percent, Charter was discussing its opposition to the merger with the Justice Department.

Bewley noted that the change occurred because he saw that some of the figures his team had gathered about Turner networks were outliers, with a range of subscriber losses of 5 percent on the low end and 14 percent on the high end. He indicated his team came up with a “weighted average” of 9 percent.

This 5/9/14 percent distinction seems to be critical to the government’s claim the merger would raise consumer prices. Referring to Shapiro’s analysis, AT&T-Time Warner’s lead counsel, Daniel Petrocelli, asked Bewley: “Are you aware that if he’d used 5 percent there would have been a price increase of zero?” Bewley said he was not aware.

At trial, AT&T and Turner executives testified that they couldn’t credibly threaten to withhold Turner programming from rivals because the networks’ profitability depends on wide distribution. In addition, one of AT&T’s expert witnesses, University of California, Berkeley business and economics professor Michael Katz, testified about what he said were the benefits of AT&T’s offer to use “baseball style” arbitration with rival pay TV distributors if the two sides couldn’t agree on what fees to pay for Time Warner’s Turner networks. With baseball style arbitration, both sides submit their final offer to an arbitrator, who determines which of the two offers is most appropriate.
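A minimal sketch of that mechanism, using hypothetical per-subscriber fees, shows why final-offer arbitration tends to discipline both sides’ demands: an extreme offer is likely to lose outright to the other side’s more moderate one.

```python
# Minimal sketch of final-offer ("baseball style") arbitration: the arbitrator
# must pick one side's final offer in full, whichever is closer to the
# arbitrator's own view of a fair fee. All figures are hypothetical.

def final_offer_arbitration(distributor_offer, programmer_offer, arbitrator_value):
    """Return the winning per-subscriber fee."""
    if abs(distributor_offer - arbitrator_value) <= abs(programmer_offer - arbitrator_value):
        return distributor_offer
    return programmer_offer

# A moderate demand wins; an aggressive one loses outright.
print(final_offer_arbitration(1.80, 2.10, 2.00))  # 2.1
print(final_offer_arbitration(1.80, 2.60, 2.00))  # 1.8
```

Because neither side can count on the arbitrator splitting the difference, both have an incentive to submit offers close to what they believe a neutral would consider fair, which is why the commitment mattered to the court's assessment of the blackout threat.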

Under the terms of the arbitration offer, AT&T has agreed not to black out its networks for the duration of negotiations with distributors. Dennis Carlton, an economics professor at the University of Chicago, said Shapiro’s model was unreliable because he didn’t account for that. Shapiro conceded he did not factor that into his study, saying that he would need to use an entirely different model to study how the arbitration agreement would affect the merger.

Coordination with Comcast/NBCUniversal

The government’s contention that, after the merger, AT&T and rival Comcast could coordinate to restrict access to popular Time Warner and NBC content to harm emerging competitors was always a weak argument.

At trial, the Justice Department seemed to abandon any claim that the merged company would unilaterally restrict access to online “virtual MVPDs.” The government’s case, made by its expert Shapiro, ended up being that there would be a “risk” and “danger” that AT&T and Comcast would “coordinate” to withhold programming in a way that would harm emerging online multichannel distributors. However, under cross-examination, he conceded that his opinions were not based on a “quantifiable model.” Shapiro testified that he had no opinion whether the odds of such coordination would be greater than 1 percent.

Doing its case no favors, the government turned to a seemingly contradictory argument: that AT&T and Comcast would coordinate to demand that virtual providers take too much content. Emerging online multichannel distributors pitch their offerings as “skinny bundles” with a limited selection of the more popular channels. The government argued that, by forcing these providers to take more channels, the merged firm would undermine the skinny-bundle business model, a variation on raising rivals’ costs. This theory did not get much play at trial, but it suggests the government was trying to have its cake and eat it, too.

Except in this case, as with much of the government’s case in this matter, the cake was not completely baked.

 

On January 23rd, the Heritage Foundation convened its Fourth Annual Antitrust Conference, “Trump Antitrust Policy after One Year.”  The entire Conference can be viewed online (here).  The Conference featured a keynote speech, followed by three separate panels addressing developments at the Federal Trade Commission (FTC), at the Justice Department’s Antitrust Division (DOJ), and in the international arena, developments that can have a serious effect on the country’s economic growth and the expansion of its business and industrial sector.

  1. Professor Bill Kovacic’s Keynote Speech

The conference started with a bang, featuring a stellar keynote speech (complemented by excellent PowerPoint slides) by GW Professor and former FTC Chairman Bill Kovacic, who also serves as a Member of the Board of the UK Government’s Competition and Markets Authority.  Kovacic began by noting the claim by senior foreign officials that “nothing is happening” in U.S. antitrust enforcement.  Although this perception may be inaccurate, Kovacic argued that it colors foreign officials’ dealings with the U.S., and continues a preexisting trend of diminishing U.S. influence on foreign governments’ antitrust enforcement systems.  (It is widely believed that the European antitrust model is dominant internationally.)

In order to enhance the perceived effectiveness (and prestige) of American antitrust on the global plane, American antitrust enforcers should, according to Kovacic, adopt a positive agenda citing specific priorities for action (as opposed to a “negative approach” focused on what actions will not be taken) – an orientation which former FTC Chairman Muris employed successfully in the last Bush Administration.  The positive engagement themes should be communicated powerfully to the public here and abroad through active public engagement by agency officials.  Agency strengths, such as FTC market studies and economic expertise, should be highlighted.

In addition, the FTC and Justice Department should act more like an “antitrust policy joint venture” at home and abroad, extending cooperation beyond guidelines to economic research, studies, and other aspects of their missions.  This would showcase the outstanding capabilities of the U.S. public antitrust enterprise.

  2. FTC Panel

A panel on FTC developments (moderated by Dr. Jeff Eisenach, Managing Director of NERA Economic Consulting and former Chief of Staff to FTC Chairman James Miller) followed Kovacic’s presentation.

Acting Bureau of Competition Chief Bruce Hoffman began by stressing that FTC antitrust enforcers are busier than ever, with a number of important cases in litigation and resources stretched to the limit.  Thus, FTC enforcement is neither weak nor timid – to the contrary, it is quite vigorous.  Hoffman was surprised by recent political attacks on the 40-year bipartisan consensus regarding the economics-centered consumer welfare standard that has set the direction of U.S. antitrust enforcement.  According to Hoffman, noted economist Carl Shapiro has debunked the notion that supposed increases in industry concentration, even at the national level, are meaningful.  In short, there is no empirical basis to dethrone the consumer welfare standard and replace it with something else.

Other former senior FTC officials engaged in a discussion following Hoffman’s remarks.  Orrick Partner Alex Okuliar, a former Attorney-Advisor to FTC Acting Chairman Maureen Ohlhausen, noted Ohlhausen’s emphasis on “regulatory humility” (recognizing the inherent limitations of regulation and acting in accordance with those limits) and on the work of the FTC’s Economic Liberty Task Force, which centers on removing unnecessary regulatory restraints on competition (such as excessive occupational licensing requirements).

Wilson Sonsini Partner Susan Creighton, a former Director of the FTC’s Bureau of Competition, discussed the importance of economics-based “technocratic antitrust” (applied by sophisticated judges) for a sound and manageable antitrust system – something still not well understood by many foreign antitrust agencies.  Creighton had three reform suggestions for the Trump Administration:

(1) the DOJ and the FTC should stress the central role of economics in the institutional arrangements of antitrust (DOJ’s “economics structure” is a bit different from the FTC’s);

(2) both agencies should send relatively more economists to represent the United States at antitrust meetings abroad, thereby enabling the agencies to place greater stress on the importance of economic rigor in antitrust enforcement; and

(3) the FTC and the DOJ should establish a task force to jointly carry out economics research and hone a consistent economic policy message.

Sidley Austin Partner Bill Blumenthal, a former FTC General Counsel, noted the problems of defining Trump FTC policy in the absence of new Trump FTC Commissioners.  Blumenthal noted that signs of a populist uprising against current antitrust norms extend beyond antitrust, and that the agencies may have to look to new unilateral conduct cases to show that they are “doing something.”  He added that the populist rejection of current economics-based antitrust analysis is intellectually incoherent: there is a tension, for example, between protecting consumers and protecting labor, since anti-consumer cartels may be beneficial to labor union interests.

In a follow-up roundtable discussion, Hoffman noted that theoretical “existence theorems” of anticompetitive harm that lack empirical support in particular cases are not administrable.  Creighton opined that, as an independent agency, the FTC may be a bit more susceptible to congressional pressure than DOJ.  Blumenthal stated that congressional interest may be able to trigger particular investigations, but it does not dictate outcomes.

  3. DOJ Panel

Following lunch, a panel of antitrust experts (moderated by Morgan Lewis Partner Hill Wellford, a former Chief of Staff to the Assistant Attorney General for Antitrust) addressed DOJ developments.

The current Principal Deputy Assistant Attorney General for Antitrust, Andrew Finch, began by stating that the three major Antitrust Division initiatives involve (1) intellectual property (IP), (2) remedies, and (3) criminal enforcement.  Assistant Attorney General Makan Delrahim’s November 2017 speech explained that antitrust should not undermine legitimate incentives of patent holders to maximize returns to their IP through licensing.  DOJ is looking into buyer and seller cartel behavior (including in standard setting) that could harm IP rights.  DOJ will work to streamline and improve consent decrees and other remedies, and make it easier to go after decree violations.  In criminal enforcement, DOJ will continue to go after “no employee poaching” employer agreements as criminal violations.

Former Assistant Attorney General Tom Barnett, a Covington & Burling Partner, noted that more national agencies are willing to intervene in international matters, leading to inconsistencies in results.  The International Competition Network is important, but major differences in rhetoric have created a sense that there is very little agreement among enforcers, although the reality may be otherwise.  Muted U.S. agency voices on the international plane and limited resources have proven unfortunate – the FTC needs to engage better in international discussions and needs new Commissioners.

Former Counsel to the Assistant Attorney General Eric Grannon, a White & Case Partner, made three specific comments:

(1) DOJ should look outside the career criminal enforcement bureaucracy and consider selecting someone with significant private sector experience as Deputy Assistant Attorney General for Criminal Enforcement;

(2) DOJ needs to go beyond merely focusing on metrics that show increased aggregate fines and jail time year-by-year (something is wrong if cartel activities and penalties keep rising despite the growing emphasis on inculcating an “anti-cartel culture” within firms); and

(3) DOJ needs to reassess its “amnesty plus” program, in which an amnesty applicant benefits by highlighting the existence of a second cartel in which it participates (non-culpable firms allegedly in the second cartel may be fingered, leading to unjustified potential treble damages liability for them in private lawsuits).

Grannon urged that DOJ hold a public workshop on the amnesty plus program in the coming year.  Grannon also argued against the classification of antitrust offenses as crimes of “moral turpitude” (moral turpitude offenses allow perpetrators to be excluded from the U.S. for 20 years).  Finally, as a good government measure, Grannon recommended that the Antitrust Division should post all briefs on its website, including those of opposing parties and third parties.

Baker Botts Partner Stephen Weissman, a former Deputy Director of the FTC’s Bureau of Competition, found a great deal of continuity in DOJ civil enforcement.  Nevertheless, he expressed surprise at Assistant Attorney General Delrahim’s recent remarks suggesting that DOJ might consider asking the Supreme Court to overturn the Illinois Brick ban on indirect purchaser suits under federal antitrust law.  Weissman noted the increased DOJ focus on the rights of IP holders, not implementers, and the beneficial emphasis on the importance of DOJ’s amicus program.

The following discussion among the panelists elicited agreement (Weissman and Barnett) that the business community needs more clear-cut guidance on vertical mergers (and perhaps on other mergers as well) and affirmative statements on DOJ’s plans.  DOJ was characterized as too heavy-handed in setting timing agreements in mergers.  The panelists were in accord that enforcers should continue to emphasize the American consumer welfare model of antitrust.  The panelists believed the U.S. gets it right in stressing jail time for cartelists and in detrebling for amnesty applicants.  DOJ should, however, apply a proper dose of skepticism in assessing the factual content of proffers made by amnesty applicants.  Former enforcers saw no need to automatically grant markers to those applicants.  Andrew Finch returned to the topic of Illinois Brick, explaining that the Antitrust Modernization Commission had suggested reexamining that case’s bar on federal indirect purchaser suits.  In response to an audience question as to which agency should do internet oversight, Finch stressed that relevant agency experience and resources are assessed on a matter-specific basis.

  4. International Panel

The last panel of the afternoon, which focused on international developments, was moderated by Cadwalader Counsel (and former Attorney-Advisor to FTC Chairman Tim Muris) Bilal Sayyed.

Deputy Assistant Attorney General for International Matters, Roger Alford, began with an overview of trade and antitrust considerations.  Alford explained that DOJ adds a consumer welfare and economics perspective to Trump Administration trade policy discussions.  On the international plane, DOJ supports principles of non-discrimination, strong antitrust enforcement, and opposition to national champions, plus the addition of a new competition chapter in “NAFTA 2.0” negotiations.  The revised 2017 DOJ International Antitrust Guidelines dealt with economic efficiency and the consideration of comity.  DOJ and the Executive Branch will take into account the degree of conflict with other jurisdictions’ laws (fleshing out comity analysis) and will push case coordination as well as policy coordination.  DOJ is considering new ideas for dealing with due process internationally, in addition to working within the International Competition Network to develop best practices.  Better international coordination is also needed on the cartel leniency program.

Next, Koren Wong-Ervin, Qualcomm Director of IP and Competition Policy (and former Director of the Scalia Law School’s Global Antitrust Institute), stated that the Korea Fair Trade Commission had ignored comity and guidance from U.S. expert officials in imposing global licensing remedies and penalties on Qualcomm.  The U.S. Government is moving toward a sounder approach on the evaluation of standard essential patents, as is Europe, with a move away from required component-specific patent licensing royalty determinations.  More generally, a return to an economic effects-based approach to IP licensing is important.  Comprehensive revisions to China’s Anti-Monopoly Law, now under consideration, will have enormous public policy importance.  Balanced IP licensing rules, with courts as gatekeepers, are important.  Chinese law still contains overly broad essential facilities and deception provisions, and IP price regulation proposals are very troublesome.  New FTC Commissioners are needed, accompanied by robust budget support for international work.

Latham & Watkins’ Washington, D.C. Managing Partner Michael Egge focused on the substantial divergence in merger enforcement practice around the world.  The cost of compliance imposed by European Commission pre-notification filing requirements is overly high; this pre-notification practice is not written down and has escaped needed public attention.  Chinese merger filing practice (“China is struggling to cope”) features a costly 1-3 month pre-filing acceptance period, and merger filing requirements in India are particularly onerous.

Jim Rill, former Assistant Attorney General for Antitrust and former ABA Antitrust Section Chair, stressed that due process improvements can help promote substantive antitrust convergence around the globe.  Rill stated that U.S. Government officials, with the assistance of private sector stakeholders, need a mechanism (a “report card”) to measure foreign agencies’ implementation of OECD antitrust recommendations.  U.S. Government officials should consider participating in foreign proceedings where the denial of due process is blatant, and where foreign governments indirectly dictate a particular harmful policy result.  Multilateral review of international agreements is valuable as well.  The comity principles found in the 1991 EU-U.S. Antitrust Cooperation Agreement are quite useful.  Trade remedies in antitrust agreements are not a competition solution, and are not helpful.  More and better training programs for foreign officials are called for; International Chamber of Commerce, American Bar Association, and U.S. Chamber of Commerce principles are generally sound.  Some consideration should be given to old ICPAC recommendations, such as (perhaps) the development of a common merger notification form for use around the world.

Douglas Ginsburg, Senior Judge (and former Chief Judge) of the U.S. Court of Appeals for the D.C. Circuit, and former Assistant Attorney General for Antitrust, spoke last, focusing on the European Court of Justice’s Intel decision, which laid bare the deficiencies in the European Commission’s finding of a competition law violation in that matter.

In a brief closing roundtable discussion, Roger Alford suggested possible greater involvement by business community stakeholders in training foreign antitrust officials.

  5. Conclusion

Heritage Foundation host Alden Abbott closed the proceedings with a brief capsule summary of panel highlights.  As in prior years, the Fourth Annual Heritage Antitrust Conference generated spirited discussion among the brightest lights in the American antitrust firmament on recent developments and likely trends in antitrust enforcement and policy development, here and abroad.