
[This post adapts elements of “Should ASEAN Antitrust Laws Emulate European Competition Policy?”, published in the Singapore Economic Review (2021). Open access working paper here.]

U.S. and European competition laws diverge in numerous ways that have important real-world effects. Understanding these differences is vital, particularly as lawmakers in the United States, and the rest of the world, consider adopting a more “European” approach to competition.

In broad terms, the European approach is more centralized and political. The European Commission’s Directorate General for Competition (DG Comp) has significant de facto discretion over how the law is enforced. This contrasts with the common law approach of the United States, in which courts elaborate upon open-ended statutes through an iterative process of case law. In other words, the European system was built from the top down, while U.S. antitrust relies on a bottom-up approach, derived from arguments made by plaintiffs (including the government antitrust agencies) and defendants (usually businesses).

This procedural divergence has significant ramifications for substantive law. European competition law includes more provisions akin to de facto regulation. This is notably the case for the “abuse of dominance” standard, in which a “dominant” business can be prosecuted for “abusing” its position by charging high prices or refusing to deal with competitors. By contrast, the U.S. system places more emphasis on actual consumer outcomes, rather than the nature or “fairness” of an underlying practice.

The American system thus affords firms more leeway to exclude their rivals, so long as this entails superior benefits for consumers. This may make the U.S. system more hospitable to innovation, since there is no built-in regulation of conduct for innovators who acquire a successful market position fairly and through normal competition.

In this post, we discuss some key differences between the two systems—including in areas like predatory pricing and refusals to deal—as well as the discretionary power the European Commission enjoys under the European model.

Exploitative Abuses

U.S. antitrust is, by and large, unconcerned with companies charging what some might consider “excessive” prices. The late Associate Justice Antonin Scalia, writing for the Supreme Court majority in the 2004 case Verizon v. Trinko, observed that:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.

This contrasts with European competition-law cases, where firms may be found to have infringed competition law because they charged excessive prices. As the European Court of Justice (ECJ) held in 1978’s United Brands case: “In this case charging a price which is excessive because it has no reasonable relation to the economic value of the product supplied would be such an abuse.”

United Brands was the EU’s foundational case for excessive pricing, and the European Commission reiterated, when it published its 2009 guidance paper on abuse-of-dominance cases, that such exploitative abuses remained possible. Even so, the commission had long shown little apparent interest in bringing such cases. In recent years, however, both the European Commission and some national authorities have shown renewed interest in excessive-pricing cases, most notably in the pharmaceutical sector.

European competition law also penalizes so-called “margin squeeze” abuses, in which a dominant upstream supplier charges a price to distributors that is too high for them to compete effectively with that same dominant firm downstream:

[I]t is for the referring court to examine, in essence, whether the pricing practice introduced by TeliaSonera is unfair in so far as it squeezes the margins of its competitors on the retail market for broadband connection services to end users. (Konkurrensverket v TeliaSonera Sverige, 2011)

As Scalia observed in Trinko, forcing firms to charge prices below a market’s natural equilibrium dampens their incentives to enter markets, notably with innovative products and more efficient means of production. But the problem is not just one of market entry and innovation. Also relevant is whether competition authorities are even competent to determine the “right” prices or margins.

As Friedrich Hayek demonstrated in his influential 1945 essay The Use of Knowledge in Society, economic agents use information gleaned from prices to guide their business decisions. It is this distributed activity of thousands or millions of economic actors that enables markets to put resources to their most valuable uses, thereby leading to more efficient societies. By comparison, the efforts of central regulators to set prices and margins are necessarily inferior; there is simply no reasonable way for competition regulators to make such judgments in a consistent and reliable manner.

Given the substantial risk that investigations into purportedly excessive prices will deter market entry, such investigations should be circumscribed. But the court’s precedents, with their myopic focus on ex post prices, do not impose such constraints on the commission. The temptation to “correct” high prices—especially in the politically contentious pharmaceutical industry—may thus induce economically unjustified and ultimately deleterious intervention.

Predatory Pricing

A second important area of divergence concerns predatory-pricing cases. U.S. antitrust law subjects allegations of predatory pricing to two strict conditions:

  1. Monopolists must charge prices that are below some measure of their incremental costs; and
  2. There must be a realistic prospect that they will be able to recoup these initial losses (a condition sketched formally below).
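The recoupment condition can be made concrete with a minimal sketch. The notation below is ours, not the Court’s, and assumes a simple two-phase scheme: losses for a while, then supra-competitive profits.

```latex
% Minimal recoupment sketch (notation ours, not the Court's): a predator
% absorbs per-period losses L_t for T_1 periods, discounted at \delta,
% and the scheme is rational only if monopoly profits \pi^m over the
% following T_2 periods exceed but-for competitive profits \pi^c by
% enough to cover those losses:
\[
\sum_{t=1}^{T_1} \delta^{t} L_t \;<\; \sum_{t=T_1+1}^{T_1+T_2} \delta^{t} \left( \pi^{m}_{t} - \pi^{c}_{t} \right)
\]
% If rivals can simply re-enter once prices rise, \pi^m_t stays close to
% \pi^c_t, the right-hand side collapses toward zero, and the scheme is
% irrational. That is the point of the recoupment requirement.
```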

In laying out its approach to predatory pricing, the U.S. Supreme Court has identified the risk of false positives and the clear cost of such errors to consumers. It thus has particularly stressed the importance of the recoupment requirement. As the court found in 1993’s Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”

Accordingly, U.S. authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme, or that the scheme itself would effectively foreclose rivals from entering the market in the first place. Otherwise, the predator would be undercut by competitors as soon as it attempts to recoup its losses by charging supra-competitive prices.

Without the strong likelihood that a monopolist will be able to recoup lost revenue from underpricing, the overwhelming weight of economic evidence (to say nothing of simple logic) is that predatory pricing is not a rational business strategy. Thus, apparent cases of predatory pricing are most likely not, in fact, predatory; deterring or punishing them would actually harm consumers.

By contrast, the EU employs a more expansive legal standard to define predatory pricing, and almost certainly risks injuring consumers as a result. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory. Even when a firm charges prices that are between its average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of a plan to eliminate a competitor. Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.

[I]t does not follow from the case‑law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive. (France Télécom v Commission, 2009).
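For contrast, the EU’s cost-based test described above can be stated schematically. The notation is ours; the substance is the standard the preceding paragraph lays out.

```latex
% Schematic of the EU cost thresholds described above, where p is the
% dominant firm's price, AVC its average variable cost, and ATC its
% average total cost:
\[
p < \mathrm{AVC} \;\Longrightarrow\; \text{predation presumed}
\]
\[
\mathrm{AVC} \le p < \mathrm{ATC} \;\Longrightarrow\; \text{predation, if shown to be part of a plan to eliminate a competitor}
\]
% Note what is absent: neither branch asks whether recoupment is feasible.
```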

This aspect of the legal standard has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant Chicago School understanding of predatory pricing. Indeed, strategic predatory pricing still requires some form of recoupment, and the refutation of any convincing business justification offered in response. For example, in a 2017 piece for the Antitrust Law Journal, Steven Salop lays out the “raising rivals’ costs” analysis of predation and notes that recoupment still occurs, just at the same time as predation:

[T]he anticompetitive conditional pricing practice does not involve discrete predatory and recoupment periods, as in the case of classical predatory pricing. Instead, the recoupment occurs simultaneously with the conduct. This is because the monopolist is able to maintain its current monopoly power through the exclusionary conduct.

The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in American antitrust law serves to differentiate aggressive pricing behavior that improves consumer welfare—because it leads to overall price decreases—from predatory pricing that reduces welfare with higher prices. It is, in other words, entirely focused on the welfare of consumers.

The European approach, by contrast, reflects structuralist considerations far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could use aggressive pricing to engender more concentrated markets. It is simply presumed that these more concentrated markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the ECJ’s reasoning on this point:

[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors. (Tetra Pak v Commission, 1996).

Similarly:

[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them.  (France Télécom v Commission, 2009).

In short, the European approach leaves less room to analyze the concrete effects of a given pricing scheme, leaving it more prone to false positives than the U.S. standard explicated in the Brooke Group decision. Worse still, the European approach ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory pricing standards may exert on firms that would otherwise seek to use aggressive pricing schemes to attract consumers.

Refusals to Deal

U.S. and EU antitrust law also differ greatly when it comes to refusals to deal. While the United States has limited the ability of either enforcement authorities or rivals to bring such cases, EU competition law sets a far lower threshold for liability.

As Justice Scalia wrote in Trinko:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. (Verizon v Trinko, 2004.)

This highlights two key features of American antitrust law with regard to refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine. Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market. Moreover, as Justice Scalia observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.

While even this is not likely the economically appropriate limitation on liability, its impetus—ensuring that liability is found only in situations where procompetitive explanations for the challenged conduct are unlikely—is completely appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.

As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.

In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must not be objective reasons that justify the refusal. Moreover, if the refusal to deal involves intellectual property, it must also prevent the appearance of a new good.

In practice, however, all of these conditions have been relaxed significantly by EU courts and the commission’s decisional practice. This is best evidenced by the lower court’s Microsoft ruling where, as John Vickers notes:

[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.

EU competition law thus shows far less concern for its potential chilling effect on firms’ investments than does U.S. antitrust law.

Vertical Restraints

There are vast differences between U.S. and EU competition law relating to vertical restraints—that is, contractual restraints between firms that operate at different levels of the production process.

On the one hand, since the Supreme Court’s Leegin ruling in 2007, even price-related vertical restraints (such as resale price maintenance (RPM), under which a manufacturer can stipulate the prices at which retailers must sell its products) are assessed under the rule of reason in the United States. Some commentators have gone so far as to say that, in practice, U.S. case law on RPM almost amounts to per se legality.

Conversely, EU competition law treats RPM as severely as it treats cartels. Both RPM and cartels are considered to be restrictions of competition “by object”—the EU’s equivalent of a per se prohibition. This severe treatment also applies to non-price vertical restraints that tend to partition the European internal market.

Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist, and economically grounded, principle that inter-brand competition is the appropriate framework to assess vertical restraints:

Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former. (Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, 1966).

This treatment of vertical restrictions flies in the face of longstanding mainstream economic analysis of the subject. As Patrick Rey and Jean Tirole conclude:

Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.

Unlike the EU, the U.S. Supreme Court in Leegin took account of the weight of the economic literature, and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits, writing: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.” Further, the court found that the prior approach to resale price maintenance restraints “hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”

The EU’s continued per se treatment of RPM, by contrast, strongly reflects its “precautionary principle” approach to antitrust. European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, exceedingly unlikely. The U.S. approach, which rests on likelihood rather than mere possibility, is far less likely to condemn beneficial conduct erroneously.

Political Discretion in European Competition Law

EU competition law lacks a coherent analytical framework like that found in U.S. law’s reliance on the consumer welfare standard. The EU process is instead driven by a number of co-equal—and sometimes mutually exclusive—goals, including industrial policy and the perceived need to counteract foreign state ownership and subsidies. Such a wide array of conflicting aims produces a lack of clarity for firms seeking to conduct business. Moreover, the discretion that attends this fluid arrangement of goals yields an even larger problem.

The Microsoft case illustrates this problem well. In Microsoft, the commission could have chosen to base its decision on various potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.”

The commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains, because it determined “consumer choice” among media players was more important:

Another argument relating to reduced transaction costs consists in saying that the economies made by a tied sale of two products saves resources otherwise spent for maintaining a separate distribution system for the second product. These economies would then be passed on to customers who could save costs related to a second purchasing act, including selection and installation of the product. Irrespective of the accuracy of the assumption that distributive efficiency gains are necessarily passed on to consumers, such savings cannot possibly outweigh the distortion of competition in this case. This is because distribution costs in software licensing are insignificant; a copy of a software programme can be duplicated and distributed at no substantial effort. In contrast, the importance of consumer choice and innovation regarding applications such as media players is high. (Commission Decision No. COMP. 37792 (Microsoft)).

It may be true that tying the products in question was unnecessary. But merely dismissing this decision because distribution costs are near-zero is hardly an analytically satisfactory response. There are many more costs involved in creating and distributing complementary software than those associated with hosting and downloading. The commission also simply asserts that consumer choice among some arbitrary number of competing products is necessarily a benefit. This, too, is not necessarily true, and the decision’s implication that any marginal increase in choice is more valuable than any gains from product design or innovation is analytically incoherent.

The Court of First Instance was only too happy to give the commission a pass on this breezy analysis; it saw no objection to these findings. With little substantive reasoning of its own, the court fully endorsed the commission’s assessment:

As the Commission correctly observes (see paragraph 1130 above), by such an argument Microsoft is in fact claiming that the integration of Windows Media Player in Windows and the marketing of Windows in that form alone lead to the de facto standardisation of the Windows Media Player platform, which has beneficial effects on the market. Although, generally, standardisation may effectively present certain advantages, it cannot be allowed to be imposed unilaterally by an undertaking in a dominant position by means of tying.

The Court further notes that it cannot be ruled out that third parties will not want the de facto standardisation advocated by Microsoft but will prefer it if different platforms continue to compete, on the ground that that will stimulate innovation between the various platforms. (Microsoft Corp. v Commission, 2007)

Pointing to these conflicting effects of Microsoft’s bundling decision, without weighing either, is a weak basis to uphold the commission’s decision that consumer choice outweighs the benefits of standardization. Moreover, actions undertaken by other firms to enhance consumer choice at the expense of standardization are, on these terms, potentially just as problematic. The dividing line becomes solely which theory the commission prefers to pursue.

What such a practice does is vest the commission with immense discretionary power. Any given case sets up a “heads, I win; tails, you lose” situation in which defendants are easily outflanked by a commission that can change the rules of its analysis as it sees fit. Defendants can play only the cards that they are dealt. Accordingly, Microsoft could not successfully challenge a conclusion that its behavior harmed consumers’ choice by arguing that it improved consumer welfare, on net.

By selecting, in this instance, “consumer choice” as the standard to be judged, the commission was able to evade the constraints that might have been imposed by a more robust welfare standard. Thus, the commission can essentially pick and choose the objectives that best serve its interests in each case. This vastly enlarges the scope of potential antitrust liability, while also substantially decreasing the ability of firms to predict when their behavior may be viewed as problematic. It leads to what, in U.S. courts, would be regarded as an untenable risk of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms.

Congressman Buck’s “Third Way” report offers a compromise between the House Judiciary Committee’s majority report, which proposes sweeping new regulation of tech companies, and the status quo, which Buck argues is unfair and insufficient. But though Buck rejects many of the majority report’s proposals, what he proposes instead would lead to virtually the same outcome via a slightly longer process.

The most significant majority proposals that Buck rejects are the structural separation to prevent a company that runs a platform from operating on that platform “in competition with the firms dependent on its infrastructure”, and line-of-business restrictions that would confine tech companies to a small number of markets, to prevent them from preferencing their other products to the detriment of competitors.

Buck rules these out, saying that they are “regulatory in nature [and] invite unforeseen consequences and divert attention away from public interest antitrust enforcement by our antitrust agencies.” He goes on to say that “this proposal is a thinly veiled call to break up Big Tech firms.”

Instead, Buck endorses, either fully or provisionally, measures including revitalising the essential facilities doctrine, imposing data interoperability mandates on platforms, and changing antitrust law to prevent “monopoly leveraging and predatory pricing”. 

Put together, though, these would amount to the same thing that the Democratic majority report proposes: a world where platforms are basically just conduits, regulated to be neutral and open, and where the companies that run them require a regulator’s go-ahead for important decisions — a process that would be just as influenced by lobbying and political considerations, and as insulated from market price signals, as any other regulator’s decisions are.

Revitalizing the essential facilities doctrine

Buck describes proposals to “revitalize the essential facilities doctrine” as “common ground” that warrant further consideration. This would mean that platforms deemed to be “essential facilities” would be required to offer access to their platform to third parties at a “reasonable” price, except in exceptional circumstances. The presumption would be that these platforms were anticompetitively foreclosing third party developers and merchants by either denying them access to their platforms or by charging them “too high” prices. 

This would require the kind of regulatory oversight that Buck says he wants to avoid. He says that “conservatives should be wary of handing additional regulatory authority to agencies in an attempt to micromanage platforms’ access rules.” But there’s no way to avoid this when the “facility” — and hence its pricing and access rules — changes as frequently as any digital platform does. In practice, digital platforms would have to justify their pricing rules and decisions about exclusion of third parties to courts or a regulator as often as they make those decisions.

If Apple’s App Store were deemed an essential facility, such that Apple was presumed to be foreclosing third-party developers any time it rejected their submissions, it would have to submit to regulatory scrutiny of the “reasonableness” of its commercial decisions on, literally, a daily basis.

That would likely require price controls to prevent platforms from using pricing to de facto exclude third parties they did not want to deal with. Adjudication of “fair” pricing by courts is unlikely to be a sustainable solution. Then-Judge Stephen Breyer, writing in Town of Concord v. Boston Edison Co., considered this to be outside the courts’ purview:

[H]ow is a judge or jury to determine a ‘fair price?’ Is it the price charged by other suppliers of the primary product? None exist. Is it the price that competition ‘would have set’ were the primary level not monopolized? How can the court determine this price without examining costs and demands, indeed without acting like a rate-setting regulatory agency, the rate-setting proceedings of which often last for several years? Further, how is the court to decide the proper size of the price ‘gap?’ Must it be large enough for all independent competing firms to make a ‘living profit,’ no matter how inefficient they may be? . . . And how should the court respond when costs or demands change over time, as they inevitably will?

In practice, infrastructure treated as an essential facility is usually subject to pricing control by a regulator. This has its own difficulties. The UK’s energy and water infrastructure is an example. In determining optimal access pricing, regulators must set a price that balances competing needs: maximising short-term output, incentivising investment by the infrastructure owner, incentivising innovation and entry by competitors (e.g., local energy grids) and, of course, avoiding “excessive” pricing.

This is a near-impossible task, and the process is often drawn out and subject to challenges even in markets where the infrastructure is relatively simple. It is even less likely that these considerations would be objectively tractable in digital markets.

Treating a service as an essential facility is based on the premise that, absent mandated access, it is impossible to compete with it. But mandating access does not, on its own, prevent it from extracting monopoly rents from consumers; it just means that other companies selling inputs can have their share of the rents. 

So you may end up with two different sets of price controls: on the consumer side, to determine how much monopoly rent can be extracted from consumers, and on the access side, to determine how the monopoly rents are divided.

The UK’s energy market has both, for example. In the case of something like an electricity network, where it may simply not be physically or economically feasible to construct a second, competing network, this might be the least-bad course of action. In such circumstances, consumer-side price regulation might make sense. 

But if a service could, in fact, be competed with by others, treating it as an essential facility may be affirmatively harmful to competition and consumers if it diverts investment and time away from that potential competitor by allowing other companies to acquire some of the incumbent’s rents themselves.

The HJC report assumes that Apple is a monopolist, because, among people who own iPhones, the App Store is the only way to install third-party software. Treating the App Store as an essential facility may mean a ban on Apple charging “excessive prices” to companies like Spotify or Epic that would like to use it, or on Apple blocking them for offering users alternative in-app ways of buying their services.

If it were impossible for users to switch from iPhones, or for app developers to earn revenue through other mechanisms, this logic might be sound. But it would still not change the fact that the App Store platform was able to charge users monopoly prices; it would just mean that Epic and Spotify could capture some of those monopoly rents for themselves. Nice for them, but not for consumers. And since both companies have already grown to be pretty big and profitable with the constraints they object to in place, it is difficult to argue that they cannot compete under those constraints; it sounds more like they would simply like a bigger share of the pie.

And, in fact, it is possible to switch away from the iPhone to Android. I have personally switched back and forth several times over the past few years, for example. And so have many others — despite what some claim, it’s really not that hard, especially now that most important data is stored on cloud-based services, and both companies offer an app to switch from the other. Apple also does not act like a monopolist — its Bionic chips are vastly better than any competitor’s and it continues to invest in and develop them.

So in practice, users switching from iPhone to Android if Epic’s games and Spotify’s music are not available constrains Apple, to some extent. If Apple did drive those services permanently off their platform, it would make Android relatively more attractive, and some users would move away — Apple would bear some of the costs of its ecosystem becoming worse. 

Assuming away this kind of competition, as Buck and the majority report do, is implausible. Not only that, but Buck and the majority believe that competition in this market is impossible — no policy or antitrust action could change things, and all that’s left is to regulate the market like it’s an electricity grid. 

And it means that platforms could often face situations where they could not expect to make themselves profitable after building their markets, since they could not control the supply side in order to earn revenues. That would make it harder to build platforms, and weaken competition, especially competition faced by incumbents.

Mandating interoperability

Interoperability mandates, which Buck supports, require platforms to make their products open and interoperable with third party software. If Twitter were required to be interoperable, for example, it would have to provide a mechanism (probably a set of open APIs) by which third party software could tweet and read its feeds, upload photos, send and receive DMs, and so on. 
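As a purely hypothetical illustration of what such a mechanism might look like, consider the sketch below. Every name and type in it is invented for this post; it is not Twitter’s actual API.

```typescript
// Purely hypothetical sketch of a mandated "open" social-media API.
// All names and types are invented for illustration.
type UserId = string;
type TweetId = string;

interface AccessToken {
  userId: UserId;
  scopes: Array<"read" | "write" | "dm">; // which grants did the user approve?
  expiresAt: Date;                        // how long before re-authorisation?
}

interface Tweet {
  id: TweetId;
  author: UserId;
  text: string;
  postedAt: Date;
}

interface OpenSocialApi {
  postTweet(auth: AccessToken, text: string, mediaIds?: string[]): Promise<TweetId>;
  readHomeFeed(auth: AccessToken, cursor?: string): Promise<Tweet[]>;
  uploadPhoto(auth: AccessToken, bytes: Uint8Array): Promise<string>; // returns a media id
  sendDirectMessage(auth: AccessToken, to: UserId, text: string): Promise<void>;
}
```

Even this toy surface forces choices about token scopes and expiry, per-app rate limits, liability for leaked credentials, and how the interface may be versioned or deprecated; none of these can be settled once and for all by a statute or a court order.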

Obviously, what interoperability actually involves differs from service to service, and involves decisions about design that are specific to each service. These variations are relevant because they mean interoperability requires discretionary regulation, including about product design, and can’t just be covered by a simple piece of legislation or a court order. 

To give an example: interoperability means a heightened security risk, perhaps from people unwittingly authorising a bad actor to access their private messages. How much is it appropriate to warn users about this, and how tight should your security controls be? It is probably excessive to require that users provide a sworn affidavit with witnesses, and even some written warnings about the risks may be so over the top as to scare off virtually any interested user. But some level of warning and user authentication is appropriate. So how much? 

Similarly, a company that has been required to offer its customers’ data through an API, but doesn’t really want to, can make life miserable for third party services that want to use it. Changing the API without warning, or letting its service drop or slow down, can break other services, and few users will be likely to want to use a third-party service that is unreliable. But some outages are inevitable, and some changes to the API and service are desirable. How do you decide how much?

These are not abstract examples. Open Banking in the UK, which requires interoperability of personal and small business current accounts, is the most developed example of interoperability in the world. It has been cited by the former Chair of the Council of Economic Advisers, Jason Furman, among others, as a model for interoperability in tech. It has faced all of these questions: one bank, for instance, required that customers pass through twelve warning screens to approve a third party app to access their banking details.

To address problems like this, Open Banking has needed an “implementation entity” to design many of its most important elements. This is a de facto regulator, and it has taken years of difficult design decisions to arrive at Open Banking’s current form. 

Having helped write the UK’s industry review into Open Banking, I am cautiously optimistic about what it might be able to do for banking in Britain, not least because that market is already heavily regulated and lacking in competition. But it has been a huge undertaking, and has related to a relatively narrow set of data (its core is just two different things — the ability to read an account’s balance and transaction history, and the ability to initiate payments) in a sector that is not known for rapidly changing technology. Here, the costs of regulation may be outweighed by the benefits.
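To illustrate how narrow that core is, here is a purely illustrative distillation in code; the names are ours, not the real Open Banking specification.

```typescript
// Illustrative distillation of Open Banking's two core capabilities,
// as described above: read balances/transactions, and initiate payments.
// All names are invented for illustration.
type ConsentToken = { customerId: string; grantedAt: Date; expiresAt: Date };
type PaymentId = string;

interface Transaction {
  id: string;
  amount: number;
  currency: string;
  bookedAt: Date;
  description: string;
}

interface AccountAccessApi {
  getBalance(consent: ConsentToken, accountId: string): Promise<{ amount: number; currency: string }>;
  getTransactions(consent: ConsentToken, accountId: string, since?: Date): Promise<Transaction[]>;
}

interface PaymentInitiationApi {
  initiatePayment(
    consent: ConsentToken,
    fromAccount: string,
    toAccount: string,
    amount: number,
    currency: string
  ): Promise<PaymentId>;
}
```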

I am deeply sceptical that the same would be the case in most digital markets, where products do change rapidly, where new entrants frequently attempt to enter the market (and often succeed), where the security trade-offs are even more difficult to adjudicate, and where the economics are less straightforward, given that many services are provided at least in part because of the access to customer data they provide. 

Even if I am wrong, it is unavoidable that interoperability in digital markets would require an equivalent body to make and implement decisions when trade-offs are involved. This, again, would require a regulator like the UK’s implementation entity, and one that was enormous, given the number and diversity of services that it would have to oversee. And it would likely have to make important and difficult design decisions to which there is no clear answer. 

Banning self-preferencing

Buck’s Third Way would also ban digital platforms from self-preferencing. This typically involves an incumbent that can provide a good more cheaply than its third-party competitors — whether through use of data those third parties do not have access to, reputational advantages that make customers more likely to use its products, or scale efficiencies that allow it to serve a larger customer base at a lower price.

Although many people criticise self-preferencing as being unfair on competitors, “self-preferencing” is an inherent part of almost every business. When a company employs its own in-house accountants, cleaners or lawyers, instead of contracting out for them, it is engaged in internal self-preferencing. Any firm that is vertically integrated to any extent, instead of contracting externally for every single ancillary service other than the one it sells in the market, is self-preferencing. Coase’s theory of the firm is all about why this kind of behaviour happens, instead of every worker contracting on the open market for everything they do. His answer is that transaction costs make it cheaper to bring certain business relationships in-house than to contract externally for them. Virtually everyone agrees that this is desirable to some extent.

Nor does it somehow become a problem when the self-preferencing takes place on the consumer product side. Any firm that offers any bundle of products — like a smartphone that can run only the manufacturer’s operating system — is engaged in self-preferencing, because users cannot construct their own bundle with that company’s hardware and another’s operating system. But the efficiency benefits often outweigh the lack of choice.

Self-preferencing in digital platforms occurs, for example, when Google includes relevant Shopping or Maps results at the top of its general Search results, or when Amazon gives its own store-brand products (like the AmazonBasics range) a prominent place in the results listing.

There are good reasons to think that both of these are good for competition and consumer welfare. Google making Shopping results easily visible makes it a stronger competitor to Amazon, and including Maps results when you search for a restaurant just makes it more convenient to get the information you’re looking for.

Amazon sells its own private label products partially because doing so is profitable (even when undercutting rivals), partially to fill holes in product lines (like clothing, where 11% of listings were Amazon private label as of November 2018), and partially because it makes users more likely to shop at Amazon if they expect to find a reliable product from a brand they trust. According to Amazon, private label products account for less than 1% of its annual retail sales, in contrast to the 19% of revenues ($54 billion) Amazon makes from third party seller services, which includes Marketplace commissions. Any analysis that ignores that Amazon has to balance those sources of revenue, and so has to tread carefully, is deficient.

With “commodity” products (like, say, batteries and USB cables), where multiple sellers offer very similar or identical versions of the same thing, private label competition works well for both Amazon and consumers. By Amazon’s own rules it can enter this market using aggregated data, but this does not give it a significant advantage, because that data is easily obtainable from multiple sources, including Amazon itself, which makes detailed aggregated sales data freely available to third-party retailers.

Amazon does profit from sales of these products, of course. And other merchants suffer by having to cut their prices to compete. That’s precisely what competition involves — competition is incompatible with a quiet life for businesses. But consumers benefit, and the biggest benefit to Amazon is that it assures its potential customers that when they visit they will be able to find a product that is cheap and reliable, so they keep coming back.

It is even hard to argue that in aggregate this practice is damaging to third-party sellers: many, like Anker, have built successful businesses on Amazon despite private-label competition precisely because the value of the platform increases for all parties as user trust and confidence in it does.

In these cases and in others, platforms act to solve market failures on the markets they host, as Andrei Hagiu has argued. To maximize profits, digital platforms need to strike a balance between being an attractive place for third-party merchants to sell their goods and being attractive to consumers by offering low prices. The latter will frequently clash with the former — and that’s the difficulty of managing a platform. 

To mistake this pro-competitive behaviour for an absence of competition is misguided. But that is a key conclusion of Buck’s Third Way: that the damage to competitors makes this behaviour harmful overall, and that it should be curtailed with “non-discrimination” rules.

Treating below-cost selling as “predatory pricing”

Buck’s report equates below-cost selling with predatory pricing (“predatory pricing, also known as below-cost selling”). This is mistaken. Predatory pricing refers to a particular scenario where your price cut is temporary and designed to drive a competitor out of business, so that you can raise prices later and recoup your losses. 

It is easy to see that this does not describe the vast majority of below-cost selling. Buck’s formulation would describe all of the following as “predatory pricing”:

  • A restaurant that gives away ketchup for free;
  • An online retailer that offers free shipping and returns;
  • A grocery store that sells tins of beans for 3p a can. (This really happened when I was a child.)

The rationale for offering below-cost prices differs in each of these cases. Sometimes it’s a marketing ploy — Tesco sells those beans to get some free media, and to entice people into their stores, hoping they’ll decide to do the rest of their weekly shop there at the same time. Sometimes it’s about reducing frictions — the marginal cost of ketchup is so low that it’s simpler to just give it away. Sometimes it’s about reducing the fixed costs of transactions so more take place — allowing customers who buy your products to return them easily may mean more are willing to buy them overall, because there’s less risk for them if they don’t like what they buy. 

Obviously, none of these is “predatory”: none is done in the expectation that the below-cost selling will drive those businesses’ competitors out of business, allowing them to make monopoly profits later.

True predatory pricing is theoretically possible, but very difficult. As David Henderson describes, to successfully engage in predatory pricing means taking enormous and rising losses that grow for the “predatory” firm as customers switch to it from its competitor. And once the rival firm has exited the market, if the predatory firm raises prices above average cost (i.e., to recoup its losses), there is no guarantee that a new competitor will not enter the market selling at the previously competitive price. And the competing firm can either shut down temporarily or, in some cases, just buy up the “predatory” firm’s discounted goods to resell later. It is debatable whether the canonical predatory pricing case, Standard Oil, is itself even an example of that behaviour.

Offering a product below cost in a multi-sided market (like a digital platform) can be a way of building a customer base in order to incentivise entry on the other side of the market. When network effects exist, so additional users make the service more valuable to existing users, it can be worthwhile to subsidise the initial users until the service reaches a certain size. 

Uber subsidising drivers and riders in a new city is an example of this — riders want enough drivers on the road that they know they’ll be picked up fairly quickly if they order one, and drivers want enough riders that they know they’ll be able to earn a decent night’s fares if they use the app. This requires a certain volume of users on both sides — to get there, it can be in everyone’s interest for the platform to subsidise one or both sides of the market to reach that critical mass.
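A stylised way to state that logic, under toy assumptions of our own (a uniform cross-side value and a common discount factor), is the following:

```latex
% Toy cross-side subsidy condition (all notation ours):
%   s      per-user subsidy paid today
%   gamma  value each new user on one side creates for each user on the other
%   n_t    expected size of the other side of the market at time t
%   delta  discount factor
\[
s \;<\; \sum_{t=1}^{\infty} \delta^{t} \, \gamma \, n_{t}
\]
% Early on, n_t is small unless the platform invests in growing it; the
% subsidy is worth paying precisely because it raises future n_t. Once
% critical mass is reached, the inequality holds without further subsidy,
% which is why such subsidies are typically withdrawn over time.
```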

The slightly longer road to regulation

That is another reason for below-cost pricing: someone other than the user may be part-paying for a product, to build a market they hope to profit from later. Platforms must adjust pricing and their offerings to each side of their market to manage supply and demand. Epic, for example, is trying to build a desktop computer game store to rival the largest incumbent, Steam. To win over customers, it has been giving away games for free to users, who can own them on that store forever. 

That is clearly pro-competitive — Epic is hoping to get users over the habit of using Steam for all their games, in the hope that they will recoup the costs of doing so later in increased sales. And it is good for consumers to get free stuff. This kind of behaviour is very common. As well as Uber and Epic, smaller platforms do it too. 

Buck’s proposals would make this kind of behaviour much more difficult, permitting it only when a regulator or court allows it, rather than whenever the market can bear it. On both sides of the coin, Buck’s proposals would prevent platforms from engaging in the behaviour that allows them to grow in the first place — enticing suppliers and consumers and subsidising either side until the critical mass is reached that allows the platform to sustain itself and the platform owner to recoup its investments. Fundamentally, both Buck and the majority take the existence of platforms as a given, ignoring the incentives to create new ones and compete with incumbents.

In doing so, they give up on competition altogether. As described, Buck’s provisions would necessitate ongoing rule-making, including price controls, to work. It is unlikely that a court could do this, since the relevant costs would change too often for one-shot rule-making of the kind a court could do. To be effective at all, Buck’s proposals would require an extensive, active regulator, just as the majority report’s would. 

Buck nominally argues against this sort of outcome — “Conservatives should be wary of handing additional regulatory authority to agencies in an attempt to micromanage platforms’ access rules” — but it is probably unavoidable, given the changes he proposes. And because the rule changes he proposes would apply to the whole economy, not just tech, his proposals may, perversely, end up being even more extensive and interventionist than the majority’s.

Other than this, the differences in practice between Buck’s proposals and the Democrats’ proposals would be trivial. At best, Buck’s Third Way is just a longer route to the same destination.

[This post is the second in an ongoing symposium on “Should We Break Up Big Tech?” that will feature analysis and opinion from various perspectives.]

[This post is authored by Philip Marsden, Bank of England & College of Europe, IG/Twitter:  @competition_flaneur]

Since the release of our Furman Report, I have been blessed with an uptick in #antitrusttourism. Everywhere I go, people are talking about what to do about Big Tech. Europe, the Middle East, LatAm, Asia, Down Under — and everyone has slightly different views. But the direction of travel is similar: something is going to be done, some action will be taken. The discussions I’ve been privileged to have with agency officials, advisors, tech in-house counsel and complainants have been balanced and fair. Disagreements tend to focus on the “how, now” rather than on re-hashing arguments about whether anything need be done at all. However, there is one jurisdiction which is the exception — and that is the US.   There, pragmatism seems to have been defenestrated — it is all or nothing: we break tech up, or we praise tech from the rooftops. The thing is, neither is an appropriate response, and the longer the debate paralyses the US antitrust community, the more the rest of the world will say “maybe we should see other people” and break with the hard-earned precedent of evidence-based inquiries for which the US agencies are famous.

In the Land of the Free, there is so much broad-brush polarisation. Of course, there is the political main stage, and we have our share of that in the UK too. But in the theatre of American antitrust we have Chicken Littles running around shrieking that all tech platforms are run by creeps, there is an evil design behind every algo tweak or acqui-hire, and the only solution is to ditch antitrust, and move fast and break things, especially break up the G-MAFIA and the upcoming BAT from Asia, ASAP. The Chicken Littles run rings around another group, the ostriches with their heads in the sand saying “nothing to look at here”, the platforms are only forces for good, markets tip tip and tip again, sit back and enjoy the “free” goodies, and leave any mopping up of the tears of whining complainants to fresh “studies” by antitrust enforcers.  

There is also an endemic American debate which is pitched as a deep existential crisis, but seems more of a distraction: this says let’s change the consumer welfare standard and import broader social concerns — which is matched by a shocked response that price-based consumer welfare analysis is surely tried and true, and any alteration would send the heavens crashing down again. I view this as a distraction because from my experience as an enforcer and advisor, I only see an enlightened use of the consumer welfare standard as already considering harms to innovation, non-price effects, and lately privacy. So it may be interesting academic conference-fodder, but it largely misses the point that modern antitrust analysis is far broader, and more aware of non-price harms than it is portrayed.   

The US, though, is the one jurisdiction I’ve been to lately where the debates generate the most heat and the least light. It is also where demands for tech break-ups are loudest but where any suggestion of regulatory intervention is rejected, knee-jerk, with abject horror. So there is a lot of noise but not much signal. The US seems disconnected from the international consensus on the need for actual action — and is a lone singleton debating its split-brain into the ground. And when they travel to the rest of the world, many American enforcers say — commendably, with honesty — “Hey, it’s not me, it’s you. You’re the crazy ones with your Google fines, your Amazon own-sales bans, and your Facebook privacy abuse cases; we’ll just press ahead with our usual measured prosecutorial approach — oh, and do a big study.”

The thing is: no one believes the US will be anti-NIKE and “just do nothing”. If that were true, there wouldn’t have been a massive drop in tech stock values on the announcement of DOJ, FTC and particularly Senate inquiries. So some action will come stateside too… but what should that look like?

What I’d like to see is more engagement in the US with the international proposals. In our Furman Report, we supported a consumer welfare standard, but not laissez-faire. We supported a regulatory model developed through participative antitrust, but not common carrier regulation. And we did not favour breakups or presumptions against acquisitions by tech firms.  We tried to do some good, while preventing greater evils. Now, I still think that the most anti-competitive activity I’ve ever seen comes from government not from the abuses of market power of firms, so we do need to tread very carefully in designing our solutions and remedies. But we must remain vigilant against competitive problems in the tech sector and try to get ahead of them, particularly where they are created through structural aspects of these multi-sided markets, consumer inertia, entrenchment and enveloping, even in a world of “free” “goods” and “services”  (all in quotes since not everything online is free, or good, or even a service). So in Furman, we engaged with the debate but we avoided non-informative polarisation; not out of cowardice but to produce something hopefully relevant, informative, and which can actually be acted upon. It is an honour that our current Prime Minister and Chancellor have supported our work, and there are active moves to implement almost all of our proposals.   

We grounded our work in maintaining a focus on a dynamic consumer welfare standard, but we still firmly agreed that more intervention was needed. We concluded this after laying out our findings of myriad structural reasons for regulatory intervention (with no antitrust cause of complaint), and improving antitrust enforcement to address bad conduct as well. We sought to #dialupantitrust — through speeding up enforcement, and modernising merger control analysis — as well as #unlockingdigitalcompetition by developing a pro-competitive code of conduct, and data mobility (not just portability) and open API and similar remedies. There’s been lots of talk about that, and similarly-directed reports from the EU Trio and the Stigler Centre. I think discussing this sort of approach is the most pragmatic, evidence-based way forward: namely a model of participative antitrust, where the tech companies, their customers, consumer groups and government work out how to ensure platforms with strategic market status take on firm conduct obligations to get ahead of problems ex ante, and clear out many of the most toxic exclusionary or exploitative practices.  

Our approach would leave antitrust authorities to focus on the more nuanced behaviour, where #evidencematters and economic analysis and judgment really need to be brought to bear. This will primarily be in merger control — which we argue needs to be more forward-looking, more focussed on dynamic non-price impacts, and more able to address both the likelihood and magnitude of harms in a balanced way. This may also mean that authorities are less accepting of even heavily-sweated entry stories from merging parties. In ex post antitrust enforcement, the main problem is speed, and we need to adjust the overall investigatory and appeal mechanism to ensure it is not captured — not so much by the defendants and their armies of lawyers and economists as by a mistaken focus on victory for one’s own team.

I’ve seen senior agency lawyers refuse to release a decision until it has been sweated by ten litigators and three QCs and is “appeal-proof” — which no decision ever is — adding months or even years to the process. And woe betide a case team, inquiry chair or agency head who tries to cut through that — for the response is always “oh, so you’re (much sucking of teeth and shaking of heads) content with Legal Risk???”. This is lazy — I’d much rather work with lawyers whose default is “What are we trying to achieve?” not “I’ll just say No and then head off home” — a flaw that pervades some in-house counsel too. Legal risk is inherent in antitrust enforcement, not something to be feared. Frankly, so many agencies now have so many levels of internal scrutiny that — when married to a system of full merits appeals — it is incredible that any enforcement ever happens at all. And don’t get me started on the gaming inherent in negotiating commitments that may not even be effective, but that don’t even get a chance to operate before going through years of review processes dominated by third party “market tests”. These flaws in enforcement systems contribute to the perception (and reality) of antitrust law’s weakness, slowness and inapplicability to reality — and hence fuel the calls for much stronger, much more intrusive and more chilling regulation that could truly stifle a lot of genuine innovation.

So our Furman report tries to cut through this, by speeding up antitrust enforcement, making merger control more forward looking — without achieving mathematical certainty but still allowing judgement of what is harmful on balance — and proposes a pro-competitive code of conduct for tech firms to help develop and “walk the talk”.   Developing that code will be a key challenge as we need to further refine what level of economic dependency on a platform customers and suppliers need to have, before that tech co is deemed to have strategic market status and must take on additional responsibilities to act fairly with respect to its customers, users, and suppliers. Fortunately, the British Government’s approval of our plans for a Digital Markets Unit means we can get started — so watch this space.

I’ve never said that this will be easy to do. We have a model in the Groceries Code Adjudicator — which was set up as a competition remedy — after a long market investigation of the offline retail platform market identified a range of harms that could occur, harms that might even be price-lowering for consumers but could damage innovation, choice and legitimate competition on the merits. A list of platforms was drawn up, a code was applied, and a range of toxic exploitative and exclusionary conduct was driven out of the market. While not everything is perfect in retailing, far fewer complaints are landing on the CEO’s desk at the Competition & Markets Authority — so it can focus on other priorities. Our view is similar — while recognising that tech is a lot more complicated. Part of our model is thus also drawn from other CMA work with which I was honoured to be involved: a two-year investigation of the retail banking platforms, which revealed a degree of supply-side and demand-side inertia that I had never seen before, except maybe in energy. Here the solution was not — as politicians wanted — to break up the big banks. That would have done nothing good, and a lot of bad. Instead we found that the dynamic between supply and demand was so broken that remedies on both sides of the equation were needed. Here it was truly an example not of “it’s not you, it’s me” but of “it’s both of us”: suppliers and consumers were contributing to the problem. We decided not to break up the platforms, though — but to open them up — making data they were just sitting on (and which was a form of barrier to entry) available to fintech intermediaries, who would compete to access the data, train their new algos and thereby offer new choice tools to consumers.

Breakups would have added limping suppliers to the market, but much less competitive constraint. Opening up their data banks spurred the incumbents to innovate faster than they otherwise might have, and customers to engage more with their banks. Our measure of success wasn’t switching — there is firm evidence that Britons switch their spouses more often than they switch their banks. So the remedy wasn’t breakup, and the KPI isn’t divorce, but is… engagement, on both sides of the relationship. And if it results in “maybe we should see other people” and customers start to multi-bank, that is all to the good — for customer satisfaction, engagement, and a more innovative retail banking ecosystem.

And that is where I think we should seek new remedies in the tech sphere. Breakups wouldn’t help us stimulate a more innovative, creative ecosystem. But merely opening up platforms after litigating an essential facilities doctrine for eight years wouldn’t get us there either. We need informed analysis, with tech experts and competition and consumer officials working together, to identify the drivers of business developments, to balance the myriad interests that we all have as citizens, voters and shoppers, and then to act surgically when we see that a competition law problem — abuse of market power, or structural economic dependency — is causing real harm.

I believe that the Furman report — along with other international proposals from Australia, Canada and the EU, the UK’s Digital Markets Strategy, and enforcement action in the EU, Spain, Germany, Italy and elsewhere — will provide us with natural experiments and targeted solutions to specific problems. And in the process, it will help fend off calls for short-term ‘fixes’ like breakups and other regulation that are retrograde and that chill, rather than go with the flow of — or, better, stimulate — innovation.

Finally, we must not lose sight of one of my current bugbears: the incredible dependency we have allowed our governments and private sector to develop on a handful of cloud computing companies. This position may well have been earned through superior skill, foresight and industry, and may be subject to rigorous procurement procedures and testing, but frankly, this is a ‘market’ that is too important to ignore. Social media and advertising may be pervasive, but cloud is huge — with defence departments, banks and key infrastructure dependent on what are essentially private-sector resiliency programmes. Even more than Facebook’s proposed currency Libra becoming “instantly systemic”, I fear we are already there with cloud: huge benefits and amazing efficiencies, but with them some zombie-apocalypse-level systemic risks — not of one bank falling over, but of many. Here it may well be that the bigger they are, the more resilient they are, and the more able to police and rectify problems… but we have heard that before in other sectors, and I just hope we can apply our developing proposals for digital platforms to these new challenges as well. The way tech is developing, we can’t live without it — but to live with it, we need to accept more responsibilities as enforcers, consumers and providers of these crucial services. So let’s stay together and work harder to #makeantitrustgreatagain and #unlockdigitalcompetition.

[TOTM: The following is the fourth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]

The courtroom trial in the Federal Trade Commission’s (FTC’s) antitrust case against Qualcomm ended in January with a promise from the judge in the case, Judge Lucy Koh, to issue a ruling as quickly as possible — caveated by her acknowledgement that the case is complicated and the evidence voluminous. Well, things have only gotten more complicated since the end of the trial. Not only did Apple and Qualcomm reach a settlement in the antitrust case against Qualcomm that Apple filed just three days after the FTC brought its suit, but the abbreviated trial in that case saw the presentation by Qualcomm of some damning evidence that, if accurate, seriously calls into (further) question the merits of the FTC’s case.

Apple v. Qualcomm settles — and the DOJ takes notice

The Apple v. Qualcomm case, which was based on substantially the same arguments brought by the FTC in its case, ended abruptly last month after only a day and a half of trial — just enough time for the parties to make their opening statements — when Apple and Qualcomm reached an out-of-court settlement. The settlement includes a six-year global patent licensing deal, a multi-year chip supplier agreement, an end to all of the patent disputes around the world between the two companies, and a $4.5 billion settlement payment from Apple to Qualcomm.

That alone complicates the economic environment into which Judge Koh will issue her ruling. But the Apple v. Qualcomm trial also appears to have induced the Department of Justice Antitrust Division (DOJ) to weigh in on the FTC’s case with a Statement of Interest requesting that Judge Koh use caution in fashioning a remedy should she side with the FTC — followed by a somewhat snarky Reply from the FTC arguing that the DOJ’s filing was untimely (and, reading the not-so-hidden subtext, unwelcome).

But buried in the DOJ’s Statement is an important indication of why it filed its Statement when it did, just about a week after the end of the Apple v. Qualcomm case, and a pointer to a much larger issue that calls the FTC’s case against Qualcomm even further into question (I previously wrote about the lack of theoretical and evidentiary merit in the FTC’s case here).

Footnote 6 of the DOJ’s Statement reads:

Internal Apple documents that recently became public describe how, in an effort to “[r]educe Apple’s net royalty to Qualcomm,” Apple planned to “[h]urt Qualcomm financially” and “[p]ut Qualcomm’s licensing model at risk,” including by filing lawsuits raising claims similar to the FTC’s claims in this case …. One commentator has observed that these documents “potentially reveal[] that Apple was engaging in a bad faith argument both in front of antitrust enforcers as well as the legal courts about the actual value and nature of Qualcomm’s patented innovation.” (Emphasis added).

Indeed, the slides presented by Qualcomm during that single day of trial in Apple v. Qualcomm are significant, not only for what they say about Apple’s conduct, but, more importantly, for what they say about the evidentiary basis for the FTC’s claims against the company.

The evidence presented by Qualcomm in its opening statement suggests some troubling conduct by Apple

Others have pointed to Qualcomm’s opening slides and the Apple internal documents they present to note Apple’s apparent bad conduct. As one commentator sums it up:

Although we really only managed to get a small glimpse of Qualcomm’s evidence demonstrating the extent of Apple’s coordinated strategy to manipulate the FRAND license rate, that glimpse was particularly enlightening. It demonstrated a decade-long coordinated effort within Apple to systematically engage in what can only fairly be described as manipulation (if not creation of evidence) and classic holdout.

Qualcomm showed during opening arguments that, dating back to at least 2009, Apple had been laying the foundation for challenging its longstanding relationship with Qualcomm. (Emphasis added).

The internal Apple documents presented by Qualcomm to corroborate this claim appear quite damning. Of course, absent explanation and cross-examination, it’s impossible to know for certain what the documents mean. But on their face they suggest Apple knowingly undertook a deliberate scheme (and knowingly took upon itself significant legal risk in doing so) to devalue patent portfolios comparable to Qualcomm’s.

The apparent purpose of this scheme was to devalue comparable patent licensing agreements where Apple had the power to do so (through litigation or the threat of litigation) in order to then use those agreements to argue that Qualcomm’s royalty rates were above the allowable, FRAND level, and to undermine the royalties Qualcomm would be awarded in courts adjudicating its FRAND disputes with the company. As one commentator put it:

Apple embarked upon a coordinated scheme to challenge weaker patents in order to beat down licensing prices. Once the challenges to those weaker patents were successful, and the licensing rates paid to those with weaker patent portfolios were minimized, Apple would use the lower prices paid for weaker patent portfolios as proof that Qualcomm was charging a super-competitive licensing price; a licensing price that violated Qualcomm’s FRAND obligations. (Emphasis added).

That alone is a startling revelation, if accurate, and one that would seem to undermine claims that patent holdout isn’t a real problem. It also would undermine Apple’s claims that it is a “willing licensee,” engaging with SEP licensors in good faith. (Indeed, this has been called into question before, and one Federal Circuit judge has noted in dissent that “[t]he record in this case shows evidence that Apple may have been a hold out.”). If the implications drawn from the Apple documents shown in Qualcomm’s opening statement are accurate, there is good reason to doubt that Apple has been acting in good faith.

Even more troubling is what it means for the strength of the FTC’s case

But the evidence offered in Qualcomm’s opening argument points to another, more troubling implication as well. We know that Apple has been coordinating with the FTC and was likely an important impetus for the FTC’s decision to bring an action in the first place. It seems reasonable to assume that Apple used these “manipulated” agreements to help make its case.

But what is most troubling is the extent to which it appears to have worked.

The FTC’s action against Qualcomm rested in substantial part on arguments that Qualcomm’s rates were too high (even though the FTC constructed its case without coming right out and saying this, at least until trial). In its opening statement the FTC said:

Qualcomm’s practices, including no license, no chips, skewed negotiations towards the outcomes that favor Qualcomm and lead to higher royalties. Qualcomm is committed to license its standard essential patents on fair, reasonable, and non-discriminatory terms. But even before doing market comparison, we know that the license rates charged by Qualcomm are too high and above FRAND because Qualcomm uses its chip power to require a license.

* * *

Mr. Michael Lasinski [the FTC’s patent valuation expert] compared the royalty rates received by Qualcomm to … the range of FRAND rates that ordinarily would form the boundaries of a negotiation … Mr. Lasinski’s expert opinion … is that Qualcomm’s royalty rates are far above any indicators of fair and reasonable rates. (Emphasis added).

The key question is what constitutes the “range of FRAND rates that ordinarily would form the boundaries of a negotiation”?

Because they were discussed under seal, we don’t know the precise agreements that the FTC’s expert, Mr. Lasinski, used for his analysis. But we do know something about them: His analysis entailed a study of only eight licensing agreements; in six of them, the licensee was either Apple or Samsung; and in all of them the licensor was either InterDigital, Nokia, or Ericsson. We also know that Mr. Lasinski’s valuation study did not include any Qualcomm licenses, and that the eight agreements he looked at were all executed after the district court’s decision in Microsoft v. Motorola in 2013.

A curiously small number of agreements

Right off the bat there is a curiosity in the FTC’s valuation analysis. Even though there are hundreds of SEP license agreements involving the relevant standards, the FTC’s analysis relied on only eight, three-quarters of which involved licensing by only two companies: Apple and Samsung.

Indeed, even since 2013 (a date to which we will return) there have been scads of licenses (see, e.g., here, here, and here). It is not only Apple and Samsung that make CDMA and LTE devices; there are — quite literally — hundreds of other manufacturers out there, all of them licensing essentially the same technology — including global giants like LG, Huawei, HTC, Oppo, Lenovo, and Xiaomi. Why were none of their licenses included in the analysis?

At the same time, while Interdigital, Nokia, and Ericsson are among the largest holders of CDMA and LTE SEPs, several dozen companies have declared such patents, including Motorola (Alphabet), NEC, Huawei, Samsung, ZTE, NTT DOCOMO, etc. Again — why were none of their licenses included in the analysis?

All else equal, more data yields better results. This is particularly true where the data are complex license agreements which are often embedded in larger, even-more-complex commercial agreements and which incorporate widely varying patent portfolios, patent implementers, and terms.

Yet the FTC relied on just eight agreements in its comparability study, covering a tiny fraction of the industry’s licensors and licensees, and, notably, including primarily licenses taken by the two companies (Samsung and Apple) that have most aggressively litigated their way to lower royalty rates.
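One way to make the concern concrete — purely as a stylized illustration, with invented numbers and no connection to the actual, sealed agreements — is a short Python simulation of what happens when a benchmark is computed from a handful of agreements drawn from the low end of the rate distribution:

import random

random.seed(0)

# Hypothetical population of 200 SEP royalty rates (% of device price).
# Every number here is invented for illustration -- not actual license data.
population = [round(random.uniform(0.5, 5.0), 2) for _ in range(200)]

# Suppose the most aggressive litigants end up at the low end of the range.
low_end = sorted(population)[:40]

# Benchmark computed from all 200 agreements vs. from 8 low-end agreements.
full_benchmark = sum(population) / len(population)
biased_sample = random.sample(low_end, 8)
biased_benchmark = sum(biased_sample) / len(biased_sample)

print(f"benchmark from all 200 agreements:   {full_benchmark:.2f}%")
print(f"benchmark from 8 low-end agreements: {biased_benchmark:.2f}%")
# The eight-agreement benchmark lands far below the population mean, so
# almost any real-world rate measured against it will appear "above FRAND."

The point is not that Mr. Lasinski performed anything like this calculation; it is that, when a sample is both tiny and non-random, the resulting “FRAND range” reflects the selection of the sample at least as much as it reflects the market.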

A curiously crabbed selection of licensors

And it is not just that the selected licensees represent a weirdly small and biased sample; it is also not necessarily even a particularly comparable sample.

One thing we can be fairly confident of, given what we know of the agreements used, is that at least one of the license agreements involved Nokia licensing to Apple, and another involved InterDigital licensing to Apple. But these companies’ patent portfolios are not exactly comparable to Qualcomm’s. Apple’s internal slides presented at trial (not reproduced here) disparaged both Nokia’s and InterDigital’s patents, while Apple’s own view of Qualcomm’s patent portfolio (despite its public comments to the contrary) was that it was considerably better than the others’.

The FTC’s choice of such a limited range of comparable license agreements is curious for another reason, as well: It includes no Qualcomm agreements. Qualcomm is certainly one of the biggest players in the cellular licensing space, and no doubt more than a few license agreements involve Qualcomm. While it might not make sense to include Qualcomm licenses that the FTC claims incorporate anticompetitive terms, that doesn’t describe the huge range of Qualcomm licenses with which the FTC has no quarrel. Among other things, Qualcomm licenses from before it began selling chips would not have been affected by its alleged “no license, no chips” scheme, nor would licenses granted to companies that didn’t also purchase Qualcomm chips. Furthermore, its licenses for technology reading on the WCDMA standard are not claimed to be anticompetitive by the FTC.

And yet none of these licenses were deemed “comparable” by the FTC’s expert, even though, on many dimensions — most notably, with respect to the underlying patent portfolio being valued — they would have been the most comparable (i.e., identical).

A curiously circumscribed timeframe

The FTC expert’s use of a 2013 cut-off date is also questionable. According to Mr. Lasinski, he chose to use agreements executed after 2013 because it was in 2013 that the U.S. District Court for the Western District of Washington decided the Microsoft v. Motorola case. Among other things, the court in Microsoft v. Motorola held that the proper value of a SEP is its “intrinsic” patent value, including its value to the standard, but not including the additional value it derives from being incorporated into a widely used standard.

According to the FTC’s expert,

prior to [Microsoft v. Motorola], people were trying to value … the standard and the license based on the value of the standard, not the value of the patents ….

Asked by Qualcomm’s counsel if his concern was that the “royalty rates derived in license agreements for cellular SEPs [before Microsoft v. Motorola] could very well have been above FRAND,” Mr. Lasinski concurred.

The problem with this approach is that it’s little better than arbitrary. The Motorola decision was an important one, to be sure, but the notion that sophisticated parties in a multi-billion dollar industry were systematically agreeing to improper terms until a single court in Washington suggested otherwise is absurd. To be sure, such agreements are negotiated in “the shadow of the law,” and judicial decisions like the one in Washington (later upheld by the Ninth Circuit) can affect the parties’ bargaining positions.

But even if it were true that the court’s decision had some effect on licensing rates, the decision would still have been only one of myriad factors determining parties’ relative bargaining power and their assessment of the proper valuation of SEPs. There is no basis to support the assertion that the Motorola decision marked a sea-change between “improper” and “proper” patent valuations. And, even if it did, it was certainly not alone in doing so, and the FTC’s expert offers no justification for determining that agreements reached before, say, the European Commission’s decision against Qualcomm in 2018 were “proper,” or that the Korea FTC’s decision against Qualcomm in 2009 didn’t have the same sort of corrective effect as the Motorola court’s decision in 2013.

At the same time, a review of a wider range of agreements suggested that Qualcomm’s licensing royalties weren’t inflated

Meanwhile, one of Qualcomm’s experts in the FTC case, former DOJ chief economist Aviv Nevo, looked at whether the FTC’s theory of anticompetitive harm was borne out by the data, examining Qualcomm’s royalty rates across time periods and standards using a much larger set of agreements. Although his remit was different from Mr. Lasinski’s, and although he analyzed only Qualcomm licenses, his analysis still sheds light on Mr. Lasinski’s conclusions:

[S]pecifically what I looked at was the predictions from the theory to see if they’re actually borne in the data….

[O]ne of the clear predictions from the theory is that during periods of alleged market power, the theory predicts that we should see higher royalty rates.

So that’s a very clear prediction that you can take to data. You can look at the alleged market power period, you can look at the royalty rates and the agreements that were signed during that period and compare to other periods to see whether we actually see a difference in the rates.

Dr. Nevo’s analysis, which looked at royalty rates in Qualcomm’s SEP license agreements for CDMA, WCDMA, and LTE ranging from 1990 to 2017, found no differences in rates between periods when Qualcomm was alleged to have market power and when it was not alleged to have market power (or could not have market power, on the FTC’s theory, because it did not sell corresponding chips).

The reason this is relevant is that Mr. Lasinski’s assessment implies that Qualcomm’s higher royalty rates weren’t attributable to its superior patent portfolio, leaving either anticompetitive conduct or non-anticompetitive, superior bargaining ability as the explanation. No one thinks Qualcomm has cornered the market on exceptional negotiators, so the only proffered explanation for the results of Mr. Lasinski’s analysis is anticompetitive conduct. But this assumes that his analysis is actually reliable. Dr. Nevo’s analysis offers some reason to think that it is not.

All of the agreements studied by Mr. Lasinski were drawn from the period when Qualcomm is alleged to have employed anticompetitive conduct to elevate its royalty rates above FRAND. But when the actual royalties charged by Qualcomm during its alleged exercise of market power are compared to those charged when and where it did not have market power, the evidence shows it received identical rates. Mr. Lasinski’s results, then, would imply that Qualcomm’s royalties were “too high” not only while it was allegedly acting anticompetitively, but also when it was not. That simple fact suggests on its face that Mr. Lasinski’s analysis may have been flawed, and that it systematically undervalued Qualcomm’s patents.
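The logic of Dr. Nevo’s test is simple enough to sketch in a few lines of Python. The rates below are invented placeholders (the actual agreements are confidential); the point is only the structure of the comparison:

import statistics

# Invented royalty rates (% of handset price), grouped by whether the
# agreement was signed during the period of alleged market power.
# Purely illustrative -- not actual Qualcomm data.
rates_power_period = [3.2, 3.3, 3.1, 3.3, 3.2]
rates_other_periods = [3.2, 3.1, 3.3, 3.2, 3.3]

mean_power = statistics.mean(rates_power_period)
mean_other = statistics.mean(rates_other_periods)

print(f"mean rate, alleged market-power period: {mean_power:.2f}%")
print(f"mean rate, other periods:               {mean_other:.2f}%")
print(f"difference: {mean_power - mean_other:+.2f} points")
# The FTC's theory predicts a clearly positive difference; a difference
# near zero -- which is what Dr. Nevo found -- cuts against the theory.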

Connecting the dots and calling into question the strength of the FTC’s case

In its closing argument, the FTC pulled together the implications of its allegations of anticompetitive conduct by pointing to Mr. Lasinski’s testimony:

Now, looking at the effect of all of this conduct, Qualcomm’s own documents show that it earned many times the licensing revenue of other major licensors, like Ericsson.

* * *

Mr. Lasinski analyzed whether this enormous difference in royalties could be explained by the relative quality and size of Qualcomm’s portfolio, but that massive disparity was not explained.

Qualcomm’s royalties are disproportionate to those of other SEP licensors and many times higher than any plausible calculation of a FRAND rate.

* * *

The overwhelming direct evidence, some of which is cited here, shows that Qualcomm’s conduct led licensees to pay higher royalties than they would have in fair negotiations.

It is possible, of course, that Mr. Lasinski’s methodology was flawed; indeed, at trial Qualcomm argued exactly this in challenging his testimony. But it is also possible that, whether his methodology was flawed or not, his underlying data were flawed.

It is impossible to draw this conclusion definitively from the publicly available evidence, but the subsequent revelation that Apple may well have manipulated at least a significant share of the eight agreements that constituted Mr. Lasinski’s data certainly increases its plausibility: We now know, following Qualcomm’s opening statement in Apple v. Qualcomm, that the stilted set of comparable agreements studied by the FTC’s expert happens to be dominated by agreements that Apple may have manipulated to reflect lower-than-FRAND rates.

What is most concerning is that the FTC may have built its case on such questionable evidence — either by intentionally cherry-picking the evidence upon which it relied, or inadvertently, because it rested on a needlessly limited range of data, some of which may have been tainted.

Intentionally or not, the FTC appears to have performed its valuation analysis using a needlessly circumscribed range of comparable agreements and justified its decision to do so using questionable assumptions. This seriously calls into question the strength of the FTC’s case.

(The following is adapted from a recent ICLE Issue Brief on the flawed essential facilities arguments undergirding the EU competition investigations into Amazon’s marketplace that I wrote with Geoffrey Manne. The full brief is available here.)

Amazon has largely avoided the crosshairs of antitrust enforcers to date. The reasons seem obvious: in the US it handles a mere 5% of all retail sales (with lower shares worldwide), and it consistently provides access to a wide array of affordable goods. Yet, even with Amazon’s obvious lack of dominance in the general retail market, the EU and some of its member states are opening investigations.

Commissioner Margrethe Vestager’s probe into Amazon, which came to light in September, centers on whether Amazon is illegally using its dominant position vis-à-vis third-party merchants on its platforms to obtain data that it then uses either to promote its own direct sales or to develop competing products under its private-label brands. More recently, Austria and Germany have launched separate investigations of Amazon rooted in many of the same concerns as those of the European Commission. The German investigation also focuses on whether the contractual relationships that third-party sellers enter into with Amazon are unfair because these sellers are “dependent” on the platform.

One of the fundamental, erroneous assumptions upon which these cases are built is the alleged “essentiality” of the underlying platform or input. In truth, these sorts of cases are more often based on stories of firms that chose to build their businesses in a way that relies on a specific platform. In other words, their own decisions — from which they substantially benefited, of course — made their investments highly “asset specific” and thus vulnerable to otherwise avoidable risks. When a platform on which these businesses rely makes a disruptive move, the third parties cry foul, even though the platform was not — nor should have been — under any obligation to preserve the status quo on behalf of third parties.

Essential or not, that is the question

All three investigations are effectively premised on a version of an “essential facilities” theory — the claim that Amazon is essential to these companies’ ability to do business.

There are good reasons that the US has tightly circumscribed the scope of permissible claims invoking the essential facilities doctrine. Such “duty to deal” claims are “at or near the outer boundary” of US antitrust law. And there are good reasons why the EU and its member states should be similarly skeptical.

Characterizing one firm as essential to the operation of other firms is tricky because “[c]ompelling [innovative] firms to share the source of their advantage… may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.” Further, the classification requires “courts to act as central planners, identifying the proper price, quantity, and other terms of dealing—a role for which they are ill-suited.”

The key difficulty is that alleged “essentiality” actually falls on a spectrum. On one end is something like a true monopoly utility that is actually essential to all firms that use its service as a necessary input; on the other is a firm that offers highly convenient services that make it much easier for firms to operate. This latter definition of “essentiality” describes firms like Google and Amazon, but it is not accurate to characterize such highly efficient and effective firms as truly “essential.” Instead, companies that choose to take advantage of the benefits such platforms offer, and to tailor their business models around them, suffer from an asset specificity problem.

Geoffrey Manne noted this problem in the context of the EU’s Google Shopping case:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

Third-party sellers that rely upon Amazon without a contingency plan are taking a calculated risk that, as business owners, they would typically be expected to manage. The investigations by European authorities are based on the notion that antitrust law might require Amazon to remove that risk by prohibiting it from undertaking certain conduct that could raise costs for its third-party sellers.

Implications and extensions

In the full issue brief, we consider the tensions in EU law between seeking to promote innovation and protect the competitive process, on the one hand, and the propensity of EU enforcers to rely on essential facilities-style arguments on the other. One of the fundamental errors that leads EU enforcers in this direction is that they confuse the distribution channel of the Internet with an antitrust-relevant market definition.

A claim based on some flavor of Amazon-as-essential-facility should be untenable given today’s market realities because Amazon is, in fact, just one mode of distribution among many. Commerce on the Internet is still just commerce. The only thing preventing a merchant from operating a viable business using any of a number of different mechanisms is the transaction costs it would incur adjusting to a different mode of doing business. Casting Amazon’s marketplace as an essential facility insulates third-party firms from the consequences of their own decisions — from business model selection to marketing and distribution choices. Commerce is nothing new and offline distribution channels and retail outlets — which compete perfectly capably with online — are well developed. Granting retailers access to Amazon’s platform on artificially favorable terms is no more justifiable than granting them access to a supermarket end cap, or a particular unit at a shopping mall. There is, in other words, no business or economic justification for granting retailers in the time-tested and massive retail market an entitlement to use a particular mode of marketing and distribution just because they find it more convenient.

[TOTM: The following is the first in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]

Just days before leaving office, the outgoing Obama FTC left what should have been an unwelcome parting gift for the incoming Commission: an antitrust suit against Qualcomm. This week the FTC — under a new Chairman and with an entirely new set of Commissioners — finished unwrapping its present, and rested its case in the trial begun earlier this month in FTC v. Qualcomm.

This complex case is about an overreaching federal agency seeking to set prices and dictate the business model of one of the world’s most innovative technology companies. As soon-to-be Acting FTC Chairwoman Maureen Ohlhausen noted in her dissent from the FTC’s decision to bring the case, it is “an enforcement action based on a flawed legal theory… that lacks economic and evidentiary support…, and that, by its mere issuance, will undermine U.S. intellectual property rights… worldwide.”

Implicit in the FTC’s case is the assumption that Qualcomm charges smartphone makers “too much” for its wireless communications patents — patents that are essential to many smartphones. But, as former FTC and DOJ chief economist Luke Froeb puts it, “[n]othing is more alien to antitrust than enquiring into the reasonableness of prices.” Even if Qualcomm’s royalty rates could somehow be deemed “too high” (according to whom?), excessive pricing on its own is not an antitrust violation under U.S. law.

Knowing this, the FTC “dances around that essential element” (in Ohlhausen’s words) and offers instead a convoluted argument that Qualcomm’s business model is anticompetitive. Qualcomm both sells wireless communications chipsets used in mobile phones and licenses the technology on which those chips rely. According to the complaint, by licensing its patents only to end-users (mobile device makers) instead of to chip makers further up the supply chain, Qualcomm is able to threaten to withhold the supply of its chipsets from its licensees and thereby extract onerous terms in its patent license agreements.

There are numerous problems with the FTC’s case. Most fundamental among them is the “no duh” problem: Of course Qualcomm conditions the purchase of its chips on the licensing of its intellectual property; how could it be any other way? The alternative would require Qualcomm to actually facilitate the violation of its property rights by forcing it to sell its chips to device makers even if they refuse its patent license terms. In that world, what device maker would ever agree to pay more than a pittance for a patent license? The likely outcome is that Qualcomm charges more for its chips to compensate (or simply stops making them). Great, the FTC says; then competitors can fill the gap and — voila: the market is more competitive, prices will actually fall, and consumers will reap the benefits.

Except it doesn’t work that way. As many economists, including both the current and a prominent former chief economist of the FTC, have demonstrated, forcing royalty rates lower in such situations is at least as likely to harm competition as to benefit it. There is no sound theoretical or empirical basis for concluding that using antitrust to move royalty rates closer to some theoretical ideal will actually increase consumer welfare. All it does for certain is undermine patent holders’ property rights, virtually ensuring there will be less innovation.

In fact, given this inescapable reality, it is unclear why the current Commission is continuing to pursue the case at all. The bottom line is that, if it wins the case, the current FTC will have done more to undermine intellectual property rights than any other administration’s Commission has been able to accomplish.

It is not difficult to identify the frailties of the case that would readily support the agency backing away from pursuing it further. To begin with, the claim that device makers cannot refuse Qualcomm’s terms because the company effectively controls the market’s supply of mobile broadband modem chips is fanciful. While it’s true that Qualcomm is the largest supplier of these chipsets, it’s an absurdity to claim that device makers have no alternatives. In fact, Qualcomm has faced stiff competition from some of the world’s most successful companies since well before the FTC brought its case. Samsung — the largest maker of Android phones — developed its own chip to replace Qualcomm’s in 2015, for example. More recently, Intel has provided Apple with all of the chips for its 2018 iPhones, and Apple is rumored to be developing its own 5G cellular chips in-house. In any case, the fact that most device makers have preferred to use Qualcomm’s chips in the past says nothing about the ability of other firms to take business from it.

The possibility (and actuality) of entry from competitors like Intel ensures that sophisticated purchasers like Apple have bargaining leverage. Yet, ironically, the FTC points to Apple’s claim that Qualcomm “forced” it to use Intel modems in its latest iPhones as evidence of Qualcomm’s dominance. Think about that: Qualcomm “forced” a company worth many times its own value to use a competitor’s chips in its new iPhones — and that shows Qualcomm has a stranglehold on the market?

The FTC implies that Qualcomm’s refusal to license its patents to competing chip makers means that competitors cannot reliably supply the market. Yet Qualcomm has never asserted its patents against a competing chip maker, every one of which uses Qualcomm’s technology without paying any royalties to do so. The FTC nevertheless paints the decision to license only to device makers as the aberrant choice of an exploitative, dominant firm. The reality, however, is that device-level licensing is the norm practiced by every company in the industry — and has been since the 1980s.

Not only that, but Qualcomm has not altered its licensing terms or practices since it was decidedly an upstart challenger in the market — indeed, since before it even started producing chips, and thus before it even had the supposed means to leverage its chip sales to extract anticompetitive licensing terms. It would be a remarkable coincidence if precisely the same licensing structure and the exact same royalty rate served the company’s interests both as a struggling startup and as an alleged rapacious monopolist. Yet that is the implication of the FTC’s theory.

When Qualcomm introduced CDMA technology to the mobile phone industry in 1989, it was a promising but unproven new technology in an industry dominated by different standards. Qualcomm happily encouraged chip makers to promote the standard by enabling them to produce compliant components without paying any royalties; and it willingly licensed its patents to device makers based on a percentage of sales of the handsets that incorporated CDMA chips. Qualcomm thus shared both the financial benefits and the financial risk associated with the development and sales of devices implementing its new technology.

Qualcomm’s favorable (to handset makers) licensing terms may have helped CDMA become one of the industry standards for 2G and 3G devices. But it’s an unsupportable assertion to say that those identical terms are suddenly the source of anticompetitive power, particularly as 2G and 3G are rapidly disappearing from the market and as competing patent holders gain prominence with each successive cellular technology standard.

To be sure, successful handset makers like Apple that sell their devices at a significant premium would prefer to share less of their revenue with Qualcomm. But their success was built in large part on Qualcomm’s technology. They may regret the terms of the deal that propelled CDMA technology to prominence, but Apple’s regret is not the basis of a sound antitrust case.

And although it’s unsurprising that manufacturers of premium handsets would like to use antitrust law to extract better terms from their negotiations with standard-essential patent holders, it is astonishing that the current FTC is carrying on the Obama FTC’s willingness to do it for them.

None of this means that Qualcomm is free to charge an unlimited price: standard-essential patents must be licensed on “FRAND” terms, meaning they must be fair, reasonable, and nondiscriminatory. It is difficult to assess what constitutes FRAND, but the most restrictive method is to estimate what negotiated terms would look like before a patent was incorporated into a standard. “[R]oyalties that are or would be negotiated ex ante with full information are a market bench-mark reflecting legitimate return to innovation,” writes Carl Shapiro, the FTC’s own economic expert in the case.
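To see what the ex ante benchmark is meant to capture, consider a toy computation (the dollar figures are invented; this is a sketch of the concept, not of any expert’s actual model):

# Toy illustration of the ex ante ("pre-standard") FRAND benchmark.
# All figures are invented for illustration.
value_over_alternatives = 1.00  # $/unit the technology is worth, versus the
                                # next-best option, BEFORE standardization
lock_in_leverage = 4.00         # $/unit of extra leverage gained purely from
                                # being locked into a widely adopted standard

frand_benchmark = value_over_alternatives
holdup_ceiling = value_over_alternatives + lock_in_leverage

print(f"ex ante (FRAND) benchmark: ${frand_benchmark:.2f}/unit")
print(f"ex post hold-up ceiling:   ${holdup_ceiling:.2f}/unit")
# A rate negotiated with full information before standardization should
# reflect only the first component, not the lock-in premium.

On this view, the telling question is whether today’s rates match the rates struck before the standard took off — which is exactly the comparison the next paragraph makes.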

And that is precisely what happened here: We don’t have to guess what the pre-standard terms of trade would look like; we know them, because they are the same terms that Qualcomm offers now.

We don’t know exactly what the consequence would be for consumers, device makers, and competitors if Qualcomm were forced to accede to the FTC’s benighted vision of how the market should operate. But we do know that the market we actually have is thriving, with new entry at every level, enormous investment in R&D, and continuous technological advance. These aren’t generally the characteristics of a typical monopoly market. While the FTC’s effort to “fix” the market may help Apple and Samsung reap a larger share of the benefits, it will undoubtedly end up only hurting consumers.

Last week, I objected to Senator Warner relying on the flawed AOL/Time Warner merger conditions as a template for tech regulatory policy, but there is a much deeper problem contained in his proposals. Although he does not explicitly say “big is bad” when discussing competition issues, the thrust of much of what he recommends would serve to erode the power of larger firms in favor of smaller firms without offering a justification for why this would result in a superior state of affairs. And he makes these recommendations without respect to whether those firms actually engage in conduct that is harmful to consumers.

In the Data Portability section, Warner says that “As platforms grow in size and scope, network effects and lock-in effects increase; consumers face diminished incentives to contract with new providers, particularly if they have to once again provide a full set of data to access desired functions.” Thus, he recommends a data portability mandate, which would theoretically benefit startups by providing them with the data that large firms possess. The necessary implication is that it is a per se good that small firms be benefited and large firms diminished, as the proposal is not grounded in any evaluation of the competitive behavior of the firms to which such a mandate would apply.

Warner also proposes an “interoperability” requirement on “dominant platforms” (which I criticized previously) in situations where “data portability alone will not produce procompetitive outcomes.” Again, the necessary implication is that it is a per se good that established platforms share their services with startups, without respect to any competitive analysis of how those firms are behaving. The goal is preemptively to “blunt their ability to leverage their dominance over one market or feature into complementary or adjacent markets or products.”

Perhaps most perniciously, Warner recommends treating large platforms as essential facilities in some circumstances. To this end he states that:

Legislation could define thresholds – for instance, user base size, market share, or level of dependence of wider ecosystems – beyond which certain core functions/platforms/apps would constitute ‘essential facilities’, requiring a platform to provide third party access on fair, reasonable and non-discriminatory (FRAND) terms and preventing platforms from engaging in self-dealing or preferential conduct.

But, as I’ve previously noted with respect to imposing “essential facilities” requirements on tech platforms,

[T]he essential facilities doctrine is widely criticized, by pretty much everyone. In their respected treatise, Antitrust Law, Herbert Hovenkamp and Philip Areeda have said that “the essential facility doctrine is both harmful and unnecessary and should be abandoned”; Michael Boudin has noted that the doctrine is full of “embarrassing weaknesses”; and Gregory Werden has opined that “Courts should reject the doctrine.”

Indeed, as I also noted, “the Supreme Court declined to recognize the essential facilities doctrine as a distinct rule in Trinko, where it instead characterized the exclusionary conduct in Aspen Skiing as ‘at or near the outer boundary’ of Sherman Act § 2 liability.”

In short, it’s very difficult to know when access to a firm’s internal functions might be critical to the facilitation of a market. It simply cannot be true that a firm becomes bound under onerous essential facilities requirements (or classification as a public utility) simply because other firms find it more convenient to use its services than to develop their own.

The truth of what is actually happening in these cases, however, is that third-party firms are choosing to anchor their businesses to the processes of another firm, which generates an “asset specificity” problem that they then ask the government to remedy:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

This is naturally a calculated risk that a firm may choose to take, but it is a risk nonetheless. To pry open Google or Facebook for the benefit of competitors that choose to play to Google’s and Facebook’s user bases, rather than opening markets of their own, punishes the large players for being successful while rewarding behavior that shies away from innovation. Further, such a policy would punish the large platforms whenever they innovate with their services in any way that might frustrate third-party “integrators” (see, e.g., Foundem’s claims that Google’s algorithm updates, meant to improve search quality for users, harmed Foundem’s search rankings).

Rather than encouraging innovation, blessing this form of asset specificity would have the perverse result of entrenching the status quo.

In all of these recommendations from Senator Warner, there is no claim that any of the targeted firms have behaved anticompetitively — merely that they are above a certain size. This is to say that, in some cases, big is bad.

Senator Warner’s policies would harm competition and innovation

As Geoffrey Manne and Gus Hurwitz have recently noted, these views run completely counter to the last half-century or more of economic and legal learning in antitrust law. From its murky, politically motivated origins through the early 1960s, when the Structure-Conduct-Performance (“SCP”) interpretive framework was ascendant, antitrust law was more or less guided by regulators’ gut feeling that big business necessarily harmed the competitive process.

Thus, at its height with SCP, “big is bad” antitrust relied on presumptions that large firms over a certain arbitrary threshold were harmful and should be subjected to more searching judicial scrutiny when merging or conducting business.

A paradigmatic example of this approach can be found in Von’s Grocery, where the Supreme Court prevented the merger of two relatively small grocery chains. Combined, the two chains would have constituted a mere 9 percent of the market, yet the Supreme Court, relying on the SCP framework’s aversion to concentration as such, prevented the merger notwithstanding procompetitive justifications that would have allowed the combined entity to compete more effectively in a market that was coming to be dominated by large supermarkets.

As Manne and Hurwitz observe: “this decision meant breaking up a merger that did not harm consumers, on the one hand, while preventing firms from remaining competitive in an evolving market by achieving efficient scale, on the other.” And this gets to the central defect of Senator Warner’s proposals. He ties his decisions to interfere in the operations of large tech firms to their size without respect to any demonstrable harm to consumers.

To approach antitrust this way — that is, to roll the clock back to a period before there was a well-defined and administrable standard for antitrust — is to open the door for regulation by political whim. But the value of the contemporary consumer welfare test is that it provides knowable guidance that limits both the undemocratic conduct of politically motivated enforcers as well as the opportunities for private firms to engage in regulatory capture. As Manne and Hurwitz observe:

Perhaps the greatest virtue of the consumer welfare standard is not that it is the best antitrust standard (although it is) — it’s simply that it is a standard. The story of antitrust law for most of the 20th century was one of standard-less enforcement for political ends. It was a tool by which any entrenched industry could harness the force of the state to maintain power or stifle competition.

While it is unlikely that Senator Warner intends to entrench politically powerful incumbents, or enable regulation by whim, those are the likely effects of his proposals.

Antitrust law has a rich set of tools for dealing with competitive harm. Introducing legislation to define arbitrary thresholds for limiting the potential power of firms will ultimately undermine the power of those tools and erode the welfare of consumers.

 

What happened

Today, following a six-year investigation into Google’s business practices in India, the Competition Commission of India (CCI) issued its ruling.

Two things, in particular, are remarkable about the decision. First, while the CCI’s staff recommended a finding of liability on a litany of claims (the exact number is difficult to infer from the Commission’s decision, but it appears to be somewhere in the double digits), the Commission accepted its staff’s recommendation on only three — and two of those involve conduct no longer employed by Google.

Second, nothing in the Commission’s finding of liability or in the remedy it imposes suggests it approaches the issue as the EU does. To be sure, the CCI employs rhetoric suggesting that “search bias” can be anticompetitive. But its focus remains unwaveringly on the welfare of the consumer, not on the hyperbolic claims of Google’s competitors.

What didn’t happen

In finding liability on only a single claim involving ongoing practices — the claim arising from Google’s “unfair” placement of its specialized flight search (Google Flights) results — the Commission also roundly rejected a host of other claims (more than once with strong words directed at its staff for proposing such woefully unsupported arguments). Among these are several that have been raised (and unanimously rejected) by competition regulators elsewhere in the world. These claims related to a host of Google’s practices, including:

  • Search bias involving the treatment of specialized Google content (like Google Maps, YouTube, Google Reviews, etc.) other than Google Flights
  • Search bias involving the display of Universal Search results (including local search, news search, image search, etc.), except where these results are fixed to a specific position on every results page (as was the case in India before 2010), instead of being inserted wherever most appropriate in context
  • Search bias involving OneBox results (instant answers to certain queries that are placed at the top of search results pages), even where answers are drawn from Google’s own content and specific, licensed sources (rather than from crawling the web)
  • Search bias involving sponsored, vertical search results (e.g., Google Shopping results) other than Google Flights. These results are not determined by the same algorithm that returns organic results, but are instead more like typical paid search advertising results that sometimes appear at the top of search results pages. The Commission did find that Google’s treatment of its Google Flight results (another form of sponsored result) violated India’s competition laws
  • The operation of Google’s advertising platform (AdWords), including the use of a “Quality Score” in its determination of an ad’s relevance (something Josh Wright and I discuss at length here)
  • Google’s practice of allowing advertisers to bid on trademarked keywords
  • Restrictions placed by Google upon the portability of advertising campaign data to other advertising platforms through its AdWords API
  • Distribution agreements that set Google as the default (but not exclusive) search engine on certain browsers
  • Certain restrictions in syndication agreements with publishers (websites) through which Google provides search and/or advertising (Google’s AdSense offering). The Commission found that negotiated search agreements that require Google to be the exclusive search provider on certain sites did violate India’s competition laws. It should be noted, however, that Google has very few of these agreements, and no longer enters into them, so the finding is largely historical. All of the other assertions regarding these agreements (and there were numerous claims involving a number of clauses in a range of different agreements) were rejected by the Commission.

Just like competition authorities in the US, Canada, and Taiwan that have properly focused on consumer welfare in their Google investigations, the CCI found important consumer benefits from these practices that outweigh any inconveniences they may impose on competitors. And, just as in those jurisdictions, all of these claims were rejected by the Commission.

Still improperly assessing Google’s dominance

The biggest problem with the CCI’s decision is its acceptance — albeit moderated in important ways — of the notion that Google owes a special duty to competitors given its position as an alleged “gateway” to the Internet:

In the present case, since Google is the gateway to the internet for a vast majority of internet users, due to its dominance in the online web search market, it is under an obligation to discharge its special responsibility. As Google has the ability and the incentive to abuse its dominant position, its “special responsibility” is critical in ensuring not only the fairness of the online web search and search advertising markets, but also the fairness of all online markets given that these are primarily accessed through search engines. (para 202)

As I’ve discussed before, a proper analysis of the relevant markets in which Google operates would make clear that Google is beset by actual and potential competitors at every turn. Access to consumers by advertisers, competing search services, other competing services, mobile app developers, and the like is readily available. The lines between markets drawn by the CCI are based on superficial distinctions that are of little importance to the actual relevant market.

Consider, for example: Users seeking product information can get it via search, but also via Amazon and Facebook; advertisers can place ad copy and links in front of millions of people on search results pages, and they can also place them in front of millions of people on Facebook and Twitter. Meanwhile, many specialized search competitors like Yelp receive most of their traffic from direct navigation and from their mobile apps. In short, the assumption of market dominance made by the CCI (and so many others these days) is based on a stilted conception of the relevant market, as Google is far from the only channel through which competitors can reach consumers.

The importance of innovation in the CCI’s decision

Of course, it’s undeniable that Google is an important mechanism by which competitors reach consumers. And, crucially, nowhere did the CCI adopt Google’s critics’ and competitors’ frequently asserted position that Google is, in effect, an “essential facility” requiring extremely demanding limitations on its ability to control its product when doing so might impede its rivals.

So, while the CCI defines the relevant markets and adopts legal conclusions that confer special importance on Google’s operation of its general search results pages, it stops short of demanding that Google treat competitors on equal terms to its own offerings, as would typically be required of essential facilities (or their close cousin, public utilities).

Significantly, the Commission weighs the imposition of even these “special responsibilities” against the effects of such duties on innovation, particularly with respect to product design.

The CCI should be commended for recognizing that any obligation imposed by antitrust law on a dominant company to refrain from impeding its competitors’ access to markets must stop short of requiring the company to stop innovating, even when its product innovations might make life difficult for its competitors.

Of course, some product design choices can be, on net, anticompetitive. But innovation generally benefits consumers, and it should be impeded only where doing so clearly results in net consumer harm. Thus:

[T]he Commission is cognizant of the fact that any intervention in technology markets has to be carefully crafted lest it stifles innovation and denies consumers the benefits that such innovation can offer. This can have a detrimental effect on economic welfare and economic growth, particularly in countries relying on high growth such as India…. [P]roduct design is an important and integral dimension of competition and any undue intervention in designs of SERP [Search Engine Results Pages] may affect legitimate product improvements resulting in consumer harm. (paras 203-04).

As a consequence of this cautious approach, the CCI refused to accede to its staff’s findings of liability based on Google’s treatment of its vertical search results without considering how Google’s incorporation of these specialized results improved its product for consumers. Thus, for example:

The Commission is of opinion that requiring Google to show third-party maps may cause a delay in response time (“latency”) because these maps reside on third-party servers…. Further, requiring Google to show third-party maps may break the connection between Google’s local results and the map…. That being so, the Commission is of the view that no case of contravention of the provisions of the Act is made out in Google showing its own maps along with local search results. The Commission also holds that the same consideration would apply for not showing any other specialised result designs from third parties. (para 224 (emphasis added))

The CCI’s laudable and refreshing focus on consumer welfare

Even where the CCI determined that Google’s current practices violate India’s antitrust laws (essentially only with respect to Google Flights), it imposed a remedy that does not demand alteration of the overall structure of Google’s search results, nor its algorithmic placement of those results. In fact, the most telling indication that India’s treatment of product design innovation embodies a consumer-centric approach markedly different from that pushed by Google’s competitors (and adopted by the EU) is its remedy.

Following its finding that

[p]rominent display and placement of Commercial Flight Unit with link to Google’s specialised search options/ services (Flight) amounts to an unfair imposition upon users of search services as it deprives them of additional choices (para 420),

the CCI determined that the appropriate remedy for this defect was:

So far as the contravention noted by the Commission in respect of Flight Commercial Unit is concerned, the Commission directs Google to display a disclaimer in the commercial flight unit box indicating clearly that the “search flights” link placed at the bottom leads to Google’s Flights page, and not the results aggregated by any other third party service provider, so that users are not misled. (para 422 (emphasis added))

Indeed, what is most notable — and laudable — about the CCI’s decision is that both the alleged problem and the proposed remedy are laser-focused on the effect on consumers, not on the welfare of competitors.

Where the EU’s recent Google Shopping decision considers that this sort of non-neutral presentation of Google search results harms competitors, and demands that Google treat rivals seeking access to its search results page on equal terms, the CCI sees instead that non-neutral presentation of results could be confusing to consumers. It does not demand that Google open its doors to competitors, but rather that it more clearly identify when its product design prioritizes Google’s own content instead of determining priority based on its familiar organic search results algorithm.

This distinction is significant. For all the language in the decision asserting Google’s dominance and suggesting possible impediments to competition, the CCI does not, in fact, view Google’s design of its search results pages as a contrivance intended to exclude competitors from accessing markets.

The CCI’s remedy suggests that it has no problem with Google maintaining control over its search results pages and determining what results, and in what order, to serve to consumers. Its sole concern, rather, is that Google not get a leg up at the expense of consumers by misleading them into thinking that its product design is something that it is not.

Rather than dictate how Google should innovate or force it to perpetuate an outdated design in the name of preserving access by competitors bent on maintaining the status quo, the Commission embraces the consumer benefits of Google’s evolving products, and seeks to impose only a narrowly targeted tweak aimed directly at the quality of consumers’ interactions with Google’s products.

Conclusion

As some press accounts of the CCI’s decision trumpet, the Commission did impose liability on Google for abuse of a dominant position. But the similarity to the EU’s abuse-of-dominance finding ends there. The CCI rejected many more claims than it adopted, and it carefully tailored its remedy to the welfare of consumers, not the lamentations of competitors. Unlike the EU’s, the CCI’s finding of a violation is tempered by a concern for avoiding harmful constraints on innovation and product design, and its remedy makes this clear. Whatever the defects of India’s decision, it offers a welcome return to consumer-centric antitrust.

On January 23rd, the Heritage Foundation convened its Fourth Annual Antitrust Conference, “Trump Antitrust Policy after One Year.”  The entire Conference can be viewed online (here).  The Conference featured a keynote speech, followed by three separate panels that addressed developments at the Federal Trade Commission (FTC), at the Justice Department’s Antitrust Division (DOJ), and in the international arena – developments that can have a serious effect on the country’s economic growth and expansion of our business and industrial sector.

  1. Professor Bill Kovacic’s Keynote Speech

The conference started with a bang, featuring a stellar keynote speech (complemented by excellent PowerPoint slides) by GW Professor and former FTC Chairman Bill Kovacic, who also serves as a Member of the Board of the UK Government’s Competition and Markets Authority.  Kovacic began by noting the claim by senior foreign officials that “nothing is happening” in U.S. antitrust enforcement.  Although this perception may be inaccurate, Kovacic argued that it colors foreign officials’ dealings with the U.S., and continues a preexisting trend of diminishing U.S. influence on foreign governments’ antitrust enforcement systems.  (It is widely believed that the European antitrust model is dominant internationally.)

In order to enhance the perceived effectiveness (and prestige) of American antitrust on the global plane, American antitrust enforcers should, according to Kovacic, adopt a positive agenda citing specific priorities for action (as opposed to a “negative approach” focused on what actions will not be taken) – an orientation which former FTC Chairman Muris employed successfully in the last Bush Administration.  The positive engagement themes should be communicated powerfully to the public here and abroad through active public engagement by agency officials.  Agency strengths, such as FTC market studies and economic expertise, should be highlighted.

In addition, the FTC and Justice Department should act more like an “antitrust policy joint venture” at home and abroad, extending cooperation beyond guidelines to economic research, studies, and other aspects of their missions.  This would showcase the outstanding capabilities of the U.S. public antitrust enterprise.

  2. FTC Panel

A panel on FTC developments (moderated by Dr. Jeff Eisenach, Managing Director of NERA Economic Consulting and former Chief of Staff to FTC Chairman James Miller) followed Kovacic’s presentation.

Acting Bureau of Competition Chief Bruce Hoffman began by stressing that FTC antitrust enforcers are busier than ever, with a number of important cases in litigation and resources stretched to the limit.  Thus, FTC enforcement is neither weak nor timid – to the contrary, it is quite vigorous.  Hoffman was surprised by recent political attacks on the 40-year bipartisan consensus regarding the economics-centered consumer welfare standard that has set the direction of U.S. antitrust enforcement.  According to Hoffman, noted economist Carl Shapiro has debunked the notion that supposed increases in industry concentration, even at the national level, are meaningful.  In short, there is no empirical basis to dethrone the consumer welfare standard and replace it with something else.

Other former senior FTC officials engaged in a discussion following Hoffman’s remarks.  Orrick Partner Alex Okuliar, a former Attorney-Advisor to FTC Acting Chairman Maureen Ohlhausen, noted Ohlhausen’s emphasis on “regulatory humility” (recognizing the inherent limitations of regulation and acting in accordance with those limits) and on the work of the FTC’s Economic Liberty Task Force, which centers on removing unnecessary regulatory restraints on competition (such as excessive occupational licensing requirements).

Wilson Sonsini Partner Susan Creighton, a former Director of the FTC’s Bureau of Competition, discussed the importance of economics-based “technocratic antitrust” (applied by sophisticated judges) for a sound and manageable antitrust system – something still not well understood by many foreign antitrust agencies.  Creighton had three reform suggestions for the Trump Administration:

(1) the DOJ and the FTC should stress the central role of economics in the institutional arrangements of antitrust (DOJ’s “economics structure” is a bit different from the FTC’s);

(2) both agencies should send relatively more economists to represent the United States at antitrust meetings abroad, thereby enabling the agencies to place greater stress on the importance of economic rigor in antitrust enforcement; and

(3) the FTC and the DOJ should establish a task force to jointly carry out economics research and hone a consistent economic policy message.

Sidley & Austin Partner Bill Blumenthal, a former FTC General Counsel, noted the problems of defining Trump FTC policy in the absence of new Trump FTC Commissioners.  Blumenthal noted that signs of a populist uprising against current antitrust norms extend beyond antitrust, and that the agencies may have to look to new unilateral conduct cases to show that they are “doing something.”  He added that the populist rejection of current economics-based antitrust analysis is intellectually incoherent.  There is a tension between protecting consumers and protecting labor; for example, anti-consumer cartels may be beneficial to labor union interests.

In a follow-up roundtable discussion, Hoffman noted that theoretical “existence theorems” of anticompetitive harm that lack empirical support in particular cases are not administrable.  Creighton opined that, as an independent agency, the FTC may be a bit more susceptible to congressional pressure than DOJ.  Blumenthal stated that congressional interest may be able to trigger particular investigations, but it does not dictate outcomes.

  3. DOJ Panel

Following lunch, a panel of antitrust experts (moderated by Morgan Lewis Partner Hill Wellford, a former Chief of Staff to the Assistant Attorney General) addressed DOJ developments.

The current Principal Deputy Assistant Attorney General for Antitrust, Andrew Finch, began by stating that the three major Antitrust Division initiatives involve (1) intellectual property (IP), (2) remedies, and (3) criminal enforcement.  Assistant Attorney General Makan Delrahim’s November 2017 speech explained that antitrust should not undermine legitimate incentives of patent holders to maximize returns to their IP through licensing.  DOJ is looking into buyer and seller cartel behavior (including in standard setting) that could harm IP rights.  DOJ will work to streamline and improve consent decrees and other remedies, and make it easier to go after decree violations.  In criminal enforcement, DOJ will continue to go after “no employee poaching” employer agreements as criminal violations.

Former Assistant Attorney General Tom Barnett, a Covington & Burling Partner, noted that more national agencies are willing to intervene in international matters, leading to inconsistencies in results.  The International Competition Network is important, but major differences in rhetoric have created a sense that there is very little agreement among enforcers, although the reality may be otherwise.  Muted U.S. agency voices on the international plane and limited resources have proven unfortunate – the FTC needs to engage better in international discussions and needs new Commissioners.

Former Counsel to the Assistant Attorney General Eric Grannon, a White & Case Partner, made three specific comments:

(1) DOJ should look outside the career criminal enforcement bureaucracy and consider selecting someone with significant private sector experience as Deputy Assistant Attorney General for Criminal Enforcement;

(2) DOJ needs to go beyond merely focusing on metrics that show increased aggregate fines and jail time year-by-year (something is wrong if cartel activities and penalties keep rising despite the growing emphasis on inculcating an “anti-cartel culture” within firms); and

(3) DOJ needs to reassess its “amnesty plus” program, in which an amnesty applicant benefits by highlighting the existence of a second cartel in which it participates (non-culpable firms allegedly in the second cartel may be fingered, leading to unjustified potential treble damages liability for them in private lawsuits).

Grannon urged that DOJ hold a public workshop on the amnesty plus program in the coming year.  Grannon also argued against the classification of antitrust offenses as crimes of “moral turpitude” (moral turpitude offenses allow perpetrators to be excluded from the U.S. for 20 years).  Finally, as a good government measure, Grannon recommended that the Antitrust Division should post all briefs on its website, including those of opposing parties and third parties.

Baker Botts Partner Stephen Weissman, a former Deputy Director of the FTC’s Bureau of Competition, found a great deal of continuity in DOJ civil enforcement.  Nevertheless, he expressed surprise at Assistant Attorney General Delrahim’s recent remarks suggesting that DOJ might consider asking the Supreme Court to overturn the Illinois Brick ban on indirect purchaser suits under federal antitrust law.  Weissman noted the increased DOJ focus on the rights of IP holders, not implementers, and the beneficial emphasis on the importance of DOJ’s amicus program.

The following discussion among the panelists elicited agreement (Weissman and Barnett) that the business community needs more clear-cut guidance on vertical mergers (and perhaps on other mergers as well) and affirmative statements on DOJ’s plans.  DOJ was characterized as too heavy-handed in setting timing agreements in mergers.  The panelists were in accord that enforcers should continue to emphasize the American consumer welfare model of antitrust.  The panelists believed the U.S. gets it right in stressing jail time for cartelists and in detrebling for amnesty applicants.  DOJ should, however, apply a proper dose of skepticism in assessing the factual content of proffers made by amnesty applicants.  Former enforcers saw no need to automatically grant markers to those applicants.  Andrew Finch returned to the topic of Illinois Brick, explaining that the Antitrust Modernization Commission had suggested reexamining that case’s bar on federal indirect purchaser suits.  In response to an audience question as to which agency should do internet oversight, Finch stressed that relevant agency experience and resources are assessed on a matter-specific basis.

  4. International Panel

The last panel of the afternoon, which focused on international developments, was moderated by Cadwalader Counsel (and former Attorney-Advisor to FTC Chairman Tim Muris) Bilal Sayyed.

Deputy Assistant Attorney General for International Matters, Roger Alford, began with an overview of trade and antitrust considerations.  Alford explained that DOJ adds a consumer welfare and economics perspective to Trump Administration trade policy discussions.  On the international plane, DOJ supports principles of non-discrimination, strong antitrust enforcement, and opposition to national champions, plus the addition of a new competition chapter in “NAFTA 2.0” negotiations.  The revised 2017 DOJ International Antitrust Guidelines dealt with economic efficiency and the consideration of comity.  DOJ and the Executive Branch will take into account the degree of conflict with other jurisdictions’ laws (fleshing out comity analysis) and will push case coordination as well as policy coordination.  DOJ is considering new ideas for dealing with due process internationally, in addition to working within the International Competition Network to develop best practices.  Better international coordination is also needed on the cartel leniency program.

Next, Koren Wong-Ervin, Qualcomm Director of IP and Competition Policy (and former Director of the Scalia Law School’s Global Antitrust Institute), stated that the Korea Fair Trade Commission had ignored comity and guidance from U.S. expert officials in imposing global licensing remedies and penalties on Qualcomm.  The U.S. Government is moving toward a sounder approach on the evaluation of standard essential patents, as is Europe, with a move away from required component-specific patent licensing royalty determinations.  More generally, a return to an economic effects-based approach to IP licensing is important.  Comprehensive revisions to China’s Anti-Monopoly Law, now under consideration, will have enormous public policy importance.  Balanced IP licensing rules, with courts as gatekeepers, are important.  Chinese law still contains overly broad essential facilities and deception provisions, and IP price regulation proposals are very troublesome.  New FTC Commissioners are needed, accompanied by robust budget support for international work.

Latham & Watkins’ Washington, D.C. Managing Partner Michael Egge focused on the substantial divergence in merger enforcement practice around the world.  The cost of compliance imposed by European Commission pre-notification filing requirements is overly high; this pre-notification practice is not written down and has escaped needed public attention.  Chinese merger filing practice (“China is struggling to cope”) features a costly 1-3 month pre-filing acceptance period, and merger filing requirements in India are particularly onerous.

Jim Rill, former Assistant Attorney General for Antitrust and former ABA Antitrust Section Chair, stressed that due process improvements can help promote substantive antitrust convergence around the globe.  Rill stated that U.S. Government officials, with the assistance of private sector stakeholders, need a mechanism (a “report card”) to measure foreign agencies’ implementation of OECD antitrust recommendations.  U.S. Government officials should consider participating in foreign proceedings where the denial of due process is blatant, and where foreign governments indirectly dictate a particular harmful policy result.  Multilateral review of international agreements is valuable as well.  The comity principles found in the 1991 EU-U.S. Antitrust Cooperation Agreement are quite useful.  Trade remedies in antitrust agreements are not a competition solution, and are not helpful.  More and better training programs for foreign officials are called for; International Chamber of Commerce, American Bar Association, and U.S. Chamber of Commerce principles are generally sound.  Some consideration should be given to old ICPAC recommendations, such as (perhaps) the development of a common merger notification form for use around the world.

Douglas Ginsburg, Senior Judge (and former Chief Judge) of the U.S. Court of Appeals for the D.C. Circuit, and former Assistant Attorney General for Antitrust, spoke last, focusing on the European Court of Justice’s Intel decision, which laid bare the deficiencies in the European Commission’s finding of a competition law violation in that matter.

In a brief closing roundtable discussion, Roger Alford suggested possible greater involvement by business community stakeholders in training foreign antitrust officials.

  5. Conclusion

Heritage Foundation host Alden Abbott closed the proceedings with a brief capsule summary of panel highlights.  As in prior years, the Fourth Annual Heritage Antitrust Conference generated spirited discussion among the brightest lights in the American antitrust firmament on recent developments and likely trends in antitrust enforcement and policy development, here and abroad.

This week, the International Center for Law & Economics filed comments on the proposed revision to the joint U.S. Federal Trade Commission (FTC) – U.S. Department of Justice (DOJ) Antitrust-IP Licensing Guidelines. Overall, the guidelines present a commendable framework for the IP-antitrust intersection, in particular as they broadly recognize the value of IP and licensing in spurring both innovation and commercialization.

Although our assessment of the proposed guidelines is generally positive, we do go on to offer some constructive criticism. In particular, we believe, first, that the proposed guidelines should more strongly recognize that a refusal to license does not deserve special scrutiny; and, second, that traditional antitrust analysis is largely inappropriate for the examination of innovation or R&D markets.

On refusals to license,

Many of the product innovation cases that have come before the courts rely upon what amounts to an implicit essential facilities argument. The theories that drive such cases, although not explicitly relying upon the essential facilities doctrine, encourage claims based on variants of arguments about interoperability and access to intellectual property (or products protected by intellectual property). But, the problem with such arguments is that they assume, incorrectly, that there is no opportunity for meaningful competition with a strong incumbent in the face of innovation, or that the absence of competitors in these markets indicates inefficiency … Thanks to the very elements of IP that help them to obtain market dominance, firms in New Economy technology markets are also vulnerable to smaller, more nimble new entrants that can quickly enter and supplant incumbents by leveraging their own technological innovation.

Further, since a right to exclude is a fundamental component of IP rights, a refusal to license IP should continue to be generally considered as outside the scope of antitrust inquiries.

And, with respect to conducting antitrust analysis of R&D or innovation “markets,” we note first that “it is the effects on consumer welfare against which antitrust analysis and remedies are measured” before going on to note that the nature of R&D makes its effects on consumer welfare very difficult to measure. Thus, we recommend that the agencies continue to focus on actual goods and services markets:

[C]ompetition among research and development departments is not necessarily a reliable driver of innovation … R&D “markets” are inevitably driven by a desire to innovate with no way of knowing exactly what form or route such an effort will take. R&D is an inherently speculative endeavor, and standard antitrust analysis applied to R&D will be inherently flawed because “[a] challenge for any standard applied to innovation is that antitrust analysis is likely to occur after the innovation, but ex post outcomes reveal little about whether the innovation was a good decision ex ante, when the decision was made.”

It appears that the White House’s zeal for progressive-era legal theory has … progressed (or regressed?) further. Late last week President Obama signed an Executive Order that nominally directs executive agencies (and “strongly encourages” independent agencies) to adopt “pro-competitive” policies. It’s called Steps to Increase Competition and Better Inform Consumers and Workers to Support Continued Growth of the American Economy, and was produced alongside an issue brief from the Council of Economic Advisers titled Benefits of Competition and Indicators of Market Power.

TL;DR version: the Order and its brief appear aimed not so much at protecting consumers or competition as at providing justification for favored regulatory adventures.

In truth, it’s not exactly clear what problem the President is trying to solve. There is language in both the Order and the brief that could be interpreted in a positive light and, likewise, language that reads more like a shot across the bow of “unruly” corporate citizens who have not gotten in line with the President’s agenda. Most of the Order and the corresponding CEA brief read as a rote recital of basic antitrust principles: price fixing bad, collusion bad, competition good. That said, two items in the Order particularly stood out.

The (Maybe) Good

Section 2 of the Order states that

Executive departments … with authorities that could be used to enhance competition (agencies) shall … use those authorities to promote competition, arm consumers and workers with the information they need to make informed choices, and eliminate regulations that restrict competition without corresponding benefits to the American public. (emphasis added)

Obviously this is music to the ears of anyone who has thought that agencies should be required to do a basic economic analysis before undertaking brave voyages of regulatory adventure. And this is what the Supreme Court was getting at in Michigan v. EPA when it examined the meaning of the phrase “appropriate” in connection with environmental regulations:

One would not say that it is even rational, never mind “appropriate,” to impose billions of dollars in economic costs in return for a few dollars in health or environmental benefits.

Thus, if this Order follows the direction of Michigan v. EPA, and it becomes the standard for agencies to conduct cost-benefit analyses before issuing regulations (and to review old regulations through such an analysis), then wonderful! Moreover, this mandate to agencies to reduce regulations that restrict competition could lead to an unexpected reformation of a variety of regulations – even outside of the agencies themselves. For instance, the FTC is laudable in its ongoing efforts both to correct anticompetitive state licensing laws and to resist state-protected incumbents, such as taxi-cab companies.

Still, I have trouble believing that the President — and this goes for any president, really, regardless of party — would truly intend for agencies under his control to actually cede regulatory ground when a little thing like economic reality points in a different direction from official policy. After all, there was ample information available that the Title II requirements on broadband providers would be costly and would reduce capital expenditures, and the White House nonetheless encouraged the FCC to go ahead with reclassification.

And this isn’t the first time that the President has directed agencies to perform retrospective review of regulation (see the Identifying and Reducing Regulatory Burdens Order of 2012). To date, however, there appears to be little evidence that the burdens of the regulatory state have lessened. Last year set a record for the page count of the Federal Register (80k+ pages), and the data suggest that the cost of the regulatory state is only increasing. Thus, despite the pleasant noises the Order makes with regard to imposing economic discipline on agencies – and despite the good example Canada has set for us in this regard – I am not optimistic about the actual result.

And the (maybe) good builds an important bridge to the (probably) bad of the Order. It is well and good to direct agencies to engage in economic calculation when they write and administer regulations, but such calculation must be in earnest, and must be guided by the hard-earned learning developed over the course of antitrust jurisprudence in the US. As Geoffrey Manne and Josh Wright have noted:

Without a serious methodological commitment to economic science, the incorporation of economics into antitrust is merely a façade, allowing regulators and judges to select whichever economic model fits their earlier beliefs or policy preferences rather than the model that best fits the real‐world data. Still, economic theory remains essential to antitrust law. Economic analysis constrains and harnesses antitrust law so that it protects consumers rather than competitors.

Unfortunately, the brief does not indicate an interest in more than a façade of economic rigor. For instance, it relies on the outmoded 50-firm revenue concentration numbers gathered by the Census Bureau to support the proposition that industries are highly concentrated and, therefore, anticompetitive. But it’s been fairly well understood since the 1970s that concentration says nothing directly about monopoly power and its exercise. In fact, concentration can often be seen as an indicator of superior efficiency that results in better outcomes for consumers (depending on the industry).

The (Probably) Bad

Apart from general concerns (such as having a host of federal agencies with no antitrust expertise now engaging in competition turf wars), there is one specific area that could have a dramatically bad result for long-term policy, and that moreover reflects either ignorance of, or willful blindness to, antitrust jurisprudence. Specifically, the Order directs agencies to

identify specific actions that they can take in their areas of responsibility to build upon efforts to detect abuses such as price fixing, anticompetitive behavior in labor and other input markets, exclusionary conduct, and blocking access to critical resources that are needed for competitive entry. (emphasis added).

It then goes on to say that

agencies shall submit … an initial list of … any specific practices, such as blocking access to critical resources, that potentially restrict meaningful consumer or worker choice or unduly stifle new market entrants (emphasis added)

The generally uncontroversial references to price fixing and exclusionary conduct are bromides – after all, as the Order notes, we already have the FTC and DOJ very actively policing this sort of conduct. What’s novel here, however, is that the highlighted language above seems to amount to a mandate to executive agencies (and a strong suggestion to independent agencies) that they begin to seek out “essential facilities” within their regulated industries.

But “critical resources … needed for competitive entry” could mean nearly anything, depending on how you define competition and relevant markets. And asking non-antitrust agencies to integrate one of the more esoteric (and controversial) parts of antitrust law into their mission is going to be a recipe for disaster.

In fact, this may be one of the reasons why the Supreme Court declined to recognize the essential facilities doctrine as a distinct rule in Trinko, where it instead characterized the exclusionary conduct in Aspen Skiing as ‘at or near the outer boundary’ of Sherman Act § 2 liability.

In short, the essential facilities doctrine is widely criticized, by pretty much everyone. In their respected treatise, Antitrust Law, Herbert Hovenkamp and Philip Areeda have said that “the essential facility doctrine is both harmful and unnecessary and should be abandoned”; Michael Boudin has noted that the doctrine is full of “embarrassing weaknesses”; and Gregory Werden has opined that “Courts should reject the doctrine.” One important reason for the broad criticism is that

At bottom, a plaintiff … is saying that the defendant has a valuable facility that it would be difficult to reproduce … But … the fact that the defendant has a highly valued facility is a reason to reject sharing, not to require it, since forced sharing “may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.” (quoting Trinko)

Further, it’s really hard to say when one business is so critical to a particular market that its own internal functions need to be exposed for competitors’ advantage. For instance, is Big Data – which the CEA brief specifically notes as a potential “critical resource” – an essential facility when one company serves so many consumers that it has effectively developed an entire market that it dominates? (In case you are wondering, it’s actually not.) When exactly does a firm so outcompete its rivals that access to its business infrastructure can be seen by regulators as “essential” to competition? And is this just a set-up for punishing success – which hardly promotes competition, innovation or consumer welfare?

And, let’s be honest here: when the CEA considers Big Data as an essential facility, it is at least partially focused on Google and its various search properties. Google is frequently the target of “essentialist” critics who argue, among other things, that Google’s prioritization of its own properties in its own search results violates antitrust rules. The story goes that Google search is so valuable that when Google publishes its own shopping results ahead of its various competitors, it is engaging in anticompetitive conduct. But this is a terribly myopic view of the choices available for search services because, as Geoffrey Manne has so ably noted before, “competitors denied access to the top few search results at Google’s site are still able to advertise their existence and attract users through a wide range of other advertising outlets[.]”

Moreover, as more and more users migrate to specialized apps on their mobile devices for a variety of content, Google’s desktop search becomes just one choice among many for finding information. All of this leaves to one side, of course, the fact that for some categories, Google has incredibly stiff competition.

Thus it is that

to the extent that inclusion in Google search results is about “Stiglerian” search-cost reduction for websites (and it can hardly be anything else), the range of alternate facilities for this function is nearly limitless.

The troubling thing here is that, given the breezy analysis of the Order and the CEA brief, I don’t think the White House is really considering the long-term legal and economic implications of its command; the Order appears to be much more about political support for favored agency actions already under way.

Indeed, despite the length of the CEA brief and the variety of antitrust principles recited in the Order itself, an accompanying release points to what is really going on (at least in part). The White House, along with the FCC, seems to think that the embedded streams in a cable or satellite broadcast should be considered a form of essential facility that is an indispensable component of video consumers’ choice (which is laughable given the magnitude of choice in video consumption options that consumers enjoy today).

And, to the extent that courts might apply the (controversial) essential facilities doctrine, an “indispensable requirement … is the unavailability of access to the ‘essential facilities’[.]” This is clearly not the case with much of what the CEA brief points to as examples of ostensibly laudable pro-competitive regulation.

The doctrine wouldn’t apply, for instance, to the FCC’s Open Internet Order, since edge providers have access to customers over networks, even where network providers want to zero-rate, employ usage-based billing, or otherwise negotiate connection fees and prioritization. And it doesn’t apply to the set-top box kerfuffle, either; while third parties aren’t able to access the video streams that make up a cable broadcast, the market for consuming those streams is a single part of the entire video ecosystem. What really matters there is access to viewers, and the ability to provide services to consumers and compete for their business.

Yet, according to the White House, “the set-top box is the mascot” for the administration’s competition Order, because, apparently, cable boxes represent “what happens when you don’t have the choice to go elsewhere.” (“Elsewhere” to the White House, I assume, cannot include Roku, Apple TV, Hulu, Netflix, and a myriad of other video options that consumers can currently choose among.)

The set-top box is, according to the White House, a prime example of the problem that

[a]cross our economy, too many consumers are dealing with inferior or overpriced products, too many workers aren’t getting the wage increases they deserve, too many entrepreneurs and small businesses are getting squeezed out unfairly by their bigger competitors, and overall we are not seeing the level of innovative growth we would like to see.

This is, of course, nonsense. Consumers enjoy an incredible amount of low-cost, high-quality goods (including video options) – far more than at any point in history. After all:

From cable to Netflix to Roku boxes to Apple TV to Amazon FireStick, we have more ways to find and watch TV than ever — and we can do so in our living rooms, on our phones and tablets, and on seat-back screens at 30,000 feet. Oddly enough, FCC Chairman Tom Wheeler … agrees: “American consumers enjoy unprecedented choice in how they view entertainment, news and sports programming. You can pretty much watch what you want, where you want, when you want.”

Thus, I suspect that the White House has its eye on a broader regulatory agenda.

For instance, the Department of Labor recently announced that it would be extending its reach in the financial services industry by changing the standard for when financial advice might give rise to a fiduciary relationship under ERISA. It seems obvious that the SEC or FINRA could have taken up the slack for any financial services regulatory issues – it’s certainly within their respective wheelhouses. But that’s not the direction the administration took, possibly because SEC and FINRA are independent agencies. Thus, the DOL – an agency with substantially less financial and consumer protection experience than either the SEC or FINRA – has expansive new authority.

And that’s where more of the language in the Order comes into focus. It directs agencies to “ensur[e] that consumers and workers have access to the information needed to make informed choices[.]” The text of the DOL rule develops for itself a basis in competition law as well:

The current proposal’s defined boundaries between fiduciary advice, education, and sales activity directed at large plans, may bring greater clarity to the IRA and plan services markets. Innovation in new advice business models, including technology-driven models, may be accelerated, and nudged away from conflicts and toward transparency, thereby promoting healthy competition in the fiduciary advice market.

Thus, it’s hard to see what the White House is doing in the Order, other than laying the groundwork for expansive authority of non-independent executive agencies under the thin guise of promoting competition. Perhaps the President believes that couching this expansion in free-market terms (i.e., claiming that it’s “pro-competitive”) will somehow help the initiatives go through with minimal friction. But there is nothing in the Order or the CEA brief to provide any confidence that competition will, in fact, be promoted. And in the end I have trouble seeing how this sort of regulatory adventurism does not run afoul of separation-of-powers issues, as well as assorted other legal challenges.

Finally, conjuring up a regulatory version of the essential facilities doctrine as a support for this expansion is simply a terrible idea — one that smacks much more of industrial policy than of sound regulatory reform or consumer protection.

The late Justice Antonin Scalia’s magisterial contributions to American jurisprudence will be the source of numerous learned analyses over the coming months.  As in so many other doctrinal areas, Justice Scalia’s opinions contributed importantly to the sound development of antitrust law, and, in particular, to the assessment of monopolization.  His oft-cited 2004 opinion for the U.S. Supreme Court in Verizon v. Trinko, on which I will focus, is particularly noteworthy.

In Trinko, the “dominant” telecommunications carrier Verizon had been required by the Telecommunications Act of 1996 (1996 Act) to make individual elements of its local network available to new competing “local exchange carriers” (LECs) on an “unbundled” cost-based basis.  The Federal Communications Commission (FCC) penalized Verizon for providing inadequate network access to certain competitors, in violation of complex FCC regulations implementing the 1996 Act.  (Although the Supreme Court’s Trinko decision did not explicitly discuss the point, the byzantine 1996 Act interconnection regulations in essence had forced Verizon to provide access to competitors on unfavorable below-cost terms – terms to which no rational profit-maximizing business would have agreed.)  Verizon also faced a class action antitrust suit for anticompetitive monopolization, brought in federal district court by Trinko, a customer of AT&T, which owned one of the LECs.  Trinko complained that Verizon had filled rivals’ orders on a discriminatory basis as part of an anticompetitive scheme to discourage customers from becoming or remaining customers of competitive LECs, thus impeding the competitive LECs’ ability to enter and compete in the market for local telephone service.  In essence, as the Supreme Court put it, “[t]he complaint allege[d] that Verizon denied interconnection services to rivals in order to limit entry.”  The district court dismissed the complaint, the Second Circuit reinstated the antitrust claim, and the Supreme Court granted certiorari.

Justice Scalia’s brilliant Trinko opinion clarified a number of broad issues related to the antitrust analysis of monopolization.

First, it highlighted the importance of weighing error costs, with an emphasis on false positives, in assessing a monopolist’s conduct (case citations omitted):

Against the slight benefits of antitrust intervention here, we must weigh a realistic assessment of its costs. Under the best of circumstances, applying the requirements of §2 “can be difficult” because “the means of illicit exclusion, like the means of legitimate competition, are myriad.” . . . .  The cost of false positives counsels against an undue expansion of §2 liability. One false-positive risk is that an incumbent LEC’s failure to provide a service with sufficient alacrity might have nothing to do with exclusion. Allegations of violations of . . . [1996 Act regulatory] duties are difficult for antitrust courts to evaluate, not only because they are highly technical, but also because they are likely to be extremely numerous, given the incessant, complex, and constantly changing interaction of competitive and incumbent LECs implementing the sharing and interconnection obligations. . . .  [Evaluation of such duties] would surely be a daunting task for a generalist antitrust court.  Judicial oversight under the Sherman Act would seem destined to distort investment and lead to a new layer of interminable litigation, atop the variety of litigation routes already available to and actively pursued by competitive LECs.

Second, it dispensed with the notion that requirements created by economic regulations automatically impose antitrust duties on a monopolist:

[J]ust as the 1996 Act preserves claims that satisfy existing antitrust standards, it does not create new claims that go beyond existing antitrust standards; that would be equally inconsistent with the saving clause’s mandate that nothing in the Act “modify, impair, or supersede the applicability” of the antitrust laws.

Third, it stressed that aggressive single-firm conduct lies at the heart of consumer welfare-oriented competition, that monopolists should be given broad leeway in deciding with whom they wish to deal, and that antitrust should focus primarily on collusion among direct competitors, the supreme antitrust evil (case citation omitted):

Firms may acquire monopoly power by establishing an infrastructure that renders them uniquely suited to serve their customers.  Compelling such firms to share the source of their advantage is in some tension with the underlying purpose of antitrust law, since it may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.  Enforced sharing also requires antitrust courts to act as central planners, identifying the proper price, quantity, and other terms of dealing–a role for which they are ill-suited.  Moreover, compelling negotiation between competitors may facilitate the supreme evil of antitrust: collusion.  Thus, as a general matter, the Sherman Act “does not restrict the long recognized right of [a] trader or manufacturer engaged in an entirely private business, freely to exercise his own independent discretion as to parties with whom he will deal.”

Fourth, and related to the third point, the opinion largely eviscerated the doctrine that monopolists may be required to grant third parties access to a facility deemed “essential”:

We conclude that Verizon’s alleged insufficient assistance in the provision of service to rivals is not a recognized antitrust claim under this Court’s existing refusal-to-deal precedents.  This conclusion would be unchanged even if we considered to be established law the “essential facilities” doctrine crafted by some lower courts, under which the Court of Appeals concluded respondent’s allegations might state a claim. See generally Areeda, Essential Facilities: An Epithet in Need of Limiting Principles, 58 Antitrust L. J. 841 (1989) [arguing against the merits of the essential facilities doctrine]. We have never recognized such a doctrine, see Aspen Skiing Co., 472 U.S., at 611, n. 44; AT&T Corp. v. Iowa Utilities Bd., 525 U.S., at 428 (opinion of Breyer, J.), and we find no need . . . to recognize it . . . here.  It suffices for present purposes to note that . . . where access exists [as in this case], the doctrine serves no purpose.

In sum, Justice Scalia’s Trinko opinion transcends the dispute at hand and creates an efficiency-based, error cost-sensitive touchstone for antitrust monopolization analysis.  Put simply, the opinion teaches that a monopolist should be given broad leeway to innovate aggressively, and should not be condemned under antitrust law merely for acting in a manner that is consistent with its legitimate business interests.  This holds true even if the monopolist is violating some non-antitrust regulatory requirement.  Relatedly, as a general proposition a monopolist should not be required by antitrust law to deal with third parties or to make its facilities available to others.  A failure to heed those propositions would convert antitrust monopolization law into just another regulatory vehicle through which government would be empowered to micromanage business relationships among private parties.  Such a failure would predictably impose substantial economic harm, given its tendency to discourage successful firms from competing aggressively and the sad track record of excessively costly American regulatory micromanagement (see here, for example).  Let us hope that U.S. antitrust enforcers and the federal courts remain mindful of this important teaching, and thereby continue to heed Justice Scalia’s seminal contribution to American antitrust law.