
[This post adapts elements of “Should ASEAN Antitrust Laws Emulate European Competition Policy?”, published in the Singapore Economic Review (2021). Open access working paper here.]

U.S. and European competition laws diverge in numerous ways that have important real-world effects. Understanding these differences is vital, particularly as lawmakers in the United States, and the rest of the world, consider adopting a more “European” approach to competition.

In broad terms, the European approach is more centralized and political. The European Commission’s Directorate General for Competition (DG Comp) has significant de facto discretion over how the law is enforced. This contrasts with the common law approach of the United States, in which courts elaborate upon open-ended statutes through an iterative process of case law. In other words, the European system was built from the top down, while U.S. antitrust relies on a bottom-up approach, derived from arguments made by litigants (including the government antitrust agencies) and defendants (usually businesses).

This procedural divergence has significant ramifications for substantive law. European competition law includes more provisions akin to de facto regulation. This is notably the case for the “abuse of dominance” standard, in which a “dominant” business can be prosecuted for “abusing” its position by charging high prices or refusing to deal with competitors. By contrast, the U.S. system places more emphasis on actual consumer outcomes, rather than the nature or “fairness” of an underlying practice.

The American system thus affords firms more leeway to exclude their rivals, so long as this entails superior benefits for consumers. This may make the U.S. system more hospitable to innovation, since there is no built-in regulation of conduct for innovators who acquire a successful market position fairly and through normal competition.

In this post, we discuss some key differences between the two systems—including in areas like predatory pricing and refusals to deal—as well as the discretionary power the European Commission enjoys under the European model.

Exploitative Abuses

U.S. antitrust is, by and large, unconcerned with companies charging what some might consider “excessive” prices. The late Associate Justice Antonin Scalia, writing for the Supreme Court majority in the 2004 case Verizon v. Trinko, observed that:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.

This contrasts with European competition-law cases, where firms may be found to have infringed competition law because they charged excessive prices. As the European Court of Justice (ECJ) held in 1978’s United Brands case: “In this case charging a price which is excessive because it has no reasonable relation to the economic value of the product supplied would be such an abuse.”

While United Brands remains the EU’s foundational case on excessive pricing, and the European Commission reiterated in its 2009 guidance paper on abuse-of-dominance cases that such exploitative abuses remain actionable, the commission long showed little interest in actually bringing such cases. In recent years, however, both the European Commission and some national authorities have shown renewed interest in excessive-pricing cases, most notably in the pharmaceutical sector.

European competition law also penalizes so-called “margin squeeze” abuses, in which a dominant upstream supplier charges a price to distributors that is too high for them to compete effectively with that same dominant firm downstream:

[I]t is for the referring court to examine, in essence, whether the pricing practice introduced by TeliaSonera is unfair in so far as it squeezes the margins of its competitors on the retail market for broadband connection services to end users. (Konkurrensverket v TeliaSonera Sverige, 2011)
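The intuition behind a margin-squeeze claim can be reduced to simple arithmetic (an illustrative sketch with hypothetical numbers; real cases turn on heavily contested cost measures): an equally efficient downstream rival is “squeezed” when the dominant firm’s retail price, less its wholesale price, leaves no margin to cover downstream costs.

```python
def margin_is_squeezed(retail_price, wholesale_price, downstream_cost):
    """Illustrative "as-efficient competitor" arithmetic: a downstream
    rival that pays the dominant firm's wholesale price and bears the
    same downstream costs cannot profitably match its retail price."""
    rival_margin = retail_price - wholesale_price - downstream_cost
    return rival_margin < 0

# Hypothetical: a rival paying 25 wholesale, with 8 in downstream
# costs, cannot match a retail price of 30 (its margin would be -3):
margin_is_squeezed(retail_price=30, wholesale_price=25, downstream_cost=8)  # True
```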

As Scalia observed in Trinko, forcing firms to charge prices below a market’s natural equilibrium dampens their incentives to enter markets, notably with innovative products and more efficient means of production. But the problem is not just one of market entry and innovation. Also relevant is whether competition authorities are even competent to determine the “right” prices or margins.

As Friedrich Hayek demonstrated in his influential 1945 essay The Use of Knowledge in Society, economic agents use information gleaned from prices to guide their business decisions. It is this distributed activity of thousands or millions of economic actors that enables markets to put resources to their most valuable uses, thereby leading to more efficient societies. By comparison, the efforts of central regulators to set prices and margins are necessarily inferior; there is simply no reasonable way for competition regulators to make such judgments in a consistent and reliable manner.

Given the substantial risk that investigations into purportedly excessive prices will deter market entry, such investigations should be circumscribed. But the EU courts’ precedents, with their myopic focus on ex post prices, do not impose such constraints on the commission. The temptation to “correct” high prices—especially in the politically contentious pharmaceutical industry—may thus induce economically unjustified and ultimately deleterious intervention.

Predatory Pricing

A second important area of divergence concerns predatory-pricing cases. U.S. antitrust law subjects allegations of predatory pricing to two strict conditions:

  1. Monopolists must charge prices that are below some measure of their incremental costs; and
  2. There must be a realistic prospect that they will be able to recoup these initial losses.

In laying out its approach to predatory pricing, the U.S. Supreme Court has identified the risk of false positives and the clear cost of such errors to consumers. It thus has particularly stressed the importance of the recoupment requirement. As the court found in 1993’s Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”

Accordingly, U.S. authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme, or that the scheme itself would effectively foreclose rivals from entering the market in the first place. Otherwise, the predator would be undercut by competitors as soon as it attempts to recoup its losses by charging supra-competitive prices.

Without the strong likelihood that a monopolist will be able to recoup lost revenue from underpricing, the overwhelming weight of economic evidence (to say nothing of simple logic) is that predatory pricing is not a rational business strategy. Thus, apparent cases of predatory pricing are most likely not, in fact, predatory; deterring or punishing them would actually harm consumers.
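The recoupment logic can be made concrete with a stylized back-of-the-envelope calculation (hypothetical numbers; a sketch of the economic intuition, not a legal test): predation is a rational strategy only if the expected, discounted profits from later monopoly pricing exceed the losses incurred during the predation period.

```python
def predation_is_rational(predation_losses, monopoly_profit_per_period,
                          recoupment_periods, discount_rate,
                          prob_recoupment):
    """Stylized check: predatory pricing pays off only if the expected,
    discounted recoupment profits exceed the up-front losses."""
    discounted_profits = sum(
        monopoly_profit_per_period / (1 + discount_rate) ** t
        for t in range(1, recoupment_periods + 1)
    )
    return prob_recoupment * discounted_profits > predation_losses

# With free entry, the probability of successful recoupment is low,
# and predation never pays:
predation_is_rational(predation_losses=100, monopoly_profit_per_period=50,
                      recoupment_periods=5, discount_rate=0.05,
                      prob_recoupment=0.1)  # False
```

If entry barriers make recoupment likely (say, `prob_recoupment=0.9` in this toy example), the same scheme becomes profitable, which is precisely the scenario the Brooke Group conditions are designed to isolate.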

By contrast, the EU employs a more expansive legal standard to define predatory pricing, and almost certainly risks injuring consumers as a result. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory. Even when a firm charges prices that are between its average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of a plan to eliminate a competitor. Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.
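The EU’s cost-based screens described above can be summarized as a simple decision rule (a stylized sketch of the AKZO-style presumptions; the actual legal analysis involves far more than this arithmetic, and the labels below are illustrative):

```python
def eu_predation_screen(price, avg_variable_cost, avg_total_cost,
                        exclusionary_plan_evidence=False):
    """Stylized sketch of the EU's cost-based predatory-pricing screens.
    Illustrative only; note that recoupment appears nowhere in the rule."""
    if price < avg_variable_cost:
        # Pricing below average variable cost is presumed predatory.
        return "presumed predatory"
    if price < avg_total_cost and exclusionary_plan_evidence:
        # Between AVC and ATC, predation requires proof of a plan
        # to eliminate a competitor.
        return "predatory given exclusionary plan"
    return "no cost-based presumption"
```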

[I]t does not follow from the case‑law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive. (France Télécom v Commission, 2009).

This aspect of the legal standard has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant Chicago School understanding of predatory pricing. Indeed, strategic predatory pricing still requires some form of recoupment, and the refutation of any convincing business justification offered in response. For example, in a 2017 piece for the Antitrust Law Journal, Steven Salop lays out the “raising rivals’ costs” analysis of predation and notes that recoupment still occurs, just at the same time as predation:

[T]he anticompetitive conditional pricing practice does not involve discrete predatory and recoupment periods, as in the case of classical predatory pricing. Instead, the recoupment occurs simultaneously with the conduct. This is because the monopolist is able to maintain its current monopoly power through the exclusionary conduct.

The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in American antitrust law serves to differentiate aggressive pricing behavior that improves consumer welfare—because it leads to overall price decreases—from predatory pricing that reduces welfare with higher prices. It is, in other words, entirely focused on the welfare of consumers.

The European approach, by contrast, reflects structuralist considerations far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could use aggressive pricing to engender more concentrated markets. It is simply presumed that these more concentrated markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the ECJ’s reasoning on this point:

[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors. (Tetra Pak v Commission, 1996).

Similarly:

[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them.  (France Télécom v Commission, 2009).

In short, the European approach leaves less room to analyze the concrete effects of a given pricing scheme, leaving it more prone to false positives than the U.S. standard explicated in the Brooke Group decision. Worse still, the European approach ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory pricing standards may exert on firms that would otherwise seek to use aggressive pricing schemes to attract consumers.

Refusals to Deal

U.S. and EU antitrust law also differ greatly when it comes to refusals to deal. While the United States has limited the ability of either enforcement authorities or rivals to bring such cases, EU competition law sets a far lower threshold for liability.

As Justice Scalia wrote in Trinko:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. (Verizon v Trinko, 2004.)

This highlights two key features of American antitrust law with regard to refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine. Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market. Moreover, as Justice Scalia observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.

While even this is not likely the economically appropriate limitation on liability, its impetus—ensuring that liability is found only in situations where procompetitive explanations for the challenged conduct are unlikely—is completely appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.

As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.

In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must not be objective reasons that justify the refusal. Moreover, if the refusal to deal involves intellectual property, it must also prevent the appearance of a new good.

In practice, however, all of these conditions have been relaxed significantly by EU courts and the commission’s decisional practice. This is best evidenced by the lower court’s Microsoft ruling where, as John Vickers notes:

[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.

EU competition law thus shows far less concern for its potential chilling effect on firms’ investments than does U.S. antitrust law.

Vertical Restraints

There are vast differences between U.S. and EU competition law relating to vertical restraints—that is, contractual restraints between firms that operate at different levels of the production process.

On the one hand, since the Supreme Court’s Leegin ruling in 2007, even price-related vertical restraints (such as resale price maintenance (RPM), under which a manufacturer can stipulate the prices at which retailers must sell its products) are assessed under the rule of reason in the United States. Some commentators have gone so far as to say that, in practice, U.S. case law on RPM almost amounts to per se legality.

Conversely, EU competition law treats RPM as severely as it treats cartels. Both RPM and cartels are considered to be restrictions of competition “by object”—the EU’s equivalent of a per se prohibition. This severe treatment also applies to non-price vertical restraints that tend to partition the European internal market.

Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist, and economically grounded, principle that inter-brand competition is the appropriate framework to assess vertical restraints:

Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former. (Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, 1966).

This treatment of vertical restrictions flies in the face of longstanding mainstream economic analysis of the subject. As Patrick Rey and Jean Tirole conclude:

Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.

Unlike the EU, the U.S. Supreme Court in Leegin took account of the weight of the economic literature, and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits, writing: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.” Further, the court found that the prior approach to resale price maintenance restraints “hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”

The EU’s continued per se treatment of RPM, by contrast, strongly reflects its “precautionary principle” approach to antitrust. European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, exceedingly unlikely. The U.S. approach, which rests on likelihood rather than mere possibility, is far less likely to condemn beneficial conduct erroneously.

Political Discretion in European Competition Law

EU competition law lacks a coherent analytical framework like the consumer welfare standard that anchors U.S. law. The EU process is instead driven by a number of co-equal—and sometimes mutually exclusive—goals, including industrial policy and the perceived need to counteract foreign state ownership and subsidies. Such a wide array of conflicting aims leaves firms uncertain about how to conduct business. Moreover, the discretion that attends this fluid arrangement of goals yields an even larger problem.

The Microsoft case illustrates this problem well. In Microsoft, the commission could have chosen to base its decision on various potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.”

The commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains, because it determined “consumer choice” among media players was more important:

Another argument relating to reduced transaction costs consists in saying that the economies made by a tied sale of two products saves resources otherwise spent for maintaining a separate distribution system for the second product. These economies would then be passed on to customers who could save costs related to a second purchasing act, including selection and installation of the product. Irrespective of the accuracy of the assumption that distributive efficiency gains are necessarily passed on to consumers, such savings cannot possibly outweigh the distortion of competition in this case. This is because distribution costs in software licensing are insignificant; a copy of a software programme can be duplicated and distributed at no substantial effort. In contrast, the importance of consumer choice and innovation regarding applications such as media players is high. (Commission Decision No. COMP. 37792 (Microsoft)).

It may be true that tying the products in question was unnecessary. But merely dismissing the efficiency argument because distribution costs are near zero is hardly an analytically satisfactory response. There are many more costs involved in creating and distributing complementary software than those associated with hosting and downloading. The commission also simply asserts that consumer choice among some arbitrary number of competing products is necessarily a benefit. This, too, is not necessarily true, and the decision’s implication that any marginal increase in choice is more valuable than any gains from product design or innovation is analytically incoherent.

The Court of First Instance was only too happy to give the commission a pass on this breezy analysis. With little substantive reasoning of its own, the court fully endorsed the commission’s assessment:

As the Commission correctly observes (see paragraph 1130 above), by such an argument Microsoft is in fact claiming that the integration of Windows Media Player in Windows and the marketing of Windows in that form alone lead to the de facto standardisation of the Windows Media Player platform, which has beneficial effects on the market. Although, generally, standardisation may effectively present certain advantages, it cannot be allowed to be imposed unilaterally by an undertaking in a dominant position by means of tying.

The Court further notes that it cannot be ruled out that third parties will not want the de facto standardisation advocated by Microsoft but will prefer it if different platforms continue to compete, on the ground that that will stimulate innovation between the various platforms. (Microsoft Corp. v Commission, 2007)

Pointing to these conflicting effects of Microsoft’s bundling decision, without weighing either, is a weak basis to uphold the commission’s decision that consumer choice outweighs the benefits of standardization. Moreover, actions undertaken by other firms to enhance consumer choice at the expense of standardization are, on these terms, potentially just as problematic. The dividing line becomes solely which theory the commission prefers to pursue.

What such a practice does is vest the commission with immense discretionary power. Any given case sets up a “heads, I win; tails, you lose” situation in which defendants are easily outflanked by a commission that can change the rules of its analysis as it sees fit. Defendants can play only the cards that they are dealt. Accordingly, Microsoft could not successfully challenge a conclusion that its behavior harmed consumers’ choice by arguing that it improved consumer welfare, on net.

By selecting, in this instance, “consumer choice” as the standard to be judged, the commission was able to evade the constraints that might have been imposed by a more robust welfare standard. Thus, the commission can essentially pick and choose the objectives that best serve its interests in each case. This vastly enlarges the scope of potential antitrust liability, while also substantially decreasing the ability of firms to predict when their behavior may be viewed as problematic. It leads to what, in U.S. courts, would be regarded as an untenable risk of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms.

Amazingly enough, at a time when legislative proposals for new antitrust restrictions are rapidly multiplying—see the Competition and Antitrust Law Enforcement Reform Act (CALERA), for example—Congress is simultaneously giving serious consideration to granting antitrust immunity to a price-fixing cartel among members of the news media. This would authorize what the late Justice Antonin Scalia termed “the supreme evil of antitrust: collusion.” What accounts for this bizarre development?

Discussion

The antitrust exemption in question, embodied in the Journalism Competition and Preservation Act of 2021, was introduced March 10 simultaneously in the U.S. House and Senate. The press release announcing the bill’s introduction portrayed it as a “good government” effort to help struggling newspapers in their negotiations with large digital platforms, and thereby strengthen American democracy:

We must enable news organizations to negotiate on a level playing field with the big tech companies if we want to preserve a strong and independent press[.] …

A strong, diverse, free press is critical for any successful democracy. …

Nearly 90 percent of Americans now get news while on a smartphone, computer, or tablet, according to a Pew Research Center survey conducted last year, dwarfing the number of Americans who get news via television, radio, or print media. Facebook and Google now account for the vast majority of online referrals to news sources, with the two companies also enjoying control of a majority of the online advertising market. This digital ad duopoly has directly contributed to layoffs and consolidation in the news industry, particularly for local news.

This legislation would address this imbalance by providing a safe harbor from antitrust laws so publishers can band together to negotiate with large platforms. It provides a 48-month window for companies to negotiate fair terms that would flow subscription and advertising dollars back to publishers, while protecting and preserving Americans’ right to access quality news. These negotiations would strictly benefit Americans and news publishers at-large; not just one or a few publishers.

The Journalism Competition and Preservation Act only allows coordination by news publishers if it (1) directly relates to the quality, accuracy, attribution or branding, and interoperability of news; (2) benefits the entire industry, rather than just a few publishers, and are non-discriminatory to other news publishers; and (3) is directly related to and reasonably necessary for these negotiations.

Lurking behind this public-spirited rhetoric, however, is the specter of special interest rent seeking by powerful media groups, as discussed in an insightful article by Thom Lambert. The newspaper industry is indeed struggling, but that is true overseas as well as in the United States. Competition from internet websites has greatly reduced revenues from classified and non-classified advertising. As Lambert notes, in “light of the challenges the internet has created for their advertising-focused funding model, newspapers have sought to employ the government’s coercive power to increase their revenues.”

In particular, media groups have successfully lobbied various foreign governments to impose rules requiring that Google and Facebook pay newspapers licensing fees to display content. The Australian government went even further by mandating that digital platforms share their advertising revenue with news publishers and give the publishers advance notice of any algorithm changes that could affect page rankings and displays. Media rent-seeking efforts took a different form in the United States, as Lambert explains (citations omitted):

In the United States, news publishers have sought to extract rents from digital platforms by lobbying for an exemption from the antitrust laws. Their efforts culminated in the introduction of the Journalism Competition and Preservation Act of 2018. According to a press release announcing the bill, it would allow “small publishers to band together to negotiate with dominant online platforms to improve the access to and the quality of news online.” In reality, the bill would create a four-year safe harbor for “any print or digital news organization” to jointly negotiate terms of trade with Google and Facebook. It would not apply merely to “small publishers” but would instead immunize collusive conduct by such major conglomerates as Murdoch’s News Corporation, the Walt Disney Corporation, the New York Times, Gannet Company, Bloomberg, Viacom, AT&T, and the Fox Corporation. The bill would permit news organizations to fix prices charged to digital platforms as long as negotiations with the platforms were not limited to price, were not discriminatory toward similarly situated news organizations, and somehow related to “the quality, accuracy, attribution or branding, and interoperability of news.” Given the ease of meeting that test—since news organizations could always claim that higher payments were necessary to ensure journalistic quality—the bill would enable news publishers in the United States to extract rents via collusion rather than via direct government coercion, as in Australia.

The 2021 version of the JCPA is nearly identical to the 2018 version discussed by Thom. The only substantive change is that the 2021 version strengthens the pro-cartel coalition by adding broadcasters (it applies to “any print, broadcast, or news organization”). While the JCPA plainly targets Facebook and Google (“online content distributors” with “not fewer than 1,000,000,000 monthly active users, in the aggregate, on its website”), Microsoft President Brad Smith noted in a March 12 House Antitrust Subcommittee Hearing on the bill that his company would also come under its collective-bargaining terms. Other online distributors could eventually become subject to the proposed law as well.

Purported justifications for the proposal were skillfully skewered by John Yun in a 2019 article on the substantively identical 2018 JCPA. Yun makes several salient points. First, the bill clearly shields price fixing. Second, the claim that all news organizations (in particular, small newspapers) would receive the same benefit from the bill rings hollow. The bill’s requirement that negotiations be “nondiscriminatory as to similarly situated news content creators” (emphasis added) would allow the cartel to negotiate different terms of trade for different “tiers” of organizations. Thus The New York Times and The Washington Post, say, might be part of a top tier getting the most favorable terms of trade. Third, the evidence does not support the assertion that Facebook and Google are monopolistic gateways for news outlets.

Yun concludes by summarizing the case against this legislation (citations omitted):

Put simply, the impact of the bill is to legalize a media cartel. The bill expressly allows the cartel to fix the price and set the terms of trade for all market participants. The clear goal is to transfer surplus from online platforms to news organizations, which will likely result in higher content costs for these platforms, as well as provisions that will stifle the ability to innovate. In turn, this could negatively impact quality for the users of these platforms.

Furthermore, a stated goal of the bill is to promote “quality” news and to “highlight trusted brands.” These are usually antitrust code words for favoring one group, e.g., those that are part of the News Media Alliance, while foreclosing others who are not “similarly situated.” What about the non-discrimination clause? Will it protect non-members from foreclosure? Again, a careful reading of the bill raises serious questions as to whether it will actually offer protection. The bill only ensures that the terms of the negotiations are available to all “similarly situated” news organizations. It is very easy to carve out provisions that would favor top tier members of the media cartel.

Additionally, an unintended consequence of antitrust exemptions can be that it makes the beneficiaries lax by insulating them from market competition and, ultimately, can harm the industry by delaying inevitable and difficult, but necessary, choices. There is evidence that this is what occurred with the Newspaper Preservation Act of 1970, which provided antitrust exemption to geographically proximate newspapers for joint operations.

There are very good reasons why antitrust jurisprudence reserves per se condemnation to the most egregious anticompetitive acts including the formation of cartels. Legislative attempts to circumvent the federal antitrust laws should be reserved solely for the most compelling justifications. There is little evidence that this level of justification has been met in this present circumstance.

Conclusion

Statutory exemptions to the antitrust laws have long been disfavored, and with good reason. As I explained in my 2005 testimony before the Antitrust Modernization Commission, such exemptions tend to foster welfare-reducing output restrictions. Also, empirical research suggests that industries sheltered from competition perform less well than those subject to competitive forces. In short, both economic theory and real-world data support a standard that requires proponents of an exemption to bear the burden of demonstrating that the exemption will benefit consumers.

This conclusion applies most strongly when an exemption would specifically authorize hard-core price fixing, as in the case with the JCPA. What’s more, the bill’s proponents have not borne the burden of justifying their pro-cartel proposal in economic welfare terms—quite the opposite. Lambert’s analysis exposes this legislation as the product of special interest rent seeking that has nothing to do with consumer welfare. And Yun’s evaluation of the bill clarifies that, not only would the JCPA foster harmful collusive pricing, but it would also harm its beneficiaries by allowing them to avoid taking steps to modernize and render themselves more efficient competitors.

In sum, though the JCPA claims to fly a “public interest” flag, it is just another private-interest bill, promoted by well-organized rent seekers, that would harm consumer welfare and undermine innovation.

Apple’s legal team will be relieved that “you reap what you sow” is just a proverb. After a long-running antitrust battle against Qualcomm unsurprisingly ended in failure, Apple now faces antitrust accusations of its own (most notably from Epic Games). Somewhat paradoxically, this turn of events might cause Apple to see its previous defeat in a new light. Indeed, the well-established antitrust principles that scuppered Apple’s challenge against Qualcomm will now be the rock upon which it builds its legal defense.

But while Apple’s reversal of fortunes might seem anecdotal, it neatly illustrates a fundamental – and often overlooked – principle of antitrust policy: Antitrust law is about maximizing consumer welfare. Accordingly, the allocation of surplus between two companies is only incidentally relevant to antitrust proceedings, and it certainly is not a goal in and of itself. In other words, antitrust law is not about protecting David from Goliath.

Jockeying over the distribution of surplus

Or at least that is the theory. In practice, however, most antitrust cases are but small parts of much wider battles in which corporations use courts and regulators to jockey for market position and/or tilt the distribution of surplus in their favor. The Microsoft competition suits brought by the DOJ and the European Commission partly originated from complaints, and lobbying, by Sun Microsystems, Novell, and Netscape. Likewise, the European Commission’s case against Google was prompted by accusations from Microsoft and Oracle, among others. The European Intel case was initiated following a complaint by AMD. The list goes on.

The last couple of years have witnessed a proliferation of antitrust suits that are emblematic of this type of power tussle. For instance, Apple has been notoriously industrious in using the court system to lower the royalties that it pays to Qualcomm for LTE chips. One of the focal points of Apple’s discontent was Qualcomm’s policy of basing royalties on the end-price of devices (Qualcomm charged iPhone manufacturers a 5% royalty rate on their handset sales – and Apple received further rebates):

“The whole idea of a percentage of the cost of the phone didn’t make sense to us,” [Apple COO Jeff Williams] said. “It struck at our very core of fairness. At the time we were making something really really different.”

This pricing dispute not only gave rise to high-profile court cases, it also led Apple to lobby standards development organizations (“SDOs”) in a partly successful attempt to make them amend their patent policies so as to prevent this type of pricing.

However, in a highly ironic turn of events, Apple now finds itself on the receiving end of strikingly similar allegations. At issue is the 30% commission that Apple charges for in-app purchases on the iPhone and iPad. These “high” commissions led several companies to lodge complaints with competition authorities (Spotify and Facebook, in the EU) and to file antitrust suits against Apple (Epic Games, in the US).

Of course, these complaints are couched in more sophisticated, and antitrust-relevant, reasoning. But that doesn’t alter the fact that these disputes are ultimately driven by firms trying to tilt the allocation of surplus in their favor (for a more detailed explanation, see Apple and Qualcomm).

Pushback from courts: The Qualcomm case

Against this backdrop, a string of recent cases sends a clear message to would-be plaintiffs: antitrust courts will not be drawn into rent allocation disputes that have no bearing on consumer welfare. 

The best example of this judicial trend is Qualcomm’s victory before the United States Court of Appeals for the Ninth Circuit. The case centered on the royalties that Qualcomm charged to OEMs for its Standard Essential Patents (SEPs). Both the district court and the FTC found that Qualcomm had deployed a series of tactics (rebates, refusals to deal, etc.) that enabled it to circumvent its FRAND pledges.

However, the Court of Appeals was not convinced. It found neither consumer harm nor any cognizable antitrust infringement. Instead, it held that the dispute at hand was essentially a matter of contract law:

To the extent Qualcomm has breached any of its FRAND commitments, a conclusion we need not and do not reach, the remedy for such a breach lies in contract and patent law. 

This is not surprising. From the outset, numerous critics pointed out that the case lay well beyond the narrow confines of antitrust law. The scathing dissenting statement written by Commissioner Maureen Ohlhausen is revealing:

[I]n the Commission’s 2-1 decision to sue Qualcomm, I face an extraordinary situation: an enforcement action based on a flawed legal theory (including a standalone Section 5 count) that lacks economic and evidentiary support, that was brought on the eve of a new presidential administration, and that, by its mere issuance, will undermine U.S. intellectual property rights in Asia and worldwide. These extreme circumstances compel me to voice my objections. 

In reaching its conclusion, the Court notably rejected the notion that SEP royalties should be systematically based upon the “Smallest Saleable Patent Practicing Unit” (or SSPPU):

Even if we accept that the modem chip in a cellphone is the cellphone’s SSPPU, the district court’s analysis is still fundamentally flawed. No court has held that the SSPPU concept is a per se rule for “reasonable royalty” calculations; instead, the concept is used as a tool in jury cases to minimize potential jury confusion when the jury is weighing complex expert testimony about patent damages.

Similarly, it saw no objection to Qualcomm licensing its technology at the OEM level (rather than the component level):

Qualcomm’s rationale for “switching” to OEM-level licensing was not “to sacrifice short-term benefits in order to obtain higher profits in the long run from the exclusion of competition,” the second element of the Aspen Skiing exception. Aerotec Int’l, 836 F.3d at 1184 (internal quotation marks and citation omitted). Instead, Qualcomm responded to the change in patent-exhaustion law by choosing the path that was “far more lucrative,” both in the short term and the long term, regardless of any impacts on competition. 

Finally, the Court concluded that a firm breaching its FRAND pledges did not automatically amount to anticompetitive conduct: 

We decline to adopt a theory of antitrust liability that would presume anticompetitive conduct any time a company could not prove that the “fair value” of its SEP portfolios corresponds to the prices the market appears willing to pay for those SEPs in the form of licensing royalty rates.

Taken together, these findings paint a very clear picture. The Qualcomm Court repeatedly rejected the radical idea that US antitrust law should concern itself with the prices charged by monopolists — as opposed to practices that allow firms to illegally acquire or maintain a monopoly position. The words of Learned Hand and those of Antonin Scalia (respectively, below) loom large:

The successful competitor, having been urged to compete, must not be turned upon when he wins. 

And,

To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.

Other courts (both in the US and abroad) have reached similar conclusions

For instance, a district court in Texas dismissed a suit brought by Continental Automotive Systems (which supplies electronic systems to the automotive industry) against a group of SEP holders. 

Continental challenged the patent holders’ decision to license their technology at the vehicle rather than component level (the allegation is very similar to the FTC’s complaint that Qualcomm licensed its SEPs at the OEM, rather than chipset level). However, following a forceful intervention by the DOJ, the Court ultimately held that the facts alleged by Continental were not indicative of antitrust injury. It thus dismissed the case.

Likewise, within weeks of the Qualcomm and Continental decisions, the UK Supreme Court also ruled in favor of SEP holders. In its Unwired Planet ruling, the Court concluded that discriminatory licenses did not automatically infringe competition law (even though they might breach a firm’s contractual obligations):

[I]t cannot be said that there is any general presumption that differential pricing for licensees is problematic in terms of the public or private interests at stake.

In reaching this conclusion, the UK Supreme Court emphasized that the determination of whether licenses were FRAND, or not, was first and foremost a matter of contract law. In the case at hand, the most important guide to making this determination was the internal rules of the relevant SDO (as opposed to competition case law):

Since price discrimination is the norm as a matter of licensing practice and may promote objectives which the ETSI regime is intended to promote (such as innovation and consumer welfare), it would have required far clearer language in the ETSI FRAND undertaking to indicate an intention to impose the more strict, “hard-edged” non-discrimination obligation for which Huawei contends. Further, in view of the prevalence of competition laws in the major economies around the world, it is to be expected that any anti-competitive effects from differential pricing would be most appropriately addressed by those laws

All of this ultimately led the Court to rule in favor of Unwired Planet, thus dismissing Huawei’s claims that it had infringed competition law by breaching its FRAND pledges. 

In short, courts and antitrust authorities on both sides of the Atlantic have repeatedly, and unambiguously, concluded that pricing disputes (albeit in the specific context of technological standards) are generally a matter of contract law. Antitrust/competition law intercedes only when unfair/excessive/discriminatory prices are both caused by anticompetitive behavior and result in anticompetitive injury.

Apple’s Loss Is… Apple’s Gain

Readers might wonder how the above cases relate to Apple’s App Store. But, on closer inspection, the parallels are numerous. As explained above, courts have repeatedly stressed that antitrust enforcement should not concern itself with the allocation of surplus between commercial partners. Yet that is precisely what Epic Games’ suit against Apple is all about.

Indeed, Epic’s central claim is not that it is somehow foreclosed from Apple’s App Store (for example, because Apple might have agreed to exclusively distribute the games of one of Epic’s rivals). Instead, its objections all boil down to the fact that it would like to access Apple’s store on more favorable terms:

Apple’s conduct denies developers the choice of how best to distribute their apps. Developers are barred from reaching over one billion iOS users unless they go through Apple’s App Store, and on Apple’s terms. […]

Thus, developers are dependent on Apple’s noblesse oblige, as Apple may deny access to the App Store, change the terms of access, or alter the tax it imposes on developers, all in its sole discretion and on the commercially devastating threat of the developer losing access to the entire iOS userbase. […]

By imposing its 30% tax, Apple necessarily forces developers to suffer lower profits, reduce the quantity or quality of their apps, raise prices to consumers, or some combination of the three.

And the parallels with the Qualcomm litigation do not stop there. Epic is effectively asking courts to make Apple monetize its platform at a different level than the one that it chose to maximize its profits (no more monetization at the app store level). Similarly, Epic Games omits any suggestion of profit sacrifice on the part of Apple — even though it is a critical element of most unilateral conduct theories of harm. Finally, Epic is challenging conduct that is both the industry norm and emerged in a highly competitive setting.

In short, all of Epic’s allegations are about monopoly prices, not monopoly maintenance or monopolization. Accordingly, just as the SEP cases discussed above were plainly beyond the outer bounds of antitrust enforcement (something that the DOJ repeatedly stressed with regard to the Qualcomm case), so too is the current wave of antitrust litigation against Apple. When all is said and done, Apple might thus be relieved that Qualcomm was victorious in their antitrust confrontation. Indeed, the legal principles that caused its demise against Qualcomm are precisely the ones that will, likely, enable it to prevail against Epic Games.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Jan Rybnicek (Counsel at Freshfields Bruckhaus Deringer US LLP in Washington, D.C. and Senior Fellow and Adjunct Professor at the Global Antitrust Institute at the Antonin Scalia Law School at George Mason University).]

In an area where it may seem that agreement is rare, there is near universal agreement on the benefits of withdrawing the DOJ’s 1984 Non-Horizontal Merger Guidelines. The 1984 Guidelines do not reflect current agency thinking on vertical mergers and are not relied upon by businesses or practitioners to anticipate how the agencies may review a vertical transaction. The more difficult question is whether the agencies should now replace the 1984 Guidelines and, if so, what the modern guidelines should say.

There are several important reasons that counsel against issuing new vertical merger guidelines (VMGs). Most significantly, we likely are better off without new VMGs because they invariably will (1) send the wrong message to agency staff about the relative importance of vertical merger enforcement compared to other agency priorities, (2) create new sufficient conditions that tend to trigger wasteful investigations and erroneous enforcement actions, and (3) add very little, if anything, to our understanding of when the agencies will or will not pursue an in-depth investigation or enforcement action of a vertical merger.

Unfortunately, these problems are magnified rather than mitigated by the draft VMGs. But it is unlikely at this point that the agencies will hit the brakes and not issue new VMGs. The agencies therefore should make several key changes that would help prevent the final VMGs from causing more harm than good.

What is the Purpose of Agency Guidelines? 

Before we can have a meaningful conversation about whether the draft VMGs are good or bad for the world, or how they can be improved to ensure they contribute positively to antitrust law, it is important to identify, and have a shared understanding about, the purpose of guidelines and their potential benefits.

In general, I am supportive of guidelines. In fact, I helped urge the FTC to issue its 2015 Policy Statement articulating the agency’s enforcement principles under its Section 5 Unfair Methods of Competition authority. As I have written before, guidelines can be useful if they accomplish two important goals: (1) provide insight and transparency to businesses and practitioners about the agencies’ analytical approach to an issue and (2) offer agency staff direction as to agency priorities while cabining the agencies’ broad discretion by tethering investigational or enforcement decisions to those guidelines. An additional benefit may be that the guidelines also could prove useful to courts interpreting or applying the antitrust laws.

Transparency is important for the obvious reason that it allows the business community and practitioners to know how the agencies will apply the antitrust laws and thereby allows them to evaluate if a specific merger or business arrangement is likely to receive scrutiny. But guidelines are not only consumed by the public. They also are used by agency staff. As a result, guidelines invariably influence how staff approaches a matter, including whether to open an investigation, how in-depth that investigation is, and whether to recommend an enforcement action. Lastly, for guidelines to be meaningful, they also must accurately reflect agency practice, which requires the agencies’ analysis to be tethered to an analytical framework.

As discussed below, there are many reasons to doubt that the draft VMGs can deliver on these goals.

Draft VMGs Will Lead to Bad Enforcement Policy While Providing Little Benefit

A chief concern with VMGs is that they will inadvertently usher in a new enforcement regime that treats horizontal and vertical mergers as co-equal enforcement priorities despite the mountain of evidence, not to mention simple logic, that mergers among competitors are a significantly greater threat to competition than are vertical mergers. The draft VMGs exacerbate rather than mitigate this risk by creating a false equivalence between vertical and horizontal merger enforcement and by establishing new minimum conditions that are likely to lead the agencies to pursue wasteful investigations of vertical transactions. And the draft VMGs do all this without meaningfully advancing our understanding of the conditions under which the agencies are likely to pursue investigations and enforcement against vertical mergers.

1. No Recognition of the Differences Between Horizontal and Vertical Mergers

One striking feature of the draft VMGs is that they fail to contextualize vertical mergers in the broader antitrust landscape. As a result, it is easy to walk away from the draft VMGs with the impression that vertical mergers are as likely to lead to anticompetitive harm as are horizontal mergers. That is a position not supported by the economic evidence or by logic. It is of course true that vertical mergers can result in competitive harm; that is not a seriously contested point. But it is important to acknowledge, and provide background for, why that harm is significantly less likely than in horizontal cases. That difference should inform agency enforcement priorities. Perhaps due to this lack of framing, the draft VMGs tend to speak more about when the agencies may identify competitive harm than about when they will not.

The draft VMGs would benefit greatly from a more comprehensive approach to understanding vertical merger transactions. The agencies should add language explaining that, whereas a consensus exists that eliminating a direct competitor always tends to increase the risk of unilateral effects (although often trivially), there is no such consensus that harm will result from the combination of complementary assets. In fact, the current evidence shows such vertical transactions tend to be procompetitive. Absent such language, the VMGs will over time misguidedly focus more agency resources into investigating vertical mergers where there is unlikely to be harm (with inevitably more enforcement errors) and less time on more important priorities, such as pursuing enforcement of anticompetitive horizontal transactions.

2. The 20% Safe Harbor Provides No Harbor and Will Become a Sufficient Condition

The draft VMGs attempt to provide businesses with guidance about the types of transactions the agencies will not investigate by articulating a market share safe harbor. But that safe harbor (1) does not appear to be grounded in any evidence, (2) is surprisingly low in comparison to the EU vertical merger guidelines, and (3) is likely to become a sufficient condition to trigger an in-depth investigation or enforcement action.

The draft VMGs state:

The Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20%, and the related product is used in less than 20% of the relevant market.

But in the very next sentence the draft VMGs render the safe harbor virtually meaningless, stating:

In some circumstance, mergers with shares below the threshold can give rise to competitive concerns.

This caveat comes despite the fact that the 20% threshold is low compared to other jurisdictions. Indeed, the EU’s guidelines create a 30% safe harbor. Nor is it clear what the basis is for the 20% threshold, either in economics or law. While it is important for the agencies to remain flexible, too much flexibility will render the draft VMGs meaningless. The draft VMGs should be less equivocal about the types of mergers that will not receive significant scrutiny and are unlikely to be the subject of enforcement action.

What may be most troubling about the market share safe harbor is the likelihood that it will establish general enforcement norms that did not previously exist. It is likely that agency staff will soon interpret (despite language stating otherwise) the 20% market share as the minimum condition necessary to open an in-depth investigation and to pursue an enforcement action. We have seen other guidelines’ tools have similar effects on agency analysis before (see, e.g., GUPPIs). This risk is only exacerbated where the safe harbor is not a true safe harbor that provides businesses with clarity on enforcement priorities.

3. Requirements for Proving EDM and Efficiencies Fail to Recognize the Vertical Merger Context

The draft VMGs minimize the significant role of the elimination of double marginalization (EDM) and of efficiencies in vertical mergers. The agencies frequently take a skeptical approach to efficiencies in the context of horizontal mergers, and it is well-known that the hurdle to substantiate efficiencies is difficult, if not impossible, to meet. The draft VMGs oddly continue this skeptical approach by specifically referencing the standards discussed in the horizontal merger guidelines when discussing EDM and vertical merger efficiencies. The draft VMGs do not recognize that the combination of complementary products is inherently more likely to generate efficiencies than is a horizontal merger between competitors. The draft VMGs also oddly discuss EDM and efficiencies in separate sections and spend a trivial amount of time on what is the core motivating feature of vertical mergers. Even the discussion of EDM is as much about where there may be exceptions to EDM as it is about making clear the uncontroversial view that EDM is frequent in vertical transactions. Without acknowledging the inherent nature of EDM and efficiencies more generally, the final VMGs will send the wrong message that vertical merger enforcement should be on par with horizontal merger enforcement.

4. No New Insights into How Agencies Will Assess Vertical Mergers

Some might argue that the costs associated with the draft VMGs nevertheless are tolerable because the guidelines offer significant benefits that far outweigh their costs. But that is not the case here. The draft VMGs provide no new information about how the agencies will review vertical merger transactions and under what circumstances they are likely to seek enforcement actions. And that is because it is a difficult if not impossible task to identify any such general guiding principles. Indeed, unlike in the context of horizontal transactions where an increase in market power informs our thinking about the likely competitive effects, greater market power in the context of a vertical transaction that combines complements creates downward pricing pressure that often will dominate any potential competitive harm.

The draft VMGs do what they can, though, which is to describe in general terms several theories of harm. But the benefits from that exercise are modest and do not outweigh the significant risks discussed above. The theories described are neither novel nor unknown to the public today. Nor do the draft VMGs explain any significant new thinking on vertical mergers, likely because there has been none that can provide insight into general enforcement principles. The draft VMGs also do not clarify changes to statutory text (because it has not changed) or otherwise clarify judicial rulings or past enforcement actions. As a result, the draft VMGs do not offer sufficient benefits that would outweigh their substantial cost.

Conclusion

Despite these concerns, it is worth acknowledging the work the FTC and DOJ have put into preparing the draft VMGs. It is no small task to articulate a unified position between the two agencies on an issue such as vertical merger enforcement where so many have such strong views. To the agencies’ credit, the VMGs are restrained in not including novel or more adventurous theories of harm. I anticipate the DOJ and FTC will engage with commentators and take the feedback seriously as they work to improve the final VMGs.

This is the first in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision. It draws on research from a soon-to-be published ICLE white paper.

The European Commission’s recent Google Android decision will surely go down as one of the most important competition proceedings of the past decade. And yet, an in-depth reading of the 328-page decision should leave attentive readers with a bitter taste.

One of the Commission’s most significant findings is that the Android operating system and Apple’s iOS are not in the same relevant market, along with the related conclusion that Apple’s App Store and Google Play are also in separate markets.

This blog post points to a series of flaws that undermine the Commission’s reasoning on this point. As a result, the Commission’s claim that Google and Apple operate in separate markets is mostly unsupported.

1. Everyone but the European Commission thinks that iOS competes with Android

Surely the assertion that the two predominant smartphone ecosystems in Europe don’t compete with each other will come as a surprise to… anyone paying attention: 

Apple 10-K:

The Company believes the availability of third-party software applications and services for its products depends in part on the developers’ perception and analysis of the relative benefits of developing, maintaining and upgrading such software and services for the Company’s products compared to competitors’ platforms, such as Android for smartphones and tablets and Windows for personal computers.

Google 10-K:

We face competition from: Companies that design, manufacture, and market consumer electronics products, including businesses that have developed proprietary platforms.

This leads to a critical question: Why did the Commission choose to depart from the instinctive conclusion that Google and Apple compete vigorously against each other in the smartphone and mobile operating system market? 

As explained below, its justifications for doing so were deeply flawed.

2. It does not matter that OEMs cannot license iOS (or the App Store)

One of the main reasons why the Commission chose to exclude Apple from the relevant market is that OEMs cannot license Apple’s iOS or its App Store.

But is it really possible to infer that Google and Apple do not compete against each other because their products are not substitutes from OEMs’ point of view? 

The answer to this question is likely no.

Relevant markets, and market shares, are merely a proxy for market power (which is the appropriate baseline upon which to build a competition investigation). As Louis Kaplow puts it:

[T]he entire rationale for the market definition process is to enable an inference about market power.

If there is a competitive market for Android and Apple smartphones, then it is somewhat immaterial that Google is the only firm to successfully offer a licensable mobile operating system (as opposed to Apple and Blackberry’s “closed” alternatives).

By exercising its “power” against OEMs by, for instance, degrading the quality of Android, Google would, by the same token, weaken its competitive position against Apple. Google’s competition with Apple in the smartphone market thus constrains Google’s behavior and limits its market power in Android-specific aftermarkets (on this topic, see Borenstein et al., and Klein).

This is not to say that Apple’s iOS (and App Store) is, or is not, in the same relevant market as Google Android (and Google Play). But the fact that OEMs cannot license iOS or the App Store is mostly immaterial for market definition purposes.

3. Google would find itself in a more “competitive” market if it decided to stop licensing the Android OS

The Commission’s reasoning also leads to illogical outcomes from a policy standpoint. 

Google could suddenly find itself in a more “competitive” market if it decided to stop licensing the Android OS and operated a closed platform (like Apple does). The direct purchasers of its products – consumers – would then be free to switch between Apple and Google’s products.

As a result, an act that has no obvious effect on actual market power — and that could have a distinctly negative effect on consumers — could nevertheless significantly alter the outcome of competition proceedings on the Commission’s theory. 

One potential consequence is that firms might decide to close their platforms (or refuse to open them in the first place) in order to avoid competition scrutiny (because maintaining a closed platform might effectively lead competition authorities to place them within a wider relevant market). This might ultimately reduce product differentiation among mobile platforms (due to the disappearance of open ecosystems) – the exact opposite of what the Commission sought to achieve with its decision.

This is, among other things, what Antonin Scalia objected to in his Eastman Kodak dissent: 

It is quite simply anomalous that a manufacturer functioning in a competitive equipment market should be exempt from the per se rule when it bundles equipment with parts and service, but not when it bundles parts with service [when the manufacturer has a high share of the “market” for its machines’ spare parts]. This vast difference in the treatment of what will ordinarily be economically similar phenomena is alone enough to call today’s decision into question.

4. Market shares are a poor proxy for market power, especially in narrowly defined markets

Finally, the problem with the Commission’s decision is not so much that it chose to exclude Apple from the relevant markets, but that it then cited the resulting market shares as evidence of Google’s alleged dominance:

(440) Google holds a dominant position in the worldwide market (excluding China) for the licensing of smart mobile OSs since 2011. This conclusion is based on: 

(1) the market shares of Google and competing developers of licensable smart mobile OSs […]

In doing so, the Commission ignored one of the critical findings of the law & economics literature on market definition and market power: Although defining a narrow relevant market may not itself be problematic, the market shares thus adduced provide little information about a firm’s actual market power. 

For instance, Richard Posner and William Landes have argued that:

If instead the market were defined narrowly, the firm’s market share would be larger but the effect on market power would be offset by the higher market elasticity of demand; when fewer substitutes are included in the market, substitution of products outside of the market is easier. […]

If all the submarket approach signifies is willingness in appropriate cases to call a narrowly defined market a relevant market for antitrust purposes, it is unobjectionable – so long as appropriately less weight is given to market shares computed in such a market.

Likewise, Louis Kaplow observes that:

In choosing between a narrower and a broader market (where, as mentioned, we are supposing that the truth lies somewhere in between), one would ask whether the inference from the larger market share in the narrower market overstates market power by more than the inference from the smaller market share in the broader market understates market power. If the lesser error lies with the former choice, then the narrower market is the relevant market; if the latter minimizes error, then the broader market is best.

The Commission failed to heed these important findings.
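Landes and Posner’s point can be illustrated with their well-known formula, under which a firm’s price-cost margin (the Lerner index) equals its market share divided by the sum of the market elasticity of demand and the fringe supply elasticity weighted by rivals’ share. The numbers below are purely hypothetical, chosen only to show how a large share in a narrowly defined, high-elasticity market can imply *less* pricing power than a modest share in a broad, low-elasticity one:

```python
def lerner_index(share, mkt_elasticity, fringe_supply_elasticity):
    """Landes-Posner Lerner index: L = S / (E_d + E_s * (1 - S)),
    where S is the firm's share, E_d the market elasticity of demand,
    and E_s the elasticity of supply of fringe rivals."""
    return share / (mkt_elasticity + fringe_supply_elasticity * (1 - share))

# Broad market: modest share, but demand is relatively inelastic
# because most substitutes are already inside the market.
broad = lerner_index(share=0.40, mkt_elasticity=1.0, fringe_supply_elasticity=1.0)

# Narrow market: much larger share, but the excluded substitutes make
# market demand far more elastic.
narrow = lerner_index(share=0.90, mkt_elasticity=4.0, fringe_supply_elasticity=1.0)

print(round(broad, 3))   # 0.25
print(round(narrow, 3))  # 0.22
```

Despite a 90% share in the narrow market, the implied markup is slightly *lower* than with a 40% share in the broad one, which is precisely why market shares computed in a narrow market deserve "appropriately less weight."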

5. Conclusion

The upshot is that Apple should not have been automatically excluded from the relevant market. 

To be clear, the Commission did discuss this competition from Apple later in the decision. It also asserted that its findings would hold even if Apple were included in the OS and App Store markets, because Android’s share of devices sold would have ranged from 45% to 79%, depending on the year (although this ignores other potential metrics, such as the value of devices sold or Google’s share of advertising revenue).

However, by gerrymandering the market definition (which European case law likely permitted it to do), the Commission ensured that Google would face an uphill battle, starting from a very high market share and thus a strong presumption of dominance. 

Moreover, that the Commission might have reached the same result under a more accurate market definition is no excuse for adopting a faulty one and resting its case on it. The faulty market definition underpins the Commission’s entire analysis and is far from a “harmless error.”

I shall discuss the consequences of this error in an upcoming blog post. Stay tuned.

[TOTM: The following is the fifth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.

This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]

[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]

Introduction

In a recent article Joe Kattan and Tim Muris (K&M) criticize our article on the predictive power of bargaining models in antitrust, in which we used two recent applications to explore implications for uses of bargaining models in courts and antitrust agencies moving forward.  Like other theoretical models used to predict competitive effects, complex bargaining models require courts and agencies rigorously to test their predictions against data from the real-world markets and institutions to which they are being applied.  Where the “real-world evidence,” as Judge Leon described such data in AT&T/Time Warner, is inconsistent with the predictions of a complex bargaining model, then the tribunal should reject the model rather than reality.

K&M, who represent Intel Corporation in connection with the FTC v. Qualcomm case now pending in the Northern District of California, focus exclusively upon, and take particular issue with, one aspect of our prior article:  We argued that, as in AT&T/Time Warner, the market realities at issue in FTC v. Qualcomm are inconsistent with the use of Dr. Carl Shapiro’s bargaining model to predict competitive effects in the relevant market.  K&M—no doubt confident in their superior knowledge of the underlying facts due to their representation in the matter—criticize our analysis for our purported failure to get our hands sufficiently dirty with the facts.  They criticize our broader analysis of bargaining models and their application for our failure to discuss specific pieces of evidence presented at trial, and offer up several quotations from Qualcomm’s customers as support for Shapiro’s economic analysis.  K&M concede that, as we argue, the antitrust laws should not condemn a business practice in the absence of robust economic evidence of actual or likely harm to competition; yet, they do not see any conflict between that concession and their position that the FTC need not, through its expert, quantify the royalty surcharge imposed by Qualcomm because the “exact size of the overcharge was not relevant to the issue of Qualcomm’s liability.” [Kattan and Muris miss the point that within the context of economic modeling, the failure to identify the magnitude of an effect with any certainty when data are available, including whether the effect is statistically different than zero, calls into question the model’s robustness more generally.]

Though our prior article was a broad one, not limited to FTC v. Qualcomm or intended to cover record evidence in detail, we welcome K&M’s critique and are happy to accept their invitation to engage further on the facts of that particular case.  We agree that accounting for market realities is very important when complex economic models are at play.  Unfortunately, K&M’s position that the evidence “supports Shapiro’s testimony overwhelmingly” ignores the sound empirical evidence employed by Dr. Aviv Nevo during trial and has not aged well in light of the internal Apple documents made public in Qualcomm’s Opening Statement following the companies’ decision to settle the case, which Apple had initiated in January 2017.

Qualcomm’s Opening Statement in the Apple litigation revealed a number of new facts that are problematic, to say the least, for K&M’s position and even more troublesome for Shapiro’s model and the FTC’s case.  Of course, as counsel to an interested party in the FTC case, it is entirely possible that K&M were aware of the internal Apple documents cited in Qualcomm’s Opening Statement (or similar documents) and simply disagree about their significance.  On the other hand, it is quite clear the Department of Justice Antitrust Division found them to be significantly damaging; it took the rare step of filing a Statement of Interest of the United States with the district court citing the documents and imploring the court to call for additional briefing and hold a hearing on issues related to a remedy in the event that it finds Qualcomm liable on any of the FTC’s claims. The internal Apple documents cited in Qualcomm’s Opening Statement leave no doubt as to several critical market realities that call into question the FTC’s theory of harm and Shapiro’s attempts to substantiate it.

(For more on the implications of these documents, see Geoffrey Manne’s post in this series, here).

First, the documents laying out Apple’s litigation strategy clearly establish that it has a high regard for Qualcomm’s technology and patent portfolio and that Apple strategized for several years about how to reduce its net royalties and to hurt Qualcomm financially. 

Second, the documents undermine Apple’s public complaints about Qualcomm and call into question the validity of the underlying theory of harm in the FTC’s case.  In particular, the documents plainly debunk Apple’s claims that Qualcomm’s patents weakened over time as a result of a decline in the quality of the technology and that Qualcomm devised an anticompetitive strategy in order to extract value from a weakening portfolio.  The documents illustrate that in fact, Apple adopted a deliberate strategy of trying to manipulate the value of Qualcomm’s portfolio.  The company planned to “creat[e] evidence” by leveraging its purchasing power to methodically license less expensive patents in hope of making Qualcomm’s royalties appear artificially inflated. In other words, if Apple’s made-for-litigation position were correct, then it would be only because of Apple’s attempt to manipulate and devalue Qualcomm’s patent portfolio, not because there had been any real change in its value. 

Third, the documents directly refute some of the arguments K&M put forth in their critique of our prior article, in which we invoked Dr. Nevo’s empirical analysis of royalty rates over time as important evidence of historical facts that contradict Dr. Shapiro’s model.  For example, K&M attempt to discredit Nevo’s analysis by claiming he did not control for changes in the strength of Qualcomm’s patent portfolio which, they claim, had weakened over time. According to internal Apple documents, however, “Qualcomm holds a stronger position in . . . , and particularly with respect to cellular and Wi-Fi SEPs” than do Huawei, Nokia, Ericsson, IDCC, and Apple. Another document states that “Qualcomm is widely considered the owner of the strongest patent portfolio for essential and relevant patents for wireless standards.” Indeed, Apple’s documents show that Apple sought artificially to “devalue SEPs” in the industry by “build[ing] favorable, arms-length ‘comp’ licenses” in an attempt to reduce what FRAND means. The ultimate goal of this pursuit was stated frankly by Apple: To “reduce Apple’s net royalty to Qualcomm” despite conceding that Qualcomm’s chips “engineering wise . . . have been the best.”

As new facts relevant to the FTC’s case and contrary to its theory of harm come to light, it is important to re-emphasize the fundamental point of our prior article: Model predictions that are inconsistent with actual market evidence should give fact finders serious pause before accepting the results as reliable.  This advice is particularly salient in a case like FTC v. Qualcomm, where intellectual property and innovation are critical components of the industry and its competitiveness, because condemning behavior that is not truly anticompetitive may have serious, unintended consequences. (See Douglas H. Ginsburg & Joshua D. Wright, Dynamic Analysis and the Limits of Antitrust Institutions, 78 Antitrust L.J. 1 (2012); Geoffrey A. Manne & Joshua D. Wright, Innovation and the Limits of Antitrust, 6 J. Competition L. & Econ. 153 (2010)).

The serious consequence of a false positive, that is, the erroneous condemnation of a procompetitive or competitively neutral business practice, is undoubtedly what caused the Antitrust Division to file its Statement of Interest in the FTC’s case against Qualcomm.  That Statement correctly highlights the Apple documents as support for the Government’s concern that “an overly broad remedy in this case could reduce competition and innovation in markets for 5G technology and downstream applications that rely on that technology.”

In this reply, we examine closely the market realities that conflict with and hence undermine both Dr. Shapiro’s bargaining model and the FTC’s theory of harm in its case against Qualcomm.  We believe the “large body of evidence” offered by K&M supporting Shapiro’s theoretical analysis is insufficient to sustain his conclusions under standard antitrust analysis, including the requirement that a plaintiff alleging monopolization or attempted monopolization provide evidence of actual or likely anticompetitive effects.  We will also discuss the implications of the newly-public internal Apple documents for the FTC’s case, which remains pending at the time of this writing, and for future government investigations involving allegedly anticompetitive licensing of intellectual property.

I. Kattan and Muris Rely Upon Inconsequential Testimony and Mischaracterize Dr. Nevo’s Empirical Analysis

K&M march through a series of statements from Qualcomm’s customers asserting that the threat of Qualcomm discontinuing the supply of modem chips forced them to agree to unreasonable licensing demands.  This testimony, however, is reminiscent of Dr. Shapiro’s testimony in AT&T/Time Warner concerning the threat of a long-term blackout of CNN and other Turner channels:  Qualcomm has never cut off any customer’s supply of chips.  The assertion that companies negotiating with Qualcomm either had to “agree to the license or basically go out of business” ignores the reality that even if Qualcomm discontinued supplying chips to a customer, the customer could obtain chips from one of four rival sources.  This was not a theoretical possibility.  Indeed, Apple has been sourcing chips from Intel since 2016 and made the decision to switch to Intel specifically in order, in its own words, to exert “commercial pressure against Qualcomm.”

Further, as Dr. Nevo pointed out at trial, SEP license agreements are typically long term (e.g., 10 or 15 year agreements) and are negotiated far less frequently than chip prices, which are typically negotiated annually.  In other words, Qualcomm’s royalty rate is set prior to and independent of chip sale negotiations. 

K&M raise a number of theoretical objections to Nevo’s empirical analysis.  For example, K&M accuse Nevo of “cherry picking” the licenses he included in his empirical analysis to show that royalty rates remained constant over time, stating that he “excluded from consideration any license that had non-standard terms.” They mischaracterize Nevo’s testimony on this point.  Nevo excluded from his analysis agreements that, according to the FTC’s own theory of harm, would be unaffected (e.g., agreements that were signed subject to government supervision or agreements that have substantially different risk splitting provisions).  In any event, Nevo testified that modifying his analysis to account for Shapiro’s criticism regarding the excluded agreements would have no material effect on his conclusions.  To our knowledge, Nevo’s testimony is the only record evidence providing any empirical analysis of the effects of Qualcomm’s licensing agreements.

As previously mentioned, K&M also claim that Dr. Nevo’s analysis failed to account for the alleged weakening of Qualcomm’s patent portfolio over time.  Apple’s internal documents, however, are fatal to that claim.  K&M also point to the failure to control for differences among customers and changes in the composition of handsets over time as critical errors in Nevo’s analysis.  Their assertion that Nevo should have controlled for differences among customers is puzzling.  They do not elaborate upon that criticism, but they seem to believe different customers are entitled to different FRAND rates for the same license.  But Qualcomm’s standard practice—due to the enormous size of its patent portfolio—is and has always been to charge all licensees the same rate for the entire portfolio.

As to changes in the composition of handsets over time, no doubt a smartphone today has many more features than a first-generation handset that only made and received calls; those new features, however, would be meaningless without Qualcomm’s SEPs, which are implemented by mobile chips that enable cellular communication.  One must wonder why Qualcomm should have reduced the royalty rate on licenses for patents that are just as fundamental to the functioning of mobile phones today as they were to the functioning of a first-generation handset.  K&M ignore the fundamental importance of Qualcomm’s SEPs in claiming that royalty rates should have declined along with the quality-adjusted, declining prices of mobile phones.  They also, conveniently, ignore the evidence that the industry has been characterized by increasing output and quality—increases which can certainly be attributed at least in part to Qualcomm’s chips being “engineering wise . . . the best.”

II. Apple’s Internal Documents Eviscerate the FTC’s Theory of Harm

The FTC’s theory of harm is premised upon Qualcomm’s allegedly charging a supra-FRAND rate for its SEPs (the “royalty surcharge”), which squeezes the margins of OEMs and consequently prevents rival chipset suppliers from obtaining a sufficient return when negotiating with those OEMs. (See Luke Froeb et al.’s criticism of the FTC’s theory of harm on these and related grounds, here). To predict the effects of Qualcomm’s allegedly anticompetitive conduct, Dr. Shapiro compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs.  Shapiro testified that he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences,” for competition and for consumers, though his bargaining model did not quantify the effects of Qualcomm’s practice.
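The structure of that comparison can be sketched with simple arithmetic. Every figure below is hypothetical, chosen only to illustrate the mechanics of the alleged margin squeeze, not any number from the record:

```python
# All numbers are invented for illustration; they show only the structure of the
# gains-from-trade comparison in the FTC's theory.
device_value = 400.0   # value an OEM derives from a finished handset
frand_royalty = 10.0   # assumed FRAND rate for Qualcomm's SEPs
surcharge = 8.0        # alleged supra-FRAND increment
qualcomm_chip_price = 30.0

def oem_gains(chip_price, royalty):
    """OEM's gains from trade: device value net of chip price and SEP royalty."""
    return device_value - chip_price - royalty

# Option 1: buy Qualcomm's chip and pay the FRAND royalty.
with_qualcomm = oem_gains(chip_price=qualcomm_chip_price, royalty=frand_royalty)

# Option 2: buy a rival's chip but pay Qualcomm the surcharged royalty.
# To leave the OEM indifferent, the rival must cut its chip price by the full
# surcharge -- the squeeze falls on the rival's margin, not the OEM.
rival_price_to_match = qualcomm_chip_price - surcharge
with_rival = oem_gains(chip_price=rival_price_to_match,
                       royalty=frand_royalty + surcharge)

print(with_qualcomm == with_rival)  # True
```

On this logic, the alleged surcharge operates as a tax on rival chipmakers’ margins; the dispute at trial was whether any such surcharge could be identified and quantified in the real-world data.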

The premise of the FTC theory requires a belief about FRAND as a meaningful, objective competitive benchmark that Qualcomm was able to evade as a result of its market power in chipsets.  But Apple manipulated negotiations as a tactic to reshape FRAND itself.  The closer look at the facts invited by K&M does nothing to improve one’s view of the FTC’s claims.  The Apple documents exposed at trial make it clear that Apple deliberately manipulated negotiations with other suppliers in order to make it appear to courts and antitrust agencies that something other than the quality of Qualcomm’s technology was driving royalty rates.  For example, Apple’s own documents show it sought artificially to “devalue SEPs” by “build[ing] favorable, arms-length ‘comp’ licenses” in an attempt to reshape what FRAND means in this industry. Simply put, Apple’s strategy was to negotiate cheap supposedly “comparable” licenses with other chipset suppliers as part of a plan to reduce its net royalties to Qualcomm. 

As part of the same strategy, Apple spent years arguing to regulators and courts that Qualcomm’s patents were no better than those of its competitors.  But their internal documents tell this very different story:

  • “Nokia’s patent portfolio is significantly weaker than Qualcomm’s.”
  • “[InterDigital] makes minimal contributions to [the 4G/LTE] standard”
  • “Compared to [Huawei, Nokia, Ericsson, IDCC, and Apple], Qualcomm holds a stronger position in . . . , and particularly with respect to cellular and Wi-Fi SEPs.”
  • “Compared to other licensors, Qualcomm has more significant holdings in key areas such as media processing, non-cellular communications and hardware.  Likewise, using patent citation analysis as a measure of thorough prosecution within the US PTO, Qualcomm patents (SEPs and non-SEPs both) on average score higher compared to the other, largely non-US based licensors.”

One internal document that is particularly troubling states that Apple’s plan was to “create leverage by building pressure” in order to  (i) hurt Qualcomm financially and (ii) put Qualcomm’s licensing model at risk. What better way to harm Qualcomm financially and put its licensing model at risk than to complain to regulators that the business model is anticompetitive and tie the company up in multiple costly litigations?  That businesses make strategic plans to harm one another is no surprise.  But it underscores the importance of antitrust institutions – with their procedural and evidentiary requirements – to separate meritorious claims from fabricated ones. They failed to do so here.

III. Lessons Learned

So what should we make of evidence suggesting one of the FTC’s key informants during its investigation of Qualcomm didn’t believe the arguments it was selling?  The exposure of Apple’s internal documents is a sobering reminder that the FTC is not immune from the risk of being hoodwinked by rent-seeking antitrust plaintiffs.  That a firm might try to persuade antitrust agencies to investigate and sue its rivals is nothing new (see, e.g., William J. Baumol & Janusz A. Ordover, Use of Antitrust to Subvert Competition, 28 J.L. & Econ. 247 (1985)), but it is a particularly high-stakes game in modern technology markets. 

Lesson number one: Requiring proof of actual anticompetitive effects rather than relying upon a model that is not robust to market realities is an important safeguard to ensure that Section 2 protects competition and not merely an individual competitor.  Yet the agencies staked their cases on bargaining models in AT&T/Time Warner and FTC v. Qualcomm that fell short of proving anticompetitive effects.  An agency convinced by one firm or firms to pursue an action against a rival for conduct that does not actually harm competition could have a significant and lasting anticompetitive effect on the market.  Modern antitrust analysis requires plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.  That safeguard is particularly important when an agency is pursuing an enforcement action against a company in a market where the risks of regulatory capture and false positives are high.  With calls to move away from the consumer welfare standard—which would exacerbate both the risks and consequences of false positives—it is imperative to embrace rather than reject the requirement of proof in monopolization cases. (See Elyse Dorsey, Jan Rybnicek & Joshua D. Wright, Hipster Antitrust Meets Public Choice Economics: The Consumer Welfare Standard, Rule of Law, and Rent-Seeking, CPI Antitrust Chron. (Apr. 2018); see also Joshua D. Wright et al., Requiem For a Paradox: The Dubious Rise and Inevitable Fall of Hipster Antitrust, 51 Ariz. St. L.J. 293 (2019).) The DOJ’s Statement of Interest is a reminder of this basic tenet.

Lesson number two: Antitrust should have a limited role in adjudicating disputes arising between sophisticated parties in bilateral negotiations of patent licenses.  Overzealous claims of harm from patent holdup and anticompetitive licensing can deter the lawful exercise of patent rights, good faith modifications of existing contracts, and more generally interfere with the outcome of arms-length negotiations (See Bruce H. Kobayashi & Joshua D. Wright, The Limits of Antitrust and Patent Holdup: A Reply To Cary et al., 78 Antitrust L.J. 701 (2012)). It is also a difficult task for an antitrust regulator or court to identify and distinguish anticompetitive patent licenses from neutral or welfare-increasing behavior.  An antitrust agency’s willingness to cast the shadow of antitrust remedies over one side of the bargaining table inevitably places the agency in the position of encouraging further rent-seeking by licensees seeking similar intervention on their behalf.

Finally, an antitrust agency that intervenes in patent holdup and licensing disputes on behalf of one party to a patent licensing agreement risks transforming itself into a price regulator.  Apple’s fundamental complaint in its own litigation, and the core of the similar FTC allegation against Qualcomm, is that royalty rates are too high.  The risks to competition and consumers of antitrust courts and agencies playing the role of central planner for the innovation economy are well known, and are at their peak when the antitrust enterprise is used to set prices, mandate a particular organizational structure for the firm, or intervene in garden-variety contract and patent disputes in high-tech markets.

The current Commission did not vote out the Complaint now being litigated in the Northern District of California.  That case was initiated by an entirely different set of Commissioners.  It is difficult to imagine the new Commissioners having no reaction to the Apple documents, and in particular to the perception they create that Apple was successful in manipulating the agency in its strategy to bolster its negotiating position against Qualcomm.  A thorough reevaluation of the evidence here might well lead the current Commission to reconsider the merits of the agency’s position in the litigation and whether continuing is in the public interest.  The Apple documents, should they enter the record, may affect significantly the Ninth Circuit’s or Supreme Court’s understanding of the FTC’s theory of harm.

[TOTM: The following is the third in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.]

The Department of Justice Antitrust Division (DOJ) and Federal Trade Commission (FTC) have spent a significant amount of time in federal court litigating major cases premised upon an anticompetitive foreclosure theory of harm. Bargaining models, a tool used commonly in foreclosure cases, have been essential to the government’s theory of harm in these cases. In vertical merger or conduct cases, the core theory of harm is usually a variant of the claim that the transaction (or conduct) strengthens the firm’s incentives to engage in anticompetitive strategies that depend on negotiations with input suppliers. Bargaining models are a key element of the agency’s attempt to establish those claims and to predict whether and how firm incentives will affect negotiations with input suppliers, and, ultimately, the impact on equilibrium prices and output. Application of bargaining models played a key role in evaluating the anticompetitive foreclosure theories in the DOJ’s litigation to block the proposed merger of AT&T and Time Warner. A similar model is at the center of the FTC’s antitrust claims against Qualcomm and its patent licensing business model.

Modern antitrust analysis does not condemn business practices as anticompetitive without solid economic evidence of an actual or likely harm to competition. This cautious approach was developed in the courts for two reasons. The first is that the difficulty of distinguishing between procompetitive and anticompetitive explanations for the same conduct suggests there is a high risk of error. The second is that those errors are more likely to be false positives than false negatives because empirical evidence and judicial learning have established that unilateral conduct is usually either procompetitive or competitively neutral. In other words, while the risk of anticompetitive foreclosure is real, courts have sensibly responded by requiring plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.

An economic model can help establish the likelihood and/or magnitude of competitive harm when the model carefully captures the key institutional features of the competition it attempts to explain. Naturally, this tends to mean that the economic theories and models proffered by dueling economic experts to predict competitive effects take center stage in antitrust disputes. The persuasiveness of an economic model turns on the robustness of its assumptions about the underlying market. Model predictions that are inconsistent with actual market evidence give one serious pause before accepting the results as reliable.

For example, many industries are characterized by bargaining between providers and distributors. The Nash bargaining framework can be used to predict the outcomes of bilateral negotiations based upon each party’s bargaining leverage. The model assumes that both parties are better off if an agreement is reached, but that as the utility of one party’s outside option increases relative to the bargain, it will capture an increasing share of the surplus. Courts have had to reconcile these seemingly complicated economic models with prior case law and, in some cases, with direct evidence that is apparently inconsistent with the results of the model.
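A minimal numerical sketch of the Nash bargaining framework (with hypothetical payoffs) shows the mechanism: each party receives its disagreement payoff plus a share of the remaining gains from trade, so improving a party’s outside option shifts the split in its favor:

```python
def nash_split(total_surplus, outside1, outside2, power1=0.5):
    """Weighted Nash bargaining solution: each party gets its outside
    (disagreement) option plus its bargaining-power share of the gains
    from trade beyond the disagreement point."""
    gains = total_surplus - outside1 - outside2
    assert gains >= 0, "no agreement if the pie is smaller than the outside options"
    return (outside1 + power1 * gains,
            outside2 + (1 - power1) * gains)

# Baseline: a surplus of 100, neither side has an outside option -> 50/50.
print(nash_split(100, 0, 0))   # (50.0, 50.0)

# Party 1's outside option improves to 40; it now captures 70 of the 100,
# even though nothing about the deal itself changed.
print(nash_split(100, 40, 0))  # (70.0, 30.0)
```

This is the sense in which, in the models discussed below, anything that improves one side’s fallback (a post-merger blackout threat, or a chip-supply cutoff) is predicted to shift the negotiated price, which is exactly the prediction that must then be tested against real-world evidence.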

Indeed, Professor Carl Shapiro recently used bargaining models to analyze harm to competition in two prominent cases alleging anticompetitive foreclosure—one initiated by the DOJ and one by the FTC—in which he served as the government’s expert economist. In United States v. AT&T Inc., Dr. Shapiro testified that the proposed transaction between AT&T and Time Warner would give the vertically integrated company leverage to extract higher prices for content from AT&T’s rival, Dish Network. Soon after, Dr. Shapiro presented a similar bargaining model in FTC v. Qualcomm Inc. He testified that Qualcomm leveraged its monopoly power over chipsets to extract higher royalty rates from smartphone OEMs, such as Apple, wishing to license its standard essential patents (SEPs). In each case, Dr. Shapiro’s models were criticized heavily by the defendants’ expert economists for ignoring market realities that play an important role in determining whether the challenged conduct was likely to harm competition.

Judge Leon’s opinion in AT&T/Time Warner—recently upheld on appeal—concluded that Dr. Shapiro’s application of the bargaining model was significantly flawed, based upon unreliable inputs, and undermined by evidence about actual market performance presented by defendant’s expert, Dr. Dennis Carlton. Dr. Shapiro’s theory of harm posited that the combined company would increase its bargaining leverage and extract greater affiliate fees for Turner content from AT&T’s distributor rivals. The increase in bargaining leverage was made possible by the threat of a post-merger blackout of Turner content for AT&T’s rivals. This theory rested on the assumption that the combined firm would have reduced financial exposure from a long-term blackout of Turner content and would therefore have more leverage to threaten a blackout in content negotiations. The purpose of his bargaining model was to quantify how much AT&T could extract from competitors subjected to a long-term blackout of Turner content.

Judge Leon highlighted a number of reasons for rejecting the DOJ’s argument. First, Dr. Shapiro’s model failed to account for existing long-term affiliate contracts, post-litigation offers of arbitration agreements, and the increasing competitiveness of the video programming and distribution industry. Second, Dr. Carlton had demonstrated persuasively that previous vertical integration in the video programming and distribution industry did not have a significant effect on content prices. Finally, Dr. Shapiro’s model primarily relied upon three inputs: (1) the total number of subscribers the unaffiliated distributor would lose in the event of a long-term blackout of Turner content, (2) the percentage of the distributor’s lost subscribers who would switch to AT&T as a result of the blackout, and (3) the profit margin AT&T would derive from the subscribers it gained from the blackout. Many of Dr. Shapiro’s inputs necessarily relied on critical assumptions and/or third-party sources. Judge Leon considered and discredited each input in turn. 
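The leverage calculation built from those three inputs is straightforward to sketch. The figures below are invented for illustration and bear no relation to the trial record:

```python
# Hypothetical stand-ins for the three inputs Judge Leon describes (not record figures).
lost_subscribers = 1_000_000    # subs a rival distributor loses in a long-term Turner blackout
share_switching_to_att = 0.12   # fraction of those lost subs who switch to AT&T
margin_per_subscriber = 30.0    # profit AT&T earns per subscriber gained

# The model's leverage term: AT&T's gain from a blackout. In the bargaining
# framework, a larger gain improves AT&T's outside option and so raises the
# affiliate fee it can extract in negotiations.
att_gain_from_blackout = (lost_subscribers
                          * share_switching_to_att
                          * margin_per_subscriber)
print(att_gain_from_blackout)  # 3600000.0
```

Because the output is simply the product of the three inputs, an error in any one of them propagates directly into the predicted leverage, which is why Judge Leon’s input-by-input scrutiny was dispositive.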

The parties in Qualcomm are, as of the time of this posting, still awaiting a ruling. Dr. Shapiro's model in that case attempts to predict the effect of Qualcomm's alleged "no license, no chips" policy. He compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a "royalty surcharge" to Qualcomm to license its SEPs. In other words, the FTC's theory of harm is based upon the premise that Qualcomm is charging a supra-FRAND rate for its SEPs (the "royalty surcharge") that squeezes the margins of OEMs. That margin squeeze, the FTC alleges, prevents rival chipset suppliers from obtaining a sufficient return when negotiating with OEMs. The FTC predicts the end result is a reduction in competition and an increase in the price of devices to consumers.
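The margin-squeeze logic can likewise be put in stylized form. Everything below (device values, chip prices, the FRAND rate, and the surcharge) is a hypothetical assumption used only to illustrate the comparison the model draws, not a figure from the case:

```python
# Stylized sketch of the FTC's "royalty surcharge" theory in Qualcomm.
# All numbers are illustrative assumptions, not values from the record.

def oem_gain(device_value: float, chip_price: float, royalty: float) -> float:
    """OEM's gains from trade on one device: value minus chip price minus SEP royalty."""
    return device_value - chip_price - royalty

FRAND_ROYALTY = 10.0  # hypothetical FRAND rate per device
SURCHARGE = 5.0       # alleged supra-FRAND increment paid when buying a rival's chip

# Buying from Qualcomm: the OEM pays the chip price plus the FRAND royalty.
gain_qualcomm = oem_gain(device_value=100.0, chip_price=30.0, royalty=FRAND_ROYALTY)

# Buying from a rival: same chip price, but the royalty carries the alleged
# surcharge, squeezing the OEM's margin on rival-chip devices.
gain_rival = oem_gain(device_value=100.0, chip_price=30.0,
                      royalty=FRAND_ROYALTY + SURCHARGE)

# The per-device penalty for choosing a rival chip; on the FTC's theory, this
# wedge is what prevents rival chipmakers from earning a sufficient return.
squeeze = gain_qualcomm - gain_rival
```

Note that the entire predicted effect rides on the size of the assumed surcharge, which is why the empirical question of whether royalties were actually supra-FRAND became central at trial.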

Qualcomm, like Judge Leon in AT&T, questioned the robustness of Dr. Shapiro's model and its predictions in light of conflicting market realities. For example, Dr. Shapiro argued that the

leverage that Qualcomm brought to bear on the chips shifted the licensing negotiations substantially in Qualcomm’s favor and led to a significantly higher royalty than Qualcomm would otherwise have been able to achieve.

Yet, on cross-examination, Dr. Shapiro declined to move from theory to empirics when asked if he had quantified the effects of Qualcomm’s practice on any other chip makers. Instead, Dr. Shapiro responded that he had not, but he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences.” Under Dr. Shapiro’s theory, one would predict that royalty rates were higher after Qualcomm obtained market power.

As with Dr. Carlton's testimony inviting Judge Leon to square the DOJ's theory with conflicting historical facts in the industry, Qualcomm's economic expert, Dr. Aviv Nevo, provided an analysis of Qualcomm's royalty agreements from 1990-2017, confirming that there was no economically meaningful difference between the royalty rates during the time frame when Qualcomm was alleged to have market power and the royalty rates outside of that time frame. He also presented evidence that ex ante royalty rates did not increase upon implementation of the CDMA standard or the LTE standard. Moreover, Dr. Nevo testified that the industry itself was characterized by declining prices and increasing output and quality.

Dr. Shapiro’s model in Qualcomm appears to suffer from many of the same flaws that ultimately discredited his model in AT&T/Time Warner: It is based upon assumptions that are contrary to real-world evidence and it does not robustly or persuasively identify anticompetitive effects. Some observers, including our Scalia Law School colleague and former FTC Chairman, Tim Muris, would apparently find it sufficient merely to allege a theoretical “ability to manipulate the marketplace.” But antitrust cases require actual evidence of harm. We think Professor Muris instead captured the appropriate standard in his important article rejecting attempts by the FTC to shortcut its requirement of proof in monopolization cases:

This article does reject, however, the FTC’s attempt to make it easier for the government to prevail in Section 2 litigation. Although the case law is hardly a model of clarity, one point that is settled is that injury to competitors by itself is not a sufficient basis to assume injury to competition …. Inferences of competitive injury are, of course, the heart of per se condemnation under the rule of reason. Although long a staple of Section 1, such truncation has never been a part of Section 2. In an economy as dynamic as ours, now is hardly the time to short-circuit Section 2 cases. The long, and often sorry, history of monopolization in the courts reveals far too many mistakes even without truncation.

Timothy J. Muris, The FTC and the Law of Monopolization, 67 Antitrust L.J. 693 (2000).

We agree. Proof of actual anticompetitive effects, rather than speculation derived from models that are not robust to market realities, is an important safeguard to ensure that Section 2 protects competition and not merely individual competitors.

The future of bargaining models in antitrust remains to be seen. Judge Leon certainly did not question the proposition that they could play an important role in other cases; indeed, he closely dissected the testimony and models presented by both experts in AT&T/Time Warner. His opinion serves as an important reminder: as complex economic evidence like bargaining models becomes more common in antitrust litigation, judges must carefully engage with the experts on both sides to determine whether there is direct evidence on the likely competitive effects of the challenged conduct. Where "real-world evidence," as Judge Leon called it, contradicts the predictions of a bargaining model, judges should reject the model rather than the reality. Bargaining models have many potentially important antitrust applications, including horizontal mergers with a bargaining component (such as hospital mergers), vertical mergers, and licensing disputes. The analysis of those models by the Ninth and D.C. Circuits will have important implications for how they are deployed by the agencies and parties moving forward.

David Haddock is Professor of Law and Professor of Economics at Northwestern University and a Senior Fellow Emeritus at PERC.

The day Fred McChesney departed this life, the world lost an intelligent, enthusiastic, and intellectually rigorous scholar of law & economics. A great many of us also lost one of our most trusted and generous friends.

I first met Fred when Emory University, hoping to recruit the then young scholar to the law school faculty, brought him to Atlanta to deliver a research paper. The effort was successful, and Fred joined as an assistant professor in the fall of 1983. Jon Macey joined the law school, also as an entry-level assistant professor, at about the same time. A couple of years earlier Professor Bill Carney and law school Dean Tom Morgan had enticed Henry Manne to Emory to establish a new Law & Economics Center. Although Henry did not know me, upon Armen Alchian’s recommendation he persuaded me to leave Ohio State to join the LEC soon after it commenced operation.

I was only a bit older than Fred and Jon. Each of us had training in economics in addition to our interest in law. We shared a respect for markets. We had noticed how often special interests deflected government interventions away from the public interest that was the ostensible motivation. One might say we three had large Venn diagram intersections of background, interest, and outlook. Fred, Jon and I quickly became friends both at work and – along with our respective girlfriends and eventual wives – at leisure. We began to coauthor journal articles and book chapters, sometimes in pairs and sometimes as a trio.

Alas, though Chris Curran and Matt Lindsay from the economics department shared the law school’s enthusiasm for the LEC, the university administration proved decidedly lukewarm toward Manne’s ambitious blueprint. After flashing onto the national, or rather international, stage for a few bright years, the LEC began to atrophy in the face of limitations issuing from above.

Fred, Henry, Jon and I each spent time at the International Center for Economic Research in Torino, Italy, becoming friends with ICER’s director Enrico Colombatto. Macey moved to Cornell. I spent a year at Yale before returning to join Emory’s economics department. Manne left to become dean of a humble law school in the DC suburbs that had been devoted almost exclusively to teaching. Henry quickly transformed that school into a nationally recognized research and innovative teaching institution now known as the Antonin Scalia Law School of George Mason University, but his departure effectively ended the brief if illustrious history of the Emory LEC.

Fred and I visited the University of Chicago in 1987, and though I then moved directly to Northwestern where I finished my career, Fred returned to Emory for another ten years. The two of us continued to coauthor, sometimes with a third such as Bill Shughart, Terry Anderson, or Menahem Spiegel. I worked diligently to get Fred to Northwestern but Cornell succeeded first, though by then Macey had moved on to Yale. Two years later, Fred finally joined me at Northwestern where both he and Elaine held faculty positions until Elaine’s untimely death.

I have mentioned a number of people, nearly all of whom have changed location, sometimes repeatedly. Through it all, and across the deaths of Elaine, then Henry, and now Fred, we have all remained friends and often continued to work together, though usually at a distance.

Everyone who knew him remembers how easily Fred made friends upon meeting new people. Due to his extensive knowledge of rock music, Fred even became a telephone buddy of the late Casey Kasem, longtime host of the nationally syndicated America’s Top 40. Fred’s cordiality was not only social but extended into the work environment. He was no pushover, demanding careful thought in classroom and seminar, but he made his points calmly without endeavoring to cow or humiliate those with whom he disagreed, a trait that unfortunately is far from universal in the academic world.

Considering Fred’s passion for rock music, perhaps it is appropriate to end this remembrance with a few lightly edited lines from James Taylor’s Fire and Rain:

Just yesterday morning, they let me know you were gone.
The path laid down has put an end to you.
I walked out this morning and I wrote down this song,
I just can’t remember who to send it to.

Won’t you look down upon us, Jesus,
You’ve got to help us make a stand.
You’ve just got to see us through another day.
My body’s aching and my time is at hand and I won’t make it any other way.

Oh, I’ve seen fire and I’ve seen rain.
I’ve seen sunny days that I thought would never end.
I’ve seen lonely times when I could not find a friend,
but I always thought that I’d see you again.

Rest in peace, pal.

On October 6, 2016, the U.S. Federal Trade Commission (FTC) issued Patent Assertion Entity Activity: An FTC Study (PAE Study), its much-anticipated report on patent assertion entity (PAE) activity.  The PAE Study defined PAEs as follows:

Patent assertion entities (PAEs) are businesses that acquire patents from third parties and seek to generate revenue by asserting them against alleged infringers.  PAEs monetize their patents primarily through licensing negotiations with alleged infringers, infringement litigation, or both. In other words, PAEs do not rely on producing, manufacturing, or selling goods.  When negotiating, a PAE’s objective is to enter into a royalty-bearing or lump-sum license.  When litigating, to generate any revenue, a PAE must either settle with the defendant or ultimately prevail in litigation and obtain relief from the court.

The FTC was mindful of the costs that would be imposed on PAEs, required by compulsory process to respond to the agency's requests for information.  Accordingly, the FTC obtained information from only 22 PAEs, 18 of which it called "Litigation PAEs" (which "typically sued potential licensees and settled shortly afterward by entering into license agreements with defendants covering small portfolios," usually yielding total royalties of under $300,000) and 4 of which it dubbed "Portfolio PAEs" (which typically negotiated multimillion-dollar licenses covering large portfolios of patents and raised their capital through institutional investors or manufacturing firms).

Furthermore, the FTC’s research was narrowly targeted, not broad-based.  The agency explained that “[o]f all the patents held by PAEs in the FTC’s study, 88% fell under the Computers & Communications or Other Electrical & Electronic technology categories, and more than 75% of the Study PAEs’ overall holdings were software-related patents.”  Consistent with the nature of this sample, the FTC concentrated primarily on a case study of PAE activity in the wireless chipset sector.  The case study revealed that PAEs were more likely to assert their patents through litigation than were wireless manufacturers, and that “30% of Portfolio PAE wireless patent licenses and nearly 90% of Litigation PAE wireless patent licenses resulted from litigation, while only 1% of Wireless Manufacturer wireless patent licenses resulted from litigation.”  But perhaps more striking than what the FTC found was what it did not uncover.  Due to data limitations, “[t]he FTC . . . [did not] attempt[] to determine if the royalties received by Study PAEs were higher or lower than those that the original assignees of the licensed patents could have earned.”  In addition, the case study did “not report how much revenue PAEs shared with others, including independent inventors, or the costs of assertion activity.”

Curiously, the PAE Study also leaped to certain conclusions regarding PAE settlements based on questionable assumptions and without considering legitimate potential incentives for such settlements.  Thus, for example, the FTC found it particularly significant that 77% of litigation PAE settlements were for less than $300,000.  Why?  Because $300,000 was a "de facto benchmark" for nuisance litigation settlements, based merely on one American Intellectual Property Law Association study that claimed defending a non-practicing entity patent lawsuit through the end of discovery costs between $300,000 and $2.5 million, depending on the amount in controversy.  In light of that one study, the FTC surmised "that discovery costs, and not the technological value of the patent, may set the benchmark for settlement value in Litigation PAE cases."  Thus, according to the FTC, "the behavior of Litigation PAEs is consistent with nuisance litigation."  As noted patent lawyer Gene Quinn has pointed out, however, the FTC ignored the alternative, eminently logical possibility that many settlements for less than $300,000 merely represented reasonable valuations of the patent rights at issue.  Quinn pithily stated:

[T]he reality is the FTC doesn’t know enough about the industry to understand that $300,000 is an arbitrary line in the sand that holds no relevance in the real world. For the very same reason that they said the term “patent troll” is unhelpful (i.e., because it inappropriately discriminates against rights owners without understanding the business model and practices), so too is $300,000 equally unhelpful. Without any understanding or appreciation of the value of the core innovation subject to the license there is no way to know whether a license is being offered for nuisance value or whether it is being offered at full, fair and appropriate value to compensate the patent owner for the infringement they had to chase down in litigation.

I thought the FTC was charged with ensuring fair business practices? It seems what they are doing is radically discriminating against incremental innovations valued at less than $300,000 and actually encouraging patent owners to charge more for their licenses than they are worth so they don’t get labeled a nuisance. Talk about perverse incentives! The FTC should stick to areas where they have subject matter competence and leave these patent issues to the experts.     

In sum, the FTC found that in one particular specialized industry sector featuring a certain category of patents (software patents), PAEs tended to sue more than manufacturers before agreeing to licensing terms – hardly a surprising finding or a sign of a problem.  (To the contrary, the existence of "substantial" PAE litigation that led to licenses might be a sign that PAEs were acting as efficient intermediaries representing the interests and effectively vindicating the rights of small patentees.)  The FTC was not, however, able to comment on the relative levels of royalties, the extent to which PAE revenues were distributed to inventors, or the costs of PAE litigation (as opposed to any other sort of litigation).  Additionally, the FTC made certain assumptions about certain PAE litigation settlements that ignored reasonable alternative explanations for the behavior that was observed.  Accordingly, the reasonable observer would conclude from this that the agency was (to say the least) in no position to make any sort of policy recommendations, given the absence of any hard evidence of PAE abuses or excessive waste from litigation.

Unfortunately, the reasonable observer would be mistaken.  The FTC recommended reforms to: (1) address discovery burden and “cost asymmetries” (the notion that PAEs are less subject to costly counterclaims because they are not producers) in PAE litigation; (2) provide the courts and defendants with more information about the plaintiffs that have filed infringement lawsuits; (3) streamline multiple cases brought against defendants on the same theories of infringement; and (4) provide sufficient notice of these infringement theories as courts continue to develop heightened pleading requirements for patent cases.

Without getting into the merits of these individual suggestions (and without in any way denigrating the hard work and dedication of the highly talented FTC staffers who drafted the PAE Study), it is sufficient to note that they bear no logical relationship to the factual findings of the report.  The recommendations, which closely echo certain elements of various "patent reform" legislative proposals that have been floated in recent years, could have been advanced before any data had been gathered – with savings to the companies that had to respond.  In short, the recommendations are classic pre-baked "solutions" to problems that have long been hypothesized.  Advancing such recommendations based on discrete information regarding a small skewed sample of PAEs – without obtaining crucial information on the direct costs and benefits of the PAE transactions being observed, or the incentive effects of PAE activity – is at odds with the FTC's proud tradition of empirical research.  Unfortunately, Devin Hartline of the Antonin Scalia Law School proved prescient when commenting last April on the possible problems with the PAE Report, based on what was known about it prior to its release (and on the preliminary thoughts of noted economists and law professors):

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general.  The study is simply not designed to do this.  It instead is a fact-finding mission, the results of which could guide future missions.  Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected.  And it’s crucial not to draw policy conclusions from it.  Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.

To the extent patent reform is warranted, it should be considered carefully in a measured fashion, with full consideration given to the costs, benefits, and potential unintended consequences of suggested changes to the patent system and to litigation procedures.  As John Malcolm and I explained in a 2015 Heritage Foundation Legal Backgrounder which explored the relative merits of individual proposed reforms:

Before deciding to take action, Congress should weigh the particular merits of individual reform proposals carefully and meticulously, taking into account their possible harmful effects as well as their intended benefits. Precipitous, unreflective action on legislation is unwarranted, and caution should be the byword, especially since the effects of 2011 legislative changes and recent Supreme Court decisions have not yet been fully absorbed. Taking time is key to avoiding the serious and costly errors that too often are the fruit of omnibus legislative efforts.

Notably, this Legal Backgrounder also noted potential beneficial aspects of PAE activity that were not reflected in the PAE Study:

[E]ven entities whose business model relies on purchasing patents and licensing them or suing those who refuse to enter into licensing agreements and infringe those patents can serve a useful—even a vital—purpose. Some infringers may be large companies that infringe the patents of smaller companies or individual inventors, banking on the fact that such a small-time inventor will be less likely to file a lawsuit against a well-financed entity. Patent aggregators, often backed by well-heeled investors, help to level the playing field and can prevent such abuses.

More important, patent aggregators facilitate an efficient division of labor between inventors and those who wish to use those inventions for the betterment of their fellow man, allowing inventors to spend their time doing what they do best: inventing. Patent aggregators can expand access to patent pools that allow third parties to deal with one vendor instead of many, provide much-needed capital to inventors, and lead to a variety of licensing and sublicensing agreements that create and reflect a valuable and vibrant marketplace for patent holders and provide the kinds of incentives that spur innovation. They can also aggregate patents for litigation purposes, purchasing patents and licensing them in bundles.

This has at least two advantages: It can reduce the transaction costs for licensing multiple patents, and it can help to outsource and centralize patent litigation for multiple patent holders, thereby decreasing the costs associated with such litigation. In the copyright space, the American Society of Composers, Authors, and Publishers (ASCAP) plays a similar role.

All of this is to say that there can be good patent assertion entities that seek licensing agreements and file claims to enforce legitimate patents and bad patent assertion entities that purchase broad and vague patents and make absurd demands to extort license payments or settlements. The proper way to address patent trolls, therefore, is by using the same means and methods that would likely work against ambulance chasers or other bad actors who exist in other areas of the law, such as medical malpractice, securities fraud, and product liability—individuals who gin up or grossly exaggerate alleged injuries and then make unreasonable demands to extort settlements up to and including filing frivolous lawsuits.

In conclusion, the FTC would be well advised to avoid putting forth patent reform recommendations based on the findings of the PAE Study.  At the very least, it should explicitly weigh the implications of other research, which explores PAE-related efficiencies and considers all the ramifications of procedural and patent law changes, before seeking to advance any “PAE reform” recommendations.

The Global Antitrust Institute (GAI) at George Mason University’s Antonin Scalia Law School released today a set of comments on the joint U.S. Department of Justice (DOJ) – Federal Trade Commission (FTC) August 12 Proposed Update to their 1995 Antitrust Guidelines for the Licensing of Intellectual Property (Proposed Update).  As has been the case with previous GAI filings (see here, for example), today’s GAI Comments are thoughtful and on the mark.

For those of you who are pressed for time, the latest GAI comments make these major recommendations (summary in italics):

Standard Essential Patents (SEPs):  The GAI Comments commended the DOJ and the FTC for preserving the principle that the antitrust framework is sufficient to address potential competition issues involving all IPRs—including both SEPs and non-SEPs.  In doing so, the DOJ and the FTC correctly rejected the invitation to adopt a special brand of antitrust analysis for SEPs in which effects-based analysis was replaced with unique presumptions and burdens of proof. 

o   The GAI Comments noted that, as FTC Chairwoman Edith Ramirez has explained, "the same key enforcement principles [found in the 1995 IP Guidelines] also guide our analysis when standard essential patents are involved."

o   This is true because SEP holders, like other IP holders, do not necessarily possess market power in the antitrust sense, and conduct by SEP holders, including breach of a voluntary assurance to license their SEPs on fair, reasonable, and nondiscriminatory (FRAND) terms, does not necessarily result in harm to the competitive process or to consumers.

o   Again, as Chairwoman Ramirez has stated, “it is important to recognize that a contractual dispute over royalty terms, whether the rate or the base used, does not in itself raise antitrust concerns.”

Refusals to License:  The GAI Comments expressed concern that the statements regarding refusals to license in Sections 2.1 and 3 of the Proposed Update seem to depart from the general enforcement approach set forth in the 2007 DOJ-FTC IP Report in which those two agencies stated that “[a]ntitrust liability for mere unilateral, unconditional refusals to license patents will not play a meaningful part in the interface between patent rights and antitrust protections.”  The GAI recommended that the DOJ and the FTC incorporate this approach into the final version of their updated IP Guidelines.

“Unreasonable Conduct”:  The GAI Comments recommended that Section 2.2 of the Proposed Update be revised to replace the phrase “unreasonable conduct” with a clear statement that the agencies will only condemn licensing restraints when anticompetitive effects outweigh procompetitive benefits.

R&D Markets:  The GAI Comments urged the DOJ and the FTC to reconsider the inclusion (or, at the very least, substantially limit the use) of research and development (R&D) markets because: (1) the process of innovation is often highly speculative and decentralized, making it impossible to identify all would-be market participants; (2) the optimal relationship between R&D and innovation is unknown; (3) the market structure most conducive to innovation is unknown; (4) the capacity to innovate is hard to monopolize given that the components of modern R&D—research scientists, engineers, software developers, laboratories, computer centers, etc.—are continuously available on the market; and (5) anticompetitive conduct can be challenged under the actual potential competition theory or at a later time.

While the GAI Comments are entirely on point, even if their recommendations are all adopted, much more needs to be done.  The Proposed Update, while relatively sound, should be viewed in the larger context of the Obama Administration’s unfortunate use of antitrust policy to weaken patent rights (see my article here, for example).  In addition to strengthening the revised Guidelines, as suggested by the GAI, the DOJ and the FTC should work with other component agencies of the next Administration – including the Patent Office and the White House – to signal enhanced respect for IP rights in general.  In short, a general turnaround in IP policy is called for, in order to spur American innovation, which has been all too lacking in recent years.

On August 6, the Global Antitrust Institute (the GAI, a division of the Antonin Scalia Law School at George Mason University) submitted a filing (GAI filing or filing) in response to the Japan Fair Trade Commission's (JFTC's) consultation on reforms to the Japanese system of administrative surcharges assessed for competition law violations (see here for a link to the GAI's filing).  The GAI's outstanding filing was authored by GAI Director Koren Wong-Ervin and Professors Douglas Ginsburg, Joshua Wright, and Bruce Kobayashi of the Scalia Law School.

The GAI filing’s three sets of major recommendations, set forth in italics, are as follows:

(1)   Due Process

 While the filing recognizes that the process may vary depending on the jurisdiction, the filing strongly urges the JFTC to adopt the core features of a fair and transparent process, including:   

(a)        Legal representation for parties under investigation, allowing the participation of local and foreign counsel of the parties’ choosing;

(b)        Notifying the parties of the legal and factual bases of an investigation and sharing the evidence on which the agency relies, including any exculpatory evidence and excluding only confidential business information;

(c)        Direct and meaningful engagement between the parties and the agency’s investigative staff and decision-makers;

(d)        Allowing the parties to present their defense to the ultimate decision-makers; and

(e)        Ensuring checks and balances on agency decision-making, including meaningful access to independent courts.

(2)   Calculation of Surcharges

The filing agrees with the JFTC that Japan’s current inflexible system of surcharges is unlikely to accurately reflect the degree of economic harm caused by anticompetitive practices.  As a general matter, the filing recommends that under Japan’s new surcharge system, surcharges imposed should rely upon economic analysis, rather than using sales volume as a proxy, to determine the harm caused by violations of Japan’s Antimonopoly Act.   
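A toy comparison illustrates why a flat sales-volume proxy can misstate the harm a violation actually caused. All numbers below are hypothetical, chosen only to make the point concrete:

```python
# Toy illustration (hypothetical numbers) of the filing's point that sales volume
# is a crude proxy for harm: a flat percentage of affected sales ignores how much
# the conduct actually raised prices.

def sales_proxy_surcharge(revenue: float, rate: float = 0.10) -> float:
    """Inflexible surcharge: a flat percentage of affected sales."""
    return revenue * rate

def harm_based_surcharge(units: float, overcharge_per_unit: float) -> float:
    """Economic-analysis surcharge: the estimated overcharge actually extracted."""
    return units * overcharge_per_unit

revenue = 1_000_000.0  # affected sales for both hypothetical violations
proxy = sales_proxy_surcharge(revenue)  # same surcharge regardless of actual harm

mild = harm_based_surcharge(units=10_000, overcharge_per_unit=1.0)    # modest harm
severe = harm_based_surcharge(units=10_000, overcharge_per_unit=40.0) # large harm

# With identical sales, the flat proxy overdeters the mild violation and
# underdeters the severe one; a harm-based surcharge tracks the actual injury.
```

The design choice here mirrors the filing's recommendation: conditioning the surcharge on an estimate of harm, rather than on a revenue proxy, keeps the penalty proportional to the violation's economic effect.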

In that light, the filing more specifically recommends that the JFTC limit punitive surcharges to matters in which:

(a)          the antitrust violation is clear (i.e., if considered at the time the conduct is undertaken, and based on existing laws, rules, and regulations, a reasonable party should expect the conduct at issue would likely be illegal) and is without any plausible efficiency justification;

(b)          it is feasible to articulate and calculate the harm caused by the violation;

(c)           the measure of harm calculated is the basis for any fines or penalties imposed; and

(d)          there are no alternative remedies that would adequately deter future violations of the law. 

In the alternative, and at the very least, the filing urges the JFTC to expand the circumstances under which it will not seek punitive surcharges to include two types of conduct that are widely recognized as having efficiency justifications:

  • unilateral conduct, such as refusals to deal and discriminatory dealing; and
  • vertical restraints, such as exclusive dealing, tying and bundling, and resale price maintenance.

(3)   Settlement Process

The filing recommends that the JFTC consider incorporating safeguards that prevent settlement provisions unrelated to the violation and limit the use of extended monitoring programs.  The filing notes that consent decrees and commitments extracted to settle a case too often end up imposing abusive remedies that undermine the welfare-enhancing goals of competition policy.  An agency’s ability to obtain in terrorem concessions reflects a party’s weighing of the costs and benefits of litigating versus the costs and benefits of acquiescing in the terms sought by the agency.  When firms settle merely to avoid the high relative costs of litigation and regulatory procedures, an agency may be able to extract more restrictive terms on firm behavior by entering into an agreement than by litigating its accusations in a court.  In addition, while settlements may be a more efficient use of scarce agency resources, the savings may come at the cost of potentially stunting the development of the common law arising through adjudication.

In sum, the latest filing maintains the GAI's practice of employing law and economics analysis to recommend reforms in the imposition of competition law remedies (see here, here, and here for summaries of prior GAI filings that are in the same vein).  The GAI's dispassionate analysis highlights principles of universal application – principles that may someday point the way toward greater economically sensible convergence among national antitrust remedial systems.

The Global Antitrust Institute (GAI) at George Mason University Law School (officially the “Antonin Scalia Law School at George Mason University” as of July 1st) is doing an outstanding job at providing sound law and economics-centered advice to foreign governments regarding their proposed antitrust laws and guidelines.

The GAI’s latest inspired filing, released on July 9 (July 9 Comment), concerns guidelines on the disgorgement of illegal gains and punitive fines for antitrust violations proposed by China’s National Development and Reform Commission (NDRC) – a powerful agency that has broad planning and administrative authority over the Chinese economy.  With respect to antitrust, the NDRC is charged with investigating price-related anticompetitive behavior and abuses of dominance.  (China has two other antitrust agencies: the State Administration for Industry and Commerce (SAIC), which investigates non-price-related monopolistic behavior, and the Ministry of Commerce (MOFCOM), which reviews mergers.)  The July 9 Comment stresses that the NDRC’s proposed Guidelines call for Chinese antitrust enforcers to impose punitive financial sanctions on conduct that is not necessarily anticompetitive and may be efficiency-enhancing – an approach that is contrary to sound economics.  In so doing, the July 9 Comment summarizes the economics of penalties, recommends that the NDRC employ economic analysis in considering sanctions, and provides specific suggested changes to the NDRC’s draft.  The July 9 Comment provides a helpful summary of its analysis:

We respectfully recommend that the Draft Guidelines be revised to limit the application of disgorgement (or the confiscating of illegal gain) and punitive fines to matters in which: (1) the antitrust violation is clear (i.e., if measured at the time the conduct is undertaken, and based on existing laws, rules, and regulations, a reasonable party should expect that the conduct at issue would likely be found to be illegal) and without any plausible efficiency justifications; (2) it is feasible to articulate and calculate the harm caused by the violation; (3) the measure of harm calculated is the basis for any fines or penalties imposed; and (4) there are no alternative remedies that would adequately deter future violations of the law.  In the alternative, and at the very least, we strongly urge the NDRC to expand the circumstances under which the Anti-Monopoly Enforcement Agencies (AMEAs) will not seek punitive sanctions such as disgorgement or fines to include two conduct categories that are widely recognized as having efficiency justifications: unilateral conduct, such as refusals to deal and discriminatory dealing; and vertical restraints, such as exclusive dealing, tying and bundling, and resale price maintenance.

We also urge the NDRC to clarify how the total penalty, including disgorgement and fines, relates to the specific harm at issue and the theoretical optimal penalty.  As explained below, the economic analysis determines the total optimal penalties, which include any disgorgement and fines.  When fines are calculated consistent with the optimal penalty framework, disgorgement should be a component of the total fine as opposed to an additional penalty on top of an optimal fine.  If disgorgement is an additional penalty, then any fines should be reduced relative to the optimal penalty.
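The accounting point here can be made explicit with a short sketch (illustrative notation; the optimal-penalty formula is the standard deterrence result from the economics-of-penalties literature, not a formula stated in the Comment itself):

```latex
% Assumed notation:
%   H  = harm caused by the violation
%   q  = probability the violation is detected and sanctioned
%   P* = optimal total penalty; standard deterrence theory sets
P^{*} = \frac{H}{q}
% With D denoting disgorgement and F the fine, treating D as a
% component of the total sanction requires
F + D = P^{*} \quad\Longrightarrow\quad F = P^{*} - D
% i.e., any disgorgement ordered should reduce the fine one-for-one,
% rather than being stacked on top of an already-optimal fine.
```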

Lastly, we respectfully recommend that the AMEAs rely on economic analysis to determine the harm caused by any violation.  When using proxies for the harm caused by the violation, such as using the illegal gains from the violations as the basis for fines or disgorgement, such calculations should be limited to those costs and revenues that are directly attributable to a clear violation.  This should be done in order to ensure that the resulting fines or disgorgement track the harms caused by the violation.  To that end, we recommend that the Draft Guidelines explicitly state that the AMEAs will use economic analysis to determine the but-for world, and will rely wherever possible on relevant market data.  When the calculation of illegal gain is unclear due to a lack of relevant information, we strongly recommend that the AMEAs refrain from seeking disgorgement.
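The but-for comparison the Comment calls for can be written schematically (assumed notation, offered for illustration only):

```latex
% Assumed notation:
%   R_actual, C_actual : revenues and costs observed with the violation
%   R_butfor, C_butfor : revenues and costs in the counterfactual
%                        ("but-for") world without the violation
% Illegal gain attributable to the violation:
G = (R_{\mathrm{actual}} - C_{\mathrm{actual}})
  - (R_{\mathrm{butfor}} - C_{\mathrm{butfor}})
% Only revenues and costs directly attributable to a clear violation
% enter G; where the but-for terms cannot be estimated from relevant
% market data, the Comment recommends refraining from disgorgement.
```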

The lack of careful economic analysis of the implications of disgorgement (which is really a financial penalty, viewed through an economic lens) is not confined to Chinese antitrust enforcers.  In recent years, the U.S. Federal Trade Commission (FTC) has shown an interest in more broadly employing disgorgement as an antitrust remedy, without fully weighing considerations of error costs and the deterrence of efficient business practices (see, for example, here and here).  Relatedly, the U.S. Department of Justice’s Antitrust Division has determined that disgorgement may be invoked as a remedy for a Sherman Antitrust Act violation, a position confirmed by a lower court (see, for example, here).  The general principles informing the thoughtful analysis delineated in the July 9 Comment could profitably be consulted by FTC and DOJ policy officials should they choose to reexamine their approach to disgorgement and other financial penalties.

More broadly, in emphasizing the importance of optimal sanctions and the economic analysis of business conduct, the July 9 Comment is in line with a cost-benefit framework for antitrust enforcement policy, rooted in decision theory – an approach that all antitrust agencies (including United States enforcers) should seek to adopt (see also here for an evaluation of the implicit decision-theoretic approach to antitrust employed by the U.S. Supreme Court under Chief Justice John Roberts).  Let us hope that DOJ, the FTC, and other government antitrust authorities around the world take to heart the benefits of decision-theoretic antitrust policy in evaluating (and, as appropriate, reforming) their enforcement norms.  Doing so would promote beneficial international convergence toward better enforcement policy and redound to the economic benefit of both producers and consumers.
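In decision-theoretic terms, the cost-benefit framework endorsed here amounts to choosing the enforcement rule that minimizes expected total costs (an illustrative formulation; the symbols are assumed):

```latex
% Assumed notation, for an enforcement rule R:
%   p_I(R),  c_I  : probability and social cost of false condemnations
%                   (Type I error: punishing efficient conduct)
%   p_II(R), c_II : probability and social cost of false acquittals
%                   (Type II error: excusing anticompetitive conduct)
%   A(R)          : administrative and compliance costs of the rule
% A decision-theoretic enforcer selects the rule R that solves:
\min_{R} \; p_{I}(R)\,c_{I} + p_{II}(R)\,c_{II} + A(R)
% Punitive sanctions for conduct with plausible efficiency
% justifications raise p_I * c_I without a compensating reduction
% in p_II * c_II -- the core objection the Comment raises.
```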