
A debate has broken out among the four sitting members of the Federal Trade Commission (FTC) in connection with the recently submitted FTC Report to Congress on Privacy and Security. Chair Lina Khan argues that the commission “must explore using its rulemaking tools to codify baseline protections,” while Commissioner Rebecca Kelly Slaughter has urged the FTC to initiate a broad-based rulemaking proceeding on data privacy and security. By contrast, Commissioners Noah Joshua Phillips and Christine Wilson counsel against a broad-based regulatory initiative on privacy.

Decisions to initiate a rulemaking should be viewed through a cost-benefit lens (See summaries of Thom Lambert’s masterful treatment of regulation, of which rulemaking is a subset, here and here). Unless there is a market failure, rulemaking is not called for. Even in the face of market failure, regulation should not be adopted unless it is more cost-beneficial than reliance on markets (including the ability of public and private litigation to address market-failure problems, such as data theft). For a variety of reasons, it is unlikely that FTC rulemaking directed at privacy and data security would pass a cost-benefit test.

Discussion

As I have previously explained (see here and here), FTC rulemaking pursuant to Section 6(g) of the FTC Act (which authorizes the FTC “to make rules and regulations for the purpose of carrying out the provisions of this subchapter”) is properly read as authorizing mere procedural, not substantive, rules. As such, efforts to enact substantive competition rules would not pass a cost-benefit test. Such rules could well be struck down as beyond the FTC’s authority on constitutional law grounds, and as “arbitrary and capricious” on administrative law grounds. What’s more, they would represent retrograde policy. Competition rules would generate higher error costs than adjudications; could be deemed to undermine the rule of law, because the U.S. Justice Department (DOJ) could not apply such rules; and would chill innovative efficiency-seeking business arrangements.

Accordingly, the FTC likely would not pursue 6(g) rulemaking should it decide to address data security and privacy, a topic that best fits under the “consumer protection” category. Rather, the FTC most likely would initiate a “Magnuson-Moss” rulemaking (MMR) under Section 18 of the FTC Act, which authorizes the commission to prescribe “rules which define with specificity acts or practices which are unfair or deceptive acts or practices in or affecting commerce within the meaning of Section 5(a)(1) of the Act.” Among other things, Section 18 requires that the commission’s rulemaking proceedings provide an opportunity for informal hearings at which interested parties are accorded limited rights of cross-examination. Also, before commencing an MMR proceeding, the FTC must have reason to believe the practices addressed by the rulemaking are “prevalent.” 15 U.S.C. Sec. 57a(b)(3).

MMR proceedings, which are not governed by the Administrative Procedure Act (APA), do not present the same degree of legal problems as Section 6(g) rulemakings (see here). The question of legal authority to adopt a substantive rule is not raised; “rule of law” problems are far less serious (the DOJ is not a parallel enforcer of consumer-protection law); and APA issues of “arbitrariness” and “capriciousness” are not directly presented. Indeed, MMR proceedings include a variety of procedures aimed at promoting fairness (see here, for example). An MMR proceeding directed at data privacy predictably would be based on the claim that the failure to adhere to certain data-protection norms is an “unfair act or practice.”

Nevertheless, MMR rules would be subject to two substantial sources of legal risk.

The first of these arises out of federalism. Three states (California, Colorado, and Virginia) recently have enacted comprehensive data-privacy laws, and a large number of other state legislatures are considering data-privacy bills (see here). The proliferation of state data-privacy statutes would raise the risk of inconsistent and duplicative regulatory norms, potentially chilling business innovations directed at data protection (a severe problem in the Internet Age, when business data-protection programs typically will have interstate effects).

An FTC MMR data-protection regulation that successfully “occupied the field” and preempted such state provisions could eliminate that source of costs. The Magnuson-Moss Warranty Act, however, does not contain an explicit preemption clause, leaving in serious doubt the ability of an FTC rule to displace state regulations (see here for a summary of the murky state of preemption law, including the skepticism of textualist Supreme Court justices toward implied “obstacle preemption”). In particular, the long history of state consumer-protection and antitrust laws that coexist with federal laws suggests that the case for FTC rule-based displacement of state data protection is a weak one. The upshot, then, of a Section 18 FTC data-protection rule could be “the worst of all possible worlds,” with drawn-out litigation leading to competing federal and state norms that multiply business costs.

The second source of risk arises out of the statutory definition of “unfair practices,” found in Section 5(n) of the FTC Act. Section 5(n) codifies the meaning of unfair practices, and thereby constrains the FTC’s application of rulemakings covering such practices. Section 5(n) states:

The Commission shall have no authority . . . to declare unlawful an act or practice on the grounds that such an act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

In effect, Section 5(n) implicitly subjects unfair practices to a well-defined cost-benefit framework. Thus, in promulgating a data-privacy MMR, the FTC first would have to demonstrate that specific disfavored data-protection practices caused or were likely to cause substantial harm. What’s more, the commission would have to show that any actual or likely harm would not be outweighed by countervailing benefits to consumers or competition. One would expect that a data-privacy rulemaking record would include submissions that pointed to the efficiencies of existing data-protection policies that would be displaced by a rule.
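To make the statutory screen concrete, here is a minimal sketch (a loose paraphrase of the prongs quoted above; the function name and boolean framing are illustrative, not legal doctrine) of the test the commission would have to satisfy for each practice a rule condemns:

```python
# Hypothetical illustration of the Section 5(n) screen; the three boolean
# prongs paraphrase the statutory text quoted above.

def may_declare_unfair(substantial_injury: bool,
                       reasonably_avoidable: bool,
                       benefits_outweigh_harm: bool) -> bool:
    """A practice may be declared unfair only if it causes (or is likely to
    cause) substantial injury that consumers cannot reasonably avoid and that
    is not outweighed by countervailing benefits to consumers or competition."""
    return (substantial_injury
            and not reasonably_avoidable
            and not benefits_outweigh_harm)

# Example: a practice that causes real, unavoidable harm whose countervailing
# benefits nonetheless outweigh that harm cannot be condemned.
print(may_declare_unfair(True, False, True))  # -> False
```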

Moreover, subsequent federal court challenges to a final FTC rule likely would emphasize the consumer and competitive benefits sacrificed by the rule’s requirements. For example, rule challengers might point to the added business costs passed on to consumers that would arise from particular rule mandates, and the diminution in competition among data-protection systems generated by specific rule provisions. Litigation uncertainties surrounding these issues could be substantial and would cast into further doubt the legal viability of any final FTC data-protection rule.

Apart from these legal risk-based costs, an MMR data-privacy rule predictably would generate error-based costs. Given imperfect information in the hands of government and the impossibility of achieving welfare-maximizing nirvana through regulation (see, for example, here), any MMR data-privacy rule would erroneously condemn some economically efficient business protocols and disincentivize some efficiency-seeking behavior. The Section 5(n) cost-benefit framework, though helpful, would not eliminate such error. (For example, even bureaucratic efforts to accommodate some business suggestions during the rulemaking process might tilt the post-rule market in favor of certain business models, thereby distorting competition.) In the abstract, it is difficult to say whether the welfare benefits of a final MMR data-privacy rule (measured by reductions in data-privacy-related consumer harm) would outweigh the costs, even before taking legal costs into account.

Conclusion

At least two FTC commissioners (and likely a third, assuming that President Joe Biden’s highly credentialed nominee Alvaro Bedoya is confirmed by the U.S. Senate) appear to support FTC data-privacy regulation, even in the absence of new federal legislation. Such regulation, which presumably would be adopted as an MMR pursuant to Section 18 of the FTC Act, would probably not prove cost-beneficial. Not only would adoption of a final data-privacy rule generate substantial litigation costs and uncertainty, but it would also quite possibly add a layer of regulatory burdens above and beyond the requirements of proliferating state privacy rules. Furthermore, it is impossible to say whether the consumer-privacy benefits stemming from such an FTC rule would outweigh the error costs (manifested through competitive distortions and consumer harm) stemming from the inevitable imperfections of the rule’s requirements. All told, these considerations counsel against the allocation of scarce FTC resources to a Section 18 data-privacy rulemaking initiative.

But what about legislation? New federal privacy legislation that explicitly preempted state law would eliminate costs arising from inconsistencies among state privacy rules. Ideally, if such legislation were to be pursued, it should, to the extent possible, embody a cost-benefit framework designed to minimize the sum of administrative (including litigation) and error costs. The nature of such a possible law, and the role the FTC might play in administering it, however, are topics for another day.

[This post adapts elements of “Technology Mergers and the Market for Corporate Control,” forthcoming in the Missouri Law Review.]

In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.

These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors (here, here, and here, among many others).

Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.

However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to decrease the number of false positives under existing merger rules and thresholds.

The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.

For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.

Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) make them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.

The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.

Kill Zones

One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.

The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.

But both findings are problematic. For a start, Zingales and his co-authors’ theoretical model is premised on questionable assumptions about the way in which competition develops in the digital space. The first is that early adopters of new platforms—called “techies” in the authors’ parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it appears facially contradictory that “techies” are both the group with the highest switching costs and the group that switches the most. The authors further assume that “techies” would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.

Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence to support their presence in real-world settings. In that sense, the authors commit the same error as previous theoretical work concerning externalities, which has tended to overestimate their frequency.

Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:

[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.

However, the results of this study are derived from the analysis of only nine transactions. The study also fails to clearly show that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry-wide acquisitions exceeding $500 million with Facebook’s and Google’s acquisitions exceeding that amount. This does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. This does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting them.

Finally, the paper offers no evidence that existing antitrust regimes fail to achieve an optimal error-cost balance. The central problem is that the paper has indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a time of rapidly rising venture capital investment, both in terms of the number of deals and dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.

Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.

In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”

Mergers and Potential Competition

Scholars have also posited more direct effects from acquisitions of startups or nascent companies by incumbent technology firms.

Some scholars argue that incumbents might acquire rivals that do not yet compete with them directly, in order to reduce the competitive pressure they will face in the future. In his paper “Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits,” Steven Salop argues:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

However, these antitrust theories of harm suffer from several important flaws. They rest upon several restrictive assumptions that are not certain to hold in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.
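To see why this premise matters, consider a stylized textbook Cournot example (an illustration of the general logic, not a model drawn from Salop’s paper; the numbers assume linear demand and zero costs):

```python
# Stylized illustration (not from Salop's paper): inverse demand P = 1 - Q,
# zero marginal cost. Monopoly profit then exceeds joint Cournot duopoly
# profit, which is what makes a preemptive acquisition mutually advantageous.

monopoly_profit = 0.25       # monopolist sets q = 1/2, so P = 1/2, profit = 1/4
duopoly_profit_each = 1 / 9  # Cournot: each firm sets q = 1/3, so P = 1/3

joint_duopoly_profit = 2 * duopoly_profit_each                        # ~0.222
incumbent_willingness_to_pay = monopoly_profit - duopoly_profit_each  # ~0.139
entrant_reservation_value = duopoly_profit_each                       # ~0.111

# The incumbent's loss from entry (~0.139) exceeds the entrant's gain from
# entering (~0.111), so any acquisition price in between makes both better off.
print(f"bargaining range: ({entrant_reservation_value:.3f}, "
      f"{incumbent_willingness_to_pay:.3f})")

# If the rival instead expected to displace the incumbent outright (expected
# profit ~0.25 for itself), no such range would exist; that is the
# "for the market" caveat discussed below.
```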

Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.

Less obviously, it must be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is “for the market” itself, monopoly maintenance would fail to explain a rival’s decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.

Second, potential competition does not always increase consumer welfare.  Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.

For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.

There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.

In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.

Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so in a cost-effective manner. Whether it is desirable to implement a given legal test is not simply a function of its accuracy, the cost to administer it, and the respective costs of false positives and false negatives. It also critically depends on how prevalent the conduct is that adjudicators would be seeking to foreclose.

Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.

If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
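The arithmetic is easy to verify. The following minimal sketch reproduces the hypothetical’s numbers, assuming (as the example implicitly does) that the 75% accuracy rate applies symmetrically to anticompetitive and benign deals:

```python
# Reproduces the hypothetical's error counts under a symmetric-accuracy
# assumption: the test classifies each deal correctly 75% of the time.

def outcomes(total, bad, accuracy):
    good = total - bad
    false_positives = good * (1 - accuracy)  # benign deals wrongly blocked
    false_negatives = bad * (1 - accuracy)   # harmful deals wrongly cleared
    return false_positives, false_negatives

for bad in (1_000, 2_500):
    fp, fn = outcomes(10_000, bad, 0.75)
    print(f"{bad:>5} bad deals: test errors = {fp + fn:,.0f} "
          f"({fp:,.0f} FP + {fn:,.0f} FN); no test = {bad:,} FN")

# With symmetric 75% accuracy, the test always produces 2,500 total errors
# (25% of 10,000 deals), so it beats doing nothing only when more than 2,500
# deals are anticompetitive, unless false negatives are weighted as costlier
# than false positives.
```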

This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.

As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics to detect the ones that are. Finally, there is little sense that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.

Killer Acquisitions

Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”

Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:

[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]

[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.

From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.

To start, the study’s industry-specific methodology means that it may not be a useful guide to understanding acquisitions in other industries, such as the tech sector.

Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.

Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?

The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. In the case of non-overlapping mergers, acquired projects continue 17.5% of the time, while this number is 13.4% when there are overlapping pipelines. The authors argue that this gap is evidence of killer acquisitions. But this argument misses the bigger picture: under the authors’ own numbers and definition of a “killer acquisition,” a vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus have important social costs.
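A quick back-of-the-envelope check, using only the figures reported above, shows why; the decomposition below is our illustrative reading of those numbers, not a calculation the authors perform:

```python
# Back-of-the-envelope reading of the reported continuation rates; the
# "excess discontinuation" decomposition is illustrative, not the authors'.

cont_non_overlapping = 0.175  # projects continued, non-overlapping acquirers
cont_overlapping = 0.134      # projects continued, overlapping acquirers

relative_drop = (cont_non_overlapping - cont_overlapping) / cont_non_overlapping
print(f"relative drop in continuation: {relative_drop:.1%}")  # ~23.4%

# The excess discontinuation attributable to overlap is only ~4.1 percentage
# points, in line with the authors' estimate that roughly 5.3%-7.4% of deals
# are killer acquisitions. Even among overlapping acquisitions, then, the
# overwhelming majority are benign under the authors' own numbers.
excess = cont_non_overlapping - cont_overlapping
print(f"excess discontinuation among overlapping deals: {excess:.1%}")
```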

Third, there are several problems with describing this kind of behavior as harmful. Indeed, Cunningham, et al., acknowledge that the prospect of such acquisitions could increase innovation ex ante by boosting the expected returns to it.

And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:

For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.

One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:

The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.

Another report finds that:

Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.

In short, pharmaceutical companies do not compete only on innovation-related parameters, though these are obviously important, but also on more traditional grounds, such as cost rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent’s superior managerial efficiency to the acquired firm’s assets through operation of the market for corporate control.

This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, thereby benefiting consumers, and it increases firms’ incentives to invest in R&D in the first place by making successfully developed drugs more profitable.

In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.

While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.

A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is notably driven by the central role that digital markets currently occupy in competition-policy discussion, but also by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.

This is not surprising. Unlike in the pharmaceutical industry—where drugs need to go through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle to identify their closest rivals and to prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.

The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than non-overlapping tech mergers). This is notably the case for studies by Axel Gautier & Joe Lamesch, and Elena Argentesi and her co-authors. Instead, these studies merely show that product discontinuations are common after an acquisition by a big tech company.

To summarize, while studies of this sort might suggest that the clearance of certain mergers might not have been optimal, such findings are hardly a sufficient basis on which to argue that enforcement should be tightened.

The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never a sufficient basis to tighten rules. For that, it is also necessary to factor in the administrative costs of increased enforcement, as well as potential false convictions to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.

Conclusion

Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society (here, here and here); anticompetitive ones are rare; and there is little way, at the margin, to tell good from bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.

Take the pharmaceutical industry, the fulcrum of these policy debates. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, Pfizer and BioNTech’s successful efforts to market an mRNA vaccine against COVID-19 offer a case in point.

The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.

Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.

The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—just as they have yet to suggest alternative institutional arrangements that would improve social welfare.

In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.

Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely. And it eliminates the vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits from catching the very few arguably anticompetitive mergers that currently escape prosecution outweigh the significant costs that would be required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.

The U.S. House this week passed H.R. 2668, the Consumer Protection and Recovery Act (CPRA), which authorizes the Federal Trade Commission (FTC) to seek monetary relief in federal courts for injunctions brought under Section 13(b) of the Federal Trade Commission Act.

Potential relief under the CPRA is comprehensive. It includes “restitution for losses, rescission or reformation of contracts, refund of money, return of property … and disgorgement of any unjust enrichment that a person, partnership, or corporation obtained as a result of the violation that gives rise to the suit.” What’s more, under the CPRA, monetary relief may be obtained for violations that occurred up to 10 years before the filing of the suit in which relief is requested by the FTC.

The Senate should reject the House version of the CPRA. Its monetary-recovery provisions require substantial narrowing if it is to pass cost-benefit muster.

The CPRA is a response to the Supreme Court’s April 22 decision in AMG Capital Management v. FTC, which held that Section 13(b) of the FTC Act does not authorize the commission to obtain court-ordered equitable monetary relief. As I explained in an April 22 Truth on the Market post, Congress’ response to the court’s holding should not be to grant the FTC carte blanche authority to obtain broad monetary exactions for any and all FTC Act violations. I argued that “[i]f Congress adopts a cost-beneficial error-cost framework in shaping targeted legislation, it should limit FTC monetary relief authority (recoupment and disgorgement) to situations of consumer fraud or dishonesty arising under the FTC’s authority to pursue unfair or deceptive acts or practices.”

Error costs and calculation difficulties counsel against pursuing monetary recovery in FTC unfair methods of competition cases. As I explained in my post:

Consumer redress actions are problematic for a large proportion of FTC antitrust enforcement (“unfair methods of competition”) initiatives. Many of these antitrust cases are “cutting edge” matters involving novel theories and complex fact patterns that pose a significant threat of type I [false positives] error. (In comparison, type I error is low in hardcore collusion cases brought by the U.S. Justice Department where the existence, nature, and effects of cartel activity are plain). What’s more, they generally raise extremely difficult if not impossible problems in estimating the degree of consumer harm. (Even DOJ price-fixing cases raise non-trivial measurement difficulties.)

These error-cost and calculation difficulties became even more pronounced as of July 1. On that date, the FTC unwisely voted 3-2 to withdraw a bipartisan 2015 policy statement providing that the commission would apply consumer welfare and rule-of-reason (weighing efficiencies against anticompetitive harm) considerations in exercising its unfair methods of competition authority (see my commentary here). This means that, going forward, the FTC will arrogate to itself unbounded discretion to decide what competitive practices are “unfair.” Business uncertainty, and the costly risk aversion it engenders, would be expected to grow enormously if the FTC could extract monies from firms due to competitive behavior deemed “unfair,” based on no discernible neutral principle.

Error costs and calculation problems also strongly suggest that monetary relief in FTC consumer-protection matters should be limited to cases of fraud or clear deception. As I noted:

[M]atters involving a higher likelihood of error and severe measurement problems should be the weakest candidates for consumer redress in the consumer protection sphere. For example, cases involv[ing] allegedly misleading advertising regarding the nature of goods, or allegedly insufficient advertising substantiation, may generate high false positives and intractable difficulties in estimating consumer harm. As a matter of judgment, given resource constraints, seeking financial recoveries solely in cases of fraud or clear deception where consumer losses are apparent and readily measurable makes the most sense from a cost-benefit perspective.

In short, the Senate should rewrite the Section 13(b) amendments to authorize FTC monetary recoveries only when consumer fraud or dishonesty is shown.

Finally, the Senate would be wise to sharply pare back the House language that allows the FTC to seek monetary exactions based on conduct that is a decade old. Serious problems of making accurate factual determinations of economic effects and specific-damage calculations would arise after such a long period of time. Allowing retroactive determinations based on a shorter “look-back” period prior to the filing of a complaint (three years, perhaps) would appear to strike a better balance in allowing reasonable redress while controlling error costs.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Nicolas Petit himself, the Joint Chair in Competition Law at the Department of Law at European University Institute in Fiesole, Italy, and at EUI’s Robert Schuman Centre for Advanced Studies. He is also invited professor at the College of Europe in Bruges.]

A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act (“DMA”) that sets out a complicated regulatory system for digital “gatekeepers,” with per se behavioral limitations of their freedom over contractual terms, technological design, monetization, and ecosystem leadership.

Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists ran to Twitter to point out that the U.S. District Court dismissal of the Federal Trade Commission’s (FTC) complaint against Facebook was one more piece of evidence supporting the view that the antitrust pendulum needed to swing. They are instrumental because, again, government (or plaintiff) wins will support scaling antitrust enforcement in the marginal case by adoption of governmental regulation. In the EU, antitrust cases follow one another almost as night follows day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.

But both instrumentalities are, at the end of the line, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts, and they can advance with haste and confidence toward the promulgation of new antitrust statutes.

The antitrust reform process that is unfolding gives cause for concern. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, like when it breaks judicial inertia caused by ideological logjam.

The issue is rather one of precipitation. There is a lot of learning in the cases. The point, simply put, is that a supplementary court-legislative dialogue would yield additional information—or what Guido Calabresi has called “starting points” for regulation—that premature legislative intervention is sweeping under the rug. This issue is important because specification errors (see Doug Melamed’s symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.

Moreover, a court-legislative iteration is useful when the issues in discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi said, judges who know the legal topography are well-placed to elicit the preferences of society. He added that they are better placed than government agencies’ officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).

Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook’s concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook’s concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe shows unprecedented vitality in the digital sector. Venture capital funding cruises at historical heights, fueling new firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries—or discount the social value of entry in the digital sector—this should give us reason to push pause on lawmaking efforts.

The second objection is that following an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing at best. One may ask which brings more uncertainty: an abrupt legislative change of the law after decades of legal stability, or an experimental process of judicial renovation.

Besides, ad hoc statutes, such as the ones in discussion, are likely to pose, quickly and dramatically, the problem of their own legal obsolescence. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is the market power of Microsoft over Windows still relevant today, and isn’t it constrained in effect by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change.[1] The best way to give meaning to buzzwords like “smart” or “future-proof” regulation consists of building in first principles, not in creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.

The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But is the economic literature actually supportive of stricter rules against defendants than the rule-of-reason framework that applies in many unilateral conduct cases and in merger law? The answer is surely no. The theoretical economic literature has travelled a lot in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.

Take the celebrated multi-sided market theory. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, like ridesharing, targeted advertising, or social networks. Multi-sided market theory renders these tradeoffs visible. But it does not come with a clear recipe for how to solve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.

Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge around the selection of a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly obsolesces, are policy efforts at all needed?
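For readers who want to see this mechanism at work, here is a stylized Arthur-style simulation (an illustration of the increasing-returns logic, not a model from the book; all parameter values are arbitrary assumptions): sequential adopters choose between two technologies based on intrinsic quality plus a bonus from the installed base, and a random early lead can lock the market into the intrinsically inferior option.

```python
# Stylized increasing-returns simulation: technology B is intrinsically
# better, but each adopter's payoff also rises with the installed base, so
# an early random lead for A can tip the market toward the inferior standard.
import random

def market_tips_to_inferior(adopters=1_000, quality=(1.0, 1.2), network=0.01):
    installed = [0, 0]  # installed base of technologies A and B
    for _ in range(adopters):
        payoffs = [quality[i] + network * installed[i] + random.gauss(0, 0.5)
                   for i in (0, 1)]
        installed[payoffs.index(max(payoffs))] += 1
    return installed[0] > installed[1]  # True if inferior tech A wins

runs = 2_000
share = sum(market_tips_to_inferior() for _ in range(runs)) / runs
print(f"inferior technology locked in: {share:.0%} of runs")
```

The share of runs that tip the wrong way is highly sensitive to the assumed quality gap, noise, and network strength, which mirrors the point above: the theory shows that lock-in to a suboptimal standard is possible, not when it has actually occurred.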

Last, as Bo Heiden’s thought-provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to keep an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have been very silent on antitrust reform).

There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, like in relation to algorithmic collusion, interlocking directorates, or “killer” acquisitions. But the applications are discrete and industry-specific. All are insufficient to declare that the antitrust apparatus is dated and that it requires a full overhaul. When modern economic research turns normative, it is often far more subtle in its implications than some wild policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way imply that there are no pro-competitive mergers.

Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure in the digital sectors. Most suggest that industry concentration has risen, particularly in the digital sector. As with any research exercise, these reports’ findings deserve to be subjected to critical examination before they can be deemed supportive of a claim of “sufficient experience.” Moreover, there is no reason to subject these reports to a lower standard of accountability on grounds that they have often been drafted by experts upon demand from antitrust agencies. After all, we academics are ethically obliged to be at least equally exacting with policy-based research as we are with science-based research.

Now, with healthy skepticism at the back of one’s mind, one can see immediately that the findings of expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power—such as intense business anxiety, technological innovation, and demand-expansion investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran its leading story on the new marketplace reality of “Tech’s Big Dust-Up.”

More importantly, the findings of the various expert reports never seriously contemplate the possibility of competition by differentiation in business models among the platforms. Take privacy, for example. As Peter Klein reasonably writes in his symposium article, we should not be quick to assume market failure. After all, we might have more choice than meets the eye, with Google free but ad-based, and Apple pricey but less targeted. More generally, Richard Langlois makes a very convincing point that diversification is at the heart of competition between the large digital gatekeepers. We might just be too short-termist—here, digital communications technology might help create a false sense of urgency—to wait for the end state of the Big Tech moligopoly.

Similarly, the expert reports did not really question the real possibility of competition for the purchase of regulation. As in the classic George Stigler paper, where the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation might be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though it is hard to know how much weight to give this issue, there are signs that a coalition of large news corporations and the publishing oligopoly are behind many antitrust initiatives against digital firms.

As is now clear from these few lines, my cautionary note against antitrust statutorification might be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has now become inevitable. The United States, however, has a chance to get this right. Court cases are the way to go. And unlike what the popular coverage suggests, the recent District Court dismissal of the FTC case far from ruled out the applicability of U.S. antitrust laws to Facebook’s alleged killer acquisitions. On the contrary, the ruling actually contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit if we studied more carefully the learning that lies in the cases, rather than hasten to produce instant antitrust analysis on Twitter that fits within 280 characters.


[1] But some threshold conditions like agreement or dominance might also become dated. 

The Biden Administration’s July 9 Executive Order on Promoting Competition in the American Economy is very much a mixed bag—some positive aspects, but many negative ones.

It will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as allowing direct sales of hearing aids in drug stores—and helping to eliminate unnecessary occupational licensing restrictions, to name just two of several examples.

But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order appear to emphasize new regulation—such as Net Neutrality requirements that may reduce investment in broadband by internet service providers—and the imposition of new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-beneficial appraisal and a showing of a market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)

Antitrust-related proposals to challenge previously cleared mergers, and to impose new antitrust rulemaking, are likely to raise costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.

An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.

In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:

  1. Deals effectively with serious competitive problems; while at the same time
  2. Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.

Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.

Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges, and may, in many cases, not succeed in court. This will impose unnecessary business uncertainty in addition to public and private resources wasted on litigation.

There is little doubt that Federal Trade Commission (FTC) unfair methods of competition rulemaking proceedings are in the offing. Newly named FTC Chair Lina Khan and Commissioner Rohit Chopra both have extolled the benefits of competition rulemaking in a major law review article. What’s more, in May, Commissioner Rebecca Slaughter (during her stint as acting chair) established a rulemaking unit in the commission’s Office of General Counsel empowered to “explore new rulemakings to prohibit unfair or deceptive practices and unfair methods of competition” (emphasis added).

In short, a majority of sitting FTC commissioners apparently endorse competition rulemaking proceedings. As such, it is timely to ask whether FTC competition rules would promote consumer welfare, the paramount goal of competition policy.

In a recently published Mercatus Center research paper, I assess the case for competition rulemaking from a competition perspective and find it wanting. I conclude that, before proceeding, the FTC should carefully consider whether such rulemakings would be cost-beneficial. I explain that any cost-benefit appraisal should weigh both the legal risks and the potential economic policy concerns (error costs and “rule of law” harms). Based on these considerations, competition rulemaking is inappropriate. The FTC should stick with antitrust enforcement as its primary tool for strengthening the competitive process and thereby promoting consumer welfare.

A summary of my paper follows.

Section 6(g) of the original Federal Trade Commission Act authorizes the FTC “to make rules and regulations for the purpose of carrying out the provisions of this subchapter.” Section 6(g) rules are enacted pursuant to the “informal rulemaking” requirements of Section 553 of the Administrative Procedure Act (APA), which apply to the vast majority of federal agency rulemaking proceedings.

Before launching Section 6(g) competition rulemakings, however, the FTC would be well-advised first to weigh the legal risks and policy concerns associated with such an endeavor. Rulemakings are resource-intensive proceedings and should not lightly be undertaken without an eye to their feasibility and implications for FTC enforcement policy.

Only one appeals court decision addresses the scope of Section 6(g) rulemaking. In 1971, the FTC enacted a Section 6(g) rule stating that it was both an “unfair method of competition” and an “unfair act or practice” for refiners or others who sell to gasoline retailers “to fail to disclose clearly and conspicuously in a permanent manner on the pumps the minimum octane number or numbers of the motor gasoline being dispensed.” In 1973, in the National Petroleum Refiners case, the U.S. Court of Appeals for the D.C. Circuit upheld the FTC’s authority to promulgate this and other binding substantive rules. The court rejected the argument that Section 6(g) authorized only non-substantive regulations concerning the FTC’s non-adjudicatory, investigative, and informative functions, spelled out elsewhere in Section 6.

In 1975, two years after National Petroleum Refiners was decided, Congress granted the FTC specific consumer-protection rulemaking authority (authorizing enactment of trade regulation rules dealing with unfair or deceptive acts or practices) through Section 202 of the Magnuson-Moss Warranty Act, which added Section 18 to the FTC Act. Magnuson-Moss rulemakings impose adjudicatory-type hearings and other specific requirements on the FTC, unlike the more flexible Section 6(g) APA informal rulemakings. However, the FTC can obtain civil penalties for violation of Magnuson-Moss rules, something it cannot do if Section 6(g) rules are violated.

In a recent set of public comments filed with the FTC, the Antitrust Section of the American Bar Association stated:

[T]he Commission’s [6(g)] rulemaking authority is buried within an enumerated list of investigative powers, such as the power to require reports from corporations and partnerships, for example. Furthermore, the [FTC] Act fails to provide any sanctions for violating any rule adopted pursuant to Section 6(g). These two features strongly suggest that Congress did not intend to give the agency substantive rulemaking powers when it passed the Federal Trade Commission Act.

Rephrased, this argument suggests that the structure of the FTC Act indicates that the rulemaking referenced in Section 6(g) is best understood as an aid to FTC processes and investigations, not a source of substantive policymaking. Although the National Petroleum Refiners decision rejected such a reading, that ruling came at a time of significant judicial deference to federal agency activism, and may be dated.

The U.S. Supreme Court’s April 2021 decision in AMG Capital Management v. FTC further bolsters the “statutory structure” argument that Section 6(g) does not authorize substantive rulemaking. In AMG, the U.S. Supreme Court unanimously held that Section 13(b) of the FTC Act, which empowers the FTC to seek a “permanent injunction” to restrain an FTC Act violation, does not authorize the FTC to seek monetary relief from wrongdoers. The court’s opinion rejected the FTC’s argument that the term “permanent injunction” had historically been understood to include monetary relief. The court explained that the injunctive language was “buried” in a lengthy provision that focuses on injunctive, not monetary relief (note that the term “rules” is similarly “buried” within 6(g) language dealing with unrelated issues). The court also pointed to the structure of the FTC Act, with detailed and specific monetary-relief provisions found in Sections 5(l) and 19, as “confirm[ing] the conclusion” that Section 13(b) does not grant monetary relief.

By analogy, a court could point to Congress’ detailed enumeration of substantive rulemaking provisions in Section 18 (a mere two years after National Petroleum Refiners) as cutting against the claim that Section 6(g) can also be invoked to support substantive rulemaking. Finally, the Supreme Court in AMG flatly rejected several relatively recent appeals court decisions that upheld Section 13(b) monetary-relief authority. It follows that the FTC cannot confidently rely on judicial precedent (stemming from one arguably dated court decision, National Petroleum Refiners) to uphold its competition rulemaking authority.

In sum, the FTC will have to overcome serious legal challenges to its Section 6(g) competition rulemaking authority if it seeks to promulgate competition rules.

Even if the FTC’s 6(g) authority is upheld, it faces three other types of litigation-related risks.

First, applying the nondelegation doctrine, courts might hold that the broad term “unfair methods of competition” does not provide the FTC “an intelligible principle” to guide its exercise of discretion in rulemaking. Such a judicial holding would mean the FTC could not issue competition rules.

Second, a reviewing court might strike down individual proposed rules as “arbitrary and capricious” if, say, the court found that the FTC rulemaking record did not sufficiently take into account potentially procompetitive manifestations of a condemned practice.

Third, even if a final competition rule passes initial legal muster, applying its terms to individual businesses charged with rule violations may prove difficult. Individual businesses may seek to structure their conduct to evade the particular strictures of a rule, and changes in commercial practices may render less common the specific acts targeted by a rule’s language.

Economic Policy Concerns Raised by Competition Rulemaking

In addition to legal risks, any cost-benefit appraisal of FTC competition rulemaking should consider the economic policy concerns that such rulemaking raises. These fall into two broad categories.

First, competition rules would generate higher error costs than adjudications. Adjudications cabin error costs by allowing for case-specific analysis of likely competitive harms and procompetitive benefits. In contrast, competition rules inherently would be overbroad and would suffer from a very high rate of false positives. By characterizing certain practices as inherently anticompetitive without allowing for consideration of case-specific facts bearing on actual competitive effects, findings of rule violations inevitably would condemn some (perhaps many) efficient arrangements.
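To make the comparison concrete, it can be expressed as a stylized expected-cost inequality. The notation below is mine, offered purely as an illustration of the trade-off described above:

```latex
% Stylized condition for a per se competition rule to outperform case-by-case
% adjudication (illustrative notation, not drawn from the paper):
\[
  p_{I}^{R} C_{I} + p_{II}^{R} C_{II} + A^{R}
  \;<\;
  p_{I}^{A} C_{I} + p_{II}^{A} C_{II} + A^{A}
\]
% p_I, p_II : probabilities of false positives and false negatives
% C_I, C_II : social costs of each error type
% A         : administrative cost; superscript R = rule, A = adjudication
%
% The argument in the text is that p_I^R far exceeds p_I^A for competition
% rules, so any administrative savings (A^R < A^A) rarely close the gap.
```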

Second, competition rules would undermine the rule of law and thereby reduce economic welfare. FTC-only competition rules could lead to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency. Also, economic efficiency gains could be lost due to the chilling of aggressive efficiency-seeking business arrangements in those sectors subject to rules.

Conclusion

A combination of legal risks and economic policy harms strongly counsels against the FTC’s promulgation of substantive competition rules.

First, litigation issues would consume FTC resources and add to the costly delays inherent in developing competition rules in the first place. The compounding of separate serious litigation risks suggests a significant probability that costs would be incurred in support of rules that ultimately would fail to be applied.

Second, even assuming competition rules were to be upheld, their application would raise serious economic policy questions. The inherent inflexibility of rule-based norms is ill-suited to deal with dynamic evolving market conditions, compared with matter-specific antitrust litigation that flexibly applies the latest economic thinking to particular circumstances. New competition rules would also exacerbate costly policy inconsistencies stemming from the existence of dual federal antitrust enforcement agencies, the FTC and the Justice Department.

In conclusion, an evaluation of rule-related legal risks and economic policy concerns demonstrates that a reallocation of some FTC enforcement resources to the development of competition rules would not be cost-effective. Continued sole reliance on case-by-case antitrust litigation would generate greater economic welfare than a mixture of litigation and competition rules.

The U.S. Supreme Court’s just-published unanimous decision in AMG Capital Management LLC v. FTC—holding that Section 13(b) of the Federal Trade Commission Act does not authorize the commission to obtain court-ordered equitable monetary relief (such as restitution or disgorgement)—is not surprising. Moreover, by dissipating the cloud of litigation uncertainty that has surrounded the FTC’s recent efforts to seek such relief, the court cleared the way for consideration of targeted congressional legislation to address the issue.

But what should such legislation provide? After briefly summarizing the court’s holding, I will turn to the appropriate standards for optimal FTC consumer redress actions, which inform a welfare-enhancing legislative fix.

The Court’s Opinion

Justice Stephen Breyer’s opinion for the court is straightforward, centering on the structure and history of the FTC Act. Section 13(b) makes no direct reference to monetary relief. Its plain language merely authorizes the FTC to seek a “permanent injunction” in federal court against “any person, partnership, or corporation” that it believes “is violating, or is about to violate, any provision of law” that the commission enforces. In addition, by its terms, Section 13(b) is forward-looking, focusing on relief that is prospective, not retrospective (this cuts against the argument that payments for prior harm may be recouped from wrongdoers).

Furthermore, the FTC Act provisions that specifically authorize conditioned and limited forms of monetary relief (Section 5(l) and Section 19) arise in the context of commission cease-and-desist orders involving FTC administrative proceedings, unlike Section 13(b) actions, which avoid the administrative route. In sum, the court concludes that:

[T]o read §13(b) to mean what it says, as authorizing injunctive but not monetary relief, produces a coherent enforcement scheme: The Commission may obtain monetary relief by first invoking its administrative procedures and then §19’s redress provisions (which include limitations). And the Commission may use §13(b) to obtain injunctive relief while administrative proceedings are foreseen or in progress, or when it seeks only injunctive relief. By contrast, the Commission’s broad reading would allow it to use §13(b) as a substitute for §5 and §19. For the reasons we have just stated, that could not have been Congress’ intent.

The court’s opinion concludes by succinctly rejecting the FTC’s arguments to the contrary.

What Comes Next

The Supreme Court’s decision has been anticipated by informed observers. All four sitting FTC Commissioners have already called for a Section 13(b) “legislative fix,” and in an April 20 hearing of the Senate Commerce Committee, Chairwoman Maria Cantwell (D-Wash.) emphasized that, “[w]e have to do everything we can to protect this authority and, if necessary, pass new legislation to do so.”

What, however, should be the contours of such legislation? In considering alternative statutory rules, legislators should keep in mind not only the possible consumer benefits of monetary relief, but the costs of error as well. Error costs are a ubiquitous element of public law enforcement, and this is particularly true in the case of FTC actions. Ideally, enforcers should seek to minimize the sum of the costs attributable to false positives (type I error), false negatives (type II error), administrative costs, and disincentive costs imposed on third parties, which may also be viewed as a subset of false positives. (See my 2014 piece “A Cost-Benefit Framework for Antitrust Enforcement Policy.”)
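Stated compactly, and in my notation rather than that of the cited piece, the enforcer’s objective looks something like the following:

```latex
% Enforcer's problem (stylized): choose enforcement policy e to minimize
% the sum of error, administrative, and third-party disincentive costs.
\[
  \min_{e} \; C(e) \;=\; C_{I}(e) + C_{II}(e) + C_{\mathrm{admin}}(e)
  + C_{\mathrm{disincentive}}(e)
\]
% C_I  : expected cost of false positives (condemning benign conduct)
% C_II : expected cost of false negatives (missing harmful conduct)
% The final two terms capture enforcement outlays and the chilling of
% third-party conduct, which the text notes is itself a species of type I cost.
```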

Monetary relief is most appropriate in cases where error costs are minimal, and the quantum of harm is relatively easy to measure. This suggests a spectrum of FTC enforcement actions that may be candidates for monetary relief. Ideally, selection of targets for FTC consumer redress actions should be calibrated to yield the highest return to scarce enforcement resources, with an eye to optimal enforcement criteria.

Consider consumer protection enforcement. The strongest cases involve hardcore consumer fraud (where fraudulent purpose is clear and error is almost nil); they best satisfy accuracy in measurement and error-cost criteria. Next along the spectrum are cases of non-fraudulent but unfair or deceptive acts or practices that potentially involve some degree of error. In this category, situations involving easily measurable consumer losses (e.g., systematic failure to deliver particular goods requested or poor quality control yielding shipments of ruined goods) would appear to be the best candidates for monetary relief.

Moving along the spectrum, matters involving a higher likelihood of error and severe measurement problems should be the weakest candidates for consumer redress in the consumer protection sphere. For example, cases involving allegedly misleading advertising regarding the nature of goods, or allegedly insufficient advertising substantiation, may generate a high rate of false positives and intractable difficulties in estimating consumer harm. As a matter of judgment, given resource constraints, seeking financial recoveries solely in cases of fraud or clear deception where consumer losses are apparent and readily measurable makes the most sense from a cost-benefit perspective.

Consumer redress actions are problematic for a large proportion of FTC antitrust enforcement (“unfair methods of competition”) initiatives. Many of these antitrust cases are “cutting edge” matters involving novel theories and complex fact patterns that pose a significant threat of type I error. (In comparison, type I error is low in hardcore collusion cases brought by the U.S. Justice Department where the existence, nature, and effects of cartel activity are plain). What’s more, they generally raise extremely difficult if not impossible problems in estimating the degree of consumer harm. (Even DOJ price-fixing cases raise non-trivial measurement difficulties.)

For example, consider assigning a consumer welfare loss number to a patent antitrust settlement that may or may not have delayed entry of a generic drug by some length of time (depending upon the strength of the patent) or to a decision by a drug company to modify a drug slightly just before patent expiration in order to obtain a new patent period (raising questions of valuing potential product improvements). These and other examples suggest that only rarely should the FTC pursue requests for disgorgement or restitution in antitrust cases, if error-cost-centric enforcement criteria are to be honored.

Unfortunately, the FTC currently has nothing to say about when it will seek monetary relief in antitrust matters. Commendably, in 2003, the commission issued a Policy Statement on Monetary Equitable Remedies in Competition Cases specifying that it would only seek monetary relief in “exceptional cases” involving a “[c]lear [v]iolation” of the antitrust laws. Regrettably, in 2012, a majority of the FTC (with Commissioner Maureen Ohlhausen dissenting) withdrew that policy statement and the limitations it imposed. As I concluded in a 2012 article:

This action, which was taken without the benefit of advance notice and public comment, raises troubling questions. By increasing business uncertainty, the withdrawal may substantially chill efficient business practices that are not well understood by enforcers. In addition, it raises the specter of substantial error costs in the FTC’s pursuit of monetary sanctions. In short, it appears to represent a move away from, rather than towards, an economically enlightened antitrust enforcement policy.

In a 2013 speech, then-FTC Commissioner Josh Wright also lamented the withdrawal of the 2003 Statement, and stated that he would limit:

… the FTC’s ability to pursue disgorgement only against naked price fixing agreements among competitors or, in the case of single firm conduct, only if the monopolist’s conduct has no plausible efficiency justification. This latter category would include fraudulent or deceptive conduct, or tortious activity such as burning down a competitor’s plant.

As a practical matter, the FTC does not bring cases of this sort. The DOJ brings naked price-fixing cases and the unilateral conduct cases noted are as scarce as unicorns. Given that fact, Wright’s recommendation may rightly be seen as a rejection of monetary relief in FTC antitrust cases. Based on the previously discussed serious error-cost and measurement problems associated with monetary remedies in FTC antitrust cases, one may also conclude that the Wright approach is right on the money.

Finally, a recent article by former FTC Chairman Tim Muris, Howard Beales, and Benjamin Mundel opined that Section 13(b) should be construed to “limit[] the FTC’s ability to obtain monetary relief to conduct that a reasonable person would know was dishonest or fraudulent.” Although such a statutory reading is now precluded by the Supreme Court’s decision, its incorporation in a new statutory “fix” would appear ideal. It would allow for consumer redress in appropriate cases, while avoiding the likely net welfare losses arising from a more expansive approach to monetary remedies.

Conclusion

The AMG Capital decision is sure to generate legislative proposals to restore the FTC’s ability to secure monetary relief in federal court. If Congress adopts a cost-beneficial error-cost framework in shaping targeted legislation, it should limit FTC monetary relief authority (recoupment and disgorgement) to situations of consumer fraud or dishonesty arising under the FTC’s authority to pursue unfair or deceptive acts or practices. Giving the FTC carte blanche to obtain financial recoveries in the full spectrum of antitrust and consumer protection cases would spawn uncertainty and could chill a great deal of innovative business behavior, to the ultimate detriment of consumer welfare.



Antitrust by Fiat

Jonathan M. Barnett —  23 February 2021

The Competition and Antitrust Law Enforcement Reform Act (CALERA), recently introduced in the U.S. Senate, exhibits a remarkable willingness to cast aside decades of evidentiary standards that courts have developed to uphold the rule of law by precluding factually and economically ungrounded applications of antitrust law. Without those safeguards, antitrust enforcement is prone to be driven by a combination of prosecutorial and judicial fiat. That would place at risk the free play of competitive forces that the antitrust laws are designed to protect.

Antitrust law inherently lends itself to the risk of erroneous interpretations of ambiguous evidence. Outside clear cases of interfirm collusion, virtually all conduct that might appear anti-competitive might just as easily be proven, after significant factual inquiry, to be pro-competitive. This fundamental risk of a false diagnosis has guided antitrust case law and regulatory policy since at least the Supreme Court’s landmark Continental Television v. GTE Sylvania decision in 1977 and arguably earlier. Judicial and regulatory efforts to mitigate this ambiguity, while preserving the deterrent power of the antitrust laws, have resulted in the evidentiary requirements that are targeted by the proposed bill.

Proponents of the legislative “reforms” might argue that modern antitrust case law’s careful avoidance of enforcement error yields excessive caution. To relieve regulators and courts from having to do their homework before disrupting a targeted business and its employees, shareholders, customers and suppliers, the proposed bill empowers plaintiffs to allege and courts to “find” anti-competitive conduct without having to be bound to the reasonably objective metrics upon which courts and regulators have relied for decades. That runs the risk of substituting rhetoric and intuition for fact and analysis as the guiding principles of antitrust enforcement and adjudication.

This dismissal of even a rudimentary commitment to rule-of-law principles is illustrated by two dramatic departures from existing case law in the proposed bill. Each constitutes a largely unrestrained “blank check” for regulatory and judicial overreach.

Blank Check #1

The bill includes a broad prohibition on “exclusionary” conduct, which is defined to include any conduct that “materially disadvantages 1 or more actual or potential competitors” and “presents an appreciable risk of harming competition.” That amorphous language arguably enables litigants to target a firm that offers consumers lower prices but “disadvantages” less efficient competitors that cannot match that price.

In fact, the proposed legislation specifically facilitates this litigation strategy by relieving predatory pricing claims from having to show that pricing is below cost or likely to result ultimately in profits for the defendant. While the bill permits a defendant to escape liability by showing sufficiently countervailing “procompetitive benefits,” the onus rests on the defendant to make that showing. This burden-shifting strategy encourages lagging firms to shift competition from the marketplace to the courthouse.

Blank Check #2

The bill then removes another evidentiary safeguard by relieving plaintiffs from always having to define a relevant market. Rather, it may be sufficient to show that the contested practice gives rise to an “appreciable risk of harming competition … based on the totality of the circumstances.” It is hard to miss the high degree of subjectivity in this standard.

This ambiguous threshold runs counter to antitrust principles that require a credible showing of market power in virtually all cases except horizontal collusion. Those principles make perfect sense. Market power is the gateway concept that enables courts to distinguish between claims that plausibly target alleged harms to competition and those that do not. Without a well-defined market, it is difficult to know whether a particular practice reflects market power or market competition. Removing the market power requirement can remove any meaningful grounds on which a defendant could avoid a nuisance lawsuit or contest or appeal a conclusory allegation or finding of anticompetitive conduct.

Anti-Market Antitrust

The bill’s transparently outcome-driven approach is likely to give rise to a cloud of liability that penalizes businesses that benefit consumers through price and quality combinations that competitors cannot replicate. This obviously runs directly counter to the purpose of the antitrust laws. Certainly, winners can and sometimes do entrench themselves through potentially anticompetitive practices that should be closely scrutinized. However, the proposed legislation seems to reflect a presumption that successful businesses usually win by employing illegitimate tactics, rather than simply being the most efficient firm in the market. Under that assumption, competition law becomes a tool for redoing, rather than enabling, competitive outcomes.

While this populist approach may be popular, it is neither economically sound nor consistent with a market-driven economy in which resources are mostly allocated through pricing mechanisms and government intervention is the exception, not the rule. It would appear that some legislators would like to reverse that presumption. Far from being a victory for consumers, that outcome would constitute a resounding loss.

FTC v. Qualcomm

Last week the International Center for Law & Economics (ICLE) and twelve noted law and economics scholars filed an amicus brief in the Ninth Circuit in FTC v. Qualcomm, in support of appellant (Qualcomm) and urging reversal of the district court’s decision. The brief was authored by Geoffrey A. Manne, President & founder of ICLE, and Ben Sperry, Associate Director, Legal Research of ICLE. Jarod M. Bona and Aaron R. Gott of Bona Law PC collaborated in drafting the brief and they and their team provided invaluable pro bono legal assistance, for which we are enormously grateful. Signatories on the brief are listed at the end of this post.

We’ve written about the case several times on Truth on the Market, as have a number of guest bloggers, in our ongoing blog series on the case here.   

The ICLE amicus brief focuses on the ways that the district court exceeded the “error cost” guardrails erected by the Supreme Court to minimize the risk and cost of mistaken antitrust decisions, particularly those that wrongly condemn procompetitive behavior. As the brief notes at the outset:

The district court’s decision is disconnected from the underlying economics of the case. It improperly applied antitrust doctrine to the facts, and the result subverts the economic rationale guiding monopolization jurisprudence. The decision—if it stands—will undercut the competitive values antitrust law was designed to protect.  

The antitrust error cost framework was most famously elaborated by Frank Easterbrook in his seminal article, The Limits of Antitrust (1984). It has since been squarely adopted by the Supreme Court—most significantly in Brooke Group (1993), Trinko (2004), and linkLine (2009).

In essence, the Court’s monopolization case law implements the error cost framework by (among other things) obliging courts to operate under certain decision rules that limit the use of inferences about the consequences of a defendant’s conduct except when the circumstances create what game theorists call a “separating equilibrium.” A separating equilibrium is a 

solution to a game in which players of different types adopt different strategies and thereby allow an uninformed player to draw inferences about an informed player’s type from that player’s actions.

Baird, Gertner & Picker, Game Theory and the Law

The key problem in antitrust is that while the consequence of complained-of conduct for competition (i.e., consumers) is often ambiguous, its deleterious effect on competitors is typically quite evident—whether it is actually anticompetitive or not. The question is whether (and when) it is appropriate to infer anticompetitive effect from discernible harm to competitors. 
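A toy model makes the separating/pooling distinction concrete. The sketch below is purely illustrative—the firm types, actions, and payoff numbers are hypothetical, not drawn from the brief—but it shows the inference the framework licenses and the one it forbids:

```python
# Illustrative separating-equilibrium check (stylized; not from the brief).
# Two firm "types" -- predatory and efficient -- choose whether to price
# below cost. A court (the uninformed player) may infer type from the
# action only if exactly one type would rationally take it.

payoffs = {
    # (firm_type, action): payoff to the firm (hypothetical numbers)
    ("efficient", "price_below_cost"): -10,  # sacrifice with no recoupment
    ("efficient", "price_normally"):    20,
    ("predatory", "price_below_cost"):  30,  # losses recouped after rival exits
    ("predatory", "price_normally"):    15,
}

def best_action(firm_type):
    """Return the profit-maximizing action for a given firm type."""
    actions = ("price_below_cost", "price_normally")
    return max(actions, key=lambda a: payoffs[(firm_type, a)])

choices = {t: best_action(t) for t in ("efficient", "predatory")}
separating = choices["efficient"] != choices["predatory"]

print(choices)  # {'efficient': 'price_normally', 'predatory': 'price_below_cost'}
print("separating equilibrium" if separating else "pooling equilibrium")

# Here only the predatory type chooses below-cost pricing, so observing that
# action supports an inference about type. If both types would choose the
# same action (a pooling equilibrium), the inference is unwarranted -- which
# is precisely why harm to competitors, standing alone, proves nothing.
```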

Except in the narrowly circumscribed (by Trinko) instance of a unilateral refusal to deal, anticompetitive harm under the rule of reason must be proven. It may not be inferred from harm to competitors, because such an inference is too likely to be mistaken—and “mistaken inferences are especially costly, because they chill the very conduct the antitrust laws are designed to protect.” (Brooke Group, quoting yet another key Supreme Court antitrust error cost case, Matsushita (1986)).

Yet, as the brief discusses, in finding Qualcomm liable the district court did not demand or find proof of harm to competition. Instead, the court’s opinion relies on impermissible inferences from ambiguous evidence to find that Qualcomm had (and violated) an antitrust duty to deal with rival chip makers and that its conduct resulted in anticompetitive foreclosure of competition. 

We urge you to read the brief (it’s pretty short—maybe the length of three blog posts) to get the whole argument. Below we draw attention to a few points we make in the brief that are especially significant.

The district court bases its approach entirely on Microsoft — which it misinterprets in clear contravention of Supreme Court case law

The district court doesn’t stay within the strictures of the Supreme Court’s monopolization case law. In fact, although it obligingly recites some of the error cost language from Trinko, it quickly moves away from Supreme Court precedent and bases its approach entirely on its reading of the D.C. Circuit’s Microsoft (2001) decision. 

Unfortunately, the district court’s reading of Microsoft is mistaken and impermissible under Supreme Court precedent. Indeed, both the Supreme Court and the D.C. Circuit make clear that a finding of illegal monopolization may not rest on an inference of anticompetitive harm.

The district court cites Microsoft for the proposition that

Where a government agency seeks injunctive relief, the Court need only conclude that Qualcomm’s conduct made a “significant contribution” to Qualcomm’s maintenance of monopoly power. The plaintiff is not required to “present direct proof that a defendant’s continued monopoly power is precisely attributable to its anticompetitive conduct.”

It’s true Microsoft held that, in government actions seeking injunctions, “courts [may] infer ‘causation’ from the fact that a defendant has engaged in anticompetitive conduct that ‘reasonably appears capable of making a significant contribution to maintaining monopoly power.’” (Emphasis added). 

But Microsoft never suggested that anticompetitiveness itself may be inferred.

“Causation” and “anticompetitive effect” are not the same thing. Indeed, Microsoft addresses “anticompetitive conduct” and “causation” in separate sections of its decision. And whereas Microsoft allows that courts may infer “causation” in certain government actions, it makes no such allowance with respect to “anticompetitive effect.” In fact, it explicitly rules it out:

[T]he plaintiff… must demonstrate that the monopolist’s conduct indeed has the requisite anticompetitive effect…; no less in a case brought by the Government, it must demonstrate that the monopolist’s conduct harmed competition, not just a competitor.

The D.C. Circuit subsequently reinforced this clear conclusion of its Microsoft holding in Rambus:

Deceptive conduct—like any other kind—must have an anticompetitive effect in order to form the basis of a monopolization claim…. In Microsoft… [t]he focus of our antitrust scrutiny was properly placed on the resulting harms to competition.

Finding causation entails connecting evidentiary dots, while finding anticompetitive effect requires an economic assessment. Without such analysis it’s impossible to distinguish procompetitive from anticompetitive conduct, and basing liability on such an inference effectively writes “anticompetitive” out of the law.

Thus, the district court is correct when it holds that it “need not conclude that Qualcomm’s conduct is the sole reason for its rivals’ exits or impaired status.” But it is simply wrong to hold—in the same sentence—that it can thus “conclude that Qualcomm’s practices harmed competition and consumers.” The former claim is consistent with Microsoft; the latter is emphatically not.

Under Trinko and Aspen Skiing the district court’s finding of an antitrust duty to deal is impermissible 

Because finding that a company operates under a duty to deal essentially permits a court to infer anticompetitive harm without proof, such a finding “comes dangerously close to being a form of ‘no-fault’ monopolization,” as Herbert Hovenkamp has written. It is also thus seriously disfavored by the Court’s error cost jurisprudence.

In Trinko the Supreme Court interprets its holding in Aspen Skiing to identify essentially a single scenario from which it may plausibly be inferred that a monopolist’s refusal to deal with rivals harms consumers: the existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for the monopolist.

In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.”

But it’s not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. In a word, what the Court requires is that the defendant exhibit behavior that, but for the expectation of future, anticompetitive returns, is irrational.

It should be noted, as John Lopatka (here) and Alan Meese (here) (both of whom joined the amicus brief) have written, that even the Supreme Court’s approach is likely insufficient to permit a court to distinguish between procompetitive and anticompetitive conduct. 

But what is certain is that the district court’s approach in no way permits such an inference.

“Evasion of a competitive constraint” is not an antitrust-relevant refusal to deal

In order to infer anticompetitive effect, it’s not enough that a firm may have a “duty” to deal, as that term is colloquially used, based on some obligation other than an antitrust duty, because it can in no way be inferred from the evasion of that obligation that conduct is anticompetitive.

The district court bases its determination that Qualcomm’s conduct is anticompetitive on the fact that it enables the company to avoid patent exhaustion, FRAND commitments, and thus price competition in the chip market. But this conclusion is directly precluded by the Supreme Court’s holding in NYNEX.

Indeed, in Rambus, the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.”

As Josh Wright has noted:

[T]he objection to the “evasion” of any constraint approach is… that it opens the door to enforcement actions applied to business conduct that is not likely to harm competition and might be welfare increasing.

Thus NYNEX and Rambus (and linkLine) reinforce the Court’s repeated holding that an inference of harm to competition is permissible only where conduct points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” do not suffice.

The district court’s elaborate theory of harm rests fundamentally on the claim that Qualcomm injures rivals—and the record is devoid of evidence demonstrating actual harm to competition. Instead, the court infers it from what it labels “unreasonably high” royalty rates, enabled by Qualcomm’s evasion of competition from rivals. In turn, the court finds that that evasion of competition can be the source of liability if what Qualcomm evaded was an antitrust duty to deal. And, in impermissibly circular fashion, the court finds that Qualcomm indeed evaded an antitrust duty to deal—because its conduct allowed it to sustain “unreasonably high” prices. 

The Court’s antitrust error cost jurisprudence—from Brooke Group to NYNEX to Trinko & linkLine—stands for the proposition that no such circular inferences are permitted.

The district court’s foreclosure analysis also improperly relies on inferences in lieu of economic evidence

Because the district court doesn’t perform a competitive effects analysis, it fails to demonstrate the requisite “substantial” foreclosure of competition required to sustain a claim of anticompetitive exclusion. Instead the court once again infers anticompetitive harm from harm to competitors. 

The district court makes no effort to establish the quantity of competition foreclosed as required by the Supreme Court. Nor does the court demonstrate that the alleged foreclosure harms competition, as opposed to just rivals. Foreclosure per se is not impermissible and may be perfectly consistent with procompetitive conduct.

Again citing Microsoft, the district court asserts that a quantitative finding is not required. Yet, as the court’s citation to Microsoft should have made clear, in its stead a court must find actual anticompetitive effect; it may not simply assert it. As Microsoft held: 

It is clear that in all cases the plaintiff must… prove the degree of foreclosure. This is a prudential requirement; exclusivity provisions in contracts may serve many useful purposes. 

The court essentially infers substantiality from the fact that Qualcomm entered into exclusive deals with Apple (actually, volume discounts), from which the court concludes that Qualcomm foreclosed rivals’ access to a key customer. But its inference that this led to substantial foreclosure is based on internal business statements—so-called “hot docs”—characterizing the importance of Apple as a customer. Yet, as Geoffrey Manne and Marc Williamson explain, such documentary evidence is unreliable as a guide to economic significance or legal effect: 

Business people will often characterize information from a business perspective, and these characterizations may seem to have economic implications. However, business actors are subject to numerous forces that influence the rhetoric they use and the conclusions they draw….

There are perfectly good reasons to expect to see “bad” documents in business settings when there is no antitrust violation lurking behind them.

Assuming such language has the requisite economic or legal significance is unsupportable—especially when, as here, the requisite standard demands a particular quantitative significance.

Moreover, the court’s “surcharge” theory of exclusionary harm rests on assumptions regarding the mechanism by which the alleged surcharge excludes rivals and harms consumers. But the court incorrectly asserts that only one mechanism operates—and it makes no effort to quantify it. 

The court cites “basic economics” via Mankiw’s Principles of Microeconomics text for its conclusion:

The surcharge affects demand for rivals’ chips because as a matter of basic economics, regardless of whether a surcharge is imposed on OEMs or directly on Qualcomm’s rivals, “the price paid by buyers rises, and the price received by sellers falls.” Thus, the surcharge “places a wedge between the price that buyers pay and the price that sellers receive,” and demand for such transactions decreases. Rivals see lower sales volumes and lower margins, and consumers see less advanced features as competition decreases.

But even assuming the court is correct that Qualcomm’s conduct entails such a surcharge, basic economics does not hold that decreased demand for rivals’ chips is the only possible outcome. 

In actuality, an increase in the cost of an input for OEMs can have three possible effects:

  1. OEMs can pass all or some of the cost increase on to consumers in the form of higher phone prices. Assuming some elasticity of demand, this would mean fewer phone sales and thus less demand by OEMs for chips, as the court asserts. But the extent of that effect would depend on consumers’ demand elasticity and the magnitude of the cost increase as a percentage of the phone price. If demand is highly inelastic at this price (i.e., relatively insensitive to the relevant price change), it may have a tiny effect on the number of phones sold and thus the number of chips purchased—approaching zero as price insensitivity increases.
  2. OEMs can absorb the cost increase and realize lower profits but continue to sell the same number of phones and purchase the same number of chips. This would not directly affect demand for chips or their prices.
  3. OEMs can respond to a price increase by purchasing fewer chips from rivals and more chips from Qualcomm. While this would affect rivals’ chip sales, it would not necessarily affect consumer prices, the total number of phones sold, or OEMs’ margins—that result would depend on whether Qualcomm’s chips cost more or less than its rivals’. If the latter, it would even increase OEMs’ margins and/or lower consumer prices and increase output.

Alternatively, of course, the effect could be some combination of these.
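A toy pass-through calculation shows how much work the court’s unexamined assumptions are doing. The numbers below (phone price, surcharge, elasticities) are entirely hypothetical; the sketch simply quantifies outcome (1) above under different demand elasticities:

```python
# Toy pass-through model: how a per-chip "surcharge" maps into handset sales
# and hence chip demand, under different consumer demand elasticities.
# All figures are hypothetical; this illustrates outcome (1) above only.

def chip_demand_change(phone_price, surcharge, pass_through, elasticity):
    """Percent change in chip demand when OEMs pass a chip surcharge on.

    phone_price:  baseline handset price (e.g., $600)
    surcharge:    assumed per-chip overcharge borne by the OEM (e.g., $10)
    pass_through: share of the surcharge passed on to consumers (0 to 1)
    elasticity:   own-price elasticity of handset demand (negative)
    """
    pct_price_increase = pass_through * surcharge / phone_price
    # One chip per phone: chip demand falls one-for-one with handset sales.
    return elasticity * pct_price_increase * 100  # percent change

for eps in (-0.3, -1.0, -2.0):  # inelastic, unit-elastic, elastic demand
    delta = chip_demand_change(phone_price=600, surcharge=10,
                               pass_through=1.0, elasticity=eps)
    print(f"elasticity {eps:+.1f}: chip demand changes by {delta:+.2f}%")

# With inelastic demand (-0.3), even a fully passed-through $10 surcharge on
# a $600 phone cuts chip demand by only ~0.5% -- the "approaching zero" case
# in outcome (1). Outcomes (2) and (3) -- absorption by OEMs or switching to
# Qualcomm -- would alter the result further; "basic economics" alone pins
# down none of this, which is why quantification matters.
```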

Whether any of these outcomes would substantially exclude rivals is inherently uncertain to begin with. But demonstrating a reduction in rivals’ chip sales is a necessary but not sufficient condition for proving anticompetitive foreclosure. The FTC didn’t even demonstrate that rivals were substantially harmed, let alone that there was any effect on consumers—nor did the district court make such findings. 

Doing so would entail consideration of whether decreased demand for rivals’ chips flows from reduced consumer demand or OEMs’ switching to Qualcomm for supply, how consumer demand elasticity affects rivals’ chip sales, and whether Qualcomm’s chips were actually less or more expensive than rivals’. Yet the court determined none of these. 

Conclusion

Contrary to established Supreme Court precedent, the district court’s decision relies on mere inferences to establish anticompetitive effect. The decision, if it stands, would render a wide range of potentially procompetitive conduct presumptively illegal and thus harm consumer welfare. It should be reversed by the Ninth Circuit.

Joining ICLE on the brief are:

  • Donald J. Boudreaux, Professor of Economics, George Mason University
  • Kenneth G. Elzinga, Robert C. Taylor Professor of Economics, University of Virginia
  • Janice Hauge, Professor of Economics, University of North Texas
  • Justin (Gus) Hurwitz, Associate Professor of Law, University of Nebraska College of Law; Director of Law & Economics Programs, ICLE
  • Thomas A. Lambert, Wall Chair in Corporate Law and Governance, University of Missouri Law School
  • John E. Lopatka, A. Robert Noll Distinguished Professor of Law, Penn State University Law School
  • Daniel Lyons, Professor of Law, Boston College Law School
  • Geoffrey A. Manne, President and Founder, International Center for Law & Economics; Distinguished Fellow, Northwestern University Center on Law, Business & Economics
  • Alan J. Meese, Ball Professor of Law, William & Mary Law School
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics Emeritus, Emory University
  • Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics, Chapman University School of Business; Nobel Laureate in Economics, 2002
  • Michael Sykuta, Associate Professor of Economics, University of Missouri


This blurb, published yesterday by Competition Policy International, nicely illustrates the problem with the growing focus on unilateral conduct investigations by the European Commission (EC) and other leading competition agencies:

EU: Qualcomm to face antitrust complaint on predatory pricing

Dec 03, 2015

The European Union is preparing an antitrust complaint against Qualcomm Inc. over suspected predatory pricing tactics that could hobble smaller rivals, according to three people familiar with the probe.

Regulators are in the final stages of preparing a so-called statement of objections, based on a complaint by a unit of Nvidia Corp., that asked the EU to act against predatory pricing for mobile-phone chips, the people said. Qualcomm designs chipsets that power most of the world’s smartphones, licensing its technology across the industry.

Qualcomm would add to a growing list of U.S. technology companies to face EU antitrust action, following probes into Google, Microsoft Corp. and Intel Corp. A statement of objections may lead to fines, capped at 10 percent of yearly global revenue, which can be avoided if a company agrees to make changes to business behavior.

Regulators are less advanced with another probe into whether the company grants payments, rebates or other financial incentives to customers in return for buying Qualcomm chipsets. Another case that focused on complaints that the company was charging excessive royalties on patents was dropped in 2009.

“Predatory pricing” complaints by competitors of successful innovators are typically aimed at hobbling efficient rivals and reducing aggressive competition.  If and when successful, such rent-seeking complaints attenuate competitive vigor (thereby disincentivizing innovation) and tend to raise prices to consumers – a result inimical to antitrust’s overarching goal, consumer welfare promotion.  Although I admittedly am not privy to the facts at issue in the Qualcomm predatory pricing investigation, Nvidia is not a firm that fits the model of a rival being decimated by economic predation (given its overall success and its rapid growth and high profitability in smartchip markets).  In this competitive and dynamic industry, the likelihood that Qualcomm could recoup short-term losses from predation through sustainable monopoly pricing following Nvidia’s exit from the market would seem to be infinitesimally small or non-existent (even assuming pricing below average variable cost or average avoidable cost could be shown).  Thus, there is good reason to doubt the wisdom of the EC’s apparent decision to issue a statement of objections to Qualcomm regarding predatory pricing for mobile phone chips.
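The recoupment point can be put in standard textbook form. Predation is rational only if the discounted stream of post-predation monopoly profits exceeds the discounted losses incurred while predating. In stylized notation (mine, not anything in the EC’s file):

```latex
% Recoupment condition for rational predation (stylized):
\[
  \sum_{t=0}^{T} \frac{L_{t}}{(1+r)^{t}}
  \;<\;
  \sum_{t=T+1}^{\infty} \frac{\pi_{t}^{M}}{(1+r)^{t}}
\]
% L_t     : losses during the predation period, which ends at T
% pi_t^M  : incremental monopoly profits after the rival exits
% r       : discount rate
% In a dynamic chip market where entry and re-entry are likely, the
% right-hand side is small or zero, so the condition will rarely hold --
% the point made about Nvidia above.
```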

The investigation of (presumably loyalty) payments and rebates to buyers of Qualcomm chipsets also is unlikely to enhance consumer welfare.  As a general matter, such financial incentives lower costs to loyal customers, and may promote efficiencies such as guaranteed purchase volumes under favorable terms.  Although theoretically loyalty payments might be structured to effectuate anticompetitive exclusion of competitors under very special circumstances, as a general matter such payments – which like alleged “predatory” pricing typically benefit consumers – should not be a high priority for investigation by competition agencies.  This conclusion applies in spades to chipset markets, which are characterized by vigorous competition among successful firms.  Rebate schemes in dynamic markets of this sort are almost certainly a symptom of creative, welfare-enhancing competitive vigor, rather than inefficient exclusionary behavior.

A pattern of investigating price reductions and discounting plans in highly dynamic and innovative industries, exemplified by the EC’s Qualcomm investigations summarized above, is troubling in at least two respects.

First, it creates regulatory disincentives to aggressive welfare-enhancing competition aimed at capturing the customer’s favor.  Companies like Qualcomm, after being suitably chastised, may well “take the cue” and decide to avoid future trouble by “playing nice” and avoiding innovative discounting, to the detriment of future consumers and industry efficiency.

Second, the dedication of enforcement resources to investigating discounting practices by successful firms that (based on first principles and industry conditions) are highly likely to be procompetitive points to a severe misallocation of resources by the responsible competition agencies.  Such agencies should seek to optimize the use of their scarce resources by allocating them to the highest-valued targets in welfare terms, such as anticompetitive government restraints on competition and hard-core cartel conduct.  Spending any resources on chasing down what is almost certainly efficient unilateral pricing conduct not only sends a bad signal to industry (see point one), it suggests that agency priorities are badly misplaced.  (Admittedly, a problem faced by the EC and many other competition authorities is that they are required to respond to third party complaints, but the nature of that response and the resources allocated could be better calibrated to the likely merit of such complaints.  Whether the law should be changed to grant such competition authorities broad prosecutorial discretion to ignore clearly non-meritorious complaints (such as the wide discretion enjoyed by U.S. antitrust enforcers) is beyond the scope of this commentary, and merits separate treatment.)

A proper application of decision theory and its error cost approach could help the EC and other competition enforcers avoid the problem of inefficiently chasing down procompetitive unilateral conduct.  Such an approach would focus intensively on highly welfare-inimical conduct that lacks credible efficiencies (thus minimizing false positives in enforcement) and that can be pursued with a relatively low expenditure of administrative costs (given the lack of credible efficiency justifications that need to be evaluated).  As indicated above, a substantial allocation of resources to hard core cartel conduct, bid rigging, and anticompetitive government-imposed market distortions (including poorly designed regulations and state aids) would be consistent with such an approach.  Relatedly, investigating single firm conduct, which is central to spurring a dynamic competitive process and is often misdiagnosed as anticompetitive (thereby imposing false positive costs), should be deemphasized.  (Obviously, even under a decision-theoretic framework, certain agency resources would continue to be devoted to mandatory merger reviews and other core legally required agency functions.)
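A minimal sketch of how an agency might operationalize this kind of triage, using made-up probabilities and cost figures purely for illustration:

```python
# Decision-theoretic triage of enforcement targets (illustrative only).
# Rank candidate matters by expected net welfare gain, reflecting the
# priorities suggested in the text. All numbers are hypothetical.

candidates = [
    # (matter, P(anticompetitive), harm avoided if so,
    #  false-positive cost if not, administrative cost)
    ("hard-core cartel / bid rigging",    0.95, 100.0,  5.0, 10.0),
    ("state-imposed market distortion",   0.90,  80.0,  5.0, 15.0),
    ("single-firm discounting / rebates", 0.15,  40.0, 60.0, 50.0),
]

def expected_net_benefit(p, harm_avoided, fp_cost, admin_cost):
    # Expected welfare gain from intervening, net of the false-positive
    # harm (chilled procompetitive conduct) and administrative cost.
    return p * harm_avoided - (1 - p) * fp_cost - admin_cost

ranked = sorted(candidates,
                key=lambda c: expected_net_benefit(*c[1:]),
                reverse=True)

for name, p, harm, fp, admin in ranked:
    net = expected_net_benefit(p, harm, fp, admin)
    print(f"{name:36s} expected net benefit = {net:+7.2f}")

# Cartels and government restraints score high (high probability of harm,
# few credible efficiencies, cheap to evaluate); unilateral discounting
# turns negative once false-positive and administrative costs are counted.
```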

My article with Thom Lambert arguing that the Supreme Court – but not the Obama Administration – has substantially adopted an error cost approach to antitrust enforcement appears in the newly released September 2015 issue of the Journal of Competition Law and Economics.  To whet your appetite, I am providing the abstract:

In his seminal 1984 article, The Limits of Antitrust, Judge Frank Easterbrook proposed that courts and enforcers adopt a simple set of screening rules for application in antitrust cases, in order to minimize error and decision costs and thereby maximize antitrust’s social value. Over time, federal courts in general—and the U.S. Supreme Court in particular, under Chief Justice Roberts—have in substantial part adopted Easterbrook’s “limits of antitrust” approach, thereby helping to reduce costly antitrust uncertainty. Recently, however, antitrust enforcers in the Obama Administration (unlike their predecessors in the Reagan, Bush, and Clinton Administrations) have been less attuned to this approach, and have undertaken initiatives that reduce clarity and predictability in antitrust enforcement. Regardless of the cause of the diverging stances on the limits of antitrust, two things are clear. First, recent enforcement agency policies are severely at odds with the philosophy that informs Supreme Court antitrust jurisprudence. Second, if the agencies do not reverse course, acknowledge antitrust’s limits, and seek to optimize the law in light of those limits, consumers will suffer.

Let us hope that error cost considerations figure more prominently in antitrust enforcement under the next Administration.

As the organizer of this retrospective on Josh Wright’s tenure as FTC Commissioner, I have the (self-conferred) honor of closing out the symposium.

When Josh was confirmed I wrote that:

The FTC will benefit enormously from Josh’s expertise and his error cost approach to antitrust and consumer protection law will be a tremendous asset to the Commission — particularly as it delves further into the regulation of data and privacy. His work is rigorous, empirically grounded, and ever-mindful of the complexities of both business and regulation…. The Commissioners and staff at the FTC will surely… profit from his time there.

Whether others at the Commission have really learned from Josh is an open question, but there’s no doubt that Josh offered an enormous amount from which they could learn. As Tim Muris said, Josh “did not disappoint, having one of the most important and memorable tenures of any non-Chair” at the agency.

Within a month of his arrival at the Commission, in fact, Josh “laid down the cost-benefit-analysis gauntlet” in a little-noticed concurring statement regarding a proposed amendment to the Hart-Scott-Rodino Rules. The technical details of the proposed rule don’t matter for these purposes, but, as Josh noted in his statement, the situation intended to be avoided by the rule had never arisen:

The proposed rulemaking appears to be a solution in search of a problem. The Federal Register notice states that the proposed rules are necessary to prevent the FTC and DOJ from “expend[ing] scarce resources on hypothetical transactions.” Yet, I have not to date been presented with evidence that any of the over 68,000 transactions notified under the HSR rules have required Commission resources to be allocated to a truly hypothetical transaction.

What Josh asked for in his statement was not that the rule be scrapped, but simply that, before adopting the rule, the FTC weigh its costs and benefits.

As I noted at the time:

[I]t is the Commission’s responsibility to ensure that the rules it enacts will actually be beneficial (it is a consumer protection agency, after all). The staff, presumably, did a perfectly fine job writing the rule they were asked to write. Josh’s point is simply that it isn’t clear the rule should be adopted because it isn’t clear that the benefits of doing so would outweigh the costs.

As essentially everyone who has contributed to this symposium has noted, Josh was singularly focused on the rigorous application of the deceptively simple concept that the FTC should ensure that the benefits of any rule or enforcement action it adopts outweigh the costs. The rest, as they say, is commentary.

For Josh, this basic principle should permeate every aspect of the agency, and permeate the way it thinks about everything it does. Only an entirely new mindset can ensure that outcomes, from the most significant enforcement actions to the most trivial rule amendments, actually serve consumers.

While the FTC has a strong tradition of incorporating economic analysis in its antitrust decision-making, its record in using economics in other areas is decidedly mixed, as Berin points out. And even in competition policy, where the Commission frequently uses economics, it’s not clear it entirely understands economics. The approach that others have lauded Josh for is powerful, but it’s also subtle.

Inherent limitations on anyone’s knowledge about the future of technology, business, and social norms counsel skepticism as regulators attempt to predict whether any given business conduct will, on net, improve or harm consumer welfare. In fact, a host of factors suggests that even the best-intentioned regulators tend toward overconfidence and the erroneous condemnation of novel conduct that benefits consumers in ways that are difficult for regulators to understand. Coase’s famous admonition in a 1972 paper has been quoted here before (frequently), but bears quoting again:

If an economist finds something – a business practice of one sort or another – that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be very large, and the reliance on a monopoly explanation, frequent.

Simply “knowing” economics, and knowing that it is important to antitrust enforcement, aren’t enough. Reliance on economic formulae and theoretical models alone — to say nothing of “evidence-based” analysis that doesn’t or can’t differentiate between probative and prejudicial facts — doesn’t resolve the key limitations on regulatory decisionmaking that threaten consumer welfare, particularly when it comes to the modern, innovative economy.

As Josh and I have written:

[O]ur theoretical knowledge cannot yet confidently predict the direction of the impact of additional product market competition on innovation, much less the magnitude. Additionally, the multi-dimensional nature of competition implies that the magnitude of these impacts will be important as innovation and other forms of competition will frequently be inversely correlated as they relate to consumer welfare. Thus, weighing the magnitudes of opposing effects will be essential to most policy decisions relating to innovation. Again, at this stage, economic theory does not provide a reliable basis for predicting the conditions under which welfare gains associated with greater product market competition resulting from some regulatory intervention will outweigh losses associated with reduced innovation.

* * *

In sum, the theoretical and empirical literature reveals an undeniably complex interaction between product market competition, patent rules, innovation, and consumer welfare. While these complexities are well understood, in our view, their implications for the debate about the appropriate scale and form of regulation of innovation are not.

Along the most important dimensions, our knowledge has expanded since 1972, but the problem has not disappeared; it may only have grown. As Tim Muris noted in 2005,

[A] visitor from Mars who reads only the mathematical IO literature could mistakenly conclude that the U.S. economy is rife with monopoly power…. [Meanwhile, Section 2’s] history has mostly been one of mistaken enforcement.

It may not sound like much, but what is needed, what Josh brought to the agency, and what turns out to be absolutely essential to getting it right, is unflagging awareness of and attention to the institutional, political, and microeconomic relationships that shape regulatory institutions and the outcomes they produce.

Regulators must do their best to constantly grapple with uncertainty, problems of operationalizing useful theory, and, perhaps most important, the social losses associated with error costs. It is not (just) technicians that the FTC needs; it’s regulators imbued with the “Economic Way of Thinking.” In short, what is needed, and what Josh brought to the Commission, is humility — the belief that, as Coase also wrote, sometimes the best answer is to “do nothing at all.”
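
To make the error-cost point concrete, here is a minimal, illustrative formalization of the familiar error-cost calculus; the notation is mine, not Josh’s or Coase’s. Let $p_I$ and $C_I$ be the probability and social cost of a false positive (condemning conduct that actually benefits consumers), $p_{II}$ and $C_{II}$ the probability and social cost of a false negative (failing to stop genuinely harmful conduct), and $C_A$ the direct administrative costs of acting. On this account, intervention improves welfare only when

$$p_{II}\,C_{II} \;>\; p_I\,C_I + C_A,$$

and “do nothing at all” is the right answer whenever the inequality fails, which, given how little regulators can know about novel conduct (a large $p_I$ and a hard-to-observe $C_I$), may be rather often.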

The technocratic model of regulation is inconsistent with the regulatory humility required in the face of fast-changing, unexpected — and immeasurably valuable — technological advance. As Virginia Postrel warns in The Future and Its Enemies:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation…. By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

For Josh, the first JD/Econ PhD appointed to the FTC,

economics provides a framework to organize the way I think about issues beyond analyzing the competitive effects in a particular case, including, for example, rulemaking, the various policy issues facing the Commission, and how I weigh evidence relative to the burdens of proof and production. Almost all the decisions I make as a Commissioner are made through the lens of economics and marginal analysis because that is the way I have been taught to think.

A representative example will serve to illuminate the distinction between merely using economics and evidence, on the one hand, and understanding them and their limitations, on the other.

In his Nielsen/Arbitron dissent, Josh wrote:

The Commission thus challenges the proposed transaction based upon what must be acknowledged as a novel theory—that is, that the merger will substantially lessen competition in a market that does not today exist.

[W]e… do not know how the market will evolve, what other potential competitors might exist, and whether and to what extent these competitors might impose competitive constraints upon the parties.

Josh’s straightforward statement of the basis for restraint stands in marked contrast to the majority’s decision to impose antitrust-based limits on economic activity that hasn’t yet even been contemplated. Doing so is directly at odds with a sensible, evidence-based approach to enforcement, and the economic problems with it are considerable, as Josh also notes:

[I]t is an exceedingly difficult task to predict the competitive effects of a transaction where there is insufficient evidence to reliably answer the[] basic questions upon which proper merger analysis is based.

When the Commission’s antitrust analysis comes unmoored from such fact-based inquiry, tethered tightly to robust economic theory, there is a more significant risk that non-economic considerations, intuition, and policy preferences influence the outcome of cases.

Compare, in this regard, Josh’s words about Nielsen with Deborah Feinstein’s defense of the majority against such charges:

The Commission based its decision not on crystal-ball gazing about what might happen, but on evidence from the merging firms about what they were doing and from customers about their expectations of those development plans. From this fact-based analysis, the Commission concluded that each company could be considered a likely future entrant, and that the elimination of the future offering of one would likely result in a lessening of competition.

Instead of requiring rigorous economic analysis of the facts, couched in an acute awareness of our necessary ignorance about the future, for Feinstein the FTC fulfilled its obligation in Nielsen by considering the “facts” alone (not economic evidence, mind you, but customer statements and expressions of intent by the parties) and then, at best, casually applying to them the simplistic, outdated structural presumption: the conclusion that increased concentration would lead inexorably to anticompetitive harm. Her implicit claim is that all the Commission needed to know about the future was what the parties thought about what they were doing and what (hardly disinterested) customers thought they were doing. This shouldn’t be nearly enough.

Worst of all, Nielsen was “decided” with a consent order. As Josh wrote, strongly reflecting the essential awareness of the broader institutional environment that he brought to the Commission:

[w]here the Commission has endorsed by way of consent a willingness to challenge transactions where it might not be able to meet its burden of proving harm to competition, and which therefore at best are competitively innocuous, the Commission’s actions may alter private parties’ behavior in a manner that does not enhance consumer welfare.

In this regard, his successful effort to get the Commission to adopt a UMC enforcement policy statement is obviously a most welcome development.

In short, Josh is to be applauded not because he brought economics to the Commission, but because he brought the economic way of thinking. Such a thing is entirely too rare in the modern administrative state. Josh’s tenure at the FTC was relatively short, but he used every moment of it to assiduously advance his singular, and essential, mission. And, to paraphrase the last line of the movie The Right Stuff (it helps to have the rousing film score playing in the background as you read this): “for a brief moment, [Josh Wright] became the greatest [regulator] anyone had ever seen.”

I would like to extend my thanks to everyone who participated in this symposium. The contributions here will stand as a fitting and lasting tribute to Josh and his legacy at the Commission. And, of course, I’d also like to thank Josh for a tenure at the FTC very much worth honoring.