
[This post from Jonathan M. Barnett, the Torrey H. Webb Professor of Law at the University of Southern California’s Gould School of Law, is an entry in Truth on the Market’s continuing FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

In its Advance Notice of Proposed Rulemaking (ANPR) on Commercial Surveillance and Data Security, the Federal Trade Commission (FTC) has requested public comment on an unprecedented initiative to promulgate and implement wide-ranging rules concerning the gathering and use of consumer data in digital markets. In this contribution, I will assume, for the sake of argument, that the commission has the legal authority to exercise its purported rulemaking powers for this purpose without a specific legislative mandate (a question as to which I recognize there is great uncertainty, heightened by the fact that Congress is concurrently considering legislation in the same policy area).

In considering whether to use these powers for the purposes of adopting and implementing privacy-related regulations in digital markets, the commission would be required to undertake a rigorous assessment of the expected costs and benefits of any such regulation. Any such cost-benefit analysis must comprise at least two critical elements that are omitted from, or addressed in highly incomplete form in, the ANPR.

The Hippocratic Oath of Regulatory Intervention

There is a longstanding consensus that regulatory intervention is warranted only if a market failure can be identified with reasonable confidence. This principle is especially relevant in the case of the FTC, which is entrusted with preserving competitive markets and, therefore, should be hesitant about intervening in market transactions without a compelling evidentiary basis. As a corollary, it is also widely agreed that any intervention to correct a market failure is warranted only to the extent that it can reasonably be expected to correct that failure at a net social gain.

This prudent approach tracks the “economic effect” analysis that the commission must apply in the rulemaking process contemplated under the Federal Trade Commission Act and the analysis of “projected benefits and … adverse economic effects” of proposed and final rules contemplated by the commission’s rules of practice. Consistent with these requirements, the commission has exhibited a longstanding commitment to thorough cost-benefit analysis. As observed by former Commissioner Julie Brill in 2016, “the FTC conducts its rulemakings with the same level of attention to costs and benefits that is required of other agencies.” Former Commissioner Brill also observed that the “FTC combines our broad mandate to protect consumers with a rigorous, empirical approach to enforcement matters.”

This demanding, fact-based protocol enhances the likelihood that regulatory interventions result in a net improvement relative to the status quo, an uncontroversial goal of any rational public policy. Unfortunately, the ANPR does not make clear that the commission remains committed to this methodology.

Assessing Market Failure in the Use of Consumer Data

To even “get off the ground,” any proposed privacy regulation would be required to identify a market failure arising from a particular use of consumer data. This requires a rigorous and comprehensive assessment of the full range of social costs and benefits that can be reasonably attributed to any such practice.

The ANPR’s Oversights

In contrast to the approach described by former Commissioner Brill, several elements of the ANPR raise significant doubts concerning the current commission’s willingness to assess evidence relevant to the potential necessity of privacy-related regulations in a balanced, rigorous, and comprehensive manner.

First, while the ANPR identifies a plethora of social harms attributable to data-collection practices, it merely acknowledges the possibility that consumers enjoy benefits from such practices “in theory.” This skewed perspective is not empirically serious. Focusing almost entirely on the costs of data collection and dismissing as conjecture any possible gains defies market realities, especially given the fact that (as discussed below) those gains are clearly significant and, in some cases, transformative.

Second, the ANPR’s choice of the normatively charged term “data surveillance” to encompass all uses of consumer data conveys the impression that all data collection through digital services is surreptitious or coerced, whereas (as discussed below) some users may knowingly provide such data to enable certain data-reliant functionalities.

Third, there is no mention in the ANPR that online providers commonly give users notice concerning certain uses of consumer data and often require users to select among different levels of data collection.

Fourth, the ANPR relies to an unusual degree on news websites and non-peer-reviewed publications in the style of policy briefs or advocacy papers, rather than on the empirical social-science research on which the commission has historically based its policy determinations.

This apparent indifference to analytical balance is particularly exhibited in the ANPR’s failure to address the economic gains generated through the use of consumer data in online markets. As a 2014 White House report recognized, many valuable digital services could not function effectively without engaging in some significant level of data collection. The examples are numerous and diverse, including traffic-navigation services, which rely on data concerning a user’s geographic location (as well as other users’ locations); personalized ad delivery, which relies on data concerning a user’s search history and other disclosed characteristics; and search services, which rely on user data to offer search at no charge while delivering targeted advertisements for paying advertisers.

There are equally clear gains on the “supply” side of the market. Data-collection practices can expand market access by enabling smaller vendors to leverage digital intermediaries to attract consumers that are most likely to purchase those vendors’ goods or services. The commission has recognized this point in the past, observing in a 2014 report:

Data brokers provide the information they compile to clients, who can use it to benefit consumers … [C]onsumers may benefit from increased and innovative product offerings fueled by increased competition from small businesses that are able to connect with consumers that they may not have otherwise been able to reach.

Given the commission’s statutory mission under the FTC Act to protect consumers’ interests and preserve competitive markets, these observations should be of special relevance.

Data Protection v. Data-Reliant Functionality

Data-reliant services yield social gains by substantially lowering transaction costs and, in the process, enabling services that would not otherwise be feasible, with favorable effects for consumers and vendors. This observation does not exclude the possibility that specific uses of consumer data may constitute a potential market failure that merits regulatory scrutiny and possible intervention (assuming there is sufficient legal authority for the relevant agency to undertake any such intervention). That depends on whether the social costs reasonably attributable to a particular use of consumer data exceed the social gains reasonably attributable to that use. This basic principle seems to be recognized by the ANPR, which states that the commission can only deem a practice “unfair” under the FTC Act if “it causes or is likely to cause substantial injury” and “the injury is not outweighed by benefits to consumers or competition.”
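Stated schematically (the notation here is mine, not the ANPR’s), a particular use of consumer data, call it \(p\), is even a candidate for intervention only if

\[
SC(p) > SB(p),
\]

where \(SC(p)\) denotes the social costs reasonably attributable to that use and \(SB(p)\) the social gains reasonably attributable to it. The ANPR’s own unfairness formulation tracks the same inequality: substantial injury that is not outweighed by benefits to consumers or competition.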

In implementing this principle, it is important to keep in mind that a market failure could only arise if the costs attributable to any particular use of consumer data are not internalized by the parties to the relevant transaction. This requires showing either that a particular use of consumer data imposes harms on third parties (a plausible scenario in circumstances implicating risks to data security) or that consumers are not aware of, or do not adequately assess or foresee, the costs they incur as a result of such use (a plausible scenario in circumstances implicating risks to consumer data). For the sake of brevity, I will focus on the latter scenario.

Many scholars have taken the view that consumers do not meaningfully read privacy notices or consider privacy risks, although the academic literature has also recognized efforts by private entities to develop notice methodologies that can improve consumers’ ability to do so. Even accepting this view, however, it does not necessarily follow (as the ANPR appears to assume) that a more thorough assessment of privacy risks would inevitably lead consumers to elect higher levels of data privacy even where that would degrade functionality or require paying a positive price for certain services. That is a tradeoff that will vary across consumers. It is therefore difficult to predict and easy to get wrong.

As the ANPR indirectly acknowledges in questions 26 and 40, interventions that bar certain uses of consumer data may therefore harm consumers by compelling the modification, positive pricing, or removal from the market of popular data-reliant services. For this reason, some scholars and commentators have favored an informed-consent approach that provides users with the option to bar or limit certain uses of their data. This approach reduces error costs because it avoids the risk that regulators will overestimate consumer preferences for privacy. Unlike a flat prohibition of certain uses of consumer data, it also can reflect differences in those preferences across consumers. The ANPR appears to dismiss this concern, asking in question 75 whether certain practices should be made illegal “irrespective of whether consumers consent to them” (emphasis added).

Addressing the still-uncertain body of evidence concerning the tradeoff between privacy protections on the one hand and data-reliant functionalities on the other (as well as the still-unresolved extent to which users can meaningfully make that tradeoff) lies outside the scope of this discussion. However, the critical observation is that any determination of market failure concerning any particular use of consumer data must identify the costs (and specifically, identify non-internalized costs) attributable to any such use and then offset those costs against the gains attributable to that use.

This balancing analysis is critical. As the commission recognized in a 2015 report, it is essential to safeguard consumer privacy without suppressing the economic gains that arise from data-reliant services that can benefit consumers and vendors alike. This even-handed approach is largely absent from the ANPR—which, as noted above, focuses almost entirely on costs while largely overlooking the gains associated with the uses of consumer data in online markets. This suggests a one-sided approach to privacy regulation that is incompatible with the cost-benefit analysis that the commission recognizes it must follow in the rulemaking process.

Private-Ordering Approaches to Consumer-Data Regulation

Suppose that a rigorous and balanced cost-benefit analysis determines that a particular use of consumer data would likely yield social costs that exceed social gains. It would still remain to be determined whether and how a regulator should intervene to yield a net social gain. As regulators make this determination, it is critical that they consider the full range of possible mechanisms to address a particular market failure in the use of consumer data.

Consistent with this approach, the FTC Act specifically requires that the commission specify in an ANPR “possible regulatory alternatives under consideration,” a requirement that is replicated at each subsequent stage of the rulemaking process, as provided in the rules of practice. The range of alternatives should include the possibility of taking no action, if no feasible intervention can be identified that would likely yield a net gain.

In selecting among those alternatives, it is imperative that the commission consider the possibility of unnecessary or overly burdensome rules that could impede the efficient development and supply of data-reliant services, either degrading the quality or raising the price of those services. In the past, the commission has emphasized this concern, stating in 2011 that “[t]he FTC actively looks for means to reduce burdens while preserving the effectiveness of a rule.”

This consideration (which appears to be acknowledged in question 24 of the ANPR) is of special importance to privacy-related regulation, given that the estimated annual costs to the U.S. economy (as calculated by the Information Technology and Innovation Foundation) of compliance with the most extensive proposed forms of privacy-related regulations would exceed $100 billion. Those costs would be especially burdensome for smaller entities, effectively raising entry barriers and reducing competition in online markets (a concern that appears to be acknowledged in question 27 of the ANPR).

Given the exceptional breadth of the rules that the ANPR appears to contemplate—covering an ambitious range of activities that would typically be the subject of a landmark piece of federal legislation, rather than administrative rulemaking—it is not clear that the commission has seriously considered this vital point of concern.

In the event that the FTC does move forward with any of these proposed rulemakings (which would be required to rest on a factually supported finding of market failure), it would confront a range of possible interventions in markets for consumer data. That range is typically viewed as being bounded, on the least-interventionist side, by notice-and-consent requirements to facilitate informed user choice and, on the most-interventionist side, by prohibitions that specifically bar certain uses of consumer data.

This is well-traveled ground in the academic and policy literature, and the relative advantages and disadvantages of each regulatory approach are well-known (and differ depending on the type of consumer data and other factors). Within the scope of this contribution, I wish to address an alternative regulatory approach that lies outside this conventional range of policy options.

Bottom-Up v. Top-Down Regulation

Any cost-benefit analysis concerning potential interventions to modify or bar a particular use of consumer data, or to mandate notice-and-consent requirements in connection with any such use, must contemplate not only government-implemented solutions but also market-implemented solutions, including hybrid mechanisms in which government action facilitates or complements market-implemented solutions.

This is not a merely theoretical proposal (and is referenced indirectly in questions 36, 51, and 87 of the ANPR). As I have discussed in previously published research, the U.S. economy has a long-established record of having adopted, largely without government intervention, collective solutions to the information asymmetries that can threaten the efficient operation of consumer goods and services markets.

Examples abound: Underwriters Laboratories (UL), which establishes product-safety standards in hundreds of markets; the large accounting firms, which confirm compliance with Generally Accepted Accounting Principles (GAAP), standards that are in turn established and updated by the Financial Accounting Standards Board, a private entity subject to oversight by the Securities and Exchange Commission (SEC); and rating and certification intermediaries in other markets, which assess consumer credit, business credit, insurance carriers, bond issuers, and content in the entertainment and gaming industries. Collectively, these markets encompass thousands of providers, hundreds of millions of customers, and billions of dollars in value.

A collective solution is often necessary to resolve information asymmetries efficiently because establishing an industrywide standard of product or service quality, together with a trusted mechanism for showing compliance with that standard, generates gains that cannot be fully internalized by any single provider.

Jurisdictions outside the United States have tended to address this collective-action problem through the top-down imposition of standards by government mandate and enforcement by regulatory agencies, as illustrated by the jurisdictions referenced by the ANPR that have imposed restrictions on the use of consumer data through direct regulatory intervention. By contrast, the U.S. economy has tended to favor the bottom-up development of voluntary standards, accompanied by certification and audit services, all accomplished by a mix of industry groups and third-party intermediaries. In certain markets, this may be a preferred model to address the information asymmetries between vendors and customers that are the key sources of potential market failure in the use of consumer data.

Privately organized initiatives to set quality standards and monitor compliance benefit the market by supplying a reliable standard that reduces information asymmetries and transaction costs between consumers and vendors. This, in turn, yields economic gains in the form of increased output, since consumers have reduced uncertainty concerning product quality. These quality standards are generally implemented through certification marks (for example, the “UL” certification mark) or ranking mechanisms (for example, consumer-credit or business-credit scores), which induce adoption and compliance through the opportunity to accrue reputational goodwill that, in turn, translates into economic gains.

These market-implemented voluntary mechanisms are a far less costly means to reduce information asymmetries in consumer-goods markets than regulatory interventions, which require significant investments of public funds in rulemaking, detection, investigation, enforcement, and adjudication activities.

Hybrid Policy Approaches

Private-ordering solutions to collective-action failures in markets that suffer from information asymmetries can sometimes benefit from targeted regulatory action, resulting in a hybrid policy approach. In particular, regulators can sometimes perform two supplemental functions in this context.

First, regulators can require that providers in certain markets comply with (or can provide a liability safe harbor for providers that comply with) the quality standards developed by private intermediaries that have developed track records of efficiently establishing those standards and reliably confirming compliance. This mechanism is anticipated by the ANPR, which asks in question 51 whether the commission should “require firms to certify that their commercial surveillance practices meet clear standards concerning collection, use, retention, transfer, or monetization of consumer data” and further asks whether those standards should be set by “the Commission, a third-party organization, or some other entity.”

Other regulatory agencies already follow this model. For example, federal and state regulatory agencies in the fields of health care and education rely on accreditation by designated private entities for purposes of assessing compliance with applicable licensing requirements.

Second, regulators can supervise and review the quality standards implemented, adjusted, and enforced by private intermediaries. This is illustrated by the example of securities markets, in which the major exchanges institute and enforce certain governance, disclosure, and reporting requirements for listed companies but are subject to regulatory oversight by the SEC, which must approve all exchange rules and amendments. Similarly, major accounting firms monitor compliance by public companies with GAAP but must register with, and are subject to oversight by, the Public Company Accounting Oversight Board (PCAOB), a nonprofit entity subject to SEC oversight.

These types of hybrid mechanisms shift to private intermediaries most of the costs involved in developing, updating, and enforcing quality standards (in this context, standards for the use of consumer data) and harness private intermediaries’ expertise, capacities, and incentives to execute these functions efficiently and rapidly, while using targeted forms of regulatory oversight as a complementary policy tool.

Conclusion

Certain uses of consumer data in digital markets may impose net social harms that can be mitigated through appropriately crafted regulation. Assuming, for the sake of argument, that the commission has the legal power to enact regulation to address such harms (again, a point as to which there is great doubt), any specific steps must be grounded in rigorous and balanced cost-benefit analysis.

As a matter of law and sound public policy, it is imperative that the commission meaningfully consider the full range of reliable evidence to identify any potential market failures in the use of consumer data and how to formulate rules to rectify or mitigate such failures at a net social gain. Given the extent to which business models in digital environments rely on the use of consumer data, and the substantial value those business models confer on consumers and businesses, the potential “error costs” of regulatory overreach are high. It is therefore critical to engage in a thorough balancing of costs and gains concerning any such use.

Privacy regulation is a complex and economically consequential policy area that demands careful diagnosis and targeted remedies grounded in analysis and evidence, rather than sweeping interventions accompanied by rhetoric and anecdote.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

Earlier this month, Professors Fiona Scott Morton, Steve Salop, and David Dinielli penned a letter expressing their “strong support” for the proposed American Innovation and Choice Online Act (AICOA). In the letter, the professors address criticisms of AICOA and urge its approval, despite possible imperfections.

“Perhaps this bill could be made better if we lived in a perfect world,” the professors write, “[b]ut we believe the perfect should not be the enemy of the good, especially when change is so urgently needed.”

The problem is that the professors and other supporters of AICOA have shown neither that “change is so urgently needed” nor that the proposed law is, in fact, “good.”

Is Change ‘Urgently Needed’?

With respect to the purported urgency that warrants passage of a concededly imperfect bill, the letter authors assert two points. First, they claim that AICOA’s targets—Google, Apple, Facebook, Amazon, and Microsoft (collectively, GAFAM)—“serve as the essential gatekeepers of economic, social, and political activity on the internet.” It is thus appropriate, they say, to amend the antitrust laws to do something they have never before done: saddle a handful of identified firms with special regulatory duties.

But is this oft-repeated claim about “gatekeeper” status true? The label conjures up the old Terminal Railroad case, in which a group of firms controlled the only bridges over the Mississippi River at St. Louis. Freighters had no choice but to use their services. Do the GAFAM firms really play a similar role with respect to “economic, social, and political activity on the internet”? Hardly.

With respect to economic activity, Amazon may be a huge player, but it still accounts for only 39.5% of U.S. ecommerce sales—and far less of retail sales overall. Consumers have gobs of other ecommerce options, and so do third-party merchants, which may sell their wares using Shopify, Ebay, Walmart, Etsy, numerous other ecommerce platforms, or their own websites.

For social activity on the internet, consumers need not rely on Facebook and Instagram. They can connect with others via Snapchat, Reddit, Pinterest, TikTok, Twitter, and scores of other sites. To be sure, all these services have different niches, but the letter authors’ claim that the GAFAM firms are “essential gatekeepers” of “social… activity on the internet” is spurious.

Nor are the firms singled out by AICOA essential gatekeepers of “political activity on the internet.” The proposed law touches neither Twitter, the primary hub of political activity on the internet, nor TikTok, which is increasingly used for political messaging.

The second argument the letter authors assert in support of their claim of urgency is that “[t]he decline of antitrust enforcement in the U.S. is well known, pervasive, and has left our jurisprudence unable to protect and maintain competitive markets.” In other words, contemporary antitrust standards are anemic and have led to a lack of market competition in the United States.

The evidence for this claim, which is increasingly parroted in the press and among the punditry, is weak. Proponents primarily point to studies showing:

  1. increasing industrial concentration;
  2. higher markups on goods and services since 1980;
  3. a declining share of surplus going to labor, which could indicate monopsony power in labor markets; and
  4. a reduction in startup activity, suggesting diminished innovation. 

Examined closely, however, those studies fail to establish a domestic market power crisis.

Industrial concentration has little to do with market power in actual markets. Indeed, research suggests that, while industries may be consolidating at the national level, competition at the market (local) level is increasing, as more efficient national firms open more competitive outlets in local markets. As Geoff Manne sums up this research:

Most recently, several working papers looking at the data on concentration in detail and attempting to identify the likely cause for the observed data, show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

With respect to the evidence on markups, the claim of a significant increase in the price-cost margin depends crucially on the measure of cost. The studies suggesting an increase in margins since 1980 use the “cost of goods sold” (COGS) metric, which excludes a firm’s management and marketing costs—both of which have become an increasingly significant portion of firms’ costs. Measuring costs using the “operating expenses” (OPEX) metric, which includes management and marketing costs, reveals that public-company markups increased only modestly since the 1980s and that the increase was within historical variation. (It is also likely that increased markups since 1980 reflect firms’ more extensive use of technology and their greater regulatory burdens, both of which raise fixed costs and require higher markups over marginal cost.)
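A stylized comparison of the two cost measures illustrates why the choice matters (the notation is mine; the underlying studies define their variables with more care):

\[
\mu_{\text{COGS}} = \frac{\text{revenue}}{\text{COGS}}, \qquad
\mu_{\text{OPEX}} = \frac{\text{revenue}}{\text{COGS} + \text{management and marketing costs}}.
\]

Because management and marketing costs have grown as a share of firms’ total costs since 1980, \(\mu_{\text{COGS}}\) rises mechanically even when \(\mu_{\text{OPEX}}\), which accounts for all operating expenses, remains within historical variation.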

As for the declining labor share, that dynamic is occurring globally. Indeed, the decline in the labor share in the United States has been less severe than in Japan, Canada, Italy, France, Germany, China, Mexico, and Poland, suggesting that anemic U.S. antitrust enforcement is not to blame. (A reduction in the relative productivity of labor is a more likely culprit.)

Finally, the claim of reduced startup activity is unfounded. In its report on competition in digital markets, the U.S. House Judiciary Committee asserted that, since the advent of the major digital platforms:

  1. “[t]he number of new technology firms in the digital economy has declined”;
  2. “the entrepreneurship rate—the share of startups and young firms in the [high technology] industry as a whole—has also fallen significantly”; and
  3. “[u]nsurprisingly, there has also been a sharp reduction in early-stage funding for technology startups.” (pp. 46-47)

Those claims, however, are based on cherry-picked evidence.

In support of the first two, the Judiciary Committee report cited a study based on data ending in 2011. As Benedict Evans has observed, “standard industry data shows that startup investment rounds have actually risen at least 4x since then.”

In support of the third claim, the report cited statistics from an article noting that the number and aggregate size of the very smallest venture capital deals—those under $1 million—fell between 2014 and 2018 (after growing substantially from 2008 to 2014). The Judiciary Committee report failed to note, however, the cited article’s observation that small venture deals ($1 million to $5 million) had not dropped and that larger venture deals (greater than $5 million) had grown substantially during the same time period. Nor did the report acknowledge that venture-capital funding has continued to increase since 2018.

Finally, there is also reason to think that AICOA’s passage would harm, not help, the startup environment:

AICOA doesn’t directly restrict startup acquisitions, but the activities it would restrict most certainly do dramatically affect the incentives that drive many startup acquisitions. If a platform is prohibited from engaging in cross-platform integration of acquired technologies, or if it can’t monetize its purchase by prioritizing its own technology, it may lose the motivation to make a purchase in the first place.

Despite the letter authors’ claims, neither a paucity of avenues for “economic, social, and political activity on the internet” nor the general state of market competition in the United States establishes an “urgent need” to re-write the antitrust laws to saddle a small group of firms with unprecedented legal obligations.

Is the Vagueness of AICOA’s Primary Legal Standard a Feature?

AICOA bars covered platforms from engaging in three broad classes of conduct (self-preferencing, discrimination among business users, and limiting business users’ ability to compete) where the behavior at issue would “materially harm competition.” It then forbids several specific business practices, but allows a defendant to avoid liability by proving that its use of the practice would not cause a “material harm to competition.”

Critics have argued that “material harm to competition”—a standard that is not used elsewhere in the antitrust laws—is too indeterminate to provide business planners and adjudicators with adequate guidance. The authors of the pro-AICOA letter, however, maintain that this “different language is a feature, not a bug.”

That is so, the letter authors say, because the language effectively signals to courts and policymakers that antitrust should prohibit more conduct. They explain:

To clarify to courts and policymakers that Congress wants something different (and stronger), new terminology is required. The bill’s language would open up a new space and move beyond the standards imposed by the Sherman Act, which has not effectively policed digital platforms.

Putting aside the weakness of the letter authors’ premise (i.e., that Sherman Act standards have proven ineffective), the legislative strategy they advocate—obliquely signal that you want “change” without saying what it should consist of—is irresponsible and risky.

The letter authors assert two reasons Congress should not worry about enacting a liability standard that has no settled meaning. One is that:

[t]he same judges who are called upon to render decisions under the existing, insufficient, antitrust regime, will also be called upon to render decisions under the new law. They will be the same people with the same worldview.

It is thus unlikely that “outcomes under the new law would veer drastically away from past understandings of core concepts….”

But this claim undermines the argument that a new standard is needed to get the courts to do “something different” and “move beyond the standards imposed by the Sherman Act.” If we don’t need to worry about an adverse outcome from a novel, ill-defined standard because courts are just going to continue applying the standard they’re familiar with, then what’s the point of changing the standard?

A second reason not to worry about the lack of clarity on AICOA’s key liability standard, the letter authors say, is that federal enforcers will define it:

The new law would mandate that the [Federal Trade Commission and the Antitrust Division of the U.S. Department of Justice], the two expert agencies in the area of competition, together create guidelines to help courts interpret the law. Any uncertainty about the meaning of words like ‘competition’ will be resolved in those guidelines and over time with the development of caselaw.

This is no doubt music to the ears of members of Congress, who love to get credit for “doing something” legislatively, while leaving the details to an agency so that they can avoid accountability if things turn out poorly. Indeed, the letter authors explicitly play upon legislators’ unwholesome desire for credit-sans-accountability. They emphasize that “[t]he agencies must [create and] update the guidelines periodically. Congress doesn’t have to do much of anything very specific other than approve budgets; it certainly has no obligation to enact any new laws, let alone amend them.”

AICOA does not, however, confer rulemaking authority on the agencies; it merely directs them to create and periodically update “agency enforcement guidelines” and “agency interpretations” of certain affirmative defenses. Those guidelines and interpretations would not bind courts, which would be free to interpret AICOA’s new standard differently. The letter authors presume that courts would defer to the agencies’ interpretation of the vague standard, and they probably would. But that raises other problems.

For one thing, it reduces certainty, which is likely to chill innovation. Giving the enforcement agencies de facto power to determine and redetermine what behaviors “would materially harm competition” means that the rules are never settled. Administrations differ markedly in their views about what the antitrust laws should forbid, so business planners could never be certain that a product feature or revenue model that is legal today will not be deemed to “materially harm competition” by a future administration with greater solicitude for small rivals and upstarts. Such uncertainty will hinder investment in novel products, services, and business models.

Consider, for example, Google’s investment in the Android mobile operating system. Google makes money from Android—which it licenses to device manufacturers for free—by ensuring that Google’s revenue-generating services (e.g., its search engine and browser) are strongly preferenced on Android products. One administration might believe that this is a procompetitive arrangement, as it creates a different revenue model for mobile operating systems (as opposed to Apple’s generation of revenue from hardware sales), resulting in both increased choice and lower prices for consumers. A subsequent administration might conclude that the arrangement materially harms competition by making it harder for rival search engines and web browsers to gain market share. It would make scant sense for a covered platform to make an investment like Google did with Android if its underlying business model could be upended by a new administration with de facto power to rewrite the law.

A second problem with having the enforcement agencies determine and redetermine what covered platforms may do is that it effectively transforms the agencies from law enforcers into sectoral regulators. Indeed, the letter authors agree that “the ability of expert agencies to incorporate additional protections in the guidelines” means that “the bill is not a pure antitrust law but also safeguards other benefits to consumers.” They tout that “the complementarity between consumer protection and competition can be addressed in the guidelines.”

Of course, to the extent that the enforcement guidelines address concerns besides competition, they will be less useful for interpreting AICOA’s “material harm to competition” standard; they might deem a practice suspect on non-competition grounds. Moreover, it is questionable whether creating a sectoral regulator for five widely diverse firms is a good idea. The history of sectoral regulation is littered with examples of agency capture, rent-seeking, and other public-choice concerns. At a minimum, Congress should carefully examine the potential downsides of sectoral regulation, install protections to mitigate those downsides, and explicitly establish the sectoral regulator.

Will AICOA Break Popular Products and Services?

Many popular offerings by the platforms covered by AICOA involve self-preferencing, discrimination among business users, or one of the other behaviors the bill presumptively bans. Pre-installation of iPhone apps and services like Siri, for example, involves self-preferencing or discrimination among business users of Apple’s iOS platform. But iPhone consumers value having a mobile device that offers extensive services right out of the box. Consumers love that Google’s search result for an establishment offers directions to the place, which involves the preferencing of Google Maps. And consumers positively adore Amazon Prime, which can provide free expedited delivery because Amazon conditions Prime designation on a third-party seller’s use of Amazon’s efficient, reliable “Fulfillment by Amazon” service—something Amazon could not do under AICOA.

The authors of the pro-AICOA letter insist that the law will not ban attractive product features like these. AICOA, they say:

provides a powerful defense that forecloses any thoughtful concern of this sort: conduct otherwise banned under the bill is permitted if it would ‘maintain or substantially enhance the core functionality of the covered platform.’

But the authors’ confidence that this affirmative defense will adequately protect popular offerings is misplaced. The defense is narrow and difficult to mount.

First, it immunizes only those behaviors that maintain or substantially enhance the “core” functionality of the covered platform. Courts would rightly interpret AICOA to give effect to that otherwise unnecessary word, which dictionaries define as “the central or most important part of something.” Accordingly, any self-preferencing, discrimination, or other presumptively illicit behavior that enhances a covered platform’s service but not its “central or most important” functions is not even a candidate for the defense.

Even if a covered platform could establish that a challenged practice would maintain or substantially enhance the platform’s core functionality, it would also have to prove that the conduct was “narrowly tailored” and “reasonably necessary” to achieve the desired end, and, for many behaviors, the “le[ast] discriminatory means” of doing so. That is a remarkably heavy burden, and it beggars belief to suppose that business planners considering novel offerings involving self-preferencing, discrimination, or some other presumptively illicit conduct would feel confident that they could make the required showing. It is likely, then, that AICOA would break existing products and services and discourage future innovation.

Of course, Congress could mitigate this concern by specifying that AICOA does not preclude certain things, such as pre-installed apps or consumer-friendly search results. But the legislation would then lose the support of the many interest groups who want the law to preclude various popular offerings that its text would now forbid. Unlike consumers, who are widely dispersed and difficult to organize, the groups and competitors that would benefit from things like stripped-down smartphones, map-free search results, and Prime-less Amazon are effective lobbyists.

Should the US Follow Europe?

Having responded to criticisms of AICOA, the authors of the pro-AICOA letter go on offense. They assert that enactment of the bill is needed to ensure that the United States doesn’t lose ground to Europe, both in regulatory leadership and in innovation. Observing that the European Union’s Digital Markets Act (DMA) has just become law, the authors write that:

[w]ithout [AICOA], the role of protecting competition and innovation in the digital sector outside China will be left primarily to the European Union, abrogating U.S. leadership in this sector.

Moreover, if Europe implements its DMA and the United States does not adopt AICOA, the authors claim:

the center of gravity for innovation and entrepreneurship [could] shift from the U.S. to Europe, where the DMA would offer greater protections to start ups and app developers, and even makers and artisans, against exclusionary conduct by the gatekeeper platforms.

Implicit in the argument that AICOA is needed to maintain America’s regulatory leadership is the assumption that to lead in regulatory policy is to have the most restrictive rules. The most restrictive regulator will necessarily be the “leader” in the sense that it will be the one with the most control over regulated firms. But leading in the sense of optimizing outcomes and thereby serving as a model for other jurisdictions entails crafting the best policies—those that minimize the aggregate social losses from wrongly permitting bad behavior, wrongly condemning good behavior, and determining whether conduct is allowed or forbidden (i.e., those that “minimize the sum of error and decision costs”). Rarely is the most restrictive regulatory regime the one that optimizes outcomes, and as I have elsewhere explained, the rules set forth in the DMA hardly seem calibrated to do so.
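The parenthetical refers to the standard error-cost framework, which can be stated compactly (my notation): choose the legal rule \(r\) that solves

\[
\min_{r} \; \Pr(\text{FP} \mid r)\,C_{FP} \;+\; \Pr(\text{FN} \mid r)\,C_{FN} \;+\; C_{D}(r),
\]

where false positives (FP) are wrongly condemned good behavior, false negatives (FN) are wrongly permitted bad behavior, and \(C_{D}\) is the cost of determining whether conduct is allowed or forbidden. A maximally restrictive regime drives the false-negative term toward zero, but only by inflating the other two terms.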

As for “innovation and entrepreneurship” in the technological arena, it would be a seismic shift indeed if the center of gravity were to migrate to Europe, which is currently home to zero of the top 20 global tech companies. (The United States hosts 12; China, eight.)

It seems implausible, though, that imposing a bunch of restrictions on large tech companies that have significant resources for innovation and are scrambling to enter each other’s markets will enhance, rather than retard, innovation. The self-preferencing bans in AICOA and DMA, for example, would prevent Apple from developing its own search engine to compete with Google, as it has apparently contemplated. Why would Apple develop its own search engine if it couldn’t preference it on iPhones and iPads? And why would Google have started its shopping service to compete with Amazon if it couldn’t preference Google Shopping in search results? And why would any platform continually improve to gain more users as it neared the thresholds for enhanced duties under DMA or AICOA? It seems more likely that the DMA/AICOA approach will hinder, rather than spur, innovation.

At the very least, wouldn’t it be prudent to wait and see whether DMA leads to a flourishing of innovation and entrepreneurship in Europe before jumping on the European bandwagon? After all, technological innovations that occur in Europe won’t be available only to Europeans. Just as Europeans benefit from innovation by U.S. firms, American consumers will be able to reap the benefits of any DMA-inspired innovation occurring in Europe. Moreover, if DMA indeed furthers innovation by making it easier for entrants to gain footing, even American technology firms could benefit from the law by launching their products in Europe. There’s no reason for the tech sector to move to Europe to take advantage of a small-business-protective European law.

In fact, the optimal outcome might be to have one jurisdiction in which major tech platforms are free to innovate, enter each other’s markets via self-preferencing, etc. (the United States, under current law) and another that is more protective of upstart businesses that use the platforms (Europe under DMA). The former jurisdiction would create favorable conditions for platform innovation and inter-platform competition; the latter might enhance innovation among businesses that rely on the platforms. Consumers in each jurisdiction, however, would benefit from innovation facilitated by the other.

It makes little sense, then, for the United States to rush to adopt European-style regulation. DMA is a radical experiment. Regulatory history suggests that the sort of restrictiveness it imposes retards, rather than furthers, innovation. But in the unlikely event that things turn out differently this time, little harm would result from waiting to see DMA’s benefits before implementing its restrictive approach. 

Does AICOA Threaten Platforms’ Ability to Moderate Content and Police Disinformation?

The authors of the pro-AICOA letter conclude by addressing the concern that AICOA “will inadvertently make content moderation difficult because some of the prohibitions could be read… to cover and therefore prohibit some varieties of content moderation” by covered platforms.

The letter authors say that a reading of AICOA to prohibit content moderation is “strained.” They maintain that the act’s requirement of “competitive harm” would prevent imposition of liability based on content moderation and that the act is “plainly not intended to cover” instances of “purported censorship.” They further contend that the risk of judicial misconstrual exists with all proposed laws and therefore should not be a sufficient reason to oppose AICOA.

Each of these points is weak. Section 3(a)(3) of AICOA makes it unlawful for a covered platform to “discriminate in the application or enforcement of the terms of service of the covered platform among similarly situated business users in a manner that would materially harm competition.” It is hardly “strained” to reason that this provision is violated when, say, Google’s YouTube selectively demonetizes a business user for content that Google deems harmful or misleading. Or when Apple removes Parler, but not every other violator of service terms, from its App Store. Such conduct could “materially harm competition” by impeding the de-platformed business’ ability to compete with its rivals.

And it is hard to say that AICOA is “plainly not intended” to forbid these acts when a key supporting senator touted the bill as a means of policing content moderation and observed during markup that it would “make some positive improvement on the problem of censorship” (i.e., content moderation) because “it would provide protections to content providers, to businesses that are discriminated against because of the content of what they produce.”

At a minimum, we should expect some state attorneys general to try to use the law to police content moderation they disfavor, and the mere prospect of such legal action could chill anti-disinformation efforts and other forms of content moderation.

Of course, there’s a simple way for Congress to eliminate the risk of what the letter authors deem judicial misconstrual: It could clarify that AICOA’s prohibitions do not cover good-faith efforts to moderate content or police disinformation. Such clarification, however, would kill the bill, as several Republican legislators are supporting the act because it restricts content moderation.

The risk of judicial misconstrual with AICOA, then, is not the sort that exists with “any law, new or old,” as the letter authors contend. “Normal” misconstrual risk exists when legislators try to be clear about their intentions but, because language has its limits, some vagueness or ambiguity persists. AICOA’s architects have deliberately obscured their intentions in order to cobble together enough supporters to get the bill across the finish line.

The one thing that all AICOA supporters can agree on is that they deserve credit for “doing something” about Big Tech. If the law is construed in a way they disfavor, they can always act shocked and blame rogue courts. That’s shoddy, cynical lawmaking.

Conclusion

So, I respectfully disagree with Professors Scott Morton, Salop, and Dinielli on AICOA. There is no urgent need to pass the bill right now, especially as we are on the cusp of seeing an AICOA-like regime put to the test. The bill’s central liability standard is overly vague, and its plain terms would break popular products and services and thwart future innovation. The United States should equate regulatory leadership with the best, not the most restrictive, policies. And Congress should thoroughly debate and clarify its intentions on content moderation before enacting legislation that could upend the status quo on that important matter.

For all these reasons, Congress should reject AICOA. And for the same reasons, a future in which AICOA is adopted is extremely unlikely to resemble the Utopian world that Professors Scott Morton, Salop, and Dinielli imagine.

The Biden administration’s antitrust reign of error continues apace. The U.S. Justice Department’s (DOJ) Antitrust Division has indicated in recent months that criminal prosecutions may be forthcoming under Section 2 of the Sherman Antitrust Act, but refuses to provide any guidance regarding enforcement criteria.

Earlier this month, Deputy Assistant Attorney General Richard Powers stated that “there’s ample case law out there to help inform those who have concerns or questions” regarding Section 2 criminal enforcement, conveniently ignoring the fact that criminal Section 2 cases have not been brought in almost half a century. Needless to say, those ancient Section 2 cases (which are relatively few in number) antedate the modern era of economic reasoning in antitrust analysis. What’s more, unlike Section 1 price-fixing and market-division precedents, they yield no clear rule as to what constitutes criminal unilateral behavior. Thus, DOJ’s suggestion that old cases be consulted for guidance is disingenuous at best. 

It follows that DOJ criminal-monopolization prosecutions would be sheer folly. They would spawn substantial confusion and uncertainty and would chill the aggressive business conduct that drives dynamic economic growth.

Aggressive unilateral business conduct is a key driver of the competitive process. It brings about “creative destruction” that transforms markets, generates innovation, and thereby drives economic growth. As such, one wants to be particularly careful before condemning such conduct on grounds that it is anticompetitive. Accordingly, error costs here are particularly high and damaging to economic prosperity.

Moreover, errors in assessing unilateral conduct are more likely than errors in assessing joint conduct, because it is very hard to distinguish between procompetitive and anticompetitive single-firm conduct, as DOJ’s 2008 Report on Single Firm Conduct Under Section 2 explains (citations omitted):

Courts and commentators have long recognized the difficulty of determining what means of acquiring and maintaining monopoly power should be prohibited as improper. Although many different kinds of conduct have been found to violate section 2, “[d]efining the contours of this element … has been one of the most vexing questions in antitrust law.” As Judge Easterbrook observes, “Aggressive, competitive conduct by any firm, even one with market power, is beneficial to consumers. Courts should prize and encourage it. Aggressive, exclusionary conduct is deleterious to consumers, and courts should condemn it. The big problem lies in this: competitive and exclusionary conduct look alike.”

The problem is not simply one that demands drawing fine lines separating different categories of conduct; often the same conduct can both generate efficiencies and exclude competitors. Judicial experience and advances in economic thinking have demonstrated the potential procompetitive benefits of a wide variety of practices that were once viewed with suspicion when engaged in by firms with substantial market power. Exclusive dealing, for example, may be used to encourage beneficial investment by the parties while also making it more difficult for competitors to distribute their products.

If DOJ does choose to bring a Section 2 criminal case soon, would it target one of the major digital platforms? Notably, a U.S. House Judiciary Committee letter recently called on DOJ to launch a criminal investigation of Amazon (see here). Also, current Federal Trade Commission (FTC) Chair Lina Khan launched her academic career with an article focusing on Amazon’s “predatory pricing” and attacking the consumer welfare standard (see here).

Khan’s “analysis” has been totally discredited. As a trenchant scholarly article by Timothy Muris and Jonathan Nuechterlein explains:

[DOJ’s criminal Section 2 prosecution of A&P, begun in 1944,] bear[s] an eerie resemblance to attacks today on leading online innovators. Increasingly integrated and efficient retailers—first A&P, then “big box” brick-and-mortar stores, and now online retailers—have challenged traditional retail models by offering consumers lower prices and greater convenience. For decades, critics across the political spectrum have reacted to such disruption by urging Congress, the courts, and the enforcement agencies to stop these American success stories by revising antitrust doctrine to protect small businesses rather than the interests of consumers. Using antitrust law to punish pro-competitive behavior makes no more sense today than it did when the government attacked A&P for cutting consumers too good a deal on groceries. 

Before bringing criminal Section 2 charges against Amazon, or any other “dominant” firm, DOJ leaders should read and absorb the sobering Muris and Nuechterlein assessment. 

Finally, not only would DOJ Section 2 criminal prosecutions represent bad public policy—they would also undermine the rule of law. In a very thoughtful 2017 speech, then-Acting Assistant Attorney General for Antitrust Andrew Finch succinctly summarized the importance of the rule of law in antitrust enforcement:

[H]ow do we administer the antitrust laws more rationally, accurately, expeditiously, and efficiently? … Law enforcement requires stability and continuity both in rules and in their application to specific cases.

Indeed, stability and continuity in enforcement are fundamental to the rule of law. The rule of law is about notice and reliance. When it is impossible to make reasonable predictions about how a law will be applied, or what the legal consequences of conduct will be, these important values are diminished. To call our antitrust regime a “rule of law” regime, we must enforce the law as written and as interpreted by the courts and advance change with careful thought.

The reliance fostered by stability and continuity has obvious economic benefits. Businesses invest, not only in innovation but in facilities, marketing, and personnel, and they do so based on the economic and legal environment they expect to face.

Of course, we want businesses to make those investments—and shape their overall conduct—in accordance with the antitrust laws. But to do so, they need to be able to rely on future application of those laws being largely consistent with their expectations. An antitrust enforcement regime with frequent changes is one that businesses cannot plan for, or one that they will plan for by avoiding certain kinds of investments.

Bringing criminal monopolization cases now, after a half-century of inaction, would be antithetical to the stability and continuity that underlie the rule of law. What’s worse, the failure to provide prosecutorial guidance would be squarely at odds with concerns of notice and reliance that inform the rule of law. As such, a DOJ decision to target firms for Section 2 criminal charges would offend the rule of law (and, sadly, follow the FTC’s recent example of flouting the rule of law, see here and here).

In sum, the case against criminal Section 2 prosecutions is overwhelming. At a time when DOJ is facing difficulties winning “slam dunk” criminal Section 1 prosecutions targeting facially anticompetitive joint conduct (see here, here, and here), the notion that it would criminally pursue unilateral conduct that may generate substantial efficiencies is ludicrous. Hopefully, DOJ leadership will come to its senses and drop any and all plans to bring criminal Section 2 cases.

[The 12th entry in our FTC UMC Rulemaking symposium is from guest contributor Steven J. Cernak, a partner in the antitrust and competition practice of BonaLaw in Detroit, Michigan. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

The Federal Trade Commission (FTC) has been in the antitrust-enforcement business for more than 100 years. Its new leadership is considering some of the biggest changes ever in its enforcement methods. Instead of a detailed analysis of each case on its own merits, some FTC leaders now want the agency's unelected bureaucrats to write competition rules for the entire economy under its power to stop unfair methods of competition. Such a move would be bad for competition and the economy—and for the FTC itself.

The FTC enforces the antitrust laws through its statutory authority to police unfair methods of competition (UMC). Like all antitrust challengers, the FTC now must conduct a detailed analysis of the specific actions of particular competitors. Whether the FTC decides to challenge actions initially in its own administrative courts or in federal courts, eventually it must convince independent judges that the challenged conduct really does harm competition. When finalized, those decisions set precedent. Future parties can argue their particular details are different or otherwise require a different outcome. As a result, the antitrust laws slowly evolve in ways understandable to all.

Some members of FTC’s new leadership have argued that the agency should skip the hard work of individual cases and instead issue blanket rules to cover competitive situations across the economy. Since taking over in the new administration, they have taken steps that seem to make it easier for the FTC to issue such broad competition rules. Doing so would be a mistake for several reasons.

First, it is far from clear that Congress gave the FTC the authority to issue such rules. Also, any such grant of quasi-legislative power to this independent agency might be unconstitutional. The FTC already gets to play prosecutor and judge in many cases. Becoming a legislature might be going too far. Other commentators, both in this symposium and elsewhere, have detailed those arguments. But however those arguments shake out, the FTC will need to take the time and resources to fight off the inevitable challenges.

But even if it can, the FTC should not. The case-by-case approach allows for detailed analysis, making it more likely to be correct. If there are any mistakes, they only affect those parties.

If it turns to competition rulemaking, how will the FTC gain the knowledge and wisdom needed to develop rules that apply across large swaths of the economy for an unlimited time? Will it apply the same rules to companies with 8% and 80% market share? And to companies making software or automobiles or flying passengers across the country? And will it apply those rules today and next year, no matter the innovations that occur in between? The hubris to think that some all-knowing Washington wizards can get all that right, all the time, is staggering.

Yes, there are some general antitrust rules, like price-fixing agreements being illegal because they harm consumers. But those rules were developed by many lawyers, economists, judges, and witnesses through decades of case-by-case analyses and, even today, parties can argue to a court that they don’t apply to their particular facts. A one-size-fits-all rule won’t have even that flexibility.

For example, suppose the FTC, based on, say, an investigation of toilet-bowl manufacturers, develops a rule that all price-fixing is automatically illegal, even if the fixed price is reasonable. How would such a rigid rule handle, say, a joint license with a single price issued by competing music composers? Or could a single rule that anticipates the very different facts of Trenton Potteries and Broadcast Music be written in a way that is both short enough to be understood and broad enough to anticipate all potential future facts? Perhaps the rule inspired by Trenton Potteries could be adjusted when the Broadcast Music facts become known. But then, we are back to the detailed, case-by-case analysis that we have now, except with FTC rule-makers changing the rules rather than an independent judge.

Any new FTC rules could conflict with the court opinions generated by antitrust cases brought by the U.S. Justice Department’s (DOJ) Antitrust Division, state attorneys general, or private parties. For instance, the FTC and the Division generally divide up the industries that make up the economy based on expertise and experience. Should the competitive rules differ by enforcer? By industry?

As an example, consider, say, a hypothetical automatic-transmission company whose smallest products can be used in light-duty pickup trucks while the bulk of its product line is used in the largest heavy-duty trucks and equipment. Traditionally, the FTC has reviewed antitrust issues in the light-duty industry while the Division has taken heavy-duty. Should the antitrust rules affecting this hypothetical company’s light-duty sales be different than those affecting the heavy-duty sales based solely on the enforcer and not the applicable competitive facts?

Antitrust is a law-enforcement regime with rules that have changed slowly over decades through individual cases, as economic understandings have evolved. It could have been a regulatory regime, but elected officials did not make that choice. Antitrust could be changed now to a regulatory regime. Individual rules could be changed. Such monumental changes, however, should only be made by Congress, as is being debated now, not by three unelected FTC officials.

In the 1970s, the FTC overreached on rules about deceptive marketing and was slapped down by Congress, the courts, and the public. The Washington Post criticized it as “the national nanny.” Its reputation and authority suffered. We did not need a national nanny then. We don’t need one today, hectoring us to follow overbroad, ill-fitting rules designed by insulated “experts” and not subject to review.

The FTC has very important roles to play in understanding and protecting competition in the U.S. economy (before even getting to its crucial consumer-protection mission). Even with potential increases in its budget, the FTC, like all of us, will have limited resources, time, expertise, and reputation. It should not squander any of that on an ill-fated, quixotic, and hubristic effort to tell everyone how to compete. Instead, the FTC should focus on what it does best: challenging the bad actions of bad actors and convincing a court that it got it right. That is how the FTC can best protect America's consumers, as its (nicely redesigned) website proclaims.

[The ninth entry in our FTC UMC Rulemaking symposium comes from guest contributor Aaron Nielsen of BYU Law. It is the second post we are publishing today; see also this related post from Jonathan M. Barnett of USC Gould School of Law. Like that post, it adapts a paper that will appear as a chapter in the forthcoming book FTC’s Rulemaking Authority, which will be published by Concurrences later this year. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

For obvious reasons, many scholars, lawyers, and policymakers are thinking hard about whether the Federal Trade Commission (FTC) has authority to promulgate substantive “unfair methods of competition” (UMC) regulations. I first approached this issue a couple of years ago when the FTC asked me to present on the agency’s rulemaking powers. For my presentation, I focused on 1973’s National Petroleum Refiners Association v. FTC and, in particular, whether the U.S. Court of Appeals for the D.C. Circuit correctly held that the FTC has authority to promulgate such rules. I ventured that relying on National Petroleum Refiners would present “litigation risk” for the FTC because the method of statutory interpretation used by the D.C. Circuit is out of step with how courts read statutes today. Richard Pierce, who presented at the same event, was even more blunt:

Let me just express my complete agreement with Aaron’s analysis of the extraordinary fragility of the FTC position that National Petroleum Refiners is going to protect them. I teach National Petroleum Refiners every year. And I teach it as an object lesson in what no court, modern court, would ever do today. The reasoning is, by today’s standards, preposterous.  … [T]he interpretive method that was used in that case was fairly commonly used on the DC Circuit at that time. There is no justice today—not just Gorsuch, but Kagan, Breyer—there is no justice today that would [use that method]. 

That was a fun academic discussion—with emphasis on the word academic. After all, for decades, this issue has only been an academic question because the FTC has not attempted to use such authority. That academic question, however, may soon become a concrete dispute. 

Pierce and others have advanced the anti-National Petroleum Refiners position. Recently, Kacyn H. Fujii has advanced the pro-National Petroleum Refiners position. Should the FTC promulgate a substantive UMC rule, the federal courts will decide which position is right. As that day approaches, many more experts will offer thoughts on this important question. 

Here, however, I want to focus on a different question: What would happen if the FTC could promulgate broad, high-profile UMC rules, including new antitrust tests?

I’ve just posted to SSRN a new essay that addresses that question: “What Happens If the FTC Becomes a Serious Rulemaker?” This essay will be published in the forthcoming book FTC’s Rulemaking Authority. Here is the abstract:

The Federal Trade Commission (FTC) is no one’s idea of a serious rulemaker. To the contrary, the FTC is in many respects a law enforcement agency that operates through litigation and consent decrees. There are understandable reasons for this absence of FTC rulemaking. Not only has Congress imposed heightened procedural obligations on the FTC’s ability to promulgate consumer protection rules, but also it is far from clear that the FTC even has statutory authority to promulgate substantive rules relating to unfair methods of competition (UMC). Yet things may be changing. It appears that the FTC is preparing to begin using rulemaking more aggressively, including for substantive UMC regulations. The FTC’s ability to use rulemaking this way will undoubtedly prompt sharp and important legal challenges.

This short essay, however, considers the question of FTC rulemaking from a different angle: What if the FTC has broad rulemaking authority?  And what if the FTC begins to use that authority for controversial policies? Traditionally, the FTC operates in a case-by-case fashion that attempts to apply familiar principles to the facts of individual matters. Should the FTC begin making broader policy choices through rulemaking, however, it should be prepared for at least three unintended consequences: (i) more ossification, including more judicial challenges and perhaps White House oversight; (ii) more zigzagging policy as new FTC leadership, in response to changes in presidential control, moves to undo what the agency has just done; and (iii) to more often be the target of what has been called “administrative law as blood sport,” by which political actors make it more difficult for the agency to function, for example by delaying the confirmation process.  The upshot would be an agency that could in theory (and sometimes no doubt in fact) regulate more broadly than the FTC does now, but also one with a different character.  In short, the more the FTC becomes a serious rulemaker, the more the FTC will change as an institution.

Here, I will summarize some of the thoughts from my essay. Please read the full essay, however, if you’re looking for citations and a more complete explanation. 

At the outset, my essay is not an attack on rulemaking. There are good reasons to prefer agencies to make policy through rulemaking rather than, say, case-by-case adjudication or threats. In fact, Kristin Hickman and I have written an entire article explaining why rulemaking (generally) should be favored over adjudication. That said, I am concerned about the idea that the FTC has substantive rulemaking authority to promulgate broad UMC rules under Section 5 of the FTC Act. Rulemaking has many advantages, but it does not follow that rulemaking under this very open-ended statute makes sense, especially if the goal is broad policy change. Indeed, if the FTC were to use rulemaking authority for small issues, presumably some of the concerns I sketch out would not apply (though the legal question, of course, still would). 

As I explain in my essay, when agencies attempt to use rulemaking for significant policies—which, not by coincidence, disproportionately tend to be controversial policies—at least three unintended consequences may result: ossification, zigzagging policy, and blood-sport tactics.   

First, ossification. For decades, many administrative law scholars have lamented how ossified the rulemaking process has become. Notice-and-comment rulemaking may not look all that difficult, but the process has become challenging, at least for the most significant rules. (There is an empirical dispute about how ossified the process is, but part of that debate may be explained by the nature of the rules at issue; agencies perhaps can promulgate lower-profile rules without much trouble, while struggling with the more significant ones.) Agencies looking to make important policy changes through notice-and-comment rulemaking, for example, often receive mountains of comments from the public. Indeed, agencies may receive millions of comments. Because agencies have to respond to material comments, rules that prompt that volume of commentary aren't so easy to do. Likewise, the most consequential rules almost invariably prompt litigation, and as part of so-called "hard look" review, the agency will have to persuade a court that it has considered the important aspects of the problem. Preparing for that sort of review can require a great deal of upfront work. And although its domain does not extend to independent agencies, the Office of Information and Regulatory Affairs (OIRA) also requires agencies to do a great deal of analysis before promulgating the most significant rules.

If the FTC begins promulgating significant rules, it should be prepared for an ossified process that requires reallocating resources within the agency and engaging in more “admin law” litigation. Because rulemaking can be labor intensive, moreover, the FTC may not be able to pursue as many policies as some no doubt wish. Furthermore, the U.S. Justice Department has concluded that the White House has the authority to subject independent agencies to the OIRA process. If the FTC begins promulgating significant rules—especially regulations of the sort that may be improved by inter-agency coordination and external evaluation, two hallmarks of the OIRA process—the White House may decide that the time has come to put the FTC within OIRA’s tent. Such developments would change how the FTC functions. 

Second, zigzagging policy. It turns out that when agencies use regulatory power for significant policies, they sometimes find themselves using that same power to undo those policies when control of the White House shifts. Elsewhere, I've written about the Federal Communications Commission and so-called "net neutrality" rules. For decades, the FCC has flip-flopped on this significant issue; when Republicans control the White House, the FCC does one thing, but when Democrats take over, it does something else. Flip-flopping, however, is not limited to the FCC. As Pierce has put it, "[t]he same analysis applies in each of the hundreds of contexts in which Democrats and Republicans have opposing and uncompromising preferences with respect to policy issues. …" Zigzagging policy is bad for business because it makes it harder to invest, and for that same reason, it is bad for consumers, who lose the benefits of the foregone investment. It is also bad for regulators, who must spend time and effort to undo the agency's own prior actions. To be sure, agencies don't always flip-flop; indeed, the ossification of the rulemaking process may limit it, at the margins. But especially for the most consequential policies, zigzagging sometimes happens.

Accordingly, if the FTC begins promulgating significant policies through rulemaking, it should expect some zigzagging policy when the White House changes hands. As my essay explains:

In this current age of polarization, regulatory efforts to address divisive issues may not work well because what an agency does under one administration can be undone in the next administration. Thus, the end result may be policy that exists under some administrations but not others. Indeed, the FTC’s recent slew of party-line votes suggests that if the FTC begins using rulemaking for controversial policies, the FTC will look to undo those rules when the political balance flips.  Of course, not all FTC rules will vacillate—there are not enough resources to undo everything, especially as agencies confront new issues. But if the FTC becomes a serious rulemaker, some zigzagging should occur.

Finally, consider “administrative law as blood sport”—an evocative phrase that comes from Thomas McGarity. The idea is that agencies engaged in rulemaking are increasingly subject to political opposition across several dimensions, including “strategies aimed at indirectly disrupting the implementation of regulatory programs by blocking Senate confirmation of new agency leaders, cutting off promised funding for agencies, introducing rifle-shot riders aimed at undoing ongoing agency action, and subjecting agency heads to contentious oversight hearings.” In other words, an opponent of a proposed regulation may try to stop it through the rulemaking process (for example, by filing comment and then going to court), but may also try to stop it outside of the rulemaking process through political means. 

As my essay explains, if the FTC begins using rulemaking for controversial policies, blood-sport tactics presumably will follow. Similarly, the FTC should also expect litigation of a more fundamental character. The U.S. Supreme Court is increasingly wary of independent agencies; to the extent that the FTC begins making significant policy choices without presidential control, the likelihood that the Supreme Court will say “enough” increases. 

In short, if the FTC engages in significant rulemaking, its character will change. No doubt, some proponents of FTC rulemaking would accept that cost, but in assessing FTC rulemaking, it is important to remember unintended consequences, too.

[Wrapping up the first week of our FTC UMC Rulemaking symposium is a post from Truth on the Market’s own Justin (Gus) Hurwitz, director of law & economics programs at the International Center for Law & Economics and an assistant professor of law and co-director of the Space, Cyber, and Telecom Law program at the University of Nebraska College of Law. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

Introduction

In 2014, I published a pair of articles—"Administrative Antitrust" and "Chevron and the Limits of Administrative Antitrust"—that argued that the U.S. Supreme Court's recent antitrust and administrative-law jurisprudence was pushing antitrust law out of the judicial domain and into the domain of regulatory agencies. The first article focused on the Court's then-recent antitrust cases, arguing that the Court, which had long since moved away from federal common law, had shown a clear preference that common-law-like antitrust law be handled on a statutory or regulatory basis where possible. The second article evaluated and rejected the Federal Trade Commission's (FTC) long-held belief that its interpretations of the FTC Act do not receive Chevron deference.

Together, these articles made the case (as a descriptive, not normative, matter) that we were moving towards a period of what I called “administrative antitrust.” From today’s perspective, it surely seems that I was right, with the FTC set to embrace Section 5’s broad ambiguities to redefine modern understandings of antitrust law. Indeed, those articles have been cited by both former FTC Commissioner Rohit Chopra and current FTC Chair Lina Khan in speeches and other materials that have led up to our current moment.

This essay revisits those articles, in light of the past decade of Supreme Court precedent. It comes as no surprise to anyone familiar with recent cases that the Court is increasingly viewing the broad deference characteristic of administrative law with what, charitably, can be called skepticism. While I stand by the analysis offered in my previous articles—and, indeed, believe that the Court maintains a preference for administratively defined antitrust law over judicially defined antitrust law—I find it less likely today that the Court would defer to any agency interpretation of antitrust law that represents more than an incremental move away from extant law.

I will approach this discussion in four parts. First, I will offer some reflections on the setting of my prior articles. The piece on Chevron and the FTC, in particular, argued that the FTC had misunderstood how Chevron would apply to its interpretations of the FTC Act because it was beholden to out-of-date understandings of administrative law. I will make the point below that the same thing can be said today. I will then briefly recap the essential elements of the arguments made in both of those prior articles, to the extent needed to evaluate how administrative approaches to antitrust will be viewed by the Court today. The third part of the discussion will then summarize some key elements of administrative law that have changed over roughly the past decade. And, finally, I will bring these elements together to look at the viability of administrative antitrust today, arguing that the FTC’s broad embrace of power anticipated by many is likely to meet an ill fate at the hands of the courts on both antitrust and administrative law grounds.

In reviewing these past articles in light of the past decade's case law, this essay reaches an important conclusion: for the same reasons that the Court seemed likely in 2013 to embrace an administrative approach to antitrust, today it is likely to view such approaches with great skepticism unless they are undertaken on an incrementalist basis. Others are currently developing arguments that sound primarily in current administrative law: the major questions doctrine and the potential turn away from National Petroleum Refiners. My conclusion is based primarily on the Court's view that administrative antitrust would prove less indeterminate than judicially defined antitrust law. If the FTC shows that not to be the case, the Court seems likely to close the door on administrative antitrust for reasons sounding in both administrative and antitrust law.

Setting the Stage, Circa 2013

It is useful to start by visiting the stage as it was set when I wrote "Administrative Antitrust" and "Limits of Administrative Antitrust" in 2013. I wrote these articles while doing a fellowship at the University of Pennsylvania Law School, prior to which I had spent several years working at the U.S. Justice Department Antitrust Division's Telecommunications Section. This was a great time to be involved on the telecom side of antitrust, especially for someone with an interest in administrative law as well. Recent important antitrust cases included Pacific Bell v. linkLine and Verizon v. Trinko, and recent important administrative-law cases included Brand-X, Fox v. FCC, and City of Arlington v. FCC. Telecommunications law was defining the center of both fields.

I started working on "Administrative Antitrust" first, prompted by what I admit today was an overreading of the Court's 2011 American Electric Power Co. Inc. v. Connecticut opinion, in which the Court held that a congressional decision to regulate broadly displaces judicial common law. In Trinko and Credit Suisse, the Court had held something similar: roughly, that regulation displaces antitrust law. Indeed, in linkLine, the Court had stated that regulation is preferable to antitrust, known for its vicissitudes and adherence to the extra-judicial development of economic theory. "Administrative Antitrust" tied these strands together, arguing that antitrust law, long discussed as one of the few remaining bastions of federal common law, would—and in the Court's eyes, should—be displaced by regulation.

Antitrust and administrative law also came together, and remain together, in the debates over net neutrality. It was this nexus that gave rise to "Limits of Administrative Antitrust," which I started in 2013 while working on "Administrative Antitrust" and waiting for the U.S. Court of Appeals for the D.C. Circuit's opinion in Verizon v. FCC.

Some background on the net-neutrality debate is useful. In 2007, the Federal Communications Commission (FCC) attempted to put in place net-neutrality rules by adopting a policy statement on the subject. This approach was rejected by the D.C. Circuit in 2010, on grounds that a mere policy statement lacked the force of law. The FCC then adopted similar rules through a rulemaking process, finding authority to issue those rules in its interpretation of the ambiguous language of Section 706 of the Telecommunications Act. In January 2014, the D.C. Circuit again rejected the specific rules adopted by the FCC, on grounds that those rules violated the Communications Act’s prohibition on treating internet service providers (ISPs) as common carriers. But critically, the court affirmed the FCC’s interpretation of Section 706 as allowing it, in principle, to adopt rules regulating ISPs.

Unsurprisingly, whether the language of Section 706 was ambiguous and subject to the FCC's interpretation was a central debate within the regulatory community during 2012 and 2013. The consensus, at least among my peers, was strongly that it was neither: the FCC and industry had long read Section 706 as not giving the FCC authority to regulate ISP conduct and, to the extent that it did confer legislative authority, that authority was expressly deregulatory. I was the lone voice arguing that the D.C. Circuit was likely to find that Chevron applied to Section 706 and that the FCC's reading was permissible on its own (that is, not taking into account such restrictions as the prohibition on treating non-common carriers as common carriers).

I actually had thought this conclusion quite obvious. The Court's Chevron case law over the preceding decade had followed a trend of increasing deference: from Mead through Brand-X, Fox v. FCC, and City of Arlington, the safe money was consistently placed on deference to the agency.

This was the setting in which I started thinking about what became "Chevron and the Limits of Administrative Antitrust." If my argument in "Administrative Antitrust" was right—that the courts would push development of antitrust law from the courts to regulatory agencies—this would most clearly happen through the FTC's Section 5 authority over unfair methods of competition (UMC). But there was longstanding debate about the limits of the FTC's UMC authority. These debates included whether it was necessarily coterminous with the Sherman Act (so limited by the judicially defined federal common law of antitrust).

And there was discussion about whether the FTC would receive Chevron deference to its interpretations of its UMC authority. As with the question of the FCC receiving deference to its interpretation of Section 706, there was widespread understanding that the FTC would not receive Chevron deference to its interpretations of its Section 5 UMC authority. “Chevron and the Limits of Administrative Antitrust” explored that issue, ultimately concluding that the FTC likely would indeed be given the benefit of Chevron deference, tracing the commission’s belief to the contrary back to longstanding institutional memory of pre-Chevron judicial losses.

The Administrative Antitrust Argument

The discussion above is more than mere historical navel-gazing. The context and setting in which those prior articles were written is important to understanding both their arguments and the continual currents that propel us across antitrust's sea of doubt. But we should also look at the specific arguments from each paper in some detail.

Administrative Antitrust

The opening lines of this paper capture the curious judicial status of antitrust law:

Antitrust is a peculiar area of law, one that has long been treated as exceptional by the courts. Antitrust cases are uniquely long, complicated, and expensive; individual cases turn on case-specific facts, giving them limited precedential value; and what precedent there is changes on a sea of economic—rather than legal—theory. The principal antitrust statutes are minimalist and have left the courts to develop their meaning. As Professor Thomas Arthur has noted, “in ‘the anti-trust field the courts have been accorded, by common consent, an authority they have in no other branch of enacted law.’” …

This Article argues that the Supreme Court is moving away from this exceptionalist treatment of antitrust law and is working to bring antitrust within a normalized administrative law jurisprudence.

Much of this argument is based in the arguments framed above: Trinko and Credit Suisse prioritize regulation over the federal common law of antitrust, and American Electric Power emphasizes the general displacement of common law by regulation. The article adds, as well, the Court's hostility, at the time, toward domain-specific "exceptionalism." Its opinion in Mayo had rejected the longstanding view that tax law was "exceptional" in some way that excluded it from the Administrative Procedure Act (APA) and other standard administrative law doctrine. Thus, the Court's longstanding treatment of antitrust as exceptional must also fall.

Those arguments can all be characterized as pulling antitrust law toward an administrative approach. But there was a push as well. In his majority opinion in linkLine, Chief Justice John Roberts expressed substantial concern about the difficulties that antitrust law poses for courts and litigants alike. His opinion for the majority notes that "it is difficult enough for courts to identify and remedy an alleged anticompetitive practice" and laments "[h]ow is a judge or jury to determine a 'fair price?'" And Justice Stephen Breyer wrote in concurrence that "[w]hen a regulatory structure exists [as it does in this case] to deter and remedy anticompetitive harm, the costs of antitrust enforcement are likely to be greater than the benefits."

In other words, the argument in “Administrative Antitrust” goes, the Court is motivated both to bring antitrust law into a normalized administrative-law framework and also to remove responsibility for the messiness inherent in antitrust law from the courts’ dockets. This latter point will be of particular importance as we turn to how the Court is likely to think about the FTC’s potential use of its UMC authority to develop new antitrust rules.

Chevron and the Limits of Administrative Antitrust

The core argument in "Limits of Administrative Antitrust" is more doctrinal and institutionally focused. In its simplest statement, I merely applied Chevron as it was understood circa 2013 to the FTC's UMC authority. There is little dispute that "unfair methods of competition" is inherently ambiguous—indeed, the term was used, and the power granted to the FTC, expressly to give the agency flexibility and to avoid the limits the Court was placing on antitrust law in the early 20th century.

There are various arguments against application of Chevron to Section 5; the article goes through and rejects them all. Section 5 has long been recognized as including, but being broader than, the Sherman Act. National Petroleum Refiners long ago held that the FTC has substantive-rulemaking authority—a conclusion made even more forceful by the Supreme Court's more recent opinion in Iowa Utilities Board. Other arguments are (or were) unavailing.

The real puzzle the paper unpacks is why the FTC ever believed it wouldn’t receive the benefit of Chevron deference. The article traces it back to a series of cases the FTC lost in the 1980s, contemporaneous with the development of the Chevron doctrine. The commission had big losses in cases like E.I. Du Pont and Ethyl Corp. Perhaps most important, in its 1986 Indiana Federation of Dentists opinion (two years after Chevron was decided), the Court seemed to adopt a de novo standard for review of Section 5 cases. But, “Limits of Administrative Antitrust” argues, this is a misreading and overreading of Indiana Federation of Dentists (a close reading of which actually suggests that it is entirely in line with Chevron), and it misunderstands the case’s relationship with Chevron (the importance of which did not start to come into focus for another several years).

The curious conclusion of the argument is, in effect, that a generation of FTC lawyers, “shell-shocked by its treatment in the courts,” internalized the lesson that they would not receive the benefits of Chevron deference and that Section 5 was subject to de novo review, but also that this would start to change as a new generation of lawyers, trained in the modern Chevron era, came to practice within the halls of the FTC. Today, that prediction appears to have borne out.

Things Change

The conclusion from “Limits of Administrative Antitrust” that FTC lawyers failed to recognize that the agency would receive Chevron deference because they were half a generation behind the development of administrative-law doctrine is an important one. As much as antitrust law may be adrift in a sea of change, administrative law is even more so. From today’s perspective, it feels as though I wrote those articles at Chevron’s zenith—and watching the FTC consider aggressive use of its UMC authority feels like watching a commission that, once again, is half a generation behind the development of administrative law.

The tide against Chevron's expansive deference was already beginning to rise at the time I was writing. City of Arlington, though affirming application of Chevron to agencies' interpretations of their own jurisdictional statutes in a 6-3 opinion, generated substantial controversy at the time. And a short while later, the Court decided a case that many in the telecom space view as a sea change: Utility Air Regulatory Group (UARG). In UARG, Justice Antonin Scalia, writing for a 9-0 majority, struck down an Environmental Protection Agency (EPA) regulation related to greenhouse gases. In doing so, he invoked language evocative of what today is being debated as the major questions doctrine—that the Court "expect[s] Congress to speak clearly if it wishes to assign to an agency decisions of vast economic and political significance." Two years after that, the Court decided Encino Motorcars, in which the Court acted upon a limit expressed in Fox v. FCC that agencies face heightened procedural requirements when changing regulations that "may have engendered serious reliance interests."

And just like that, the dams holding back concern over the scope of Chevron have burst. Justices Clarence Thomas and Neil Gorsuch have openly expressed their views that Chevron needs to be curtailed or eliminated. Justice Brett Kavanaugh has written extensively in favor of the major questions doctrine. Chief Justice Roberts invoked the major questions doctrine in King v. Burwell. Each term, litigants bring more aggressive cases to probe and tighten the limits of the Chevron doctrine. As I write this, we await the Court's opinion in American Hospital Association v. Becerra, which, it is widely believed, could dramatically curtail the scope of the Chevron doctrine.

Administrative Antitrust, Redux

The prospects for administrative antitrust look very different today than they did a decade ago. While the basic argument continues to hold—the Court will likely encourage and welcome a transition of antitrust law to a normalized administrative jurisprudence—the Court seems likely to afford administrative agencies (viz., the FTC) much less flexibility in how they administer antitrust law than it would have a decade ago. This operates through both the administrative-law vector, with the Court reconsidering how it views delegation of congressional authority to agencies (such as through the major questions doctrine and limits on agency rulemaking authority), and through the Court's thinking about how agencies develop and enforce antitrust law.

Major Questions and Major Rules

Two hotly debated areas where we see this trend: the major questions doctrine and the ongoing vitality of National Petroleum Refiners. These are only briefly recapitulated here. The major questions doctrine is an evolving doctrine, seemingly of great interest to many current justices on the Court, that requires Congress to speak clearly when delegating authority to agencies to address major questions—that is, questions of vast economic and political significance. So, while the Court may allow an agency to develop rules governing mergers when tasked by Congress to prohibit acquisitions likely to substantially lessen competition, it is unlikely to allow that agency to categorically prohibit mergers based upon a general congressional command to prevent unfair methods of competition. The first of those is a narrow rule based upon a specific grant of authority; the other is a very broad rule based upon a very general grant of authority.

The major questions doctrine has been a major topic of discussion in administrative-law circles for the past several years. Interest in the National Petroleum Refiners question has been more muted, mostly confined to those focused on the FTC and FCC. National Petroleum Refiners is a 1973 D.C. Circuit case that found that the FTC Act’s grant of power to make rules to implement the act confers broad rulemaking power relating to the act’s substantive provisions. In 1999, the Supreme Court reached a similar conclusion in Iowa Utilities Board, finding that a provision in Section 202 of the Communications Act allowing the FCC to create rules seemingly for the implementation of that section conferred substantive rulemaking power running throughout the Communications Act.

Both National Petroleum Refiners and Iowa Utilities Board reflect previous generations' understanding of administrative law—and, in particular, the relationship between the courts and Congress in empowering and policing agency conduct. That understanding is best captured in the evolution of the non-delegation doctrine, and the courts' broad acceptance of broad delegations of congressional power to agencies in the latter half of the 20th century. National Petroleum Refiners and Iowa Utilities Board are not non-delegation cases—but, like the major questions doctrine, they go to the issue of how specific Congress must be when delegating broad authority to an agency.

In theory, there is little difference between an agency that can develop legal norms through case-by-case adjudications backstopped by substantive and procedural judicial review, on the one hand, and an agency with authority to develop substantive rules backstopped by procedural judicial review and by Congress as a check on substantive errors, on the other. In practice, there is a world of difference between these approaches. As with the Court's concerns animating the major questions doctrine, were the Court to review National Petroleum Refiners Association or Iowa Utilities Board today, it seems at least possible, if not likely, that most of the justices would not so readily find agencies to have such broad rulemaking authority without clear congressional intent supporting such a finding.

Both of these ideas—the major questions doctrine and limits on broad rules made using thin grants of rulemaking authority—constrain the potential scope of rules the FTC might make using its UMC authority.

Limits on the Antitrust Side of Administrative Antitrust

The potential limits on FTC UMC rulemaking discussed above sound in administrative-law concerns. But administrative antitrust may also find a tepid judicial reception on antitrust grounds.

Many of the arguments advanced in "Administrative Antitrust" and the Court's opinions on the antitrust-regulation interface echo traditional administrative-law ideas. For instance, much of the Court's preference that agencies granted authority to engage in antitrust or antitrust-adjacent regulation take precedence over the application of judicially defined antitrust law tracks the same separation-of-powers and expertise concerns that are central to the Chevron doctrine itself.

But the antitrust-focused cases—linkLine, Trinko, Credit Suisse—also express concerns specific to antitrust law. Chief Justice Roberts notes that the justices “have repeatedly emphasized the importance of clear rules in antitrust law,” and the need for antitrust rules to “be clear enough for lawyers to explain them to clients.” And the Court and antitrust scholars have long noted the curiosity that antitrust law has evolved over time following developments in economic theory. This extra-judicial development of the law runs contrary to basic principles of due process and the stability of the law.

The Court’s cases in this area express hope that an administrative approach to antitrust could give a clarity and stability to the law that is currently lacking. These are rules of vast economic significance: they are “the Magna Carta of free enterprise”; our economy organizes itself around them; substantial changes to these rules could have a destabilizing effect that runs far deeper than Congress is likely to have anticipated when tasking an agency with enforcing antitrust law. Empowering agencies to develop these rules could, the Court’s opinions suggest, allow for a more thoughtful, expert, and deliberative approach to incorporating incremental developments in economic knowledge into the law.

If an agency’s administrative implementation of antitrust law does not follow this path—and especially if the agency takes a disruptive approach to antitrust law that deviates substantially from established antitrust norms—this defining rationale for an administrative approach to antitrust would not hold.

The courts could respond to such overreach in several ways. They could invoke the major questions or similar doctrines, as above. They could raise due-process concerns, tracking Fox v. FCC and Encino Motorcars, to argue that any change to antitrust law must not be unduly disruptive to engendered reliance interests. They could argue that the FTC’s UMC authority, while broader than the Sherman Act, must be compatible with the Sherman Act. That is, while the FTC has authority for the larger circle in the antitrust Venn diagram, the courts continue to define the inner core of conduct regulated by the Sherman Act.

A final aspect of the Court's likely approach to administrative antitrust follows from the Roberts Court's decision-theoretic approach to antitrust law. First articulated in Judge Frank Easterbrook's "The Limits of Antitrust," the decision-theoretic approach to antitrust law focuses on the error costs of incorrect judicial decisions and the likelihood that those decisions will be corrected. The Roberts Court has strongly adhered to this framework in its antitrust decisions. This can be seen, for instance, in Justice Breyer's statement that: "When a regulatory structure exists to deter and remedy anticompetitive harm, the costs of antitrust enforcement are likely to be greater than the benefits."

The error-costs framework described by Judge Easterbrook focuses on the relative costs of errors, and of correcting those errors, between judicial and market mechanisms. In the administrative-antitrust setting, the relevant comparison is between judicial and administrative error costs. The question on this front is whether an administrative agency, should it get things wrong, is likely to correct course. Here there are two models, both of concern. The first is that in which law is policy or political preference. Here, the FCC's approach to net neutrality and the National Labor Relations Board's (NLRB) approach to labor law loom large; there have been dramatic swings between binary policy preferences held by different political parties as control of agencies shifts between administrations. The second model is one in which Congress responds to agency rules by refining, rejecting, or replacing them through statute. Here, again, net neutrality and the FCC loom large, with nearly two decades of calls for Congress to clarify the FCC's authority and statutory mandate, while the agency swings between policies with changing administrations.
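
One way to make this comparison concrete is to stylize the framework as an expected-cost minimization. The following formalization is my own gloss on the error-costs literature, not language drawn from "The Limits of Antitrust":

    E[\text{cost}] = p_{FP} \, C_{FP} + p_{FN} \, C_{FN} + C_{\text{admin}}

Here p_FP and C_FP are the probability and social cost of false positives (condemning procompetitive conduct), p_FN and C_FN are the analogues for false negatives (permitting anticompetitive conduct), and C_admin captures the direct costs of administering the rule. The institutional question is which regime (courts or an agency) minimizes this sum over time. The two models above matter because they bear on the probability terms: a regime that reliably corrects its own mistakes shrinks the expected cost of any given error, while one that zigzags with political control does not.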

Both of these models reflect poorly on the prospects for administrative antitrust and suggest a strong likelihood that the Court would reject any ambitious use of administrative authority to remake antitrust law. The stability of these rules is simply too important to leave to change with changing political wills. And, indeed, concern that Congress no longer does its job of providing agencies with clear direction—that Congress has abdicated its job of making important policy decisions and let them fall instead to agency heads—is one of the animating concerns behind the major questions doctrine.

Conclusion

When I wrote in 2013, it seemed clear that the Court was pushing antitrust law in an administrative direction, as well as that the FTC would likely receive broad Chevron deference in its interpretations of its UMC authority to shape and implement antitrust law. Roughly a decade later, the sands have shifted and continue to shift. Administrative law is in the midst of a retrenchment, marked by skepticism of broad deference and of agency claims of authority.

Many of the underlying rationales behind the ideas of administrative antitrust remain sound. Indeed, I expect the FTC will play an increasingly large role in defining the contours of antitrust law and that the Court and lower courts will welcome this role. But that role will be limited. Administrative antitrust is a preferred vehicle for administering antitrust law, not for changing it. Should the FTC use its power aggressively, in ways that disrupt longstanding antitrust principles or seem more grounded in policy better created by Congress, it is likely to find itself on the losing side of judicial opinions.

[This post is the first in our FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

There is widespread interest in the potential tools that the Biden administration's Federal Trade Commission (FTC) may use to address a range of competition-related and competition-adjacent concerns. A focal point for this interest is the potential that the FTC may use its broad authority to regulate unfair methods of competition (UMC) under Section 5 of the FTC Act to make rules that address a wide range of conduct. This "potential" is expected to become a "likelihood" with the confirmation of Alvaro Bedoya as a third Democratic commissioner, which is expected any day.

This post marks the start of a Truth on the Market symposium that brings together academics, practitioners, and other commentators to discuss issues relating to potential UMC-related rulemaking. Contributions to this symposium will cover a range of topics, including:

  • Constitutional and administrative-law limits on UMC rulemaking: does such rulemaking potentially present “major question” or delegation issues, or other issues under the Administrative Procedure Act (APA)? If so, what is the scope of permissible rulemaking?
  • Substantive issues in UMC rulemaking: costs and benefits to be considered in developing rules, prudential concerns, and similar concerns.
  • Using UMC to address competition-adjacent issues: consideration of how or whether the FTC can use its UMC authority to address firm conduct that is governed by other statutory or regulatory regimes. For instance, firms using copyright law and the Digital Millennium Copyright Act (DMCA) to limit competitors’ ability to alter or repair products, or labor or entry issues that might be governed by licensure or similar laws.

Timing and Structure of the Symposium

Starting tomorrow, one or two contributions to this symposium will be posted each morning. During the first two weeks of the symposium, we will generally try to group posts on similar topics together. When multiple contributions are posted on the same day, they will generally be implicitly or explicitly in dialogue with each other. The first week’s contributions will generally focus on constitutional and administrative law issues relating to UMC rulemaking, while the second week’s contributions will focus on more specific substantive topics. 

Readers are encouraged to engage with these posts through comments. In addition, academics, practitioners, and other antitrust and regulatory commentators are invited to submit additional contributions for inclusion in this symposium. Such contributions may include responses to posts published by others or newly developed ideas. Interested authors should submit pieces for consideration to Gus Hurwitz and Keith Fierro Benson.

This symposium will run through at least Friday, May 6. We do not, however, anticipate ending or closing it at that time. To the contrary, it is very likely that topics relating to FTC UMC rulemaking will continue to be timely and of interest to our community—we anticipate keeping the symposium running for the foreseeable future, and welcome submissions on an ongoing basis. Readers interested in these topics are encouraged to check in regularly for new posts, including by following the symposium page, the FTC UMC Rulemaking tag, or by subscribing to Truth on the Market for notifications of new posts.

For decades, consumer-welfare enhancement appeared to be a key enforcement goal of competition policy (antitrust, in the U.S. usage) in most jurisdictions:

  • The U.S. Supreme Court famously proclaimed American antitrust law to be a “consumer welfare prescription” in Reiter v. Sonotone Corp. (1979).
  • A study by the current adviser to the European Competition Commission's chief economist found that there are "many statements indicating that, seen from the European Commission, modern EU competition policy to a large extent is about protecting consumer welfare."
  • A comprehensive international survey presented at the 2011 Annual International Competition Network Conference found that a majority of competition authorities state that "their national [competition] legislation refers either directly or indirectly to consumer welfare," and that most competition authorities "base their enforcement efforts on the premise that they enlarge consumer welfare."

Recently, however, the notion that a consumer welfare standard (CWS) should guide antitrust enforcement has come under attack (see here). In the United States, this movement has been led by populist “neo-Brandeisians” who have “call[ed] instead for enforcement that takes into account firm size, fairness, labor rights, and the protection of smaller enterprises.” (Interestingly, there appear to be more direct and strident published attacks on the CWS from American critics than from European commentators, perhaps reflecting an unspoken European assumption that “ordoliberal” strong government oversight of markets advances the welfare of consumers and society in general.) The neo-Brandeisian critique is badly flawed and should be rejected.

Assuming that the focus on consumer welfare in U.S. antitrust enforcement survives this latest populist challenge, what considerations should inform the design and application of a CWS? Before considering this question, one must confront the context in which it arises—the claim that the U.S. economy has become far less competitive in recent decades and that antitrust enforcement has been ineffective at addressing this problem. After dispatching this flawed claim, I advance four principles aimed at properly incorporating consumer-welfare considerations into antitrust-enforcement analysis.

Does the U.S. Suffer from Poor Antitrust Enforcement and Declining Competition?

Antitrust interventionists assert that lax U.S. antitrust enforcement has coincided with a serious decline in competition—a claim deployed to argue that, even if one assumes that promoting consumer welfare remains an overarching goal, U.S. antitrust policy nonetheless requires a course correction. After all, basic price theory indicates that a reduction in market competition raises deadweight loss and reduces consumers’ relative share of total surplus. As such, it might seem to follow that “ramping up antitrust” would lead to more vigorously competitive markets, featuring less deadweight loss and relatively more consumer surplus.
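
To make the price-theory point concrete, consider a minimal numeric sketch. It assumes a linear inverse demand curve P(Q) = a - b*Q and a constant marginal cost c; all parameter values are purely illustrative and not drawn from any actual market:

    # Stylized welfare comparison: perfect competition vs. monopoly,
    # with linear inverse demand P(Q) = a - b*Q and constant marginal cost c.
    # All numbers are illustrative only.
    a, b, c = 10.0, 1.0, 2.0

    # Competition: price equals marginal cost.
    q_comp = (a - c) / b                     # quantity = 8.0
    cs_comp = 0.5 * (a - c) * q_comp         # consumer surplus = 32.0
    ps_comp = 0.0                            # no producer surplus when P = MC

    # Monopoly: marginal revenue (a - 2*b*Q) equals marginal cost.
    q_mono = (a - c) / (2 * b)               # quantity = 4.0
    p_mono = a - b * q_mono                  # price = 6.0
    cs_mono = 0.5 * (a - p_mono) * q_mono    # consumer surplus = 8.0
    ps_mono = (p_mono - c) * q_mono          # producer surplus = 16.0
    dwl = 0.5 * (p_mono - c) * (q_comp - q_mono)  # deadweight loss = 8.0

    print(cs_comp / (cs_comp + ps_comp))     # consumers' share under competition: 1.00
    print(cs_mono / (cs_mono + ps_mono))     # consumers' share under monopoly: ~0.33

The sketch bears out the claim in the text: the move from competition to monopoly creates deadweight loss (8 units of surplus vanish outright) and shrinks consumers' relative share of the surplus that remains (from 100 percent to roughly one-third).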

This argument, of course, ignores error-cost, rent-seeking, and public-choice issues that raise serious questions about the welfare effects of more aggressive "invigorated" enforcement (see here, for example). But more fundamentally, the argument is based on two incorrect premises:

  1. That competition has declined; and
  2. That U.S. trustbusters have applied the CWS in a narrow manner ineffective to address competitive problems.

Those premises (which also underlie President Joe Biden’s July 2021 Executive Order on Promoting Competition in the American Economy) do not stand up to scrutiny.

In a recent article in the Stigler Center journal Promarket, Yale University economics professor Fiona Scott-Morton and Yale Law student Leah Samuel accepted those premises in complaining about poor antitrust enforcement and substandard competition (hyperlinks omitted and emphasis in the original):

In recent years, the [CWS] term itself has become the target of vocal criticism in light of mounting evidence that recent enforcement—and what many call the “consumer welfare standard era” of antitrust enforcement—has been a failure. …

This strategy of non-enforcement has harmed markets and consumers. Today we see the evidence of this under-enforcement in a range of macroeconomic measures, studies of markups, as well as in merger post-mortems and studies of anticompetitive behavior that agencies have not pursued. Non-economist observers—journalists, advocates, and lawyers—who have noticed the lack of enforcement and the pernicious results have learned to blame "economics" and the CWS. They are correct that using CWS, as defined and warped by Chicago-era jurists and economists, has been a failure. That kind of enforcement—namely, insufficient enforcement—does not protect competition. But we argue that the "economics" at fault are the corporate-sponsored Chicago School assumptions, which are at best outdated, generally unjustified, and usually incorrect.

While the Chicago School caused the “consumer welfare standard” to become associated with an anti-enforcement philosophy in the legal community, it has never changed its meaning among PhD-trained economists.

To an economist, consumer welfare is a well-defined concept. Price, quality, and innovation are all part of the demand curve and all form the basis for the standard academic definition of consumer welfare. CW is the area under the demand curve and above the quality-adjusted price paid. … Quality-adjusted price represents all the value consumers get from the product less the price they paid, and therefore encapsulates the role of quality of any kind, innovation, and price on the welfare of the consumer.
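
In standard notation, the quoted definition corresponds to the following (my restatement for clarity, not language from the Promarket article):

    CS = \int_0^{Q^*} \left[ P(q) - \tilde{p} \right] dq

where P(q) is the inverse demand curve, Q^* is the quantity purchased, and \tilde{p} is the quality-adjusted price paid.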

In my published response to Scott-Morton and Samuel, I summarized recent economic literature that contradicts the “competition is declining” claim. I also demonstrated that antitrust enforcement has been robust and successful, refuting the authors’ claim to the contrary (cross links to economic literature omitted):

There are only two problems with the [authors’] argument. First, it is not clear at all that competition has declined during the reign of this supposedly misused [CWS] concept. Second, the consumer welfare standard has not been misapplied at all. Indeed, as antitrust scholars and enforcement officials have demonstrated … modern antitrust enforcement has not adopted a narrow “Chicago School” view of the world. To the contrary, it has incorporated the more sophisticated analysis the authors advocate, and enforcement initiatives have been vigorous and largely successful. Accordingly, the authors’ call for an adjustment in antitrust enforcement is a solution in search of a non-existent problem.

In short, competitive conditions in U.S. markets are robust and have not been declining. Moreover, U.S. antitrust enforcement has been sophisticated and aggressive, fully attuned to considerations of quality and innovation.

A Suggested Framework for Consumer Welfare Analysis

Although recent claims of “weak” U.S. antitrust enforcement are baseless, they do, nevertheless, raise “front and center” the nature of the CWS. The CWS is a worthwhile concept, but it eludes a precise definition. That is as it should be. In our common law system, fact-specific analyses of particular competitive practices are key to determining whether welfare is or is not being advanced in the case at hand. There is no simple talismanic CWS formula that is readily applicable to diverse cases.

While Scott-Morton argues that the area under the demand curve (consumer surplus) is essentially coincident with the CWS, other leading commentators take account of the interests of producers as well. For example, the leading antitrust treatise writer, Herbert Hovenkamp, suggests thinking about consumer welfare in terms of “maxim[izing] output that is consistent with sustainable competition. Output includes quantity, quality, and improvements in innovation. As an aside, it is worth noting that high output favors suppliers, including labor, as well as consumers because job opportunities increase when output is higher.” (Hovenkamp, Federal Antitrust Policy 102 (6th ed. 2020).)

Federal Trade Commission (FTC) Commissioner Christine Wilson (like Ken Heyer and other scholars) advocates a “total welfare standard” (consumer plus producer surplus). She stresses that it would beneficially:

  1. Make efficiencies more broadly cognizable, capturing cost reductions not passed through in the short run;
  2. Better enable the agencies to consider multi-market effects (whether consumer welfare gains in one market swamp consumer welfare losses in another market); and
  3. Better capture dynamic efficiencies (such as firm-specific efficiencies that are emulated by other “copycat” firms in the market).
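Stated compactly (my notation, not Wilson’s), the move is simply a change in the maximand:

```latex
% Consumer welfare standard (narrow form): maximize CS alone.
% Total welfare standard: maximize the sum of consumer and producer surplus.
W = CS + PS
```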

Hovenkamp and Wilson point to the fact that efficiency-enhancing business conduct often has positive ramifications for both consumers and producers. As such, a CWS that focuses narrowly on short-term consumer surplus may prompt antitrust challenges to conduct that, properly understood, will prove beneficial to both consumers and producers over time.

With this in mind, I will now suggest four general “framework principles” to inform a CWS analysis that properly accounts for innovation and dynamic factors. These principles are tentative and merely suggestive, intended to prompt a further dialogue on CWS among interested commentators. (Also, many practical details will need to be filled in, based on further analysis.)

  1. Enforcers should consider all effects on consumer welfare in evaluating a transaction. Under the rule of reason, a reduction in surplus to particular defined consumers should not condemn a business practice (merger or non-merger) if other consumers are likely to enjoy accretions to surplus and if aggregate consumer surplus appears unlikely to decline, on net, due to the practice. Surplus need not be quantified—the likely direction of change in surplus is all that is required. In other words, “actual welfare balancing” is not required, consistent with the practical impossibility of quantifying net welfare effects in almost all cases (see, e.g., Hovenkamp, here). This principle is unaffected by market definition—all affected consumers should be assessed, whether they are “in” or “out” of a hypothesized market.
  2. Vertical intellectual-property-licensing contracts should not be subject to antitrust scrutiny unless there is substantial evidence that they are being used to facilitate horizontal collusion. This principle draws on the “New Madison Approach” associated with former Assistant Attorney General for Antitrust Makan Delrahim. It applies to a set of practices that further the interests of both consumers and producers. Vertical IP licensing (particularly patent licensing) “is highly important to the dynamic and efficient dissemination of new technologies throughout the economy, which, in turn, promotes innovation and increased welfare (consumer and producer surplus).” (See here, for example.) The 9th U.S. Circuit Court of Appeals’ refusal to condemn Qualcomm’s patent-licensing contracts (which had been challenged by the FTC) is consistent with this principle; it “evinces a refusal to find anticompetitive harm in licensing markets without hard empirical support.” (See here.)
  3. Enforcers should also carefully assess the ability of “non-standard” commercial contracts—horizontal and vertical—to overcome market failures, as described by transaction-cost economics (see here and here, for example). Non-standard contracts may be designed to deal with problems (for instance) of contractual incompleteness and opportunism that stymie efforts to advance new commercial opportunities. To the extent that such contracts create opportunities for transactions that expand or enhance market offerings, they generate new consumer surplus (new or “shifted out” demand curves) and enhance consumer welfare. Thus, they should enjoy a general (though rebuttable) presumption of legality.
  4. Most fundamentally, enforcers should apply cost-benefit analysis, rooted in error-cost considerations, to their enforcement initiatives in order to further consumer welfare. As I have previously written:

Assuming that one views modern antitrust enforcement as an exercise in consumer welfare maximization, what does that tell us about optimal antitrust enforcement policy design? In order to maximize welfare, enforcers must have an understanding of – and seek to maximize the difference between – the aggregate costs and benefits that are likely to flow from their policies. It therefore follows that cost-benefit analysis should be applied to antitrust enforcement design. Specifically, antitrust enforcers first should ensure that the rules they propagate create net welfare benefits. Next, they should (to the extent possible) seek to calibrate those rules so as to maximize net welfare. (Significantly, Federal Trade Commissioner Josh Wright also has highlighted the merits of utilizing cost-benefit analysis in the work of the FTC.) [Eight specific suggestions for implementing cost-beneficial antitrust evaluations are then put forth in this article.]

Conclusion

One must hope that efforts to eliminate consumer welfare as the focal point of U.S. antitrust will fail. But even if they do, market-oriented commentators should be alert to any efforts to “hijack” the CWS by interventionist market-skeptical scholars. A particular threat may involve efforts to define the CWS as merely involving short-term consumer surplus maximization in narrowly defined markets. Such efforts could, if successful, justify highly interventionist enforcement protocols deployed against a wide variety of efficient (though too often mischaracterized) business practices.

To counter interventionist antitrust proposals, it is important to demonstrate that claims of faltering competition and inadequate antitrust enforcement under current norms simply are inaccurate. Such an effort, though necessary, is not enough.

In order to win the day, it will be important for market mavens to explain that novel business practices aimed at promoting producer surplus tend to increase consumer surplus as well. That is because efficiency-enhancing stratagems (often embodied in restrictive IP-licensing agreements and non-standard contracts) that overcome transaction-cost difficulties frequently pave the way for innovation and the dissemination of new technologies throughout the economy. Those effects, in turn, expand existing market opportunities and create new ones, yielding huge additions to consumer surplus—accretions that swamp short-term static effects.

Enlightened enforcers should apply enforcement protocols that allow such benefits to be taken into account. They should also focus on the interests of all consumers affected by a practice, not just a narrow subset of targeted potentially “harmed” consumers. Finally, public officials should view their enforcement mission through a cost-benefit lens, which is designed to promote welfare. 

In recent years, there has been a rapid proliferation of proposals to closely regulate competition among large digital platforms. The European Union’s Digital Markets Act (DMA, which will become effective in 2023) imposes a variety of data-use, interoperability, and non-self-preferencing obligations on digital “gatekeeper” firms. A host of other regulatory schemes are being considered in Australia, France, Germany, and Japan, among other countries (for example, see here). The United Kingdom has established a Digital Markets Unit “to operationalise the future pro-competition regime for digital markets.” Recently introduced U.S. Senate and House bills—although touted as “antitrust reform” legislation—effectively amount to “regulation in disguise” of disfavored business activities by very large companies, including the major digital platforms (see here and here).

Sorely missing from these regulatory proposals is any sense of the fallibility of regulation. Indeed, their proponents seem implicitly to assume that government regulation of platforms will enhance welfare, ignoring real-life regulatory costs and regulatory failures (see here, for example). Without evidence, new regulatory initiatives are put forth as superior to long-established, consumer-welfare-based antitrust law enforcement.

The hope that new regulatory tools will somehow “solve” digital market competitive “problems” stems from the untested assumption that established consumer welfare-based antitrust enforcement is “not up to the task.” Untested assumptions, however, are an unsound guide to public policy decisions. Rather, in order to optimize welfare, all proposed government interventions in the economy, including regulation and antitrust, should be subject to decision-theoretic analysis that is designed to minimize the sum of error and decision costs (see here). What might such an analysis reveal?
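Before turning to the answer, it helps to state the decision-theoretic objective compactly (a restatement of the standard error-cost framework in my own notation, not a formula drawn from the cited work):

```latex
% Choose the regime r (antitrust, ex ante conduct rules, agency oversight)
% that minimizes expected error costs plus direct decision costs:
\min_{r} \; p_{FP}(r)\,C_{FP} \;+\; p_{FN}(r)\,C_{FN} \;+\; D(r)
```

Here \(p_{FP}(r)\) and \(p_{FN}(r)\) are the probabilities that regime \(r\) wrongly condemns benign conduct or wrongly clears harmful conduct, \(C_{FP}\) and \(C_{FN}\) are the social costs of those errors, and \(D(r)\) captures the administrative and compliance costs of operating the regime.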

Wonder no more. In a just-released Mercatus Center Working Paper, Professor Thom Lambert has conducted a decision-theoretic analysis that evaluates the relative merits of U.S. consumer welfare-based antitrust, ex ante regulation, and ongoing agency oversight in addressing the market power of large digital platforms. While explaining that antitrust and its alternatives have their respective costs and benefits, Lambert concludes that antitrust is the welfare-superior approach to dealing with platform competition issues. According to Lambert:

This paper provides a comparative institutional analysis of the leading approaches to addressing the market power of large digital platforms: (1) the traditional US antitrust approach; (2) imposition of ex ante conduct rules such as those in the EU’s Digital Markets Act and several bills recently advanced by the Judiciary Committee of the US House of Representatives; and (3) ongoing agency oversight, exemplified by the UK’s newly established “Digital Markets Unit.” After identifying the advantages and disadvantages of each approach, this paper examines how they might play out in the context of digital platforms. It first examines whether antitrust is too slow and indeterminate to tackle market power concerns arising from digital platforms. It next considers possible error costs resulting from the most prominent proposed conduct rules. It then shows how three features of the agency oversight model—its broad focus, political susceptibility, and perpetual control—render it particularly vulnerable to rent-seeking efforts and agency capture. The paper concludes that antitrust’s downsides (relative indeterminacy and slowness) are likely to be less significant than those of ex ante conduct rules (large error costs resulting from high informational requirements) and ongoing agency oversight (rent-seeking and agency capture).

Lambert’s analysis should be carefully consulted by American legislators and potential rule-makers (including at the Federal Trade Commission) before they institute digital platform regulation. One hopes that enlightened foreign competition officials will also take note of Professor Lambert’s well-reasoned study. 

A debate has broken out among the four sitting members of the Federal Trade Commission (FTC) in connection with the recently submitted FTC Report to Congress on Privacy and Security. Chair Lina Khan argues that the commission “must explore using its rulemaking tools to codify baseline protections,” while Commissioner Rebecca Kelly Slaughter has urged the FTC to initiate a broad-based rulemaking proceeding on data privacy and security. By contrast, Commissioners Noah Joshua Phillips and Christine Wilson counsel against a broad-based regulatory initiative on privacy.

Decisions to initiate a rulemaking should be viewed through a cost-benefit lens (see summaries of Thom Lambert’s masterful treatment of regulation, of which rulemaking is a subset, here and here). Unless there is a market failure, rulemaking is not called for. Even in the face of market failure, regulation should not be adopted unless it is more cost-beneficial than reliance on markets (including the ability of public and private litigation to address market-failure problems, such as data theft). For a variety of reasons, it is unlikely that FTC rulemaking directed at privacy and data security would pass a cost-benefit test.

Discussion

As I have previously explained (see here and here), FTC rulemaking pursuant to Section 6(g) of the FTC Act (which authorizes the FTC “to make rules and regulations for the purpose of carrying out the provisions of this subchapter”) is properly read as authorizing mere procedural, not substantive, rules. As such, efforts to enact substantive competition rules would not pass a cost-benefit test. Such rules could well be struck down as beyond the FTC’s authority on constitutional law grounds, and as “arbitrary and capricious” on administrative law grounds. What’s more, they would represent retrograde policy. Competition rules would generate higher error costs than adjudications; they could be deemed to undermine the rule of law, because the U.S. Justice Department (DOJ) could not apply such rules; and they would chill innovative, efficiency-seeking business arrangements.

Accordingly, the FTC likely would not pursue 6(g) rulemaking should it decide to address data security and privacy, a topic that best fits under the “consumer protection” category. Rather, the FTC most likely would initiate a “Magnuson-Moss” rulemaking (MMR) under Section 18 of the FTC Act, which authorizes the commission to prescribe “rules which define with specificity acts or practices which are unfair or deceptive acts or practices in or affecting commerce within the meaning of Section 5(a)(1) of the Act.” Among other things, Section 18 requires that the commission’s rulemaking proceedings provide an opportunity for informal hearings at which interested parties are accorded limited rights of cross-examination. Also, before commencing an MMR proceeding, the FTC must have reason to believe the practices addressed by the rulemaking are “prevalent.” 15 U.S.C. Sec. 57a(b)(3).

MMR proceedings, which are not governed by the Administrative Procedure Act (APA), do not present the same degree of legal problems as Section 6(g) rulemakings (see here). The question of legal authority to adopt a substantive rule is not raised; “rule of law” problems are far less serious (the DOJ is not a parallel enforcer of consumer-protection law); and APA issues of “arbitrariness” and “capriciousness” are not directly presented. Indeed, MMR proceedings include a variety of procedures aimed at promoting fairness (see here, for example). An MMR proceeding directed at data privacy predictably would be based on the claim that the failure to adhere to certain data-protection norms is an “unfair act or practice.”

Nevertheless, MMR rules would be subject to two substantial sources of legal risk.

The first of these arises out of federalism. Three states (California, Colorado, and Virginia) recently have enacted comprehensive data-privacy laws, and a large number of other state legislatures are considering data-privacy bills (see here). The proliferation of state data-privacy statutes raises the risk of inconsistent and duplicative regulatory norms, potentially chilling business innovations directed at data protection (a severe problem in the Internet Age, when business data-protection programs typically will have interstate effects).

An FTC MMR data-protection regulation that successfully “occupied the field” and preempted such state provisions could eliminate that source of costs. The Magnuson-Moss Warranty Act, however, does not contain an explicit preemption clause, leaving in serious doubt the ability of an FTC rule to displace state regulations (see here for a summary of the murky state of preemption law, including the skepticism of textualist Supreme Court justices toward implied “obstacle preemption”). In particular, the long history of state consumer-protection and antitrust laws that coexist with federal laws suggests that the case for FTC rule-based displacement of state data-protection laws is a weak one. The upshot, then, of a Section 18 FTC data-protection rule could be “the worst of all possible worlds,” with drawn-out litigation leading to competing federal and state norms that multiply business costs.

The second source of risk arises out of the statutory definition of “unfair practices,” found in Section 5(n) of the FTC Act. Section 5(n) codifies the meaning of unfair practices, and thereby constrains the FTC’s application of rulemakings covering such practices. Section 5(n) states:

The Commission shall have no authority . . . to declare unlawful an act or practice on the grounds that such an act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

In effect, Section 5(n) implicitly subjects unfair practices to a well-defined cost-benefit framework. Thus, in promulgating a data-privacy MMR, the FTC first would have to demonstrate that specific disfavored data-protection practices caused or were likely to cause substantial harm. What’s more, the commission would have to show that any actual or likely harm would not be outweighed by countervailing benefits to consumers or competition. One would expect that a data-privacy rulemaking record would include submissions that pointed to the efficiencies of existing data-protection policies that would be displaced by a rule.
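To see the structure of that framework, consider the following minimal Python sketch of Section 5(n)’s three-part logic (an illustrative schematic only; the field names and the simple numeric comparison are my own simplifications, and actual commission analysis is of course far more nuanced):

```python
from dataclasses import dataclass

@dataclass
class UnfairnessFacts:
    """Hypothetical record facts bearing on a Section 5(n) analysis."""
    substantial_injury: bool        # causes or is likely to cause substantial consumer injury
    reasonably_avoidable: bool      # consumers themselves could reasonably avoid the injury
    injury_magnitude: float         # estimated consumer harm
    countervailing_benefits: float  # benefits to consumers or to competition

def is_unfair(facts: UnfairnessFacts) -> bool:
    """Schematic Section 5(n) test: all three statutory prongs must be met."""
    return (
        facts.substantial_injury
        and not facts.reasonably_avoidable
        and facts.injury_magnitude > facts.countervailing_benefits  # harm not outweighed
    )

# A practice causing unavoidable harm that is outweighed by consumer benefits
# fails the third prong and cannot be declared "unfair."
print(is_unfair(UnfairnessFacts(True, False, 10e6, 15e6)))  # False
```

The point of the schematic is simply that each prong is a necessary condition: a rulemaking record that fails to establish any one of them cannot support an unfairness rule.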

Moreover, subsequent federal court challenges to a final FTC rule likely would put forth the consumer and competitive benefits sacrificed by rule requirements. For example, rule challengers might point to the added business costs passed on to consumers that would arise from particular rule mandates, and the diminution in competition among data-protection systems generated by specific rule provisions. Litigation uncertainties surrounding these issues could be substantial and would cast into further doubt the legal viability of any final FTC data protection rule.

Apart from these legal risk-based costs, an MMR data-privacy rule predictably would generate error-based costs. Given imperfect information in the hands of government and the impossibility of achieving welfare-maximizing nirvana through regulation (see, for example, here), any MMR data-privacy rule would erroneously condemn some economically efficient business protocols and disincentivize some efficiency-seeking behavior. The Section 5(n) cost-benefit framework, though helpful, would not eliminate such error. (For example, even bureaucratic efforts to accommodate some business suggestions during the rulemaking process might tilt the post-rule market in favor of certain business models, thereby distorting competition.) In the abstract, it is difficult to say whether the welfare benefits of a final MMR data-privacy rule (measured by reductions in data-privacy-related consumer harm) would outweigh the costs, even before taking legal costs into account.

Conclusion

At least two FTC commissioners (and likely a third, assuming that President Joe Biden’s highly credentialed nominee Alvaro Bedoya is confirmed by the U.S. Senate) appear to support FTC data-privacy regulation, even in the absence of new federal legislation. Such regulation, which presumably would be adopted as an MMR pursuant to Section 18 of the FTC Act, would probably not prove cost-beneficial. Not only would adoption of a final data-privacy rule generate substantial litigation costs and uncertainty, it quite possibly would impose an additional layer of regulatory burdens above and beyond the requirements of proliferating state privacy rules. Furthermore, it is impossible to say whether the consumer-privacy benefits stemming from such an FTC rule would outweigh the error costs (manifested through competitive distortions and consumer harm) arising from the rule’s inevitable imperfections. All told, these considerations counsel against the allocation of scarce FTC resources to a Section 18 data-privacy rulemaking initiative.

But what about legislation? New federal privacy legislation that explicitly preempted state law would eliminate costs arising from inconsistencies among state privacy rules. Ideally, if such legislation were to be pursued, it should, to the extent possible, embody a cost-benefit framework designed to minimize the sum of administrative (including litigation) and error costs. The nature of such a law, and the role the FTC might play in administering it, however, are topics for another day.

[This post adapts elements of “Technology Mergers and the Market for Corporate Control,” forthcoming in the Missouri Law Review.]

In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.

These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors (here, here, and here, among many others).

Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.

However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to decrease the number of false negatives under existing merger rules and thresholds.

The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.

For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.

Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) make them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.

The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.

Kill Zones

One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.

The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.

But both findings are problematic. For a start, Zingales and his co-authors’ theoretical model is premised on questionable assumptions about the way in which competition develops in the digital space. The first is that early adopters of new platforms—called “techies” in the authors’ parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it appears facially contradictory that “techies” are both the group with the highest switching costs and the group that switches the most. The authors further assume that “techies” would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.

Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence that they hold in real-world settings. In that sense, the authors commit the same error as previous theoretical work concerning externalities, which has tended to overestimate their frequency.

Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:

[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.

However, the results of this study are derived from the analysis of only nine transactions. The study also fails to clearly show that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry acquisitions exceeding $500 million to Facebook and Google’s acquisitions exceeding that amount. That comparison does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. This does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting them.

Finally, the paper offers no evidence that existing antitrust regimes fail to achieve an optimal error-cost balance. The central problem is that the paper’s findings have indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a time of rapidly rising venture capital investment, both in terms of the number of deals and dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.

Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.

In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”

Mergers and Potential Competition

Scholars have also posited more direct effects from acquisitions of startups or nascent companies by incumbent technology firms.

Some scholars argue that incumbents might acquire rivals that do not yet compete with them directly, in order to reduce the competitive pressure they will face in the future. In his paper “Potential Competition and Antitrust Analysis: Monopoly Profits Exceed Duopoly Profits,” Steven Salop argues:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

However, these antitrust theories of harm suffer from several important flaws. They rest upon several restrictive assumptions that may not hold in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.
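The underlying logic can be stated compactly. Let \(\Pi^m\) denote monopoly profits and \(\Pi^d_I\) and \(\Pi^d_E\) the incumbent’s and entrant’s respective duopoly profits (my notation, summarizing the standard argument):

```latex
% If monopoly profits exceed joint duopoly profits ...
\Pi^m > \Pi^d_I + \Pi^d_E
% ... then there is a range of acquisition prices P that leaves both
% parties better off than competing as duopolists:
\Pi^d_E < P < \Pi^m - \Pi^d_I
```

Any price in that interval gives the entrant more than it expects to earn by competing, while leaving the incumbent more than its duopoly profits, which is precisely what makes the preemptive deal mutually advantageous.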

Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.

Less obviously, it must be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is “for the market” itself, monopoly maintenance would fail to explain a rival’s decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.

Second, potential competition does not always increase consumer welfare. Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.

For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.

There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.

In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.

Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so in a cost-effective manner. Whether it is desirable to implement a given legal test is not simply a function of its accuracy, the cost to administer it, and the respective costs of false positives and false negatives. It also critically depends on how prevalent the conduct is that adjudicators would be seeking to foreclose.

Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.

If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
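The arithmetic is easy to verify. The short Python sketch below (the function and variable names are mine) reproduces the example’s numbers and also shows the break-even prevalence at which the two approaches generate the same number of errors:

```python
def errors(total: int, bad: int, accuracy: float):
    """Compare decision errors from screening mergers vs. clearing them all.

    With the test, a (1 - accuracy) share of procompetitive deals are wrongly
    blocked (false positives) and a (1 - accuracy) share of anticompetitive
    deals are wrongly cleared (false negatives). With no test, every
    anticompetitive deal is a false negative.
    """
    good = total - bad
    false_positives = good * (1 - accuracy)
    false_negatives = bad * (1 - accuracy)
    return false_positives + false_negatives, bad

for bad in (1_000, 2_500):
    with_test, no_test = errors(10_000, bad, 0.75)
    print(f"bad deals: {bad:>5,} | errors with test: {with_test:,.0f} | without: {no_test:,}")

# bad deals: 1,000 | errors with test: 2,500 | without: 1,000
# bad deals: 2,500 | errors with test: 2,500 | without: 2,500
```

Because errors under the test always total (1 − accuracy) × 10,000 = 2,500 regardless of prevalence, the break-even point (with errors weighted equally) comes exactly where the share of anticompetitive deals equals one minus the test’s accuracy; below that share, doing nothing produces fewer mistakes.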

This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.

As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics to detect the ones that are. Finally, there is little sense that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.

Killer Acquisitions

Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”

Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:

[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]

[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.

From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.

To start, the study’s industry-specific methodology means that it may not be a useful guide to understanding acquisitions in other industries, such as the tech sector.

Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.

Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?

The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. In the case of non-overlapping mergers, acquired projects continue 17.5% of the time, while this number is 13.4% when there are overlapping pipelines. The authors argue that this gap is evidence of killer acquisitions. But that argument misses the bigger picture: under the authors’ own numbers and definition of a “killer acquisition,” the vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus have important social costs.
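A quick back-of-the-envelope check using the continuation rates quoted above confirms both the headline figure and the point about benign deals (my arithmetic, not the authors’):

```python
# Continuation rates reported by Cunningham, Ederer & Ma (quoted above).
non_overlapping = 0.175  # acquired projects continued, non-overlapping acquirers
overlapping = 0.134      # acquired projects continued, overlapping acquirers

# Relative decline in continued development: recovers the 23.4% figure.
print(f"relative drop: {1 - overlapping / non_overlapping:.1%}")  # 23.4%

# Even attributing the entire gap to "killer" motives, the implied share of
# overlapping acquisitions that kill an otherwise-viable project is only
# about 4 percentage points; on this metric, the rest are benign.
print(f"implied killer share: {non_overlapping - overlapping:.1%}")  # 4.1%
```

On this simplified reading, even taking the authors’ numbers at face value, roughly 96 percent of overlapping acquisitions show no killer pattern, which is why a prohibition aimed at the category as a whole would sweep in mostly benign deals.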

Third, there are several problems with describing this kind of behavior as harmful. Indeed, Cunningham, et al., acknowledge that it could increase innovation by boosting the returns to innovation.

And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:

For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.

One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:

The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.

Another report finds that:

Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.

In short, pharmaceutical companies do not compete only along innovation-related parameters, though these are obviously important, but also along more traditional dimensions, such as cost rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent’s superior managerial efficiency to the acquired firm’s assets through operation of the market for corporate control.

This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, thereby benefiting consumers and increasing their incentives to invest in R&D in the first place by making successfully developed drugs more profitable.

In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.

While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.

A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is notably driven by the central role that digital markets currently occupy in competition-policy discussion, but also by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.

This is not surprising. Unlike in the pharmaceutical industry—where drugs need to go through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle to identify their closest rivals and prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.

The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than non-overlapping tech mergers). This is notably the case for studies by Axel Gautier & Joe Lamesch, and Elena Argentesi and her co-authors. Instead, these studies merely show that product discontinuations are common after an acquisition by a big tech company.

To summarize, while studies of this sort might suggest that the clearance of certain mergers was not optimal, they are hardly a sufficient basis on which to argue that enforcement should be tightened.

The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never a sufficient basis to tighten rules. For that, it is also necessary to factor in the administrative costs of increased enforcement, as well as potential false convictions to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.

Conclusion

Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society (here, here and here); anticompetitive ones are rare; and there is little way, at the margin, to tell good from bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.

Take the pharmaceutical industry, the fulcrum of these policy debates. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, Pfizer and BioNTech’s successful efforts to market an mRNA vaccine against COVID-19 offer a case in point.

The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.

Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.

The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—just as they have yet to suggest alternative institutional arrangements that would improve social welfare.

In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.

Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely. Yet it also eliminates a vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits from catching the very few arguably anticompetitive mergers that currently escape prosecution outweigh the significant costs that are required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.

The U.S. House this week passed H.R. 2668, the Consumer Protection and Recovery Act (CPRA), which authorizes the Federal Trade Commission (FTC) to seek monetary relief in federal courts for injunctions brought under Section 13(b) of the Federal Trade Commission Act.

Potential relief under the CPRA is comprehensive. It includes “restitution for losses, rescission or reformation of contracts, refund of money, return of property … and disgorgement of any unjust enrichment that a person, partnership, or corporation obtained as a result of the violation that gives rise to the suit.” What’s more, under the CPRA, monetary relief may be obtained for violations that occurred up to 10 years before the filing of the suit in which relief is requested by the FTC.

The Senate should reject the House version of the CPRA. Its monetary-recovery provisions require substantial narrowing if the legislation is to pass cost-benefit muster.

The CPRA is a response to the Supreme Court’s April 22 decision in AMG Capital Management v. FTC, which held that Section 13(b) of the FTC Act does not authorize the commission to obtain court-ordered equitable monetary relief. As I explained in an April 22 Truth on the Market post, Congress’ response to the court’s holding should not be to grant the FTC carte blanche authority to obtain broad monetary exactions for any and all FTC Act violations. I argued that “[i]f Congress adopts a cost-beneficial error-cost framework in shaping targeted legislation, it should limit FTC monetary relief authority (recoupment and disgorgement) to situations of consumer fraud or dishonesty arising under the FTC’s authority to pursue unfair or deceptive acts or practices.”

Error costs and calculation difficulties counsel against pursuing monetary recovery in FTC “unfair methods of competition” cases. As I explained in my post:

Consumer redress actions are problematic for a large proportion of FTC antitrust enforcement (“unfair methods of competition”) initiatives. Many of these antitrust cases are “cutting edge” matters involving novel theories and complex fact patterns that pose a significant threat of type I [false positives] error. (In comparison, type I error is low in hardcore collusion cases brought by the U.S. Justice Department where the existence, nature, and effects of cartel activity are plain). What’s more, they generally raise extremely difficult if not impossible problems in estimating the degree of consumer harm. (Even DOJ price-fixing cases raise non-trivial measurement difficulties.)

These error-cost and calculation difficulties became even more pronounced as of July 1. On that date, the FTC unwisely voted 3-2 to withdraw a bipartisan 2015 policy statement providing that the commission would apply consumer welfare and rule-of-reason (weighing efficiencies against anticompetitive harm) considerations in exercising its unfair methods of competition authority (see my commentary here). This means that, going forward, the FTC will arrogate to itself unbounded discretion to decide what competitive practices are “unfair.” Business uncertainty, and the costly risk aversion it engenders, would be expected to grow enormously if the FTC could extract monies from firms due to competitive behavior deemed “unfair,” based on no discernible neutral principle.

Error costs and calculation problems also strongly suggest that monetary relief in FTC consumer-protection matters should be limited to cases of fraud or clear deception. As I noted:

[M]atters involving a higher likelihood of error and severe measurement problems should be the weakest candidates for consumer redress in the consumer protection sphere. For example, cases involv[ing] allegedly misleading advertising regarding the nature of goods, or allegedly insufficient advertising substantiation, may generate high false positives and intractable difficulties in estimating consumer harm. As a matter of judgment, given resource constraints, seeking financial recoveries solely in cases of fraud or clear deception where consumer losses are apparent and readily measurable makes the most sense from a cost-benefit perspective.

In short, the Senate should rewrite the CPRA’s Section 13(b) amendments to authorize FTC monetary recoveries only when consumer fraud or dishonesty is shown.

Finally, the Senate would be wise to sharply pare back the House language that allows the FTC to seek monetary exactions based on conduct that is a decade old. After such a long period, serious problems would arise in making accurate factual determinations of economic effects and in calculating specific damages. Allowing retroactive determinations based on a shorter “look-back” period prior to the filing of a complaint (three years, perhaps) would appear to strike a better balance, allowing reasonable redress while controlling error costs.