
[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

When I was a kid, I trailed behind my mother in the grocery store with a notepad and a pencil, adding up the cost of each item she placed in our cart. This was partly my mother’s attempt to keep my math skills sharp, but it was also a necessity. As a low-income family, we had no slack in the budget for superfluous spending. The Hostess cupcakes I longed for were a luxury item that only appeared in our cart if there was an unexpected windfall. If the antitrust populists who castigate all forms of market power succeed in their crusade to radically deconcentrate the economy, life will be much harder for low-income families like the one I grew up in.

Antitrust populists like Biden White House official Tim Wu and author Matt Stoller decry the political influence of large firms. But instead of advocating for policies that tackle this political influence directly, they seek reforms to antitrust enforcement that aim to limit the economic advantages of these firms, believing that will translate into political enfeeblement. The economic advantages arising from scale benefit consumers, particularly low-income consumers, often at the expense of smaller economic rivals. But because the protection of small businesses is so paramount to their worldview, antitrust populists blithely ignore the harm that advancing their objectives would cause to low-income families.

This desire to protect small businesses, without acknowledging the economic consequences for low-income families, is plain in calls for reinvigorated Robinson-Patman Act enforcement (a 1930s law that independent businesses championed to limit the rise of chain stores) and in plans to revise the antitrust enforcement agencies’ merger guidelines. The U.S. Justice Department (DOJ) and the Federal Trade Commission (FTC) recently held a series of listening sessions to demonstrate the need for new guidelines. During the listening session on food and agriculture, independent grocer Anthony Pena described the difficulty he has competing with larger competitors like Walmart. He stated:

Just months ago, I was buying a 59-ounce orange juice just north of $4 a unit, where we couldn’t get the supplier to sell it to us … Meanwhile, I go to the bigger box like a Walmart or a club store. Not only do they have it fully stocked, but they have it about half the price that I would buy it for at cost.

Half the price. Anthony Pena is complaining that competitors such as Walmart are selling the same product at half the price. To protect independent grocers like Anthony Pena, antitrust populists would have consumers, including low-income families, pay twice as much for groceries.

Walmart is an important food retailer for low-income families. Nearly a fifth of all spending through the Supplemental Nutrition Assistance Program (SNAP), the program formerly known as food stamps, takes place at Walmart. After housing and transportation, food is the largest expense for low-income families. The share of expenditures going toward food for low-income families (i.e., families in the lowest 20% of the income distribution) is 34% higher than for high-income families (i.e., families in the highest 20% of the income distribution). This means that higher grocery prices disproportionately burden low-income families.

In 2019, the U.S. Department of Agriculture (USDA) launched the SNAP Online Purchasing Pilot, which allows SNAP recipients to use their benefits at online food retailers. The pandemic led to an explosion in the number of SNAP recipients using their benefits online—increasing from just 35,000 households in March 2020 to nearly 770,000 households just three months later. While the pilot originally included only Walmart and Amazon, the number of eligible retailers has expanded rapidly. To make grocery delivery, an important service during the pandemic, more accessible to low-income families, Amazon reduced its Prime membership fee (which helps pay for free delivery) by 50% for SNAP recipients.

The antitrust populists are not only targeting the advantages of large brick-and-mortar retailers, such as Walmart, but also of large online retailers like Amazon. Again, these advantages largely flow to consumers—particularly low-income ones.

The proposed American Innovation and Choice Online Act (AICOA), which was voted out of the Senate Judiciary Committee in February and may make an appearance on the Senate floor this summer, threatens those consumer benefits. AICOA would prohibit so-called “self-preferencing” by Amazon and other large technology platforms.

Should a ban on self-preferencing come to fruition, Amazon would not be able to prominently show its own products in any capacity—even when its products are a good match for a consumer’s search. In search results, Amazon would not be able to promote its private-label products, including Amazon Basics and 365 by Whole Foods, or products for which it is a first-party seller (i.e., a reseller of another company’s product). Amazon may also have to downgrade the ranking of popular products it sells, making them harder for consumers to find. Forcing Amazon to present offers that do not correspond to the products consumers want to buy, or that are not a good value, inflicts harm on all consumers but is particularly problematic for low-income consumers. All else equal, most consumers, especially low-income ones, obviously prefer cheaper products. It is important not to take that choice away from them.

Consider the case of orange juice, the product causing so much consternation for Mr. Pena. In a recent search on Amazon for a 59-ounce orange juice, as seen in the image below, the first four “organic” search results are SNAP-eligible, first-party, or private-label products sold by Amazon, ranging in price from $3.55 to $3.79. The next two results are from third-party sellers offering two 59-ounce bottles of orange juice at $38.99 and $84.54—more than five times the unit price offered by Amazon. If self-preferencing were prohibited, Amazon would be forced to promote products to consumers that are significantly more expensive and that are not SNAP-eligible. This increases costs directly for consumers who purchase more expensive products when cheaper alternatives are available but not presented. But it also increases costs indirectly by forcing consumers to search longer for better prices and SNAP-eligible products or by discouraging them from considering timesaving online shopping altogether. Low-income families are least able to afford these increased costs.
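
For readers who want to check the arithmetic, here is a quick back-of-the-envelope sketch in Python. The prices are simply the figures quoted above; the unit-price multiples it prints are implied by those figures, not additional data.

```python
# Back-of-the-envelope check of the unit-price gap described above.
# The prices are the ones quoted in the paragraph (59-ounce orange juice).

amazon_prices = [3.55, 3.79]             # single bottles sold first-party or private-label
third_party_two_packs = [38.99, 84.54]   # each listing is a two-pack of 59-ounce bottles

amazon_cheapest = min(amazon_prices)
for listing in third_party_two_packs:
    per_bottle = listing / 2
    multiple = per_bottle / amazon_cheapest
    print(f"two-pack at ${listing:.2f} -> ${per_bottle:.2f} per bottle, "
          f"about {multiple:.1f}x Amazon's cheapest option")
```

Even measured against Amazon’s most expensive listing, the cheaper of the two third-party offers works out to more than five times the unit price.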

The upshot is that antitrust populists are choosing to support (often well-off) small-business owners at the expense of vulnerable working people. Congress should not allow them to put the squeeze on low-income families. These families are already suffering due to record-high inflation—particularly for items that constitute the largest share of their expenditures, such as transportation and food. Proposed antitrust reforms such as AICOA and reinvigorated Robinson-Patman Act enforcement will only make it harder for low-income families to make ends meet.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

If S.2992—the American Innovation and Choice Online Act or AICOA—were to become law, it would be, at the very least, an incomplete law. By design—and not for good reason, but for political expediency—AICOA is riddled with intentional uncertainty. In theory, the law’s glaring definitional deficiencies are meant to be rectified by “expert” agencies (i.e., the DOJ and FTC) after passage. But in actuality, no such certainty would ever emerge, and the law would stand as a testament to the crass political machinations and absence of rigor that undergird it. Among many other troubling outcomes, this is what the future under AICOA would hold.

Two months ago, the American Bar Association’s (ABA) Antitrust Section published a searing critique of AICOA in which it denounced the bill for being poorly written, vague, and departing from established antitrust-law principles. As Lazar Radic and I discussed in a previous post, what made the ABA’s letter to Congress so eye-opening was that it was penned by a typically staid group with a reputation for independence, professionalism, and ideational heterogeneity.

One of the main issues the ABA flagged in its letter is that the introduction of vague new concepts—like “materially harm competition,” which does not exist anywhere in current antitrust law—into the antitrust mainstream will risk substantial legal uncertainty and produce swathes of unintended consequences.

According to some, however, the bill’s inherent uncertainty is a feature, not a bug. It leaves enough space for specialist agencies to define the precise meaning of key terms without unduly narrowing the scope of the bill ex ante.

In particular, supporters of the bill have pointed to the prospect of agency guidelines under the law to rescue it from the starkest of the fundamental issues identified by the ABA. Section 4 of AICOA requires the DOJ and FTC to issue “agency enforcement guidelines” no later than 270 days after the date of enactment:

outlining policies and practices relating to conduct that may materially harm competition under section 3(a), agency interpretations of the affirmative defenses under section 3(b), and policies for determining the appropriate amount of a civil penalty to be sought under section 3(c).

In pointing to the prospect of guidelines, however, supporters are inadvertently admitting defeat—and proving the ABA’s point: AICOA is not ready for prime time.

This thinking is misguided for at least three reasons:

Guidelines are not rules

As section 4(d) of AICOA recognizes, guidelines are emphatically nonbinding:

The joint guidelines issued under this section do not … operate to bind the Commission, Department of Justice, or any person, State, or locality to the approach recommended in the guidelines.

As such, the value of guidelines in dispelling legal uncertainty is modest, at best.

This is even more so in today’s highly politicized atmosphere, where guidelines can be withdrawn after a shift at the ballot box (we’ve just seen the FTC rescind the Vertical Merger Guidelines it put in place less than a year ago). Given how politicized the issuing agencies themselves have become, it’s a virtual certainty that the guidelines produced in response to AICOA would be steeped in partisan politics and immediately changed with a change in administration, thus providing no more lasting legal certainty than speculation by a member of Congress.

Guidelines are not the appropriate tool to define novel concepts

Regardless of this political reality, however, the mixture of vagueness and novelty inherent in the key concepts that underpin the infringements and affirmative defenses under AICOA—such as “fairness,” “preferencing,” “materiality,” or the “intrinsic” value of a product—undermines the usefulness (and legitimacy) of guidelines.

Indeed, while laws are sometimes purposefully vague—operating as standards rather than prescriptive rules—to allow for more flexibility, the concepts introduced by AICOA don’t even offer any cognizable standards suitable for fine-tuning.

The operative terms of AICOA don’t have definitive meanings under antitrust law, either because they are wholly foreign to accepted antitrust law (as in the case of “self-preferencing”) or because the courts have never agreed on an accepted definition (as in the case of “fairness”). Nor are they technical standards, which are better left to specialized agencies rather than to legislators to define, such as in the case of, e.g., pollution (by contrast: what is the technical standard for “fairness”?).

Indeed, as Elyse Dorsey has noted, the only certainty that would emerge from this state of affairs is the certainty of pervasive rent-seeking by non-altruistic players seeking to define the rules in their favor.

As we’ve pointed out elsewhere, the purpose of guidelines is to reflect the state of the art in a certain area of antitrust law and not to push the accepted scope of knowledge and practice in a new direction. This not only overreaches the FTC’s and DOJ’s powers, but also risks galvanizing opposition from the courts, thereby undermining the utility of adopting guidelines in the first place.

Guidelines can’t fix a fundamentally flawed law

Expecting guidelines to provide sensible, administrable content for the bill sets the bar overly high for guidelines, and unduly low for AICOA.

The alleged harms at the heart of AICOA are foreign to antitrust law, and even to the economic underpinnings of competition policy more broadly. Indeed, as Sean Sullivan has pointed out, the law doesn’t even purport to define “harms,” but only serves to make specific conduct illegal:

Even if the conduct has no effect, it’s made illegal, unless an affirmative defense is raised. And the affirmative defense requires that it doesn’t ‘harm competition.’ But ‘harm competition’ is undefined…. You have to prove that harm doesn’t result, but it’s not really ever made clear what the harm is in the first place.

“Self-preferencing” is not a competitive defect, and simply declaring it to be so does not make it one. As I’ve noted elsewhere:

The notion that platform entry into competition with edge providers is harmful to innovation is entirely speculative. Moreover, it is flatly contrary to a range of studies showing that the opposite is likely true…. The theory of vertical discrimination harm is at odds not only with this platform-specific empirical evidence, it is also contrary to the long-standing evidence on the welfare effects of vertical restraints more broadly …

… [M]andating openness is not without costs, most importantly in terms of the effective operation of the platform and its own incentives for innovation.

Asking agencies with an expertise in competition policy to enact economically sensible guidelines to direct enforcement against such conduct is a fool’s errand. It is a recipe for purely political legislation adopted by competition agencies that does nothing to further their competition missions.

AICOA’s Catch-22 Is Its Own Doing, and Will Be Its Downfall

AICOA’s Catch-22 is that, by making the law so vague that it needs enforcement guidelines to flesh it out, the bill renders both itself and those guidelines irrelevant and misses the point of both legal instruments.

Ultimately, guidelines cannot resolve the fundamental rule-of-law issues raised by the bill and highlighted by the ABA in its letter. To the contrary, they confirm the ABA’s concerns that AICOA is a poorly written and indeterminate bill. Further, the contentious elements of the bill that need clarification are inherently legislative ones that—paradoxically—shouldn’t be left to competition-agency guidelines to elucidate.

The upshot is that any future under AICOA will be one marked by endless uncertainty and the extreme politicization of both competition policy and the agencies that enforce it.

Slow wage growth and rising inequality over the past few decades have pushed economists more and more toward the study of monopsony power—particularly firms’ monopsony power over workers. Antitrust policy has taken notice. For example, when the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) initiated the process of updating their merger guidelines, their request for information included questions about how they should respond to monopsony concerns, as distinct from monopoly concerns.

From a pure economic-theory perspective, there is no important distinction between monopsony power and monopoly power. If Armen is trading his apples in exchange for Ben’s bananas, we can call Armen the seller of apples or the buyer of bananas. The labels (buyer and seller) are essentially arbitrary; as a matter of pure theory, it makes no difference which we use. Monopsony and monopoly are mirror images.

Some infer from this monopoly-monopsony symmetry, however, that extending antitrust to monopsony power will be straightforward. As a practical matter for antitrust enforcement, it is not so simple. The moment we go slightly less abstract and use the basic models that economists use, monopsony is not simply the mirror image of monopoly. The tools that antitrust economists use to identify market power differ in the two cases.

Monopsony Requires Studying Output

Suppose that the FTC and DOJ are considering a proposed merger. For simplicity, they know that the merger will generate efficiency gains (and they want to allow it) or market power (and they want to stop it) but not both. The challenge is to look at readily available data like prices and quantities to decide which it is. (Let’s ignore the ideal case that involves being able to estimate elasticities of demand and supply.)

In a monopoly case, if there are efficiency gains from a merger, the standard model has a clear prediction: the quantity sold in the output market will increase. An economist at the FTC or DOJ with sufficient data will be able to see (or estimate) the efficiencies directly in the output market. Efficiency gains result in either greater output at lower unit cost or else product-quality improvements that increase consumer demand. Because the merger lowers prices for consumers, the agencies (assume they care about the consumer welfare standard) will let it go through, since consumers are better off.

In contrast, if the merger simply enhances monopoly power without efficiency gains, the quantity sold will decrease, either because the merging parties raise prices or because quality declines. Again, the empirical implication of the merger is seen directly in the market in question. Because the merger raises prices for consumers, the agencies (assume they care about the consumer welfare standard) will not let it go through, since consumers are worse off. In both cases, you judge monopoly power by looking directly at the market that may or may not have monopoly power.
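
To make the two predictions concrete, here is a minimal numerical sketch. It is my own illustration rather than anything from the post: a textbook linear-demand Cournot model in which a three-to-two merger either delivers a large cost synergy (efficiency gains) or leaves costs unchanged (pure market power). The functional form and all parameter values are assumptions chosen only for illustration.

```python
# Minimal sketch (illustrative assumptions, not a model from the post): linear inverse
# demand P = a - b*Q with constant marginal cost c, and symmetric Cournot competition,
# so total output is Q = n*(a - c) / ((n + 1) * b). A three-to-two merger either
# lowers marginal cost (efficiency gains) or leaves it unchanged (pure market power);
# the two cases move the quantity sold in the output market in opposite directions.

def cournot_total_output(n_firms: int, a: float, b: float, c: float) -> float:
    """Total equilibrium quantity with n symmetric Cournot firms."""
    return n_firms * (a - c) / ((n_firms + 1) * b)

a, b = 100.0, 1.0  # illustrative demand parameters: P = 100 - Q

cases = {
    "baseline: 3 firms, cost 40":            cournot_total_output(3, a, b, 40.0),
    "efficiency merger: 2 firms, cost 10":   cournot_total_output(2, a, b, 10.0),
    "market-power merger: 2 firms, cost 40": cournot_total_output(2, a, b, 40.0),
}
for name, quantity in cases.items():
    price = a - b * quantity
    print(f"{name:40s} output={quantity:5.1f}  price={price:5.1f}")
```

Under these assumptions, the efficiency-gain merger raises output (45 to 60) and lowers price (55 to 40), while the pure market-power merger lowers output (to 40) and raises price (to 60); that is the contrast the agencies can read directly off the output market.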

Unfortunately, the monopsony case is more complicated. Ultimately, we can be certain of the effects of monopsony only by looking at the output market, not the input market where the monopsony power is claimed.

To see why, consider again a merger that generates either efficiency gains or market (now monopsony) power. A merger that creates monopsony power will necessarily reduce the prices and quantity purchased of inputs like labor and materials. An overly eager FTC may see a lower quantity of input purchased and jump to the conclusion that the merger increased monopsony power. After all, monopsonies purchase fewer inputs than competitive firms.

Not so fast. Fewer input purchases may be the result of efficiency gains. For example, if the efficiency gain arises from the elimination of redundancies in a hospital merger, the merged hospital will buy fewer inputs: it will hire fewer technicians and purchase fewer medical supplies. This may even reduce the wages of technicians or the price of medical supplies, even if the newly merged hospital is not exercising any market power to suppress wages.

The key point is that monopsony needs to be treated differently than monopoly. The antitrust agencies cannot simply look at the quantity of inputs purchased in the monopsony case as the flip side of the quantity sold in the monopoly case, because the efficiency-enhancing merger can look like the monopsony merger in terms of the level of inputs purchased.

How can the agencies differentiate efficiency-enhancing mergers from monopsony mergers? The easiest way may be for the agencies to look at the output market: an entirely different market than the one with the possibility of market power. Once we look at the output market, as we would do in a monopoly case, we have clear predictions. If the merger is efficiency-enhancing, there will be an increase in the output-market quantity. If the merger increases monopsony power, the firm perceives its marginal cost as higher than before the merger and will reduce output. 
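
The following sketch makes the same point numerically. It is my own illustration with made-up functional forms and parameters, not the authors’ model: a firm sells output at a fixed price, faces an upward-sloping labor-supply curve, and needs some overhead labor on top of its variable workforce. An efficiency merger trims the overhead; a monopsony merger leaves the technology alone but lets the firm exploit the labor-supply curve.

```python
import numpy as np

# Illustrative assumptions (not the authors' model): the firm sells output at a fixed
# price p, faces an upward-sloping labor-supply curve w(L) = w0 + s*L, and needs F
# workers of overhead plus variable labor to produce Q = A*sqrt(L - F).
p, A = 10.0, 20.0
w0, s = 5.0, 0.01
F_BASE, F_MERGED = 100.0, 50.0   # an efficiency merger eliminates redundant overhead

def grid(F):
    return np.linspace(F + 1.0, 600.0, 200_000)

def output(L, F):
    return A * np.sqrt(L - F)

def wage(L):
    return w0 + s * L

def wage_taker_hiring(F):
    # With no monopsony power, the firm hires until p * MP_L equals the market wage.
    L = grid(F)
    marginal_revenue_product = p * A * 0.5 / np.sqrt(L - F)
    return L[np.argmin(np.abs(marginal_revenue_product - wage(L)))]

def monopsonist_hiring(F):
    # A monopsonist internalizes that hiring more labor bids up the wage it pays
    # everyone, so it maximizes profit against the labor-supply curve directly.
    L = grid(F)
    profit = p * output(L, F) - wage(L) * L
    return L[np.argmax(profit)]

scenarios = {
    "baseline (pre-merger)":               (wage_taker_hiring(F_BASE), F_BASE),
    "efficiency merger (less overhead)":   (wage_taker_hiring(F_MERGED), F_MERGED),
    "monopsony merger (wage suppression)": (monopsonist_hiring(F_BASE), F_BASE),
}
for name, (L, F) in scenarios.items():
    print(f"{name:37s} labor={L:5.0f}  wage={wage(L):5.2f}  output={output(L, F):5.0f}")
```

With these made-up numbers, both post-merger scenarios show lower employment and a lower wage than the baseline, so the input market alone cannot separate them; output rises in the efficiency case and falls in the monopsony case, which is exactly the distinction the output market reveals.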

In short, as we look for how to apply antitrust to monopsony-power cases, the agencies and courts cannot look to the input market to differentiate them from efficiency-enhancing mergers; they must look at the output market. It is impossible to discuss monopsony power coherently without considering the output market.

In real-world cases, mergers will not necessarily be either strictly efficiency-enhancing or strictly monopsony-generating, but a blend of the two. Any rigorous consideration of merger effects must account for both and make some tradeoff between them. The question of how guidelines should address monopsony power is inextricably tied to the consideration of merger efficiencies, particularly given the point above that identifying and evaluating monopsony power will often depend on its effects in downstream markets.

This is just one complication that arises when we move from the purest of pure theory to slightly more applied models of monopoly and monopsony power. Geoffrey Manne, Dirk Auer, Eric Fruits, Lazar Radic, and I go through more of the complications in our comments submitted to the FTC and DOJ on updating the merger guidelines.

What Assumptions Make the Difference Between Monopoly and Monopsony?

Now that we have shown that monopsony and monopoly are different, how do we square this with the initial observation that it was arbitrary whether we say Armen has monopoly power over apples or monopsony power over bananas?

There are two differences between the standard monopoly and monopsony models. First, in a vast majority of models of monopsony power, the agent with the monopsony power is buying goods only to use them in production. They have a “derived demand” for some factors of production. That demand ties their buying decision to an output market. For monopoly power, the firm sells the goods, makes some money, and that’s the end of the story.
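
In symbols, and using generic textbook notation rather than anything from the post, the contrast looks roughly like this: the monopolist’s first-order condition involves only its own output market, while the monopsonist’s condition contains the output price and the production function, which is what “derived demand” means.

```latex
% Textbook monopolist: choose output Q facing inverse demand P(Q) and cost C(Q).
\[
\max_{Q}\; P(Q)\,Q - C(Q)
\quad\Longrightarrow\quad
\underbrace{P(Q) + P'(Q)\,Q}_{\text{marginal revenue}} \;=\; C'(Q)
\]

% Textbook monopsonist: choose labor L, sell output f(L) at price p,
% while facing an upward-sloping labor-supply curve w(L).
\[
\max_{L}\; p\,f(L) - w(L)\,L
\quad\Longrightarrow\quad
p\,f'(L) \;=\; \underbrace{w(L) + w'(L)\,L}_{\text{marginal factor cost}}
\]
```

Because the output price p and the production function f appear in the monopsonist’s condition, its input decision cannot be interpreted without reference to the downstream output market.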

The second difference is that the standard monopoly model looks at one output good at a time. The standard factor-demand model uses two inputs, which introduces a tradeoff between, say, capital and labor. We could force monopoly to look like monopsony by assuming the merging parties each produce two different outputs, apples and bananas. An efficiency gain could favor apple production and hurt banana consumers. While this sort of substitution among outputs is often realistic, it is not the standard economic way of modeling an output market.

[On Monday, June 27, Concurrences hosted a conference on the Rulemaking Authority of the Federal Trade Commission. This conference featured the work of contributors to a new book on the subject edited by Professor Dan Crane. Several of these authors have previously contributed to the Truth on the Market FTC UMC Symposium. We are pleased to be able to share with you excerpts or condensed versions of chapters from this book prepared by the authors of those chapters. Our thanks and compliments to Dan and Concurrences for bringing together an outstanding event and set of contributors and for supporting our sharing them with you here.]

[The post below was authored by former Federal Trade Commission Acting Chair Maureen K. Ohlhausen and former Assistant U.S. Attorney General James F. Rill.]

Since its founding in 1914, the Federal Trade Commission (FTC) has held a unique and multifaceted role in the U.S. administrative state and the economy. It possesses broad investigative and information-gathering powers, including through compulsory processes; a multi-layered administrative-adjudication process to prosecute “unfair methods of competition (UMC)” (and later, “unfair and deceptive acts and practices (UDAP),” as well); and an important role in educating and informing the business community and the public. What the FTC cannot be, however, is a legislature with broad authority to expand, contract, or alter the laws that Congress has tasked it with enforcing.

Recent proposals for aggressive UMC rulemaking, predicated on Section 6(g) of the FTC Act, would have the effect of claiming just this sort of quasi-legislative power for the commission based on a thin statutory reed authorizing “rules and regulations for the purpose of carrying out the provisions of” that act. This usurpation of power would distract the agency from its core mission of case-by-case expert application of the FTC Act through administrative adjudication. It would also be inconsistent with the explicit grants of rulemaking authority that Congress has given the FTC and run afoul of the congressional and constitutional “guard rails” that cabin the commission’s authority.

FTC’s Unique Role as an Administrative Adjudicator

The FTC’s Part III adjudication authority is central to its mission of preserving fair competition in the U.S. economy. The FTC has enjoyed considerable success in recent years with its administrative adjudications, both in terms of winning on appeal and in shaping the development of antitrust law overall (not simply a separate category of UMC law) by creating citable precedent in key areas. However, as a result of its July 1, 2021, open meeting and President Joe Biden’s “Promoting Competition in the American Economy” executive order, the FTC appears to be headed for another misadventure in response to calls to claim authority for broad, legislative-style “unfair methods of competition” rulemaking out of Section 6(g) of the FTC Act. The commission recently took a significant and misguided step toward this goal by rescinding—without replacing—its bipartisan Statement of Enforcement Principles Regarding “Unfair Methods of Competition” Under Section 5 of the FTC Act, divorcing (at least in the commission majority’s view) Section 5 from prevailing antitrust-law principles and leaving the business community without any current guidance as to what the commission considers “unfair.”

FTC’s Rulemaking Authority Was Meant to Complement Its Case-by-Case Adjudicatory Authority, Not Supplant It

As described below, broad rulemaking of this sort would likely encounter stiff resistance in the courts, due to its tenuous statutory basis and the myriad constitutional and institutional problems it creates. But even aside from the issue of legality, such a move would distract the FTC from its fundamental function as an expert case-by-case adjudicator of competition issues. It would be far too tempting for the commission to simply regulate its way to the desired outcome, bypassing all neutral arbiters along the way. And by seeking to promulgate such rules through abbreviated notice-and-comment rulemaking, the FTC would be claiming extremely broad substantive authority to directly regulate business conduct across the economy with relatively few of the procedural protections that Congress felt necessary for the FTC’s trade-regulation rules in the consumer-protection context. This approach risks not only a diversion of scarce agency resources from meaningful adjudication opportunities, but also potentially a loss of public legitimacy for the commission should it try to exempt itself from these important rulemaking safeguards.

FTC Lacks Authority to Promulgate Legislative-Style Competition Rules

The FTC has historically been hesitant to exercise UMC rulemaking authority under Section 6(g) of the FTC Act, which simply states that FTC shall have power “[f]rom time to time to classify corporations and … to make rules and regulations for the purpose of carrying out the provisions” of the FTC Act. Current proponents of UMC rulemaking argue for a broad interpretation of this clause, allowing for legally binding rulemaking on any issue subject to the FTC’s jurisdiction. But the FTC’s past reticence to exercise such sweeping powers is likely due to the existence of significant and unresolved questions of the FTC’s UMC rulemaking authority from both a statutory and constitutional perspective.

Absence of Statutory Authority

The FTC’s authority to conduct rulemaking under Section 6(g) has been tested in court only once, in National Petroleum Refiners Association v. FTC. In that case, the FTC succeeded in classifying the failure to post octane ratings on gasoline pumps as “an unfair method of competition.” The U.S. Court of Appeals for the D.C. Circuit found that Section 6(g) did confer this rulemaking authority. But Congress responded two years later with the Magnuson-Moss Warranty-Federal Trade Commission Improvement Act of 1975, which created a new rulemaking scheme that applied exclusively to the FTC’s consumer-protection rules. This act expressly excluded rulemaking on unfair methods of competition from its authority. The statute’s provision that UMC rulemaking is unaffected by the legislation manifests strong congressional design that such rules would be governed not by Magnuson-Moss, but by the FTC Act itself. The reference in Magnuson-Moss to the statute not affecting “any authority” of the FTC to engage in UMC rulemaking—as opposed to “the authority”— reflects Congress’ agnostic view on whether the FTC possessed any such authority. It simply means that whatever authority exists for UMC rulemaking, the Magnuson-Moss provisions do not affect it, and Congress left the question open for the courts to resolve.

Proponents of UMC rulemaking argue that Magnuson-Moss left the FTC’s competition-rulemaking authority intact and entitled to Chevron deference. But, as has been pointed out by many commentators over the decades, that would be highly incongruous, given that National Petroleum Refiners dealt with both UMC and UDAP authority under Section 6(g), yet Congress’ reaction was to provide specific UDAP rulemaking authority and expressly take no position on UMC rulemaking. As further evidenced by the fact that the FTC has never attempted to promulgate a UMC rule in the years following enactment of Magnuson-Moss, the act is best read as declining to endorse the FTC’s UMC rulemaking authority. Instead, it leaves the question open for future consideration by the courts.

Turning to the terms of the FTC Act, modern statutory interpretation takes a far different approach than the court in National Petroleum Refiners, which discounted the significance of Section 5’s enumeration of adjudication as the means for restraining UMC and UDAP, reasoning that Section 5(b) did not use limiting language and that Section 6(g) provides a source of substantive rulemaking authority. This approach is in clear tension with the elephants-in-mouseholes doctrine developed by the Supreme Court in recent years. The FTC’s recent claim of broad substantive UMC rulemaking authority based on the absence of limiting language and a vague, ancillary provision authorizing rulemaking alongside the ability to “classify corporations” stands in conflict with the Court’s admonition in Whitman v. American Trucking Associations. The Court in AMG Capital Management, LLC v. FTC recently applied similar principles in the context of the FTC’s authority under the FTC Act. Here, the Court emphasized “the historical importance of administrative proceedings” and declined to give the FTC a shortcut to desirable outcomes in federal court. Similarly, granting broad UMC-rulemaking authority to the FTC would permit it to circumvent the FTC Act’s defining feature of case-by-case adjudications. Applying the principles enunciated in Whitman and AMG, Section 5 is best read as specifying the sole means of UMC enforcement (adjudication), and Section 6(g) is best understood as permitting the FTC to specify how it will carry out its adjudicative, investigative, and informative functions. Thus, Section 6(g) grants ministerial, not legislative, rulemaking authority.

Notably, this reading of the FTC Act would accord with how the FTC viewed its authority until 1962, a fact that the D.C. Circuit found insignificant, but that later doctrine would weigh heavily. Courts should consider an agency’s “past approach” toward its interpretation of a statute, and an agency’s longstanding view that it lacks the authority to take a certain action is a “rather telling” clue that the agency’s newfound claim to such authority is incorrect. Conversely, even widespread judicial acceptance of an interpretation of an agency’s authority does not necessarily mean the construction of the statute is correct. In AMG, the Court gave little weight to the FTC’s argument that appellate courts “have, until recently, consistently accepted its interpretation.” It also rejected the FTC’s argument that “Congress has in effect twice ratified that interpretation in subsequent amendments to the Act.” Because the amendments did not address the scope of Section 13(b), they did not convince the Court in AMG that Congress had acquiesced in the lower courts’ interpretation.

The court in National Petroleum Refiners also lauded the benefits of rulemaking authority and emphasized that the ability to promulgate rules would allow the FTC to carry out the purpose of the act. But the Supreme Court has emphasized that “however sensible (or not)” an interpretation may be, “a reviewing court’s task is to apply the text of the statute, not to improve upon it.” Whatever benefits UMC-rulemaking authority may confer on the FTC, they cannot justify departure from the text of the FTC Act.

In sum, even Chevron requires the agency to rely on a “permissible construction” of the statute, and it is doubtful that the current Supreme Court would see a broad assertion of substantive antitrust rulemaking as “permissible” under the vague language of Section 6(g).

Constitutional Vulnerabilities

The shakiness of the foundation supporting the FTC’s claimed authority for UMC rulemaking is underscored by both the potential breadth of such rules and the lack of clear guidance in Section 6(g) itself. The presence of either of these factors increases the likelihood that any rule promulgated under Section 6 runs afoul of the constitutional nondelegation doctrine.

The nondelegation doctrine requires Congress to provide “an intelligible principle” to assist the agency to which it has delegated legislative discretion. Although long considered moribund, the doctrine was recently addressed by the U.S. Supreme Court in Gundy v. United States, which underscored the current relevance of limitations on Congress’ ability to transfer unfettered legislative-like powers to federal agencies. Although the statute in that case was ruled permissible by a plurality of justices, most of the Court’s current members have expressed concerns that the Court has long been too quick to reject nondelegation arguments, arguing for stricter controls in this area. In a concurrence, Justice Samuel Alito lamented that the Court has “uniformly rejected nondelegation arguments and has upheld provisions that authorized agencies to adopt important rules pursuant to extraordinarily capacious standards,” while Justices Neil Gorsuch and Clarence Thomas and Chief Justice John Roberts dissented, decrying the “unbounded policy choices” Congress had bestowed, stating that it “is delegation running riot” to “hand off to the nation’s chief prosecutor the power to write his own criminal code.”

The Gundy dissent cited to A.L.A. Schechter Poultry Corp. v. United States, where the Supreme Court struck down Congress’ delegation of authority based on language very similar to Section 5 of the FTC Act. Schechter Poultry examined whether the authority that Congress granted to the president under the National Industrial Recovery Act (NIRA) violated the nondelegation clause. The offending NIRA provision gave the president authority to approve “codes of fair competition,” which comes uncomfortably close to the FTC Act’s “unfair methods of competition” grant of authority. Notably, Schechter Poultry expressly differentiated NIRA from the FTC Act based on distinctions that do not apply in the rulemaking context. Specifically, the Court stated that, despite the similar delegation of authority, unlike NIRA, actions under the FTC Act are subject to an adjudicative process. The Court observed that the commission serves as “a quasi judicial body” and assesses what constitutes unfair methods of competition “in particular instances, upon evidence, in light of particular competitive conditions.” That essential distinction disappears in the case of rulemaking, where the commission acts in a quasi-legislative role and promulgates rules of broad application.

It appears that the nondelegation doctrine may be poised for a revival and may play a significant role in the Supreme Court’s evaluation of expansive attempts by the Biden administration to exercise legislative-type authority without explicit congressional authorization and guidance. This would create a challenging backdrop for the FTC to attempt aggressive new UMC rulemaking.

Antitrust Rulemaking by FTC Is Likely to Lead to Inefficient Outcomes and Institutional Conflicts

Aside from the doubts raised by these significant statutory and constitutional issues as to the legality of competition rulemaking by the FTC, there are also several policy and institutional factors counseling against legislative-style antitrust rulemaking.

Legislative Rulemaking on Competition Issues Runs Contrary to the Purpose of Antitrust Law

The core of U.S. antitrust law is based on broadly drafted statutes that, at least for violations outside the criminal-conspiracy context, leave determinations of likely anticompetitive effects, procompetitive justifications, and ultimate liability up to factfinders charged with highly detailed, case-specific determinations. Although no factfinder is infallible, this requirement for highly fact-bound analysis helps to ensure that each case’s outcome has a high likelihood of preserving or increasing consumer welfare.

Legislative rulemaking would replace this quintessential fact-based process with one-size-fits-all bright-line rules. Competition rules would function like per se prohibitions, but based on notice-and-comment procedures, rather than the broad and longstanding legal and economic consensus usually required for per se condemnation under the Sherman Act. Past experience with similar regulatory regimes should give reason for pause here: the Interstate Commerce Commission, for example, failed to efficiently regulate the railroad industry before being abolished with bipartisan consensus in 1996, costing consumers, by some estimates, as much as several billion dollars annually (in today’s dollars) in lost competitive benefits. As FTC Commissioner Christine Wilson observes, regulatory rules “frequently stifle innovation, raise prices, and lower output and quality without producing concomitant health, safety, and other benefits for consumers.” By sacrificing the precision of case-by-case adjudication, rulemaking advocates are also losing one of the best tools we have to account for “market dynamics, new sources of competition, and consumer preferences.”

Potential for Institutional Conflict with DOJ

In addition to these substantive concerns, UMC rulemaking by the FTC would also create institutional conflicts between the FTC and DOJ and lead to divergence between the legal standards applicable to the FTC Act, on the one hand, and the Sherman and Clayton acts, on the other. At present, courts have interpreted the FTC Act to be generally coextensive with the prohibitions on unlawful mergers and anticompetitive conduct under the Sherman and Clayton acts, with the limited exception of invitations to collude. But because the FTC alone has the authority to enforce the FTC Act, and rulemaking by the FTC would be limited to interpretations of that act (and could not directly affect or repeal caselaw interpreting the Sherman and Clayton acts), it would create two separate standards of liability. Given that the FTC and DOJ historically have divided enforcement between the agencies based on the industry at issue, this could result in different rules of conduct, depending on the industry involved. Types of conduct that have the potential for anticompetitive effects under certain circumstances but generally pass a rule-of-reason analysis could nonetheless be banned outright if the industry is subject to FTC oversight. Dissonance between the two federal enforcement agencies would be even more difficult for companies not falling firmly within either agency’s purview; those entities would lack certainty as to which guidelines to follow: rule-of-reason precedent or FTC rules.

Conclusion

Following its rebuke at the Supreme Court in the AMG Capital Management case, now is the time for the FTC to focus on its core, case-by-case administrative mission, taking full advantage of its unique adjudicative expertise. Broad unfair methods of competition rulemaking, however, would be an aggressive step in the wrong direction—away from the FTC’s core mission and toward a no-man’s-land far afield from the FTC’s governing statutes.

The Biden administration’s antitrust reign of error continues apace. The U.S. Justice Department’s (DOJ) Antitrust Division has indicated in recent months that criminal prosecutions may be forthcoming under Section 2 of the Sherman Antitrust Act, but refuses to provide any guidance regarding enforcement criteria.

Earlier this month, Deputy Assistant Attorney General Richard Powers stated that “there’s ample case law out there to help inform those who have concerns or questions” regarding Section 2 criminal enforcement, conveniently ignoring the fact that criminal Section 2 cases have not been brought in almost half a century. Needless to say, those ancient Section 2 cases (which are relatively few in number) antedate the modern era of economic reasoning in antitrust analysis. What’s more, unlike Section 1 price-fixing and market-division precedents, they yield no clear rule as to what constitutes criminal unilateral behavior. Thus, DOJ’s suggestion that old cases be consulted for guidance is disingenuous at best. 

It follows that DOJ criminal-monopolization prosecutions would be sheer folly. They would spawn substantial confusion and uncertainty and disincentivize dynamic economic growth.

Aggressive unilateral business conduct is a key driver of the competitive process. It brings about “creative destruction” that transforms markets, generates innovation, and thereby drives economic growth. As such, one wants to be particularly careful before condemning such conduct on grounds that it is anticompetitive. Accordingly, error costs here are particularly high and damaging to economic prosperity.

Moreover, errors in assessing unilateral conduct are more likely than errors in assessing joint conduct, because it is very hard to distinguish between procompetitive and anticompetitive single-firm conduct, as DOJ’s 2008 Report on Single Firm Conduct Under Section 2 explains (citations omitted):

Courts and commentators have long recognized the difficulty of determining what means of acquiring and maintaining monopoly power should be prohibited as improper. Although many different kinds of conduct have been found to violate section 2, “[d]efining the contours of this element … has been one of the most vexing questions in antitrust law.” As Judge Easterbrook observes, “Aggressive, competitive conduct by any firm, even one with market power, is beneficial to consumers. Courts should prize and encourage it. Aggressive, exclusionary conduct is deleterious to consumers, and courts should condemn it. The big problem lies in this: competitive and exclusionary conduct look alike.”

The problem is not simply one that demands drawing fine lines separating different categories of conduct; often the same conduct can both generate efficiencies and exclude competitors. Judicial experience and advances in economic thinking have demonstrated the potential procompetitive benefits of a wide variety of practices that were once viewed with suspicion when engaged in by firms with substantial market power. Exclusive dealing, for example, may be used to encourage beneficial investment by the parties while also making it more difficult for competitors to distribute their products.

If DOJ does choose to bring a Section 2 criminal case soon, would it target one of the major digital platforms? Notably, a U.S. House Judiciary Committee letter recently called on DOJ to launch a criminal investigation of Amazon (see here). Also, current Federal Trade Commission (FTC) Chair Lina Khan launched her academic career with an article focusing on Amazon’s “predatory pricing” and attacking the consumer welfare standard (see here).

Khan’s “analysis” has been totally discredited. As a trenchant scholarly article by Timothy Muris and Jonathan Nuechterlein explains:

[DOJ’s criminal Section 2 prosecution of A&P, begun in 1944,] bear[s] an eerie resemblance to attacks today on leading online innovators. Increasingly integrated and efficient retailers—first A&P, then “big box” brick-and-mortar stores, and now online retailers—have challenged traditional retail models by offering consumers lower prices and greater convenience. For decades, critics across the political spectrum have reacted to such disruption by urging Congress, the courts, and the enforcement agencies to stop these American success stories by revising antitrust doctrine to protect small businesses rather than the interests of consumers. Using antitrust law to punish pro-competitive behavior makes no more sense today than it did when the government attacked A&P for cutting consumers too good a deal on groceries. 

Before bringing criminal Section 2 charges against Amazon, or any other “dominant” firm, DOJ leaders should read and absorb the sobering Muris and Nuechterlein assessment. 

Finally, not only would DOJ Section 2 criminal prosecutions represent bad public policy—they would also undermine the rule of law. In a very thoughtful 2017 speech, then-Acting Assistant Attorney General for Antitrust Andrew Finch succinctly summarized the importance of the rule of law in antitrust enforcement:

[H]ow do we administer the antitrust laws more rationally, accurately, expeditiously, and efficiently? … Law enforcement requires stability and continuity both in rules and in their application to specific cases.

Indeed, stability and continuity in enforcement are fundamental to the rule of law. The rule of law is about notice and reliance. When it is impossible to make reasonable predictions about how a law will be applied, or what the legal consequences of conduct will be, these important values are diminished. To call our antitrust regime a “rule of law” regime, we must enforce the law as written and as interpreted by the courts and advance change with careful thought.

The reliance fostered by stability and continuity has obvious economic benefits. Businesses invest, not only in innovation but in facilities, marketing, and personnel, and they do so based on the economic and legal environment they expect to face.

Of course, we want businesses to make those investments—and shape their overall conduct—in accordance with the antitrust laws. But to do so, they need to be able to rely on future application of those laws being largely consistent with their expectations. An antitrust enforcement regime with frequent changes is one that businesses cannot plan for, or one that they will plan for by avoiding certain kinds of investments.

Bringing criminal monopolization cases now, after a half-century of inaction, would be antithetical to the stability and continuity that underlie the rule of law. What’s worse, the failure to provide prosecutorial guidance would be squarely at odds with concerns of notice and reliance that inform the rule of law. As such, a DOJ decision to target firms for Section 2 criminal charges would offend the rule of law (and, sadly, follow the FTC’s recent example of flouting the rule of law, see here and here).

In sum, the case against criminal Section 2 prosecutions is overwhelming. At a time when DOJ is facing difficulties winning “slam dunk” criminal Section 1 prosecutions targeting facially anticompetitive joint conduct (see here, here, and here), the notion that it would criminally pursue unilateral conduct that may generate substantial efficiencies is ludicrous. Hopefully, DOJ leadership will come to its senses and drop any and all plans to bring criminal Section 2 cases.

[The following is a guest post from Andrew Mercado, a research assistant at the Mercatus Center at George Mason University and an adjunct professor and research assistant at George Mason’s Antonin Scalia Law School.]

The Competition and Transparency in Digital Advertising Act (CTDAA), introduced May 19 by Sens. Mike Lee (R-Utah), Ted Cruz (R-Texas), Amy Klobuchar (D-Minn.), and Richard Blumenthal (D-Conn.), is the latest manifestation of the congressional desire to “do something” legislatively about big digital platforms. Although different in substance from the other antitrust bills introduced this Congress, it shares one key characteristic: it is fatally flawed and should not be enacted.  

Restrictions

In brief, the CTDAA imposes revenue-based restrictions on the ownership structure of firms engaged in digital advertising. The CTDAA bars a firm with more than $20 billion in annual advertising revenue (adjusted annually for inflation) from:

  1. owning a digital-advertising exchange if it owns either a sell-side ad brokerage or a buy-side ad brokerage; and
  2. owning a sell-side brokerage if it owns a buy-side brokerage, or from owning a buy-side or sell-side brokerage if it is also a buyer or seller of advertising space.
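
Read literally, the ownership test reduces to a handful of boolean conditions. The sketch below is my own simplified encoding of the two numbered restrictions as described above, intended only to make the structure of the prohibition easier to parse; it is not statutory text and glosses over the bill’s definitions.

```python
def ctdaa_prohibits(ad_revenue_billions: float,
                    owns_exchange: bool,
                    owns_sell_side: bool,
                    owns_buy_side: bool,
                    buys_or_sells_ad_space: bool) -> bool:
    """Simplified reading of the CTDAA ownership restrictions described above."""
    if ad_revenue_billions <= 20:  # the restrictions reach only large firms
        return False
    # (1) no exchange ownership alongside a sell-side or buy-side brokerage
    if owns_exchange and (owns_sell_side or owns_buy_side):
        return True
    # (2) no owning both sides, and no brokerage ownership by a buyer/seller of ad space
    if owns_sell_side and owns_buy_side:
        return True
    if (owns_sell_side or owns_buy_side) and buys_or_sells_ad_space:
        return True
    return False

# On this reading, a fully integrated platform with more than $20 billion in
# advertising revenue trips every prong of the test.
print(ctdaa_prohibits(200, owns_exchange=True, owns_sell_side=True,
                      owns_buy_side=True, buys_or_sells_ad_space=True))  # True
```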

The proposal’s ownership restrictions present the clearest harm to the future of the digital-advertising market. From an efficiency perspective, vertical integration of both sides of the market can lead to enormous gains. Since, for example, Google owns and operates an ad exchange, a sell-side broker, and a buy-side broker, there are very few frictions between the different sides of the market. All of the systems are integrated, and the supply of advertising space, the demand for that space, and the marketplace conducting price-discovery auctions are all updated automatically in real time.

While this instantaneous updating is not unique to Google’s system, and other buy- and sell-side firms can integrate into the system, the benefit to advertisers and publishers can be found in the cost savings that come from the integration. Since Google is able to create synergies on all sides of the market, the fees on any given transaction are lower. Further, incorporating Google’s vast trove of data allows for highly relevant and targeted ads. All of this means that advertisers spend less for the same quality of ad; publishers get more for each ad they place; and consumers see higher-quality, more relevant ads.

Without the ability to own and invest in the efficiency and transaction-cost reduction of an integrated platform, there will likely be less innovation and lower quality on all sides of the market. Further, advertisers and publishers will have to shoulder the burden of using non-integrated marketplaces and would likely pay higher fees for less-efficient brokers. Since Google is a one-stop shop for all of a company’s needs—whether that be on the advertising side or the publishing side—companies can move seamlessly from one side of the market to the other, all while paying lower costs per transaction, because of the integrated nature of the platform.

In the absence of such integration, a company would have to seek out one buy-side brokerage to place ads and another, separate sell-side brokerage to receive ads. These two brokers would then have to go to an ad exchange to facilitate the deal, bringing three different intermediaries into the mix. Each of these middlemen would take a proportionate cut of the deal. Compared with an integrated market, the fees associated with serving ads in a non-integrated market are almost certainly higher.
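
To see why stacked intermediaries tend to translate into higher total fees, here is a toy calculation. Every take rate in it is a hypothetical number invented for the illustration, not an actual market figure.

```python
# Hypothetical illustration of fee stacking (all percentages are invented for the
# example): compare what a publisher keeps from a $1.00 ad buy when one integrated
# platform charges a single take rate versus when a buy-side broker, an exchange,
# and a sell-side broker each take a cut.

ad_spend = 1.00

integrated_take_rate = 0.20            # single combined fee (assumed)
non_integrated_fees = {
    "buy-side broker": 0.10,           # each intermediary's cut (assumed)
    "ad exchange": 0.10,
    "sell-side broker": 0.10,
}

publisher_gets_integrated = ad_spend * (1 - integrated_take_rate)

remaining = ad_spend
for name, fee in non_integrated_fees.items():
    remaining *= (1 - fee)             # each middleman takes its share of what is left
publisher_gets_chain = remaining

print(f"integrated platform: publisher keeps ${publisher_gets_integrated:.2f}")
print(f"three-broker chain:  publisher keeps ${publisher_gets_chain:.2f}")
```

With these assumed rates, the publisher keeps $0.80 of each advertising dollar under the single integrated platform and roughly $0.73 under the three-intermediary chain; the direction of the comparison, not the specific numbers, is what matters.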

Additionally, under this proposal, the innovative potential of each individual firm is capped. If a firm grows big enough and gains sufficient revenue through integrating different sides of the market, it will be forced to break up its efficiency-inducing operations. Marginal improvements on each side of the market may be possible, but without integrating different sides of the market, the scale required to justify those improvements would be unattainable.

Assumptions

The CTDAA assumes that:

  1. there is a serious competitive problem in digital advertising; and
  2. the structural separation and regulation of advertising brokerages run by huge digital-advertising platforms (as specified in the CTDAA) would enhance competition and benefit digital advertising customers and consumers.

The first assumption has not been proven and is subject to debate, while the second assumption is likely to be false.

Fundamental to the bill’s assumption that the digital-advertising market lacks competition is a misunderstanding of competitive forces and the idea that revenue and profit are inversely related to competition. While it is true that high profits can be a sign of consolidation and anticompetitive outcomes, the dynamic nature of the internet economy makes that inference unreliable.

As Christopher Kaiser and I have discussed, competition in the internet economy is incredibly dynamic. Vigorous competition can be achieved with just a handful of firms, despite claims from some quarters that four competitors is necessarily too few. Even in highly concentrated markets, there is the omnipresent threat that new entrants will emerge to usurp an incumbent’s reign. Additionally, while some studies may show unusually large profits in those markets, when adjusted for the consumer welfare created by large tech platforms, profits should actually be significantly higher than they are.

Evidence of dynamic entry in digital markets can be found in a recently announced product offering from a small (but more than $6 billion in revenue) competitor in digital advertising. Following the outcry associated with Google’s alleged abuse with Project Bernanke, the Trade Desk developed OpenPath. This allowed the Trade Desk, a buy-side broker, to handle some of the functions of a sell-side broker and eliminate harms from Google’s alleged bid-rigging to better serve its clients.

In developing the platform, the Trade Desk said it would discontinue serving any Google-based customers, effectively severing ties with the largest advertising exchange on the market. While this runs afoul of the letter of the law spelled out in CTDAA, it is well within the spirit of its sponsors’ stated goal: businesses engaging in robust free-market competition. If Google’s market power were as omnipresent and suffocating as the sponsors allege, then eliminating traffic from Google would have been a death sentence for the Trade Desk.

While various theories of vertical and horizontal competitive harm have been put forward, there has not been an empirical showing that consumers and advertising customers have failed to benefit from the admittedly efficient aspects of digital-brokerage auctions administered by Google, Facebook, and a few other platforms. The rapid and dramatic growth of digital advertising and associated commerce strongly suggests that this has been an innovative and welfare-enhancing development. Moreover, the introduction of a new integrated brokerage platform by a “small” player in the advertising market indicates there is ample opportunity to increase this welfare further.  

Interfering in brokerage operations under the unproven assumption that “monopoly rents” are being charged and that customers are being “exploited” would rest on rhetoric unmoored from hard evidence. Furthermore, if specific platform practices are shown to inefficiently exclude potential entrants, existing antitrust law can be deployed on a case-specific basis. This approach is currently being pursued by a coalition of state attorneys general against Google (the merits of which are not relevant to this commentary).

Even assuming for the sake of argument that there are serious competition problems in the digital-advertising market, there is no reason to believe that the arbitrary provisions and definitions found in the CTDAA would enhance welfare. Indeed, it is likely that the act would have unforeseen consequences:

  • It would lead to divestitures supervised by the U.S. Justice Department (DOJ) that could destroy the targeting efficiencies generated by brokerages integrated into platforms;
  • It would disincentivize improvements in advertising brokerages and likely would reduce future welfare on both the buy and sell sides of digital advertising;
  • It would require costly recordkeeping and disclosures by covered platforms that could have unforeseen consequences for privacy and potentially reduce the efficiency of bidding practices;
  • It would establish a fund for damage payments that would encourage wasteful litigation (see next two points);
  • It would spawn a great deal of wasteful private rent-seeking litigation that would discourage future platform and brokerage innovations; and
  • It would likely generate wasteful lawsuits by rent-seeking state attorneys general (and perhaps the DOJ as well).

The legislation would ultimately harm consumers who currently benefit from a highly efficient form of targeted advertising (for more on the welfare benefits of targeted advertising, see here). Since Google continually invests in creating a better search engine (to deliver ads directly to consumers) and collects more data to better target ads (to deliver ads to specific consumers), the value to advertisers of displaying ads on Google constantly increases.

Proposing a new regulatory structure that would directly affect the operations of highly efficient auction markets is the height of folly. It ignores the findings of Nobel laureate James M. Buchanan (among others) that, to justify regulation, there should first be a provable serious market failure and that, even if such a failure can be shown, the net welfare costs of government intervention should be smaller than the net welfare costs of non-intervention.

Given the likely substantial costs of government intervention and the lack of proven welfare costs from the present system (which clearly has been associated with a growth in output), the second prong of the Buchanan test clearly has not been met.

Conclusion

While there are allegations of abuses in the digital-advertising market, it is not at all clear that these abuses have had a long-term negative economic impact. As shown in a study by Erik Brynjolfsson and his student Avinash Collis—recently summarized in the Harvard Business Review (Alden Abbott offers commentary here)—the consumer surplus generated by digital platforms has far outstripped the advertising and services revenues received by the platforms. The CTDAA would risk unwinding many of those gains.

If the goal is to create a multitude of small, largely inefficient advertising companies that charge high fees and provide low-quality service, this bill will deliver. The market for advertising will have a far greater number of players but it will be far less competitive, since no companies will be willing to exceed the $20 billion revenue threshold that would leave them subject to the proposal’s onerous ownership standards.

If, however, the goal is to increase consumer welfare, promote rigorous competition, and secure better outcomes for advertisers and publishers, then the bill is likely to fail. The ownership requirements laid out in the proposal will lead to a stagnant advertising market, higher fees for all involved, and lower-quality, less-relevant ads. Government regulatory interference in highly successful and efficient platform markets is a terrible idea.

Sens. Amy Klobuchar (D-Minn.) and Chuck Grassley (R-Iowa)—cosponsors of the American Innovation and Choice Online Act, which seeks to “rein in” tech companies like Apple, Google, Meta, and Amazon—contend that “everyone acknowledges the problems posed by dominant online platforms.”

In their framing, it is simply an acknowledged fact that U.S. antitrust law has not kept pace with developments in the digital sector, allowing a handful of Big Tech firms to exploit consumers and foreclose competitors from the market. To address the issue, the senators’ bill would bar “covered platforms” from engaging in a raft of conduct, including self-preferencing, tying, and limiting interoperability with competitors’ products.

That’s what makes the open letter to Congress published late last month by the usually staid American Bar Association’s (ABA) Antitrust Law Section so eye-opening. The letter is nothing short of a searing critique of the legislation, which the section finds to be poorly written, vague, and departing from established antitrust-law principles.

The ABA, of course, has a reputation as an independent, highly professional, and heterogeneous group. The antitrust section’s membership includes not only in-house corporate counsel, but also lawyers from nonprofits, consulting firms, and federal and state agencies, as well as judges and legal academics. Given this context, the comments must be read as a high-level judgment that recent legislative and regulatory efforts to “discipline” tech fall outside the legal mainstream and would come at the cost of established antitrust principles, legal precedent, transparency, sound economic analysis, and ultimately consumer welfare.

The Antitrust Section’s Comments

As the ABA Antitrust Law Section observes:

The Section has long supported the evolution of antitrust law to keep pace with evolving circumstances, economic theory, and empirical evidence. Here, however, the Section is concerned that the Bill, as written, departs in some respects from accepted principles of competition law and in so doing risks causing unpredicted and unintended consequences.

Broadly speaking, the section’s criticisms fall into two interrelated categories. The first relates to deviations from antitrust orthodoxy and the principles that guide enforcement. The second is a critique of the AICOA’s overly broad language and ambiguous terminology.

Departing from established antitrust-law principles

Substantively, the overarching concern expressed by the ABA Antitrust Law Section is that AICOA departs from the traditional role of antitrust law, which is to protect the competitive process, rather than choosing to favor some competitors at the expense of others. Indeed, the section’s open letter observes that, out of the 10 categories of prohibited conduct spelled out in the legislation, only three require a “material harm to competition.”

Take, for instance, the prohibition on “discriminatory” conduct. As it stands, the bill’s language does not require a showing of harm to the competitive process. It instead appears to enshrine a freestanding prohibition of discrimination. The bill also targets tying practices that are already addressed by U.S. antitrust law, while similarly eschewing the traditionally required showings of market power and harm to the competitive process. The same can be said, mutatis mutandis, for “self-preferencing” and the “unfair” treatment of competitors.

The problem, the section’s letter to Congress argues, is not only that this increases the teleological chasm between AICOA and the overarching goals and principles of antitrust law, but that it can also easily lead to harmful unintended consequences. For instance, as the ABA Antitrust Law Section previously observed in comments to the Australian Competition and Consumer Commission, a prohibition of pricing discrimination can limit the extent of discounting generally. Similarly, self-preferencing conduct on a platform can be welfare-enhancing, while forced interoperability—which is also contemplated by AICOA—can increase prices for consumers and dampen incentives to innovate. Furthermore, some of these blanket prohibitions are arguably at loggerheads with established antitrust doctrine, such as the holding in Trinko that even monopolists are generally free to decide with whom they will deal.

Arguably, the reason the Klobuchar-Grassley bill can so seamlessly exclude or redraw such a central element of antitrust law as competitive harm is that it deliberately chooses to ignore another, preceding one. Namely, the bill omits market power as a requirement for a finding of infringement or for the legislation’s equally crucial designation as a “covered platform.” It instead prescribes size metrics—number of users, market capitalization—to define which platforms are subject to intervention. Such definitions cast an overly wide net that could capture consumer-facing conduct with no potential to harm competition at all.

It is precisely for this reason that existing antitrust laws are tethered to market power—i.e., because it long has been recognized that only companies with market power can harm competition. As John B. Kirkwood of Seattle University School of Law has written:

Market power’s pivotal role is clear…This concept is central to antitrust because it distinguishes firms that can harm competition and consumers from those that cannot.

In response to the above, the ABA Antitrust Law Section (reasonably) urges Congress explicitly to require an effects-based showing of harm to the competitive process as a prerequisite for all 10 of the infringements contemplated in the AICOA. This also means disclaiming generalized prohibitions of “discrimination” and of “unfairness” and replacing blanket prohibitions (such as the one for self-preferencing) with measured case-by-case analysis.

Opaque language for opaque ideas

Another underlying issue is that the Klobuchar-Grassley bill is shot through with indeterminate language and fuzzy concepts that have no clear limiting principles. For instance, in order either to establish liability or to mount a successful defense to an alleged violation, the bill relies heavily on inherently amorphous terms such as “fairness,” “preferencing,” and “materiality,” or the “intrinsic” value of a product. But as the ABA Antitrust Law Section letter rightly observes, these concepts are not defined in the bill, nor by existing antitrust case law. As such, they inject variability and indeterminacy into how the legislation would be administered.

Moreover, it is also unclear how some incommensurable concepts will be weighed against each other. For example, how would concerns about safety and security be weighed against prohibitions on self-preferencing or requirements for interoperability? What is a “core function,” and when would the law determine it has been sufficiently “enhanced” or “maintained”—requirements the bill sets out as conditions for exempting certain otherwise-prohibited behavior? The lack of linguistic and conceptual clarity not only erodes legal certainty, but also invites judicial second-guessing of business decisions, something against which the U.S. Supreme Court has long warned.

Finally, the bill’s choice of language and recent amendments to its terminology seem to confirm the dynamic discussed in the previous section. Most notably, the latest version of AICOA replaces earlier language invoking “harm to the competitive process” with “material harm to competition.” As the ABA Antitrust Law Section observes, this “suggests a shift away from protecting the competitive process towards protecting individual competitors.” Indeed, “material harm to competition” deviates from established categories such as “undue restraint of trade” or “substantial lessening of competition,” which have a clear focus on the competitive process. As a result, it is not unreasonable to expect that the new terminology might be interpreted as meaning that the actionable standard is material harm to competitors.

In its letter, the antitrust section urges Congress not only to define more clearly the novel terminology used in the bill, but also to do so in a manner consistent with existing antitrust law. Indeed:

The Section further recommends that these definitions direct attention to analysis consistent with antitrust principles: effects-based inquiries concerned with harm to the competitive process, not merely harm to particular competitors

Conclusion

The AICOA is a poorly written, misguided, and rushed piece of regulation that contravenes both basic antitrust-law principles and mainstream economic insights in the pursuit of a pre-established populist political goal: punishing the success of tech companies. If left uncorrected by Congress, these mistakes could have potentially far-reaching consequences for innovation in digital markets and for consumer welfare. They could also set antitrust law on a regressive course back toward a policy of picking winners and losers.

If you wander into an undergraduate economics class on the right day at the right time, you might catch the lecturer talking about Giffen goods: the rare case where demand curves can slope upward. The Irish potato famine is often used as an example. As the story goes, potatoes were a huge part of the Irish diet and consumed a large share of Irish family budgets. A failure of the potato crop reduced the supply of potatoes, and potato prices soared. Because potatoes absorbed so much of the family budget, the price increase left families unable to afford pricier foods, so they bought even more potatoes despite the higher price.

It’s a great story of injustice with a nugget of economics: Demand curves can slope upward!
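For those who want the textbook mechanics behind the punchline, the standard Slutsky decomposition (a stock identity from consumer theory, not anything specific to the potato story) spells out what has to happen for a demand curve to slope upward:

\[ \frac{\partial x}{\partial p} \;=\; \underbrace{\frac{\partial h}{\partial p}}_{\text{substitution effect}\,\le\,0} \;-\; \underbrace{x\,\frac{\partial x}{\partial m}}_{\text{income effect}} \]

A Giffen good must be so strongly inferior (\(\partial x/\partial m < 0\)) and claim so large a share of the budget (a big \(x\)) that the positive income term swamps the negative substitution term, leaving \(\partial x/\partial p > 0\).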

Follow the students around for a few days, and they’ll be looking for Giffen goods everywhere. Surely, packaged ramen and boxed macaroni and cheese are Giffen goods. So are white bread and rice. Maybe even low-end apartments.

While it’s a fun concept to consider, the potato famine story is likely apocryphal. In truth, it’s nearly impossible to find a Giffen good in the real world. My version of Greg Mankiw’s massive “Principles of Economics” textbook devotes five paragraphs to Giffen goods, but the concept has little practical relevance, which is perhaps why it gets only five paragraphs.

Wander into another economics class, and you might catch the lecturer talking about monopsony—that is, a market in which a small number of buyers control the price of inputs such as labor. I say “might” because—like Giffen goods—monopsony is an interesting concept to consider, but very hard to find a clear example of in the real world. Mankiw’s textbook devotes only four paragraphs to monopsony, explaining that the book “does not present a formal model of monopsony because, in the world, monopsonies are rare.”

Even so, monopsony is a hot topic these days. It seems that monopsonies are everywhere. Walmart and Amazon are monopsonist employers. So are poultry, pork, and beef companies. Local hospitals monopsonize the market for nurses and physicians. The National Collegiate Athletic Association is a monopsony employer of college athletes. Ultimate Fighting Championship has a monopsony over mixed-martial-arts fighters.

In 1994, David Card and Alan Krueger’s earthshaking study found that a minimum-wage increase had no measurable negative effect on fast-food employment, even as fast-food prices rose. They investigated monopsony power as one explanation but concluded that a monopsony model was not supported by their findings. They note:

[W]e find that prices of fast-food meals increased in New Jersey relative to Pennsylvania, suggesting that much of the burden of the minimum-wage rise was passed on to consumers. Within New Jersey, however, we find no evidence that prices increased more in stores that were most affected by the minimum-wage rise. Taken as a whole, these findings are difficult to explain with the standard competitive model or with models in which employers face supply constraints (e.g., monopsony or equilibrium search models). [Emphasis added]

Even so, the monopsony hunt was on and it intensified during President Barack Obama’s administration. During his term, the U.S. Justice Department (DOJ) brought suit against several major Silicon Valley employers for anticompetitively entering into agreements not to “poach” programmers and engineers from each other. The administration also brought suit against a hospital association for an agreement to set uniform billing rates for certain nurses. Both cases settled but the Silicon Valley allegations led to a private class-action lawsuit.

In 2016, Obama’s Council of Economic Advisers published an issue brief on labor-market monopsony. The brief concluded that “evidence suggest[s] that firms may have wage-setting power in a broad range of settings.”

Around the same time, the Obama administration announced that it intended to “criminally investigate naked no-poaching or wage-fixing agreements that are unrelated or unnecessary to a larger legitimate collaboration between the employers.” The DOJ argued that no-poach agreements that allocate employees between companies are per se unlawful restraints of trade that violate Section 1 of the Sherman Act.

If one believes that monopsony power is stifling workers’ wages and benefits, then this would be a good first step toward building a body of evidence and precedent. Go after the low-hanging fruit of a conspiracy that is a per se violation of the Sherman Act, secure some wins, and then start probing the more challenging cases.

After several matters that resulted in settlements, the DOJ brought its first criminal wage-fixing case in late 2020. In United States v. Jindal, the government charged two employees of a Texas health-care staffing company with colluding with another staffing company to decrease pay rates for physical therapists and physical-therapist assistants.

The defense in Jindal conceded that price-fixing was per se illegal under the Sherman Act but argued that prices and wages are two different concepts. Therefore, the defense claimed that, even if it was engaged in wage-fixing, the conduct would not be per se illegal. That was a stretch, and the district court judge was having none of it, ruling that: “The antitrust laws fully apply to the labor markets, and price-fixing agreements among buyers … are prohibited by the Sherman Act.”

Nevertheless, the jury in Jindal found the defendants not guilty of wage-fixing in violation of the Sherman Act, and also not guilty of a related conspiracy charge.

The DOJ also brought criminal no-poach cases against three other health-care companies and their employees: United States v. Surgical Care Affiliates LLC; United States v. Hee; and United States v. DaVita Inc. Each of the indictments alleged no-poach agreements in which defendants conspired with competitors not to recruit each other’s employees. Hee also included wage-fixing allegations.

Before trial, the defense in DaVita filed a motion to dismiss, arguing that no-poach agreements did not amount to illegal market-allocation agreements. Instead, the defense claimed that no-poach agreements were something less restrictive: rather than a flat-out refusal to hire competitors’ employees, they were more akin to agreeing not to seek out competitors’ employees. As with Jindal, this was too much of a stretch for the judge, who ruled that no-poach agreements could constitute illegal market-allocation agreements.

A day after the Jindal verdict, the jury in DaVita acquitted the kidney-dialysis provider and its former CEO of charges that they conspired with competitors to suppress competition for employees through no-poach agreements.

The DaVita jurors appeared to be hung up on the definition of “meaningful competition” in the relevant market. The defense presented information showing that, despite any agreements, employees frequently changed jobs among the companies. Thus, it was argued that any agreement did not amount to an allocation of the market for employees.

The prosecution called several corporate executives who testified that the non-solicitation agreements merely required DaVita employees to tell their bosses they were looking for another job before they could be considered for positions at the three alleged co-conspirator companies. Some witnesses indicated that, by informing their bosses, they were able to obtain promotions and/or increased compensation. This was supported by expert testimony concluding that DaVita salaries grew during the alleged conspiracy period at a higher rate than in the health-care industry as a whole. That finding is at odds with a theory that the non-solicitation agreement was designed to stabilize or suppress compensation.

The Jindal and DaVita cases highlight some of the enormous challenges in mounting a labor-monopsonization case. Even if agencies can “win” or get concessions on defining the relevant markets, they still face challenges in establishing that no-poach agreements amount to a “meaningful” restraint of trade. DaVita suggests that a showing of job turnover and/or increased compensation during an alleged conspiracy period may be sufficient to convince a jury that a no-poach agreement may not be anticompetitive and—under certain circumstances—may even be pro-competitive.

For now, the hunt for a monopsony labor market continues, along with the hunt for the ever-elusive Giffen good.

Biden administration enforcers at the U.S. Justice Department (DOJ) and the Federal Trade Commission (FTC) have prioritized labor-market monopsony issues for antitrust scrutiny (see, for example, here and here). This heightened interest comes in light of claims that labor markets are highly concentrated and are rife with largely neglected competitive problems that depress workers’ income. Such concerns are reflected in a March 2022 U.S. Treasury Department report on “The State of Labor Market Competition.”

Monopsony is the “flip side” of monopoly and U.S. antitrust law clearly condemns agreements designed to undermine the “buyer side” competitive process (see, for example, this U.S. government submission to the OECD). But is a special new emphasis on labor markets warranted, given that antitrust enforcers ideally should seek to allocate their scarce resources to the most pressing (highest valued) areas of competitive concern?

A May 2022 study by Information Technology and Innovation Foundation (ITIF) Associate Director (and former FTC economist) Julie Carlson indicates that the degree of emphasis the administration’s antitrust enforcers are placing on labor issues may be misplaced. In particular, the ITIF study debunks the Treasury report’s findings of high levels of labor-market concentration and the claim that workers face a “decrease in wages [due to labor market power] at roughly 20 percent relative to the level in a fully competitive market.” Furthermore, while noting the importance of DOJ antitrust prosecutions of hard-core anticompetitive agreements among employers (wage-fixing and no-poach agreements), the ITIF report emphasizes policy reforms unrelated to antitrust as key to improving workers’ lot.
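To put those percentages in perspective, the textbook monopsony model (my gloss here; neither the Treasury report nor the ITIF study necessarily frames its estimates this way) ties the wage markdown to the elasticity of labor supply facing the employer, \(\varepsilon\):

\[ \frac{MRP_L - w}{MRP_L} \;=\; \frac{1}{1+\varepsilon} \]

A 20% markdown below the competitive (marginal-revenue-product) wage implies that employers face a labor-supply elasticity of roughly 4, while a 50% markdown implies an elasticity of roughly 1. Much of the disagreement between the Treasury and ITIF analyses comes down to how small that elasticity really is, and whether concentration or frictions such as job-location preferences is what makes it small.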

Key takeaways from the ITIF report include:

  • Labor markets are not highly concentrated. Local labor-market concentration has been declining for decades, with the most concentrated markets seeing the largest declines.
  • Labor-market power is largely due to labor-market frictions, such as worker preferences, search costs, bargaining, and occupational licensing, rather than concentration.
  • As a case study, changes in concentration in the labor market for nurses have little to no effect on wages, whereas nurses’ preferences over job location are estimated to lead to wage markdowns of 50%.
  • Firms are not profiting at the expense of workers. The decline in the labor share of national income is primarily due to rising home values, not increased labor-market concentration.
  • Policy reform should focus on reducing labor-market frictions and strengthening workers’ ability to collectively bargain. Policies targeting concentration are misguided and will be ineffective at improving outcomes for workers.

The ITIF report also throws cold water on the notion of emphasizing labor-market issues in merger reviews, which was teed up in the January 2022 joint DOJ/FTC request for information (RFI) on merger enforcement. The ITIF report explains:

Introducing the evaluation of labor market effects unnecessarily complicates merger review and needlessly ties up agency resources at a time when the agencies are facing severe resource constraints. As discussed previously, labor markets are not highly concentrated, nor is labor market concentration a key factor driving down wages.

A proposed merger that is reportable to the agencies under the Hart-Scott-Rodino Act and likely to have an anticompetitive effect in a relevant labor market is also likely to have an anticompetitive effect in a relevant product market. … Evaluating mergers for labor market effects is unnecessary and costly for both firms and the agencies. The current merger guidelines adequately address competition concerns in input markets, so any contemplated revision to the guidelines should not incorporate a “framework to analyze mergers that may lessen competition in labor markets.” [Citation to Request for Information on Merger Enforcement omitted.]

In sum, the administration’s recent pronouncements about highly anticompetitive labor markets that have resulted in severely underpaid workers—used as the basis to justify heightened antitrust emphasis on labor issues—appear to be based on false premises. As such, they are a species of government misinformation, which, if acted upon, threatens to misallocate scarce enforcement resources and thereby undermine efficient government antitrust enforcement. What’s more, an unnecessary overemphasis on labor-market antitrust questions could impose unwarranted investigative costs on companies and chill potentially efficient business transactions. (Think of a proposed merger that would reduce production costs and benefit consumers but result in a workforce reduction by the merged firm.)

Perhaps the administration will take heed of the ITIF report and rethink its plans to ramp up labor-market antitrust-enforcement initiatives. Promoting pro-market regulatory reforms that benefit both labor and consumers (for instance, paring back excessive occupational-licensing restrictions) would be a welfare-superior and cheaper alternative to misbegotten antitrust actions.

[Wrapping up the first week of our FTC UMC Rulemaking symposium is a post from Truth on the Market’s own Justin (Gus) Hurwitz, director of law & economics programs at the International Center for Law & Economics and an assistant professor of law and co-director of the Space, Cyber, and Telecom Law program at the University of Nebraska College of Law. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

Introduction

In 2014, I published a pair of articles—”Administrative Antitrust” and “Chevron and the Limits of Administrative Antitrust”—that argued that the U.S. Supreme Court’s recent antitrust and administrative-law jurisprudence was pushing antitrust law out of the judicial domain and into the domain of regulatory agencies. The first article focused on the Court’s then-recent antitrust cases, arguing that the Court, which had long since moved away from federal common law, had shown a clear preference that common-law-like antitrust law be handled on a statutory or regulatory basis where possible. The second article evaluated and rejected the Federal Trade Commission’s (FTC) long-held belief that its interpretations of the FTC Act do not receive Chevron deference.

Together, these articles made the case (as a descriptive, not normative, matter) that we were moving towards a period of what I called “administrative antitrust.” From today’s perspective, it surely seems that I was right, with the FTC set to embrace Section 5’s broad ambiguities to redefine modern understandings of antitrust law. Indeed, those articles have been cited by both former FTC Commissioner Rohit Chopra and current FTC Chair Lina Khan in speeches and other materials that have led up to our current moment.

This essay revisits those articles, in light of the past decade of Supreme Court precedent. It comes as no surprise to anyone familiar with recent cases that the Court is increasingly viewing the broad deference characteristic of administrative law with what, charitably, can be called skepticism. While I stand by the analysis offered in my previous articles—and, indeed, believe that the Court maintains a preference for administratively defined antitrust law over judicially defined antitrust law—I find it less likely today that the Court would defer to any agency interpretation of antitrust law that represents more than an incremental move away from extant law.

I will approach this discussion in four parts. First, I will offer some reflections on the setting of my prior articles. The piece on Chevron and the FTC, in particular, argued that the FTC had misunderstood how Chevron would apply to its interpretations of the FTC Act because it was beholden to out-of-date understandings of administrative law. I will make the point below that the same thing can be said today. I will then briefly recap the essential elements of the arguments made in both of those prior articles, to the extent needed to evaluate how administrative approaches to antitrust will be viewed by the Court today. The third part of the discussion will then summarize some key elements of administrative law that have changed over roughly the past decade. And, finally, I will bring these elements together to look at the viability of administrative antitrust today, arguing that the FTC’s broad embrace of power anticipated by many is likely to meet an ill fate at the hands of the courts on both antitrust and administrative law grounds.

In reviewing these past articles in light of the past decade’s case law, this essay reaches an important conclusion: for the same reasons that the Court seemed likely in 2013 to embrace an administrative approach to antitrust, today it is likely to view such approaches with great skepticism unless they are undertaken on an incrementalist basis. Others are currently developing arguments that sound primarily in current administrative law: the major questions doctrine and the potential turn away from National Petroleum Refiners. My conclusion is based primarily on the Court’s view that administrative antitrust would prove less indeterminate than judicially defined antitrust law. If the FTC shows that not to be the case, the Court seems likely to close the door on administrative antitrust for reasons sounding in both administrative and antitrust law.

Setting the Stage, Circa 2013

It is useful to start by visiting the stage as it was set when I wrote “Administrative Antitrust” and “Limits of Administrative Antitrust” in 2013. I wrote these articles while doing a fellowship at the University of Pennsylvania Law School, prior to which I had spent several years working at the U.S. Justice Department Antitrust Division’s Telecommunications Section. This was a great time to be involved on the telecom side of antitrust, especially for someone with an interest in administrative law as well. Recent important antitrust cases included Pacific Bell v. linkLine and Verizon v. Trinko; recent important administrative-law cases included Brand-X, Fox v. FCC, and City of Arlington v. FCC. Telecommunications law was defining the center of both fields.

I started working on “Administrative Antitrust” first, prompted by what I admit today was an overreading of the Court’s 2011 American Electric Power Co. Inc. v. Connecticut opinion, in which the Court held, broadly, that a decision by Congress to regulate displaces judicial common law. In Trinko and Credit Suisse, the Court had held something similar: roughly, that regulation displaces antitrust law. Indeed, in linkLine, the Court had stated that regulation is preferable to antitrust, which is known for its vicissitudes and adherence to the extra-judicial development of economic theory. “Administrative Antitrust” tied these strands together, arguing that antitrust law, long discussed as one of the few remaining bastions of federal common law, would—and in the Court’s eyes, should—be displaced by regulation.

Antitrust and administrative law also came together, and remain together, in the debates over net neutrality. It was this nexus that gave rise to “Limits of Administrative Antitrust,” which I started in 2013 while working on “Administrative Antitrust” and waiting for the U.S. Court of Appeals for the D.C. Circuit’s opinion in Verizon v. FCC.

Some background on the net-neutrality debate is useful. In 2007, the Federal Communications Commission (FCC) attempted to put in place net-neutrality rules by adopting a policy statement on the subject. This approach was rejected by the D.C. Circuit in 2010, on grounds that a mere policy statement lacked the force of law. The FCC then adopted similar rules through a rulemaking process, finding authority to issue those rules in its interpretation of the ambiguous language of Section 706 of the Telecommunications Act. In January 2014, the D.C. Circuit again rejected the specific rules adopted by the FCC, on grounds that those rules violated the Communications Act’s prohibition on treating internet service providers (ISPs) as common carriers. But critically, the court affirmed the FCC’s interpretation of Section 706 as allowing it, in principle, to adopt rules regulating ISPs.

Unsurprisingly, whether the language of Section 706 was either ambiguous or subject to the FCC’s interpretation was a central debate within the regulatory community during 2012 and 2013. The consensus, at least among my peers, was strongly that it was neither: the FCC and industry had long read Section 706 as not giving the FCC authority to regulate ISP conduct and, to the extent that it did confer legislative authority, that authority was expressly deregulatory. I was the lone voice arguing that the D.C. Circuit was likely to find that Chevron applied to Section 706 and that the FCC’s reading was permissible on its own (that is, not taking into account such restrictions as the prohibition on treating non-common carriers as common carriers).

I actually had thought this conclusion quite obvious. The past decade of the Court’s Chevron case law followed a trend of increasing deference. Starting with Mead, then Brand-X, Fox v. FCC, and City of Arlington, the safe money was consistently placed on deference to the agency.

This was the setting in which I started thinking about what became “Chevron and the Limits of Administrative Antitrust.” If my argument in “Administrative Antitrust” was right—that the courts would push development of antitrust law from the courts to regulatory agencies—this would most clearly happen through the FTC’s Section 5 authority over unfair methods of competition (UMC). But there was longstanding debate about the limits of the FTC’s UMC authority. These debates included whether it was necessarily coterminous with the Sherman Act (so limited by the judicially defined federal common law of antitrust).

And there was discussion about whether the FTC would receive Chevron deference to its interpretations of its UMC authority. As with the question of the FCC receiving deference to its interpretation of Section 706, there was widespread understanding that the FTC would not receive Chevron deference to its interpretations of its Section 5 UMC authority. “Chevron and the Limits of Administrative Antitrust” explored that issue, ultimately concluding that the FTC likely would indeed be given the benefit of Chevron deference, tracing the commission’s belief to the contrary back to longstanding institutional memory of pre-Chevron judicial losses.

The Administrative Antitrust Argument

The discussion above is more than mere historical navel-gazing. The context and setting in which those prior articles were written is important to understanding both their arguments and the continual currents that propel us across antitrust’s sea of doubt. But we should also look at the specific arguments from each paper in some detail, as well.

Administrative Antitrust

The opening lines of this paper capture the curious judicial status of antitrust law:

Antitrust is a peculiar area of law, one that has long been treated as exceptional by the courts. Antitrust cases are uniquely long, complicated, and expensive; individual cases turn on case-specific facts, giving them limited precedential value; and what precedent there is changes on a sea of economic—rather than legal—theory. The principal antitrust statutes are minimalist and have left the courts to develop their meaning. As Professor Thomas Arthur has noted, “in ‘the anti-trust field the courts have been accorded, by common consent, an authority they have in no other branch of enacted law.’” …


This Article argues that the Supreme Court is moving away from this exceptionalist treatment of antitrust law and is working to bring antitrust within a normalized administrative law jurisprudence.

Much of this argument is based in the arguments framed above: Trinko and Credit Suisse prioritize regulation over the federal common law of antitrust, and American Electric Power emphasizes the general displacement of common law by regulation. The article adds, as well, the Court’s then-recent skepticism of domain-specific “exceptionalism.” Its opinion in Mayo had rejected the longstanding view that tax law was “exceptional” in some way that excluded it from the Administrative Procedure Act (APA) and other standard administrative-law doctrine. And thus, the argument went, the Court’s longstanding treatment of antitrust as exceptional must also fall.

Those arguments can all be characterized as pulling antitrust law toward an administrative approach. But there was a push as well. In his majority opinion in linkLine, Chief Justice John Roberts expressed substantial concern about the difficulties that antitrust law poses for courts and litigants alike, noting that “it is difficult enough for courts to identify and remedy an alleged anticompetitive practice” and lamenting, “[h]ow is a judge or jury to determine a ‘fair price?’” And Justice Stephen Breyer wrote in concurrence that “[w]hen a regulatory structure exists [as it does in this case] to deter and remedy anticompetitive harm, the costs of antitrust enforcement are likely to be greater than the benefits.”

In other words, the argument in “Administrative Antitrust” goes, the Court is motivated both to bring antitrust law into a normalized administrative-law framework and also to remove responsibility for the messiness inherent in antitrust law from the courts’ dockets. This latter point will be of particular importance as we turn to how the Court is likely to think about the FTC’s potential use of its UMC authority to develop new antitrust rules.

Chevron and the Limits of Administrative Antitrust

The core argument in “Limits of Administrative Antitrust” is more doctrinal and institutionally focused. In its simplest statement, I merely applied Chevron as it was understood circa 2013 to the FTC’s UMC authority. There is little dispute that “unfair methods of competition” is inherently ambiguous—indeed, the term was used, and the power granted to the FTC, expressly to give the agency flexibility and to avoid the limits the Court was placing on antitrust law in the early 20th century.

There are various arguments against application of Chevron to Section 5; the article goes through and rejects them all. Section 5 has long been recognized as including, but being broader than, the Sherman Act. National Petroleum Refiners has long held that the FTC has substantive-rulemaking authority—a conclusion made even more forceful by the Supreme Court’s more recent opinion in Iowa Utilities Board. Other arguments are (or were) unavailing.

The real puzzle the paper unpacks is why the FTC ever believed it wouldn’t receive the benefit of Chevron deference. The article traces it back to a series of cases the FTC lost in the 1980s, contemporaneous with the development of the Chevron doctrine. The commission had big losses in cases like E.I. Du Pont and Ethyl Corp. Perhaps most important, in its 1986 Indiana Federation of Dentists opinion (two years after Chevron was decided), the Court seemed to adopt a de novo standard for review of Section 5 cases. But, “Limits of Administrative Antitrust” argues, this is a misreading and overreading of Indiana Federation of Dentists (a close reading of which actually suggests that it is entirely in line with Chevron), and it misunderstands the case’s relationship with Chevron (the importance of which did not start to come into focus for another several years).

The curious conclusion of the argument is, in effect, that a generation of FTC lawyers, “shell-shocked by its treatment in the courts,” internalized the lesson that they would not receive the benefits of Chevron deference and that Section 5 was subject to de novo review, but also that this would start to change as a new generation of lawyers, trained in the modern Chevron era, came to practice within the halls of the FTC. Today, that prediction appears to have borne out.

Things Change

The conclusion from “Limits of Administrative Antitrust” that FTC lawyers failed to recognize that the agency would receive Chevron deference because they were half a generation behind the development of administrative-law doctrine is an important one. As much as antitrust law may be adrift in a sea of change, administrative law is even more so. From today’s perspective, it feels as though I wrote those articles at Chevron’s zenith—and watching the FTC consider aggressive use of its UMC authority feels like watching a commission that, once again, is half a generation behind the development of administrative law.

The tide against Chevron’s expansive deference was already beginning to rise at the time I was writing. City of Arlington, though affirming application of Chevron to agencies’ interpretations of their own jurisdictional statutes in a 6-3 opinion, generated substantial controversy at the time. And a short while later, the Court decided a case that many in the telecom space view as a sea change: Utility Air Regulatory Group (UARG). In UARG, Justice Antonin Scalia, writing for the majority, struck down an Environmental Protection Agency (EPA) regulation related to greenhouse gases. In doing so, he invoked language evocative of what today is being debated as the major questions doctrine—that the Court “expect[s] Congress to speak clearly if it wishes to assign to an agency decisions of vast economic and political significance.” Two years after that, the Court decided Encino Motorcars, in which it acted upon a limit expressed in Fox v. FCC that agencies face heightened procedural requirements when changing regulations that “may have engendered serious reliance interests.”

And just like that, the dams holding back concern over the scope of Chevron have burst. Justices Clarence Thomas and Neil Gorsuch have openly expressed their views that Chevron needs to be curtailed or eliminated. Justice Brett Kavanaugh has written extensively in favor of the major questions doctrine. Chief Justice Roberts invoked the major questions doctrine in King v. Burwell. Each term, litigants bring ever-more-aggressive cases to probe and tighten the limits of the Chevron doctrine. As I write this, we await the Court’s opinion in American Hospital Association v. Becerra—which, it is widely believed, could dramatically curtail the scope of the Chevron doctrine.

Administrative Antitrust, Redux

The prospects for administrative antitrust look very different today than they did a decade ago. While the basic argument continues to hold—the Court will likely encourage and welcome a transition of antitrust law to a normalized administrative jurisprudence—the Court seems likely to afford administrative agencies (viz., the FTC) much less flexibility in how they administer antitrust law than it would have a decade ago. This is true both along the administrative-law vector (with the Court reconsidering how it views delegations of congressional authority to agencies, through the major questions doctrine and limits on agency rulemaking authority) and in the Court’s thinking about how agencies develop and enforce antitrust law.

Major Questions and Major Rules

Two hotly debated areas where we see this trend: the major questions doctrine and the ongoing vitality of National Petroleum Refiners. These are only briefly recapitulated here. The major questions doctrine is an evolving doctrine, seemingly of great interest to many current justices on the Court, that requires Congress to speak clearly when delegating authority to agencies to address major questions—that is, questions of vast economic and political significance. So, while the Court may allow an agency to develop rules governing mergers when tasked by Congress to prohibit acquisitions likely to substantially lessen competition, it is unlikely to allow that agency to categorically prohibit mergers based upon a general congressional command to prevent unfair methods of competition. The first of those is a narrow rule based upon a specific grant of authority; the other is a very broad rule based upon a very general grant of authority.

The major questions doctrine has been a major topic of discussion in administrative-law circles for the past several years. Interest in the National Petroleum Refiners question has been more muted, mostly confined to those focused on the FTC and FCC. National Petroleum Refiners is a 1973 D.C. Circuit case that found that the FTC Act’s grant of power to make rules to implement the act confers broad rulemaking power relating to the act’s substantive provisions. In 1999, the Supreme Court reached a similar conclusion in Iowa Utilities Board, finding that a provision in Section 202 of the Communications Act allowing the FCC to create rules seemingly for the implementation of that section conferred substantive rulemaking power running throughout the Communications Act.

Both National Petroleum Refiners and Iowa Utilities Board reflect previous generations’ understanding of administrative law—and, in particular, of the relationship between the courts and Congress in empowering and policing agency conduct. That understanding is best captured in the evolution of the non-delegation doctrine and the courts’ acceptance of sweeping delegations of congressional power to agencies in the latter half of the 20th century. National Petroleum Refiners and Iowa Utilities Board are not non-delegation cases—but, similar to the major questions doctrine, they go to the same underlying issue of how specific Congress must be when delegating broad authority to an agency.

In theory, there is little difference between an agency that can develop legal norms through case-by-case adjudications backstopped by substantive and procedural judicial review, on the one hand, and an agency with authority to develop substantive rules backstopped by procedural judicial review and by Congress as a check on substantive errors, on the other. In practice, there is a world of difference between these approaches. As with the concerns animating the major questions doctrine, were the Court to review National Petroleum Refiners or Iowa Utilities Board today, it seems at least possible, if not likely, that most of the justices would not so readily find agencies to have such broad rulemaking authority without clear congressional intent supporting such a finding.

Both of these ideas—the major questions doctrine and limits on broad rules made using thin grants of rulemaking authority—present potential limits on the scope of rules the FTC might make using its UMC authority.

Limits on the Antitrust Side of Administrative Antitrust

The potential limits on FTC UMC rulemaking discussed above sound in administrative-law concerns. But administrative antitrust may also find a tepid judicial reception on antitrust grounds.

Many of the arguments advanced in “Administrative Antitrust” and the Court’s opinions on the antitrust-regulation interface echo traditional administrative-law ideas. For instance, much of the Court’s preference that agencies granted authority to engage in antitrust or antitrust-adjacent regulation take precedence over the application of judicially defined antitrust law tracks the same separation-of-powers and expertise concerns that are central to the Chevron doctrine itself.

But the antitrust-focused cases—linkLine, Trinko, Credit Suisse—also express concerns specific to antitrust law. Chief Justice Roberts notes that the justices “have repeatedly emphasized the importance of clear rules in antitrust law,” and the need for antitrust rules to “be clear enough for lawyers to explain them to clients.” And the Court and antitrust scholars have long noted the curiosity that antitrust law has evolved over time following developments in economic theory. This extra-judicial development of the law runs contrary to basic principles of due process and the stability of the law.

The Court’s cases in this area express hope that an administrative approach to antitrust could give a clarity and stability to the law that is currently lacking. These are rules of vast economic significance: they are “the Magna Carta of free enterprise”; our economy organizes itself around them; substantial changes to these rules could have a destabilizing effect that runs far deeper than Congress is likely to have anticipated when tasking an agency with enforcing antitrust law. Empowering agencies to develop these rules could, the Court’s opinions suggest, allow for a more thoughtful, expert, and deliberative approach to incorporating incremental developments in economic knowledge into the law.

If an agency’s administrative implementation of antitrust law does not follow this path—and especially if the agency takes a disruptive approach to antitrust law that deviates substantially from established antitrust norms—this defining rationale for an administrative approach to antitrust would not hold.

The courts could respond to such overreach in several ways. They could invoke the major questions or similar doctrines, as above. They could raise due-process concerns, tracking Fox v. FCC and Encino Motorcars, to argue that any change to antitrust law must not be unduly disruptive to engendered reliance interests. They could argue that the FTC’s UMC authority, while broader than the Sherman Act, must be compatible with the Sherman Act. That is, while the FTC has authority for the larger circle in the antitrust Venn diagram, the courts continue to define the inner core of conduct regulated by the Sherman Act.

A final aspect of the Court’s likely approach to administrative antitrust follows from the Roberts Court’s decision-theoretic approach to antitrust law. First articulated in Judge Frank Easterbrook’s “The Limits of Antitrust,” the decision-theoretic approach focuses on the error costs of incorrect judicial decisions and the likelihood that those decisions will be corrected. The Roberts Court has strongly adhered to this framework in its antitrust decisions. This can be seen, for instance, in Justice Breyer’s statement that: “When a regulatory structure exists to deter and remedy anticompetitive harm, the costs of antitrust enforcement are likely to be greater than the benefits.”

The error-costs framework described by Judge Easterbrook focuses on the relative costs of errors, and of correcting those errors, between judicial and market mechanisms. In the administrative-antitrust setting, the relevant comparison is between judicial and administrative error costs. The question on this front is whether an administrative agency, should it get things wrong, is likely to correct its error. Here there are two models, both of concern. The first is one in which law is policy or political preference. Here, the FCC’s approach to net neutrality and the National Labor Relations Board’s (NLRB) approach to labor law loom large; there have been dramatic swings between binary policy preferences held by different political parties as control of agencies shifts between administrations. The second model is one in which Congress responds to agency rules by refining, rejecting, or replacing them through statute. Here, again, net neutrality and the FCC loom large, with nearly two decades of calls for Congress to clarify the FCC’s authority and statutory mandate, while the agency swings between policies with changing administrations.
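One loose way to formalize the comparison (my gloss on the error-cost framework, not Easterbrook’s own notation): the expected error cost of any decision-making regime is roughly

\[ E[\text{error cost}] \;\approx\; p_{FP}\,C_{FP}\,(1 - r_{FP}) \;+\; p_{FN}\,C_{FN}\,(1 - r_{FN}), \]

where \(p\) is the probability of a false positive (condemning benign conduct) or a false negative (blessing harmful conduct), \(C\) is the social cost of that error, and \(r\) is the probability that the error is later corrected. The administrative-antitrust question is whether courts or agencies minimize this quantity; the two models sketched above suggest that, for agencies, the correction terms may do little work, or may simply swing the rules back and forth with each administration.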

Both of these models reflect poorly on the prospects for administrative antitrust and suggest a strong likelihood that the Court would reject any ambitious use of administrative authority to remake antitrust law. The stability of these rules is simply too important to leave to change with changing political wills. And, indeed, concern that Congress no longer does its job of providing agencies with clear direction—that Congress has abdicated its job of making important policy decisions and let them fall instead to agency heads—is one of the animating concerns behind the major questions doctrine.

Conclusion

Writing in 2013, it seemed clear that the Court was pushing antitrust law in an administrative direction, as well as that the FTC would likely receive broad Chevron deference in its interpretations of its UMC authority to shape and implement antitrust law. Roughly a decade later, the sands have shifted and continue to shift. Administrative law is in the midst of a retrenchment, with skepticism of broad deference and agency claims of authority.

Many of the underlying rationales behind the ideas of administrative antitrust remain sound. Indeed, I expect the FTC will play an increasingly large role in defining the contours of antitrust law and that the Court and lower courts will welcome this role. But that role will be limited. Administrative antitrust is a preferred vehicle for administering antitrust law, not for changing it. Should the FTC use its power aggressively, in ways that disrupt longstanding antitrust principles or seem more grounded in policy better made by Congress, it is likely to find itself on the losing side of judicial opinions.

A raft of progressive scholars in recent years have argued that antitrust law remains blind to the emergence of so-called “attention markets,” in which firms compete by converting user attention into advertising revenue. This blindness, the scholars argue, has caused antitrust enforcers to clear harmful mergers in these industries.

It certainly appears the argument is gaining increased attention, for lack of a better word, with sympathetic policymakers. In a recent call for comments regarding their joint merger guidelines, the U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) ask:

How should the guidelines analyze mergers involving competition for attention? How should relevant markets be defined? What types of harms should the guidelines consider?

Unfortunately, the recent scholarly inquiries into attention markets remain inadequate for policymaking purposes. For example, while many progressives focus specifically on antitrust authorities’ decisions to clear Facebook’s 2012 acquisition of Instagram and 2014 purchase of WhatsApp, they largely tend to ignore the competitive constraints Facebook now faces from TikTok (here and here).

When firms that compete for attention seek to merge, authorities need to infer whether the deal will lead to an “attention monopoly” (if the merging firms are the only, or primary, market competitors for some consumers’ attention) or whether other “attention goods” sufficiently constrain the merged entity. Put another way, the challenge is not just in determining which firms compete for attention, but in evaluating how strongly each constrains the others.

As this piece explains, recent attention-market scholarship fails to offer objective, let alone quantifiable, criteria that might enable authorities to identify firms that are unique competitors for user attention. These limitations should counsel policymakers to proceed with increased rigor when they analyze anticompetitive effects.

The Shaky Foundations of Attention Markets Theory

Advocates for more vigorous antitrust intervention have raised (at least) three normative arguments that pertain to attention markets and merger enforcement.

  • First, because they compete for attention, firms may be more competitively related than they seem at first sight. It is sometimes said that these firms are nascent competitors.
  • Second, the scholars argue that not all firms competing for attention should automatically be included in the same relevant market.
  • Finally, scholars argue that enforcers should adopt policy tools to measure market power in these attention markets—e.g., by applying a SSNIC test (“small but significant non-transitory increase in cost”), rather than a SSNIP test (“small but significant non-transitory increase in price”).

There are some contradictions among these three claims. On the one hand, proponents advocate adopting a broad notion of competition for attention, which would ensure that firms are seen as competitively related and thus boost the prospects that antitrust interventions targeting them will be successful. When the shoe is on the other foot, however, proponents fail to follow the logic they have sketched out to its natural conclusion; that is to say, they underplay the competitive constraints that are necessarily imposed by wider-ranging targets for consumer attention. In other words, progressive scholars are keen to ensure the concept is not mobilized to draw broader market definitions than is currently the case:

This “massive market” narrative rests on an obvious fallacy. Proponents argue that the relevant market “includes all substitutable sources of attention depletion,” so the market is “enormous.”

Faced with this apparent contradiction, scholars retort that the circle can be squared by deploying new analytical tools that measure competition for attention, such as the so-called SSNIC test. But do these tools actually resolve the contradiction? It would appear, instead, that they merely enable enforcers to selectively mobilize the attention-market concept in ways that fit their preferences. Consider the following description of the SSNIC test by John Newman:

But if the focus is on the zero-price barter exchange, the SSNIP test requires modification. In such cases, the “SSNIC” (Small but Significant and Non-transitory Increase in Cost) test can replace the SSNIP. Instead of asking whether a hypothetical monopolist would increase prices, the analyst should ask whether the monopolist would likely increase attention costs. The relevant cost increases can take the form of more time or space being devoted to advertisements, or the imposition of more distracting advertisements. Alternatively, one might ask whether the hypothetical monopolist would likely impose an “SSNDQ” (Small but Significant and Non-Transitory Decrease in Quality). The latter framing should generally be avoided, however, for reasons discussed below in the context of anticompetitive effects. Regardless of framing, however, the core question is what would happen if the ratio between desired content to advertising load were to shift.

Tim Wu makes roughly the same argument:

The A-SSNIP would posit a hypothetical monopolist who adds a 5-second advertisement before the mobile map, and leaves it there for a year. If consumers accepted the delay, instead of switching to streaming video or other attentional options, then the market is correctly defined and calculation of market shares would be in order.

The key problem is this: consumer switching among platforms is consistent both with competition and with monopoly power. In fact, consumers are more likely to switch to other goods when they are faced with a monopoly. Perhaps more importantly, consumers can and do switch to a whole range of idiosyncratic goods. Absent some quantifiable metric, it is simply impossible to tell which of these alternatives are significant competitors.

None of this is new, of course. Antitrust scholars have spent decades wrestling with similar issues in connection with the price-related SSNIP test. The upshot of those debates is that the SSNIP test does not measure whether price increases cause users to switch. Instead, it examines whether firms can profitably raise prices above the competitive baseline. Properly understood, this nuance renders proposed SSNIC and SSNDQ tests (“small but significant non-transitory decrease in quality”) unworkable.
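
To see what the price-based test actually asks, it may help to look at one standard way the hypothetical-monopolist question is operationalized: critical-loss analysis. The sketch below is purely illustrative; the function names and all numbers are my own assumptions, not figures from any real case or market.

```python
# Minimal sketch of the hypothetical-monopolist (SSNIP) logic via standard
# critical-loss analysis. All values are illustrative assumptions.

def critical_loss(price_increase: float, margin: float) -> float:
    """Share of unit sales a hypothetical monopolist can afford to lose before
    a price increase becomes unprofitable: X / (X + M), with the price
    increase X and price-cost margin M expressed as fractions."""
    return price_increase / (price_increase + margin)

def ssnip_market_is_well_defined(price_increase: float,
                                 margin: float,
                                 predicted_loss: float) -> bool:
    """The candidate market 'passes' the SSNIP test when sales lost to
    products outside the candidate market fall short of the critical loss,
    i.e., the price increase would be profitable for the monopolist."""
    return predicted_loss < critical_loss(price_increase, margin)

# Illustrative example: a 5% increase, a 40% price-cost margin, and an
# assumed 8% loss of sales to outside substitutes.
print(critical_loss(0.05, 0.40))                       # ~0.111
print(ssnip_market_is_well_defined(0.05, 0.40, 0.08))  # True -> market no broader
```

The point of the sketch is that the test turns on profitability relative to a given margin and baseline, not on whether some users switch; and it is precisely those inputs that, as discussed below, are ill-defined on the zero-price side of a platform.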

First and foremost, proponents wrongly presume to know how firms would choose to exercise their market power, rendering the resulting tests unfit for policymaking purposes. This mistake largely stems from conflating price levels and price structures in two-sided markets. In a two-sided market, the price level refers to the cumulative price charged to both sides of a platform, while the price structure refers to how that price is allocated between the two sides (i.e., how much users on each side contribute to the costs of the platform). This is important because, as Jean-Charles Rochet and Nobel laureate Jean Tirole show in their seminal work on two-sided markets, changes to the price level and changes to the price structure both affect economic output.
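
To make the distinction concrete, here is a minimal formal sketch; the notation is my own illustrative shorthand, not drawn verbatim from Rochet and Tirole:

```latex
% p_U: price charged to users (possibly zero or negative);
% p_A: price charged to advertisers.
\[
  \underbrace{P \;=\; p_U + p_A}_{\text{price level}}
  \qquad\qquad
  \underbrace{\sigma \;=\; p_U / P}_{\text{price structure}}
\]
% Platform output depends on how the total price is split across the two
% sides, not merely on the total itself:
\[
  Q \;=\; Q(p_U,\, p_A), \quad \text{which generally cannot be written as a function of } P \text{ alone.}
\]
```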

This has powerful ramifications for antitrust policy in attention markets. To be analytically useful, SSNIC and SSNDQ tests would have to alter the price level while holding the price structure constant. This is the opposite of what attention-market theory advocates are calling for. Indeed, increasing ad loads or decreasing the quality of services provided by a platform, while holding ad prices constant, evidently alters platforms’ chosen price structure.

This matters. Even if the proposed tests were properly implemented (which would be difficult: it is unclear what a 5% quality degradation would look like), the tests would likely lead to false negatives, as they force firms to depart from their chosen (and, thus, presumably profit-maximizing) price structure/price level combinations.

Consider the following illustration: to a first approximation, increasing the quantity of ads served on YouTube would presumably decrease Google’s revenues, as doing so would simultaneously increase output in the ad market and thus depress ad prices (note that the test becomes even more absurd if ad revenues are held constant). In short, scholars fail to recognize that the consumer side of these markets is intrinsically related to the ad side. Each side affects the other in ways that prevent policymakers from using single-sided ad-load increases or quality decreases as an independent variable.

This leads to a second, more fundamental, flaw. To be analytically useful, these increased ad loads and quality deteriorations would have to be applied from the competitive baseline. Unfortunately, it is not obvious what this baseline looks like in two-sided markets.

Economic theory tells us that, in regular markets, goods are sold at marginal cost under perfect competition. However, there is no such shortcut in two-sided markets. As David Evans and Richard Schmalensee aptly summarize:

An increase in marginal cost on one side does not necessarily result in an increase in price on that side relative to price on the other. More generally, the relationship between price and cost is complex, and the simple formulas that have been derived for single-sided markets do not apply.

In other words, while economic theory suggests perfect competition among multi-sided platforms should result in zero economic profits, it does not say what the allocation of prices will look like in this scenario. There is thus no clearly defined competitive baseline upon which to apply increased ad loads or quality degradations. And this makes the SSNIC and SSNDQ tests unsuitable.
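
The point can be stated compactly. The sketch below uses my own illustrative shorthand (it is not a quotation from Evans and Schmalensee):

```latex
% Q_U, Q_A: quantities on the user and advertiser sides; C(.,.): platform costs.
% Zero-profit condition for a competitive two-sided platform:
\[
  p_U Q_U \;+\; p_A Q_A \;-\; C(Q_U, Q_A) \;=\; 0
\]
% The condition pins down total profit (zero), but it is satisfied by many
% different price pairs (p_U, p_A), including pairs with p_U at or below zero
% on the user side. It therefore does not single out a unique "competitive"
% ad load or quality level from which a 5% deterioration could be measured.
```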

In short, the theoretical foundations necessary to apply the equivalent of a SSNIP test on the “free” side of two-sided platforms are largely absent (or exceedingly hard to apply in practice). Calls to implement SSNIC and SSNDQ tests thus greatly overestimate the current state of the art, as well as decision-makers’ ability to solve intractable economic conundrums. The upshot is that, while proposals to apply the SSNIP test to attention markets may have the trappings of economic rigor, the resemblance is superficial. As things stand, these tests fail to ascertain whether given firms are in competition, and in what market.

The Bait and Switch: Qualitative Indicia

These problems with the new quantitative metrics likely explain why proponents of tougher enforcement in attention markets often fall back upon qualitative indicia to resolve market-definition issues. As John Newman writes:

Courts, including the U.S. Supreme Court, have long employed practical indicia as a flexible, workable means of defining relevant markets. This approach considers real-world factors: products’ functional characteristics, the presence or absence of substantial price differences between products, whether companies strategically consider and respond to each other’s competitive conduct, and evidence that industry participants or analysts themselves identify a grouping of activity as a discrete sphere of competition. … The SSNIC test may sometimes be massaged enough to work in attention markets, but practical indicia will often—perhaps usually—be the preferable method.

Unfortunately, far from resolving the problems associated with measuring market power in digital markets (and of defining relevant markets in antitrust proceedings), this proposed solution would merely focus investigations on subjective and discretionary factors.

This can be easily understood by looking at the FTC’s Facebook complaint regarding its purchases of WhatsApp and Instagram. The complaint argues that Facebook—a “social networking service,” in the eyes of the FTC—was not interchangeable with either mobile-messaging services or online-video services. To support this conclusion, it cites a series of superficial differences. For instance, the FTC argues that online-video services “are not used primarily to communicate with friends, family, and other personal connections,” while mobile-messaging services “do not feature a shared social space in which users can interact, and do not rely upon a social graph that supports users in making connections and sharing experiences with friends and family.”

This is a poor way to delineate relevant markets. It wrongly portrays competitive constraints as a binary question, rather than a matter of degree. Pointing to the functional differences that exist among rival services mostly fails to resolve this question of degree. It also likely explains why advocates of tougher enforcement have often decried the use of qualitative indicia when the shoe is on the other foot—e.g., when authorities concluded that Facebook did not, in fact, compete with Instagram because their services were functionally different.

A second, and related, problem with the use of qualitative indicia is that they are, almost by definition, arbitrary. Take two services that may or may not be competitors, such as Instagram and TikTok. The two share some similarities, as well as many differences. For instance, while both services enable users to share and engage with video content, they differ significantly in the way this content is displayed. Unfortunately, absent quantitative evidence, it is simply impossible to tell whether, and to what extent, the similarities outweigh the differences. 

There is thus a significant risk that qualitative indicia will lead to arbitrary enforcement, in which markets are artificially narrowed by pointing to superficial differences among firms, and competitive constraints are overstated by pointing to consumer switching.

The Way Forward

The difficulties discussed above should serve as a good reminder that market definition is but a means to an end.

As William Landes, Richard Posner, and Louis Kaplow have all observed (here and here), market definition is merely a proxy for market power, which in turn enables policymakers to infer whether consumer harm (the underlying question to be answered) is likely in a given case.

Given the difficulties inherent in properly defining markets, policymakers should redouble their efforts to precisely measure both potential barriers to entry (the obstacles that may lead to market power) and anticompetitive effects (the potentially undesirable effects of market power), under a case-by-case analysis that looks at both sides of a platform.

Unfortunately, this is not how the FTC has proceeded in recent cases. The FTC’s Facebook complaint, to cite but one example, merely assumes the existence of network effects (a potential barrier to entry) with no effort to quantify their magnitude. Likewise, the agency’s assessment of consumer harm is just two pages long and includes superficial conclusions that appear plucked from thin air:

The benefits to users of additional competition include some or all of the following: additional innovation … ; quality improvements … ; and/or consumer choice … . In addition, by monopolizing the U.S. market for personal social networking, Facebook also harmed, and continues to harm, competition for the sale of advertising in the United States.

Not one of these assertions is based on anything that could remotely be construed as empirical or even anecdotal evidence. Instead, the FTC’s claims are presented as self-evident. Given the difficulties surrounding market definition in digital markets, this superficial analysis of anticompetitive harm is simply untenable.

In short, discussions around attention markets emphasize the important role of case-by-case analysis underpinned by the consumer welfare standard. Indeed, the fact that some of antitrust enforcement’s usual benchmarks are unreliable in digital markets reinforces the conclusion that an empirically grounded analysis of barriers to entry and actual anticompetitive effects must remain the cornerstones of sound antitrust policy. Or, put differently, uncertainty surrounding certain aspects of a case is no excuse for arbitrary speculation. Instead, authorities must meet such uncertainty with an even more vigilant commitment to thoroughness.

Federal Trade Commission (FTC) competition rulemakings, like spring, are in the air. But do they make policy or legal sense?

In two commentaries last summer (see here and here), I argued that FTC competition rulemaking initiatives would not pass cost-benefit muster, on both legal grounds and economic policy grounds.

As a legal matter, I stressed that they would be time-consuming and would pose serious litigation risks, with a significant probability that costs would be incurred in proposing rules that ultimately would not be upheld.

As an economic policy matter, I explained that the inherent inflexibility of rule-based norms is ill-suited to dynamic, evolving market conditions, compared with matter-specific antitrust litigation that flexibly applies the latest economic thinking to particular circumstances. Furthermore, new competition rules would exacerbate the costly policy inconsistencies that stem from the existence of dual federal antitrust enforcement agencies, the FTC and the U.S. Justice Department (DOJ).

My pearls of wisdom, however, failed to move the agency. In December 2021, the FTC issued a Statement of Regulatory Priorities (SRP) stressing that it would, in the coming year, “consider developing both unfair-methods-of-competition [UMC] rulemakings as well as rulemakings to define with specificity unfair or deceptive acts or practices [UDAP].”

I have addressed in greater detail the legal case against proceeding with UMC rulemakings in an article that will be included as a chapter in a special Concurrences book dealing with FTC rulemaking, scheduled for release around the end of June. The chapter abstract follows:

Under the Biden Administration, the U.S. Federal Trade Commission (FTC) appears poised to launch an unprecedented effort to transform American antitrust policy through the promulgation of rules, rather than reliance on case-by-case adjudication, as in the past. The FTC has a long history of rulemaking, centered primarily on consumer protection. The legal basis for FTC competition rulemaking, however, is enormously weak and fraught with uncertainty, in at least five respects.

First, a constitutional principle known as the “non-delegation doctrine” suggests that the FTC may not, as a constitutional matter, possess the specific statutory delegation required to issue rules that address particular competitive practices. Second, principles of statutory construction strongly suggest that the FTC’s general statutory provision dealing with rulemaking refers to procedural rules of organization, not to substantive rules bearing on competition. Third, even assuming that proposed competition rules survived these initial hurdles, principles of administrative law would pose a substantial risk that competition rules would be struck down as “arbitrary and capricious.” Fourth, there is a high probability that courts would not defer to an FTC statutory construction that authorized “unfair methods of competition” rules. Fifth, any attempt by the FTC to rely on its more specific consumer protection rulemaking powers to reach anticompetitive practices would be cabined by the limited statutory scope of those powers (and the possible perception that the FTC’s procedural protections are weak), and quite probably would fail. In sum, the cumulative weight of these legal risks indicates that the probability FTC competition rulemaking would succeed is extremely low. As such, the FTC may wish to undertake a sober assessment of the legal landscape before embarking on a competition rulemaking adventure that almost certainly would be destined for failure. The Commission could better promote consumer welfare by applying its limited resources to antitrust enforcement rather than competition rulemaking.