Archives For Antitrust & Competition

[The following is a guest post from Andrew Mercado, a research assistant at the Mercatus Center at George Mason University and an adjunct professor and research assistant at George Mason’s Antonin Scalia Law School.]

Price-parity clauses have, until recently, received little attention in the academic vertical-price-restraints literature. Their growing importance, however, cannot be ignored, and common misconceptions around their use and implementation need to be addressed. While similar in nature to both resale price maintenance and most-favored-nation clauses, the special vertical relationship between sellers and platforms inherent in price-parity clauses leads to distinct economic outcomes. Additionally, with a growing number of lawsuits targeting their use in online platform economies, it is critical to fully understand the economic incentives and outcomes stemming from price-parity clauses.

Vertical price restraints—of which resale price maintenance (RPM) and most-favored-nation (MFN) clauses are among many—are both common in business and widely discussed in the academic literature. While there remains a healthy debate among academics as to the true competitive effects of these contractual arrangements, the state of U.S. jurisprudence is clear. Since the Supreme Court’s State Oil and Leegin decisions, the use of RPM is no longer presumed anticompetitive. Its procompetitive and anticompetitive effects must instead be assessed under a “rule of reason” framework to determine their legality under antitrust law. The competitive effects of MFNs are also generally analyzed under the rule of reason.

Distinct from these two types of clauses, however, are price-parity clauses (PPCs). A PPC is an agreement between a platform and an independent seller under which the seller agrees to offer its goods on the platform at its lowest advertised price. While PPCs are sometimes termed “platform MFNs,” their economic effects on modern online-commerce platforms are distinct.

This commentary seeks to fill a hole in the PPC literature, which has focused on producers that sell exclusively nonfungible products across various platforms. That literature generally finds that a PPC reduces price competition between platforms, though the finding is not universal. Notably absent from the discussion is any account of multiple sellers offering the same good on the same platform. Correcting this oversight leads to the conclusion that PPCs generally are both efficient and procompetitive.

Introduction

In a pair of lawsuits filed in California and the District of Columbia, Amazon has come under fire for its pricing restrictions. The suits allege that Amazon’s restrictive PPCs harm consumers, arguing that sellers are penalized when the price for their good on Amazon is higher than on alternative platforms. They go on to claim that these provisions harm sellers, prevent platform competition, and ultimately force consumers to pay higher prices. The true competitive result of these provisions, however, is unclear.

The literature that does exist on the effects of these provisions on competitive outcomes in online marketplaces falls fundamentally short. Jonathan Baker and Fiona Scott Morton (among others) fail to differentiate between PPCs and MFN clauses. The distinction is important because, while the impacts on consumers may be similar, the mechanisms by which those impacts occur are not. An MFN provision stipulates that a supplier—when working with several distributors—must offer its goods to one particular distributor on terms better than or equal to those offered to all other distributors.

PPCs, on the other hand, are agreements between sellers and platforms to ensure that the platform’s buyers have access to goods on terms better than or equal to those offered to the same buyers on other platforms. A seller that is bound by a PPC and intends to sell on multiple platforms must price uniformly across all of them to satisfy the clause. PPCs are thus contracts between sellers and platforms that define conduct between sellers and buyers; they do not determine conduct between sellers and the platform.

A common characteristic of MFN and PPC arrangements is that consumers are often unaware of the existence of either clause. What is not common, however, are the outcomes that stem from their use. An MFN clause only dictates the terms under which a good is sold to a distributor and does not constrain the interaction between distributors and consumers. While the lower prices realized by a distributor may be passed on to consumers, this is not universally true. A PPC, on the other hand, constrains the interactions between sellers and consumers: the seller’s price on any given platform must, by definition, be as low as its price on all other platforms. This leads to the lowest prices for a given good in a market.

Intra-Platform Competition

The fundamental oversight in the literature is the absence of any discussion of intra-platform competition in markets for fungible goods, in which multiple sellers sell the same good on multiple platforms. Up to this point, nearly all discussion of PPCs has centered on the Booking.com case in the European Union.

In that case, Booking.com instituted price-parity clauses with the hotels selling rooms on its platform, mandating that they offer rooms on Booking.com at a price equal to or less than their price on all other platforms. The restriction extended to each hotel’s own first-party website as well.

In this case, it was alleged that consumers were worse off because the PPC unambiguously increased prices for hotel rooms: even if a hotel was willing to offer a lower price on its own website, the PPC prevented it from doing so. That lower price would be possible because of the low (possibly zero) commission a hotel pays to sell on its own website: the room could be discounted by as much as the commission that Booking.com took as a percentage of each sale. Further, if a competing platform charged a lower commission than Booking.com, the discount could be the difference in commission rates.
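To put hypothetical numbers on this: if Booking.com takes a 15% commission, a hotel listing a room at €100 on the platform keeps €85. Absent the PPC, it could list the same room at €85 on its own zero-commission website, or at roughly €94 on a rival platform charging a 10% commission, and net the same €85 in each case. The PPC forecloses both discounts.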

While one other case, E-book MFN, is tangentially relevant, Booking.com is the only case in which independent third-party sellers list a good or service for sale on a platform that imposes a PPC. And while there is some evidence of harm in the market for online hotel booking, hotel-room bookings are not analogous to platform-based sales of fungible goods. Sellers of hotel rooms are unable to compete to sell the same room; they can sell similarly situated, easily substitutable rooms, but the rooms remain nonfungible.

In online commerce, however, sellers regularly sell fungible goods. From lip balm and batteries to jeans and air filters, a seller of goods on an e-commerce site is among many similarly situated sellers selling nearly (or perfectly) identical products. These sellers not only have to compete with goods that are close substitutes to the good they are selling, but also with other sellers that offer an identical product.

Therefore, the conclusions reached by critics of Booking.com’s PPC do not hold once the nonfungibility assumption is removed. While there is some evidence that PPCs may reduce competition among platforms on the margin, there is no evidence that competition among sellers on a given platform is reduced. In fact, a PPC may increase competition by forcing all sellers on a platform to play by the same pricing rules.

Below, we delve into the competitive environment under a strict PPC—whereby sellers are banned from the platform if found to be in violation of the clause—and introduce the novel (and more realistic) implicit PPC, whereby sellers have an incentive to comply with the PPC but are not punished for deviation. First, however, we must understand the incentives of a seller not bound by a PPC.

Competition by sellers not bound by price-parity clauses

An individual seller in this market can sell identical products at different prices across different platforms, since platforms may charge different commissions per sale. To sell the highest number of units possible, sellers have an incentive to steer customers to the platforms that charge the lowest commissions and thereby leave the seller the most revenue per sale.
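To illustrate with hypothetical numbers: a seller that wants to net $17 per unit can price at roughly $17.90 on a platform charging a 5% commission, but must price at $21.25 on one charging 20%. Absent a parity obligation, the lower sticker price on the low-commission platform is itself the steering mechanism.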

Because platforms understand this incentive to steer consumers toward low-commission venues, they may not allocate resources toward additional perks, such as free shipping. Platforms may instead compete vigorously to reduce costs in order to offer the lowest commissions possible. In the long run, this race to the bottom might leave the market with one dominant, ultra-efficient, naturally monopolistic platform that offers the lowest possible commission.

While this sounds excellent for consumers, since they get the lowest possible prices on all goods, this simple scenario ignores non-price factors. Free shipping, handling, and physical processing; payment processing; and the time spent waiting for the good to arrive all enter consumers’ calculations. In exchange for a higher commission, often levied on the seller side, platforms may offer perks that increase consumer welfare by more than any price increase associated with the higher commission.

In this scenario, because resources are under-allocated to platform efficiency, a unified logistics market may not emerge, in which buyers can search for and purchase a good, sellers can sell it, and the platform facilitates shipping, processing, and handling. With these functions fragmented by the inefficient allocation of capital, consumer welfare is not maximized: while the raw price of a good is minimized, the total price of the transaction is not.

Competition by sellers bound by strict price-parity clauses

In this scenario, each platform has some version of a PPC, and under a strict PPC, a seller is barred from selling on a platform if it is found to have broken parity. Sellers choose the platforms on which they want to sell based on which may generate the greatest return, and they then set a single price for all platforms. The seller will make higher returns on platforms with lower commissions and lower returns on platforms with higher commissions. Fundamentally, to sell on a platform, the seller must at least cover its marginal cost.

Because they risk being banned for breaking parity, sellers may have an incentive to price so low that they turn no profit on some high-commission platforms, compensating for those losses with profits earned on platforms with lower commissions. Alternatively, sellers may forgo sales on a given platform altogether if the marginal cost of selling there under parity is too great.

For a seller to continue to sell on a platform, or to decide to sell on an additional platform, the marginal revenue associated with selling on that platform must outweigh the marginal cost. In effect, even if the commission is so high that the seller merely breaks even, it is still in the seller’s best interest to continue on the platform; only if the seller is losing money by selling on the platform is it economically rational to exit.
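To make this participation logic concrete, here is a minimal sketch in Python. All of the numbers (the parity price, the marginal cost, and the commission rates) are invented for illustration and are not drawn from any real platform:

```python
# Hypothetical sketch: a seller bound by a strict PPC sets one parity price
# across all platforms and stays on each platform only if the per-unit
# revenue there (net of commission) at least covers marginal cost.

PARITY_PRICE = 20.00   # the single price the PPC forces across platforms
MARGINAL_COST = 12.00  # the seller's per-unit cost

# Invented platforms and their commission rates (fractions of the sale price)
platforms = {"A": 0.08, "B": 0.15, "C": 0.45}

total_margin = 0.0
for name, commission in platforms.items():
    net_revenue = PARITY_PRICE * (1 - commission)  # seller's take per unit
    margin = net_revenue - MARGINAL_COST
    # Break even or better: staying on the platform is rational.
    decision = "stay" if margin >= 0 else "exit (or cross-subsidize)"
    total_margin += margin
    print(f"Platform {name}: nets {net_revenue:.2f}/unit, "
          f"margin {margin:+.2f} -> {decision}")

# A nets 18.40 (+6.40), B nets 17.00 (+5.00), C nets 11.00 (-1.00).
# C is unprofitable on its own, but if unit sales were equal across
# platforms, the combined margin (+10.40) would still make selling
# everywhere rational, as described above.
```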

Within the boundaries of a platform, sellers bound by a PPC have a strong incentive to compete vigorously on price. They also have an incentive to compete across platforms, to generate the highest possible revenue and offset any losses from high-commission platforms.

Platforms have an incentive to vigorously compete to attract buyers and sellers by offering various incentives and additional services to increase the quality of a sale. Examples of such “add-ons” include fulfilment and processing undertaken by the platform, expedited shipping and insured shipping, and authentication services and warranties.

Platforms also have an incentive to find the correct level of commission based on the add-on services that they provide. A platform that wants to offer the lowest possible prices might provide no or few add-ons and charge a low commission. Alternatively, the platform that wants to provide the highest possible quality may charge a high commission in exchange for many add-ons.

As the value that platforms can offer buyers and sellers increases, and as sellers lower their prices to maintain or increase sales, the quality bestowed upon consumers is likely to rise. Competition within the platform, however, may decline. Highly efficient sellers (those with the lowest marginal cost) may use strict PPCs—under which sellers are removed from the platform for breaking parity—to price less-efficient sellers out of the market. Additionally, efficient platforms may be able to price less-efficient platforms out of the market by offering better add-ons, starving the platforms of buyers and sellers in the long run.

Even if prices are marginally higher and competition marginally lower than in a world without price parity, the net benefit to consumers is likely greater. This is because the add-on services that platforms use to entice buyers and sellers to transact, over time, cost less to provide than the benefit they bestow. Even if not every consumer realizes the full value of these benefits, the likely result is a level of consumer welfare that is greater under price parity than in its absence.

Implicit price parity: The case of Amazon

Amazon’s price-parity policy conditions access to certain seller perks on adherence to parity, guiding sellers toward a unified pricing scheme. The term best suited for this type of policy is an “implicit price-parity clause” (IPPC). Under this system, the incentive structure rewards sellers for pricing competitively on Amazon, without punishing alternative pricing strategies. For example, if a seller sets its price higher on Amazon, because Amazon charges higher commissions than other platforms, that seller will not be eligible for Amazon’s Buy Box. But it can still sell, market, and promote its product on the platform. It still shows up in the “other sellers” dropdown section of the product page, and consumers can choose that seller with little more than a scroll and an additional click.

While the remainder of this analysis focuses on Amazon’s specific policies, IPPCs are found on other platforms as well. Walmart’s marketplace contains a similar parity policy, along with a similarly functioning “buy” box. eBay, too, offers a “best price guarantee,” under which the site offers to match the price of a qualified competitor, plus 10%, within 48 hours. While that policy is not identical in nature, it is in result: identical prices for identical goods across multiple platforms.

Amazon’s policy may sound as if it is picking winners and losers on its platform, a system that might appear ripe for corruption and unjustified self-preferencing. But there are several reasons to believe this is not the case. Amazon has built a reputation of low prices, quick delivery, and a high level of customer service. This reputation provides the company an incentive to ensure a consistently high level of quality over time. As Amazon increases the number of products and services offered on its platform, it also needs to devise ways to ensure that its promise of low prices and outstanding service is maintained.

This is where the Buy Box comes into play. All sellers on the platform can sell without utilizing the Buy Box. These transactions occur either on the seller’s own storefront or through the “other sellers” portion of a given good’s purchase page. Amazon’s PPC does not affect the way these sales occur, and the seller remains free in such transactions to sell at whatever price it desires. This includes severely under- or overpricing the competition, as well as breaking price parity. Amazon’s policies do not directly determine prices.

The benefit of the Buy Box—and the reason that an IPPC can be so effective for buyers, sellers, and the platform—is that it both increases competition and decreases search costs. For sellers, there is a strong incentive to compete vigorously on price, since that should give them the best opportunity to sell through the Buy Box. Because the Buy Box is algorithmically driven—factoring in price parity, as well as a few other quality-centered metrics (reviews, shipping cost and speed, etc.)—the featured Buy Box seller can change multiple times per day.

Relative prices between sellers are not the only important factor in winning the Buy Box; absolute prices also play a role. For some products—where there are a limited number of sellers, and none is observing parity or all are pricing far above sellers on other platforms—the Buy Box is not displayed at all. This forces consumers to make a deliberate choice to buy from a specific seller, as opposed to buying from a preselected one. In effect, the Buy Box’s omission removes Amazon’s endorsement of the seller’s practices, while still allowing the seller to offer goods on the platform.
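To make the mechanism concrete, here is a purely hypothetical sketch of how a featured-offer selection with a parity condition might work. Amazon’s actual Buy Box algorithm is proprietary; every field, weight, and threshold below is invented for illustration:

```python
# Hypothetical featured-offer selection with a parity eligibility filter.
# The factor names and weights are invented; they stand in for the
# "price parity plus quality metrics" logic described above.

from dataclasses import dataclass

@dataclass
class Offer:
    seller: str
    price: float               # price on this platform
    off_platform_price: float  # seller's lowest advertised price elsewhere
    rating: float              # review score, 0-5
    ship_days: int             # promised delivery time in days

def in_parity(o: Offer) -> bool:
    # Parity condition: the on-platform price must not exceed the
    # seller's lowest advertised price on any other platform.
    return o.price <= o.off_platform_price

def score(o: Offer) -> float:
    # Invented weighting: lower price and faster shipping raise the
    # score; better reviews add to it.
    return 100 / o.price + o.rating - 0.5 * o.ship_days

def featured_offer(offers: list[Offer]) -> Offer | None:
    eligible = [o for o in offers if in_parity(o)]
    if not eligible:
        # No seller observes parity: suppress the featured offer
        # entirely, forcing buyers to pick a seller deliberately.
        return None
    return max(eligible, key=score)

offers = [
    Offer("A", 19.99, 19.99, 4.7, 2),
    Offer("B", 18.49, 17.99, 4.9, 2),  # cheapest here, but cheaper elsewhere: out of parity
    Offer("C", 21.50, 21.50, 4.2, 5),
]
winner = featured_offer(offers)
print(winner.seller if winner else "No featured offer shown")  # -> A
```

Note how, in this sketch, the parity check operates as an eligibility filter rather than a price control: an out-of-parity seller is merely excluded from the featured slot, not from the platform.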

For consumers, this vigorous price competition leads to significantly lower prices with a high level of service. When a consumer uses the Buy Box (as opposed to buying directly from a given seller), Amazon is offering an assurance that the price, shipping cost, speed, and service associated with that seller and that good are the best of all available options. Amazon is so confident in its algorithm that the assurance is backed by a price guarantee: Amazon will match the price of relevant competitors and, until 2021, would foot the bill for any price drops that happened within seven days of purchase.

For Amazon, this commitment to low prices, high volume, and quality service leads to a sustained strong reputation. Since Amazon has an incentive to attract as many buyers and sellers as possible, to maximize its revenue through commissions on sales and advertising, the platform needs to carefully curate an environment that is conducive to repeated interactions. Buyers and sellers come together on the platform knowing that they are going to face the lowest prices, highest revenues, and highest level of service, because Amazon’s implicit price-parity clause (among other policies) aligns incentives in just the right way to optimize competition.

Conclusion

In some ways, an implicit price-parity clause is the Goldilocks of vertical price restraints.

Without a price-parity clause, there is little incentive to invest in the platform. Yes, there are low prices, but a race to the bottom may tend to lead to a single monopolistic platform. Additionally, consumer welfare is not maximized, since there are no services provided at an efficient level to bring additional value to buyers and sellers, leading to higher quality-adjusted prices. 

Under a strict price-parity clause, there is a strong incentive to invest in the platform, but the nature of removing selling rights due to a violation can lead to reduced price competition. While the quality of service under this system may be higher, the quality-adjusted price may remain high, since there are lower levels of competition putting downward pressure on prices.

An implicit price-parity clause takes the best aspects of both no-PPC and strict-PPC policies while removing the worst. Sellers are free to set prices as they wish, but have an incentive to comply with the policy due to the additional benefits they may receive from the Buy Box. The platform is sufficiently protected from free riding by the revocation of certain services, leading to high levels of investment in efficient services that increase quality and decrease quality-adjusted prices. Finally, consumers benefit from the vigorous price competition for the Buy Box, which yields both lower prices and lower quality-adjusted prices once the platform’s efficient shipping and fulfilment are accounted for.

Current attempts to find an antitrust violation associated with PPCs—both implicit and otherwise—are likely misplaced. Any evidence gathered on the market will probably show an increase in consumer welfare. The reduced search costs on the platforms alone could outweigh any alleged increase in price, not to mention the time costs associated with rapid processing and shipping.

Further, while there are many claims that PPC policies—and high commissions on sales—harm sellers, the alternative is even worse. The only credible counterfactual, given the widespread permeation of PPC policies, is one in which every Internet seller sells only through its own website. Not only would this increase costs for small businesses by a significant margin, but it would likely drive many out of business. For sellers, the benefit of a platform is access to a multitude (in some cases, hundreds of millions) of potential consumers. To reach that many consumers on its own, every independent seller would have to employ a marketing team that rivals a Fortune 500 company’s. The value proposition is not on the independent seller’s side, and until it is, platforms are the only viable option.

Before labeling a specific contractual obligation as harmful and anticompetitive, we need to understand how it works in the real world. To this point, there has been insufficient discussion about the intra-platform competition that occurs because of price-parity clauses, and the potential consumer-welfare benefits associated with implicit price-parity clauses. Ideally, courts, regulators, and policymakers will take the time going forward to think deeply about the costs and benefits associated with the clauses and choose the least harmful approach to enforcement.

Ultimately, consumers are the ones who stand to lose the most as a result of overenforcement. As always, enforcers should keep in mind that it is the welfare of consumers, not competitors or platforms, that is the overarching concern of antitrust.

Faithful and even occasional readers of this roundup might have noticed a certain temporal discontinuity between the last post and this one. The inimitable Gus Hurwitz has passed the scrivener’s pen to me, a recent refugee from the Federal Trade Commission (FTC), and the roundup is back in business. Any errors going forward are mine. Going back, blame Gus.

Commissioner Noah Phillips departed the FTC last Friday, leaving the Commission down a much-needed advocate for consumer welfare and the antitrust laws as they are, if not as some wish they were. I recommend the reflections posted by Commissioner Christine S. Wilson and my fellow former FTC Attorney Advisor Alex Okuliar. Phillips collaborated with his fellow commissioners on matters grounded in the law and evidence, but he wasn’t shy about crying frolic and detour when appropriate.

The FTC without Noah is a lesser place. Still, while it’s not always obvious, many able people remain at the Commission, and some good, solid work continues. For example, FTC staff filed comments urging New York State to reject a Certificate of Public Advantage (“COPA”) application submitted by SUNY Upstate Health System and Crouse Medical. The staff’s thorough comments reflect investigation of the proposed merger, recent research, and the FTC’s long experience with COPAs. In brief, the staff identified anticompetitive rent-seeking for what it is. Antitrust exemptions for health-care providers tend to make health care worse and more expensive. Which is a corollary to the evergreen truth that antitrust exemptions help the special interests receiving them and not a living soul besides. That’s it, full stop.

More Good News from the Commission

On Sept. 30, a unanimous Commission announced that an independent physician association in New Mexico had settled allegations that it violated a 2005 consent order. The allegations? Roughly 400 physicians—independent competitors—had engaged in price fixing, violating both the 2005 order and the Sherman Act. As the concurring statement of Commissioners Phillips and Wilson put it, the new order “will prevent a group of doctors from allegedly getting together to negotiate… higher incomes for themselves and higher costs for their patients.” Oddly, some have chastised the FTC for bringing the action as anti-labor. But the IPA is a regional “must-have” for health plans and a dominant provider to consumers, including patients, who might face tighter budget constraints than the median physician.

Peering over the rims of the rose-colored glasses, my gaze turns to Meta. In July, the FTC sued to block Meta’s proposed acquisition of Within Unlimited (and its virtual-reality exercise app, Supernatural). Gus wrote about it with wonder, noting reports that the staff had recommended against filing, only to be overruled by the chair.

Now comes October and an amended complaint. The amended complaint is even weaker than the opening salvo. Now, the FTC alleges that the acquisition would eliminate potential competition from Meta in a narrower market, VR-dedicated fitness apps, by “eliminating any probability that Meta would enter the market through alternative means absent the Proposed Acquisition, as well as eliminating the likely and actual beneficial influence on existing competition that results from Meta’s current position, poised on the edge of the market.”

So what if Meta were to abandon the deal—as the FTC wants—but not enter on its own? Same effect, but the FTC cannot seriously suggest that Meta has a positive duty to enter the market. Is there a jurisdiction (or a planet) where a decision to delay or abandon entry would be unlawful unilateral conduct? Suppose instead that Meta enters, with virtual-exercise guns blazing, much to the consternation of firms actually in the market, which might complain about it. Then what? Would the Commission cheer or would it allege harm to nascent competition, or perhaps a novel vertical theory? And by the way, how poised is Meta, given no competing product in late-stage development? Would the FTC prefer that Meta buy a different competitor? Should the overworked staff commence Meta’s due diligence?

Potential competition cases are viable given the right facts, and in areas where good grounds to predict significant entry are well-established. But this is a nascent market in a large, highly dynamic, and innovative industry. The competitive landscape a few years down the road is anyone’s guess. More speculation: the staff was right all along. For more, see Dirk Auer’s or Geoffrey Manne’s threads on the amended complaint.

When It Rains It Pours Regulations

On Aug. 22, the FTC published an advance notice of proposed rulemaking (ANPR) to consider the potential regulation of “commercial surveillance and data security” under its Section 18 authority. Shortly thereafter, they announced an Oct. 20 open meeting with three more ANPRs on the agenda.

First, on the advance notice: I’m not sure what they mean by “commercial surveillance.” The term doesn’t appear in statutory law or in prior FTC enforcement actions. It sounds sinister and, surely, it’s an intentional nod to Shoshana Zuboff’s anti-tech polemic “The Age of Surveillance Capitalism.” One thing is plain enough: the proffered definition is as dramatically sweeping as it is hopelessly vague. The Commission seems to be contemplating a general data regulation of some sort, but we don’t know what sort. They don’t say or even sketch a possible rule. That’s a problem for the FTC, because the law demands that the Commission state its regulatory objectives, along with the regulatory alternatives under consideration, in the ANPR itself. If the Commission gets to an NPRM, it will be required to describe a proposed rule with specificity.

What’s clear is that the ANPR takes a dim view of much of the digital economy. And while the Commission has considerable experience in certain sorts of privacy and data security matters, the ANPR hints at a project extending well past that experience. Commissioners Phillips and Wilson dissented for good and overlapping reasons. Here’s a bit from the Phillips dissent:

When adopting regulations, clarity is a virtue. But the only thing clear in the ANPR is a rather dystopic view of modern commerce….I cannot support an ANPR that is the first step in a plan to go beyond the Commission’s remit and outside its experience to issue rules that fundamentally alter the internet economy without a clear congressional mandate….It’s a naked power grab.

Be sure to read the bonus material in the Federal Register—supporting statements from Chair Lina Khan and Commissioners Rebecca Kelly Slaughter and Alvaro Bedoya, and dissenting statements from Commissioners Phillips and Wilson. Chair Khan breezily states that “the questions we ask in the ANPR and the rules we are empowered to issue may be consequential, but they do not implicate the ‘major questions doctrine.’” She’s probably half right: the questions do not violate the Constitution. But she’s probably half wrong too.

For more, see ICLE’s Oct. 20 panel discussion and the executive summary to our forthcoming comments to the Commission.

But wait, there’s more! There were three additional ANPRs on the Commission’s Oct. 20 agenda. So that’s four and counting. Will there be a proposed rule on non-competes? Gig workers? Stay tuned. For now, note that rules are not self-enforcing, and that the chair has testified to Congress that the Commission is strapped for resources and struggling to keep up with its statutory mission. Are more regulations an odd way to ask Congress for money? Thus far, there’s no proposed rule on gig workers, but there was a Policy Statement on Enforcement Related to Gig Workers. For more on that story, see Alden Abbott’s TOTM post.

Laws, Like People, Have Their Limits

Read Phillips’s parting dissent in Passport Auto Group, where the Commission combined legitimate allegations with an unhealthy dose of overreach:

The language of the unfairness standard has given the FTC the flexibility to combat new threats to consumers that accompany the development of new industries and technologies. Still, there are limits to the Commission’s unfairness authority. Because this complaint includes an unfairness count that aims to transform Section 5 into an undefined discrimination statute, I respectfully dissent.

Right. Three cheers for effective enforcement of the focused antidiscrimination laws enacted by Congress, by the agencies actually charged with enforcing those laws. And for equal protection. And three more, at least, for a little regulatory humility, if we can find it.

This post is the third in a three-part series. The first installment can be found here and the second can be found here.

As it has before in its history, liberalism again finds itself at an existential crossroads, with liberally oriented reformers generally falling into two camps: those who seek to subordinate markets to some higher vision of the common good and those for whom the market itself is the common good. The former seek to rein in, temper, order, and discipline unfettered markets, while the latter strive to build on the foundations of classical liberalism to perfect market logic, rather than to subvert it.

This conflict of visions has deep ramifications for today’s economic policy. In his classic text “The Antitrust Paradox,” Judge Robert Bork deemed antitrust law a “subcategory of ideology” that “connects with the central political and social concerns of our time.” Among these concerns, he focused specifically on the eternal tension between the ideals of “equality” and “freedom.” In recent years, that tension has been exemplified in competition-policy debates by two schools of thought: the neo-Brandeisians, whose jurisprudential philosophy draws from the progressive U.S. Supreme Court Justice Louis Brandeis, and another group represented by the Chicago School and other defenders of the consumer-welfare standard.

But this schism resembles similar divides that have played out countless times over the history of liberalism, albeit under different names and banners. Looking back on the past century and a half of economic and philosophical thought can help us to make sense of these fundamentally opposed visions for the future of both liberalism and antitrust. This history can also help us to understand how these ideologies have sometimes failed to live up to their ambitions or crumbled under the weight of their own contradictions. 

In this final piece in the political philosophy series, I explain the genesis, normative underpinnings, and likely outcome of the current “battle for the soul of antitrust.” The broader point that I have tried to make throughout this series is that this confrontation hinges on ethical and deontological considerations, as much as it does on “hard” consequentialist arguments. Put differently, how we decide to resolve foundational and putatively “technical” questions regarding the goals, standards, and enforcement of antitrust law ultimately cannot help but reflect our underlying views about the values and ideals that should guide a liberal society. In this vein, I argue that there are compelling non-utilitarian reasons to prefer a polity with an in-built bias for negative freedom and that is guided by a narrow economic-efficiency criterion, rather than the apparently ascendant alternatives.

The Birth of Neoliberalism

The clearest articulation of the philosophical schism between the two visions of liberalism that we see today came with the 1937 publication of “The Good Society” by American author and journalist Walter Lippmann. Lippmann—who, like Brandeis, came out of the American Progressive Movement and had been an adviser to progressive U.S. President Woodrow Wilson—sparked the birth of “neoliberalism” as a separate strand of liberal political-philosophical thought. The book invited readers to critically reexamine and, where appropriate, update the tenets of classical liberalism with a view toward “stabilizing and consolidating the course of an intellectual tradition that was otherwise bound to tumble straight into oblivion” (see here).

This was the objective of the “neoliberal collective,” a loose affiliation of liberally oriented thinkers who convened for the first time at the Walter Lippmann Colloquium in 1938 to discuss Lippmann’s seminal book, and from 1947 onward more formally under the auspices of the Mont Pelerin Society.

Neoliberals grappled with questions that went to the very heart of liberalism, such as how to adapt traditional small-scale human societies to the exigencies of ever-widening markets and economic progress; the causes and consequences of industrial concentration; the appropriate role and boundaries of state intervention; the ability of markets to address the “social question”; the interplay between freedom and coercion; and the tension between the individual and the collective. Like Lippmann, the neoliberals were convinced that the failure to reckon with such fundamental issues would result in the inevitable displacement of liberalism by some form of “authoritarian collectivism,” which they believed provided emotionally appealing (but ultimately illusory) solutions to the full range of liberal problems.

It quickly became apparent, however, that there existed two main currents of neoliberalism.

The first, which I will call “left neoliberalism,” was a relatively conciliatory version that sought to strike a “mostly liberal” balance with socialism and collectivism. It postulated that markets are embedded in a broader social and political context that may include a strong and activist state, aggressive antitrust policy, robust social rights, and an emphasis on positive freedom. In this respect, their views resembled those of the Progressive Movement of Wilson and Brandeis, which was carried on into the mid-20th century in the United States by such figures as President Franklin Roosevelt, historian Arthur M. Schlesinger, and economist John Kenneth Galbraith. The “left neoliberals,” however, were primarily European, and included the likes of Wilhelm Röpke, Walter Eucken, Franz Böhm, Alexander Rüstow, Luigi Einaudi, Louis Rougier, Louis Marlio, and Jacques Rueff (and, arguably, Lippmann himself).

Adherents to the other strand, “right neoliberalism,” were more conservative and less willing to compromise. They championed a strong but minimal state tasked with (and limited by) facilitating efficient markets, posited a lean antitrust policy, and emphasized negative liberty. Thinkers like Friedrich Hayek, Milton Friedman, Lionel Robbins, James Buchanan and, arguably, the more libertarian Ludwig von Mises and Bruno Leoni would fall into this group.

The Price Mechanism and the State

The two groups of neoliberals shared several basic postulates. 

First and foremost, they agreed that any revision of Adam Smith’s “invisible hand” had to respect the integrity of the price mechanism (what Wilhelm Röpke referred to as the “sacrosanct core of liberalism”). The argument rested on utilitarian, but also political and ethical, grounds. As Friedrich Hayek argued in “The Road to Serfdom,” the substitution of a centrally planned economy for the free market would lead to the loss of economic freedom and, eventually, of all other freedoms as well. This meant that neoliberals were, on principle, harsh critics of any type of state intervention that distorted the formation of prices through the forces of supply and demand.

At the same time, however, neither strand of neoliberalism professed a doctrine of statelessness.  To the contrary, the state may, in hindsight, be neoliberalism’s greatest conquest. The question at hand is what kind of state is optimal. 

For the left neoliberals, a strong state was needed to resist capture by interest groups. It also had to exercise good political leadership and discretion in juggling goals and values (markets, after all, had to be “embedded” in the social order). These views were underpinned by a relatively sanguine set of expectations the left neoliberals had of the state’s willingness and capacity to protect the general interest, as well as their shared belief that the core institutions of liberalism (including self-regulating markets) were prone to degeneration and in need of constant public oversight. The state, not the private sector, was the ultimate ordering power of the economy. As Alexander Rüstow said:

I am, indeed, of the opinion that it is not the economy, but the state which determines our fate. 

The right neoliberal position was more ambivalent, due to its heightened skepticism toward state power. The bigger threat to freedom was not unfettered private power, but public power. As Milton Friedman put it in “Capitalism and Freedom”:

Government is necessary to preserve our freedom […] yet by concentrating power in political hands, it is also a threat to freedom. […] How can we benefit from the promise of government while avoiding the threat to freedom? 

The answer was a revamped Smithian nightwatchman that acted more as an umpire determining “the rules of the game” and overseeing free interactions between individuals than as a helmsman tasked with channeling society toward any particular variety of teleological goals. Like the left neoliberal position, this one, too, rests on a set of theoretical underpinnings.

One is that public actors are no less self-interested than private ones, with the corollary that any extension or deepening of the state’s powers must be well-justified. The idea relied heavily on the public-choice theory developed by James M. Buchanan, a member of the Mont Pelerin Society and its president from 1984 to 1986. Thus, left and right neoliberals advanced almost completely opposite responses to the problem of capture: while left neoliberals believed in strengthening the state relative to private enterprise, the right’s critique led them precisely to want to limit state power and reshape institutional incentives.

This is not surprising, as right neoliberals were also more optimistic about the potential of markets and deontologically more preoccupied with negative freedom, a combination that added another layer of suspicion to any putatively progressive measures that involved wealth redistribution or meticulous administration of the market by the state.

Economic Concentration and Competition

Another important difference lay in the two sides’ views on economic concentration and competition. Some left neoliberals, particularly in Europe, internalized much of the Marxist and fascist critiques of capitalism, including the belief that markets naturally tended toward economic concentration. They argued, however, that this process could be reversed or prevented with robust antitrust and de-concentration measures. While essentially conceding Marxian arguments about the intrinsic tendency of competition to degenerate into monopoly—thereby fostering inequality and “proletarizing” the masses—they denied the ultimate implications upon which Marx had insisted—i.e., the inevitable “cannibalization” of capitalism through its inherent contradictions.

Right neoliberals, by contrast, insisted that, where economic concentration was not fleeting, it was generally the result of state action, not state inaction. As Mises argued, cartels were a consequence of protectionism and the artificial partitioning of markets through, e.g., tariffs. Similarly, monopolies formed and persisted because of “anti-liberal policies of governments that [created] the conditions favorable” to them. This implied that antitrust played only a secondary role in securing competitive markets.

Each strand’s reasoning as to why competition was worthy of protection also differed. For the right neoliberals, who saw the legitimate goals and boundaries of public policy through the lenses of economic efficiency and negative freedom, the case for competition was principally a utilitarian one. As Hayek wrote in “Individualism and Economic Order,” state-backed institutions and laws (including antitrust laws) that “made competition work” (by which he meant, made competition work effectively) were one of the ways in which right neoliberals improved on the classical liberal position. 

Left neoliberals added political, social, and ethical layers to this argument. Politically, they shared the standard Marxian view that concentrated markets facilitated the capture of the state by powerful private interests. Marxists had, e.g., always asserted that Nazism was the product of “monopoly capitalism” and that the Nazis themselves were the tools of big business (the idea of “state monopoly capitalism” stems from Lenin). Left neoliberals largely agreed with this view. They also counseled that a centralized industry was more readily prone to takeover by an authoritarian state. In addition, they rejected “bigness” because they considered it an unnatural perversion of human nature (though such critiques surprisingly did not seem to translate to the state). As Wilhelm Röpke notes in “A Humane Economy”:

Nothing is more detrimental to a sound general order appropriate to human nature than two things: mass and concentration.

“Bigness,” Röpke thought, had come about as a result of one particularly harmful but pervasive trend of modernity: “economism,” a frequent target of left neoliberals that refers to a fixation on indicators of economic performance at the expense of deeper social and spiritual values.

But it would be a mistake to conclude that left neoliberals viewed competition as a panacea. Private property, profit, and competition (the foundations of liberalism) were as socially corrosive as they were beneficial. They were, according to Wilhelm Röpke:

justifiable only within certain limits, and in remembering this we return to the realm beyond supply and demand. In other words, the market economy is not everything. It must find its place within a higher order of things which is not ruled by supply and demand, free prices, and competition.

Competition, in other words, was, as Luigi Einaudi put it, a paradox. It was beneficial, but could also be socially and morally ruinous.

The Goals and Boundaries of Public Policy

The perceived failures of liberalism guided the contrasting notions of what a reformed neoliberalism should look like. On the one hand, European left neoliberals and American progressives thought that liberalism suffered from certain inherent deficiencies that could not be resolved within the liberal paradigm and that called for mitigating policies and social-safety nets. Again, these resonated with familiar criticisms levied by both the right and the left, such as excessive individualism; the loss of shared values and a sense of community; a lack of “social integration”; worker alienation (in an essay titled “Social Policy or Vitalpolitik (Organic Policy),” Alexander Rüstow starts by citing Friedrich Engels’ 1845 “The Condition of the Working-Class in England”); and the socially explosive elements of competition and markets. These spiritual dislocations arguably weighed more than any material or economic shortcomings, and were at the root of the liberal debacle. As Walter Eucken argued:

Quite obviously, the reasons for the anti-capitalistic attitude of the masses cannot be found in any deterioration of the living conditions brought about by capitalism. […] The turning of the masses against capitalism is rather a phenomenon that can only be understood in terms of the sensibilities of modern man.  

In response, the left neoliberals called for an “organic policy” that would approach markets and competition not as purely economic phenomena, but as social ones, too (a similar view was expressed by Justice Brandeis). In this new hybrid vision of liberalism, “there would be counterweights to competition and the mechanical operation of prices.” Competition and the market’s other imperatives would be tempered by balancing considerations and subordinated to “higher values” beyond the law of supply and demand—and beyond mere economic utility. As Wilhelm Röpke summarizes:

Competition, which we need as a regulator in a free market economy, comes up on all sides against limits which we would not wish to transgress. It remains morally and socially dangerous and can be defended only up to a point and with qualifications and modifications of all kinds.

Conversely, right neoliberals believed that the downfall of liberalism had been the result of a fundamental misunderstanding of its true ethos and an overabundance of conflicting rules and policies. It was not the inevitable upshot of liberalism itself. As Lionel Robbins posited:

It is not liberal institutions but the absence of such institutions which is responsible for the chaos of today.

Classical liberalism had stopped short on the road to exploring the full range of laws and institutions needed to sustain and perfect the “natural order.” But the prevalent social malaise—which had, no doubt, been adroitly instigated and exploited by collectivist demagogues—was not the result of some innate incompatibility between markets and human society. It had instead come about because of the failure to properly adjust the latter to the exigencies of the former. 

Additionally, right neoliberals rejected “organic” or “third way” policies of the sort favored by the left neoliberals, because they believed it was not within the remit of public policy to answer existential questions or to provide “meaning” or “social integration.” Granting the state the power to decide such matters was a slippery slope that required it to override the preferences of some with its own. As such, it came dangerously close to the very collectivism that neoliberals had rallied against in the first place. They also doubted the state’s ability to resolve such complex, value-laden questions. Insights such as these underpinned Friedrich Hayek’s theory of the gradual march toward serfdom and Ludwig von Mises’ quip that there is no such thing as a “third way” or a mixed economy.

In consequence, the solution was not to restrain, mollify, or limit the spread or depth of markets in order to align them with some past ideal of parochial life, but to improve markets and to acclimatize societies to their workings through better laws and institutions.

Two Different Visions for Liberalism, Two Different Visions of Antitrust

In keeping with the theme of this series, the prescriptions for antitrust policy made by each strand of neoliberalism are extrapolated directly from their broader visions of society.

Left neoliberals and American progressives took Marxist and fascist attacks on liberalism seriously, but sought to address them through less radical channels. They wanted a “mostly liberal” third-way social order, in which markets and competition would be tempered by a host of other social and political considerations that were mediated by the state. This meant opposing “big business” as a matter of principle, infusing antitrust law with a host of non-economic goals and values, and granting enforcers the necessary discretion to decide in cases of conflict. 

Right neoliberals, on the other hand, sought to improve on the classical-liberal position through a more robust legal and institutional framework that operated primarily in the service of a single goal: economic efficiency. Economic efficiency—itself not a value-free notion—was, however, seen as a comparatively neutral, narrow, and predictable standard that, in turn, cabined enforcers’  scope of discretion and minimized the instances in which the state could override business decisions (and thus interfere with negative liberty). In the context of antitrust law, this tethered anticompetitive conduct and exemptions to the threshold requirement to find harms to consumers or to total welfare.

Conclusion

The pendulum of neoliberalism has swung in the past, with momentous implications for antitrust. The “Chicagoan” shift of the 1970s, for instance, was a move toward right neoliberalism, as was the “more economic approach” of EU competition law in the late 1990s. Conversely, more recent calls for the condemnation of “big business” on a range of moral and political grounds; “polycentric competition laws” with multiple goals and values; and the widening of state discretion to lead market developments in a socially desirable direction signal a move in the opposite direction. 

How should the newest iteration of the neoliberal “battle for the soul of antitrust” be resolved?

On the one hand, left neoliberalism—or what Americans typically just call “progressivism”—has intuitive and emotional appeal, particularly in a time of growing anti-capitalistic fervor. Today, as in the 1930s, many believe that market logic has overstepped its legitimate boundaries and that the most successful private companies are a looming enemy. From this perspective, a “market in society” approach—in which the government has more leeway to restrain corporate power and reshape markets in accordance with a range of social or political considerations—may sound more humane to some. 

If history teaches us anything, however, this populist approach to regulating competition is problematic for a number of reasons.

First, the overly complex web of mutually conflicting goals and values will inevitably require enforcement agencies to act as social engineers. In this position, they may use their enhanced discretion to decide whom or what to favor and to rank subjective values pursuant to personal moral heuristics. Public-choice theory and historical examples of state-led collectivist projects, however, counsel against assuming that government is able and willing to exercise such far-reaching oversight of society. In addition, as enforcers inevitably prove unfit to discharge their new role as philosopher-kings, and as their contradictory case law increasingly comes under contestation, activist attempts to widen the scope of antitrust law likely will be checked by the courts. 

Second, like the non-economic arguments against concentration raised today by progressives such as Tim Wu and Lina Khan, the left neoliberal position is largely based on aesthetic preference and intuition—not fact. Röpkean complaints about big business ruining the bucolic landscape where men are “vitally satisfied” in their small, tight-knit communities rest on a very idiosyncratic vision of the good life (left neoliberals romanticized Switzerland, for instance), and one that many do not share in the 21st century. Equally particular were Justice Brandeis’ own yeoman sensibilities, which led him to reject bigness as a matter of principle (unlike today’s neo-Brandeisians, however, he was also skeptical of big government).

As to the persistent argument to curb “bigness” on political grounds: this would be more convincing if there was a clear, unambiguous relationship between market concentration or company size and the quality of democracy. This does not appear to be the case. In fact, the case for incorporating democratic concerns into antitrust seems unwittingly to rely on discredited Marxist theories about the relationship between German big business and the rise of Hitler. Unfortunately, these ideas have been so aggressively peddled by Marxists—who had a vested ideological interest in demonstrating that private corporations were the main culprits behind Nazism—during the 1960s and 70s that today they enjoy the status of dogma.

Alternatively, one might argue that the very existence of large concentrations of private economic power is antithetical to democracy because having the potential to exercise private power over another (without any actual interference) is anti-democratic (see here). But this lifts a particularistic vision of democracy—so-called republican democracy—over others. According to the more mainstream notion of liberal democracy, which gives precedence to negative freedom, any such interference with property rights may, in fact, be seen as deeply illiberal and undemocratic, especially as the inherent ambiguity of the “democracy” standard is likely to invite reprisals against political opponents.

Alas, right neoliberalism appears to be falling out of favor, as anti-market rhetoric seeps into the mainstream and politicians and intellectuals look to the past for alternatives to a neoliberal system seen as too narrow and economistic. Ultimately, however, this may be precisely what we want public policy to be in a liberal world: focused on predictable and quantifiable standards that subject enforcers to the rigorous discipline of economic theory and leave them little space to act as social engineers or to exercise arbitrary authority. More than a century of intellectual effervescence and dangerous ideological escapades has proven this to be the superior way both to achieve measurable policy outcomes that improve on the classical-liberal position and to avoid the Charybdis of state collectivism. In antitrust law, it has meant embracing economic analysis of the law and a narrow consumer-welfare standard to distinguish anticompetitive from procompetitive conduct.

In the end, today’s “battle for the soul” of antitrust is a proxy for a much wider conflict of visions. Changing the consumer-welfare standard and the architecture of antitrust enforcement along the lines preferred by progressives and left neoliberals would be both a symptom and a cause of a broader philosophical shift toward a worldview that makes some of the same deleterious mistakes it purports to correct: excessive government discretion in overseeing the economy; the subordination of individual freedom to an array of collectivist goals mediated by a public aristocracy; and the substitution of emotional impetus for evidence-based policy.

While the inherent contradictions and incongruence of that vision mean that the pendulum is likely to eventually swing back in the right direction, the damage will already have been done. This is why we must defend the consumer-welfare standard today more vigorously than ever: because ultimately, much more than the future of a niche field of law is at stake.

The concept of European “digital sovereignty” has been promoted in recent years both by high officials of the European Union and by EU national governments. Indeed, France made strengthening sovereignty one of the goals of its recent presidency in the EU Council.

The approach taken thus far both by the EU and by national authorities has been not to exclude foreign businesses, but instead to focus on research and development funding for European projects. Unfortunately, there are worrying signs that this more measured approach is beginning to be replaced by ill-conceived moves toward economic protectionism, ostensibly justified by national-security and personal-privacy concerns.

In this context, it is worth reconsidering why Europeans’ interests are best served not by economic isolationism, but by an understanding of sovereignty that capitalizes on alliances with other free democracies.

Protectionism Under the Guise of Cybersecurity

Among the primary worrying signs regarding the EU’s approach to digital sovereignty is the union’s planned official cybersecurity-certification scheme. The European Commission is reportedly pushing for “digital sovereignty” conditions in the scheme, which would include data and corporate-entity localization and ownership requirements. This can be categorized as “hard” data localization in the taxonomy laid out by Peter Swire and DeBrae Kennedy-Mayo of Georgia Institute of Technology, in that it would prohibit both the transfer of data to other countries and the involvement of foreign capital in the processing of even data that is not transferred.

The European Cybersecurity Certification Scheme for Cloud Services (EUCS) is being prepared by ENISA, the EU cybersecurity agency. The scheme is supposed to be voluntary at first, but it is expected that it will become mandatory in the future, at least for some situations (e.g., public procurement). It was not initially billed as an industrial-policy measure and was instead meant to focus on technical security issues. Moreover, ENISA reportedly did not see the need to include such “digital sovereignty” requirements in the certification scheme, perhaps because they saw them as insufficiently grounded in genuine cybersecurity needs.

Despite ENISA’s position, the European Commission asked the agency to include the digital-sovereignty requirements. This move has been supported by a coalition of European businesses that hope to benefit from the protectionist nature of the scheme. Somewhat ironically, their official statement called on the European Commission to “not give in to the pressure of the ones who tend to promote their own economic interests.”

The governments of Denmark, Estonia, Greece, Ireland, the Netherlands, Poland, and Sweden expressed “strong concerns” about the Commission’s move. In contrast, Germany called for a political discussion of the certification scheme that would take into account “the economic policy perspective.” In other words, German officials want the EU to consider using the cybersecurity-certification scheme to achieve protectionist goals.

Cybersecurity certification is not the only avenue by which Brussels appears to be pursuing protectionist policies under the guise of cybersecurity concerns. As highlighted in a recent report from the Information Technology & Innovation Foundation, the European Commission and other EU bodies have also been downgrading or excluding U.S.-owned firms from technical standard-setting processes.

Do Security and Privacy Require Protectionism?

As others have discussed at length (in addition to Swire and Kennedy-Mayo, see also Theodore Christakis), the evidence supporting cybersecurity and national-security arguments for hard data localization is, at best, inconclusive. Press reports suggest that ENISA reached a similar conclusion. There may be security reasons to insist upon certain ways of distributing data storage (e.g., across different data centers), but those reasons are not directly related to national borders.

In fact, as illustrated by the well-known architectural goal behind the design of the U.S. military computer network that was the precursor to the Internet, security is enhanced by the redundant distribution of data and network connections in a geographically dispersed way. The perils of putting “all one’s data eggs” in one basket (one locale, one data center) were amply illustrated when a fire in a data center of the French cloud provider OVH famously brought down millions of websites that were hosted only there. (Notably, OVH is among the most vocal European proponents of hard data localization.)

Moreover, security concerns are clearly not nearly as serious when data is processed by our allies as when it is processed by entities associated with less friendly powers. Whatever concerns there may be about U.S. intelligence collection, it would be detached from reality to suggest that the United States poses a national-security risk to EU countries. This has become even clearer since the beginning of the Russian invasion of Ukraine. Indeed, the strength of the U.S.-EU security relationship has been repeatedly acknowledged by EU and national officials.

Another commonly used justification for data localization is that it is required to protect Europeans’ privacy. The radical version of this position, seemingly increasingly popular among EU data-protection authorities, amounts to a call to block data flows between the EU and the United States. (Most bizarrely, Russia seems to receive more favorable treatment from some European bureaucrats.) The legal argument behind this view is that the United States lacks sufficient legal safeguards governing its officials’ processing of foreigners’ data.

The soundness of that view is debated, but what is perhaps more interesting is that EU courts have identified similar privacy concerns with respect to several EU countries. The reaction of those European countries was either to ignore the courts or to be “ruthless in exploiting loopholes” in court rulings. It is thus difficult to take seriously the claims that Europeans’ data is much better safeguarded in their home countries than if it flows through the networks of the EU’s democratic allies, like the United States.

Digital Sovereignty as Industrial Policy

Given the above, the privacy and security arguments are unlikely to be the real decisive factors behind the EU’s push for a more protectionist approach to digital sovereignty, as in the case of cybersecurity certification. In her 2020 State of the Union speech, EU Commission President Ursula von der Leyen stated that Europe “must now lead the way on digital—or it will have to follow the way of others, who are setting these standards for us.”

She continued: “On personalized data—business to consumer—Europe has been too slow and is now dependent on others. This cannot happen with industrial data.” This framing suggests an industrial-policy aim behind the digital-sovereignty agenda. But even in considering Europe’s best interests through the lens of industrial policy, there are reasons to question the manner in which “leading the way on digital” is being implemented.

Limitations on foreign investment in European tech businesses come with significant costs to the European tech ecosystem. Those costs are particularly high in the case of blocking or disincentivizing American investment.

Effect on startups

Early-stage investors such as venture capitalists bring more than just financial capital. They offer expertise and other vital tools to help the businesses in which they invest. It is thus not surprising that, among the best investors, those with significant experience in a given area are well-represented. Due to the successes of the U.S. tech industry, American investors are especially well-positioned to play this role.

In contrast, European investors may lack the needed knowledge and skills. For example, in its report on building “deep tech” companies in Europe, Boston Consulting Group noted that a “substantial majority of executives at deep-tech companies and more than three-quarters of the investors we surveyed believe that European investors do not have a good understanding of what deep tech is.”

More to the point, even where EU players do hold advantages, a cooperative economic and technological system will allow the comparative advantages of the U.S. and EU markets to redound to each other’s benefit. That is to say, not all U.S. investment expertise will apply in the EU, of course, but certainly some will. Similarly, there will be EU firms positioned to share their expertise in the United States. But there is no ex ante way to know when and where these complementarities will exist, which essentially dooms efforts to centrally plan technological cooperation.

Given the close economic, cultural, and historical ties of the two regions, it makes sense to work together, particularly given the rising international-relations tensions outside of the western sphere. It also makes sense, insofar as the relatively open private-capital-investment environment in the United States is nearly impossible to match, let alone surpass, through government spending.

For example, national-government and EU funding in Europe has thus far ranged from expensive failures (the “Google-killer”) to all-too-predictable bureaucracy-heavy grantmaking, which beneficiaries describe as lacking flexibility, “slow,” “heavily process-oriented,” and expensive for businesses to navigate. As reported by the Financial Times’ Sifted website, the EU’s own startup-investment scheme (the European Innovation Council) backed only one business over more than a year, and it had “delays in payment” that “left many startups short of cash—and some on the brink of going out of business.”

Starting new business ventures is risky, especially for the founders. They risk devoting their time, resources, and reputation to an enterprise that may very well fail. Given this risk of failure, the potential upside needs to be sufficiently high to incentivize founders and early employees to take the gamble. This upside is normally provided by the possibility of selling one’s shares in a business. In BCG’s previously cited report on deep tech in Europe, respondents noted that the European ecosystem lacks “clear exit opportunities”:

Some investors fear being constrained by European sovereignty concerns through vetoes at the state or Europe level or by rules potentially requiring European ownership for deep-tech companies pursuing strategically important technologies. M&A in Europe does not serve as the active off-ramp it provides in the US. From a macroeconomic standpoint, in the current environment, investment and exit valuations may be impaired by inflation or geopolitical tensions.

More broadly, those exit opportunities also weigh heavily in funders’ appetite to price the risk of failure in their ventures. Where the upside is sufficiently large, an investor might be willing to experiment in riskier ventures and be suitably motivated to structure investments to deal with such risks. But where the exit opportunities are diminished, it makes much more sense to spend time on safer bets that may provide lower returns, but are less likely to fail. Coupled with the fact that government funding must run through bureaucratic channels, which are inherently risk averse, the overall effect is a less dynamic funding system.

The Central and Eastern Europe (CEE) region is an especially good example of the positive influence of American investment in Europe’s tech ecosystem. According to the state-owned Polish Development Fund and Dealroom.co, in 2019, $0.9 billion of venture-capital investment in CEE came from the United States, $0.5 billion from Europe, and $0.1 billion from the rest of the world.

Direct investment

Technological investment is rarely, if ever, a zero-sum game. U.S. firms that invest in the EU (and vice versa) do not do so as foreign conquerors, but as partners whose own fortunes are intertwined with those of their host country. Consider, for example, Google’s recent PLN 2.7 billion investment in Poland. Far from extractive, that investment will build infrastructure in Poland and will employ an additional 2,500 Poles in the company’s cloud-computing division. This sort of partnership plants the seeds that grow into a native tech ecosystem. The Poles who today work in Google’s cloud-computing division are the founders of tomorrow’s innovative startups rooted in Poland.

The funding that accompanies native operations of foreign firms also has a direct impact on local economies and tech ecosystems. More local investment in technology creates demand for education and support roles around that investment. This creates a virtuous circle that ultimately facilitates growth in the local ecosystem. And while this direct investment is important for large countries, in smaller countries, it can be a critical component in stimulating their own participation in the innovation economy. 

According to Crunchbase, out of 2,617 EU-headquartered startups founded since 2010 with total equity funding of at least $10 million, 927 (35%) had at least one founder who previously worked for an American company. For example, two of the three founders of Madrid-based Seedtag (total funding of more than $300 million) worked at Google immediately before starting Seedtag.

It is more difficult to quantify how many early employees of European startups built their experience in American-owned companies, but it is likely to be significant and to become even more so, especially in regions—like Central and Eastern Europe—with significant direct U.S. investment in local talent.

Conclusion

Explicit industrial policy for protectionist ends is—at least, for the time being—regarded as unwise public policy. But this is not to say that countries do not have valid national interests that can be met through more productive channels. While strong data-localization requirements are ultimately counterproductive, particularly among closely allied nations, countries have a legitimate interest in promoting the growth of the technology sector within their borders.

National investment in R&D can yield fruit, particularly when that investment works in tandem with the private sector (see, e.g., the Bayh-Dole Act in the United States). The bottom line, however, is that any intervention should take care to actually promote the ends it seeks. Strong data-localization policies in the EU will not lead to the success of the local tech industry; they will serve only to wall the region off from the kind of investment that can make it thrive.

The business press generally describes the gig economy that has sprung up around digital platforms like Uber and TaskRabbit as a beneficial phenomenon, “a glass that is almost full.” The gig economy “is an economy that operates flexibly, involving the exchange of labor and resources through digital platforms that actively facilitate buyer and seller matching.”

From the perspective of businesses, major positive attributes of the gig economy include cost-effectiveness (minimizing costs and expenses); labor-force efficiencies (“directly matching the company to the freelancer”); and flexible output production (individualized work schedules and enhanced employee motivation). Workers also benefit through greater independence, enhanced work flexibility (including hours worked), and the ability to earn extra income.

While there are some disadvantages as well (worker-commitment questions, business-ethics issues, lack of worker benefits, limited coverage of personal expenses, and worker isolation), there is no question that the gig economy has contributed substantially to the growth and flexibility of the American economy—a major social good. Indeed, “[i]t is undeniable that the gig economy has become an integral part of the American workforce, a trend that has only been accelerated during the” COVID-19 pandemic.

In marked contrast, however, the Federal Trade Commission’s (FTC) Sept. 15 Policy Statement on Enforcement Related to Gig Work (“gig statement” or “statement”) is the story of a glass that is almost empty. The accompanying press release declaring “FTC to Crack Down on Companies Taking Advantage of Gig Workers” (since when is “taking advantage of workers” an antitrust or consumer-protection offense?) puts an entirely negative spin on the gig economy. And while the gig statement begins by describing the nature and large size of the gig economy, it does so in a dispassionate and bland tone. No mention is made of the substantial benefits for consumers, workers, and the overall economy stemming from gig work. Rather, the gig statement quickly adopts a critical perspective in describing the market for gig workers and then addressing gig-related FTC-enforcement priorities. What’s more, the statement deals in very broad generalities and eschews specifics, rendering it of no real use to gig businesses seeking practical guidance.

Most significantly, the gig statement suggests that the FTC should play a significant enforcement role in gig-industry labor questions that fall outside its statutory authority. As such, the statement is fatally flawed as a policy document. It provides no true guidance and should be substantially rewritten or withdrawn.

Gig Statement Analysis

The gig statement’s substantive analysis begins with a negative assessment of gig-firm conduct. It expresses concern that gig workers are being misclassified as independent contractors and are thus deprived “of critical rights [right to organize, overtime pay, health and safety protections] to which they are entitled under law.” Relatedly, gig workers are said to be “saddled with inordinate risks.” Gig firms also “may use nontransparent algorithms to capture more revenue from customer payments for workers’ services than customers or workers understand.”

Heaven forfend!

The solution offered by the gig statement is “scrutiny of promises gig platforms make, or information they fail to disclose, about the financial proposition of gig work.” No mention is made of how these promises supposedly made to workers about the financial ramifications of gig employment are related to the FTC’s statutory mission (which centers on unfair or deceptive acts or practices affecting consumers or unfair methods of competition).

The gig statement next complains that a “power imbalance” between gig companies and gig workers “may leave gig workers exposed to harms from unfair, deceptive, and anticompetitive practices and is likely to amplify such harms when they occur.” “Power imbalance” along a vertical chain has not been a source of serious antitrust concern for decades (and even in the case of the Robinson-Patman Act, the U.S. Supreme Court most recently stressed, in 2006’s Volvo v. Reeder-Simco, that harm to interbrand competition is the key concern). “Power imbalances” between workers and employers bear no necessary relation to the promotion of consumer welfare, which the Supreme Court teaches is the raison d’être of antitrust. Moreover, the FTC does not explain why unfair or deceptive conduct likely follows from the mere existence of substantial bargaining power. Such an unsupported assertion is not worthy of inclusion in a serious agency-policy document.

The gig statement then engages in more idle speculation about a supposed relationship between market concentration and the proliferation of unfair and deceptive practices across the gig economy. The statement claims, without any substantiation, that gig companies in concentrated platform markets will be incentivized to exert anticompetitive market power over gig workers, and thereby “suppress wages below competitive rates, reduce job quality, or impose onerous terms on gig workers.” Relatedly, “unfair and deceptive practices by one platform can proliferate across the labor market, creating a race to the bottom that participants in the gig economy, and especially gig workers, have little ability to avoid.” No empirical or theoretical support is advanced for any of these bald assertions, which give the strong impression that the commission plans to target gig-economy companies for enforcement actions without regard to the actual facts on the ground. (By contrast, the commission has in the past developed detailed factual records of competitive and/or consumer-protection problems in health care and other important industry sectors as a prelude to possible future investigations.)

The statement then launches into a description of the FTC’s gig-economy policy priorities. It notes first that “workers may be deprived of the protections of an employment relationship” when gig firms classify them as independent contractors, leading to firms’ “disclosing [of] pay and costs in an unfair and deceptive manner.” What’s more, the FTC “also recognizes that misleading claims [made to workers] about the costs and benefits of gig work can impair fair competition among companies in the gig economy and elsewhere.”

These extraordinary statements seem to be saying that the FTC plans to closely scrutinize gig-economy-labor contract negotiations, based on its distaste for independent contracting (which it believes should be supplanted by employer-employee relationships, a question of labor law, not FTC law). Nowhere is it explained where such a novel FTC exercise of authority comes from, nor how such FTC actions have any bearing on harms to consumer welfare. The FTC’s apparent desire to force employment relationships upon gig firms is far removed from harm to competition or unfair or deceptive practices directed at consumers. Without more of an explanation, one is left to conclude that the FTC is proposing to take actions that are far beyond its statutory remit.

The gig statement next tries to tie the FTC’s new gig program to violations of the FTC Act (“unsubstantiated claims”); the FTC’s Franchise Rule; and the FTC’s Business Opportunity Rule, violations of which “can trigger civil penalties.” The statement, however, lacks any logical, coherent explanation of how the new enforcement program follows from these sources of authority. While the statement can point to a few rules-based enforcement actions with some connection to certain terms of employment, such special cases are a far cry from a general justification for turning the FTC into a labor-contracts regulator.

The statement then moves on to the alleged misuse of algorithmic tools dealing with gig-worker contracts and supervision that may lead to unlawful gig-worker oversight and termination. Once again, the connection of any of this to consumer-welfare harm (from a competition or consumer-protection perspective) is not made.

The statement further asserts that FTC Act consumer-protection violations may arise from “nonnegotiable” and other unfair contracts. In support of such a novel exercise of authority, however, the FTC cites supposedly analogous “unfair” clauses found in consumer contracts with individuals or small-business consumers. It is highly doubtful that these precedents support any FTC enforcement actions involving labor contracts.

Noncompete clauses with individuals are next on the gig statement’s agenda. It is claimed that “[n]on-compete provisions may undermine free and fair labor markets by restricting workers’ ability to obtain competitive offers for their services from existing companies, resulting in lower wages and degraded working conditions. These provisions may also raise barriers to entry for new companies.” The assertion, however, that such clauses may violate Section 1 of the Sherman Act or the FTC Act’s Section 5 bar on unfair methods of competition seems dubious, to say the least. Unless there is coordination among companies, these are essentially unilateral contracting practices that may have robust efficiency explanations. Making out these practices to be federal antitrust violations is bad law and bad policy; they are, in any event, subject to a wide variety of state laws.

Even more problematic is the FTC’s claim that a variety of standard (typically efficiency-seeking) contract limitations, such as nondisclosure agreements and liquidated damages clauses, “may be excessive or overbroad” and subject to FTC scrutiny. This preposterous assertion would make the FTC into a second-guesser of common labor contracts (a federal labor-contract regulator, if you will), a role for which it lacks authority and is entirely unsuited. Turning the FTC into a federal labor-contract regulator would impose unjustifiable uncertainty costs on business and chill a host of efficient arrangements. It is hard to take such a claim of power seriously, given its lack of any credible statutory basis.

The final section of the gig statement dealing with FTC enforcement (“Policing Unfair Methods of Competition That Harm Gig Workers”) is unobjectionable, but not particularly informative. It essentially states that the FTC’s black-letter legal authority over anticompetitive conduct also extends to gig companies: the FTC has the authority to investigate and prosecute anticompetitive mergers; agreements among competitors to fix terms of employment; no-poach agreements; and acts of monopolization and attempted monopolization. (Tell us something we did not know!)

The fact that gig-company workers may be harmed by such arrangements is noted. The mere page and a half devoted to this legal summary, however, provides little practical guidance for gig companies as to how to avoid running afoul of the law. Antitrust policy statements may be excused for providing less detailed guidance than antitrust guidelines do, but it would be helpful if they did something more than provide a capsule summary of general American antitrust principles. The gig statement does not pass this simple test.

The gig statement closes with a few glittering generalities. Cooperation with other agencies is highlighted (for example, an information-sharing agreement with the National Labor Relations Board is described). The FTC describes an “Equity Action Plan” calling for a focus on how gig-economy antitrust and consumer-protection abuses harm underserved communities and low-wage workers.

The FTC finishes with a request for input from the public and from gig workers about abusive and potentially illegal gig-sector conduct. No mention is made of the fact that the FTC must, of course, conform itself to the statutory limitations on its jurisdiction in the gig sector, as in all other areas of the economy.

Summing Up the Gig Statement

In sum, the critical flaw of the FTC’s gig statement is its focus on questions of labor law and policy (including the question of independent contractor as opposed to employee status) that are the proper purview of federal and state statutory schemes not administered by the Federal Trade Commission. (A secondary flaw is the statement’s unbalanced portrayal of the gig sector, which ignores its beneficial aspects.) If the FTC decides that gig-economy issues deserve particular enforcement emphasis, it should (and, indeed, must) direct its attention to anticompetitive actions and unfair or deceptive acts or practices that harm consumers.

On the antitrust side, that might include collusion among gig companies on the terms offered to workers or perhaps “mergers to monopoly” between gig companies offering a particular service. On the consumer-protection side, that might include making false or materially misleading statements to consumers about the terms under which they purchase gig-provided services. (It would be conceivable, of course, that some of those statements might be made, unwittingly or not, by gig independent contractors, at the behest of the gig companies.)

The FTC also might carry out gig-industry studies to identify particular prevalent competitive or consumer-protection harms. The FTC should not, however, seek to transform itself into a gig-labor-market enforcer and regulator, in defiance of its lack of statutory authority to play this role.

Conclusion

The FTC does, of course, have a legitimate role to play in challenging unfair methods of competition and unfair acts or practices that undermine consumer welfare wherever they arise, including in the gig economy. But it does a disservice by focusing merely on supposed negative aspects of the gig economy and conjuring up a gig-specific “parade of horribles” worthy of close commission scrutiny and enforcement action.

Many of the “horribles” cited may not even be “bads,” and many of them are, in any event, beyond the proper legal scope of FTC inquiry. There are other federal agencies (for example, the National Labor Relations Board) whose statutes may prove applicable to certain problems noted in the gig statement. In other cases, statutory changes may be required to address certain problems noted in the statement (assuming they actually are problems). The FTC, and its fellow enforcement agencies, should keep in mind, of course, that they are not Congress, and wishing for legal authority to deal with problems does not create it (something the federal judiciary fully understands).  

In short, the negative atmospherics that permeate the gig statement are unnecessary and counterproductive; if anything, they are likely to convince at least some judges that the FTC is not the dispassionate finder of fact and enforcer of law that it claims to be. In particular, the judiciary is unlikely to be impressed by the FTC’s apparent effort to insert itself into questions that lie far beyond its statutory mandate.

The FTC should withdraw the gig statement. If, however, it does not, it should revise the statement in a manner that is respectful of the limits on the commission’s legal authority, and that presents a more dispassionate analysis of gig-economy business conduct.

A White House administration typically announces major new antitrust initiatives in the fall and spring, and this year is no exception. Senior Biden administration officials kicked off the fall season at Fordham Law School (more on that below) by shedding additional light on their plans to expand the accepted scope of antitrust enforcement.

Their aggressive enforcement statements draw headlines, but will the administration’s neo-Brandeisians actually notch enforcement successes? The prospects are cloudy, to say the least.

The U.S. Justice Department (DOJ) has lost some cartel cases in court this year (when was the last time that happened?) and, on Sept. 19, a federal judge rejected the DOJ’s attempt to enjoin UnitedHealth’s $13.8 billion bid for Change Healthcare. The Federal Trade Commission (FTC) recently lost two merger challenges before its in-house administrative law judge. It now faces a challenge to its administrative-enforcement processes before the U.S. Supreme Court (the Axon case, to be argued in November).

(Incidentally, on the other side of the Atlantic, the European Commission has faced some obstacles itself. Despite its recent Google victory, the Commission has effectively lost two abuse-of-dominance cases this year—the Intel and Qualcomm matters—before the European General Court.)

So, are the U.S. antitrust agencies chastened? Will they now go back to basics? Far from it. They are enthusiastically announcing plans to charge ahead, asserting theories of antitrust violations that have not been taken seriously for decades, if ever. Whether this turns out to be wise enforcement policy remains to be seen, but color me highly skeptical. Let’s take a quick look at some of the big enforcement-policy ideas that are being floated.

Fordham Law’s Antitrust Conference

Admiral David Farragut’s order “Damn the torpedoes, full speed ahead!” was key to the Union Navy’s August 1864 victory in the Battle of Mobile Bay, a decisive Civil War clash. Perhaps inspired by this display of risk-taking, the heads of the two federal antitrust agencies—DOJ Assistant Attorney General (AAG) Jonathan Kanter and FTC Chair Lina Khan—took a “damn the economics, full speed ahead” attitude in remarks at the Sept. 16 session of Fordham Law School’s 49th Annual Conference on International Antitrust Law and Policy. Special Assistant to the President Tim Wu was also on hand and emphasized the “all of government” approach to competition policy adopted by the Biden administration.

In his remarks, AAG Kanter seemed to be endorsing a “monopoly broth” argument in decrying the current “Whac-a-Mole” approach to monopolization cases. The intent may be to lessen the burden of proof of anticompetitive effects, or to bring together a string of actions taken jointly as evidence of a Section 2 violation. In taking such an approach, however, there is a serious risk that efficiency-seeking actions may be mistaken for exclusionary tactics and incorrectly included in the broth. (Notably, the U.S. Court of Appeals for the D.C. Circuit’s 2001 Microsoft opinion avoided the monopoly-broth problem by separately discussing specific company actions and weighing them on their individual merits, not as part of a general course of conduct.)

Kanter also recommended going beyond “our horizontal and vertical framework” in merger assessments, despite the fact that vertical mergers (involving complements) are far less likely to be anticompetitive than horizontal mergers (involving substitutes).

Finally, and perhaps most problematically, Kanter endorsed the American Innovation and Choice Online Act (AICOA), citing the protection it would afford “would-be competitors” (but what about consumers?). In so doing, the AAG ignored the fact that AICOA would prohibit welfare-enhancing business conduct and could be harmfully construed to ban mere harm to rivals (see, for example, Stanford professor Doug Melamed’s trenchant critique).

Chair Khan’s presentation, which called for a far-reaching “course correction” in U.S. antitrust, was bolder and more alarming still. She announced plans for a new FTC Act Section 5 “unfair methods of competition” (UMC) policy statement centered on bringing “standalone” cases not reachable under the antitrust laws. Such cases would not consider any potential efficiencies and would not be subject to the rule of reason. Endorsing that approach amounts to an admission that economic analysis will not play a serious role in future FTC UMC assessments (a posture that likely will cause FTC filings to be viewed skeptically by federal judges).

In noting the imminent release of new joint DOJ-FTC merger guidelines, Khan implied that they would be animated by an anti-merger philosophy. She cited “[l]awmakers’ skepticism of mergers” and congressional rejection “of economic debits and credits” in merger law. Khan thus asserted that prior agency merger guidance had departed from the law. I doubt, however, that many courts will be swayed by this “economics free” anti-merger revisionism.

Tim Wu’s remarks closing the Fordham conference had a “big picture” orientation. In an interview with GW Law’s Bill Kovacic, Wu briefly described the Biden administration’s “whole of government” approach, embodied in President Joe Biden’s July 2021 Executive Order on Promoting Competition in the American Economy. While the order’s notion of breaking down existing barriers to competition across the American economy is eminently sound, many of those barriers are caused by government restrictions (not business practices) that are not even alluded to in the order.

Moreover, in many respects, the order seeks to reregulate industries, misdiagnosing many phenomena as business abuses that actually represent efficient free-market practices (as explained by Howard Beales and Mark Jamison in a Sept. 12 Mercatus Center webinar that I moderated). In reality, the order may prove to be on net harmful, rather than beneficial, to competition.

Conclusion

What is one to make of the enforcement officials’ bold interventionist screeds? What seems to be missing in their presentations is a dose of humility and pragmatism, as well as appreciation for consumer welfare (scarcely mentioned in the agency heads’ presentations). It is beyond strange to see agencies that are having problems winning cases under conventional legal theories floating novel far-reaching initiatives that lack a sound economics foundation.

It is also amazing to observe the downplaying of consumer welfare by agency heads, given that, since 1979 (in Reiter v. Sonotone), the U.S. Supreme Court has described antitrust as a “consumer welfare prescription.” Unless there is fundamental change in the makeup of the federal judiciary (and, in particular, the Supreme Court) in the very near future, the new unconventional theories are likely to fail—and fail badly—when tested in court. 

Bringing new sorts of cases to test enforcement boundaries is, of course, an entirely defensible role for U.S. antitrust leadership. But can the same thing be said for bringing “non-boundary” cases based on theories that would have been deemed far beyond the pale by both Republican and Democratic officials just a few years ago? Buckle up: it looks as if we are going to find out. 

The practice of so-called “self-preferencing” has come to embody the zeitgeist of competition policy for digital markets, as jurisdictions around the world undertake legislative initiatives that seek, in various ways, to constrain large digital platforms from granting favorable treatment to their own goods and services. The core concern cited by policymakers is that gatekeepers may abuse their dual role—as both an intermediary and a trader operating on the platform—to pursue a strategy of biased intermediation that entrenches their power in core markets (defensive leveraging) and extends it to associated markets (offensive leveraging).

In addition to active interventions by lawmakers, self-preferencing has also emerged as a new theory of harm before European courts and antitrust authorities. Should antitrust enforcers be allowed to pursue such a theory, they would gain significant leeway to bypass the legal standards and evidentiary burdens traditionally required to prove that a given business practice is anticompetitive. This should be of particular concern, given the broad range of practices and types of exclusionary behavior that could be characterized as self-preferencing—only some of which may, in some specific contexts, include exploitative or anticompetitive elements.

In a new working paper for the International Center for Law & Economics (ICLE), I provide an overview of the relevant traditional antitrust theories of harm, as well as the emerging case law, to analyze whether and to what extent self-preferencing should be considered a new standalone offense under EU competition law. The experience to date in European case law suggests that courts have been able to address platforms’ self-preferencing practices under existing theories of harm, and that self-preferencing may not be sufficiently novel to constitute a standalone theory of harm.

European Case Law on Self-Preferencing

Practices by digital platforms that might be deemed self-preferencing first garnered significant attention from European competition enforcers with the European Commission’s Google Shopping investigation, which examined whether the search engine’s results pages positioned and displayed its own comparison-shopping service more favorably than the websites of rival comparison-shopping services. According to the Commission’s findings, Google’s conduct fell outside the scope of competition on the merits and could have the effect of extending Google’s dominant position in the national markets for general Internet search into adjacent national markets for comparison-shopping services, in addition to protecting Google’s dominance in its core search market.

Rather than explicitly posit that self-preferencing (a term the Commission did not use) constituted a new theory of harm, the Google Shopping ruling described the conduct as belonging to the well-known category of “leveraging.” The Commission therefore did not need to promulgate a new legal test, as it held that the conduct fell under a well-established form of abuse. The case did, however, spur debate over whether the legal tests the Commission did apply effectively imposed on Google a principle of equal treatment of rival comparison-shopping services.

But it should be noted that conduct similar to that alleged in the Google Shopping investigation actually came before the High Court of England and Wales several months earlier, this time in a dispute between Google and Streetmap. At issue in that case were the favorable search results Google granted to its own maps, rather than to competing online maps. The UK Court held, however, that the complaint was more appropriately characterized as an allegation of discrimination; it further found that Google’s conduct did not constitute anticompetitive foreclosure. A similar result was reached in May 2020 by the Amsterdam Court of Appeal in the Funda case.

Conversely, in June 2021, the French Competition Authority (AdlC) followed the European Commission in investigating Google’s practices in the digital-advertising sector. Like the Commission, the AdlC did not explicitly refer to self-preferencing, instead describing the conduct as “favoring.”

Given this background and the proliferation of approaches taken by courts and enforcers to address similar conduct, there was significant anticipation for the judgment that the European General Court would ultimately render in the appeal of the Google Shopping ruling. While the General Court upheld the Commission’s decision, it framed self-preferencing as a discriminatory abuse. Further, the Court outlined four criteria that differentiated Google’s self-preferencing from competition on the merits.

Specifically, the Court highlighted the “universal vocation” of Google’s search engine—that it is open to all users and designed to index results containing any possible content; the “superdominant” position that Google holds in the market for general Internet search; the high barriers to entry in the market for general search services; and what the Court deemed Google’s “abnormal” conduct—behaving in a way that defied expectations, given a search engine’s business model, and that changed after the company launched its comparison-shopping service.

While the precise contours of what the Court might consider discriminatory abuse aren’t yet clear, the decision’s listed criteria appear to be narrow in scope. This stands at odds with the much broader application of self-preferencing as a standalone abuse, both by the European Commission itself and by some national competition authorities (NCAs).

Indeed, just a few weeks after the General Court’s ruling, the Italian Competition Authority (AGCM) handed down a mammoth fine against Amazon over preferential treatment granted to third-party sellers who use the company’s own logistics and delivery services. Rather than reflecting the qualified set of criteria laid out by the General Court, the Italian decision was clearly inspired by the Commission’s approach in Google Shopping. Where the Commission described self-preferencing as a new form of leveraging abuse, AGCM characterized Amazon’s practices as tying.

Self-preferencing has also been raised as a potential abuse in the context of data and information practices. In November 2020, the European Commission sent Amazon a statement of objections detailing its preliminary view that the company had infringed antitrust rules by making systematic use of non-public business data, gathered from independent retailers who sell on Amazon’s marketplace, to advantage the company’s own retail business. (Amazon responded with a set of commitments currently under review by the Commission.)

Both the Commission and the U.K. Competition and Markets Authority have lodged similar allegations against Facebook over data gathered from advertisers and then used to compete with those advertisers in markets in which Facebook is active, such as classified ads. The Commission’s antitrust proceeding against Apple over its App Store rules likewise highlights concerns that the company may use its platform position to obtain valuable data about the activities and offers of its competitors, while competing developers may be denied access to important customer data.

These enforcement actions brought by NCAs and the Commission appear at odds with the more bounded criteria set out by the General Court in Google Shopping, and raise tremendous uncertainty regarding the scope and definition of the alleged new theory of harm.

Self-Preferencing, Platform Neutrality, and the Limits of Antitrust Law

The growing tendency to invoke self-preferencing as a standalone theory of antitrust harm could serve two significant goals for European competition enforcers. As mentioned earlier, it offers a convenient shortcut that could allow enforcers to skip the legal standards and evidentiary burdens traditionally required to prove anticompetitive behavior. Moreover, it can function, in practice, as a means to impose a neutrality regime on digital gatekeepers, with the aims of both ensuring a level playing field among competitors and neutralizing the potential conflicts of interests implicated by dual-mode intermediation.

The dual roles performed by some platforms continue to fuel the never-ending debate over vertical integration, as well as related concerns that, by giving preferential treatment to its own products and services, an integrated provider may leverage its dominance in one market to related markets. From this perspective, self-preferencing is an inevitable byproduct of the emergence of ecosystems.

However, as the Australian Competition and Consumer Commission has recognized, self-preferencing conduct is “often benign.” Furthermore, the total value generated by an ecosystem depends on the activities of independent complementors. Those activities are not completely under the platform’s control, although the platform is required to establish and maintain the governance structures regulating access to and interactions around that ecosystem.

Given this reality, a complete ban on self-preferencing may call the very existence of ecosystems into question, challenging their design and monetization strategies. Preferential treatment can take many different forms with many different potential effects, all stemming from platforms’ many different business models. This counsels for a differentiated, case-by-case, and effects-based approach to assessing the alleged competitive harms of self-preferencing.

Antitrust law does not impose on platforms a general duty to ensure neutrality by sharing their competitive advantages with rivals. Moreover, possessing a competitive advantage does not automatically equal an anticompetitive effect. As the European Court of Justice recently stated in Servizio Elettrico Nazionale, competition law is not intended to protect the competitive structure of the market, but rather to protect consumer welfare. Accordingly, not every exclusionary effect is detrimental to competition. Distinctions must be drawn between foreclosure and anticompetitive foreclosure, as only the latter may be penalized under antitrust.

[This post from Jonathan M. Barnett, the Torrey H. Webb Professor of Law at the University of Southern California’s Gould School of Law, is an entry in Truth on the Market’s continuing FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

In its Advance Notice for Proposed Rulemaking (ANPR) on Commercial Surveillance and Data Security, the Federal Trade Commission (FTC) has requested public comment on an unprecedented initiative to promulgate and implement wide-ranging rules concerning the gathering and use of consumer data in digital markets. In this contribution, I will assume, for the sake of argument, that the commission has the legal authority to exercise its purported rulemaking powers for this purpose without a specific legislative mandate (a question as to which I recognize there is great uncertainty, heightened further by the fact that Congress is concurrently considering legislation in the same policy area).

In considering whether to use these powers for the purposes of adopting and implementing privacy-related regulations in digital markets, the commission would be required to undertake a rigorous assessment of the expected costs and benefits of any such regulation. Any such cost-benefit analysis must comprise at least two critical elements that are omitted from, or addressed in highly incomplete form in, the ANPR.

The Hippocratic Oath of Regulatory Intervention

There is a longstanding consensus that regulatory intervention is warranted only if a market failure can be identified with reasonable confidence. This principle is especially relevant in the case of the FTC, which is entrusted with preserving competitive markets and, therefore, should be hesitant about intervening in market transactions without a compelling evidentiary basis. As a corollary to this proposition, it is also widely agreed that any intervention to correct a market failure would be warranted only to the extent that it could reasonably be expected to correct such failure at a net social gain.

This prudent approach tracks the “economic effect” analysis that the commission must apply in the rulemaking process contemplated under the Federal Trade Commission Act and the analysis of “projected benefits and … adverse economic effects” of proposed and final rules contemplated by the commission’s rules of practice. Consistent with these requirements, the commission has exhibited a longstanding commitment to thorough cost-benefit analysis. As observed by former Commissioner Julie Brill in 2016, “the FTC conducts its rulemakings with the same level of attention to costs and benefits that is required of other agencies.” Former Commissioner Brill also observed that the “FTC combines our broad mandate to protect consumers with a rigorous, empirical approach to enforcement matters.”

This demanding, fact-based protocol enhances the likelihood that regulatory interventions result in a net improvement relative to the status quo, an uncontroversial goal of any rational public policy. Unfortunately, the ANPR does not make clear that the commission remains committed to this methodology.

Assessing Market Failure in the Use of Consumer Data

To even “get off the ground,” any proposed privacy regulation would be required to identify a market failure arising from a particular use of consumer data. This requires a rigorous and comprehensive assessment of the full range of social costs and benefits that can be reasonably attributed to any such practice.

The ANPR’s Oversights

In contrast to the approach described by former Commissioner Brill, several elements of the ANPR raise significant doubts concerning the current commission’s willingness to assess evidence relevant to the potential necessity of privacy-related regulations in a balanced, rigorous, and comprehensive manner.

First, while the ANPR identifies a plethora of social harms attributable to data-collection practices, it merely acknowledges the possibility that consumers enjoy benefits from such practices “in theory.” This skewed perspective is not empirically serious. Focusing almost entirely on the costs of data collection and dismissing as conjecture any possible gains defies market realities, especially given the fact that (as discussed below) those gains are clearly significant and, in some cases, transformative.

Second, the ANPR’s choice of the normatively charged term “data surveillance” to encompass all uses of consumer data conveys the impression that all data collection through digital services is surreptitious or coerced, whereas (as discussed below) some users may knowingly provide such data to enable certain data-reliant functionalities.

Third, there is no mention in the ANPR that online providers widely provide users with notices concerning certain uses of consumer data and often require users to select among different levels of data collection.

Fourth, the ANPR relies to an unusual degree on news websites and non-peer-reviewed publications in the style of policy briefs or advocacy papers, rather than the empirical social-science research on which the commission has historically made policy determinations.

This apparent indifference to analytical balance is particularly exhibited in the ANPR’s failure to address the economic gains generated through the use of consumer data in online markets. As was recognized in a 2014 White House report, many valuable digital services could not function effectively without engaging in some significant level of data collection. The examples are numerous and diverse, including traffic-navigation services that rely on data concerning a user’s geographic location (as well as other users’ geographic location); personalized ad delivery, which relies on data concerning a user’s search history and other disclosed characteristics; and search services, which rely on the ability to use user data to offer search services at no charge while offering targeted advertisements to paying advertisers.

There are equally clear gains on the “supply” side of the market. Data-collection practices can expand market access by enabling smaller vendors to leverage digital intermediaries to attract consumers that are most likely to purchase those vendors’ goods or services. The commission has recognized this point in the past, observing in a 2014 report:

Data brokers provide the information they compile to clients, who can use it to benefit consumers … [C]onsumers may benefit from increased and innovative product offerings fueled by increased competition from small businesses that are able to connect with consumers that they may not have otherwise been able to reach.

Given the commission’s statutory mission under the FTC Act to protect consumers’ interests and preserve competitive markets, these observations should be of special relevance.

Data Protection v. Data-Reliant Functionality

Data-reliant services yield social gains by substantially lowering transaction costs and, in the process, enabling services that would not otherwise be feasible, with favorable effects for consumers and vendors. This observation does not exclude the possibility that specific uses of consumer data may constitute a potential market failure that merits regulatory scrutiny and possible intervention (assuming there is sufficient legal authority for the relevant agency to undertake any such intervention). That depends on whether the social costs reasonably attributable to a particular use of consumer data exceed the social gains reasonably attributable to that use. This basic principle seems to be recognized by the ANPR, which states that the commission can only deem a practice “unfair” under the FTC Act if “it causes or is likely to cause substantial injury” and “the injury is not outweighed by benefits to consumers or competition.”

In implementing this principle, it is important to keep in mind that a market failure could only arise if the costs attributable to any particular use of consumer data are not internalized by the parties to the relevant transaction. This requires showing either that a particular use of consumer data imposes harms on third parties (a plausible scenario in circumstances implicating risks to data security) or that consumers are not aware of, or do not adequately assess or foresee, the costs they incur as a result of such use (a plausible scenario in circumstances implicating risks to consumer data). For the sake of brevity, I will focus on the latter scenario.

Many scholars have taken the view that consumers do not meaningfully read privacy notices or consider privacy risks, although the academic literature has also recognized efforts by private entities to develop notice methodologies that can improve consumers’ ability to do so. Even accepting this view, however, it does not necessarily follow (as the ANPR appears to assume) that a more thorough assessment of privacy risks would inevitably lead consumers to elect higher levels of data privacy even where that would degrade functionality or require paying a positive price for certain services. That is a tradeoff that will vary across consumers. It is therefore difficult to predict and easy to get wrong.

As the ANPR indirectly acknowledges in questions 26 and 40, interventions that bar certain uses of consumer data may therefore harm consumers by compelling the modification, positive pricing, or removal from the market of popular data-reliant services. For this reason, some scholars and commentators have favored the informed-consent approach, which provides users with the option to bar or limit certain uses of their data. This approach minimizes error costs, since it avoids overestimating consumer preferences for privacy. Unlike a flat prohibition of certain uses of consumer data, it also can reflect differences in those preferences across consumers. The ANPR appears to dismiss this concern, asking in question 75 whether certain practices should be made illegal “irrespective of whether consumers consent to them” (emphasis added).

Addressing the still-uncertain body of evidence concerning the tradeoff between privacy protections on the one hand and data-reliant functionalities on the other (as well as the still-unresolved extent to which users can meaningfully make that tradeoff) lies outside the scope of this discussion. However, the critical observation is that any determination of market failure concerning any particular use of consumer data must identify the costs (and specifically, identify non-internalized costs) attributable to any such use and then offset those costs against the gains attributable to that use.

This balancing analysis is critical. As the commission recognized in a 2015 report, it is essential to safeguard consumer privacy without suppressing the economic gains that arise from data-reliant services that can benefit consumers and vendors alike. This even-handed approach is largely absent from the ANPR—which, as noted above, focuses almost entirely on costs while largely overlooking the gains associated with the uses of consumer data in online markets. This suggests a one-sided approach to privacy regulation that is incompatible with the cost-benefit analysis that the commission recognizes it must follow in the rulemaking process.

Private-Ordering Approaches to Consumer-Data Regulation

Suppose that a rigorous and balanced cost-benefit analysis determines that a particular use of consumer data would likely yield social costs that exceed social gains. It would still remain to be determined whether and how a regulator should intervene to yield a net social gain. As regulators make this determination, it is critical that they consider the full range of possible mechanisms to address a particular market failure in the use of consumer data.

Consistent with this approach, the FTC Act specifically requires that the commission specify in an ANPR “possible regulatory alternatives under consideration,” a requirement that is replicated at each subsequent stage of the rulemaking process, as provided in the rules of practice. The range of alternatives should include the possibility of taking no action, if no feasible intervention can be identified that would likely yield a net gain.

In selecting among those alternatives, it is imperative that the commission consider the possibility of unnecessary or overly burdensome rules that could impede the efficient development and supply of data-reliant services, either degrading the quality or raising the price of those services. In the past, the commission has emphasized this concern, stating in 2011 that “[t]he FTC actively looks for means to reduce burdens while preserving the effectiveness of a rule.”

This consideration (which appears to be acknowledged in question 24 of the ANPR) is of special importance to privacy-related regulation, given that the estimated annual costs to the U.S. economy (as calculated by the Information Technology and Innovation Foundation) of compliance with the most extensive proposed forms of privacy-related regulation would exceed $100 billion. Those costs would be especially burdensome for smaller entities, effectively raising entry barriers and reducing competition in online markets (a concern that appears to be acknowledged in question 27 of the ANPR).

Given the exceptional breadth of the rules that the ANPR appears to contemplate—covering an ambitious range of activities that would typically be the subject of a landmark piece of federal legislation, rather than administrative rulemaking—it is not clear that the commission has seriously considered this vital point of concern.

In the event that the FTC does move forward with any of these proposed rulemakings (which would be required to rest on a factually supported finding of market failure), it would confront a range of possible interventions in markets for consumer data. That range is typically viewed as being bounded, on the least-interventionist side, by notice-and-consent requirements to facilitate informed user choice and, on the most-interventionist side, by prohibitions that specifically bar certain uses of consumer data.

This is well-traveled ground within the academic and policy literature, and the relative advantages and disadvantages of each regulatory approach are well-known (and differ depending on the type of consumer data and other factors). Within the scope of this contribution, I wish to address an alternative regulatory approach that lies outside this conventional range of policy options.

Bottom-Up v. Top-Down Regulation

Any cost-benefit analysis concerning potential interventions to modify or bar a particular use of consumer data, or to mandate notice-and-consent requirements in connection with any such use, must contemplate not only government-implemented solutions but also market-implemented solutions, including hybrid mechanisms in which government action facilitates or complements market-implemented solutions.

This is not a merely theoretical proposal (and is referenced indirectly in questions 36, 51, and 87 of the ANPR). As I have discussed in previously published research, the U.S. economy has a long-established record of having adopted, largely without government intervention, collective solutions to the information asymmetries that can threaten the efficient operation of consumer goods and services markets.

Examples abound: Underwriters Laboratories (UL), which establishes product-safety standards in hundreds of markets; large accounting firms, which confirm compliance with Generally Accepted Accounting Principles (GAAP), which are in turn established and updated by the Financial Accounting Standards Board, a private entity subject to oversight by the Securities and Exchange Commission; and rating and certification intermediaries in other markets, such as consumer credit, business credit, insurance, bond issuance, and content ratings in the entertainment and gaming industries. Collectively, these markets encompass thousands of providers, hundreds of millions of customers, and billions of dollars in value.

A collective solution is often necessary to resolve information asymmetries efficiently because establishing an industrywide standard of product or service quality, together with a trusted mechanism for showing compliance with that standard, generates gains that cannot be fully internalized by any single provider.

Jurisdictions outside the United States have tended to address this collective-action problem through the top-down imposition of standards by government mandate and enforcement by regulatory agencies, as illustrated by the jurisdictions referenced by the ANPR that have imposed restrictions on the use of consumer data through direct regulatory intervention. By contrast, the U.S. economy has tended to favor the bottom-up development of voluntary standards, accompanied by certification and audit services, all accomplished by a mix of industry groups and third-party intermediaries. In certain markets, this may be a preferred model to address the information asymmetries between vendors and customers that are the key sources of potential market failure in the use of consumer data.

Privately organized initiatives to set quality standards and monitor compliance benefit the market by supplying a reliable standard that reduces information asymmetries and transaction costs between consumers and vendors. This, in turn, yields economic gains in the form of increased output, since consumers have reduced uncertainty concerning product quality. These quality standards are generally implemented through certification marks (for example, the “UL” certification mark) or ranking mechanisms (for example, consumer-credit or business-credit scores), which induce adoption and compliance through the opportunity to accrue reputational goodwill that, in turn, translates into economic gains.

These market-implemented voluntary mechanisms are a far less costly means to reduce information asymmetries in consumer-goods markets than regulatory interventions, which require significant investments of public funds in rulemaking, detection, investigation, enforcement, and adjudication activities.

Hybrid Policy Approaches

Private-ordering solutions to collective-action failures in markets that suffer from information asymmetries can sometimes benefit from targeted regulatory action, resulting in a hybrid policy approach. In particular, regulators can sometimes perform two supplemental functions in this context.

First, regulators can require that providers in certain markets comply with (or can provide a liability safe harbor for providers that comply with) the quality standards developed by private intermediaries that have established track records of efficiently setting those standards and reliably confirming compliance. This mechanism is anticipated by the ANPR, which asks in question 51 whether the commission should "require firms to certify that their commercial surveillance practices meet clear standards concerning collection, use, retention, transfer, or monetization of consumer data" and further asks whether those standards should be set by "the Commission, a third-party organization, or some other entity."

Other regulatory agencies already follow this model. For example, federal and state regulatory agencies in the fields of health care and education rely on accreditation by designated private entities for purposes of assessing compliance with applicable licensing requirements.

Second, regulators can supervise and review the quality standards implemented, adjusted, and enforced by private intermediaries. This is illustrated by the example of securities markets, in which the major exchanges institute and enforce certain governance, disclosure, and reporting requirements for listed companies but are subject to regulatory oversight by the SEC, which must approve all exchange rules and amendments. Similarly, major accounting firms monitor compliance by public companies with GAAP but must register with, and are subject to oversight by, the Public Company Accounting Oversight Board (PCAOB), a nonprofit entity subject to SEC oversight.

These types of hybrid mechanisms shift to private intermediaries most of the costs involved in developing, updating, and enforcing quality standards (in this context, standards for the use of consumer data) and harness private intermediaries’ expertise, capacities, and incentives to execute these functions efficiently and rapidly, while using targeted forms of regulatory oversight as a complementary policy tool.

Conclusion

Certain uses of consumer data in digital markets may impose net social harms that can be mitigated through appropriately crafted regulation. Assuming, for the sake of argument, that the commission has the legal power to enact regulation to address such harms (again, a point as to which there is great doubt), any specific steps must be grounded in rigorous and balanced cost-benefit analysis.

As a matter of law and sound public policy, it is imperative that the commission meaningfully consider the full range of reliable evidence to identify any potential market failures in the use of consumer data and how to formulate rules to rectify or mitigate such failures at a net social gain. Given the extent to which business models in digital environments rely on the use of consumer data, and the substantial value those business models confer on consumers and businesses, the potential “error costs” of regulatory overreach are high. It is therefore critical to engage in a thorough balancing of costs and gains concerning any such use.

Privacy regulation is a complex and economically consequential policy area that demands careful diagnosis and targeted remedies grounded in analysis and evidence, rather than sweeping interventions accompanied by rhetoric and anecdote.

The Federal Trade Commission (FTC) wants to review in advance all future acquisitions by Facebook parent Meta Platforms. According to a Sept. 2 Bloomberg report, in connection with its challenge to Meta's acquisition of fitness-app maker Within Unlimited, the commission "has asked its in-house court to force both Meta and [Meta CEO Mark] Zuckerberg to seek approval from the FTC before engaging in any future deals."

This latest FTC decision is inherently hyper-regulatory, anti-free market, and contrary to the rule of law. It also is profoundly anti-consumer.

Like other large digital-platform companies, Meta has conferred enormous benefits on consumers (net of payments to platforms) that are not reflected in gross domestic product statistics. In a December 2019 Harvard Business Review article, Erik Brynjolfsson and Avinash Collis reported research finding that Facebook:

…generates a median consumer surplus of about $500 per person annually in the United States, and at least that much for users in Europe. … [I]ncluding the consumer surplus value of just one digital good—Facebook—in GDP would have added an average of 0.11 percentage points a year to U.S. GDP growth from 2004 through 2017.

The acquisition of complementary digital assets—like the popular fitness app produced by Within—enables Meta to continually enhance the quality of its offerings to consumers and thereby expand consumer surplus. It reflects the benefits of economic specialization, as specialized assets are made available to enhance the quality of Meta’s offerings. Requiring Meta to develop complementary assets in-house, when that is less efficient than a targeted acquisition, denies these benefits.

Furthermore, in a recent editorial lambasting the FTC's challenge to the Meta-Within merger as lacking a principled basis, the Wall Street Journal pointed out that the challenge also removes the incentive for venture-capital investments in promising startups, a result at odds with free markets and innovation:

Venture capitalists often fund startups on the hope that they will be bought by larger companies. [FTC Chair Lina] Khan is setting down the marker that the FTC can block acquisitions merely to prevent big companies from getting bigger, even if they don’t reduce competition or harm consumers. This will chill investment and innovation, and it deserves a burial in court.

This is bad enough. But the commission’s proposal to require blanket preapprovals of all future Meta mergers (including tiny acquisitions well under regulatory pre-merger reporting thresholds) greatly compounds the harm from its latest ill-advised merger challenge. Indeed, it poses a blatant challenge to free-market principles and the rule of law, in at least three ways.

  1. It substitutes heavy-handed ex ante regulatory approval for a reliance on competition, with antitrust stepping in only in those limited instances where the hard facts indicate a transaction will be anticompetitive. Indeed, in one key sense, it is worse than traditional economic regulation. Empowering FTC staff to carry out case-by-case reviews of all proposed acquisitions inevitably will generate arbitrary decision-making, perhaps based on a variety of factors unrelated to traditional consumer-welfare-based antitrust. FTC leadership has abandoned sole reliance on consumer welfare as the touchstone of antitrust analysis, paving the way for potentially abusive and arbitrary enforcement decisions. By contrast, statutorily based economic regulation, whatever its flaws, at least imposes specific standards that staff must apply when rendering regulatory determinations.
  2. By abandoning sole reliance on consumer-welfare analysis, FTC reviews of proposed Meta acquisitions may be expected to undermine the major welfare benefits that Meta has previously bestowed upon consumers. Given the untrammeled nature of these reviews, Meta may be expected to be more cautious in proposing transactions that could enhance consumer offerings. What’s more, the general anti-merger bias by current FTC leadership would undoubtedly prompt them to reject some, if not many, procompetitive transactions that would confer new benefits on consumers.
  3. Instituting a system of case-by-case assessment and approval of transactions is antithetical to the normal American reliance on free markets, featuring limited government intervention in market transactions based on specific statutory guidance. The proposed review system for Meta lacks statutory warrant and (as noted above) could promote arbitrary decision-making. As such, it seriously flouts the rule of law and threatens substantial economic harm (sadly consistent with other ill-considered initiatives by FTC Chair Khan, see here and here).

In sum, internet-based industries, and the big digital platforms, have thrived under a system of American technological freedom characterized as “permissionless innovation.” Under this system, the American people—consumers and producers—have been the winners.

The FTC’s efforts to micromanage future business decision-making by Meta, prompted by the challenge to a routine merger, would seriously harm welfare. To the extent that the FTC views such novel interventionism as a bureaucratic template applicable to other disfavored large companies, the American public would be the big-time loser.

[The following is a guest post from Philip Hanspach of the European University Institute.]

There is an emerging debate regarding whether complexity theory—which, among other things, draws lessons about uncertainty and non-linearity from the natural sciences—should make inroads into antitrust (see, e.g., Nicolas Petit and Thibault Schrepel, 2022). Of course, one might also say that antitrust is already quite late to the party. Since the 1990s, complexity theory has made inroads into numerous “hard” and social sciences, from geography and urban planning to cultural studies.

Depending on whom you ask, complexity theory is everything from a revolutionary paradigm to a lazy buzzword. What would it mean to apply it in the context of antitrust and would it, in fact, be useful?

Given its numerous applications, scholars have proposed several definitions of complexity theory, invoking different kinds of complexity. According to one, complexity theory is concerned with the study of complex adaptive systems (CAS)—that is, networks that consist of many diverse, interdependent parts. A CAS may adapt and change, for example, in response to past experience.

That does not sound too strange as a general description either of the economy as a whole or of markets in particular, with consumers, firms, and potential entrants among the numerous moving parts. At the same time, this approach contrasts with orthodox economic theory—specifically, with the game-theory models that rule antitrust debates and that prize simplicity and reductionism.

As both a competition economist and a history buff, my primary point of reference for complexity theory is a debate among Bronze Age scholars. Sound obscure? Bear with me.

The collapse of several flourishing Mediterranean civilizations in the 12th century B.C. (Mycenae and Egypt, to name only two) puzzles historians much as today's economists are stumped by the question of whether any particular merger will raise prices.[1] Both questions face difficulties in gathering sufficient data for empirical analysis (the lack of counterfactuals and foresight in one case, and 3,000 years of decay in the other), forcing a recourse to theory and possibility results.

Earlier Bronze Age scholarship blamed the "Sea Peoples," invaders of unknown origin (possibly Sicily or Sardinia), for the destruction of several thriving cities and states. The primary source for this thesis was statements attributed to the Egyptian pharaoh of the time. More recent research, while acknowledging the role of the Sea Peoples, has gone to lengths to point out that, in many cases, we simply don't know. Alternative explanations (famine, disease, systems collapse) are individually unconvincing, but might each have contributed to the end of various Bronze Age civilizations.

Complexity theory was brought into this discussion with some caution. While acknowledging the theory’s potential usefulness, Eric Cline writes:

We may just be applying a scientific (or possibly pseudoscientific) term to a situation in which there is insufficient knowledge to draw firm conclusions. It sounds nice, but does it really advance our understanding? Is it more than just a fancy way to state a fairly obvious fact?

In a review of Cline’s book, archaeologist Guy D. Middleton agreed that the application of complexity theory might be “useful” but also “obvious.” Similarly, in the context of antitrust, I think complexity theory may serve as a useful framework to understand uncertainty in the marketplace.

Thinking of a market as a CAS can help to illustrate the uncertainty behind every decision. For example, a formal economic model with a clear (at least, to economists) equilibrium outcome might predict that a certain merger will give firms the incentive and ability to reduce spending on research and development. But the lens of complexity theory allows us to better understand why we might still be wrong, or why we are right, but for the wrong reasons.

We can accept that decisions that are relevant and observable to antitrust practitioners (such as price and production decisions) can be driven by things that are small and unobservable. For example, a manager who ultimately calls the shots on R&D budgets for an airplane manufacturer might go to a trade fair and become fascinated by a cool robot that a particular shipyard presented. This might have been the key push that prompted her to finance an unlikely robotics project proposed by her head engineer.

Her firm is, indeed, part of a complex system—one that includes the individual purchase decisions of consumers, customer feedback, reports from salespeople in the field, news from science and business journalists about the next big thing, and impressions at trade fairs and exhibitions. These all coalesce in the manager’s head and influence simple decisions about her R&D budget. But I have yet to see a merger-review decision that predicted effects on innovation from peeking into managers’ minds in such a way.

This little story might be a far-fetched example of the Butterfly Effect, perhaps the most familiar concept from complexity theory. Just as the flaps of a butterfly’s wings might cause a storm on the other side of the world, the shipyard’s earlier decision to invest in a robotic manufacturing technology resulted in our fictitious aircraft manufacturer’s decision to invest more in R&D than we might have predicted with our traditional tools.

Indeed, it is easy to think of other small events that can have consequences leading to price changes that are relevant in the antitrust arena. Remember the cargo ship Ever Given, which blocked the Suez Canal in March 2021? One reason mentioned for its distress was unusually strong winds (whether a butterfly was to blame, I don't know) pushing the highly stacked containers like a sail. The disruption to supply chains was felt in various markets across Europe.

In my opinion, one benefit of admitting this complexity is that it can make ex post evaluation more common in antitrust. Indeed, some researchers are doing great work on this. Enforcers are understandably hesitant to admit that they might get it wrong sometimes, but I believe that we can acknowledge that we will not ultimately know whether merged firms will, say, invest more or less in innovation. Complexity theory tells us that, even if our best and most appropriate model is wrong, the world is not random. It is just very hard to understand and hinges on things that are neither straightforward to observe, nor easy to correctly gauge ex ante.

Turning back to the Bronze Age, scholars have an easier time observing that a certain city was destroyed and abandoned at some point in time than they do in correctly naming the culprit (the Sea Peoples, a rival power, an earthquake?). The appeal of complexity theory is not just that it lifts a scholar's burden to name one or a few predominant explanations, but that it grants confidence that the outcome itself arose out of a complex system: the big and small effects that factors such as famine, trade, weather, and fortune may have had on the city's ability to defend itself against attack, and the individual-but-interrelated decisions of a city's citizens to stay or leave following a catastrophe.

Similarly, for antitrust experts, it is easier to observe a price increase following a merger than to correctly guess its reason. Where economists differ from archaeologists and classicists is that they don’t just study the past. They have to continue exploring the present and future. Imagine that an agency clears a merger that we would have expected not to harm competition, but it turns out, ex post, that it was a bad call. Complexity theory doesn’t just offer excuses for where reality diverged from our prediction. Instead, it can tell us whether our tools were deficient or whether we made an “honest mistake.” As investigations are always costly, it is up to the enforcer (or those setting their budget) to decide whether it makes sense to expand investigations to account for new, complex phenomena (reading the minds of R&D managers will probably remain out of the budget for the foreseeable future).

Finally, economists working on antitrust problems should not see this as belittling their role, but as a welcome frame for their work. Computing diversion ratios or modeling a complex market as a straightforward set of equations might still be the best we can do. A model that is right on average gets us closer to the right answer and is certainly preferred to having no clue what’s going on. Where we don’t have precedent to guide us, we have to resort to models that may be wrong, despite getting everything right that was under our control.

A few things that Petit and Schrepel call for are comfortably established in the economist’s toolkit. They might not, however, always be put to use where they should. Notably, there are feedback loops in dynamic models. Even in static models, it is possible to show how a change in one variable has direct and indirect (second order) effects on an outcome. The typical merger investigation is concerned with short-term effects, perhaps those materializing over the three to five years following a merger. These short-term effects may be relatively easy to approximate in a simple model. Granted, Petit and Schrepel’s article adopts a wide understanding of antitrust—including pro-competitive market regulation—but this seems like an important caveat, nonetheless.
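
To make the direct/indirect distinction concrete, here is a minimal comparative-statics sketch in my own notation (an illustration I am adding, not Petit and Schrepel's formulation): if an outcome Y depends on a variable x both directly and through an intermediate variable z that itself responds to x, then even a static model separates the total effect into direct and indirect pieces:

```latex
\frac{dY}{dx}
= \underbrace{\frac{\partial Y}{\partial x}}_{\text{direct effect}}
+ \underbrace{\frac{\partial Y}{\partial z}\,\frac{dz}{dx}}_{\text{indirect (second-order) effect}}
```

In a merger setting, x might stand for a change in concentration, z for an intermediate response such as rivals' output, and Y for the price outcome of interest; this is standard comparative statics, not something that requires the machinery of complexity theory.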

In conclusion, complexity theory is something economists and lawyers who study markets should learn more about. It's a fascinating research paradigm and a framework in which one can make sense of small and large causes having sometimes unpredictable effects. For antitrust practitioners, it can advance our understanding of why our predictions can fail when the tools and approaches that we use are limited. My hope is that understanding complexity will increase openness to ex post evaluation and help calibrate expectations toward antitrust enforcement (and its limits). At the same time, it is still an (economic) question of costs and benefits as to whether further complications in an antitrust investigation are worth it.


[1] A fascinating introduction that balances approachability and source work is YouTube’s Extra History series on the Bronze Age collapse.

A recent viral video captures a prevailing sentiment in certain corners of social media, and among some competition scholars, about how mergers supposedly work in the real world: firms start competing on price, one firm loses out, that firm agrees to sell itself to the other firm and, finally, prices are jacked up. (Warning: Keep the video muted. The voice-over is painful.)

The story ends there. In this narrative, the combination offers no possible cost savings. The owner of the firm who sold doesn't start a new firm and begin competing tomorrow, nor does anyone else. The story ends with customers getting screwed.

And in this telling, it’s not just horizontal mergers that look like the one in the viral egg video. It is becoming a common theory of harm regarding nonhorizontal acquisitions that they are, in fact, horizontal acquisitions in disguise. The acquired party may possibly, potentially, with some probability, in the future, become a horizontal competitor. And of course, the story goes, all horizontal mergers are anticompetitive.

Therefore, we should have the same skepticism toward all mergers, regardless of whether they are horizontal or vertical. Steve Salop has argued that a problem with the Federal Trade Commission’s (FTC) 2020 vertical merger guidelines is that they failed to adopt anticompetitive presumptions.

This perspective is not just a meme on Twitter. The FTC and U.S. Justice Department (DOJ) are currently revising their guidelines for merger enforcement and have issued a request for information (RFI). The working presumption in the RFI (and we can guess this will show up in the final guidelines) is exactly the takeaway from the video: Mergers are bad. Full stop.

The RFI repeatedly requests information that would support the conclusion that the agencies should strengthen merger enforcement, rather than information that might point toward either stronger or weaker enforcement. For example, the RFI asks:

What changes in standards or approaches would appropriately strengthen enforcement against mergers that eliminate a potential competitor?

This framing presupposes that enforcement should be strengthened against mergers that eliminate a potential competitor.

Do Monopoly Profits Always Exceed Joint Duopoly Profits?

Should we assume enforcement, including vertical enforcement, needs to be strengthened? In a world with lots of uncertainty about which products and companies will succeed, why would an incumbent buy out every potential competitor? The basic idea is that, since industry profits are highest when there is only a single seller, that monopolist will always have an incentive to buy out any competitors.

The punchline for this anti-merger presumption is “monopoly profits exceed duopoly profits.” The argument is laid out most completely by Salop, although the argument is not unique to him. As Salop points out:

I do not think that any of the analysis in the article is new. I expect that all the points have been made elsewhere by others and myself.

Under the model that Salop puts forward, there should, in fact, be a presumption against any acquisition, not just horizontal acquisitions. He argues that:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

We see a presumption against mergers in the recent FTC challenge of Meta's purchase of Within. While Meta owns Oculus, a maker of virtual-reality headsets, and Within owns virtual-reality fitness apps, the FTC challenged the acquisition on grounds that:

The Acquisition would cause anticompetitive effects by eliminating potential competition from Meta in the relevant market for VR dedicated fitness apps.

Given the prevalence of this perspective, it is important to examine the basic model’s assumptions. In particular, is it always true that—since monopoly profits exceed duopoly profits—incumbents have an incentive to eliminate potential competition for anticompetitive reasons?

I will argue no. The notion that monopoly profits exceed joint-duopoly profits rests on two key assumptions that hinder the simple application of the “merge to monopoly” model to antitrust.

First, even in a simple model, it is not always true that monopolists have both the ability and incentive to eliminate any potential entrant, simply because monopoly profits exceed duopoly profits.

For the simplest complication, suppose there are two possible entrants, rather than the common assumption of just one entrant at a time. The monopolist must now pay each of the entrants enough to prevent entry. But how much? If the incumbent has already paid one potential entrant not to enter, the second could then enter the market as a duopolist, rather than as one of three oligopolists. Therefore, the incumbent must pay the second entrant an amount sufficient to compensate a duopolist, not their share of a three-firm oligopoly profit. The same is true for buying the first entrant. To remain a monopolist, the incumbent would have to pay each possible competitor duopoly profits.

Because monopoly profits exceed joint duopoly profits, it is profitable to pay a single potential entrant its duopoly profit (half of the joint duopoly profit) to stay out of the market. It is not, however, necessarily profitable for the incumbent to pay each of several potential entrants that amount to prevent entry by all of them.

Now go back to the video. Suppose two passersby, who also happen to have chickens at home, notice that they can sell their eggs. The best part? They don’t have to sit around all day; the lady on the right will buy them. The next day, perhaps, two new egg sellers arrive.

For a simple example, consider a Cournot oligopoly model with an industry inverse-demand curve of P(Q)=1-Q and constant marginal costs normalized to zero. In a market with N symmetric sellers, each seller earns 1/((N+1)^2) in profits. A monopolist thus makes a profit of 1/4, while each duopolist earns 1/9. If there are three potential entrants, the incumbent must pay each of them the duopoly profit of 1/9, for a total of 3 × 1/9 = 1/3, which exceeds the monopoly profit of 1/4.
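
To make the arithmetic concrete, here is a minimal Python sketch of that logic (my own illustration built on the post's Cournot assumptions; treating the incumbent's choice as all-or-nothing, either paying off every potential entrant or letting them all in, is a simplification I am adding):

```python
# Cournot setup from the text: inverse demand P(Q) = 1 - Q, zero marginal
# cost, and N symmetric firms, so each firm earns 1/(N+1)^2 in profit.

def cournot_profit(n_firms: int) -> float:
    """Per-firm Cournot profit with P(Q) = 1 - Q and zero marginal cost."""
    return 1.0 / (n_firms + 1) ** 2

MONOPOLY_PROFIT = cournot_profit(1)  # 1/4
DUOPOLY_PROFIT = cournot_profit(2)   # 1/9, the payment each potential entrant demands

for k in range(1, 5):  # k = number of potential entrants
    # Buy them all out: keep the monopoly profit but pay each entrant 1/9.
    buyout_payoff = MONOPOLY_PROFIT - k * DUOPOLY_PROFIT
    # Fallback: all k enter and the incumbent is one of k+1 oligopolists.
    entry_payoff = cournot_profit(k + 1)
    choice = "buy out all entrants" if buyout_payoff > entry_payoff else "allow entry"
    print(f"{k} potential entrant(s): buyout nets {buyout_payoff:+.3f}, "
          f"entry nets {entry_payoff:.3f} -> {choice}")
```

With one potential entrant, the buyout pays (about 0.139 versus 0.111); with two or more, paying everyone off is worse than tolerating entry (and with three, the payments exceed the monopoly profit outright).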

In the Nash/Cournot equilibrium, the incumbent acquires none of the potential entrants, since it is too costly to keep them all out. With enough potential entrants, the monopolist in any market will not want to buy any of them out, and the outcome involves no acquisitions.

If we observe an acquisition in a market with many potential entrants (which any given market may or may not have), the merger cannot be solely about obtaining monopoly profits, since the model above shows that the incumbent has no incentive to buy them all out.

If our model captures the dynamics of the market (which it may or may not, depending on a given case's circumstances) but we nonetheless observe mergers, there must be another reason for those deals besides maintaining a monopoly. The presence of multiple potential entrants overturns the antitrust implications of the truism that monopoly profits exceed duopoly profits. The question turns instead on empirical analysis of the merger and market in question: would it be profitable to acquire all potential entrants?

The second simplifying assumption that restricts the applicability of Salop’s baseline model is that the incumbent has the lowest cost of production. He rules out the possibility of lower-cost entrants in Footnote 2:

Monopoly profits are not always higher. The entrant may have much lower costs or a better or highly differentiated product. But higher monopoly profits are more usually the case.

If one allows the possibility that an entrant may have lower costs (even if those lower costs won’t be achieved until the future, when the entrant gets to scale), it does not follow that monopoly profits (under the current higher-cost monopolist) necessarily exceed duopoly profits (with a lower-cost producer involved).

One cannot simply assume that all firms have the same costs or that the incumbent is always the lowest-cost producer. This is not just a modeling choice but has implications for how we think about mergers. As Geoffrey Manne, Sam Bowman, and Dirk Auer have argued:

Although it is convenient in theoretical modeling to assume that similarly situated firms have equivalent capacities to realize profits, in reality firms vary greatly in their capabilities, and their investment and other business decisions are dependent on the firm’s managers’ expectations about their idiosyncratic abilities to recognize profit opportunities and take advantage of them—in short, they rest on the firm managers’ ability to be entrepreneurial.

Given the assumptions that all firms have identical costs and there is only one potential entrant, Salop’s framework would find that all possible mergers are anticompetitive and that there are no possible efficiency gains from any merger. That’s the thrust of the video. We assume that the whole story is two identical-seeming women selling eggs. Since the acquired firm cannot, by assumption, have lower costs of production, it cannot improve on the incumbent’s costs of production.

Many Reasons for Mergers

But whether a merger is efficiency-reducing and bad for competition and consumers needs to be proven, not just assumed.

If we take the basic acquisition model literally, every industry would have just one firm. Every incumbent would acquire every possible competitor, no matter how small. After all, monopoly profits are higher than duopoly profits, and so the incumbent both wants to and can preserve its monopoly profits. The model gives us no way to determine where mergers would stop absent antitrust enforcement.

Under this assumption, mergers do not affect the production side of the economy but exist solely to gain the market power to manipulate prices. Since the model finds no downside for the incumbent in acquiring a competitor, it would naturally acquire every last potential competitor, no matter how small, unless prevented by law.

Once we allow for the possibility that firms differ in productivity, however, it is no longer true that monopoly profits are greater than industry duopoly profits. We can see this most clearly in situations where there is “competition for the market” and the market is winner-take-all. If the entrant to such a market has lower costs, the profit under entry (when one firm wins the whole market) can be greater than the original monopoly profits. In such cases, monopoly maintenance alone cannot explain an entrant’s decision to sell.
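
As a numeric sketch of that possibility (my own construction, reusing the P(Q) = 1 - Q demand curve from the Cournot example above, with hypothetical cost levels):

```python
# Winner-take-all illustration: under inverse demand P(Q) = 1 - Q with constant
# marginal cost c, the monopoly quantity is (1 - c)/2 and monopoly profit is
# (1 - c)^2 / 4. A lower-cost entrant that wins the whole market can therefore
# earn more than the incumbent's original monopoly profit.

def monopoly_profit(cost: float) -> float:
    """Monopoly profit under P(Q) = 1 - Q with constant marginal cost."""
    return (1.0 - cost) ** 2 / 4.0

INCUMBENT_COST = 0.4  # hypothetical
ENTRANT_COST = 0.1    # hypothetical lower-cost rival

print(monopoly_profit(INCUMBENT_COST))  # 0.09:   the incumbent's current monopoly profit
print(monopoly_profit(ENTRANT_COST))    # 0.2025: the market's value if the entrant wins it
```

Because 0.2025 exceeds 0.09, the surplus from letting the lower-cost entrant take over the market is larger than anything the incumbent could preserve by paying the entrant to stay out, so monopoly maintenance cannot explain a sale in this setting.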

An acquisition could therefore be both procompetitive and increase consumer welfare. For example, the acquisition could allow the lower-cost entrant to get to scale quicker. The acquisition of Instagram by Facebook, for example, brought the photo-editing technology that Instagram had developed to a much larger market of Facebook users and provided a powerful monetization mechanism that was otherwise unavailable to Instagram.

In short, the notion that incumbents can systematically and profitably maintain their market position by acquiring potential competitors rests on assumptions that, in practice, will regularly and consistently fail to materialize. It is thus improper to assume that most of these acquisitions reflect efforts by an incumbent to anticompetitively maintain its market position.

[This post is a contribution to Truth on the Market‘s continuing digital symposium “FTC Rulemaking on Unfair Methods of Competition.” You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

In a recent op-ed for the Wall Street Journal, Svetlana Gans and Eugene Scalia look at three potential traps the Federal Trade Commission (FTC) could trigger if it pursues the aggressive rulemaking agenda many have long been expecting. From their opening:

FTC Chairman Lina Khan has Rooseveltian ambitions for the agency. … Within weeks the FTC is expected to begin a blizzard of rule-makings that will include restrictions on employment noncompete agreements and the practices of technology companies.

If Ms. Khan succeeds, she will transform the FTC’s regulation of American business. But there’s a strong chance this regulatory blitz will fail. The FTC is a textbook case for how federal agencies could be affected by the re-examination of administrative law under way at the Supreme Court.

The first pitfall into which the FTC might fall, Gans and Scalia argue, is the "major questions" doctrine. Recently illuminated in the Supreme Court's West Virginia v. EPA decision, the doctrine holds that federal agencies cannot enact regulations of vast economic and political significance without clear congressional authorization. The sorts of rules the FTC appears to be contemplating "would run headlong into" major questions, Gans and Scalia write, a position shared by several contributors to Truth on the Market's recent symposium on the potential for FTC rulemakings on unfair methods of competition (UMC).

The second trap the authors expect might trip up an ambitious FTC is the major questions doctrine’s close cousin: the nondelegation doctrine. The nondelegation doctrine holds that there are limits to how much authority Congress can delegate to a federal agency, even if it does so clearly.

Curiously, as Gans and Scalia note, the last case in which the Supreme Court invoked the nondelegation doctrine involved regulations to implement "codes of fair competition"—nearly identical, on their face, to the commission's current interest in rules to prohibit unfair methods of competition. That case, Schechter Poultry Corp. v. United States, is more than 80 years old, and the doctrine has since lain dormant for multiple generations. But in recent years, several justices have signaled their openness to reinvigorating it. As Gans and Scalia note, "[a]n aggressive FTC competition rule could be a tempting target" for them.

Finally, the authors anticipate that an overly aggressive FTC may find itself entangled in yet another thorny web, this one wrapped around the very heart of the administrative state: the constitutionality of so-called independent agencies. Again, the relevant constitutional doctrine giving rise to these agencies results from another 1935 case involving the FTC itself: Humphrey's Executor v. United States. While the Court in that opinion upheld the notion that Congress can create agencies led by officials who operate independently of direct presidential control, conservative justices have long questioned the doctrine's legitimacy, and the Roberts court, in particular, has trimmed its outer limits. An overly aggressive FTC might present an opportunity to further check the independence of these agencies.

While it remains unclear precisely which rules the FTC will seek to develop using its UMC authority, the clearest signs are that it will focus first on labor issues, such as labor monopsony and firms' use of noncompete clauses, where research is emerging. Indeed, Eric Posner, who joined the U.S. Justice Department Antitrust Division earlier this year as counsel on these issues, recently acknowledged: "There is this very close and complicated relationship between labor law and antitrust law that has to be maintained."

If the FTC were to upset this relationship, such as by using its UMC authority either to circumvent the National Labor Relations Board in addressing competition concerns or to assist the NLRB in exceeding its own statutory authority, it would be unsurprising for the courts to exercise their constitutional role as a check on a rogue agency.