Archives For Truth on the Market

With just a week to go until the U.S. midterm elections, which potentially herald a change in control of one or both houses of Congress, speculation is mounting that congressional Democrats may seek to use the lame-duck session following the election to move one or more pieces of legislation targeting the so-called “Big Tech” companies.

Gaining particular notice—on grounds that it is the least controversial of the measures—is S. 2710, the Open App Markets Act (OAMA). Introduced by Sen. Richard Blumenthal (D-Conn.), the Senate bill has garnered 14 cosponsors: exactly seven Republicans and seven Democrats. It would, among other things, force certain mobile app stores and operating systems to allow “sideloading” and open their platforms to rival in-app payment systems.

Unfortunately, even this relatively restrained legislation—at least, when compared to Sen. Amy Klobuchar’s (D-Minn.) American Innovation and Choice Online Act or the European Union’s Digital Markets Act (DMA)—is highly problematic in its own right. Here, I will offer seven major questions the legislation leaves unresolved.

1. Are Quantitative Thresholds a Good Indicator of ‘Gatekeeper Power’?

It is no secret that OAMA has been tailor-made to regulate two specific app stores: the Google Play Store and the Apple App Store (see here, here, and, yes, even Wikipedia knows it). The text makes this clear by limiting the bill’s scope to app stores with more than 50 million users, a threshold that only Google Play and the Apple App Store currently satisfy.

However, purely quantitative thresholds are a poor indicator of a company’s potential “gatekeeper power.” An app store might have far fewer than 50 million users but cater to a relevant niche market. By the bill’s own logic, why shouldn’t that app store likewise be compelled to be open to competing app distributors? Conversely, it may be easy for users of very large app stores to multi-home or switch seamlessly to competing stores. In either case, raw user data paints a distorted picture of the market’s realities.

As it stands, the bill’s thresholds appear arbitrary and pre-committed to “disciplining” just two companies: Google and Apple. In principle, good laws should be abstract and general and not intentionally crafted to apply only to a few select actors. In OAMA’s case, the law’s specific thresholds are also factually misguided, as purely quantitative criteria are not a good proxy for the sort of market power the bill purportedly seeks to curtail.

2. Why Does the Bill Not Apply to All App Stores?

Rather than applying to app stores across the board, OAMA targets only those associated with mobile devices and “general purpose computing devices.” It’s not clear why.

For example, why doesn’t it cover app stores on gaming platforms, such as Microsoft’s Xbox or Sony’s PlayStation?

Source: Visual Capitalist

Currently, a PlayStation user can only buy digital games through the PlayStation Store, where Sony reportedly takes a 30% cut of all sales—although its pricing schedule is less transparent than that of mobile rivals such as Apple or Google.

Clearly, this bothers some developers. Much like Epic Games CEO Tim Sweeney’s ongoing crusade against the Apple App Store, indie-game publisher Iain Garner of Neon Doctrine recently took to Twitter to complain about Sony’s restrictive practices. According to Garner, “Platform X” (clearly PlayStation) charges developers up to $25,000 and 30% of subsequent earnings to give games a modicum of visibility on the platform, in addition to requiring them to jump through such hoops as making a PlayStation-specific trailer and writing a blog post. Garner further alleges that Sony severely circumscribes developers’ ability to offer discounts, “meaning that Platform X owners will always get the worst deal!” (see also here).

Microsoft’s Xbox Game Store similarly takes a 30% cut of sales. Presumably, Microsoft and Sony both have the same type of gatekeeper power in the gaming-console market that Apple and Google are said to have on their respective platforms, leading to precisely the issues that OAMA purports to combat: consumers are not allowed to choose alternative app stores through which to buy games on their consoles, and developers must acquiesce to Sony’s and Microsoft’s terms if they want their games to reach those players.

More broadly, dozens of online platforms also charge commissions on the sales made by their creators. To cite but a few: OnlyFans takes a 20% cut of sales; Facebook gets 30% of the revenue that creators earn from their followers; YouTube takes 45% of ad revenue generated by users; and Twitch reportedly rakes in 50% of subscription fees.

This is not to say that all these services are monopolies that should be regulated. To the contrary, fees in the 20-30% range appear to be common even in highly competitive environments. Rather, it is merely to observe that dozens of online platforms demand a percentage of the revenue their creators generate and prevent those creators from bypassing the platform. And rightly so: creating and improving a platform is not free.

It is nonetheless difficult to see why legislation regulating online marketplaces should focus solely on two mobile app stores. Ultimately, the inability of OAMA’s sponsors to properly account for this carveout diminishes the law’s credibility.

3. Should Picking Among Legitimate Business Models Be Up to Lawmakers or Consumers?

“Open” and “closed” platforms posit two different business models, each with its own advantages and disadvantages. Some consumers may prefer more open platforms because they grant them more flexibility to customize their mobile devices and operating systems. But there are also compelling reasons to prefer closed systems. As Sam Bowman observed, narrowing choice through a more curated system frees users from having to research every possible option every time they buy or use some product. Instead, they can defer to the platform’s expertise in determining whether an app or app store is trustworthy or whether it contains, say, objectionable content.

Currently, users can choose to opt for Apple’s semi-closed “walled garden” iOS or Google’s relatively more open Android OS (which OAMA wants to pry open even further). Ironically, under the pretext of giving users more “choice,” OAMA would take away the possibility of choice where it matters the most—i.e., at the platform level. As Mikolaj Barczentewicz has written:

A sideloading mandate aims to give users more choice. It can only achieve this, however, by taking away the option of choosing a device with a “walled garden” approach to privacy and security (such as is taken by Apple with iOS).

This erases the distinction between the two and pushes Android and iOS to converge on a single model. But if consumers unequivocally preferred open platforms, Apple would have no customers, because everyone would already be on Android.

Contrary to regulators’ simplistic assumptions, “open” and “closed” are not synonyms for “good” and “bad.” Instead, as Boston University’s Andrei Hagiu has shown, there are fundamental welfare tradeoffs at play between these two perfectly valid business models that belie simplistic characterizations of one being inherently superior to the other.

It is debatable whether courts, regulators, or legislators are well-situated to resolve these complex tradeoffs by substituting businesses’ product-design decisions and consumers’ revealed preferences with their own. After all, if regulators had such perfect information, we wouldn’t need markets or competition in the first place.

4. Does OAMA Account for the Security Risks of Sideloading?

Platforms retaining some control over the apps or app stores allowed on their operating systems bolsters security, as it allows companies to weed out bad players.

Both Apple and Google do this, albeit to varying degrees. For instance, Android already allows sideloading and third-party in-app payment systems to some extent, while Apple runs a tighter ship. However, studies have shown that it is precisely the iOS “walled garden” model which gives it an edge over Android in terms of privacy and security. Even vocal Apple critic Tim Sweeney recently acknowledged that increased safety and privacy were competitive advantages for Apple.

The problem is that far-reaching sideloading mandates—such as the ones contemplated under OAMA—are fundamentally at odds with current privacy and security capabilities (see here and here).

OAMA’s defenders might argue that the law does allow covered platforms to raise safety and security defenses, thus making the tradeoffs between openness and security unnecessary. But the bill places such stringent conditions on those defenses that platform operators will almost certainly be deterred from risking running afoul of the law’s terms. To invoke the safety and security defenses, covered companies must demonstrate that provisions are applied on a “demonstrably consistent basis”; are “narrowly tailored and could not be achieved through less discriminatory means”; and are not used as a “pretext to exclude or impose unnecessary or discriminatory terms.”

Implementing these stringent requirements will drag enforcers into a micromanagement quagmire. There are thousands of potential spyware, malware, rootkit, backdoor, and phishing (to name just a few) software-security issues—all of which pose distinct threats to an operating system. The Federal Trade Commission (FTC) and the federal courts will almost certainly struggle to police the “consistency” requirement across such varied threat types.

Likewise, OAMA’s reference to “least discriminatory means” suggests there is only one valid answer to any given security-access tradeoff. Further, depending on one’s preferred balance between security and “openness,” a claimed security risk may or may not be “pretextual,” and thus may or may not be legal.

Finally, the bill text appears to preclude the possibility of denying access to a third-party app or app store for reasons other than safety and privacy. This would undermine Apple’s and Google’s two-tiered quality-control systems, which also control for “objectionable” content such as (child) pornography and social engineering. 

5. How Will OAMA Safeguard the Rights of Covered Platforms?

OAMA is also deeply flawed from a procedural standpoint. Most importantly, there is no meaningful way for a firm to contest its designation as a “covered company,” or the harms that designation is presumed to entail.

Once a company is “covered,” it is presumed to hold gatekeeper power, with all the associated risks for competition, innovation, and consumer choice. Remarkably, this presumption does not admit any qualitative or quantitative evidence to the contrary. The only thing a covered company can do to rebut the designation is to demonstrate that it, in fact, has fewer than 50 million users.

By preventing companies from showing that they do not hold the kind of gatekeeper power that harms competition, decreases innovation, raises prices, and reduces choice (the bill’s stated objectives), OAMA severely tilts the playing field in the FTC’s favor. Even the EU’s enforcer-friendly DMA incorporated a last-minute amendment allowing firms to dispute their status as “gatekeepers.” While this defense is not perfect (companies cannot rely on the same qualitative evidence that the European Commission can use against them), at least gatekeeper status can be contested under the DMA.

6. Should Legislation Protect Competitors at the Expense of Consumers?

Like most of the new wave of regulatory initiatives against Big Tech (but unlike antitrust law), OAMA is explicitly designed to help competitors, with consumers footing the bill.

For example, OAMA prohibits covered companies from using or combining nonpublic data obtained from third-party apps or app stores operating on their platforms in competition with those third parties. While this may have the short-term effect of redistributing rents away from these platforms and toward competitors, it risks harming consumers and third-party developers in the long run.

Platforms’ ability to integrate such data is part of what allows them to bring better and improved products and services to consumers in the first place. OAMA tacitly admits this by recognizing that the use of nonpublic data grants covered companies a competitive advantage. In other words, it allows them to deliver a product that is better than competitors’.

Prohibiting self-preferencing raises similar concerns. Why shouldn’t a company that has invested billions in developing a successful platform and ecosystem give preference to its own products to recoup some of that investment? After all, the possibility of exercising some control over downstream and adjacent products is what might have driven the platform’s development in the first place. In other words, self-preferencing may be a symptom of competition, and not the absence thereof. Third-party companies also would have weaker incentives to develop their own platforms if they could free-ride on the investments of others. And platforms that favor their own downstream products might simply be better positioned to guarantee their quality and reliability (see here and here).

In all of these cases, OAMA’s myopic focus on improving the lot of competitors for easy political points will upend the mobile ecosystems from which both users and developers derive significant benefit.

7. Shouldn’t the EU Bear the Risks of Bad Tech Regulation?

Finally, U.S. lawmakers should ask themselves whether the European Union, which has no tech leaders of its own, is really a model to emulate. Today, after all, marks the day the long-awaited Digital Markets Act—the EU’s response to perceived contestability and fairness problems in the digital economy—officially takes effect. In anticipation of the law entering into force, I summarized some of the outstanding issues that will define implementation moving forward in this recent tweet thread.

We have been critical of the DMA here at Truth on the Market on several factual, legal, economic, and procedural grounds. The law’s problems range from its essentially being a tool to redistribute rents away from platforms and toward third parties, despite it being unclear why the latter group is inherently more deserving (Pablo Ibañez Colomo has raised a similar point); to its opacity and lack of clarity, and a process that appears tilted in the Commission’s favor; to the awkward way it interacts with EU competition law, ignoring the welfare tradeoffs between the models it seeks to impose and perfectly valid alternatives (see here and here); to its flawed assumptions (see, e.g., here on contestability under the DMA); to the dubious legal and economic value of the theory of harm known as “self-preferencing”; to the very real possibility of unintended consequences (e.g., in relation to security and interoperability mandates).

In other words, that the United States lags the EU in seeking to regulate this area might not be a bad thing, after all. Despite the EU’s insistence on being a trailblazing agenda-setter at all costs, the wiser thing in tech regulation might be to remain at a safe distance. This is particularly true when one considers the potentially large costs of legislative missteps and the difficulty of recalibrating once a course has been set.

U.S. lawmakers should take advantage of this dynamic and learn from some of the Old Continent’s mistakes. If they play their cards right and take the time to read the writing on the wall, they might just succeed in averting antitrust’s uncertain future.

Faithful and even occasional readers of this roundup might have noticed a certain temporal discontinuity between the last post and this one. The inimitable Gus Hurwitz has passed the scrivener’s pen to me, a recent refugee from the Federal Trade Commission (FTC), and the roundup is back in business. Any errors going forward are mine. Going back, blame Gus.

Commissioner Noah Phillips departed the FTC last Friday, leaving the Commission down a much-needed advocate for consumer welfare and the antitrust laws as they are, if not as some wish they were. I recommend the reflections posted by Commissioner Christine S. Wilson and my fellow former FTC Attorney Advisor Alex Okuliar. Phillips collaborated with his fellow commissioners on matters grounded in the law and evidence, but he wasn’t shy about crying frolic and detour when appropriate.

The FTC without Noah is a lesser place. Still, while it’s not always obvious, many able people remain at the Commission and some good solid work continues. For example, FTC staff filed comments urging New York State to reject a Certificate of Public Advantage (“COPA”) application submitted by SUNY Upstate Health System and Crouse Medical. The staff’s thorough comments reflect investigation of the proposed merger, recent research, and the FTC’s long experience with COPAs. In brief, the staff identified anticompetitive rent-seeking for what it is. Antitrust exemptions for health-care providers tend to make health care worse, but more expensive. Which is a corollary to the evergreen truth that antitrust exemptions help the special interests receiving them but not a living soul besides those special interests. That’s it, full stop.

More Good News from the Commission

On Sept. 30, a unanimous Commission announced that an independent physician association in New Mexico had settled allegations that it violated a 2005 consent order. The allegations? Roughly 400 physicians—independent competitors—had engaged in price fixing, violating both the 2005 order and the Sherman Act. As the concurring statement of Commissioners Phillips and Wilson put it, the new order “will prevent a group of doctors from allegedly getting together to negotiate… higher incomes for themselves and higher costs for their patients.” Oddly, some have chastised the FTC for bringing the action as anti-labor. But the IPA is a regional “must-have” for health plans and a dominant provider to consumers, including patients, who might face tighter budget constraints than the median physician.

Peering over the rims of the rose-colored glasses, my gaze turns to Meta. In July, the FTC sued to block Meta’s proposed acquisition of Within Unlimited (and its virtual-reality exercise app, Supernatural). Gus wrote about it with wonder, noting reports that the staff had recommended against filing, only to be overruled by the chair.

Now comes October and an amended complaint. The amended complaint is even weaker than the opening salvo. Now, the FTC alleges that the acquisition would eliminate potential competition from Meta in a narrower market, VR-dedicated fitness apps, by “eliminating any probability that Meta would enter the market through alternative means absent the Proposed Acquisition, as well as eliminating the likely and actual beneficial influence on existing competition that results from Meta’s current position, poised on the edge of the market.”

So what if Meta were to abandon the deal—as the FTC wants—but not enter on its own? Same effect, but the FTC cannot seriously suggest that Meta has a positive duty to enter the market. Is there a jurisdiction (or a planet) where a decision to delay or abandon entry would be unlawful unilateral conduct? Suppose instead that Meta enters, with virtual-exercise guns blazing, much to the consternation of firms actually in the market, which might complain about it. Then what? Would the Commission cheer or would it allege harm to nascent competition, or perhaps a novel vertical theory? And by the way, how poised is Meta, given no competing product in late-stage development? Would the FTC prefer that Meta buy a different competitor? Should the overworked staff commence Meta’s due diligence?

Potential competition cases are viable given the right facts, and in areas where good grounds to predict significant entry are well-established. But this is a nascent market in a large, highly dynamic, and innovative industry. The competitive landscape a few years down the road is anyone’s guess. More speculation: the staff was right all along. For more, see Dirk Auer’s or Geoffrey Manne’s threads on the amended complaint.

When It Rains It Pours Regulations

On Aug. 22, the FTC published an advance notice of proposed rulemaking (ANPR) to consider the potential regulation of “commercial surveillance and data security” under its Section 18 authority. Shortly thereafter, the Commission announced an Oct. 20 open meeting with three more ANPRs on the agenda.

First, on the advance notice: I’m not sure what they mean by “commercial surveillance.” The term doesn’t appear in statutory law, or in prior FTC enforcement actions. It sounds sinister and, surely, it’s an intentional nod to Shoshana Zuboff’s anti-tech polemic “The Age of Surveillance Capitalism.” One thing is plain enough: the proffered definition is as dramatically sweeping as it is hopelessly vague. The Commission seems to be contemplating a general data regulation of some sort, but we don’t know what sort. They don’t say or even sketch a possible rule. That’s a problem for the FTC, because the law demands that the Commission state its regulatory objectives, along with regulatory alternatives under consideration, in the ANPR itself. If they get to an NPRM, they are required to describe a proposed rule with specificity.

What’s clear is that the ANPR takes a dim view of much of the digital economy. And while the Commission has considerable experience in certain sorts of privacy and data security matters, the ANPR hints at a project extending well past that experience. Commissioners Phillips and Wilson dissented for good and overlapping reasons. Here’s a bit from the Phillips dissent:

When adopting regulations, clarity is a virtue. But the only thing clear in the ANPR is a rather dystopic view of modern commerce…. I cannot support an ANPR that is the first step in a plan to go beyond the Commission’s remit and outside its experience to issue rules that fundamentally alter the internet economy without a clear congressional mandate…. It’s a naked power grab.

Be sure to read the bonus material in the Federal Register—supporting statements from Chair Lina Khan and Commissioners Rebecca Kelly Slaughter and Alvaro Bedoya, and dissenting statements from Commissioners Phillips and Wilson. Chair Khan breezily states that “the questions we ask in the ANPR and the rules we are empowered to issue may be consequential, but they do not implicate the ‘major questions doctrine.’” She’s probably half right: the questions do not violate the Constitution. But she’s probably half wrong too.

For more, see ICLE’s Oct. 20 panel discussion and the executive summary to our forthcoming comments to the Commission.

But wait, there’s more! There were three additional ANPRs on the Commission’s Oct. 20 agenda. So that’s four and counting. Will there be a proposed rule on non-competes? Gig workers? Stay tuned. For now, note that rules are not self-enforcing, and that the chair has testified to Congress that the Commission is strapped for resources and struggling to keep up with its statutory mission. Are more regulations an odd way to ask Congress for money? Thus far, there’s no proposed rule on gig workers, but there was a Policy Statement on Enforcement Related to Gig Workers. For more on that story, see Alden Abbott’s TOTM post.

Laws, Like People, Have Their Limits

Read Phillips’s parting dissent in Passport Auto Group, where the Commission combined legitimate allegations with an unhealthy dose of overreach:

The language of the unfairness standard has given the FTC the flexibility to combat new threats to consumers that accompany the development of new industries and technologies. Still, there are limits to the Commission’s unfairness authority. Because this complaint includes an unfairness count that aims to transform Section 5 into an undefined discrimination statute, I respectfully dissent.

Right. Three cheers for effective enforcement of the focused antidiscrimination laws enacted by Congress by the agencies actually charged to enforce those laws. And to equal protection. And three more, at least, for a little regulatory humility, if we find it.

This post is the third in a three-part series. The first installment can be found here and the second can be found here.

As it has before in its history, liberalism again finds itself at an existential crossroads, with liberally oriented reformers generally falling into two camps: those who seek to subordinate markets to some higher vision of the common good and those for whom the market itself is the common good. The former seek to rein in, temper, order, and discipline unfettered markets, while the latter strive to build on the foundations of classical liberalism to perfect market logic, rather than to subvert it.

This conflict of visions has deep ramifications for today’s economic policy. In his classic text “The Antitrust Paradox,” Judge Robert Bork deemed antitrust law a “subcategory of ideology” that “connects with the central political and social concerns of our time.” Among these concerns, he focused specifically on the eternal tension between the ideals of “equality” and “freedom.” In recent years, that tension has been exemplified in competition-policy debates by two schools of thought: the neo-Brandeisians, whose jurisprudential philosophy draws from the progressive U.S. Supreme Court Justice Louis Brandeis, and another group represented by the Chicago School and other defenders of the consumer-welfare standard.

But this schism resembles similar divides that have played out countless times over the history of liberalism, albeit under different names and banners. Looking back on the past century and a half of economic and philosophical thought can help us to make sense of these fundamentally opposed visions for the future of both liberalism and antitrust. This history can also help us to understand how these ideologies have sometimes failed to live up to their ambitions or crumbled under the weight of their own contradictions. 

In this final piece in the political philosophy series, I explain the genesis, normative underpinnings, and likely outcome of the current “battle for the soul of antitrust.” The broader point that I have tried to make throughout this series is that this confrontation hinges on ethical and deontological considerations, as much as it does on “hard” consequentialist arguments. Put differently, how we decide to resolve foundational and putatively “technical” questions regarding the goals, standards, and enforcement of antitrust law ultimately cannot help but reflect our underlying views about the values and ideals that should guide a liberal society. In this vein, I argue that there are compelling non-utilitarian reasons to prefer a polity with an in-built bias for negative freedom and that is guided by a narrow economic-efficiency criterion, rather than the apparently ascendant alternatives.

The Birth of Neoliberalism

The clearest articulation of the philosophical schism between the two visions of liberalism that we see today came with the 1937 publication of “The Good Society” by American author and journalist Walter Lippmann. Lippmann—who, like Brandeis, came out of the American Progressive Movement and had been an adviser to progressive U.S. President Woodrow Wilson—sparked the birth of “neoliberalism” as a separate strand of liberal political philosophical thought. The book invited readers to critically reexamine and, where appropriate, update the tenets of classical liberalism with a view toward “stabilizing and consolidating the course of an intellectual tradition that was otherwise bound to tumble straight into oblivion” (see here).

This was the objective of the “neoliberal collective,” a loose affiliation of liberally oriented thinkers who convened for the first time at the Walter Lippmann Colloquium in 1938 to discuss Lippmann’s seminal book, and from 1947 onwards more formally under the auspices of the Mont Pelerin Society.

Neoliberals grappled with questions that went to the very heart of liberalism, such as how to adapt traditional small-scale human societies to the exigencies of ever-widening markets and economic progress; the causes and consequences of industrial concentration; the appropriate role and boundaries of state intervention; the ability of markets to address the “social question”; the interplay between freedom and coercion; and the tension between the individual and the collective. Like Lippmann, the neoliberals were convinced that the failure to reckon with such fundamental issues would result in the inevitable displacement of liberalism by some form of “authoritarian collectivism,” which they believed provided emotionally appealing (but ultimately illusory) solutions to the full range of liberal problems.

It quickly became apparent, however, that there existed two main currents of neoliberalism.

The first, which I will call “left neoliberalism,” was a relatively conciliatory version that sought to strike a “mostly liberal” balance with socialism and collectivism. It postulated that markets are embedded in a broader social and political context that may include a strong and activist state, aggressive antitrust policy, robust social rights, and an emphasis on positive freedom. In this respect, their views resembled those of the Progressive Movement of Wilson and Brandeis, which was carried on into the mid-20th century in the United States by such figures as President Franklin Roosevelt, historian Arthur M. Schlesinger, and economist John Kenneth Galbraith. The “left neoliberals,” however, were primarily European, and included the likes of Wilhelm Röpke, Walter Eucken, Franz Bohm, Alexander Rüstow, Luigi Einaudi, Louis Rougier, Louis Marlio, and Jacques Rueff (and, arguably, Lippmann himself). 

Adherents to the other strand, “right neoliberalism,” were more conservative and less willing to compromise. They championed a strong but minimal state tasked with (and limited by) facilitating efficient markets, posited a lean antitrust policy, and emphasized negative liberty. Thinkers like Friedrich Hayek, Milton Friedman, Lionel Robbins, James Buchanan and, arguably, the more libertarian Ludwig von Mises and Bruno Leoni would fall into this group.

The Price Mechanism and the State

The two groups of neoliberals shared several basic postulates. 

First and foremost, they agreed that any revision of Adam Smith’s “invisible hand” had to respect the integrity of the price mechanism (what Wilhelm Röpke referred to as the “sacrosanct core of liberalism”). The argument rested on utilitarian, but also political and ethical grounds. As Friedrich Hayek argued in “The Road to Serfdom,” the substitution of a centrally planned economy for the free market would lead to the loss of economic freedom, and eventually all other freedoms as well. This meant that neoliberals were, on principle, harsh critics of any type of state intervention that distorted the formation of prices through the forces of supply and demand.

At the same time, however, neither strand of neoliberalism professed a doctrine of statelessness.  To the contrary, the state may, in hindsight, be neoliberalism’s greatest conquest. The question at hand is what kind of state is optimal. 

For the left neoliberals, a strong state was needed to resist capture by interest groups. It also had to exercise good political leadership and discretion in juggling goals and values (markets, after all, had to be “embedded” in the social order). These views were underpinned by the left neoliberals’ relatively sanguine expectations of the state’s willingness and capacity to protect the general interest, as well as their shared belief that the core institutions of liberalism (including self-regulating markets) were prone to degeneration and in need of constant public oversight. The state, not the private sector, was the ultimate ordering power of the economy. As Alexander Rüstow put it:

I am, indeed, of the opinion that it is not the economy, but the state which determines our fate. 

The right neoliberal position was more ambivalent, due to its heightened skepticism toward state power. The bigger threat to freedom was not unfettered private power, but public power. As Milton Friedman put it in “Capitalism and Freedom”:

Government is necessary to preserve our freedom […] yet by concentrating power in political hands, it is also a threat to freedom. […] How can we benefit from the promise of government while avoiding the threat to freedom? 

The answer was a revamped Smithian nightwatchman that acted more as an umpire determining “the rules of the game” and overseeing free interactions between individuals than as a helmsman tasked with channeling society toward any particular variety of teleological goals. Like the left neoliberal position, this one, too, rests on a set of theoretical underpinnings.

One is that public actors are not any less self-interested than private ones, with the corollary that any extension or deepening of the powers of the state must be well-justified. The idea relied heavily on the public-choice theory developed by James M. Buchanan, a member of the Mont Pelerin Society and its president from 1984 to 1986. Thus, left and right neoliberals advanced almost completely opposite responses to the problem of capture. While left neoliberals believed in strengthening the state relative to private enterprise, the right’s critique led them precisely to want to limit state power and reshape institutional incentives.

This is not surprising, as right neoliberals were also more optimistic about the potential of markets and deontologically more preoccupied with negative freedom, a combination that added another layer of suspicion to any putatively progressive measures that involved wealth redistribution or meticulous administration of the market by the state.

Economic Concentration and Competition

Another important difference lay in the two sides’ views on economic concentration and competition. Some left neoliberals, particularly in Europe, internalized much of the Marxist and fascist critiques of capitalism, including the belief that markets naturally tended toward economic concentration. They argued, however, that this process could be reversed or prevented with robust antitrust and de-concentration measures. While essentially conceding Marxian arguments about the intrinsic tendency of competition to degenerate into monopoly—thereby fostering inequality and “proletarianizing” the masses—they denied the ultimate implications upon which Marx had insisted—i.e., the inevitable “cannibalization” of capitalism through its inherent contradictions.

Right neoliberals, by contrast, insisted that, where economic concentration was not fleeting, it was generally the result of state action, not state inaction. As Mises argued, cartels were a consequence of protectionism and the artificial partitioning of markets through, e.g., tariffs. Similarly, monopolies formed and persisted because of “anti-liberal policies of governments that [created] the conditions favorable” to them. This implied that antitrust played only a secondary role in securing competitive markets.

Each strand’s reasoning as to why competition was worthy of protection also differed. For the right neoliberals, who saw the legitimate goals and boundaries of public policy through the lenses of economic efficiency and negative freedom, the case for competition was principally a utilitarian one. As Hayek wrote in “Individualism and Economic Order,” state-backed institutions and laws (including antitrust laws) that “made competition work” (by which he meant, made competition work effectively) were one of the ways in which right neoliberals improved on the classical liberal position. 

Left neoliberals added political, social, and ethical layers to this argument. Politically, they shared the standard Marxian view that concentrated markets facilitated the capture of the state by powerful private interests. Marxists had, e.g., always asserted that Nazism was the product of “monopoly capitalism” and that the Nazis themselves were the tools of big business (the idea of “state monopoly capitalism” stems from Lenin). Left neoliberals largely agreed with this view. They also counseled that a centralized industry was more readily prone to takeover by an authoritarian state. In addition, they rejected “bigness” because they considered it an unnatural perversion of human nature (though such critiques surprisingly did not seem to translate to the state). As Wilhelm Röpke notes in “A Humane Economy”:

Nothing is more detrimental to a sound general order appropriate to human nature than two things: mass and concentration.

“Bigness,” Röpke thought, had come about as a result of one particularly harmful but pervasive trend of modernity: “economism,” a frequent target of left neoliberals that refers to a fixation with indicators of economic performance at the expense of deeper social and spiritual values.

But it would be a mistake to conclude that left neoliberals viewed competition as a panacea. Private property, profit, and competition (the foundations of liberalism) were as socially corrosive as they were beneficial. They were, according to Wilhelm Röpke:

justifiable only within certain limits, and in remembering this we return to the realm beyond supply and demand. In other words, the market economy is not everything. It must find its place within a higher order of things which is not ruled by supply and demand, free prices, and competition.

Competition, in other words, was as Luigi Einaudi put it, a paradox. It was beneficial, but could also be socially and morally ruinous. 

The Goals and Boundaries of Public Policy

The perceived failures of liberalism guided the contrasting notions of what a reformed neoliberalism should look like. On the one hand, European left neoliberals and American progressives thought that liberalism suffered from certain inherent deficiencies that could not be resolved within the liberal paradigm and that called for mitigating policies and social-safety nets. Again, these resonated with familiar criticisms levied by the right and the left, such as, e.g., excessive individualism; the loss of shared values and a sense of community; a lack of “social integration”; worker alienation (in an essay titled “Social Policy or Vitalpolitik (Organic Policy),” Alexander Rüstow starts by citing Friedrich Engels’ 1845 “The Condition of the Working-Class in England”); and the socially explosive elements of competition and markets. These spiritual dislocations arguably weighed more than any material or economic shortcomings, and were at the root of the liberal debacle. As Walter Eucken argued:

Quite obviously, the reasons for the anti-capitalistic attitude of the masses cannot be found in any deterioration of the living conditions brought about by capitalism. […] The turning of the masses against capitalism is rather a phenomenon that can only be understood in terms of the sensibilities of modern man.  

In response, the left neoliberals called for an “organic policy” that would approach markets and competition as not a purely economic, but also a social, phenomenon (a similar view was expressed by Justice Brandeis). In this new hybrid vision of liberalism, “there would be counterweights to competition and the mechanical operation of prices.” Competition and the market’s other imperatives would be tempered by balancing considerations and subordinated to “higher values” that were beyond the law of supply and demand—and beyond mere economic utility. As Wilhelm Röpke summarizes:

Competition, which we need as a regulator in a free market economy, comes up on all sides against limits which we would not wish to transgress. It remains morally and socially dangerous and can be defended only up to a point and with qualifications and modifications of all kinds.

Conversely, right neoliberals believed that the downfall of liberalism had been the result of a fundamental misunderstanding of its true ethos and an overabundance of conflicting rules and policies. It was not the inevitable upshot of liberalism itself. As Lionel Robbins posited:

It is not liberal institutions but the absence of such institutions which is responsible for the chaos of today.

Classical liberalism had stopped short on the road to exploring the full range of laws and institutions needed to sustain and perfect the “natural order.” But the prevalent social malaise—which had, no doubt, been adroitly instigated and exploited by collectivist demagogues—was not the result of some innate incompatibility between markets and human society. It had instead come about because of the failure to properly adjust the latter to the exigencies of the former. 

Additionally, right neoliberals rejected “organic” or “third way” policies of the sort favored by the left neoliberals, because they believed that it was not within the remit of public policy to answer existential questions or to provide “meaning” or “social integration.” Granting the state the power to decide on such matters was a slippery slope that required it to override the preferences of some with its own. As such, it came dangerously close to the sort of collectivism that neoliberals had rallied against in the first place. They also doubted the state’s ability to resolve such complex, value-laden questions. It was insights such as these that underpinned Friedrich Hayek’s theory of the gradual march toward serfdom and Ludwig von Mises’ quip that there is no such thing as a “third way” or a mixed economy.

In consequence, the solution was not to restrain, mollify, or limit the spread or depth of markets in order to align them with some past ideal of parochial life, but to improve markets and to acclimatize societies to their workings through better laws and institutions.

Two Different Visions for Liberalism, Two Different Visions of Antitrust

In keeping with the theme of this series, the prescriptions for antitrust policy made by each strand of neoliberalism are doctrinally extrapolated from their broader visions of society.

Left neoliberals and American progressives took Marxist and fascist attacks on liberalism seriously, but sought to address them through less radical channels. They wanted a “mostly liberal” third-way social order, in which markets and competition would be tempered by a host of other social and political considerations that were mediated by the state. This meant opposing “big business” as a matter of principle, infusing antitrust law with a host of non-economic goals and values, and granting enforcers the necessary discretion to decide in cases of conflict. 

Right neoliberals, on the other hand, sought to improve on the classical-liberal position through a more robust legal and institutional framework that operated primarily in the service of a single goal: economic efficiency. Economic efficiency—itself not a value-free notion—was, however, seen as a comparatively neutral, narrow, and predictable standard that, in turn, cabined enforcers’ scope of discretion and minimized the instances in which the state could override business decisions (and thus interfere with negative liberty). In the context of antitrust law, this tethered findings of anticompetitive conduct and exemptions to the threshold requirement of demonstrating harm to consumers or to total welfare.

Conclusion

The pendulum of neoliberalism has swung in the past, with momentous implications for antitrust. The “Chicagoan” shift of the 1970s, for instance, was a move toward right neoliberalism, as was the “more economic approach” of EU competition law in the late 1990s. Conversely, more recent calls for the condemnation of “big business” on a range of moral and political grounds; “polycentric competition laws” with multiple goals and values; and the widening of state discretion to lead market developments in a socially desirable direction signal a move in the opposite direction. 

How should the newest iteration of the neoliberal “battle for the soul of antitrust” be resolved?

On the one hand, left neoliberalism—or what Americans typically just call “progressivism”—has intuitive and emotional appeal, particularly in a time of growing anti-capitalistic fervor. Today, as in the 1930s, many believe that market logic has overstepped its legitimate boundaries and that the most successful private companies are a looming enemy. From this perspective, a “market in society” approach—in which the government has more leeway to restrain corporate power and reshape markets in accordance with a range of social or political considerations—may sound more humane to some. 

If history teaches us anything, however, this populist approach to regulating competition is problematic for a number of reasons.

First, the overly complex web of mutually conflicting goals and values will inevitably require enforcement agencies to act as social engineers. In this position, they may use their enhanced discretion to decide whom or what to favor and to rank subjective values pursuant to personal moral heuristics. Public-choice theory and historical examples of state-led collectivist projects, however, counsel against assuming that government is able and willing to exercise such far-reaching oversight of society. In addition, as enforcers inevitably prove unfit to discharge their new role as philosopher-kings, and as their contradictory case law increasingly comes under contestation, activist attempts to widen the scope of antitrust law likely will be checked by the courts. 

Second, like the non-economic arguments against concentration raised today by progressives such as Tim Wu and Lina Khan, the left neoliberal position is largely based on aesthetic preference and intuition—not fact. Röpkean complaints about big business ruining the bucolic landscape where men are “vitally satisfied” in their small, tight-knit communities rest on a very idiosyncratic vision of the good life (left neoliberals romanticized Switzerland, for instance), and it’s one many do not share in the 21st century. Equally particular were Justice Brandeis’ own yeoman sensibilities, which led him to reject bigness as a matter of principle (unlike today’s neo-Brandeisians, however, he was also skeptical of big government).

As to the persistent argument to curb “bigness” on political grounds: this would be more convincing if there were a clear, unambiguous relationship between market concentration or company size and the quality of democracy. This does not appear to be the case. In fact, the case for incorporating democratic concerns into antitrust seems unwittingly to rely on discredited Marxist theories about the relationship between German big business and the rise of Hitler. Unfortunately, these ideas were peddled so aggressively during the 1960s and ’70s by Marxists—who had a vested ideological interest in demonstrating that private corporations were the main culprits behind Nazism—that today they enjoy the status of dogma.

Alternatively, one might argue that the very existence of large concentrations of private economic power is antithetical to democracy because having the potential to exercise private power over another (without any actual interference) is anti-democratic (see here). But this lifts a particularistic vision of democracy—so-called republican democracy—over others. According to the more mainstream notion of liberal democracy, which gives precedence to negative freedom, any such interference with property rights may, in fact, be seen as deeply illiberal and undemocratic, especially as the inherent ambiguity of the “democracy” standard is likely to invite reprisals against political opponents.

Alas, right neoliberalism appears to be falling out of favor, as anti-market rhetoric seeps into the mainstream and politicians and intellectuals look to the past to find alternatives to a neoliberal system seen as too narrow and economistic. Ultimately, however, this may be precisely what we want public policy to be in a liberal world: focused on predictable and quantifiable standards that subject enforcers to the rigorous discipline of economic theory and leave them little space to act as social engineers or to exercise arbitrary authority. More than a century of intellectual effervescence and dangerous ideological escapades has proven this to be the superior way both to achieve measurable policy outcomes that improve on the classical-liberal position and to avoid the Charybdis of state collectivism. In antitrust law, it has meant embracing economic analysis of the law and a narrow consumer-welfare standard to discern anticompetitive from procompetitive conduct.

In the end, today’s “battle for the soul” of antitrust is a proxy for a much wider conflict of visions. Changing the consumer-welfare standard and the architecture of antitrust enforcement along lines preferred by progressives and left neoliberals would be both a symptom and a cause of a broader philosophical shift toward a worldview that makes some of the same deleterious mistakes it purports to correct: excessive government discretion in overseeing the economy; the subordination of individual freedom to an array of collectivist goals mediated by a public aristocracy; and the substitution of evidence-based policy for emotional impetus.

While the inherent contradictions and incongruence of that vision mean that the pendulum is likely to eventually swing back in the right direction, the damage will already have been done. This is why we must defend the consumer-welfare standard today more vigorously than ever: because ultimately, much more than the future of a niche field of law is at stake.

The concept of European “digital sovereignty” has been promoted in recent years both by high officials of the European Union and by EU national governments. Indeed, France made strengthening sovereignty one of the goals of its recent presidency in the EU Council.

The approach taken thus far both by the EU and by national authorities has been not to exclude foreign businesses, but instead to focus on research and development funding for European projects. Unfortunately, there are worrying signs that this more measured approach is beginning to be replaced by ill-conceived moves toward economic protectionism, ostensibly justified by national-security and personal-privacy concerns.

In this context, it is worth reconsidering why Europeans’ interests are best served not by economic isolationism, but by an understanding of sovereignty that capitalizes on alliances with other free democracies.

Protectionism Under the Guise of Cybersecurity

Among the primary worrying signs regarding the EU’s approach to digital sovereignty is the union’s planned official cybersecurity-certification scheme. The European Commission is reportedly pushing for “digital sovereignty” conditions in the scheme, which would include data and corporate-entity localization and ownership requirements. This can be categorized as “hard” data localization in the taxonomy laid out by Peter Swire and DeBrae Kennedy-Mayo of the Georgia Institute of Technology, in that it would prohibit both transferring data to other countries and involving foreign capital in the processing of even data that is not transferred.

The European Cybersecurity Certification Scheme for Cloud Services (EUCS) is being prepared by ENISA, the EU cybersecurity agency. The scheme is supposed to be voluntary at first, but it is expected that it will become mandatory in the future, at least for some situations (e.g., public procurement). It was not initially billed as an industrial-policy measure and was instead meant to focus on technical security issues. Moreover, ENISA reportedly did not see the need to include such “digital sovereignty” requirements in the certification scheme, perhaps because they saw them as insufficiently grounded in genuine cybersecurity needs.

Despite ENISA’s position, the European Commission asked the agency to include the digital-sovereignty requirements. This move has been supported by a coalition of European businesses that hope to benefit from the protectionist nature of the scheme. Somewhat ironically, their official statement called on the European Commission to “not give in to the pressure of the ones who tend to promote their own economic interests.”

The governments of Denmark, Estonia, Greece, Ireland, the Netherlands, Poland, and Sweden expressed “strong concerns” about the Commission’s move. In contrast, Germany called for a political discussion of the certification scheme that would take into account “the economic policy perspective.” In other words, German officials want the EU to consider using the cybersecurity-certification scheme to achieve protectionist goals.

Cybersecurity certification is not the only avenue by which Brussels appears to be pursuing protectionist policies under the guise of cybersecurity concerns. As highlighted in a recent report from the Information Technology & Innovation Foundation, the European Commission and other EU bodies have also been downgrading or excluding U.S.-owned firms from technical standard-setting processes.

Do Security and Privacy Require Protectionism?

As others have discussed at length (in addition to Swire and Kennedy-Mayo, also Theodore Christakis), the evidence for cybersecurity and national-security arguments for hard data localization has been, at best, inconclusive. Press reports suggest that ENISA reached a similar conclusion. There may be security reasons to insist upon certain ways of distributing data storage (e.g., across different data centers), but those reasons are not directly related to the division of national borders.

In fact, as illustrated by the well-known architectural goal behind the design of the U.S. military computer network that was the precursor to the Internet, security is enhanced by redundant distribution of data and network connections in a geographically dispersed way. The perils of putting “all one’s data eggs” in one basket (one locale, one data center) were amply illustrated when a fire in a data center of a French cloud provider, OVH, famously brought down millions of websites that were only hosted there. (Notably, OVH is among the most vocal European proponents of hard data localization).

Moreover, security concerns are clearly not nearly as serious when data is processed by our allies as when it is processed by entities associated with less friendly powers. Whatever concerns there may be about U.S. intelligence collection, it would be detached from reality to suggest that the United States poses a national-security risk to EU countries. This has become even clearer since the beginning of the Russian invasion of Ukraine. Indeed, the strength of the U.S.-EU security relationship has been repeatedly acknowledged by EU and national officials.

Another commonly used justification for data localization is that it is required to protect Europeans’ privacy. The radical version of this position, seemingly increasingly popular among EU data-protection authorities, amounts to a call to block data flows between the EU and the United States. (Most bizarrely, Russia seems to receive more favorable treatment from some European bureaucrats.) The legal argument behind this view is that the United States doesn’t have sufficient legal safeguards when its officials process the data of foreigners.

The soundness of that view is debated, but what is perhaps more interesting is that similar privacy concerns have also been identified by EU courts with respect to several EU countries. The reaction of those European countries was either to ignore the courts, or to be “ruthless in exploiting loopholes” in court rulings. It is thus difficult to treat seriously the claims that Europeans’ data is much better safeguarded in their home countries than if it flows in the networks of the EU’s democratic allies, like the United States.

Digital Sovereignty as Industrial Policy

Given the above, the privacy and security arguments are unlikely to be the real decisive factors behind the EU’s push for a more protectionist approach to digital sovereignty, as in the case of cybersecurity certification. In her 2020 State of the Union speech, EU Commission President Ursula von der Leyen stated that Europe “must now lead the way on digital—or it will have to follow the way of others, who are setting these standards for us.”

She continued: “On personalized data—business to consumer—Europe has been too slow and is now dependent on others. This cannot happen with industrial data.” This framing suggests an industrial-policy aim behind the digital-sovereignty agenda. But even in considering Europe’s best interests through the lens of industrial policy, there are reasons to question the manner in which “leading the way on digital” is being implemented.

Limitations on foreign investment in European tech businesses come with significant costs to the European tech ecosystem. Those costs are particularly high in the case of blocking or disincentivizing American investment.

Effect on startups

Early-stage investors such as venture capitalists bring more than just financial capital. They offer expertise and other vital tools to help the businesses in which they invest. It is thus not surprising that, among the best investors, those with significant experience in a given area are well-represented. Due to the successes of the U.S. tech industry, American investors are especially well-positioned to play this role.

In contrast, European investors may lack the needed knowledge and skills. For example, in its report on building “deep tech” companies in Europe, Boston Consulting Group noted that a “substantial majority of executives at deep-tech companies and more than three-quarters of the investors we surveyed believe that European investors do not have a good understanding of what deep tech is.”

More to the point, even where EU players do hold advantages, a cooperative economic and technological system will allow the comparative advantages of both U.S. and EU markets to redound to each other’s benefit. That is to say, not all U.S. investment expertise will apply in the EU, of course, but some certainly will. Similarly, there will be EU firms that are positioned to share their expertise in the United States. But there is no ex ante way to know when and where these complementarities will exist, which essentially dooms efforts at centrally planning technological cooperation.

Given the close economic, cultural, and historical ties of the two regions, it makes sense to work together, particularly given the rising international-relations tensions outside of the western sphere. It also makes sense, insofar as the relatively open private-capital-investment environment in the United States is nearly impossible to match, let alone surpass, through government spending.

For example, national-government and EU funding in Europe has thus far ranged from expensive failures (the “Google-killer”) to all-too-predictable bureaucracy-heavy grantmaking, which its beneficiaries describe as lacking flexibility, “slow,” “heavily process-oriented,” and expensive for businesses to navigate. As reported by the Financial Times’ Sifted website, the EU’s own startup-investment scheme (the European Innovation Council) backed only one business over more than a year, and it had “delays in payment” that “left many startups short of cash—and some on the brink of going out of business.”

Starting new business ventures is risky, especially for the founders. They risk devoting their time, resources, and reputation to an enterprise that may very well fail. Given this risk of failure, the potential upside needs to be sufficiently high to incentivize founders and early employees to take the gamble. This upside is normally provided by the possibility of selling one’s shares in a business. In BCG’s previously cited report on deep tech in Europe, respondents noted that the European ecosystem lacks “clear exit opportunities”:

Some investors fear being constrained by European sovereignty concerns through vetoes at the state or Europe level or by rules potentially requiring European ownership for deep-tech companies pursuing strategically important technologies. M&A in Europe does not serve as the active off-ramp it provides in the US. From a macroeconomic standpoint, in the current environment, investment and exit valuations may be impaired by inflation or geopolitical tensions.

More broadly, those exit opportunities also factor importantly into funders’ appetite to price the risk of failure in their ventures. Where the upside is sufficiently large, an investor might be willing to experiment in riskier ventures and be suitably motivated to structure investments to deal with such risks. But where the exit opportunities are diminished, it makes much more sense to spend time on safer bets that may provide lower returns, but are less likely to fail. Coupled with the fact that government funding must run through bureaucratic channels, which are inherently risk averse, the overall effect is a less dynamic funding system.

The Central and Eastern Europe (CEE) region is an especially good example of the positive influence of American investment in Europe’s tech ecosystem. According to the state-owned Polish Development Fund and Dealroom.co, in 2019, $0.9 billion of venture-capital investment in CEE came from the United States, $0.5 billion from Europe, and $0.1 billion from the rest of the world.

Direct investment

Technological investment is rarely, if ever, a zero-sum game. U.S. firms that invest in the EU (and vice versa) do not do so as foreign conquerors, but as partners whose own fortunes are intertwined with their host country. Consider, for example, Google’s recent PLN 2.7 billion investment in Poland. Far from extractive, that investment will build infrastructure in Poland, and will employ an additional 2,500 Poles in the company’s cloud-computing division. This sort of partnership plants the seeds that grow into a native tech ecosystem. The Poles that today work in Google’s cloud-computing division are the founders of tomorrow’s innovative startups rooted in Poland.

The funding that accompanies native operations of foreign firms also has a direct impact on local economies and tech ecosystems. More local investment in technology creates demand for education and support roles around that investment. This creates a virtuous circle that ultimately facilitates growth in the local ecosystem. And while this direct investment is important for large countries, in smaller countries, it can be a critical component in stimulating their own participation in the innovation economy. 

According to Crunchbase, out of 2,617 EU-headquartered startups founded since 2010 with total equity funding amount of at least $10 million, 927 (35%) had at least one founder who previously worked for an American company. For example, two of the three founders of Madrid-based Seedtag (total funding of more than $300 million) worked at Google immediately before starting Seedtag.

It is more difficult to quantify how many early employees of European startups built their experience in American-owned companies, but it is likely to be significant and to become even more so, especially in regions—like Central and Eastern Europe—with significant direct U.S. investment in local talent.

Conclusion

Explicit industrial policy for protectionist ends is—at least, for the time being—regarded as unwise public policy. But this is not to say that countries do not have valid national interests that can be met through more productive channels. While strong data-localization requirements are ultimately counterproductive, particularly among closely allied nations, countries have a legitimate interest in promoting the growth of the technology sector within their borders.

National investment in R&D can yield fruit, particularly when that investment works in tandem with the private sector (see, e.g., the Bayh-Dole Act in the United States). The bottom line, however, is that any intervention should take care to actually promote the ends it seeks. Strong data-localization policies in the EU will not lead to the success of the local tech industry, but they will serve to wall the region off from the kind of investment that can make it thrive.

The business press generally describes the gig economy that has sprung up around digital platforms like Uber and TaskRabbit as a beneficial phenomenon, “a glass that is almost full.” The gig economy “is an economy that operates flexibly, involving the exchange of labor and resources through digital platforms that actively facilitate buyer and seller matching.”

From the perspective of businesses, major positive attributes of the gig economy include cost-effectiveness (minimizing costs and expenses); labor-force efficiencies (“directly matching the company to the freelancer”); and flexible output production (individualized work schedules and enhanced employee motivation). Workers also benefit through greater independence, enhanced work flexibility (including hours worked), and the ability to earn extra income.

While there are some disadvantages as well (worker-commitment questions, business-ethics issues, lack of worker benefits, limited coverage of personal expenses, and worker isolation), there is no question that the gig economy has contributed substantially to the growth and flexibility of the American economy—a major social good. Indeed, “[i]t is undeniable that the gig economy has become an integral part of the American workforce, a trend that has only been accelerated during the” COVID-19 pandemic.

In marked contrast, however, the Federal Trade Commission’s (FTC) Sept. 15 Policy Statement on Enforcement Related to Gig Work (“gig statement” or “statement”) is the story of a glass that is almost empty. The accompanying press release declaring “FTC to Crack Down on Companies Taking Advantage of Gig Workers” (since when is “taking advantage of workers” an antitrust or consumer-protection offense?) puts an entirely negative spin on the gig economy. And while the gig statement begins by describing the nature and large size of the gig economy, it does so in a dispassionate and bland tone. No mention is made of the substantial benefits for consumers, workers, and the overall economy stemming from gig work. Rather, the gig statement quickly adopts a critical perspective in describing the market for gig workers and then addressing gig-related FTC-enforcement priorities. What’s more, the statement deals in very broad generalities and eschews specifics, rendering it of no real use to gig businesses seeking practical guidance.

Most significantly, the gig statement suggests that the FTC should play a significant enforcement role in gig-industry labor questions that fall outside its statutory authority. As such, the statement is fatally flawed as a policy document. It provides no true guidance and should be substantially rewritten or withdrawn.

Gig Statement Analysis

The gig statement’s substantive analysis begins with a negative assessment of gig-firm conduct. It expresses concern that gig workers are being misclassified as independent contractors and are thus deprived “of critical rights [right to organize, overtime pay, health and safety protections] to which they are entitled under law.” Relatedly, gig workers are said to be “saddled with inordinate risks.” Gig firms also “may use nontransparent algorithms to capture more revenue from customer payments for workers’ services than customers or workers understand.”

Heaven forfend!

The solution offered by the gig statement is “scrutiny of promises gig platforms make, or information they fail to disclose, about the financial proposition of gig work.” No mention is made of how these promises supposedly made to workers about the financial ramifications of gig employment are related to the FTC’s statutory mission (which centers on unfair or deceptive acts or practices affecting consumers or unfair methods of competition).

The gig statement next complains that a “power imbalance” between gig companies and gig workers “may leave gig workers exposed to harms from unfair, deceptive, and anticompetitive practices and is likely to amplify such harms when they occur.” “Power imbalance” along a vertical chain has not been a source of serious antitrust concern for decades (and even in the case of the Robinson-Patman Act, the U.S. Supreme Court most recently stressed, in 2005’s Volvo v. Reeder, that harm to interbrand competition is the key concern). “Power imbalances” between workers and employers bear no necessary relation to the promotion of consumer welfare, which the Supreme Court teaches is the raison d’être of antitrust. Moreover, the FTC does not explain why unfair or deceptive conduct likely follows from the mere existence of substantial bargaining power. Such an unsupported assertion is not worthy of inclusion in a serious agency-policy document.

The gig statement then engages in more idle speculation about a supposed relationship between market concentration and the proliferation of unfair and deceptive practices across the gig economy. The statement claims, without any substantiation, that gig companies in concentrated platform markets will be incentivized to exert anticompetitive market power over gig workers, and thereby “suppress wages below competitive rates, reduce job quality, or impose onerous terms on gig workers.” Relatedly, “unfair and deceptive practices by one platform can proliferate across the labor market, creating a race to the bottom that participants in the gig economy, and especially gig workers, have little ability to avoid.” No empirical or theoretical support is advanced for any of these bald assertions, which give the strong impression that the commission plans to target gig-economy companies for enforcement actions without regard to the actual facts on the ground. (By contrast, the commission has in the past developed detailed factual records of competitive and/or consumer-protection problems in health care and other important industry sectors as a prelude to possible future investigations.)

The statement then launches into a description of the FTC’s gig-economy policy priorities. It notes first that “workers may be deprived of the protections of an employment relationship” when gig firms classify them as independent contractors, leading to firms’ “disclosing [of] pay and costs in an unfair and deceptive manner.” What’s more, the FTC “also recognizes that misleading claims [made to workers] about the costs and benefits of gig work can impair fair competition among companies in the gig economy and elsewhere.”

These extraordinary statements seem to be saying that the FTC plans to closely scrutinize gig-economy-labor contract negotiations, based on its distaste for independent contracting (which it believes should be supplanted by employer-employee relationships, a question of labor law, not FTC law). Nowhere is it explained where such a novel FTC exercise of authority comes from, nor how such FTC actions have any bearing on harms to consumer welfare. The FTC’s apparent desire to force employment relationships upon gig firms is far removed from harm to competition or unfair or deceptive practices directed at consumers. Without more of an explanation, one is left to conclude that the FTC is proposing to take actions that are far beyond its statutory remit.

The gig statement next tries to tie the FTC’s new gig program to violations of the FTC Act (“unsubstantiated claims”); the FTC’s Franchise Rule; and the FTC’s Business Opportunity Rule, violations of which “can trigger civil penalties.” The statement, however, lacks any sort of logical, coherent explanation of how the new enforcement program necessarily follows from these other sources of authority. While the statement can point to a few rules-based enforcement actions that have some connection to certain terms of employment, such special cases are a far cry from any general justification for turning the FTC into a labor-contracts regulator.

The statement then moves on to the alleged misuse of algorithmic tools dealing with gig-worker contracts and supervision that may lead to unlawful gig-worker oversight and termination. Once again, the connection of any of this to consumer-welfare harm (from a competition or consumer-protection perspective) is not made.

The statement further asserts that FTC Act consumer-protection violations may arise from “nonnegotiable” and other unfair contracts. In support of such a novel exercise of authority, however, the FTC cites supposedly analogous “unfair” clauses found in consumer contracts with individuals or small-business consumers. It is highly doubtful that these precedents support any FTC enforcement actions involving labor contracts.

Noncompete clauses with individuals are next on the gig statement’s agenda. It is claimed that “[n]on-compete provisions may undermine free and fair labor markets by restricting workers’ ability to obtain competitive offers for their services from existing companies, resulting in lower wages and degraded working conditions. These provisions may also raise barriers to entry for new companies.” The assertion, however, that such clauses may violate Section 1 of the Sherman Act or Section 5 of the FTC Act’s bar on unfair methods of competition, seems dubious, to say the least. Unless there is coordination among companies, these are essentially unilateral contracting practices that may have robust efficiency explanations. Making out these practices to be federal antitrust violations is bad law and bad policy; they are, in any event, subject to a wide variety of state laws.

Even more problematic is the FTC’s claim that a variety of standard (typically efficiency-seeking) contract limitations, such as nondisclosure agreements and liquidated damages clauses, “may be excessive or overbroad” and subject to FTC scrutiny. This preposterous assertion would make the FTC into a second-guesser of common labor contracts (a federal labor-contract regulator, if you will), a role for which it lacks authority and is entirely unsuited. Turning the FTC into a federal labor-contract regulator would impose unjustifiable uncertainty costs on business and chill a host of efficient arrangements. It is hard to take such a claim of power seriously, given its lack of any credible statutory basis.

The final section of the gig statement dealing with FTC enforcement (“Policing Unfair Methods of Competition That Harm Gig Workers”) is unobjectionable, but not particularly informative. It essentially states that the FTC’s black letter legal authority over anticompetitive conduct also extends to gig companies: the FTC has the authority to investigate and prosecute anticompetitive mergers; agreements among competitors to fix terms of employment; no-poach agreements; and acts of monopolization and attempted monopolization. (Tell us something we did not know!)

The fact that gig-company workers may be harmed by such arrangements is noted. The mere page and a half devoted to this legal summary, however, provides little practical guidance for gig companies as to how to avoid running afoul of the law. Antitrust policy statements may be excused for providing less detailed guidance than antitrust guidelines, but it would be helpful if they did something more than provide a capsule summary of general American antitrust principles. The gig statement does not pass this simple test.

The gig statement closes with a few glittering generalities. Cooperation with other agencies is highlighted (for example, an information-sharing agreement with the National Labor Relations Board is described). The FTC describes an “Equity Action Plan” calling for a focus on how gig-economy antitrust and consumer-protection abuses harm underserved communities and low-wage workers.

The FTC finishes with a request for input from the public and from gig workers about abusive and potentially illegal gig-sector conduct. No mention is made of the fact that the FTC must, of course, conform itself to the statutory limitations on its jurisdiction in the gig sector, as in all other areas of the economy.

Summing Up the Gig Statement

In sum, the critical flaw of the FTC’s gig statement is its focus on questions of labor law and policy (including the question of independent contractor as opposed to employee status) that are the proper purview of federal and state statutory schemes not administered by the Federal Trade Commission. (A secondary flaw is the statement’s unbalanced portrayal of the gig sector, which ignores its beneficial aspects.) If the FTC decides that gig-economy issues deserve particular enforcement emphasis, it should (and, indeed, must) direct its attention to anticompetitive actions and unfair or deceptive acts or practices that harm consumers.

On the antitrust side, that might include collusion among gig companies on the terms offered to workers or perhaps “mergers to monopoly” between gig companies offering a particular service. On the consumer-protection side, that might include making false or materially misleading statements to consumers about the terms under which they purchase gig-provided services. (It would be conceivable, of course, that some of those statements might be made, unwittingly or not, by gig independent contractors, at the behest of the gig companies.)

The FTC also might carry out gig-industry studies to identify particular prevalent competitive or consumer-protection harms. The FTC should not, however, seek to transform itself into a gig-labor-market enforcer and regulator, in defiance of its lack of statutory authority to play this role.

Conclusion

The FTC does, of course, have a legitimate role to play in challenging unfair methods of competition and unfair acts or practices that undermine consumer welfare wherever they arise, including in the gig economy. But it does a disservice by focusing merely on supposed negative aspects of the gig economy and conjuring up a gig-specific “parade of horribles” worthy of close commission scrutiny and enforcement action.

Many of the “horribles” cited may not even be “bads,” and many of them are, in any event, beyond the proper legal scope of FTC inquiry. There are other federal agencies (for example, the National Labor Relations Board) whose statutes may prove applicable to certain problems noted in the gig statement. In other cases, statutory changes may be required to address certain problems noted in the statement (assuming they actually are problems). The FTC, and its fellow enforcement agencies, should keep in mind, of course, that they are not Congress, and wishing for legal authority to deal with problems does not create it (something the federal judiciary fully understands).  

In short, the negative atmospherics that permeate the gig statement are unnecessary and counterproductive; if anything, they are likely to convince at least some judges that the FTC is not the dispassionate finder of fact and enforcer of law that it claims to be. In particular, the judiciary is unlikely to be impressed by the FTC’s apparent effort to insert itself into questions that lie far beyond its statutory mandate.

The FTC should withdraw the gig statement. If, however, it does not, it should revise the statement in a manner that is respectful of the limits on the commission’s legal authority, and that presents a more dispassionate analysis of gig-economy business conduct.

In late August, Roberto Campos Neto, the head of Brazil’s central bank, is reported to have said about Pix, the bank’s two-year-old real-time-payments (RTP) system, that it “eliminates the need to have a credit card. I think that credit cards will cease to exist at some point soon.” Wow! Sounds amazing. A new system that does everything a credit card can do, but better.

As the old saying goes, however, something that sounds too good to be true probably isn’t. While Pix has some advantages, it also has many disadvantages. In particular, it lacks many of the features currently offered by credit cards, such as liability caps, fraud prevention, and—perhaps crucially—access to credit. So, it seems unlikely to replace credit cards any time soon.

Pix and the Unbanked

When Brazil’s central bank launched Pix in November 2020, evangelists at the bank hoped it would offer a low-cost alternative to existing payments and would entice some of the country’s tens of millions of unbanked and underbanked adults into the banking system. While Pix has, indeed, attracted many users, it has done little, if anything, to solve the problem of the unbanked.

Proponents of Pix asserted that the RTP system would dramatically reduce the number of unbanked individuals in Brazil. While it is true that many Brazilians who were previously unbanked do now have Pix accounts, it would be incorrect to conclude that Pix was the reason they ceased to be unbanked.

A study by Americas Market Intelligence (commissioned by Mastercard) found that, during the COVID-19 pandemic, “Brazil reduced its unbanked population by an astounding 73%.” But the study was based on research conducted between June and August 2020 and was published in October 2020, the month before Pix launched. It described the implementation of state and federal programs launched in Brazil in response to the pandemic:

  • The “Coronavoucher” program distributed emergency funds to low-income informal workers exclusively via state-owned bank Caixa Econômica Federal (CEF). Applications for funds could only be made via CEF’s Caixa Tem smartphone app, and funds were distributed via the same app. As of Aug. 5, 2020, 66 million people had received Coronavouchers via the Caixa Tem app. Of those, 36 million were previously unbanked.
  • Merenda em Casa (“snack at home”), a program run by state governments, distributed funds to low-income families with children at public schools to help them pay for food while schools were closed due to COVID-19. The program distributed funds via PicPay and PagBank’s PagSeguro, both private-sector payment apps.

Following the launch of Pix, the central bank-run RTP program was made available to clients of Caixa Tem, PicPay, and PagBank. As a result, previously unbanked individuals who had become banked because of the Coronavoucher and Merenda em Casa programs were able to obtain and use Pix keys to send and receive payments.

It remains unclear, however, what proportion of those previously unbanked individuals actually use Pix. As Figure 1 below shows, the number of Pix keys registered vastly outstrips the number of users. As such, not only is it false to claim that Pix helped reduce the number of unbanked Brazilians, but it isn’t possible to say with certainty how many of those previously unbanked individuals are now active users of Pix.

FIGURE 1: Pix Keys Registered to Natural Persons and Pix Users Who Are Natural Persons

Pix-Created Problems

Pix suffered a series of data breaches this past year, with the end result that details of Pix accounts were stolen from more than 500,000 account holders. Meanwhile, hackers have set up fake apps designed to steal money from users’ bank accounts by masquerading as legitimate Pix-compliant wallets. And Pix has been associated with a rise in lightning kidnappings, whereby kidnappers force their victims to make a transfer on Pix in order to be released.

Faced with the problem that they cannot avoid having Pix because their banks have automatically enabled the system, some Brazilians have responded to the threat of kidnappings by purchasing second “Pix phones.” Users load these mid-range Android phones with banking and Pix apps and leave them at home. Meanwhile, they delete all banking apps from their primary phone. While such an approach ostensibly prevents criminals from stealing potentially large amounts of money from individuals who can afford to have a second phone, it is quite a costly and inconvenient solution.

Pix vs Credit Cards

Roberto Campos Neto reportedly conceded that Pix data breaches will occur “with some frequency.” This acknowledgment of Pix’s unresolved security issues is difficult to square with the central bank president’s claim that the service will soon replace credit cards. After all, the major credit-card networks (Visa, Mastercard, American Express, and Discover) have more than half a century of experience managing fraud, and have built massive artificial-intelligence-based systems to identify and prevent potentially fraudulent transactions. Pix has no such system. Credit-card networks have also developed a highly effective system for challenging fraudulent transactions called “chargebacks.”

Card networks’ investment in fraud management has enabled them to offer “zero liability” terms to cardholders, which has made credit cards attractive as a means of paying for goods and services, both at brick-and-mortar locations and online. While Pix now has a system to reverse fraudulent transactions, its reliability has yet to be tested, and Pix as yet does not offer zero liability. Thus, given the choice between a credit card and Pix, users are unlikely to use Pix to pay for goods where there is a risk that the business will fail to deliver goods or services as promised.  

Finally, credit cards offer users the ability to defer payment for no fee until their next bill becomes due (usually at least a month). And they offer the ability to defer payment for longer, if necessary, with interest payable on the amount outstanding.

Conclusion: There Ain’t No Such Thing as a Free Lunch

The investments that credit-card networks have made in the identification, prevention, and rectification of fraud have been possible because they are able to charge a (very small) fee to process transactions. Pix also charges merchants a small fee for transactions but, as noted, it is not able to offer the same protections.

Most Pix transactions to date have been person-to-person (P2P), effectively replacing transactions that would have otherwise been made with cash, checks, or online bank-to-bank funds transfers. That makes sense when one thinks about the risks involved. P2P transactions are likely to involve parties that know one another and/or are engaged in repeat business. By contrast, many consumer-to-business and business-to-business transactions involve parties that are relatively less well-known to one another and thus have more incentive to renege on commitments. Consumers are therefore more inclined to use the payment system with protections built in, while merchants—who are happy for the additional business—are willing to pay the price for that business.

The science-fiction writer Robert Heinlein popularized a pithy phrase to describe the idea that it is not possible to get something for nothing: “There Ain’t No Such Thing as a Free Lunch.” If Pix is to challenge credit cards as a real consumer-payments system, it will have to offer similar levels of fraud protection to consumers. That will not be cheap. While the central bank might continue to subsidize Pix transactions, doing so to the degree that would be necessary to offer such fraud protections would be an abuse of its position. Thinking otherwise is science fiction.

We’re back for another biweekly roundup – and what a biweekly it’s been! The JCPA rode, died, and rides again. Yet AICOA is AWOL. FTC Chair Lina Khan went to Congress and back to (Fordham) law school, making waves wherever she went. DOJ added to the agencies’ roster of recently lost cases. And the FTC is here to help gig workers get real jobs. All that and more, in this edition of the FTC UMC Roundup.

This week’s headline is, without a doubt, FTC Chair Lina Khan’s remarks at Fordham Law School’s Conference on International Antitrust Law & Policy, where she announced that the Commission is currently considering a new policy statement on use of the Commission’s Unfair Methods of Competition authority.

It comes as no surprise that the Commission will be issuing this statement, though the details and exact timing have yet to be disclosed. Khan’s remarks do shed some light on what can be expected – though again there are no surprises. She “believe[s] it is clear that respect for the rule of law requires [the Commission] to reactivate [its] standalone Section 5 enforcement program,” and that the statement must “reflect[] the statutory text, our institutional structure, the history of the statute, and the case law.” 

Earlier in her remarks, Khan points to standalone UMC claims the Commission litigated in the 1940s through 1970s – “invitations to collude; price discrimination claims against buyers not covered by the Clayton Act; de facto bundling, tying, and exclusive dealing; and a host of other practices.” This reads like a menu of claims that will be embraced by the new statement, for which she has found support in the history of the statute and case law.

In addition to her trip to New York, back home Khan also visited the Senate for an antitrust oversight hearing. Khan’s statement champions the Commission’s departure from longstanding antitrust principles and celebrates its more active enforcement efforts. Very unusually, her statement prompted a dissenting statement from Commissioners Phillips and Wilson. Phillips and Wilson note that under Khan the Commission has actually seen less enforcement activity, call out the myriad inaccurate factual assertions in Khan’s statement, and raise concern about too-aggressive efforts to push the Commission beyond its statutory authority.

Cristiano Lima has more coverage of the oversight hearing. After a bit over a year at the helm of the agency, this was Khan’s first oversight hearing. From the tone of the questioning, she may wish that it were her last. But in the likely event that Republicans take the House in the midterms, it will just be the first, and the easiest, of many future trips to Congress.

In other news, Senators Amy Klobuchar (D-MN) and Ted Cruz (R-TX) show us that strange bedfellows do weird things in bed. That’s right, I’m talking about the Journalism Competition and Preservation Act (JCPA), sponsored by Klobuchar. The JCPA is an attempt to preserve competition in media markets by allowing cartelization in media markets. A couple of weeks ago, Sen. Klobuchar abruptly withdrew the JCPA (her own bill) from committee consideration after a surprise amendment from Sen. Cruz that was intended to limit platforms’ content-moderation practices. In a legitimately surprising turn of events, Senators Klobuchar and Cruz agreed to compromise language that allows news outlets to collectively bargain with platforms and will “bar the tech firms from throttling, filtering, suppressing or curating content.”

Back on the FTC front, the Commission released a new Policy Statement on Enforcement Related to Gig Work. The statement explains that “Protecting these workers from unfair, deceptive, and anticompetitive practices is a priority, and the Federal Trade Commission will use its full authority to do so.” It is a curious policy statement for a number of reasons, not least of which is the purported use of the Commission’s consumer-protection authority for employee protection—we have a National Labor Relations Board for that. More subtly, the statement refers throughout to “unfair, deceptive, and anticompetitive practices,” suggesting a hybrid approach to these issues that draws separately from the Commission’s consumer-protection and antitrust authorities. This move is increasingly common in the Commission’s recent regulatory efforts.

Time for some quick hits. This week’s puzzler has got to be Commissioner Bedoya calling for a revitalization of the Robinson-Patman Act. But as with all things FTC these days, the new ideas seem to be the ones found in the back seat of a DeLorean.

Alden Abbott draws our attention to the upcoming Axon case. To be argued in the Supreme Court on November 7th, this case raises both procedural and substantive challenges to the Commission’s constitutional structure. Abbott notes in passing the Commission’s recent losses before its ALJ in the Altria-JUUL and Illumina-Grail mergers—and we can add the DOJ’s recent loss in its effort to block UnitedHealth’s acquisition of Change Healthcare to the agencies’ growing list of recent losses.

Charles Sauer takes a look at ongoing discussion of potential Republican nominees to fill Commissioner Phillips’ seat when he steps down from the FTC, asking Why Are Conservatives Intent On Cloning Lina Khan? He rightly argues that Republicans should not consider nominating someone who shares Khan’s disregard for the rule of law and sound economics, or who would embrace unchecked administrative power. Even if used to pursue valid goals, such abuses of regulatory authority are anathema to good government and basic conservative principles. Any Commissioner should put faithful execution of the Commission’s statutory mandate above their own policy preferences, including a commitment to acting pursuant to clearly expressed Congressional intent instead of through constitutionally dubious administrative fiat.

What’s on tap for next week? The White House is convening its Competition Council on Monday. And for those wondering whether I forgot to discuss AICOA after mentioning it in the opening graf, no need to worry. It got just as much attention as needed.

A White House administration typically announces major new antitrust initiatives in the fall and spring, and this year is no exception. Senior Biden administration officials kicked off the fall season at Fordham Law School (more on that below) by shedding additional light on their plans to expand the accepted scope of antitrust enforcement.

Their aggressive enforcement statements draw headlines, but will the administration’s neo-Brandeisians actually notch enforcement successes? The prospects are cloudy, to say the least.

The U.S. Justice Department (DOJ) has lost some cartel cases in court this year (when was the last time that happened?) and, on Sept. 19, a federal judge rejected the DOJ’s attempt to enjoin UnitedHealth’s $13.8 billion bid for Change Healthcare. The Federal Trade Commission (FTC) recently lost two merger challenges before its in-house administrative law judge. It now faces a challenge to its administrative-enforcement processes before the U.S. Supreme Court (the Axon case, to be argued in November).

(Incidentally, on the other side of the Atlantic, the European Commission has faced some obstacles itself. Despite its recent Google victory, the Commission has effectively lost two abuse of dominance cases this year—the Intel and Qualcomm matters—before the European General Court.)

So, are the U.S. antitrust agencies chastened? Will they now go back to basics? Far from it. They enthusiastically are announcing plans to charge ahead, asserting theories of antitrust violations that have not been taken seriously for decades, if ever. Whether this turns out to be wise enforcement policy remains to be seen, but color me highly skeptical. Let’s take a quick look at some of the big enforcement-policy ideas that are being floated.

Fordham Law’s Antitrust Conference

Admiral David Farragut’s order “Damn the torpedoes, full speed ahead!” was key to the Union Navy’s August 1864 victory in the Battle of Mobile Bay, a decisive Civil War clash. Perhaps inspired by this display of risk-taking, the heads of the two federal antitrust agencies—DOJ Assistant Attorney General (AAG) Jonathan Kanter and FTC Chair Lina Khan—took a “damn the economics, full speed ahead” attitude in remarks at the Sept. 16 session of Fordham Law School’s 49th Annual Conference on International Antitrust Law and Policy. Special Assistant to the President Tim Wu was also on hand and emphasized the “all of government” approach to competition policy adopted by the Biden administration.

In his remarks, AAG Kanter seemed to be endorsing a “monopoly broth” argument in decrying the current “Whac-a-Mole” approach to monopolization cases. The intent may be to lessen the burden of proof of anticompetitive effects, or to bring together a string of actions taken jointly as evidence of a Section 2 violation. In taking such an approach, however, there is a serious risk that efficiency-seeking actions may be mistaken for exclusionary tactics and incorrectly included in the broth. (Notably, the U.S. Court of Appeals for the D.C. Circuit’s 2001 Microsoft opinion avoided the monopoly-broth problem by separately discussing specific company actions and weighing them on their individual merits, not as part of a general course of conduct.)

Kanter also recommended going beyond “our horizontal and vertical framework” in merger assessments, despite the fact that vertical mergers (involving complements) are far less likely to be anticompetitive than horizontal mergers (involving substitutes).

Finally, and perhaps most problematically, Kanter endorsed the American Innovation and Choice Online Act (AICOA), citing the protection it would afford “would-be competitors” (but what about consumers?). In so doing, the AAG ignored the fact that AICOA would prohibit welfare-enhancing business conduct and could be harmfully construed to ban mere harm to rivals (see, for example, Stanford professor Doug Melamed’s trenchant critique).

Chair Khan’s presentation, which called for a far-reaching “course correction” in U.S. antitrust, was even bolder and more alarming. She announced plans for a new FTC Act Section 5 “unfair methods of competition” (UMC) policy statement centered on bringing “standalone” cases not reachable under the antitrust laws. Such cases would not consider any potential efficiencies and would not be subject to the rule of reason. Endorsing that approach amounts to an admission that economic analysis will not play a serious role in future FTC UMC assessments (a posture that likely will cause FTC filings to be viewed skeptically by federal judges).

In noting the imminent release of new joint DOJ-FTC merger guidelines, Khan implied that they would be animated by an anti-merger philosophy. She cited “[l]awmakers’ skepticism of mergers” and congressional rejection “of economic debits and credits” in merger law. Khan thus asserted that prior agency merger guidance had departed from the law. I doubt, however, that many courts will be swayed by this “economics free” anti-merger revisionism.

Tim Wu’s remarks closing the Fordham conference had a “big picture” orientation. In an interview with GW Law’s Bill Kovacic, Wu briefly described the Biden administration’s “whole of government” approach, embodied in President Joe Biden’s July 2021 Executive Order on Promoting Competition in the American Economy. While the order’s notion of breaking down existing barriers to competition across the American economy is eminently sound, many of those barriers are caused by government restrictions (not business practices) that are not even alluded to in the order.

Moreover, in many respects, the order seeks to reregulate industries, misdiagnosing many phenomena as business abuses that actually represent efficient free-market practices (as explained by Howard Beales and Mark Jamison in a Sept. 12 Mercatus Center webinar that I moderated). In reality, the order may prove to be on net harmful, rather than beneficial, to competition.

Conclusion

What is one to make of the enforcement officials’ bold interventionist screeds? What seems to be missing in their presentations is a dose of humility and pragmatism, as well as appreciation for consumer welfare (scarcely mentioned in the agency heads’ presentations). It is beyond strange to see agencies that are having problems winning cases under conventional legal theories floating novel far-reaching initiatives that lack a sound economics foundation.

It is also amazing to observe the downplaying of consumer welfare by agency heads, given that, since 1979 (in Reiter v. Sonotone), the U.S. Supreme Court has described antitrust as a “consumer welfare prescription.” Unless there is fundamental change in the makeup of the federal judiciary (and, in particular, the Supreme Court) in the very near future, the new unconventional theories are likely to fail—and fail badly—when tested in court. 

Bringing new sorts of cases to test enforcement boundaries is, of course, an entirely defensible role for U.S. antitrust leadership. But can the same thing be said for bringing “non-boundary” cases based on theories that would have been deemed far beyond the pale by both Republican and Democratic officials just a few years ago? Buckle up: it looks as if we are going to find out. 

The practice of so-called “self-preferencing” has come to embody the zeitgeist of competition policy for digital markets, as legislative initiatives are undertaken in jurisdictions around the world that seek, in various ways, to constrain large digital platforms from granting favorable treatment to their own goods and services. The core concern cited by policymakers is that gatekeepers may abuse their dual role—as both an intermediary and a trader operating on the platform—to pursue a strategy of biased intermediation that entrenches their power in core markets (defensive leveraging) and extends it to associated markets (offensive leveraging).

In addition to active interventions by lawmakers, self-preferencing has also emerged as a new theory of harm before European courts and antitrust authorities. Should antitrust enforcers be allowed to pursue such a theory, they would gain significant leeway to bypass the legal standards and evidentiary burdens traditionally required to prove that a given business practice is anticompetitive. This should be of particular concern, given the broad range of practices and types of exclusionary behavior that could be characterized as self-preferencing—only some of which may, in some specific contexts, include exploitative or anticompetitive elements.

In a new working paper for the International Center for Law & Economics (ICLE), I provide an overview of the relevant traditional antitrust theories of harm, as well as the emerging case law, to analyze whether and to what extent self-preferencing should be considered a new standalone offense under EU competition law. The experience to date in European case law suggests that courts have been able to address platforms’ self-preferencing practices under existing theories of harm, and that it may not be sufficiently novel to constitute a standalone theory of harm.

European Case Law on Self-Preferencing

Practices by digital platforms that might be deemed self-preferencing first garnered significant attention from European competition enforcers with the European Commission’s Google Shopping investigation, which examined whether the search engine’s results pages positioned and displayed its own comparison-shopping service more favorably than the websites of rival comparison-shopping services. According to the Commission’s findings, Google’s conduct fell outside the scope of competition on the merits and could have the effect of extending Google’s dominant position in the national markets for general Internet search into adjacent national markets for comparison-shopping services, in addition to protecting Google’s dominance in its core search market.

Rather than explicitly posit that self-preferencing (a term the Commission did not use) constituted a new theory of harm, the Google Shopping ruling described the conduct as belonging to the well-known category of “leveraging.” The Commission therefore did not need to propagate a new legal test, as it held that the conduct fell under a well-established form of abuse. The case did, however, spur debate over whether the legal tests the Commission did apply effectively imposed on Google a principle of equal treatment of rival comparison-shopping services.

But it should be noted that conduct similar to that alleged in the Google Shopping investigation actually came before the High Court of England and Wales several months earlier, this time in a dispute between Google and Streetmap. At issue in that case were the favorable search results Google granted to its own maps, rather than to competing online maps. The UK Court held, however, that the complaint should have been appropriately characterized as an allegation of discrimination; it further found that Google’s conduct did not constitute anticompetitive foreclosure. A similar result was reached in May 2020 by the Amsterdam Court of Appeal in the Funda case.

Conversely, in June 2021, the French Competition Authority (AdlC) followed the European Commission in investigating Google’s practices in the digital-advertising sector. Like the Commission, the AdlC did not explicitly refer to self-preferencing, instead describing the conduct as “favoring.”

Given this background and the proliferation of approaches taken by courts and enforcers to address similar conduct, there was significant anticipation for the judgment that the European General Court would ultimately render in the appeal of the Google Shopping ruling. While the General Court upheld the Commission’s decision, it framed self-preferencing as a discriminatory abuse. Further, the Court outlined four criteria that differentiated Google’s self-preferencing from competition on the merits.

Specifically, the Court highlighted the “universal vocation” of Google’s search engine—that it is open to all users and designed to index results containing any possible content; the “superdominant” position that Google holds in the market for general Internet search; the high barriers to entry in the market for general search services; and what the Court deemed Google’s “abnormal” conduct—behaving in a way that defied expectations, given a search engine’s business model, and that changed after the company launched its comparison-shopping service.

While the precise contours of what the Court might consider discriminatory abuse aren’t yet clear, the decision’s listed criteria appear to be narrow in scope. This stands at odds with the much broader application of self-preferencing as a standalone abuse, both by the European Commission itself and by some national competition authorities (NCAs).

Indeed, just a few weeks after the General Court’s ruling, the Italian Competition Authority (AGCM) handed down a mammoth fine against Amazon over preferential treatment granted to third-party sellers who use the company’s own logistics and delivery services. Rather than reflecting the qualified set of criteria laid out by the General Court, the Italian decision was clearly inspired by the Commission’s approach in Google Shopping. Where the Commission described self-preferencing as a new form of leveraging abuse, AGCM characterized Amazon’s practices as tying.

Self-preferencing has also been raised as a potential abuse in the context of data and information practices. In November 2020, the European Commission sent Amazon a statement of objections detailing its preliminary view that the company had infringed antitrust rules by making systematic use of non-public business data, gathered from independent retailers who sell on Amazon’s marketplace, to advantage the company’s own retail business. (Amazon responded with a set of commitments currently under review by the Commission.)

Both the Commission and the U.K. Competition and Markets Authority have lodged similar allegations against Facebook over data gathered from advertisers and then used to compete with those advertisers in markets in which Facebook is active, such as classified ads. The Commission’s antitrust proceeding against Apple over its App Store rules likewise highlights concerns that the company may use its platform position to obtain valuable data about the activities and offers of its competitors, while competing developers may be denied access to important customer data.

These enforcement actions brought by NCAs and the Commission appear at odds with the more bounded criteria set out by the General Court in Google Shopping, and raise tremendous uncertainty regarding the scope and definition of the alleged new theory of harm.

Self-Preferencing, Platform Neutrality, and the Limits of Antitrust Law

The growing tendency to invoke self-preferencing as a standalone theory of antitrust harm could serve two significant goals for European competition enforcers. As mentioned earlier, it offers a convenient shortcut that could allow enforcers to skip the legal standards and evidentiary burdens traditionally required to prove anticompetitive behavior. Moreover, it can function, in practice, as a means to impose a neutrality regime on digital gatekeepers, with the aims of both ensuring a level playing field among competitors and neutralizing the potential conflicts of interests implicated by dual-mode intermediation.

The dual roles performed by some platforms continue to fuel the never-ending debate over vertical integration, as well as related concerns that, by giving preferential treatment to its own products and services, an integrated provider may leverage its dominance in one market to related markets. From this perspective, self-preferencing is an inevitable byproduct of the emergence of ecosystems.

However, as the Australian Competition and Consumer Commission has recognized, self-preferencing conduct is “often benign.” Furthermore, the total value generated by an ecosystem depends on the activities of independent complementors. Those activities are not completely under the platform’s control, although the platform is required to establish and maintain the governance structures regulating access to and interactions around that ecosystem.

Given this reality, a complete ban on self-preferencing may call the very existence of ecosystems into question, challenging their design and monetization strategies. Preferential treatment can take many different forms with many different potential effects, all stemming from platforms’ many different business models. This counsels for a differentiated, case-by-case, and effects-based approach to assessing the alleged competitive harms of self-preferencing.

Antitrust law does not impose on platforms a general duty to ensure neutrality by sharing their competitive advantages with rivals. Moreover, possessing a competitive advantage does not automatically equal an anticompetitive effect. As the European Court of Justice recently stated in Servizio Elettrico Nazionale, competition law is not intended to protect the competitive structure of the market, but rather to protect consumer welfare. Accordingly, not every exclusionary effect is detrimental to competition. Distinctions must be drawn between foreclosure and anticompetitive foreclosure, as only the latter may be penalized under antitrust.

[This post from Jonathan M. Barnett, the Torrey H. Webb Professor of Law at the University of Southern California’s Gould School of Law, is an entry in Truth on the Market’s continuing FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

In its Advance Notice for Proposed Rulemaking (ANPR) on Commercial Surveillance and Data Security, the Federal Trade Commission (FTC) has requested public comment on an unprecedented initiative to promulgate and implement wide-ranging rules concerning the gathering and use of consumer data in digital markets. In this contribution, I will assume, for the sake of argument, that the commission has the legal authority to exercise its purported rulemaking powers for this purpose without a specific legislative mandate (a question as to which I recognize there is great uncertainty, which is further heightened by the fact that Congress is concurrently considering legislation in the same policy area).

In considering whether to use these powers for the purposes of adopting and implementing privacy-related regulations in digital markets, the commission would be required to undertake a rigorous assessment of the expected costs and benefits of any such regulation. Any such cost-benefit analysis must comprise at least two critical elements that are omitted from, or addressed in highly incomplete form in, the ANPR.

The Hippocratic Oath of Regulatory Intervention

There is a longstanding consensus that regulatory intervention is warranted only if a market failure can be identified with reasonable confidence. This principle is especially relevant in the case of the FTC, which is entrusted with preserving competitive markets and, therefore, should be hesitant about intervening in market transactions without a compelling evidentiary basis. As a corollary to this proposition, it is also widely agreed that implementing any intervention to correct a market failure would only be warranted to the extent that such intervention would be reasonably expected to correct any such failure at a net social gain.

This prudent approach tracks the “economic effect” analysis that the commission must apply in the rulemaking process contemplated under the Federal Trade Commission Act and the analysis of “projected benefits and … adverse economic effects” of proposed and final rules contemplated by the commission’s rules of practice. Consistent with these requirements, the commission has exhibited a longstanding commitment to thorough cost-benefit analysis. As observed by former Commissioner Julie Brill in 2016, “the FTC conducts its rulemakings with the same level of attention to costs and benefits that is required of other agencies.” Former Commissioner Brill also observed that the “FTC combines our broad mandate to protect consumers with a rigorous, empirical approach to enforcement matters.”

This demanding, fact-based protocol enhances the likelihood that regulatory interventions result in a net improvement relative to the status quo, an uncontroversial goal of any rational public policy. Unfortunately, the ANPR does not make clear that the commission remains committed to this methodology.

Assessing Market Failure in the Use of Consumer Data

To even “get off the ground,” any proposed privacy regulation would be required to identify a market failure arising from a particular use of consumer data. This requires a rigorous and comprehensive assessment of the full range of social costs and benefits that can be reasonably attributed to any such practice.

The ANPR’s Oversights

In contrast to the approach described by former Commissioner Brill, several elements of the ANPR raise significant doubts concerning the current commission’s willingness to assess evidence relevant to the potential necessity of privacy-related regulations in a balanced, rigorous, and comprehensive manner.

First, while the ANPR identifies a plethora of social harms attributable to data-collection practices, it merely acknowledges the possibility that consumers enjoy benefits from such practices “in theory.” This skewed perspective is not empirically serious. Focusing almost entirely on the costs of data collection and dismissing as conjecture any possible gains defies market realities, especially given the fact that (as discussed below) those gains are clearly significant and, in some cases, transformative.

Second, the ANPR’s choice of the normatively charged term “data surveillance” to encompass all uses of consumer data conveys the impression that all data collection through digital services is surreptitious or coerced, whereas (as discussed below) some users may knowingly provide such data to enable certain data-reliant functionalities.

Third, there is no mention in the ANPR that online providers widely provide users with notices concerning certain uses of consumer data and often require users to select among different levels of data collection.

Fourth, the ANPR relies to an unusual degree on news websites and non-peer-reviewed publications in the style of policy briefs or advocacy papers, rather than the empirical social-science research on which the commission has historically made policy determinations.

This apparent indifference to analytical balance is particularly exhibited in the ANPR’s failure to address the economic gains generated through the use of consumer data in online markets. As was recognized in a 2014 White House report, many valuable digital services could not function effectively without engaging in some significant level of data collection. The examples are numerous and diverse, including traffic-navigation services that rely on data concerning a user’s geographic location (as well as other users’ geographic location); personalized ad delivery, which relies on data concerning a user’s search history and other disclosed characteristics; and search services, which rely on the ability to use user data to offer search services at no charge while offering targeted advertisements to paying advertisers.

There are equally clear gains on the “supply” side of the market. Data-collection practices can expand market access by enabling smaller vendors to leverage digital intermediaries to attract consumers that are most likely to purchase those vendors’ goods or services. The commission has recognized this point in the past, observing in a 2014 report:

Data brokers provide the information they compile to clients, who can use it to benefit consumers … [C]onsumers may benefit from increased and innovative product offerings fueled by increased competition from small businesses that are able to connect with consumers that they may not have otherwise been able to reach.

Given the commission’s statutory mission under the FTC Act to protect consumers’ interests and preserve competitive markets, these observations should be of special relevance.

Data Protection v. Data-Reliant Functionality

Data-reliant services yield social gains by substantially lowering transaction costs and, in the process, enabling services that would not otherwise be feasible, with favorable effects for consumers and vendors. This observation does not exclude the possibility that specific uses of consumer data may constitute a potential market failure that merits regulatory scrutiny and possible intervention (assuming there is sufficient legal authority for the relevant agency to undertake any such intervention). That depends on whether the social costs reasonably attributable to a particular use of consumer data exceed the social gains reasonably attributable to that use. This basic principle seems to be recognized by the ANPR, which states that the commission can only deem a practice “unfair” under the FTC Act if “it causes or is likely to cause substantial injury” and “the injury is not outweighed by benefits to consumers or competition.”

In implementing this principle, it is important to keep in mind that a market failure could only arise if the costs attributable to any particular use of consumer data are not internalized by the parties to the relevant transaction. This requires showing either that a particular use of consumer data imposes harms on third parties (a plausible scenario in circumstances implicating risks to data security) or that consumers are not aware of, or do not adequately assess or foresee, the costs they incur as a result of such use (a plausible scenario in circumstances implicating risks to consumer data). For the sake of brevity, I will focus on the latter scenario.

Many scholars have taken the view that consumers do not meaningfully read privacy notices or consider privacy risks, although the academic literature has also recognized efforts by private entities to develop notice methodologies that can improve consumers’ ability to do so. Even accepting this view, however, it does not necessarily follow (as the ANPR appears to assume) that a more thorough assessment of privacy risks would inevitably lead consumers to elect higher levels of data privacy even where that would degrade functionality or require paying a positive price for certain services. That is a tradeoff that will vary across consumers. It is therefore difficult to predict and easy to get wrong.

As the ANPR indirectly acknowledges in questions 26 and 40, interventions that bar certain uses of consumer data may therefore harm consumers by compelling the modification, positive pricing, or removal from the market of popular data-reliant services. For this reason, some scholars and commentators have favored the informed-consent approach that provides users with the option to bar or limit certain uses of their data. This approach minimizes error costs since it avoids overestimating consumer preferences for privacy. Unlike a flat prohibition of certain uses of consumer data, it also can reflect differences in those preferences across consumers. The ANPR appears to dismiss this concern, asking in question 75 whether certain practices should be made illegal “irrespective of whether consumers consent to them” (my emphasis added).

Addressing the still-uncertain body of evidence concerning the tradeoff between privacy protections on the one hand and data-reliant functionalities on the other (as well as the still-unresolved extent to which users can meaningfully make that tradeoff) lies outside the scope of this discussion. However, the critical observation is that any determination of market failure concerning any particular use of consumer data must identify the costs (and specifically, identify non-internalized costs) attributable to any such use and then offset those costs against the gains attributable to that use.

This balancing analysis is critical. As the commission recognized in a 2015 report, it is essential to strike a balance between safeguarding consumer privacy without suppressing the economic gains that arise from data-reliant services that can benefit consumers and vendors alike. This even-handed approach is largely absent from the ANPR—which, as noted above, focuses almost entirely on costs while largely overlooking the gains associated with the uses of consumer data in online markets. This suggests a one-sided approach to privacy regulation that is incompatible with the cost-benefit analysis that the commission recognizes it must follow in the rulemaking process.

Private-Ordering Approaches to Consumer-Data Regulation

Suppose that a rigorous and balanced cost-benefit analysis determines that a particular use of consumer data would likely yield social costs that exceed social gains. It would still remain to be determined whether and how a regulator should intervene to yield a net social gain. As regulators make this determination, it is critical that they consider the full range of possible mechanisms to address a particular market failure in the use of consumer data.

Consistent with this approach, the FTC Act specifically requires that the commission specify in an ANPR “possible regulatory alternatives under consideration,” a requirement that is replicated at each subsequent stage of the rulemaking process, as provided in the rules of practice. The range of alternatives should include the possibility of taking no action, if no feasible intervention can be identified that would likely yield a net gain.

In selecting among those alternatives, it is imperative that the commission consider the possibility of unnecessary or overly burdensome rules that could impede the efficient development and supply of data-reliant services, either degrading the quality or raising the price of those services. In the past, the commission has emphasized this concern, stating in 2011 that “[t]he FTC actively looks for means to reduce burdens while preserving the effectiveness of a rule.”

This consideration (which appears to be acknowledged in question 24 of the ANPR) is of special importance to privacy-related regulation, given that the estimated annual costs to the U.S. economy (as calculated by the Information Technology and Innovation Foundation) of compliance with the most extensive proposed forms of privacy-related regulations would exceed $100 billion. Those costs would be especially burdensome for smaller entities, effectively raising entry barriers and reducing competition in online markets (a concern that appears to be acknowledged in question 27 of the ANPR).

Given the exceptional breadth of the rules that the ANPR appears to contemplate—covering an ambitious range of activities that would typically be the subject of a landmark piece of federal legislation, rather than administrative rulemaking—it is not clear that the commission has seriously considered this vital point of concern.

In the event that the FTC does move forward with any of these proposed rulemakings (which would be required to rest on a factually supported finding of market failure), it would confront a range of possible interventions in markets for consumer data. That range is typically viewed as being bounded, on the least-interventionist side, by notice and consent requirements to facilitate informed user choice, and on the most interventionist side, by prohibitions that specifically bar certain uses of consumer data.

This is well-traveled ground within the academic and policy literature and the relative advantages and disadvantages of each regulatory approach are well-known (and differ depending on the type of consumer data and other factors). Within the scope of this contribution, I wish to address an alternative regulatory approach that lies outside this conventional range of policy options.

Bottom-Up v. Top-Down Regulation

Any cost-benefit analysis concerning potential interventions to modify or bar a particular use of consumer data, or to mandate notice-and-consent requirements in connection with any such use, must contemplate not only government-implemented solutions but also market-implemented solutions, including hybrid mechanisms in which government action facilitates or complements market-implemented solutions.

This is not a merely theoretical proposal (and is referenced indirectly in questions 36, 51, and 87 of the ANPR). As I have discussed in previously published research, the U.S. economy has a long-established record of having adopted, largely without government intervention, collective solutions to the information asymmetries that can threaten the efficient operation of consumer goods and services markets.

Examples abound: Underwriters Laboratories (UL), which establishes product-safety standards in hundreds of markets; large accounting firms, which confirm compliance with Generally Accepted Accounting Principles (GAAP), which are in turn established and updated by the Financial Accounting Standards Board, a private entity subject to oversight by the Securities and Exchange Commission; and intermediaries in other markets, such as consumer credit, business credit, insurance carriers, bond issuers, and content ratings in the entertainment and gaming industries. Collectively, these markets encompass thousands of providers, hundreds of millions of customers, and billions of dollars in value.

A collective solution is often necessary to resolve information asymmetries efficiently because the benefits from establishing an industrywide standard of product or service quality, together with a trusted mechanism for showing compliance with that standard, generates gains that cannot be fully internalized by any single provider.

Jurisdictions outside the United States have tended to address this collective-action problem through the top-down imposition of standards by government mandate and enforcement by regulatory agencies, as illustrated by the jurisdictions referenced by the ANPR that have imposed restrictions on the use of consumer data through direct regulatory intervention. By contrast, the U.S. economy has tended to favor the bottom-up development of voluntary standards, accompanied by certification and audit services, all accomplished by a mix of industry groups and third-party intermediaries. In certain markets, this may be a preferred model to address the information asymmetries between vendors and customers that are the key sources of potential market failure in the use of consumer data.

Privately organized initiatives to set quality standards and monitor compliance benefit the market by supplying a reliable standard that reduces information asymmetries and transaction costs between consumers and vendors. This, in turn, yields economic gains in the form of increased output, since consumers have reduced uncertainty concerning product quality. These quality standards are generally implemented through certification marks (for example, the “UL” certification mark) or ranking mechanisms (for example, consumer-credit or business-credit scores), which induce adoption and compliance through the opportunity to accrue reputational goodwill that, in turn, translates into economic gains.

These market-implemented voluntary mechanisms are a far less costly means to reduce information asymmetries in consumer-goods markets than regulatory interventions, which require significant investments of public funds in rulemaking, detection, investigation, enforcement, and adjudication activities.

Hybrid Policy Approaches

Private-ordering solutions to collective-action failures in markets that suffer from information asymmetries can sometimes benefit from targeted regulatory action, resulting in a hybrid policy approach. In particular, regulators can sometimes play two supplemental functions in this context.

First, regulators can require that providers in certain markets comply with (or can provide a liability safe harbor for providers that comply with) the quality standards developed by private intermediaries that have developed track records of efficiently establishing those standards and reliably confirming compliance. This mechanism is anticipated by the ANPR, which asks in question 51 whether the commission should “require firms to certify that their commercial surveillance practices meet clear standards concerning collection, use, retention, transfer, or monetization of consumer data” and further asks whether those standards should be set by “the Commission, a third-party organization, or some other entity.”

Other regulatory agencies already follow this model. For example, federal and state regulatory agencies in the fields of health care and education rely on accreditation by designated private entities for purposes of assessing compliance with applicable licensing requirements.

Second, regulators can supervise and review the quality standards implemented, adjusted, and enforced by private intermediaries. This is illustrated by the example of securities markets, in which the major exchanges institute and enforce certain governance, disclosure, and reporting requirements for listed companies but are subject to regulatory oversight by the SEC, which must approve all exchange rules and amendments. Similarly, major accounting firms monitor compliance by public companies with GAAP but must register with, and are subject to oversight by, the Public Company Accounting Oversight Board (PCAOB), a nonprofit entity subject to SEC oversight.

These types of hybrid mechanisms shift to private intermediaries most of the costs involved in developing, updating, and enforcing quality standards (in this context, standards for the use of consumer data) and harness private intermediaries’ expertise, capacities, and incentives to execute these functions efficiently and rapidly, while using targeted forms of regulatory oversight as a complementary policy tool.

Conclusion

Certain uses of consumer data in digital markets may impose net social harms that can be mitigated through appropriately crafted regulation. Assuming, for the sake of argument, that the commission has the legal power to enact regulation to address such harms (again, a point as to which there is great doubt), any specific steps must be grounded in rigorous and balanced cost-benefit analysis.

As a matter of law and sound public policy, it is imperative that the commission meaningfully consider the full range of reliable evidence to identify any potential market failures in the use of consumer data and how to formulate rules to rectify or mitigate such failures at a net social gain. Given the extent to which business models in digital environments rely on the use of consumer data, and the substantial value those business models confer on consumers and businesses, the potential “error costs” of regulatory overreach are high. It is therefore critical to engage in a thorough balancing of costs and gains concerning any such use.

Privacy regulation is a complex and economically consequential policy area that demands careful diagnosis and targeted remedies grounded in analysis and evidence, rather than sweeping interventions accompanied by rhetoric and anecdote.

The Federal Trade Commission (FTC) wants to review in advance all future acquisitions by Facebook parent Meta Platforms. According to a Sept. 2 Bloomberg report, in connection with its challenge to Meta’s acquisition of fitness-app maker Within Unlimited, the commission “has asked its in-house court to force both Meta and [Meta CEO Mark] Zuckerberg to seek approval from the FTC before engaging in any future deals.”

This latest FTC decision is inherently hyper-regulatory, anti-free market, and contrary to the rule of law. It also is profoundly anti-consumer.

Like other large digital-platform companies, Meta has conferred enormous benefits on consumers (net of payments to platforms) that are not reflected in gross domestic product statistics. In a December 2019 Harvard Business Review article, Erik Brynjolfsson and Avinash Collis reported research finding that Facebook:

…generates a median consumer surplus of about $500 per person annually in the United States, and at least that much for users in Europe. … [I]ncluding the consumer surplus value of just one digital good—Facebook—in GDP would have added an average of 0.11 percentage points a year to U.S. GDP growth from 2004 through 2017.

The acquisition of complementary digital assets—like the popular fitness app produced by Within—enables Meta to continually enhance the quality of its offerings to consumers and thereby expand consumer surplus. It reflects the benefits of economic specialization, as specialized assets are made available to enhance the quality of Meta’s offerings. Requiring Meta to develop complementary assets in-house, when that is less efficient than a targeted acquisition, denies these benefits.

Furthermore, in a recent editorial lambasting the FTC’s challenge to the Meta-Within merger as lacking a principled basis, the Wall Street Journal pointed out that the challenge also removes incentives for venture-capital investment in promising startups, a result at odds with free markets and innovation:

Venture capitalists often fund startups on the hope that they will be bought by larger companies. [FTC Chair Lina] Khan is setting down the marker that the FTC can block acquisitions merely to prevent big companies from getting bigger, even if they don’t reduce competition or harm consumers. This will chill investment and innovation, and it deserves a burial in court.

This is bad enough. But the commission’s proposal to require blanket preapprovals of all future Meta mergers (including tiny acquisitions well under regulatory pre-merger reporting thresholds) greatly compounds the harm from its latest ill-advised merger challenge. Indeed, it poses a blatant challenge to free-market principles and the rule of law, in at least three ways.

  1. It substitutes heavy-handed ex ante regulatory approval for a reliance on competition, with antitrust stepping in only in those limited instances where the hard facts indicate a transaction will be anticompetitive. Indeed, in one key sense, it is worse than traditional economic regulation. Empowering FTC staff to carry out case-by-case reviews of all proposed acquisitions inevitably will generate arbitrary decision-making, perhaps based on a variety of factors unrelated to traditional consumer-welfare-based antitrust. FTC leadership has abandoned sole reliance on consumer welfare as the touchstone of antitrust analysis, paving the way for potentially abusive and arbitrary enforcement decisions. By contrast, statutorily based economic regulation, whatever its flaws, at least imposes specific standards that staff must apply when rendering regulatory determinations.
  2. By abandoning sole reliance on consumer-welfare analysis, FTC reviews of proposed Meta acquisitions may be expected to undermine the major welfare benefits that Meta has previously bestowed upon consumers. Given the untrammeled nature of these reviews, Meta may be expected to be more cautious in proposing transactions that could enhance consumer offerings. What’s more, the general anti-merger bias of current FTC leadership would undoubtedly prompt it to reject some, if not many, procompetitive transactions that would confer new benefits on consumers.
  3. Instituting a system of case-by-case assessment and approval of transactions is antithetical to the normal American reliance on free markets, featuring limited government intervention in market transactions based on specific statutory guidance. The proposed review system for Meta lacks statutory warrant and (as noted above) could promote arbitrary decision-making. As such, it seriously flouts the rule of law and threatens substantial economic harm (sadly consistent with other ill-considered initiatives by FTC Chair Khan, see here and here).

In sum, internet-based industries, and the big digital platforms, have thrived under a system of American technological freedom characterized as “permissionless innovation.” Under this system, the American people—consumers and producers—have been the winners.

The FTC’s efforts to micromanage future business decision-making by Meta, prompted by the challenge to a routine merger, would seriously harm welfare. To the extent that the FTC views such novel interventionism as a bureaucratic template applicable to other disfavored large companies, the American public would be the big-time loser.

[This post is an entry in Truth on the Market’s continuing FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

The Federal Trade Commission’s (FTC) Aug. 22 Advance Notice of Proposed Rulemaking on Commercial Surveillance and Data Security (ANPRM) is breathtaking in its scope. For an overview summary, see this Aug. 11 FTC press release.

In their dissenting statements opposing ANPRM’s release, Commissioners Noah Phillips and Christine Wilson expertly lay bare the notice’s serious deficiencies. Phillips’ dissent stresses that the ANPRM illegitimately arrogates to the FTC legislative power that properly belongs to Congress:

[The [A]NPRM] recast[s] the Commission as a legislature, with virtually limitless rulemaking authority where personal data are concerned. It contemplates banning or regulating conduct the Commission has never once identified as unfair or deceptive. At the same time, the ANPR virtually ignores the privacy and security concerns that have animated our [FTC] enforcement regime for decades. … [As such, the ANPRM] is the first step in a plan to go beyond the Commission’s remit and outside its experience to issue rules that fundamentally alter the internet economy without a clear congressional mandate. That’s not “democratizing” the FTC or using all “the tools in the FTC’s toolbox.” It’s a naked power grab.

Wilson’s complementary dissent critically notes that the 2021 changes to FTC rules of practice governing consumer-protection rulemaking decrease opportunities for public input and vest significant authority solely with the FTC chair. She also echoes Phillips’ overarching concern with FTC overreach (footnote citations omitted):

Many practices discussed in this ANPRM are presented as clearly deceptive or unfair despite the fact that they stretch far beyond practices with which we are familiar, given our extensive law enforcement experience. Indeed, the ANPRM wanders far afield of areas for which we have clear evidence of a widespread pattern of unfair or deceptive practices. … [R]egulatory and enforcement overreach increasingly has drawn sharp criticism from courts. Recent Supreme Court decisions indicate FTC rulemaking overreach likely will not fare well when subjected to judicial review.

Phillips and Wilson’s warnings are fully warranted. The ANPRM contemplates a possible Magnuson-Moss rulemaking pursuant to Section 18 of the FTC Act,[1] which authorizes the commission to promulgate rules dealing with “unfair or deceptive acts or practices.” The questions that the ANPRM highlights center primarily on concerns of unfairness.[2] Any unfairness-related rulemaking provisions eventually adopted by the commission will have to satisfy a strict statutory cost-benefit test that defines “unfair” acts, found in Section 5(n) of the FTC Act. As explained below, the FTC will be hard-pressed to justify addressing most of the ANPRM’s concerns in Section 5(n) cost-benefit terms.

Discussion

The requirements imposed by Section 5(n) cost-benefit analysis

Section 5(n) codifies the meaning of unfair practices, and thereby constrains the FTC’s application of rulemakings covering such practices. Section 5(n) states:

The Commission shall have no authority … to declare unlawful an act or practice on the grounds that such an act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

In other words, a practice may be condemned as unfair only if it causes or is likely to cause “(1) substantial injury to consumers (2) which is not reasonably avoidable by consumers themselves and (3) not outweighed by countervailing benefits to consumers or to competition.”

This is a demanding standard. (For scholarly analyses of the standard’s legal and economic implications authored by former top FTC officials, see here, here, and here.)

First, the FTC must demonstrate that a practice imposes substantial harm on consumers that they could not readily have avoided. This requires detailed analysis of the actual effects of a particular practice, not mere theoretical musings about possible harms that may (or may not) flow from such a practice. Actual-effects analysis, of course, must be grounded in empiricism: consideration of hard facts.

Second, assuming that this formidable hurdle is overcome, the FTC must then acknowledge and weigh countervailing welfare benefits that might flow from such a practice. In addition to direct consumer-welfare benefits, other benefits include “benefits to competition.” Those may include business efficiencies that reduce a firm’s costs, because such efficiencies are a driver of vigorous competition and, thus, of long-term consumer welfare. As the Organisation for Economic Co-operation and Development has explained (see OECD Background Note on Efficiencies, 2012, at 14), dynamic and transactional business efficiencies are particularly important in driving welfare enhancement.

In sum, under Section 5(n), the FTC must show actual, fact-based, substantial harm to consumers that they could not have escaped, acting reasonably. The commission must also demonstrate that such harm is not outweighed by consumer and (procompetitive) business-efficiency benefits. What’s more, Section 5(n) makes clear that the FTC cannot “pull a rabbit out of a hat” and interject other “public policy” considerations as key factors in the rulemaking calculus (“[s]uch [other] public policy considerations may not serve as a primary basis for … [a] determination [of unfairness]”).

It ineluctably follows as a matter of law that a Section 18 FTC rulemaking sounding in unfairness must be based on hard empirical cost-benefit assessments, which require data grubbing and detailed evidence-based economic analysis. Mere anecdotal stories of theoretical harm to some consumers that is alleged to have resulted from a practice in certain instances will not suffice.

As such, if an unfairness-based FTC rulemaking fails to adhere to the cost-benefit framework of Section 5(n), it inevitably will be struck down by the courts as beyond the FTC’s statutory authority. This conclusion is buttressed by the tenor of the Supreme Court’s unanimous 2021 opinion in AMG Capital v. FTC, which rejected the FTC’s claim that its statutory injunctive authority included the ability to obtain monetary relief for harmed consumers (see my discussion of this case here).

The ANPRM and Section 5(n)

Regrettably, the tone of the questions posed in the ANPRM indicates a lack of consideration for the constraints imposed by Section 5(n). Accordingly, any future rulemaking that sought to establish “remedies” for many of the theorized abuses found in the ANPRM would stand very little chance of being upheld in litigation.

The Aug. 11 FTC press release cited previously addresses several broad topical categories: harms to consumers; harms to children; regulations; automated systems; discrimination; consumer consent; notice, transparency, and disclosure; remedies; and obsolescence. These categories are chock full of questions that imply the FTC may consider restrictions on business conduct that go far beyond the scope of the commission’s authority under Section 5(n). (The questions are notably silent about the potential consumer benefits and procompetitive efficiencies that may arise from the business practices here called into question.)

A few of the many questions set forth under just four of these topical listings (harms to consumers, harms to children, regulations, and discrimination) are highlighted below, to provide a flavor of the statutory overreach that characterizes all aspects of the ANPRM. Many other examples could be cited. (Phillips’ dissenting statement provides a cogent and critical evaluation of ANPRM questions that embody such overreach.) Furthermore, although there is a short discussion of “costs and benefits” in the ANPRM press release, it is wholly inadequate to the task.

Under the category “harms to consumers,” the ANPRM press release focuses on harm from “lax data security or surveillance practices.” It asks whether FTC enforcement has “adequately addressed indirect pecuniary harms, including potential physical harms, psychological harms, reputational injuries, and unwanted intrusions.” The press release suggests that a rule might consider addressing harms to “different kinds of consumers (e.g., young people, workers, franchisees, small businesses, women, victims of stalking or domestic violence, racial minorities, the elderly) in different sectors (e.g., health, finance, employment) or in different segments or ‘stacks’ of the internet economy.”

These laundry lists invite, at best, anecdotal public responses alleging examples of perceived “harm” falling into the specified categories. Little or no light is likely to be shed on the measurement of such harm, nor on the potential beneficial effects to some consumers from the practices complained of (for example, better targeted ads benefiting certain consumers). As such, a sound Section 5(n) assessment would be infeasible.

Under “harms to children,” the press release suggests possibly extending the limitations of the FTC-administered Children’s Online Privacy Protection Act (COPPA) to older teenagers, thereby in effect rewriting COPPA and usurping the role of Congress (a clear statutory overreach). The press release also asks “[s]hould new rules set out clear limits on personalized advertising to children and teenagers irrespective of parental consent?” It is hard (if not impossible) to understand how this form of overreach, which would displace the supervisory rights of parents (thereby imposing impossible-to-measure harms on them), could be shoe-horned into a defensible Section 5(n) cost-benefit assessment.

Under “regulations,” the press release asks whether “new rules [should] require businesses to implement administrative, technical, and physical data security measures, including encryption techniques, to protect against risks to the security, confidentiality, or integrity of covered data?” Such new regulatory strictures (whose benefits to some consumers appear speculative) would interfere significantly in internal business processes. Specifically, they could substantially diminish the efficiency of business-security measures, diminish business incentives to innovate (for example, in encryption), and reduce dynamic competition among businesses.

Consumers also would be harmed by a related slowdown in innovation. Those costs undoubtedly would be high but hard, if not impossible, to measure. The FTC also asks whether a rule should limit “companies’ collection, use, and retention of consumer data.” This requirement, which would seemingly bypass consumers’ decisions to make their data available, would interfere with companies’ ability to use such data to improve business offerings and thereby enhance consumers’ experiences. Justifying new requirements such as these under Section 5(n) would be well-nigh impossible.

The category “discrimination” is especially problematic. In addressing “algorithmic discrimination,” the ANPRM press release asks whether the FTC should “consider new trade regulation rules that bar or somehow limit the deployment of any system that produces discrimination, irrespective of the data or processes on which those outcomes are based.” In addition, the press release asks “if the Commission [should] consider harms to other underserved groups that current law does not recognize as protected from discrimination (e.g., unhoused people or residents of rural communities)?”

The FTC cites no statutory warrant for the authority to combat such forms of “discrimination.” It is not a civil-rights agency. It clearly is not authorized to issue anti-discrimination rules dealing with “groups that current law does not recognize as protected from discrimination.” Any such rules, if issued, would be summarily struck down in no uncertain terms by the judiciary, even without regard to Section 5(n).

In addition, given the fact that “economic discrimination” often is efficient (and procompetitive) and may be beneficial to consumer welfare (see, for example, here), more limited economic anti-discrimination rules almost certainly would not pass muster under the Section 5(n) cost-benefit framework.     

Finally, while the ANPRM press release does contain a very short section entitled “costs and benefits,” that section lacks any specific reference to the required Section 5(n) evaluation framework. Phillips’ dissent points out that the ANPRM “simply fail[s] to provide the detail necessary for commenters to prepare constructive responses” on cost-benefit analysis, and stresses that the broad nature of the requests for commenters’ views on costs and benefits renders the inquiry:

…not conducive to stakeholders submitting data and analysis that can be compared and considered in the context of a specific rule. … Without specific questions about [the costs and benefits of] business practices and potential regulations, the Commission cannot hope for tailored responses providing a full picture of particular practices.

In other words, the ANPRM does not provide the guidance needed to prompt the sorts of responses that might assist the FTC in carrying out an adequate Section 5(n) cost-benefit analysis.

Conclusion

The FTC would face almost certain defeat in court if it promulgated a broad rule addressing many of the perceived unfairness-based “ills” alluded to in the ANPRM. Moreover, even though (I believe) its requirements would never take effect, such a rule nevertheless would impose major economic costs on society.

Prior to final judicial resolution of its status, the rule would disincentivize businesses from engaging in a variety of data-related practices that enhance business efficiency and benefit many consumers. Furthermore, the FTC resources devoted to developing and defending the rule would not be applied to alternative welfare-enhancing FTC activities—a substantial opportunity cost.

The FTC should take heed of these realities and opt not to carry out a rulemaking based on the ANPRM. It should instead devote its scarce consumer-protection resources to prosecuting hard-core consumer fraud and deception—and, perhaps, to launching empirical studies into the economic-welfare effects of data security and commercial surveillance practices. Such studies, if carried out, should focus on dispassionate economic analysis and avoid policy preconceptions. (For example, studies involving digital platforms should take note of the existing economic literature, such as a paper indicating that digital platforms have generated enormous consumer-welfare benefits not accounted for in gross domestic product.)

One can only hope that a majority of FTC commissioners will apply common sense and realize that far-flung rulemaking exercises lacking in statutory support are bad for the rule of law, bad for the commission’s reputation, bad for the economy, and bad for American consumers.


[1] The FTC states specifically that it “is issuing this ANPR[M] pursuant to Section 18 of the Federal Trade Commission Act.”

[2] Deceptive practices that might be addressed in a Section 18 trade regulation rule would be subject to the “FTC Policy Statement on Deception,” which states that “the Commission will find deception if there is a representation, omission or practice that is likely to mislead the consumer acting reasonably in the circumstances, to the consumer’s detriment.” A court reviewing an FTC Section 18 rule focused on “deceptive acts or practices” undoubtedly would consult this Statement, although it is not clear, in light of recent jurisprudential trends, that the court would defer to the Statement’s analysis in rendering an opinion. In any event, questions of deception, which focus on acts or practices that mislead consumers, would in all likelihood have little relevance to the evaluation of any rule that might be promulgated in light of the ANPRM.