The concept of European “digital sovereignty” has been promoted in recent years both by high officials of the European Union and by EU national governments. Indeed, France made strengthening sovereignty one of the goals of its recent presidency in the EU Council.

The approach taken thus far both by the EU and by national authorities has been not to exclude foreign businesses, but instead to focus on research and development funding for European projects. Unfortunately, there are worrying signs that this more measured approach is beginning to be replaced by ill-conceived moves toward economic protectionism, ostensibly justified by national-security and personal-privacy concerns.

In this context, it is worth reconsidering why Europeans’ interests are best served not by economic isolationism, but by an understanding of sovereignty that capitalizes on alliances with other free democracies.

Protectionism Under the Guise of Cybersecurity

Among the primary worrying signs regarding the EU’s approach to digital sovereignty is the union’s planned official cybersecurity-certification scheme. The European Commission is reportedly pushing for “digital sovereignty” conditions in the scheme, which would include data and corporate-entity localization and ownership requirements. This can be categorized as “hard” data localization in the taxonomy laid out by Peter Swire and DeBrae Kennedy-Mayo of the Georgia Institute of Technology, in that it would prohibit both the transfer of data to other countries and the involvement of foreign capital in processing even data that is not transferred.

The European Cybersecurity Certification Scheme for Cloud Services (EUCS) is being prepared by ENISA, the EU cybersecurity agency. The scheme is supposed to be voluntary at first, but it is expected to become mandatory in the future, at least in some situations (e.g., public procurement). It was not initially billed as an industrial-policy measure and was instead meant to focus on technical security issues. Moreover, ENISA reportedly did not see the need to include such “digital sovereignty” requirements in the certification scheme, perhaps because it saw them as insufficiently grounded in genuine cybersecurity needs.

Despite ENISA’s position, the European Commission asked the agency to include the digital-sovereignty requirements. This move has been supported by a coalition of European businesses that hope to benefit from the protectionist nature of the scheme. Somewhat ironically, their official statement called on the European Commission to “not give in to the pressure of the ones who tend to promote their own economic interests.”

The governments of Denmark, Estonia, Greece, Ireland, the Netherlands, Poland, and Sweden expressed “strong concerns” about the Commission’s move. In contrast, Germany called for a political discussion of the certification scheme that would take into account “the economic policy perspective.” In other words, German officials want the EU to consider using the cybersecurity-certification scheme to achieve protectionist goals.

Cybersecurity certification is not the only avenue by which Brussels appears to be pursuing protectionist policies under the guise of cybersecurity concerns. As highlighted in a recent report from the Information Technology & Innovation Foundation, the European Commission and other EU bodies have also been downgrading or excluding U.S.-owned firms from technical standard-setting processes.

Do Security and Privacy Require Protectionism?

As others have discussed at length (in addition to Swire and Kennedy-Mayo, also Theodore Christakis), the evidence supporting cybersecurity and national-security arguments for hard data localization has been, at best, inconclusive. Press reports suggest that ENISA reached a similar conclusion. There may be security reasons to insist upon certain ways of distributing data storage (e.g., across different data centers), but those reasons are not directly related to national borders.

In fact, as illustrated by the well-known architectural goal behind the design of the U.S. military computer network that was the precursor to the Internet, security is enhanced by redundant, geographically dispersed distribution of data and network connections. The perils of putting “all one’s data eggs” in one basket (one locale, one data center) were amply illustrated when a fire in a data center of a French cloud provider, OVH, famously brought down millions of websites that were hosted only there. (Notably, OVH is among the most vocal European proponents of hard data localization.)

Moreover, security concerns are clearly not nearly as serious when data is processed by our allies as when it is processed by entities associated with less friendly powers. Whatever concerns there may be about U.S. intelligence collection, it would be detached from reality to suggest that the United States poses a national-security risk to EU countries. This has become even clearer since the beginning of the Russian invasion of Ukraine. Indeed, the strength of the U.S.-EU security relationship has been repeatedly acknowledged by EU and national officials.

Another commonly used justification for data localization is that it is required to protect Europeans’ privacy. The radical version of this position, seemingly increasingly popular among EU data-protection authorities, amounts to a call to block data flows between the EU and the United States. (Most bizarrely, Russia seems to receive more favorable treatment from some European bureaucrats.) The legal argument behind this view is that the United States lacks sufficient legal safeguards governing how its officials process the data of foreigners.

The soundness of that view is debated, but what is perhaps more interesting is that EU courts have identified similar privacy concerns with respect to several EU countries. The reaction of those European countries was either to ignore the courts or to be “ruthless in exploiting loopholes” in court rulings. It is thus difficult to take seriously the claims that Europeans’ data is much better safeguarded in their home countries than when it flows through the networks of the EU’s democratic allies, like the United States.

Digital Sovereignty as Industrial Policy

Given the above, privacy and security arguments are unlikely to be the decisive factors behind the EU’s push for a more protectionist approach to digital sovereignty, as in the case of cybersecurity certification. In her 2020 State of the Union speech, EU Commission President Ursula von der Leyen stated that Europe “must now lead the way on digital—or it will have to follow the way of others, who are setting these standards for us.”

She continued: “On personalized data—business to consumer—Europe has been too slow and is now dependent on others. This cannot happen with industrial data.” This framing suggests an industrial-policy aim behind the digital-sovereignty agenda. But even in considering Europe’s best interests through the lens of industrial policy, there are reasons to question the manner in which “leading the way on digital” is being implemented.

Limitations on foreign investment in European tech businesses come with significant costs to the European tech ecosystem. Those costs are particularly high in the case of blocking or disincentivizing American investment.

Effect on startups

Early-stage investors such as venture capitalists bring more than just financial capital. They offer expertise and other vital tools to help the businesses in which they invest. It is thus not surprising that, among the best investors, those with significant experience in a given area are well-represented. Due to the successes of the U.S. tech industry, American investors are especially well-positioned to play this role.

In contrast, European investors may lack the needed knowledge and skills. For example, in its report on building “deep tech” companies in Europe, Boston Consulting Group noted that a “substantial majority of executives at deep-tech companies and more than three-quarters of the investors we surveyed believe that European investors do not have a good understanding of what deep tech is.”

More to the point, even where EU players do hold advantages, a cooperative economic and technological system will allow the comparative advantages of both U.S. and EU markets to redound to each other’s benefit. That is to say, not all U.S. investment expertise will apply in the EU, of course, but some certainly will. Similarly, there will be EU firms positioned to share their expertise in the United States. But there is no ex ante way to know when and where these complementarities will exist, which essentially dooms efforts to centrally plan technological cooperation.

Given the close economic, cultural, and historical ties of the two regions, it makes sense to work together, particularly given rising international tensions outside the Western sphere. It also makes sense insofar as the relatively open private-capital-investment environment in the United States is nearly impossible to match, let alone surpass, through government spending.

For example, national-government and EU funding in Europe has thus far ranged from expensive failures (the “Google-killer”) to all-too-predictable bureaucracy-heavy grantmaking, which beneficiaries describe as lacking flexibility, “slow,” “heavily process-oriented,” and expensive for businesses to navigate. As reported by the Financial Times’ Sifted website, the EU’s own startup-investment scheme (the European Innovation Council) backed only one business over more than a year, and it had “delays in payment” that “left many startups short of cash—and some on the brink of going out of business.”

Starting new business ventures is risky, especially for the founders. They risk devoting their time, resources, and reputation to an enterprise that may very well fail. Given this risk of failure, the potential upside needs to be sufficiently high to incentivize founders and early employees to take the gamble. This upside is normally provided by the possibility of selling one’s shares in a business. In BCG’s previously cited report on deep tech in Europe, respondents noted that the European ecosystem lacks “clear exit opportunities”:

Some investors fear being constrained by European sovereignty concerns through vetoes at the state or Europe level or by rules potentially requiring European ownership for deep-tech companies pursuing strategically important technologies. M&A in Europe does not serve as the active off-ramp it provides in the US. From a macroeconomic standpoint, in the current environment, investment and exit valuations may be impaired by inflation or geopolitical tensions.

More broadly, those exit opportunities also factor importantly into funders’ willingness to price the risk of failure in their ventures. Where the upside is sufficiently large, an investor might be willing to experiment with riskier ventures and be suitably motivated to structure investments to deal with such risks. But where exit opportunities are diminished, it makes much more sense to spend time on safer bets that may provide lower returns but are less likely to fail. Coupled with the fact that government funding must run through bureaucratic channels, which are inherently risk-averse, the overall effect is a less dynamic funding system.
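To make that arithmetic concrete, here is a minimal sketch in Python of the expected-value comparison an investor might run. The probabilities and return multiples are purely hypothetical assumptions chosen for illustration; they are not drawn from the BCG report or any other source cited above.

```python
# Stylized expected-value comparison: how capped exit opportunities
# can flip an investor's preference from risky to safe ventures.
# All figures below are hypothetical assumptions, for illustration only.

def expected_multiple(p_success: float, exit_multiple: float) -> float:
    """Expected return multiple, assuming a failed venture returns nothing."""
    return p_success * exit_multiple

# With open exit opportunities (e.g., acquisition by a large buyer),
# a rare but large payoff can dominate a safer, modest bet.
risky_open_exits = expected_multiple(p_success=0.10, exit_multiple=30.0)  # 3.0x
safe_bet         = expected_multiple(p_success=0.80, exit_multiple=2.0)   # 1.6x

# If policy constraints effectively cap the achievable exit at, say, 5x,
# the same risky venture now underperforms the safe bet.
risky_capped_exits = expected_multiple(p_success=0.10, exit_multiple=5.0)  # 0.5x

print(f"risky venture, open exits:   {risky_open_exits:.1f}x")
print(f"safe bet:                    {safe_bet:.1f}x")
print(f"risky venture, capped exits: {risky_capped_exits:.1f}x")
```

On these stylized numbers, the risky venture is the better expected bet only so long as the full upside remains reachable; once exits are capped, capital rationally migrates toward safer, lower-return projects.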

The Central and Eastern Europe (CEE) region is an especially good example of the positive influence of American investment in Europe’s tech ecosystem. According to the state-owned Polish Development Fund and Dealroom.co, in 2019, $0.9 billion of venture-capital investment in CEE came from the United States, $0.5 billion from Europe, and $0.1 billion from the rest of the world.

Direct investment

Technological investment is rarely, if ever, a zero-sum game. U.S. firms that invest in the EU (and vice versa) do not do so as foreign conquerors, but as partners whose own fortunes are intertwined with those of their host country. Consider, for example, Google’s recent PLN 2.7 billion investment in Poland. Far from extractive, that investment will build infrastructure in Poland and will employ an additional 2,500 Poles in the company’s cloud-computing division. This sort of partnership plants the seeds that grow into a native tech ecosystem. The Poles who today work in Google’s cloud-computing division are the founders of tomorrow’s innovative startups rooted in Poland.

The funding that accompanies native operations of foreign firms also has a direct impact on local economies and tech ecosystems. More local investment in technology creates demand for education and support roles around that investment. This creates a virtuous circle that ultimately facilitates growth in the local ecosystem. And while this direct investment is important for large countries, in smaller countries, it can be a critical component in stimulating their own participation in the innovation economy. 

According to Crunchbase, out of 2,617 EU-headquartered startups founded since 2010 with total equity funding of at least $10 million, 927 (35%) had at least one founder who previously worked for an American company. For example, two of the three founders of Madrid-based Seedtag (total funding of more than $300 million) worked at Google immediately before starting Seedtag.

It is more difficult to quantify how many early employees of European startups built their experience in American-owned companies, but it is likely to be significant and to become even more so, especially in regions—like Central and Eastern Europe—with significant direct U.S. investment in local talent.

Conclusion

Explicit industrial policy for protectionist ends is—at least, for the time being—regarded as unwise public policy. But this is not to say that countries do not have valid national interests that can be met through more productive channels. While strong data-localization requirements are ultimately counterproductive, particularly among closely allied nations, countries have a legitimate interest in promoting the growth of the technology sector within their borders.

National investment in R&D can yield fruit, particularly when that investment works in tandem with the private sector (see, e.g., the Bayh-Dole Act in the United States). The bottom line, however, is that any intervention should take care to actually promote the ends it seeks. Strong data-localization policies in the EU will not lead to the success of the local tech industry; they will instead serve to wall the region off from the kind of investment that can make it thrive.

The business press generally describes the gig economy that has sprung up around digital platforms like Uber and TaskRabbit as a beneficial phenomenon, “a glass that is almost full.” The gig economy “is an economy that operates flexibly, involving the exchange of labor and resources through digital platforms that actively facilitate buyer and seller matching.”

From the perspective of businesses, major positive attributes of the gig economy include cost-effectiveness (minimizing costs and expenses); labor-force efficiencies (“directly matching the company to the freelancer”); and flexible output production (individualized work schedules and enhanced employee motivation). Workers also benefit through greater independence, enhanced work flexibility (including hours worked), and the ability to earn extra income.

While there are some disadvantages as well (worker-commitment questions, business-ethics issues, lack of worker benefits, limited coverage of personal expenses, and worker isolation), there is no question that the gig economy has contributed substantially to the growth and flexibility of the American economy—a major social good. Indeed, “[i]t is undeniable that the gig economy has become an integral part of the American workforce, a trend that has only been accelerated during the” COVID-19 pandemic.

In marked contrast, however, the Federal Trade Commission’s (FTC) Sept. 15 Policy Statement on Enforcement Related to Gig Work (“gig statement” or “statement”) is the story of a glass that is almost empty. The accompanying press release declaring “FTC to Crack Down on Companies Taking Advantage of Gig Workers” (since when is “taking advantage of workers” an antitrust or consumer-protection offense?) puts an entirely negative spin on the gig economy. And while the gig statement begins by describing the nature and large size of the gig economy, it does so in a dispassionate and bland tone. No mention is made of the substantial benefits for consumers, workers, and the overall economy stemming from gig work. Rather, the gig statement quickly adopts a critical perspective in describing the market for gig workers and then addressing gig-related FTC-enforcement priorities. What’s more, the statement deals in very broad generalities and eschews specifics, rendering it of no real use to gig businesses seeking practical guidance.

Most significantly, the gig statement suggests that the FTC should play a significant enforcement role in gig-industry labor questions that fall outside its statutory authority. As such, the statement is fatally flawed as a policy document. It provides no true guidance and should be substantially rewritten or withdrawn.

Gig Statement Analysis

The gig statement’s substantive analysis begins with a negative assessment of gig-firm conduct. It expresses concern that gig workers are being misclassified as independent contractors and are thus deprived “of critical rights [right to organize, overtime pay, health and safety protections] to which they are entitled under law.” Relatedly, gig workers are said to be “saddled with inordinate risks.” Gig firms also “may use nontransparent algorithms to capture more revenue from customer payments for workers’ services than customers or workers understand.”

Heaven forfend!

The solution offered by the gig statement is “scrutiny of promises gig platforms make, or information they fail to disclose, about the financial proposition of gig work.” No mention is made of how these promises supposedly made to workers about the financial ramifications of gig employment are related to the FTC’s statutory mission (which centers on unfair or deceptive acts or practices affecting consumers or unfair methods of competition).

The gig statement next complains that a “power imbalance” between gig companies and gig workers “may leave gig workers exposed to harms from unfair, deceptive, and anticompetitive practices and is likely to amplify such harms when they occur.” “Power imbalance” along a vertical chain has not been a source of serious antitrust concern for decades (and even in the case of the Robinson-Patman Act, the U.S. Supreme Court most recently stressed, in 2006’s Volvo v. Reeder, that harm to interbrand competition is the key concern). “Power imbalances” between workers and employers bear no necessary relation to the promotion of consumer welfare, which the Supreme Court teaches is the raison d’être of antitrust. Moreover, the FTC does not explain why unfair or deceptive conduct likely follows from the mere existence of substantial bargaining power. Such an unsupported assertion is not worthy of being included in a serious agency-policy document.

The gig statement then engages in more idle speculation about a supposed relationship between market concentration and the proliferation of unfair and deceptive practices across the gig economy. The statement claims, without any substantiation, that gig companies in concentrated platform markets will be incentivized to exert anticompetitive market power over gig workers, and thereby “suppress wages below competitive rates, reduce job quality, or impose onerous terms on gig workers.” Relatedly, “unfair and deceptive practices by one platform can proliferate across the labor market, creating a race to the bottom that participants in the gig economy, and especially gig workers, have little ability to avoid.” No empirical or theoretical support is advanced for any of these bald assertions, which give the strong impression that the commission plans to target gig-economy companies for enforcement actions without regard to the actual facts on the ground. (By contrast, the commission has in the past developed detailed factual records of competitive and/or consumer-protection problems in health care and other important industry sectors as a prelude to possible future investigations.)

The statement then launches into a description of the FTC’s gig-economy policy priorities. It notes first that “workers may be deprived of the protections of an employment relationship” when gig firms classify them as independent contractors, leading to firms’ “disclosing [of] pay and costs in an unfair and deceptive manner.” What’s more, the FTC “also recognizes that misleading claims [made to workers] about the costs and benefits of gig work can impair fair competition among companies in the gig economy and elsewhere.”

These extraordinary statements seem to be saying that the FTC plans to closely scrutinize gig-economy-labor contract negotiations, based on its distaste for independent contracting (which it believes should be supplanted by employer-employee relationships, a question of labor law, not FTC law). Nowhere is it explained where such a novel FTC exercise of authority comes from, nor how such FTC actions have any bearing on harms to consumer welfare. The FTC’s apparent desire to force employment relationships upon gig firms is far removed from harm to competition or unfair or deceptive practices directed at consumers. Without more of an explanation, one is left to conclude that the FTC is proposing to take actions that are far beyond its statutory remit.

The gig statement next tries to tie the FTC’s new gig program to violations of the FTC Act (“unsubstantiated claims”); the FTC’s Franchise Rule; and the FTC’s Business Opportunity Rule, violations of which “can trigger civil penalties.” The statement, however, lacks any sort of logical, coherent explanation of how the new enforcement program necessarily follows from these other sources of authority. While the statement can point to a few examples of rules-based enforcement actions that bear some connection to certain terms of employment, such special cases are a far cry from any sort of general justification for turning the FTC into a labor-contracts regulator.

The statement then moves on to the alleged misuse of algorithmic tools dealing with gig-worker contracts and supervision that may lead to unlawful gig-worker oversight and termination. Once again, the connection of any of this to consumer-welfare harm (from a competition or consumer-protection perspective) is not made.

The statement further asserts that FTC Act consumer-protection violations may arise from “nonnegotiable” and other unfair contracts. In support of such a novel exercise of authority, however, the FTC cites supposedly analogous “unfair” clauses found in consumer contracts with individuals or small-business consumers. It is highly doubtful that these precedents support any FTC enforcement actions involving labor contracts.

Noncompete clauses with individuals are next on the gig statement’s agenda. It is claimed that “[n]on-compete provisions may undermine free and fair labor markets by restricting workers’ ability to obtain competitive offers for their services from existing companies, resulting in lower wages and degraded working conditions. These provisions may also raise barriers to entry for new companies.” The assertion, however, that such clauses may violate Section 1 of the Sherman Act or Section 5 of the FTC Act’s bar on unfair methods of competition, seems dubious, to say the least. Unless there is coordination among companies, these are essentially unilateral contracting practices that may have robust efficiency explanations. Making out these practices to be federal antitrust violations is bad law and bad policy; they are, in any event, subject to a wide variety of state laws.

Even more problematic is the FTC’s claim that a variety of standard (typically efficiency-seeking) contract limitations, such as nondisclosure agreements and liquidated damages clauses, “may be excessive or overbroad” and subject to FTC scrutiny. This preposterous assertion would make the FTC into a second-guesser of common labor contracts (a federal labor-contract regulator, if you will), a role for which it lacks authority and is entirely unsuited. Turning the FTC into a federal labor-contract regulator would impose unjustifiable uncertainty costs on business and chill a host of efficient arrangements. It is hard to take such a claim of power seriously, given its lack of any credible statutory basis.

The final section of the gig statement dealing with FTC enforcement (“Policing Unfair Methods of Competition That Harm Gig Workers”) is unobjectionable, but not particularly informative. It essentially states that the FTC’s black letter legal authority over anticompetitive conduct also extends to gig companies: the FTC has the authority to investigate and prosecute anticompetitive mergers; agreements among competitors to fix terms of employment; no-poach agreements; and acts of monopolization and attempted monopolization. (Tell us something we did not know!)

The fact that gig-company workers may be harmed by such arrangements is noted. The mere page and a half devoted to this legal summary, however, provides little practical guidance for gig companies as to how to avoid running afoul of the law. Antitrust policy statements may be excused for providing less detailed guidance than antitrust guidelines, but it would be helpful if they did something more than provide a capsule summary of general American antitrust principles. The gig statement does not pass this simple test.

The gig statement closes with a few glittering generalities. Cooperation with other agencies is highlighted (for example, an information-sharing agreement with the National Labor Relations Board is described). The FTC describes an “Equity Action Plan” calling for a focus on how gig-economy antitrust and consumer-protection abuses harm underserved communities and low-wage workers.

The FTC finishes with a request for input from the public and from gig workers about abusive and potentially illegal gig-sector conduct. No mention is made of the fact that the FTC must, of course, conform itself to the statutory limitations on its jurisdiction in the gig sector, as in all other areas of the economy.

Summing Up the Gig Statement

In sum, the critical flaw of the FTC’s gig statement is its focus on questions of labor law and policy (including the question of independent contractor as opposed to employee status) that are the proper purview of federal and state statutory schemes not administered by the Federal Trade Commission. (A secondary flaw is the statement’s unbalanced portrayal of the gig sector, which ignores its beneficial aspects.) If the FTC decides that gig-economy issues deserve particular enforcement emphasis, it should (and, indeed, must) direct its attention to anticompetitive actions and unfair or deceptive acts or practices that harm consumers.

On the antitrust side, that might include collusion among gig companies on the terms offered to workers or perhaps “mergers to monopoly” between gig companies offering a particular service. On the consumer-protection side, that might include making false or materially misleading statements to consumers about the terms under which they purchase gig-provided services. (It is conceivable, of course, that some of those statements might be made, unwittingly or not, by gig independent contractors at the behest of the gig companies.)

The FTC also might carry out gig-industry studies to identify particular prevalent competitive or consumer-protection harms. The FTC should not, however, seek to transform itself into a gig-labor-market enforcer and regulator, in defiance of its lack of statutory authority to play this role.

Conclusion

The FTC does, of course, have a legitimate role to play in challenging unfair methods of competition and unfair acts or practices that undermine consumer welfare wherever they arise, including in the gig economy. But it does a disservice by focusing merely on supposed negative aspects of the gig economy and conjuring up a gig-specific “parade of horribles” worthy of close commission scrutiny and enforcement action.

Many of the “horribles” cited may not even be “bads,” and many of them are, in any event, beyond the proper legal scope of FTC inquiry. There are other federal agencies (for example, the National Labor Relations Board) whose statutes may prove applicable to certain problems noted in the gig statement. In other cases, statutory changes may be required to address certain problems noted in the statement (assuming they actually are problems). The FTC, and its fellow enforcement agencies, should keep in mind, of course, that they are not Congress, and wishing for legal authority to deal with problems does not create it (something the federal judiciary fully understands).  

In short, the negative atmospherics that permeate the gig statement are unnecessary and counterproductive; if anything, they are likely to convince at least some judges that the FTC is not the dispassionate finder of fact and enforcer of law that it claims to be. In particular, the judiciary is unlikely to be impressed by the FTC’s apparent effort to insert itself into questions that lie far beyond its statutory mandate.

The FTC should withdraw the gig statement. If, however, it does not, it should revise the statement in a manner that is respectful of the limits on the commission’s legal authority, and that presents a more dispassionate analysis of gig-economy business conduct.

In late August, Roberto Campos Neto, the head of Brazil’s central bank, is reported to have said about Pix, the bank’s two-year-old real-time-payments (RTP) system, that it “eliminates the need to have a credit card. I think that credit cards will cease to exist at some point soon.” Wow! Sounds amazing. A new system that does everything a credit card can do, but better.

As the old saying goes, however, if something sounds too good to be true, it probably is. While Pix has some advantages, it also has many disadvantages. In particular, it lacks many of the features currently offered by credit cards, such as liability caps, fraud prevention, and—perhaps crucially—access to credit. So, it seems unlikely to replace credit cards any time soon.

Pix and the Unbanked

When Brazil’s central bank launched Pix in November 2020, evangelists at the bank hoped it would offer a low-cost alternative to existing payments and would entice some of the country’s tens of millions of unbanked and underbanked adults into the banking system. While Pix has, indeed, attracted many users, it has done little, if anything, to solve the problem of the unbanked.

Proponents of Pix asserted that the RTP system would dramatically reduce the number of unbanked individuals in Brazil. While it is true that many Brazilians who were previously unbanked do now have Pix accounts, it would be incorrect to conclude that Pix was the reason they ceased to be unbanked.

A study by Americas Market Intelligence (commissioned by Mastercard) found that, during the COVID-19 pandemic, “Brazil reduced its unbanked population by an astounding 73%.” But the study was based on research conducted between June and August 2020 and was published in October 2020, the month before Pix launched. It described the implementation of state and federal programs launched in Brazil in response to the pandemic:

  • The “Coronavoucher” program distributed emergency funds to low-income informal workers exclusively via the state-owned bank Caixa Econômica Federal (CEF). Applications for funds could only be made via CEF’s Caixa Tem smartphone app, and funds were distributed via the same app. As of Aug. 5, 2020, 66 million people had received Coronavouchers via the Caixa Tem app. Of those, 36 million were previously unbanked.
  • Merenda em Casa (“snack at home”), a program run by state governments, distributed funds to low-income families with children at public schools to help them pay for food while schools were closed due to COVID-19. The program distributed funds via PicPay and PagBank’s PagSeguro, both private-sector payment apps.

Following the launch of Pix, the central-bank-run RTP program was made available to clients of Caixa Tem, PicPay, and PagBank. As a result, previously unbanked individuals who had become banked because of the Coronavoucher and Merenda em Casa programs were able to obtain and use Pix keys to send and receive payments.

It remains unclear, however, what proportion of those previously unbanked individuals actually use Pix. As Figure 1 below shows, the number of Pix keys registered vastly outstrips the number of users. As such, not only is it false to claim that Pix helped reduce the number of unbanked Brazilians, but it isn’t possible to say with certainty how many of those previously unbanked individuals are now active users of Pix.

FIGURE 1: Pix Keys Registered to Natural Persons and Pix Users Who Are Natural Persons

Pix-Created Problems

Pix suffered a series of data breaches this past year, with the end result that details of Pix accounts were stolen from more than 500,000 account holders. Meanwhile, hackers have set up fake apps designed to steal money from users’ bank accounts by masquerading as legitimate Pix-compliant wallets. And Pix has been associated with a rise in lightning kidnappings, whereby kidnappers force their victims to make a transfer on Pix in order to be released.

Faced with the problem that they cannot avoid having Pix because their banks have automatically enabled the system, some Brazilians have responded to the threat of kidnappings by purchasing second “Pix phones.” Users load these mid-range Android phones with banking and Pix apps and leave them at home. Meanwhile, they delete all banking apps from their primary phone. While such an approach ostensibly prevents criminals from stealing potentially large amounts of money from individuals who can afford to have a second phone, it is quite a costly and inconvenient solution.

Pix vs. Credit Cards

Roberto Campos Neto reportedly conceded that Pix data breaches will occur “with some frequency.” This acknowledgment of Pix’s unresolved security issues is difficult to square with the central bank president’s claim that the service will soon replace credit cards. After all, the major credit-card networks (Visa, Mastercard, American Express, and Discover) have more than half a century of experience managing fraud, and have built massive artificial-intelligence-based systems to identify and prevent potentially fraudulent transactions. Pix has no such system. Credit-card networks have also developed a highly effective system for challenging fraudulent transactions called “chargebacks.”

Card networks’ investment in fraud management has enabled them to offer “zero liability” terms to cardholders, which has made credit cards attractive as a means of paying for goods and services, both at brick-and-mortar locations and online. While Pix now has a system to reverse fraudulent transactions, its reliability has yet to be tested, and Pix as yet does not offer zero liability. Thus, given the choice between a credit card and Pix, users are unlikely to use Pix to pay for goods where there is a risk that the business will fail to deliver goods or services as promised.  

Finally, credit cards offer users the ability to defer payment for no fee until their next bill becomes due (usually at least a month). And they offer the ability to defer payment for longer, if necessary, with interest payable on the amount outstanding.

Conclusion: There Ain’t No Such Thing as a Free Lunch

The investments that credit-card networks have made in the identification, prevention, and rectification of fraud have been possible because they are able to charge a (very small) fee to process transactions. Pix also charges merchants a small fee for transactions but, as noted, it is not able to offer the same protections.

Most Pix transactions to date have been person-to-person (P2P), effectively replacing transactions that would have otherwise been made with cash, checks, or online bank-to-bank funds transfers. That makes sense when one thinks about the risks involved. P2P transactions are likely to involve parties that know one another and/or are engaged in repeat business. By contrast, many consumer-to-business and business-to-business transactions involve parties that are relatively less well-known to one another and thus have more incentive to renege on commitments. Consumers are therefore more inclined to use the payment system with protections built in, while merchants—who are happy for the additional business—are willing to pay the price for that business.

The science-fiction writer Robert Heinlein popularized a pithy phrase to describe the idea that it is not possible to get something for nothing: “There Ain’t No Such Thing as a Free Lunch.” If Pix is to challenge credit cards as a real consumer-payments system, it will have to offer similar levels of fraud protection to consumers. That will not be cheap. While the central bank might continue to subsidize Pix transactions, doing so to the degree that would be necessary to offer such fraud protections would be an abuse of its position. Thinking otherwise is science fiction.

We’re back for another biweekly roundup – and what a biweekly it’s been! The JCPA rode, died, and rides again. Yet AICOA is AWOL. FTC Chair Lina Khan went to Congress and back to (Fordham) law school, making waves wherever she went. DOJ added to the agencies’ roster of recently lost cases. And the FTC is here to help gig workers get real jobs. All that and more, in this edition of the FTC UMC Roundup.

This week’s headline is, without a doubt, FTC Chair Lina Khan’s remarks at Fordham Law School’s Conference on International Antitrust Law & Policy, where she announced that the Commission is currently considering a new policy statement on the use of its Unfair Methods of Competition authority.

It comes as no surprise that the Commission will be issuing this statement, though the details and exact timing have yet to be disclosed. Khan’s remarks do shed some light on what can be expected – though again there are no surprises. She “believe[s] it is clear that respect for the rule of law requires [the Commission] to reactivate [its] standalone Section 5 enforcement program,” and that the statement must “reflect[] the statutory text, our institutional structure, the history of the statute, and the case law.” 

Earlier in her remarks, Khan points to standalone UMC claims the Commission litigated in the 1940s through 1970s – “invitations to collude; price discrimination claims against buyers not covered by the Clayton Act; de facto bundling, tying, and exclusive dealing; and a host of other practices.” This reads like a menu of claims that will be embraced by the new statement, for which she has found support in the history of the statute and case law.

In addition to her trip to New York, back home Khan also visited the Senate for an antitrust oversight hearing. Khan’s statement champions the Commission’s departure from longstanding antitrust principles and celebrates its more active enforcement efforts. Very unusually, her statement prompted a dissenting statement from Commissioners Phillips and Wilson. Phillips and Wilson note that under Khan the Commission has actually seen less enforcement activity, call out the myriad inaccurate factual assertions in Khan’s statement, and raise concern about too-aggressive efforts to push the Commission beyond its statutory authority.

Cristiano Lima has more coverage of the oversight hearing. After a bit over a year at the helm of the agency, this was Khan’s first oversight hearing. From the tone of the questioning, she may wish that it were her last. But in the likely event that Republicans take the House in the midterms, it will just be the first, and the easiest, of many future trips to Congress.

In other news, Senators Amy Klobuchar (D-MN) and Ted Cruz (R-TX) show us that strange bedfellows do weird things in bed. That’s right, I’m talking about the Journalism Competition and Preservation Act (JCPA), sponsored by Klobuchar. The JCPA is an attempt to preserve competition in media markets by allowing cartelization in media markets. A couple of weeks ago, Sen. Klobuchar abruptly withdrew the JCPA (her own bill) from committee consideration after a surprise amendment from Sen. Cruz that was intended to limit platforms’ content-moderation practices. In a legitimately surprising turn of events, Senators Klobuchar and Cruz agreed to compromise language that allows news outlets to collectively bargain with platforms and will “bar the tech firms from throttling, filtering, suppressing or curating content.”

Back on the FTC front, the Commission released a new Policy Statement on Enforcement Related to Gig Work. The statement explains that “[p]rotecting these workers from unfair, deceptive, and anticompetitive practices is a priority, and the Federal Trade Commission will use its full authority to do so.” It is a curious policy statement for a number of reasons, not least of which is the purported use of the Commission’s consumer-protection authority for employee protection – we have a National Labor Relations Board for that. More subtly, the statement refers throughout to “unfair, deceptive, and anticompetitive practices,” suggesting a hybrid approach to these issues that draws separately from the Commission’s consumer-protection and antitrust authorities. This move is increasingly common in the Commission’s recent regulatory efforts.

Time for some quick hits. This week’s puzzler has got to be Commissioner Bedoya calling for a revitalization of the Robinson-Patman Act. But as with all things FTC these days, the new ideas seem to be the ones found in the back seat of a DeLorean.

Alden Abbott draws our attention to the upcoming Axon case. To be argued in the Supreme Court on November 7th, this case raises both procedural and substantive challenges to the Commission’s constitutional structure. Abbott notes in passing the Commission’s recent losses before its ALJ in the Altria-JUUL and Illumina-Grail mergers – and we can add the DOJ’s recent loss in its effort to block UnitedHealth’s acquisition of Change Healthcare to the agencies’ growing list of recent losses.

Charles Sauer takes a look at the ongoing discussion of potential Republican nominees to fill Commissioner Phillips’ seat when he steps down from the FTC, asking Why Are Conservatives Intent On Cloning Lina Khan? He rightly argues that Republicans should not consider nominating someone who shares Khan’s disregard for the rule of law and sound economics, or who would embrace unchecked administrative power. Even if used to pursue valid goals, such abuses of regulatory authority are anathema to good government and basic conservative principles. Any commissioner should put faithful execution of the Commission’s statutory mandate above their own policy preferences, including a commitment to acting pursuant to clearly expressed congressional intent rather than through constitutionally dubious administrative fiat.

What’s on tap for next week? The White House is convening its Competition Council on Monday. And for those wondering whether I forgot to discuss AICOA after mentioning it in the opening graf, no need to worry. It got just as much attention as needed.

A White House administration typically announces major new antitrust initiatives in the fall and spring, and this year is no exception. Senior Biden administration officials kicked off the fall season at Fordham Law School (more on that below) by shedding additional light on their plans to expand the accepted scope of antitrust enforcement.

Their aggressive enforcement statements draw headlines, but will the administration’s neo-Brandeisians actually notch enforcement successes? The prospects are cloudy, to say the least.

The U.S. Justice Department (DOJ) has lost some cartel cases in court this year (when was the last time that happened?) and, on Sept. 19, a federal judge rejected the DOJ’s attempt to enjoin UnitedHealth’s $13.8 billion bid for Change Healthcare. The Federal Trade Commission (FTC) recently lost two merger challenges before its in-house administrative law judge. It now faces a challenge to its administrative-enforcement processes before the U.S. Supreme Court (the Axon case, to be argued in November).

(Incidentally, on the other side of the Atlantic, the European Commission has faced some obstacles itself. Despite its recent Google victory, the Commission has effectively lost two abuse-of-dominance cases this year—the Intel and Qualcomm matters—before the European General Court.)

So, are the U.S. antitrust agencies chastened? Will they now go back to basics? Far from it. They are enthusiastically announcing plans to charge ahead, asserting theories of antitrust violations that have not been taken seriously for decades, if ever. Whether this turns out to be wise enforcement policy remains to be seen, but color me highly skeptical. Let’s take a quick look at some of the big enforcement-policy ideas that are being floated.

Fordham Law’s Antitrust Conference

Admiral David Farragut’s order “Damn the torpedoes, full speed ahead!” was key to the Union Navy’s August 1864 victory in the Battle of Mobile Bay, a decisive Civil War clash. Perhaps inspired by this display of risk-taking, the heads of the two federal antitrust agencies—DOJ Assistant Attorney General (AAG) Jonathan Kanter and FTC Chair Lina Khan—took a “damn the economics, full speed ahead” attitude in remarks at the Sept. 16 session of Fordham Law School’s 49th Annual Conference on International Antitrust Law and Policy. Special Assistant to the President Tim Wu was also on hand and emphasized the “all of government” approach to competition policy adopted by the Biden administration.

In his remarks, AAG Kanter seemed to be endorsing a “monopoly broth” argument in decrying the current “Whac-a-Mole” approach to monopolization cases. The intent may be to lessen the burden of proof of anticompetitive effects, or to bring together a string of actions taken jointly as evidence of a Section 2 violation. In taking such an approach, however, there is a serious risk that efficiency-seeking actions may be mistaken for exclusionary tactics and incorrectly included in the broth. (Notably, the U.S. Court of Appeals for the D.C. Circuit’s 2001 Microsoft opinion avoided the monopoly-broth problem by separately discussing specific company actions and weighing them on their individual merits, not as part of a general course of conduct.)

Kanter also recommended going beyond “our horizontal and vertical framework” in merger assessments, despite the fact that vertical mergers (involving complements) are far less likely to be anticompetitive than horizontal mergers (involving substitutes).

Finally, and perhaps most problematically, Kanter endorsed the American Innovation and Choice Online Act (AICOA), citing the protection it would afford “would-be competitors” (but what about consumers?). In so doing, the AAG ignored the fact that AICOA would prohibit welfare-enhancing business conduct and could be harmfully construed to ban mere harm to rivals (see, for example, Stanford professor Doug Melamed’s trenchant critique).

Chair Khan’s presentation, which called for a far-reaching “course correction” in U.S. antitrust, was even bolder and more alarming. She announced plans for a new FTC Act Section 5 “unfair methods of competition” (UMC) policy statement centered on bringing “standalone” cases not reachable under the antitrust laws. Such cases would not consider any potential efficiencies and would not be subject to the rule of reason. Endorsing that approach amounts to an admission that economic analysis will not play a serious role in future FTC UMC assessments (a posture that likely will cause FTC filings to be viewed skeptically by federal judges).

In noting the imminent release of new joint DOJ-FTC merger guidelines, Khan implied that they would be animated by an anti-merger philosophy. She cited “[l]awmakers’ skepticism of mergers” and congressional rejection “of economic debits and credits” in merger law. Khan thus asserted that prior agency merger guidance had departed from the law. I doubt, however, that many courts will be swayed by this “economics free” anti-merger revisionism.

Tim Wu’s remarks closing the Fordham conference had a “big picture” orientation. In an interview with GW Law’s Bill Kovacic, Wu briefly described the Biden administration’s “whole of government” approach, embodied in President Joe Biden’s July 2021 Executive Order on Promoting Competition in the American Economy. While the order’s notion of breaking down existing barriers to competition across the American economy is eminently sound, many of those barriers are caused by government restrictions (not business practices) that are not even alluded to in the order.

Moreover, in many respects, the order seeks to reregulate industries, misdiagnosing many phenomena as business abuses that actually represent efficient free-market practices (as explained by Howard Beales and Mark Jamison in a Sept. 12 Mercatus Center webinar that I moderated). In reality, the order may prove to be on net harmful, rather than beneficial, to competition.

Conclusion

What is one to make of the enforcement officials’ bold interventionist screeds? What seems to be missing in their presentations is a dose of humility and pragmatism, as well as appreciation for consumer welfare (scarcely mentioned in the agency heads’ presentations). It is beyond strange to see agencies that are having problems winning cases under conventional legal theories floating novel far-reaching initiatives that lack a sound economics foundation.

It is also amazing to observe the downplaying of consumer welfare by agency heads, given that, since 1979 (in Reiter v. Sonotone), the U.S. Supreme Court has described antitrust as a “consumer welfare prescription.” Unless there is fundamental change in the makeup of the federal judiciary (and, in particular, the Supreme Court) in the very near future, the new unconventional theories are likely to fail—and fail badly—when tested in court. 

Bringing new sorts of cases to test enforcement boundaries is, of course, an entirely defensible role for U.S. antitrust leadership. But can the same thing be said for bringing “non-boundary” cases based on theories that would have been deemed far beyond the pale by both Republican and Democratic officials just a few years ago? Buckle up: it looks as if we are going to find out. 

The practice of so-called “self-preferencing” has come to embody the zeitgeist of competition policy for digital markets, as jurisdictions around the world undertake legislative initiatives that seek, in various ways, to constrain large digital platforms from granting favorable treatment to their own goods and services. The core concern cited by policymakers is that gatekeepers may abuse their dual role—as both an intermediary and a trader operating on the platform—to pursue a strategy of biased intermediation that entrenches their power in core markets (defensive leveraging) and extends it to associated markets (offensive leveraging).

In addition to active interventions by lawmakers, self-preferencing has also emerged as a new theory of harm before European courts and antitrust authorities. Should antitrust enforcers be allowed to pursue such a theory, they would gain significant leeway to bypass the legal standards and evidentiary burdens traditionally required to prove that a given business practice is anticompetitive. This should be of particular concern, given the broad range of practices and types of exclusionary behavior that could be characterized as self-preferencing—only some of which may, in some specific contexts, include exploitative or anticompetitive elements.

In a new working paper for the International Center for Law & Economics (ICLE), I provide an overview of the relevant traditional antitrust theories of harm, as well as the emerging case law, to analyze whether and to what extent self-preferencing should be considered a new standalone offense under EU competition law. The experience to date in European case law suggests that courts have been able to address platforms’ self-preferencing practices under existing theories of harm, and that self-preferencing may not be sufficiently novel to constitute a standalone theory of harm.

European Case Law on Self-Preferencing

Practices by digital platforms that might be deemed self-preferencing first garnered significant attention from European competition enforcers with the European Commission’s Google Shopping investigation, which examined whether the search engine’s results pages positioned and displayed its own comparison-shopping service more favorably than the websites of rival comparison-shopping services. According to the Commission’s findings, Google’s conduct fell outside the scope of competition on the merits and could have the effect of extending Google’s dominant position in the national markets for general Internet search into adjacent national markets for comparison-shopping services, in addition to protecting Google’s dominance in its core search market.

Rather than explicitly posit that self-preferencing (a term the Commission did not use) constituted a new theory of harm, the Google Shopping ruling described the conduct as belonging to the well-known category of “leveraging.” The Commission therefore did not need to promulgate a new legal test, as it held that the conduct fell under a well-established form of abuse. The case did, however, spur debate over whether the legal tests the Commission did apply effectively imposed on Google a principle of equal treatment of rival comparison-shopping services.

But it should be noted that conduct similar to that alleged in the Google Shopping investigation actually came before the High Court of England and Wales several months earlier, this time in a dispute between Google and Streetmap. At issue in that case were the favorable search results Google granted to its own maps, rather than to competing online maps. The UK court held, however, that the complaint was more appropriately characterized as an allegation of discrimination; it further found that Google’s conduct did not constitute anticompetitive foreclosure. A similar result was reached in May 2020 by the Amsterdam Court of Appeal in the Funda case.

Conversely, in June 2021, the French Competition Authority (AdlC) followed the European Commission in investigating Google’s practices in the digital-advertising sector. Like the Commission, the AdlC did not explicitly refer to self-preferencing, instead describing the conduct as “favoring.”

Given this background and the proliferation of approaches taken by courts and enforcers to address similar conduct, there was significant anticipation for the judgment that the European General Court would ultimately render in the appeal of the Google Shopping ruling. While the General Court upheld the Commission’s decision, it framed self-preferencing as a discriminatory abuse. Further, the Court outlined four criteria that differentiated Google’s self-preferencing from competition on the merits.

Specifically, the Court highlighted the “universal vocation” of Google’s search engine—that it is open to all users and designed to index results containing any possible content; the “superdominant” position that Google holds in the market for general Internet search; the high barriers to entry in the market for general search services; and what the Court deemed Google’s “abnormal” conduct—behaving in a way that defied expectations, given a search engine’s business model, and that changed after the company launched its comparison-shopping service.

While the precise contours of what the Court might consider discriminatory abuse aren’t yet clear, the decision’s listed criteria appear to be narrow in scope. This stands at odds with the much broader application of self-preferencing as a standalone abuse, both by the European Commission itself and by some national competition authorities (NCAs).

Indeed, just a few weeks after the General Court’s ruling, the Italian Competition Authority (AGCM) handed down a mammoth fine against Amazon over preferential treatment granted to third-party sellers who use the company’s own logistics and delivery services. Rather than reflecting the qualified set of criteria laid out by the General Court, the Italian decision was clearly inspired by the Commission’s approach in Google Shopping. Where the Commission described self-preferencing as a new form of leveraging abuse, AGCM characterized Amazon’s practices as tying.

Self-preferencing has also been raised as a potential abuse in the context of data and information practices. In November 2020, the European Commission sent Amazon a statement of objections detailing its preliminary view that the company had infringed antitrust rules by making systematic use of non-public business data, gathered from independent retailers who sell on Amazon’s marketplace, to advantage the company’s own retail business. (Amazon responded with a set of commitments currently under review by the Commission.)

Both the Commission and the U.K. Competition and Markets Authority have lodged similar allegations against Facebook over data gathered from advertisers and then used to compete with those advertisers in markets in which Facebook is active, such as classified ads. The Commission’s antitrust proceeding against Apple over its App Store rules likewise highlights concerns that the company may use its platform position to obtain valuable data about the activities and offers of its competitors, while competing developers may be denied access to important customer data.

These enforcement actions brought by NCAs and the Commission appear at odds with the more bounded criteria set out by the General Court in Google Shopping, and raise tremendous uncertainty regarding the scope and definition of the alleged new theory of harm.

Self-Preferencing, Platform Neutrality, and the Limits of Antitrust Law

The growing tendency to invoke self-preferencing as a standalone theory of antitrust harm could serve two significant goals for European competition enforcers. As mentioned earlier, it offers a convenient shortcut that could allow enforcers to skip the legal standards and evidentiary burdens traditionally required to prove anticompetitive behavior. Moreover, it can function, in practice, as a means to impose a neutrality regime on digital gatekeepers, with the aims of both ensuring a level playing field among competitors and neutralizing the potential conflicts of interests implicated by dual-mode intermediation.

The dual roles performed by some platforms continue to fuel the never-ending debate over vertical integration, as well as related concerns that, by giving preferential treatment to its own products and services, an integrated provider may leverage its dominance in one market into related markets. From this perspective, self-preferencing is an inevitable byproduct of the emergence of ecosystems.

However, as the Australian Competition and Consumer Commission has recognized, self-preferencing conduct is “often benign.” Furthermore, the total value generated by an ecosystem depends on the activities of independent complementors. Those activities are not completely under the platform’s control, although the platform is required to establish and maintain the governance structures regulating access to and interactions around that ecosystem.

Given this reality, a complete ban on self-preferencing may call the very existence of ecosystems into question, challenging their design and monetization strategies. Preferential treatment can take many different forms with many different potential effects, all stemming from platforms’ many different business models. This counsels for a differentiated, case-by-case, and effects-based approach to assessing the alleged competitive harms of self-preferencing.

Antitrust law does not impose on platforms a general duty to ensure neutrality by sharing their competitive advantages with rivals. Moreover, possessing a competitive advantage does not automatically equate to an anticompetitive effect. As the European Court of Justice recently stated in Servizio Elettrico Nazionale, competition law is not intended to protect the competitive structure of the market, but rather to protect consumer welfare. Accordingly, not every exclusionary effect is detrimental to competition. Distinctions must be drawn between foreclosure and anticompetitive foreclosure, as only the latter may be penalized under antitrust law.

[This post from Jonathan M. Barnett, the Torrey H. Webb Professor of Law at the University of Southern California’s Gould School of Law, is an entry in Truth on the Market’s FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

In its Advance Notice of Proposed Rulemaking (ANPR) on Commercial Surveillance and Data Security, the Federal Trade Commission (FTC) has requested public comment on an unprecedented initiative to promulgate and implement wide-ranging rules concerning the gathering and use of consumer data in digital markets. In this contribution, I will assume, for the sake of argument, that the commission has the legal authority to exercise its purported rulemaking powers for this purpose without a specific legislative mandate (a question on which I recognize there is great uncertainty, heightened further by the fact that Congress is concurrently considering legislation in the same policy area).

In considering whether to use these powers for the purposes of adopting and implementing privacy-related regulations in digital markets, the commission would be required to undertake a rigorous assessment of the expected costs and benefits of any such regulation. Any such cost-benefit analysis must comprise at least two critical elements that are omitted from, or addressed in highly incomplete form in, the ANPR.

The Hippocratic Oath of Regulatory Intervention

There is a longstanding consensus that regulatory intervention is warranted only if a market failure can be identified with reasonable confidence. This principle is especially relevant in the case of the FTC, which is entrusted with preserving competitive markets and, therefore, should be hesitant about intervening in market transactions without a compelling evidentiary basis. As a corollary to this proposition, it is also widely agreed that implementing any intervention to correct a market failure would only be warranted to the extent that such intervention would be reasonably expected to correct any such failure at a net social gain.

This prudent approach tracks the “economic effect” analysis that the commission must apply in the rulemaking process contemplated under the Federal Trade Commission Act and the analysis of “projected benefits and … adverse economic effects” of proposed and final rules contemplated by the commission’s rules of practice. Consistent with these requirements, the commission has exhibited a longstanding commitment to thorough cost-benefit analysis. As observed by former Commissioner Julie Brill in 2016, “the FTC conducts its rulemakings with the same level of attention to costs and benefits that is required of other agencies.” Former Commissioner Brill also observed that the “FTC combines our broad mandate to protect consumers with a rigorous, empirical approach to enforcement matters.”

This demanding, fact-based protocol enhances the likelihood that regulatory interventions result in a net improvement relative to the status quo, an uncontroversial goal of any rational public policy. Unfortunately, the ANPR does not make clear that the commission remains committed to this methodology.

Assessing Market Failure in the Use of Consumer Data

To even “get off the ground,” any proposed privacy regulation would be required to identify a market failure arising from a particular use of consumer data. This requires a rigorous and comprehensive assessment of the full range of social costs and benefits that can be reasonably attributed to any such practice.

The ANPR’s Oversights

In contrast to the approach described by former Commissioner Brill, several elements of the ANPR raise significant doubts concerning the current commission’s willingness to assess evidence relevant to the potential necessity of privacy-related regulations in a balanced, rigorous, and comprehensive manner.

First, while the ANPR identifies a plethora of social harms attributable to data-collection practices, it merely acknowledges the possibility that consumers enjoy benefits from such practices “in theory.” This skewed perspective is not empirically serious. Focusing almost entirely on the costs of data collection and dismissing as conjecture any possible gains defies market realities, especially given the fact that (as discussed below) those gains are clearly significant and, in some cases, transformative.

Second, the ANPR’s choice of the normatively charged term “data surveillance” to encompass all uses of consumer data conveys the impression that all data collection through digital services is surreptitious or coerced, whereas (as discussed below) some users may knowingly provide such data to enable certain data-reliant functionalities.

Third, there is no mention in the ANPR that online providers widely provide users with notices concerning certain uses of consumer data and often require users to select among different levels of data collection.

Fourth, the ANPR relies to an unusual degree on news websites and non-peer-reviewed publications in the style of policy briefs or advocacy papers, rather than on the empirical social-science research on which the commission has historically based its policy determinations.

This apparent indifference to analytical balance is particularly exhibited in the ANPR’s failure to address the economic gains generated through the use of consumer data in online markets. As was recognized in a 2014 White House report, many valuable digital services could not function effectively without engaging in some significant level of data collection. The examples are numerous and diverse, including traffic-navigation services, which rely on data concerning a user’s geographic location (as well as other users’ locations); personalized ad delivery, which relies on data concerning a user’s search history and other disclosed characteristics; and general search, which relies on user data to offer search services at no charge while serving targeted advertisements to paying advertisers.

There are equally clear gains on the “supply” side of the market. Data-collection practices can expand market access by enabling smaller vendors to leverage digital intermediaries to attract consumers that are most likely to purchase those vendors’ goods or services. The commission has recognized this point in the past, observing in a 2014 report:

Data brokers provide the information they compile to clients, who can use it to benefit consumers … [C]onsumers may benefit from increased and innovative product offerings fueled by increased competition from small businesses that are able to connect with consumers that they may not have otherwise been able to reach.

Given the commission’s statutory mission under the FTC Act to protect consumers’ interests and preserve competitive markets, these observations should be of special relevance.

Data Protection v. Data-Reliant Functionality

Data-reliant services yield social gains by substantially lowering transaction costs and, in the process, enabling services that would not otherwise be feasible, with favorable effects for consumers and vendors. This observation does not exclude the possibility that specific uses of consumer data may constitute a potential market failure that merits regulatory scrutiny and possible intervention (assuming there is sufficient legal authority for the relevant agency to undertake any such intervention). That depends on whether the social costs reasonably attributable to a particular use of consumer data exceed the social gains reasonably attributable to that use. This basic principle seems to be recognized by the ANPR, which states that the commission can only deem a practice “unfair” under the FTC Act if “it causes or is likely to cause substantial injury” and “the injury is not outweighed by benefits to consumers or competition.”

In implementing this principle, it is important to keep in mind that a market failure could arise only if the costs attributable to a particular use of consumer data are not internalized by the parties to the relevant transaction. This requires showing either that a particular use of consumer data imposes harms on third parties (a plausible scenario in circumstances implicating risks to data security) or that consumers are not aware of, or do not adequately assess or foresee, the costs they incur as a result of such use (a plausible scenario in circumstances implicating risks to consumer data). For the sake of brevity, I will focus on the latter scenario.

Many scholars have taken the view that consumers do not meaningfully read privacy notices or consider privacy risks, although the academic literature has also recognized efforts by private entities to develop notice methodologies that can improve consumers’ ability to do so. Even accepting this view, however, it does not necessarily follow (as the ANPR appears to assume) that a more thorough assessment of privacy risks would inevitably lead consumers to elect higher levels of data privacy even where that would degrade functionality or require paying a positive price for certain services. That is a tradeoff that will vary across consumers. It is therefore difficult to predict and easy to get wrong.

As the ANPR indirectly acknowledges in questions 26 and 40, interventions that bar certain uses of consumer data may therefore harm consumers by compelling the modification, positive pricing, or removal from the market of popular data-reliant services. For this reason, some scholars and commentators have favored the informed-consent approach, which provides users with the option to bar or limit certain uses of their data. This approach minimizes error costs, since it avoids overestimating consumer preferences for privacy. Unlike a flat prohibition of certain uses of consumer data, it also can reflect differences in those preferences across consumers. The ANPR appears to dismiss this concern, asking in question 75 whether certain practices should be made illegal “irrespective of whether consumers consent to them” (emphasis added).

Addressing the still-uncertain body of evidence concerning the tradeoff between privacy protections on the one hand and data-reliant functionalities on the other (as well as the still-unresolved extent to which users can meaningfully make that tradeoff) lies outside the scope of this discussion. However, the critical observation is that any determination of market failure concerning any particular use of consumer data must identify the costs (and specifically, identify non-internalized costs) attributable to any such use and then offset those costs against the gains attributable to that use.
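To make the structure of that balancing explicit, the decision rule can be sketched in a few lines of code. What follows is a minimal illustration with invented names and figures, not a proposed regulatory methodology; each input would, in practice, require the fact-intensive showings discussed above.

```python
# Minimal sketch of the balancing test described above. All names and
# figures are invented for illustration; they are not real estimates.

def candidate_market_failure(total_costs: float,
                             internalized_costs: float,
                             attributable_gains: float) -> bool:
    """A use of consumer data is a candidate market failure only if the
    costs NOT internalized by the transacting parties exceed the gains
    reasonably attributable to that use."""
    non_internalized = total_costs - internalized_costs
    return non_internalized > attributable_gains

# Illustrative only: $10M in total costs, $7M of which the parties
# already bear, against $5M in gains to consumers and vendors.
print(candidate_market_failure(10.0, 7.0, 5.0))
# False: the $3M in non-internalized costs does not exceed the $5M in gains.
```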

This balancing analysis is critical. As the commission recognized in a 2015 report, it is essential to strike a balance between safeguarding consumer privacy without suppressing the economic gains that arise from data-reliant services that can benefit consumers and vendors alike. This even-handed approach is largely absent from the ANPR—which, as noted above, focuses almost entirely on costs while largely overlooking the gains associated with the uses of consumer data in online markets. This suggests a one-sided approach to privacy regulation that is incompatible with the cost-benefit analysis that the commission recognizes it must follow in the rulemaking process.

Private-Ordering Approaches to Consumer-Data Regulation

Suppose that a rigorous and balanced cost-benefit analysis determines that a particular use of consumer data would likely yield social costs that exceed social gains. It would still remain to be determined whether and how a regulator should intervene to yield a net social gain. As regulators make this determination, it is critical that they consider the full range of possible mechanisms to address a particular market failure in the use of consumer data.

Consistent with this approach, the FTC Act specifically requires that the commission specify in an ANPR “possible regulatory alternatives under consideration,” a requirement that is replicated at each subsequent stage of the rulemaking process, as provided in the rules of practice. The range of alternatives should include the possibility of taking no action, if no feasible intervention can be identified that would likely yield a net gain.

In selecting among those alternatives, it is imperative that the commission consider the possibility of unnecessary or overly burdensome rules that could impede the efficient development and supply of data-reliant services, either degrading the quality or raising the price of those services. In the past, the commission has emphasized this concern, stating in 2011 that “[t]he FTC actively looks for means to reduce burdens while preserving the effectiveness of a rule.”

This consideration (which appears to be acknowledged in question 24 of the ANPR) is of special importance to privacy-related regulation, given that the estimated annual costs to the U.S. economy (as calculated by the Information Technology & Innovation Foundation) of compliance with the most extensive proposed forms of privacy-related regulations would exceed $100 billion. Those costs would be especially burdensome for smaller entities, effectively raising entry barriers and reducing competition in online markets (a concern that appears to be acknowledged in question 27 of the ANPR).

Given the exceptional breadth of the rules that the ANPR appears to contemplate—covering an ambitious range of activities that would typically be the subject of a landmark piece of federal legislation, rather than administrative rulemaking—it is not clear that the commission has seriously considered this vital point of concern.

In the event that the FTC does move forward with any of these proposed rulemakings (which would be required to rest on a factually supported finding of market failure), it would confront a range of possible interventions in markets for consumer data. That range is typically viewed as being bounded, on the least-interventionist side, by notice-and-consent requirements that facilitate informed user choice and, on the most-interventionist side, by prohibitions that specifically bar certain uses of consumer data.

This is well-traveled ground within the academic and policy literature, and the relative advantages and disadvantages of each regulatory approach are well-known (and differ depending on the type of consumer data and other factors). Within the scope of this contribution, I wish to address an alternative regulatory approach that lies outside this conventional range of policy options.

Bottom-Up v. Top-Down Regulation

Any cost-benefit analysis concerning potential interventions to modify or bar a particular use of consumer data, or to mandate notice-and-consent requirements in connection with any such use, must contemplate not only government-implemented solutions but also market-implemented solutions, including hybrid mechanisms in which government action facilitates or complements market-implemented solutions.

This is not a merely theoretical proposal (and is referenced indirectly in questions 36, 51, and 87 of the ANPR). As I have discussed in previously published research, the U.S. economy has a long-established record of having adopted, largely without government intervention, collective solutions to the information asymmetries that can threaten the efficient operation of consumer goods and services markets.

Examples abound: Underwriters Laboratories (UL), which establishes product-safety standards in hundreds of markets; the large accounting firms, which confirm compliance with Generally Accepted Accounting Principles (GAAP), themselves established and updated by the Financial Accounting Standards Board, a private entity subject to oversight by the Securities and Exchange Commission (SEC); and intermediaries in other markets, such as consumer and business credit, insurance, bond issuance, and content ratings in the entertainment and gaming industries. Collectively, these markets encompass thousands of providers, hundreds of millions of customers, and billions of dollars in value.

A collective solution is often necessary to resolve information asymmetries efficiently, because the benefits from establishing an industrywide standard of product or service quality, together with a trusted mechanism for showing compliance with that standard, generate gains that cannot be fully internalized by any single provider.

Jurisdictions outside the United States have tended to address this collective-action problem through the top-down imposition of standards by government mandate and enforcement by regulatory agencies, as illustrated by the jurisdictions referenced by the ANPR that have imposed restrictions on the use of consumer data through direct regulatory intervention. By contrast, the U.S. economy has tended to favor the bottom-up development of voluntary standards, accompanied by certification and audit services, all accomplished by a mix of industry groups and third-party intermediaries. In certain markets, this may be a preferred model to address the information asymmetries between vendors and customers that are the key sources of potential market failure in the use of consumer data.

Privately organized initiatives to set quality standards and monitor compliance benefit the market by supplying a reliable standard that reduces information asymmetries and transaction costs between consumers and vendors. This, in turn, yields economic gains in the form of increased output, since consumers have reduced uncertainty concerning product quality. These quality standards are generally implemented through certification marks (for example, the “UL” certification mark) or ranking mechanisms (for example, consumer-credit or business-credit scores), which induce adoption and compliance through the opportunity to accrue reputational goodwill that, in turn, translates into economic gains.

These market-implemented voluntary mechanisms are a far less costly means to reduce information asymmetries in consumer-goods markets than regulatory interventions, which require significant investments of public funds in rulemaking, detection, investigation, enforcement, and adjudication activities.

Hybrid Policy Approaches

Private-ordering solutions to collective-action failures in markets that suffer from information asymmetries can sometimes benefit from targeted regulatory action, resulting in a hybrid policy approach. In particular, regulators can sometimes play two supplemental functions in this context.

First, regulators can require that providers in certain markets comply with (or can provide a liability safe harbor for providers that comply with) the quality standards developed by private intermediaries that have developed track records of efficiently establishing those standards and reliably confirming compliance. This mechanism is anticipated by the ANPR, which asks in question 51 whether the commission should “require firms to certify that their commercial surveillance practices meet clear standards concerning collection, use, retention, transfer, or monetization of consumer data” and further asks whether those standards should be set by “the Commission, a third-party organization, or some other entity.”

Other regulatory agencies already follow this model. For example, federal and state regulatory agencies in the fields of health care and education rely on accreditation by designated private entities for purposes of assessing compliance with applicable licensing requirements.

Second, regulators can supervise and review the quality standards implemented, adjusted, and enforced by private intermediaries. This is illustrated by the example of securities markets, in which the major exchanges institute and enforce certain governance, disclosure, and reporting requirements for listed companies but are subject to regulatory oversight by the SEC, which must approve all exchange rules and amendments. Similarly, major accounting firms monitor compliance by public companies with GAAP but must register with, and are subject to oversight by, the Public Company Accounting Oversight Board (PCAOB), a nonprofit entity subject to SEC oversight.

These types of hybrid mechanisms shift to private intermediaries most of the costs involved in developing, updating, and enforcing quality standards (in this context, standards for the use of consumer data) and harness private intermediaries’ expertise, capacities, and incentives to execute these functions efficiently and rapidly, while using targeted forms of regulatory oversight as a complementary policy tool.

Conclusion

Certain uses of consumer data in digital markets may impose net social harms that can be mitigated through appropriately crafted regulation. Assuming, for the sake of argument, that the commission has the legal power to enact regulation to address such harms (again, a point as to which there is great doubt), any specific steps must be grounded in rigorous and balanced cost-benefit analysis.

As a matter of law and sound public policy, it is imperative that the commission meaningfully consider the full range of reliable evidence to identify any potential market failures in the use of consumer data and how to formulate rules to rectify or mitigate such failures at a net social gain. Given the extent to which business models in digital environments rely on the use of consumer data, and the substantial value those business models confer on consumers and businesses, the potential “error costs” of regulatory overreach are high. It is therefore critical to engage in a thorough balancing of costs and gains concerning any such use.

Privacy regulation is a complex and economically consequential policy area that demands careful diagnosis and targeted remedies grounded in analysis and evidence, rather than sweeping interventions accompanied by rhetoric and anecdote.

Welcome back to the FTC UMC Roundup! The Senate is back in session and bills are dying. FTC is holding hearings and faith in the agency is dying. The more things change the more they stay the same. Which is a fancy way of saying that despite all the talk of change, little change seems likely. This is never more true than when midterm elections are on the horizon – this is high season for talk of change that will not happen.

This week’s headline is the unexpected death of the Journalism Competition and Preservation Act (JCPA), which seems to have met its fate in committee on Thursday. The JCPA sought to save “local journalism” by allowing select legacy media entities to form cartels to monopolistically negotiate with tech platforms. The expectation yesterday morning was that the bill would sail through committee. Enter Sen. Ted Cruz (R-TX), with an amendment to further help local journalism by limiting platforms’ use of content moderation – leading one of the bill’s chief sponsors, Sen. Amy Klobuchar (D-MN) to withdraw the bill from consideration.

The story here is partly about a bad bill meeting its timely demise – one does not bring “more cartels” as a solution to a competition fight. But the bigger story is about Senator Klobuchar’s ill-fated competition policy efforts and her failure to appreciate the anti-tech dynamic that she has relied on to bring Republican co-sponsors on board. My colleague Ian Adams captured the essential challenge in memetic form.

We’re a week into September, about 60 days from the midterms and three weeks from the end of the fiscal year. Senate Majority Leader Chuck Schumer (D-NY) has bigger fish to fry than pushing legislation that risks costing Democrats seats. The demise of the JCPA is an object lesson in the politics of Senator Klobuchar’s American Innovation and Choice Online Act (AICOA) – and a preview of its likely fate.

A close contender for this week’s headline could have been the Commercial Surveillance and Data Security Public Forum hosted by the FTC on Thursday. But this charade doesn’t deserve headline status. The online forum, billed as a hearing relating to the FTC’s recently announced ANPRM, was plagued by technical difficulties from the start – slides not working, speakers on unstable Internet connections, and consistent “am I muted?” problems – difficulties that are simply amateurish two years into the COVID-19 pandemic.

But the bigger issue with the forum was that nearly three of its five scheduled hours were dedicated to one-sided panels stacked with panelists favoring FTC regulation. Assuming the ANPRM ultimately results in the FTC adopting rules, the Commission is assembling a remarkably strong record to support claims of procedural bias. As I have previously discussed, the ANPRM itself does not meet the requirements of the Magnuson-Moss Act. Now, anyone challenging whatever rules the FTC may ultimately adopt (about which the ANPRM has offered no basis for discussion) will readily be able to point to this hearing to demonstrate that the Commission’s rulemaking process is biased in favor of adopting specific regulations, rather than neutrally obtaining information to inform that process.

There has been plenty of other FTC-related news over the past two weeks.

First, congratulations to Svetlana Gans! In addition to being a recent contributor to this ongoing symposium, Svetlana is the subject of a recent article identifying her as a “leading candidate” to take current commissioner Noah Phillips’s seat after he steps down. Of course, the article is critical of her – but that’s the nature of the appointments game. There are few individuals as qualified for this position as Svetlana. And I’m not just saying that because she has contributed to this symposium – she is a longtime FTC practitioner with deep institutional knowledge of the agency and an impeccable record of experience on antitrust and consumer protection matters. 

Second, not many people seem to have noticed this, but the FTC released its latest five-year plan. The changes between this plan and the previous iteration are subtle but substantial. Most notably, the Commission has replaced its previous focus on protecting “consumers” with a focus on protecting “the public,” and is now focused on “fair competition” instead of “vibrant competition.” Some agencies, like the Federal Communications Commission, have authority based around a public-interest standard. It sounds like FTC Chair Lina Khan is trying to rewrite the FTC’s UDAP and UMC authority – which Congress and the courts have long anchored in consumer concerns – to rest instead on broader “public interest” standards. One need not invoke the major-questions doctrine to question the propriety of one agency refocusing its strategic priorities around the statutory mandate of another.

Third, Fourth, and Fifth: Walmart is going to war with the FTC; the Senate is going to war with the FTC; and the FTC’s ALJ is going to war with the FTC. Walmart is challenging the FTC’s absurd claim that the company is doing too little to protect consumers from scammers despite the company’s substantial efforts to protect consumers from scammers. With its equal split between Republicans and Democrats, and in a preview of what may be to come in a new Congress, the Senate Judiciary Committee is planning to hold a DOJ and FTC oversight hearing. And in a loss for the Commission, the FTC’s ALJ has rejected the FTC’s contention that the Illumina/Grail merger would harm competition – a decision that will likely be appealed to, and overturned by, the FTC commissioners, in a nice rebuke to the legitimacy of the agency’s decision-making process (see, inter alia, the pending Axon litigation before the Supreme Court).

It has not been wholly bad news for the FTC over the past two weeks. The Commission has only just started scrutinizing Amazon’s proposed acquisition of iRobot, so that case isn’t faltering yet. On the other hand, Kochava, a firm that the FTC has accused of providing “precise geolocation data associated with unique persistent identifiers” in a way that amounts to an unfair or deceptive act or practice, preemptively brought suit against the FTC, arguing that the FTC’s claims are unconstitutional. Kochava smartly positioned its claims alongside the pending Axon litigation – which the Supreme Court will hear on November 7 – aligning itself with the most potent recent challenges to the FTC’s constitutional structure and authority.

This week’s closing note is that Queen Elizabeth II has passed away. As she moves on to the unknown country, it seems that we have lost one of the last figures of the twentieth century’s global order. To our British friends, God save your King – and may we all take a moment to reflect on the value of stability in our economic and political order, tempered by the importance and inevitability of change.

The Federal Trade Commission (FTC) wants to review in advance all future acquisitions by Facebook parent Meta Platforms. According to a Sept. 2 Bloomberg report, in connection with its challenge to Meta’s acquisition of fitness-app maker Within Unlimited,  the commission “has asked its in-house court to force both Meta and [Meta CEO Mark] Zuckerberg to seek approval from the FTC before engaging in any future deals.”

This latest FTC decision is inherently hyper-regulatory, anti-free market, and contrary to the rule of law. It also is profoundly anti-consumer.

Like other large digital-platform companies, Meta has conferred enormous benefits on consumers (net of payments to platforms) that are not reflected in gross domestic product statistics. In a December 2019 Harvard Business Review article, Erik Brynjolfsson and Avinash Collis reported research finding that Facebook:

…generates a median consumer surplus of about $500 per person annually in the United States, and at least that much for users in Europe. … [I]ncluding the consumer surplus value of just one digital good—Facebook—in GDP would have added an average of 0.11 percentage points a year to U.S. GDP growth from 2004 through 2017.

The acquisition of complementary digital assets—like the popular fitness app produced by Within—enables Meta to continually enhance the quality of its offerings to consumers and thereby expand consumer surplus. It reflects the benefits of economic specialization, as specialized assets are made available to enhance the quality of Meta’s offerings. Requiring Meta to develop complementary assets in-house, when that is less efficient than a targeted acquisition, denies these benefits.

Furthermore, in a recent editorial lambasting the FTC’s challenge to the Meta-Within merger as lacking a principled basis, the Wall Street Journal pointed out that the challenge also removes incentives for venture-capital investments in promising startups, a result at odds with free markets and innovation:

Venture capitalists often fund startups on the hope that they will be bought by larger companies. [FTC Chair Lina] Khan is setting down the marker that the FTC can block acquisitions merely to prevent big companies from getting bigger, even if they don’t reduce competition or harm consumers. This will chill investment and innovation, and it deserves a burial in court.

This is bad enough. But the commission’s proposal to require blanket preapprovals of all future Meta mergers (including tiny acquisitions well under regulatory pre-merger reporting thresholds) greatly compounds the harm from its latest ill-advised merger challenge. Indeed, it poses a blatant challenge to free-market principles and the rule of law, in at least three ways.

  1. It substitutes heavy-handed ex ante regulatory approval for a reliance on competition, with antitrust stepping in only in those limited instances where the hard facts indicate a transaction will be anticompetitive. Indeed, in one key sense, it is worse than traditional economic regulation. Empowering FTC staff to carry out case-by-case reviews of all proposed acquisitions inevitably will generate arbitrary decision-making, perhaps based on a variety of factors unrelated to traditional consumer-welfare-based antitrust. FTC leadership has abandoned sole reliance on consumer welfare as the touchstone of antitrust analysis, paving the way for potentially abusive and arbitrary enforcement decisions. By contrast, statutorily based economic regulation, whatever its flaws, at least imposes specific standards that staff must apply when rendering regulatory determinations.
  2. By abandoning sole reliance on consumer-welfare analysis, FTC reviews of proposed Meta acquisitions may be expected to undermine the major welfare benefits that Meta has previously bestowed upon consumers. Given the untrammeled nature of these reviews, Meta may be expected to be more cautious in proposing transactions that could enhance consumer offerings. What’s more, the general anti-merger bias by current FTC leadership would undoubtedly prompt them to reject some, if not many, procompetitive transactions that would confer new benefits on consumers.
  3. Instituting a system of case-by-case assessment and approval of transactions is antithetical to the normal American reliance on free markets, featuring limited government intervention in market transactions based on specific statutory guidance. The proposed review system for Meta lacks statutory warrant and (as noted above) could promote arbitrary decision-making. As such, it seriously flouts the rule of law and threatens substantial economic harm (sadly consistent with other ill-considered initiatives by FTC Chair Khan, see here and here).

In sum, internet-based industries, and the big digital platforms, have thrived under a system of American technological freedom characterized as “permissionless innovation.” Under this system, the American people—consumers and producers—have been the winners.

The FTC’s efforts to micromanage future business decision-making by Meta, prompted by the challenge to a routine merger, would seriously harm welfare. To the extent that the FTC views such novel interventionism as a bureaucratic template applicable to other disfavored large companies, the American public would be the big-time loser.

[This post is an entry in Truth on the Market’s FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

The Federal Trade Commission’s (FTC) Aug. 22 Advance Notice of Proposed Rulemaking on Commercial Surveillance and Data Security (ANPRM) is breathtaking in its scope. For an overview summary, see this Aug. 11 FTC press release.

In their dissenting statements opposing ANPRM’s release, Commissioners Noah Phillips and Christine Wilson expertly lay bare the notice’s serious deficiencies. Phillips’ dissent stresses that the ANPRM illegitimately arrogates to the FTC legislative power that properly belongs to Congress:

[The [A]NPRM] recast[s] the Commission as a legislature, with virtually limitless rulemaking authority where personal data are concerned. It contemplates banning or regulating conduct the Commission has never once identified as unfair or deceptive. At the same time, the ANPR virtually ignores the privacy and security concerns that have animated our [FTC] enforcement regime for decades. … [As such, the ANPRM] is the first step in a plan to go beyond the Commission’s remit and outside its experience to issue rules that fundamentally alter the internet economy without a clear congressional mandate. That’s not “democratizing” the FTC or using all “the tools in the FTC’s toolbox.” It’s a naked power grab.

Wilson’s complementary dissent critically notes that the 2021 changes to FTC rules of practice governing consumer-protection rulemaking decrease opportunities for public input and vest significant authority solely with the FTC chair. She also echoed Phillips’ overarching concern with FTC overreach (footnote citations omitted):

Many practices discussed in this ANPRM are presented as clearly deceptive or unfair despite the fact that they stretch far beyond practices with which we are familiar, given our extensive law enforcement experience. Indeed, the ANPRM wanders far afield of areas for which we have clear evidence of a widespread pattern of unfair or deceptive practices. … [R]egulatory and enforcement overreach increasingly has drawn sharp criticism from courts. Recent Supreme Court decisions indicate FTC rulemaking overreach likely will not fare well when subjected to judicial review.

Phillips and Wilson’s warnings are fully warranted. The ANPRM contemplates a possible Magnuson-Moss rulemaking pursuant to Section 18 of the FTC Act,[1] which authorizes the commission to promulgate rules dealing with “unfair or deceptive acts or practices.” The questions that the ANPRM highlights center primarily on concerns of unfairness.[2] Any unfairness-related rulemaking provisions eventually adopted by the commission will have to satisfy a strict statutory cost-benefit test that defines “unfair” acts, found in Section 5(n) of the FTC Act. As explained below, the FTC will be hard-pressed to justify addressing most of the ANPRM’s concerns in Section 5(n) cost-benefit terms.

Discussion

The requirements imposed by Section 5(n) cost-benefit analysis

Section 5(n) codifies the meaning of unfair practices, and thereby constrains the FTC’s application of rulemakings covering such practices. Section 5(n) states:

The Commission shall have no authority … to declare unlawful an act or practice on the grounds that such an act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

In other words, a practice may be condemned as unfair only if it causes or is likely to cause “(1) substantial injury to consumers (2) which is not reasonably avoidable by consumers themselves and (3) not outweighed by countervailing benefits to consumers or to competition.”
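Purely to illustrate how conjunctive the statutory test is, the three prongs can be rendered as a short boolean sketch (a hypothetical illustration, not a legal instrument; in practice, each prong demands the fact-intensive showings discussed below):

```python
# Hypothetical rendering of the Section 5(n) test as a conjunction of
# its three statutory prongs. The inputs stand in for fact-intensive
# showings; this is an illustration, not a legal instrument.

def unfair_under_section_5n(substantial_injury: bool,
                            reasonably_avoidable: bool,
                            outweighed_by_benefits: bool) -> bool:
    """Unfairness requires all three prongs to cut against the practice;
    failure on any single prong defeats the finding."""
    return (substantial_injury
            and not reasonably_avoidable
            and not outweighed_by_benefits)

# A practice causing substantial, unavoidable injury that is nonetheless
# outweighed by benefits to consumers or competition is NOT unfair.
print(unfair_under_section_5n(True, False, True))  # False
```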

This is a demanding standard. (For scholarly analyses of the standard’s legal and economic implications authored by former top FTC officials, see here, here, and here.)

First, the FTC must demonstrate that a practice imposes a great deal of harm on consumers, which they could not readily have avoided. This requires detailed analysis of the actual effects of a particular practice, not mere theoretical musings about possible harms that may (or may not) flow from such practice. Actual effects analysis, of course, must be based on empiricism: consideration of hard facts.

Second, assuming that this formidable hurdle is overcome, the FTC must then acknowledge and weigh countervailing welfare benefits that might flow from such a practice. In addition to direct consumer-welfare benefits, other benefits include “benefits to competition.” Those may include business efficiencies that reduce a firm’s costs, because such efficiencies are a driver of vigorous competition and, thus, of long-term consumer welfare. As the Organisation for Economic Co-operation and Development has explained (see OECD Background Note on Efficiencies, 2012, at 14), dynamic and transactional business efficiencies are particularly important in driving welfare enhancement.

In sum, under Section 5(n), the FTC must show actual, fact-based, substantial harm to consumers that they could not have escaped, acting reasonably. The commission must also demonstrate that such harm is not outweighed by consumer and (procompetitive) business-efficiency benefits. What’s more, Section 5(n) makes clear that the FTC cannot “pull a rabbit out of a hat” and interject other “public policy” considerations as key factors in the rulemaking  calculus (“[s]uch [other] public policy considerations may not serve as a primary basis for … [a] determination [of unfairness]”).

It ineluctably follows as a matter of law that a Section 18 FTC rulemaking sounding in unfairness must be based on hard empirical cost-benefit assessments, which require data grubbing and detailed evidence-based economic analysis. Mere anecdotal stories of theoretical harm to some consumers that is alleged to have resulted from a practice in certain instances will not suffice.

As such, if an unfairness-based FTC rulemaking fails to adhere to the cost-benefit framework of Section 5(n), it inevitably will be struck down by the courts as beyond the FTC’s statutory authority. This conclusion is buttressed by the tenor of the Supreme Court’s unanimous 2021 opinion in AMG Capital v. FTC, which rejected the FTC’s claim that its statutory injunctive authority included the ability to obtain monetary relief for harmed consumers (see my discussion of this case here).

The ANPRM and Section 5(n)

Regrettably, the tone of the questions posed in the ANPRM indicates a lack of consideration for the constraints imposed by Section 5(n). Accordingly, any future rulemaking that sought to establish “remedies” for many of the theorized abuses found in the ANPRM would stand very little chance of being upheld in litigation.

The Aug. 11 FTC press release cited previously addresses several broad topical sources of harms: harms to consumers; harms to children; regulations; automated systems; discrimination; consumer consent; notice, transparency, and disclosure; remedies; and obsolescence. These categories are chock full of questions that imply the FTC may consider restrictions on business conduct that go far beyond the scope of the commission’s authority under Section 5(n). (The questions are notably silent about the potential consumer benefits and procompetitive efficiencies that may arise from the business practices here called into question.)

A few of the many questions set forth under just four of these topical listings (harms to consumers, harms to children, regulations, and discrimination) are highlighted below to provide a flavor of the statutory overreach that characterizes all aspects of the ANPRM. Many other examples could be cited. (Phillips’ dissenting statement provides a cogent and critical evaluation of ANPRM questions that embody such overreach.) Furthermore, although there is a short discussion of “costs and benefits” in the ANPRM press release, it is wholly inadequate to the task.

Under the category “harms to consumers,” the ANPRM press release focuses on harm from “lax data security or surveillance practices.” It asks whether FTC enforcement has “adequately addressed indirect pecuniary harms, including potential physical harms, psychological harms, reputational injuries, and unwanted intrusions.” The press release suggests that a rule might consider addressing harms to “different kinds of consumers (e.g., young people, workers, franchisees, small businesses, women, victims of stalking or domestic violence, racial minorities, the elderly) in different sectors (e.g., health, finance, employment) or in different segments or ‘stacks’ of the internet economy.”

These laundry lists invite, at best, anecdotal public responses alleging examples of perceived “harm” falling into the specified categories. Little or no light is likely to be shed on the measurement of such harm, nor on the potential beneficial effects to some consumers from the practices complained of (for example, better targeted ads benefiting certain consumers). As such, a sound Section 5(n) assessment would be infeasible.

Under “harms to children,” the press release suggests possibly extending the limitations of the FTC-administered Children’s Online Privacy Protection Act (COPPA) to older teenagers, thereby in effect rewriting COPPA and usurping the role of Congress (a clear statutory overreach). The press release also asks “[s]hould new rules set out clear limits on personalized advertising to children and teenagers irrespective of parental consent?” It is hard (if not impossible) to understand how this form of overreach, which would displace the supervisory rights of parents (thereby imposing impossible-to-measure harms on them), could be shoe-horned into a defensible Section 5(n) cost-benefit assessment.

Under “regulations,” the press release asks whether “new rules [should] require businesses to implement administrative, technical, and physical data security measures, including encryption techniques, to protect against risks to the security, confidentiality, or integrity of covered data?” Such new regulatory strictures (whose benefits to some consumers appear speculative) would interfere significantly in internal business processes. Specifically, they could substantially diminish the efficiency of business-security measures, diminish business incentives to innovate (for example, in encryption), and reduce dynamic competition among businesses.

Consumers also would be harmed by a related slowdown in innovation. Those costs undoubtedly would be high but hard, if not impossible, to measure. The FTC also asks whether a rule should limit “companies’ collection, use, and retention of consumer data.” This requirement, which would seemingly bypass consumers’ decisions to make their data available, would interfere with companies’ ability to use such data to improve business offerings and thereby enhance consumers’ experiences. Justifying new requirements such as these under Section 5(n) would be well-nigh impossible.

The category “discrimination” is especially problematic. In addressing “algorithmic discrimination,” the ANPRM press release asks whether the FTC should “consider new trade regulation rules that bar or somehow limit the deployment of any system that produces discrimination, irrespective of the data or processes on which those outcomes are based.” In addition, the press release asks “if the Commission [should] consider harms to other underserved groups that current law does not recognize as protected from discrimination (e.g., unhoused people or residents of rural communities)?”

The FTC cites no statutory warrant for the authority to combat such forms of “discrimination.” It is not a civil-rights agency. It clearly is not authorized to issue anti-discrimination rules dealing with “groups that current law does not recognize as protected from discrimination.” Any such rules, if issued, would be summarily struck down in no uncertain terms by the judiciary, even without regard to Section 5(n).

In addition, given the fact that “economic discrimination” often is efficient (and procompetitive) and may be beneficial to consumer welfare (see, for example, here), more limited economic anti-discrimination rules almost certainly would not pass muster under the Section 5(n) cost-benefit framework.     

Finally, while the ANPRM press release does contain a very short section entitled “costs and benefits,” that section lacks any specific reference to the required Section 5(n) evaluation framework. Phillips’ dissent points out that the ANPRM “simply fail[s] to provide the detail necessary for commenters to prepare constructive responses” on cost-benefit analysis, stressing that the broad nature of the requests for commenters’ views on costs and benefits renders the inquiry:

…not conducive to stakeholders submitting data and analysis that can be compared and considered in the context of a specific rule. … Without specific questions about [the costs and benefits of] business practices and potential regulations, the Commission cannot hope for tailored responses providing a full picture of particular practices.

In other words, the ANPRM does not provide the guidance needed to prompt the sorts of responses that might assist the FTC in carrying out an adequate Section 5(n) cost-benefit analysis.

Conclusion

The FTC would face almost certain defeat in court if it promulgated a broad rule addressing many of the perceived unfairness-based “ills” alluded to in the ANPRM. Moreover, although its requirements would (I believe) not come into effect, such a rule nevertheless would impose major economic costs on society.

Prior to final judicial resolution of its status, the rule would disincentivize businesses from engaging in a variety of data-related practices that enhance business efficiency and benefit many consumers. Furthermore, the FTC resources devoted to developing and defending the rule would not be applied to alternative welfare-enhancing FTC activities—a substantial opportunity cost.

The FTC should take heed of these realities and opt not to carry out a rulemaking based on the ANPRM. It should instead devote its scarce consumer-protection resources to prosecuting hard-core consumer fraud and deception—and, perhaps, to launching empirical studies into the economic-welfare effects of data security and commercial surveillance practices. Such studies, if carried out, should focus on dispassionate economic analysis and avoid policy preconceptions. (For example, studies involving digital platforms should take note of the existing economic literature, such as a paper indicating that digital platforms have generated enormous consumer-welfare benefits not accounted for in gross domestic product.)

One can only hope that a majority of FTC commissioners will apply common sense and realize that far-flung rulemaking exercises lacking in statutory support are bad for the rule of law, bad for the commission’s reputation, bad for the economy, and bad for American consumers.


[1] The FTC states specifically that it “is issuing this ANPR[M] pursuant to Section 18 of the Federal Trade Commission Act”.

[2] Deceptive practices that might be addressed in a Section 18 trade regulation rule would be subject to the “FTC Policy Statement on Deception,” which states that “the Commission will find deception if there is a representation, omission or practice that is likely to mislead the consumer acting reasonably in the circumstances, to the consumer’s detriment.” A court reviewing an FTC Section 18 rule focused on “deceptive acts or practices” undoubtedly would consult this Statement, although it is not clear, in light of recent jurisprudential trends, that the court would defer to the Statement’s analysis in rendering an opinion. In any event, questions of deception, which focus on acts or practices that mislead consumers, would in all likelihood have little relevance to the evaluation of any rule that might be promulgated in light of the ANPRM.    

[The following is a guest post from Philip Hanspach of the European University Institute.]

There is an emerging debate regarding whether complexity theory—which, among other things, draws lessons about uncertainty and non-linearity from the natural sciences—should make inroads into antitrust (see, e.g., Nicolas Petit and Thibault Schrepel, 2022). Of course, one might also say that antitrust is already quite late to the party. Since the 1990s, complexity theory has made inroads into numerous “hard” and social sciences, from geography and urban planning to cultural studies.

Depending on whom you ask, complexity theory is everything from a revolutionary paradigm to a lazy buzzword. What would it mean to apply it in the context of antitrust and would it, in fact, be useful?

Given its numerous applications, scholars have proposed several definitions of complexity theory, invoking different kinds of complexity. According to one, complexity theory is concerned with the study of complex adaptive systems (CAS)—that is, networks that consist of many diverse, interdependent parts. A CAS may adapt and change, for example, in response to past experience.

That does not sound too strange as a general description either of the economy as a whole or of markets in particular, with consumers, firms, and potential entrants among the numerous moving parts. At the same time, this approach contrasts with orthodox economic theory—specifically, with the game-theory models that rule antitrust debates and that prize simplicity and reductionism.
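To make the “adaptive” part of that definition concrete, consider the following toy sketch, in which agents on a ring repeatedly revise their pricing strategies by imitating a better-performing neighbor. The payoffs, population size, and imitation rule are all invented for illustration; this is a cartoon of a CAS, not a calibrated market model.

```python
import random

# Toy complex adaptive system: agents on a ring choose between two
# pricing strategies and adapt by imitating a better-performing
# neighbor. Payoffs and parameters are invented for illustration.

random.seed(1)
N = 20
strategies = [random.choice(["high", "low"]) for _ in range(N)]

# Invented payoff table: undercutting a high-price rival pays best.
PAYOFF = {("high", "high"): 3, ("high", "low"): 0,
          ("low", "high"): 4, ("low", "low"): 1}

for period in range(10):
    # Each agent plays against its clockwise neighbor.
    scores = [PAYOFF[(strategies[i], strategies[(i + 1) % N])]
              for i in range(N)]
    # Adaptation: copy the neighbor's strategy if it scored higher.
    strategies = [strategies[(i + 1) % N]
                  if scores[(i + 1) % N] > scores[i] else strategies[i]
                  for i in range(N)]
    print(f"period {period}: {strategies.count('low')} low-price agents")
```

Even in this tiny system, the aggregate outcome emerges from local adaptation rather than from any agent solving for an equilibrium, which is precisely the contrast with reductionist game-theory models drawn above.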

As both a competition economist and a history buff, my primary point of reference for complexity theory is a scholarly debate among Bronze Age scholars. Sound obscure? Bear with me.

The collapse of several flourishing Mediterranean civilizations in the 12th century B.C. (Mycenae and Egypt, to name only two) puzzles historians as much as the question of whether any particular merger will raise prices stumps today’s economists.[1] Both questions face difficulties in gathering sufficient data for empirical analysis (the lack of counterfactuals and foresight in one case, 3,000 years of decay in the other), forcing a recourse to theory and possibility results.

Earlier Bronze Age scholarship blamed the “Sea Peoples,” invaders of unknown origin (possibly Sicily or Sardinia), for the destruction of several thriving cities and states. The primary source for this thesis was statements attributed to the Egyptian pharaoh of the time. More recent research, while acknowledging the role of the Sea Peoples, has gone to lengths to point out that, in many cases, we simply don’t know. Alternative explanations (famine, disease, systems collapse) are individually unconvincing, but might each have contributed to the end of various Bronze Age civilizations.

Complexity theory was brought into this discussion with some caution. While acknowledging the theory’s potential usefulness, Eric Cline writes:

We may just be applying a scientific (or possibly pseudoscientific) term to a situation in which there is insufficient knowledge to draw firm conclusions. It sounds nice, but does it really advance our understanding? Is it more than just a fancy way to state a fairly obvious fact?

In a review of Cline’s book, archaeologist Guy D. Middleton agreed that the application of complexity theory might be “useful” but also “obvious.” Similarly, in the context of antitrust, I think complexity theory may serve as a useful framework to understand uncertainty in the marketplace.

Thinking of a market as a CAS can help to illustrate the uncertainty behind every decision. For example, a formal economic model with a clear (at least, to economists) equilibrium outcome might predict that a certain merger will give firms the incentive and ability to reduce spending on research and development. But the lens of complexity theory allows us to better understand why we might still be wrong, or why we are right, but for the wrong reasons.

We can accept that decisions that are relevant and observable to antitrust practitioners (such as price and production decisions) can be driven by things that are small and unobservable. For example, a manager who ultimately calls the shots on R&D budgets for an airplane manufacturer might go to a trade fair and become fascinated by a cool robot that a particular shipyard presented. This might have been the key push that prompted her to finance an unlikely robotics project proposed by her head engineer.

Her firm is, indeed, part of a complex system—one that includes the individual purchase decisions of consumers, customer feedback, reports from salespeople in the field, news from science and business journalists about the next big thing, and impressions at trade fairs and exhibitions. These all coalesce in the manager’s head and influence simple decisions about her R&D budget. But I have yet to see a merger-review decision that predicted effects on innovation from peeking into managers’ minds in such a way.

This little story might be a far-fetched example of the Butterfly Effect, perhaps the most familiar concept from complexity theory. Just as the flaps of a butterfly’s wings might cause a storm on the other side of the world, the shipyard’s earlier decision to invest in a robotic manufacturing technology resulted in our fictitious aircraft manufacturer’s decision to invest more in R&D than we might have predicted with our traditional tools.

Indeed, it is easy to think of other small events that can have consequences leading to price changes that are relevant in the antitrust arena. Remember the cargo ship Ever Given, which blocked the Suez Canal in March 2021? One reason mentioned for its distress was unusually strong winds (whether a butterfly was to blame, I don’t know) pushing the highly stacked containers like a sail. The disruption to supply chains was felt in various markets across Europe.

In my opinion, one benefit of admitting this complexity is that it can make ex post evaluation more common in antitrust. Indeed, some researchers are doing great work on this. Enforcers are understandably hesitant to admit that they might get it wrong sometimes, but I believe that we can acknowledge that we will not ultimately know whether merged firms will, say, invest more or less in innovation. Complexity theory tells us that, even if our best and most appropriate model is wrong, the world is not random. It is just very hard to understand and hinges on things that are neither straightforward to observe, nor easy to correctly gauge ex ante.

Turning back to the Bronze Age, scholars have an easier time observing that a certain city was destroyed and abandoned at some point in time than they do in correctly naming the culprit (the Sea Peoples, a rival power, an earthquake?). The appeal of complexity theory is not just that it lifts a scholar’s burden to name one or a few predominant explanations, but that it grants confidence that the outcome itself arose out of a complex system: the big and small effects that factors such as famine, trade, weather, and fortune may have had on the city’s ability to defend itself against attack, and the individual-but-interrelated decisions of a city’s citizens to stay or leave following a catastrophe.

Similarly, for antitrust experts, it is easier to observe a price increase following a merger than to correctly guess its reason. Where economists differ from archaeologists and classicists is that they don’t just study the past. They have to continue exploring the present and future. Imagine that an agency clears a merger that we would have expected not to harm competition, but it turns out, ex post, that it was a bad call. Complexity theory doesn’t just offer excuses for where reality diverged from our prediction. Instead, it can tell us whether our tools were deficient or whether we made an “honest mistake.” As investigations are always costly, it is up to the enforcer (or those setting their budget) to decide whether it makes sense to expand investigations to account for new, complex phenomena (reading the minds of R&D managers will probably remain out of the budget for the foreseeable future).

Finally, economists working on antitrust problems should not see this as belittling their role, but as a welcome frame for their work. Computing diversion ratios or modeling a complex market as a straightforward set of equations might still be the best we can do. A model that is right on average gets us closer to the right answer and is certainly preferred to having no clue what’s going on. Where we don’t have precedent to guide us, we have to resort to models that may be wrong, despite getting everything right that was under our control.

A few things that Petit and Schrepel call for are comfortably established in the economist’s toolkit. They might not, however, always be put to use where they should be. Notably, there are feedback loops in dynamic models. Even in static models, it is possible to show how a change in one variable has direct and indirect (second-order) effects on an outcome (see the decomposition sketched below). The typical merger investigation is concerned with short-term effects, perhaps those materializing over the three to five years following a merger. These short-term effects may be relatively easy to approximate in a simple model. Granted, Petit and Schrepel’s article adopts a wide understanding of antitrust—including pro-competitive market regulation—but this seems like an important caveat, nonetheless.
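To make the direct/indirect distinction concrete, here is the standard comparative-statics decomposition (generic textbook math, not specific to any merger model). If an outcome y = f(x, z) depends on a variable x both directly and through a second variable z(x), then:

$$\frac{dy}{dx} = \underbrace{\frac{\partial f}{\partial x}}_{\text{direct effect}} + \underbrace{\frac{\partial f}{\partial z}\,\frac{dz}{dx}}_{\text{indirect effect}}$$

A model that reports only the first term can miss economically relevant feedback running through z, even in a fully static setting.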

In conclusion, complexity theory is something economists and lawyers who study markets should learn more about. It’s a fascinating research paradigm and a framework in which one can make sense of small and large causes having sometimes unpredictable effects. For antitrust practitioners, it can advance our understanding of why our predictions can fail when the tools and approaches that we use are limited. My hope is that understanding complexity will increase openness to ex post evaluation and help calibrate expectations about antitrust enforcement (and its limits). At the same time, it is still an (economic) question of costs and benefits as to whether further complications in an antitrust investigation are worth it.


[1] A fascinating introduction that balances approachability and source work is YouTube’s Extra History series on the Bronze Age collapse.

A recent viral video captures a prevailing sentiment in certain corners of social media, and among some competition scholars, about how mergers supposedly work in the real world: firms start competing on price, one firm loses out, that firm agrees to sell itself to the other firm and, finally, prices are jacked up. (Warning: Keep the video muted. The voice-over is painful.)

The story ends there. In this narrative, the combination offers no possible cost savings. The owner of the firm who sold doesn’t start a new firm and begin competing tomorrow, nor does anyone else. The story ends with customers getting screwed.

And in this telling, it’s not just horizontal mergers that look like the one in the viral egg video. It is becoming a common theory of harm regarding nonhorizontal acquisitions that they are, in fact, horizontal acquisitions in disguise. The acquired party may possibly, potentially, with some probability, in the future, become a horizontal competitor. And of course, the story goes, all horizontal mergers are anticompetitive.

Therefore, we should have the same skepticism toward all mergers, regardless of whether they are horizontal or vertical. Steve Salop has argued that a problem with the Federal Trade Commission’s (FTC) 2020 vertical merger guidelines is that they failed to adopt anticompetitive presumptions.

This perspective is not just a meme on Twitter. The FTC and U.S. Justice Department (DOJ) are currently revising their guidelines for merger enforcement and have issued a request for information (RFI). The working presumption in the RFI (and we can guess this will show up in the final guidelines) is exactly the takeaway from the video: Mergers are bad. Full stop.

The RFI repeatedly requests information that would support the conclusion that the agencies should strengthen merger enforcement, rather than information that might point toward either stronger or weaker enforcement. For example, the RFI asks:

What changes in standards or approaches would appropriately strengthen enforcement against mergers that eliminate a potential competitor?

This framing presupposes that enforcement should be strengthened against mergers that eliminate a potential competitor.

Do Monopoly Profits Always Exceed Joint Duopoly Profits?

Should we assume that enforcement, including vertical enforcement, needs to be strengthened? In a world with lots of uncertainty about which products and companies will succeed, why would an incumbent buy out every potential competitor? The basic idea is that, since industry profits are highest under monopoly, the incumbent will always have an incentive to buy out any competitors.

The punchline for this anti-merger presumption is “monopoly profits exceed duopoly profits.” The argument is laid out most completely by Salop, although the argument is not unique to him. As Salop points out:

I do not think that any of the analysis in the article is new. I expect that all the points have been made elsewhere by others and myself.

Under the model that Salop puts forward, there should, in fact, be a presumption against any acquisition, not just horizontal acquisitions. He argues that:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

We see a presumption against mergers in the recent FTC challenge of Meta’s purchase of Within. While Meta owns Oculus, a virtual-reality headset, and Within owns virtual-reality fitness apps, the FTC challenged the acquisition on grounds that:

The Acquisition would cause anticompetitive effects by eliminating potential competition from Meta in the relevant market for VR dedicated fitness apps.

Given the prevalence of this perspective, it is important to examine the basic model’s assumptions. In particular, is it always true that—since monopoly profits exceed duopoly profits—incumbents have an incentive to eliminate potential competition for anticompetitive reasons?

I will argue no. The notion that monopoly profits exceed joint-duopoly profits rests on two key assumptions that hinder the simple application of the “merge to monopoly” model to antitrust.

First, even in a simple model, it is not always true that monopolists have both the ability and incentive to eliminate any potential entrant, simply because monopoly profits exceed duopoly profits.

For the simplest complication, suppose there are two possible entrants, rather than the common assumption of just one entrant at a time. The monopolist must now pay each of the entrants enough to prevent entry. But how much? If the incumbent has already paid one potential entrant not to enter, the second could then enter the market as a duopolist, rather than as one of three oligopolists. Therefore, the incumbent must pay the second entrant an amount sufficient to compensate a duopolist, not their share of a three-firm oligopoly profit. The same is true for buying the first entrant. To remain a monopolist, the incumbent would have to pay each possible competitor duopoly profits.

Because monopoly profits exceed duopoly profits, it is profitable to pay a single entrant its duopoly profit (half of the joint duopoly profits) to prevent entry. It is not, however, necessarily profitable for the incumbent to pay that amount to every potential entrant in order to keep all of them out.

Now go back to the video. Suppose two passersby, who also happen to have chickens at home, notice that they can sell their eggs. The best part? They don’t have to sit around all day; the lady on the right will buy them. The next day, perhaps, two new egg sellers arrive.

For a simple example, consider a Cournot oligopoly model with an inverse demand curve of P(Q)=1-Q and constant marginal costs that are normalized to zero. In a market with N symmetric sellers, each seller earns 1/((N+1)^2) in profits. A monopolist makes a profit of 1/4. A duopolist can expect to earn a profit of 1/9. If there are three potential entrants, plus the incumbent, the monopolist must pay each of them the duopoly profit of 1/9, for a total of 3 × 1/9 = 1/3, which exceeds the monopoly profit of 1/4.
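As a quick check on that arithmetic, here is a minimal sketch in Python (the function and loop are my own illustration of the example above, not anyone’s published code):

```python
def cournot_profit(n_firms: int) -> float:
    """Per-firm Cournot profit with P(Q) = 1 - Q and zero marginal cost."""
    return 1 / (n_firms + 1) ** 2

monopoly_profit = cournot_profit(1)  # 1/4
duopoly_profit = cournot_profit(2)   # 1/9 per firm

# To stay a monopolist, the incumbent must pay each potential entrant
# its duopoly profit (what that entrant would earn by entering alone
# after all the others have been bought off).
for n_entrants in range(1, 5):
    total_buyout = n_entrants * duopoly_profit
    verdict = "worth buying out" if total_buyout < monopoly_profit else "too costly"
    print(f"{n_entrants} entrant(s): cost {total_buyout:.3f} "
          f"vs. monopoly profit {monopoly_profit:.3f} -> {verdict}")
```

With one or two potential entrants, the buyout is profitable; at three, the total cost (1/3) exceeds the monopoly profit (1/4), and the incumbent gives up on buying its way to monopoly.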

In the Nash/Cournot equilibrium, the incumbent will not acquire any of the competitors, since it is too costly to keep them all out. With enough potential entrants, the monopolist in any market will not want to buy any of them out. In that case, the outcome involves no acquisitions.

If we observe an acquisition in a market with many potential entrants (which any given market may or may not have), it cannot be that the merger is solely about obtaining monopoly profits, since the model above shows that the incumbent has no incentive to buy them all out.

If our model captures the dynamics of the market (which it may or may not, depending on a given case’s circumstances) but we observe mergers, there must be another reason for the deal besides maintaining a monopoly. The presence of multiple potential entrants overturns the antitrust implications of the truism that monopoly profits exceed duopoly profits. The question instead becomes an empirical one about the merger and market in question: would it be profitable to acquire all potential entrants?

The second simplifying assumption that restricts the applicability of Salop’s baseline model is that the incumbent has the lowest cost of production. He rules out the possibility of lower-cost entrants in Footnote 2:

Monopoly profits are not always higher. The entrant may have much lower costs or a better or highly differentiated product. But higher monopoly profits are more usually the case.

If one allows the possibility that an entrant may have lower costs (even if those lower costs won’t be achieved until the future, when the entrant gets to scale), it does not follow that monopoly profits (under the current higher-cost monopolist) necessarily exceed duopoly profits (with a lower-cost producer involved).

One cannot simply assume that all firms have the same costs or that the incumbent is always the lowest-cost producer. This is not just a modeling choice but has implications for how we think about mergers. As Geoffrey Manne, Sam Bowman, and Dirk Auer have argued:

Although it is convenient in theoretical modeling to assume that similarly situated firms have equivalent capacities to realize profits, in reality firms vary greatly in their capabilities, and their investment and other business decisions are dependent on the firm’s managers’ expectations about their idiosyncratic abilities to recognize profit opportunities and take advantage of them—in short, they rest on the firm managers’ ability to be entrepreneurial.

Given the assumptions that all firms have identical costs and there is only one potential entrant, Salop’s framework would find that all possible mergers are anticompetitive and that there are no possible efficiency gains from any merger. That’s the thrust of the video. We assume that the whole story is two identical-seeming women selling eggs. Since the acquired firm cannot, by assumption, have lower costs of production, it cannot improve on the incumbent’s costs of production.

Many Reasons for Mergers

But whether a merger is efficiency-reducing and bad for competition and consumers needs to be proven, not just assumed.

If we take the basic acquisition model literally, every industry would have just one firm. Every incumbent would acquire every possible competitor, no matter how small. After all, monopoly profits are higher than duopoly profits, and so the incumbent both wants to and can preserve its monopoly profits. The model gives us no way to determine where mergers would stop without antitrust enforcement.

Under this assumption, mergers do not affect the production side of the economy; they exist solely to gain the market power to manipulate prices. Since the model finds no downside for the incumbent in acquiring a competitor, it would naturally acquire every last potential competitor, no matter how small, unless prevented by law.

Once we allow for the possibility that firms differ in productivity, however, it is no longer true that monopoly profits are greater than industry duopoly profits. We can see this most clearly in situations where there is “competition for the market” and the market is winner-take-all. If the entrant to such a market has lower costs, the profit under entry (when one firm wins the whole market) can be greater than the original monopoly profits. In such cases, monopoly maintenance alone cannot explain an entrant’s decision to sell.
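A small variation on the earlier Cournot example illustrates this (hypothetical numbers of my own choosing): keep the inverse demand curve P(Q) = 1 - Q, give the incumbent a marginal cost of 0.3, give the entrant a marginal cost of zero, and suppose whoever holds the market serves it alone.

```python
def monopoly_profit(mc: float) -> float:
    """Monopoly profit under P(Q) = 1 - Q with constant marginal cost mc."""
    q = (1 - mc) / 2          # profit-maximizing quantity
    return (1 - q - mc) * q   # equals (1 - mc)**2 / 4

incumbent = monopoly_profit(0.3)  # ~0.1225
entrant = monopoly_profit(0.0)    # 0.2500

print(f"incumbent's monopoly profit: {incumbent:.4f}")
print(f"entrant's prize from winning the market: {entrant:.4f}")
assert entrant > incumbent  # no affordable buyout keeps this entrant out
```

The lower-cost entrant expects more from winning the market than the incumbent currently earns, so there is no payment the incumbent can profitably offer to keep it out; if a sale nonetheless happens, something other than monopoly maintenance must be driving it.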

An acquisition could therefore be both procompetitive and increase consumer welfare. For example, the acquisition could allow the lower-cost entrant to get to scale quicker. The acquisition of Instagram by Facebook, for example, brought the photo-editing technology that Instagram had developed to a much larger market of Facebook users and provided a powerful monetization mechanism that was otherwise unavailable to Instagram.

In short, the notion that incumbents can systematically and profitably maintain their market position by acquiring potential competitors rests on assumptions that, in practice, will regularly fail to hold. It is thus improper to assume that most of these acquisitions reflect efforts by an incumbent to anticompetitively maintain its market position.