Archives For Financial Regulation

Twitter has seen a lot of ups and downs since Elon Musk closed on his acquisition of the company in late October and almost immediately set about his initiatives to “reform” the platform’s operations.

One of the stories that has gotten somewhat lost in the ensuing chaos is that, in the short time under Musk, Twitter has made significant inroads—on at least some margins—against the visibility of child sexual abuse material (CSAM) by removing major hashtags that were used to share it, creating a direct reporting option, and removing major purveyors. On the other hand, due to the large reductions in Twitter’s workforce—both voluntary and involuntary—there are now very few human reviewers left to deal with the issue.

Section 230 immunity currently protects online intermediaries from most civil suits for CSAM (a narrow carveout is made under Section 1595 of the Trafficking Victims Protection Act). While the federal government could bring criminal charges if it believes online intermediaries are violating federal CSAM laws, and certain narrow state criminal claims could be brought consistent with federal law, private litigants are largely left without the ability to find redress on their own in the courts.

This, among other reasons, is why there has been a push to amend Section 230 immunity. The proposal we put forward with co-author Geoffrey Manne suggests that online intermediaries should owe a reasonable duty of care to remove illegal content. But this still requires thinking carefully about what a reasonable duty of care entails.

For instance, one of the big splash moves made by Twitter after Musk’s acquisition was to remove major CSAM distribution hashtags. While this did limit visibility of CSAM for a time, some experts say it doesn’t really solve the problem, as new hashtags will arise. So, would a reasonableness standard require the periodic removal of major hashtags? Perhaps it would. It appears to have been a relatively low-cost way to reduce access to such material, and could theoretically be incorporated into a larger program that uses automated discovery to find and remove future hashtags.

Of course it won’t be perfect, and will be subject to something of a Whac-A-Mole dynamic. But the relevant question isn’t whether it’s a perfect solution, but whether it yields significant benefit relative to its cost, such that it should be regarded as a legally reasonable measure that platforms should broadly implement.

On the flip side, Twitter has lost such a large amount of its workforce that it potentially no longer has enough staff to do the important review of CSAM. As long as Twitter allows adult nudity, and algorithms are unable to effectively distinguish between different types of nudity, human reviewers remain essential. A reasonableness standard might also require sufficient staff and funding dedicated to reviewing posts for CSAM. 

But what does it mean for a platform to behave “reasonably”?

Platforms Should Behave ‘Reasonably’

Rethinking platforms’ safe harbor from liability as governed by a “reasonableness” standard offers a way to more effectively navigate the complexities of these tradeoffs without resorting to the binary of immunity or total liability that typically characterizes discussions of Section 230 reform.

It could be the case that, given the reality that machines can’t distinguish between “good” and “bad” nudity, it is patently unreasonable for an open platform to allow any nudity at all if it is run with the level of staffing that Musk seems to prefer for Twitter.

Consider the situation that MindGeek faced a couple of years ago. It was pressured by financial providers, including PayPal and Visa, to clean up the CSAM and nonconsensual pornography that appeared on its websites. In response, the company removed more than 80% of suspected illicit content and required greater authentication for posting.

Notwithstanding efforts to clean up the service, a lawsuit was filed against MindGeek and Visa by victims who asserted that the credit-card company was a knowing conspirator for processing payments to MindGeek’s sites when they were purveying child pornography. Notably, Section 230 issues were dismissed early on in the case, but the remaining claims—rooted in the Racketeer Influenced and Corrupt Organizations Act (RICO) and the Trafficking Victims Protection Act (TVPA)—contained elements that support evaluating the conduct of online intermediaries, including payment providers who support online services, through a reasonableness lens.

In our amicus, we stressed the broader policy implications of failing to appropriately demarcate the bounds of liability. In short, we argued that deterrence is best encouraged by placing responsibility for control on the party best positioned to monitor the situation—i.e., MindGeek, not Visa. Underlying this, we believe that an appropriately tuned reasonableness standard should be able to foreclose these sorts of inquiries at early stages of litigation if there is good evidence that an intermediary behaved reasonably under the circumstances.

In this case, we believed the court should have taken seriously the fact that a payment processor needs to balance a number of competing demands—legal, economic, and moral—in a way that enables it to serve its necessary prosocial role. Here, Visa had to balance its role, on the one hand, as a neutral intermediary responsible for handling millions of daily transactions with its interest, on the other, in ensuring that it did not facilitate illegal behavior. But it also was operating, essentially, under a veil of ignorance: all of the information it had was derived from news reports, as it was not directly involved in, nor did it have special insight into, the operation of MindGeek’s businesses.

As we stressed in our intermediary-liability paper, there is indeed a valid concern that changes to intermediary-liability policy not invite a flood of ruinous litigation. Instead, there needs to be some ability to determine at the early stages of litigation whether a defendant behaved reasonably under the circumstances. In the MindGeek case, we believed that Visa did.

In essence, much of this approach to intermediary liability boils down to finding socially and economically efficient dividing lines that can broadly demarcate when liability should attach. For example, if Visa is liable as a co-conspirator in MindGeek’s allegedly illegal enterprise for providing a payment network that MindGeek uses by virtue of its relationship with yet other intermediaries (i.e., the banks that actually accept and process the credit-card payments), why isn’t the U.S. Post Office also liable for providing package-delivery services that allow MindGeek to operate? Or its maintenance contractor for cleaning and maintaining its offices?

Twitter implicitly engaged in this sort of analysis when it considered becoming an OnlyFans competitor. Despite having considerable resources—both algorithmic and human—Twitter’s internal team determined they could not “accurately detect child sexual exploitation and non-consensual nudity at scale.” As a result, they abandoned the project. Similarly, Tumblr tried to make many changes, including taking down CSAM hashtags, before finally giving up and removing all pornographic material in order to remain in the App Store for iOS. At root, these firms demonstrated the ability to weigh costs and benefits in ways entirely consistent with a reasonableness analysis. 

Thinking about the MindGeek situation again, it could also be the case that MindGeek did not behave reasonably. Some of MindGeek’s sites encouraged the upload of user-generated pornography. If MindGeek experienced the same limitations in detecting “good” and “bad” pornography (which is likely), it could be that the company behaved recklessly for many years, and only tightened its verification procedures once it was caught. If true, that is behavior that should not be protected by the law with a liability shield, as it is patently unreasonable.

Apple is sometimes derided as an unfair gatekeeper of speech through its App Store. But, ironically, Apple itself has made complex tradeoffs between data security and privacy—through the use of encryption, on the one hand, and scanning devices for CSAM, on the other. Prioritizing encryption over scanning devices (especially photos and messages) for CSAM is a choice that could allow more CSAM to proliferate. But the choice is, again, a difficult one: how much moderation is needed, and how do you balance such costs against other values important to users, such as privacy for the vast majority of nonoffending users?

As always, these issues are complex and involve tradeoffs. But it is obvious that more can and needs to be done by online intermediaries to remove CSAM.

But What Is ‘Reasonable’? And How Do We Get There?

The million-dollar legal question is what counts as “reasonable”? We are well aware that, particularly for online platforms that serve millions of users a day, there is a great deal of surface area exposed to litigation over potentially illicit user-generated content. Thus, it is not the case, at least for the foreseeable future, that we need to throw open the gates of a full-blown common-law process to determine questions of intermediary liability. What is needed, instead, is a phased-in approach that gets courts in the business of parsing these hard questions and building up a body of principles that, on the one hand, encourages platforms to do more to control illicit content on their services and, on the other, discourages unmeritorious lawsuits by the plaintiffs’ bar.

One of our proposals for Section 230 reform is for a multistakeholder body, overseen by an expert agency like the Federal Trade Commission or National Institute of Standards and Technology, to create certified moderation policies. This would involve online intermediaries working together with a convening federal expert agency to develop a set of best practices for removing CSAM, including thinking through the cost-benefit analysis of more moderation—human or algorithmic—or even wholesale removal of nudity and pornographic content.

Compliance with these standards should, in most cases, operate to foreclose litigation against online service providers at an early stage. If such best practices are followed, a defendant could point to its moderation policies as a “certified answer” to any complaint alleging a cause of action arising out of user-generated content. Compliant practices will merit dismissal of the case, effecting a safe harbor similar to the one currently in place in Section 230.

In litigation, after a defendant answers a complaint with its certified moderation policies, the burden would shift to the plaintiff to adduce sufficient evidence to show that the certified standards were not actually adhered to. Such evidence should be more than mere res ipsa loquitur; it must be sufficient to demonstrate that the online service provider should have been aware of a harm or potential harm, that it had the opportunity to cure or prevent it, and that it failed to do so. Such a claim would need to meet a heightened pleading requirement, as for fraud, requiring particularity. And, periodically, the body overseeing the development of this process would incorporate changes to the best practices standards based on the cases being brought in front of courts.

Online service providers don’t need to be perfect in their content-moderation decisions, but they should behave reasonably. A properly designed duty-of-care standard should be flexible and account for a platform’s scale, the nature and size of its user base, and the costs of compliance, among other considerations. What is appropriate for YouTube, Facebook, or Twitter may not be the same as what’s appropriate for a startup social-media site, a web-infrastructure provider, or an e-commerce platform.

Indeed, this sort of flexibility is a benefit of adopting a “reasonableness” standard, such as is found in common-law negligence. Allowing courts to apply the flexible common-law duty of reasonable care would also enable jurisprudence to evolve with the changing nature of online intermediaries, the problems they pose, and the moderating technologies that become available.

Conclusion

Twitter and other online intermediaries continue to struggle with the best approach to removing CSAM, nonconsensual pornography, and a whole host of other illicit content. There are no easy answers, but there are strong ethical reasons, as well as legal and market pressures, to do more. Section 230 reform is just one part of a complete regulatory framework, but it is an important part of getting intermediary liability incentives right. A reasonableness approach that would hold online platforms accountable in a cost-beneficial way is likely to be a key part of a positive reform agenda for Section 230.

For many observers, the collapse of the crypto exchange FTX understandably raises questions about the future of the crypto economy, or even of public blockchains as a technology. The topic is high on the agenda of the U.S. Congress this week, with the House Financial Services Committee set for a Dec. 13 hearing with FTX CEO John J. Ray III and founder and former CEO Sam Bankman-Fried, followed by a Dec. 14 hearing of the Senate Banking Committee on “Crypto Crash: Why the FTX Bubble Burst and the Harm to Consumers.”

To some extent, the significance of the FTX case is likely to be exaggerated due to the outsized media attention that Bankman-Fried was able to generate. Nevertheless, many retail and institutional cryptocurrency holders were harmed by FTX and thus both users and policymakers will likely respond to what happened. In this post, I will contrast three perspectives on what may and should happen next for crypto.

‘Centralization Caused the FTX Fiasco’

The first perspective—likely the prevailing view in the crypto community—is that the FTX collapse was a failure of a centralized service, which should be emphatically distinguished from “true” or “crypto-native” decentralized services. The distinction between centralized and decentralized services is sharper in theory than in practice; decentralization is better seen as a spectrum than as a simple binary. There is, however, little doubt that crypto-asset exchanges like FTX, which predominantly operate “off-chain” (i.e., on their own servers, not on a public blockchain network), are the paradigmatic case of centralization in the crypto space. They are thus not “decentralized finance” (DeFi), even though much of DeFi today does rely on centralized services—e.g., for price discovery.

As Vivek Ramaswamy and Mark Lurie argued in their Wall Street Journal op-ed, the key feature of a centralized exchange (a “CEX”) “is that somebody (…) takes custody of user funds.” Even when custody is subject to government regulation—as in traditional stock exchanges—custody creates a risk that funds will be misappropriated or otherwise lost by the custodian, as reportedly happened at FTX.

By contrast, no single actor takes custody of customer funds on a decentralized exchange (DEX); these function as smart contracts—self-executing code run on a blockchain like Ethereum. DEX users do, however, face other risks, such as hacks, market manipulation, bugs in code, and situations that combine features of all three. Some of these risks are also present in traditional stock exchanges, but as crypto insiders recognize (see below), the scale and unpredictability of risks like bugs in smart contracts are potentially significant. Still, as Ramaswamy and Lurie observe, the largest DeFi protocols like “MakerDAO, Compound and Clipper hold more than $15 billion, and their user funds have never been hacked.”
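To make the “self-executing code” point concrete, here is a minimal Python sketch of the constant-product pricing rule used by many Uniswap-style DEX protocols. This is an illustration of the mechanism only—the function name and numbers are invented, and real DEXs run this logic as an on-chain contract, not Python:

```python
# Sketch of a constant-product automated market maker (AMM).
# The pool enforces the invariant reserve_in * reserve_out = k:
# no operator approves or prices the trade; the rule alone does.

def swap(reserve_in: float, reserve_out: float, amount_in: float) -> float:
    """Return the output amount a trader receives from the pool."""
    k = reserve_in * reserve_out          # invariant before the trade
    new_reserve_in = reserve_in + amount_in
    new_reserve_out = k / new_reserve_in  # invariant preserved after
    return reserve_out - new_reserve_out

# A trader sends 10 units into a 100/100 pool and receives about 9.09 out.
out = swap(100.0, 100.0, 10.0)
print(round(out, 2))
```

Because the price is a deterministic function of the pool’s reserves, there is no moment at which a custodian holds user funds at its discretion—which is precisely the property the text contrasts with a CEX.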

Aside from the lack of custody, DeFi also offers public transparency in two key respects: transparency of the self-executing code powering the DEX and transparency of completed transactions. In contrast, part of what enabled the FTX debacle is that external observers were unable to monitor the financial situation of the centralized exchange. The solution commonly put forward for CEX services on the blockchain—proof of reserves—may not match the transparency that DEX services can offer. Even if a proof-of-reserves requirement provided a reliable, real-time view of an exchange’s assets, it is unlikely to be able to do so for its liabilities. Because it is a business, a CEX may always incur liabilities that are not visible—or not easily visible—on the blockchain, such as a liability to pay damages.
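A common proof-of-reserves design commits to customer balances with a Merkle tree, so each user can check that their balance was counted. The sketch below (with invented names and balances) shows why this helps with assets but not liabilities: only what is hashed into the tree is attested, and an off-chain debt never appears in it:

```python
# Hedged sketch of Merkle-tree proof of reserves: the exchange
# publishes one root hash committing to all customer balances.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of leaf hashes up to a single root hash."""
    level = leaves
    while len(level) > 1:
        if len(level) % 2:                # duplicate last node if odd
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

balances = {"alice": 50, "bob": 30, "carol": 20}   # illustrative only
leaves = [h(f"{user}:{amount}".encode())
          for user, amount in sorted(balances.items())]
root = merkle_root(leaves)

# The root commits to 100 units of customer assets...
print(sum(balances.values()))  # 100
# ...but a court judgment or an off-chain loan is simply absent from
# the tree, so the proof says nothing about net solvency.
```

This is the structural gap the text identifies: the commitment scheme can be cryptographically sound and still leave the exchange’s liabilities unverifiable.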

Some have proposed that a CEX could establish trust by offering to each user legally binding “proof of insurance” from a reputable insurer. But this simply moves the locus of trust to the insurer, which may or may not be acceptable to users, depending on the circumstances.

‘The Ecosystem Needs Time to Mature Before We Get Even More Attention’

As a critique of today’s centralized crypto services, the first perspective is persuasive. The implication that decentralized solutions offer a fully ready alternative has been called into question, however, both within the crypto space and from the outside. One internal voice of caution has been Ethereum founder Vitalik Buterin, one of crypto’s key thought leaders. Writing shortly before the FTX collapse, Buterin said:

… I don’t think we should be enthusiastically pursuing large institutional capital at full speed. I’m actually kinda happy a lot of the ETFs are getting delayed. The ecosystem needs time to mature before we get even more attention.

He added:

… regulation that leaves the crypto space free to act internally but makes it harder for crypto projects to reach the mainstream is much less bad than regulation that intrudes on how crypto works internally.

Following the FTX collapse, Buterin elaborated on the risks he sees for decentralized crypto services, singling out vulnerabilities in smart-contract code as a major concern.

Buterin’s vision is one of a de facto regulatory sandbox, allowing experimentation and technological development, but combined with restrictions on the expanding integration of crypto with the broader economy.

Centralization Will Stay, but with Heavier Regulation

It is even more understandable that observers who come from traditional finance have reservations about the potential of decentralized services to replace the centralized ones, at least in the near term. One example is JPMorgan’s recent research report. The report predicts that institutional crypto custodians, not DeFi, will benefit the most from FTX’s collapse. According to JPMorgan, this will happen due to, among other factors:

  • Regulatory pressure to unbundle various roles in crypto-finance, such as brokerage-trading, lending, clearing, and custody. The argument is that—by combining trading, clearing, and settlement—DeFi solutions operate more efficiently than centralized services and will thus “face greater scrutiny.”
  • DeFi services being unattractive to large institutional investors because of lower transaction speeds and the public nature of blockchain transactions, which runs counter to their preference for keeping trading history and strategies private.

The report listed several other concerns, including smart-contract risks (which Buterin also singled out) and front-running of trades (part of the wider “MEV” extraction phenomenon), which may lead to worse execution prices for a trader.

Those concerns do refer to real issues in DeFi although, as the report notes, there are solutions to address them under active development. But it is also important, when comparing the current state of DeFi to custodial finance, to assess the relative benefits of the latter realistically. For example, the risk of market manipulation in DeFi needs to be contrasted with how opaque custodial services are, creating opportunities for rent extraction at customer expense.

JPMorgan stressed that the likely reaction to the FTX collapse will be increased pressure for heavier regulation of custody of customer funds, transparency requirements and, as noted earlier, unbundling of various roles in crypto-finance. The report’s prediction that, in doing so, policymakers will not be inclined to distinguish between centralized and decentralized services may be accurate, but that would be an unfortunate and unwarranted outcome.

The risks that centralized services pose—due to their lack of transparency and their taking custody of customer funds—do not translate straightforwardly to decentralized services. Regarding unbundling, it should be noted that a key reason for this regulatory solution is to prevent conflicts of interests. But a DEX that operates autonomously according to publicly shared logic (open source code) does not pose the same conflict-of-interest risks that a CEX faces. Decentralized services do face risks and there may be good reasons to seek policy responses to those risks. But the unique features of decentralized services should be appropriately accommodated. Nevertheless, it is admittedly a challenging task, partially due to the difficulty of defining decentralization in the law.

Conclusion

The collapse of FTX was a failure of a centralized model of crypto-asset services. This does not mean that centralized services do not have a future, but more work will need to be done to build stakeholder trust. Moreover, the FTX affair clearly increased the pressure for additional regulation of centralized services, although it is unclear whether it will prompt certain specific regulatory responses.

Just before the FTX collapse, the EU had nearly finalized its Markets in Crypto-Assets (“MiCA”) Regulation that was intended to regulate centralized “crypto-assets service providers.” There is an argument to be made that MiCA might have stopped a situation like that at FTX, but—given the vague general language used in MiCA—whether this would happen in future cases depends chiefly on how regulators implement prudential oversight.

Given the well-known cases of sophisticated regulators failing to prevent harm—e.g., in MF Global and Wirecard—the mere existence of prudential oversight may be insufficient to ground trust in centralized services. Thus, JPMorgan’s thesis that centralized services will benefit from the FTX affair lacks sufficient justification. Perhaps, even without the involvement of regulators, centralized providers will develop mechanisms for reliable transparency—such as “proof of reserves”—although there is a significant risk here of mere “transparency theatre.”

As to decentralized crypto services, the FTX collapse may be a chance for broader adoption, but Buterin’s words of caution should not be dismissed. JPMorgan may also be right to suggest that policymakers will not be inclined to distinguish between centralized and decentralized services and that the pressure for increased regulation will spill over to DeFi. As I noted earlier, however, policymakers would do well to be attentive to the relevant differences. For example, centralized services pose risks due to lack of transparency and their control of customer funds—two significant risks that do not necessarily apply to decentralized services. Hence, unbundling of the kind that could be beneficial for centralized services may bring little of value to a DEX, while sacrificing some of the core benefits of decentralized solutions.

In late August, Roberto Campos Neto, the head of Brazil’s central bank, is reported to have said about Pix, the bank’s two-year-old real-time-payments (RTP) system, that it “eliminates the need to have a credit card. I think that credit cards will cease to exist at some point soon.” Wow! Sounds amazing. A new system that does everything a credit card can do, but better.

As the old saying goes, however, something that sounds too good to be true probably isn’t. While Pix has some advantages, it also has many disadvantages. In particular, it lacks many of the features currently offered by credit cards, such as liability caps, fraud prevention, and—perhaps crucially—access to credit. So, it seems unlikely to replace credit cards any time soon.

Pix and the Unbanked

When Brazil’s central bank launched Pix in November 2020, evangelists at the bank hoped it would offer a low-cost alternative to existing payments and would entice some of the country’s tens of millions of unbanked and underbanked adults into the banking system. While Pix has, indeed, attracted many users, it has done little, if anything, to solve the problem of the unbanked.

Proponents of Pix asserted that the RTP system would dramatically reduce the number of unbanked individuals in Brazil. While it is true that many Brazilians who were previously unbanked do now have Pix accounts, it would be incorrect to conclude that Pix was the reason they ceased to be unbanked.

A study by Americas Market Intelligence (commissioned by Mastercard) found that, during the COVID-19 pandemic, “Brazil reduced its unbanked population by an astounding 73%.” But the study was based on research conducted between June and August 2020 and was published in October 2020, the month before Pix launched. It described the implementation of state and federal programs launched in Brazil in response to the pandemic:

  • The “Coronavoucher” program distributed emergency funds to low-income informal workers exclusively via state-owned bank Caixa Econômica Federal (CEF). Applications for funds could only be made via CEF’s Caixa Tem smartphone app, and funds were distributed via the same app. As of Aug. 5, 2020, 66 million people had received Coronavouchers via the Caixa Tem app. Of those, 36 million were previously unbanked.
  • Merenda em Casa (“snack at home”), a program run by state governments, distributed funds to low-income families with children at public schools to help them pay for food while schools were closed due to COVID-19. The program distributed funds via PicPay and PagBank’s PagSeguro, both private-sector payment apps.

Following the launch of Pix, the central bank-run RTP program was made available to clients of Caixa Tem, PicPay, and PagBank. As a result, previously unbanked individuals who had become banked because of the Coronavoucher and Merenda em Casa programs were able to obtain and use Pix keys to send and receive payments.

It remains unclear, however, what proportion of those previously unbanked individuals actually use Pix. As Figure 1 below shows, the number of Pix keys registered vastly outstrips the number of users. As such, not only is it false to claim that Pix helped reduce the number of unbanked Brazilians, but it isn’t possible to say with certainty how many of those previously unbanked individuals are now active users of Pix.

FIGURE 1: Pix Keys Registered to Natural Persons and Pix Users Who Are Natural Persons

Pix-Created Problems

Pix suffered a series of data breaches this past year, with the end result that details of Pix accounts were stolen from more than 500,000 account holders. Meanwhile, hackers have set up fake apps designed to steal money from users’ bank accounts by masquerading as legitimate Pix-compliant wallets. And Pix has been associated with a rise in lightning kidnappings, whereby kidnappers force their victims to make a transfer on Pix in order to be released.

Because banks have automatically enabled Pix on their accounts, many Brazilians cannot simply opt out of the system. Some have responded to the threat of kidnappings by purchasing second “Pix phones”: they load these mid-range Android phones with banking and Pix apps and leave them at home, while deleting all banking apps from their primary phone. While such an approach ostensibly prevents criminals from stealing potentially large amounts of money from individuals who can afford a second phone, it is a costly and inconvenient solution.

Pix vs Credit Cards

Roberto Campos Neto reportedly conceded that Pix data breaches will occur “with some frequency.” This acknowledgment of Pix’s unresolved security issues is difficult to square with the central bank president’s claim that the service will soon replace credit cards. After all, the major credit-card networks (Visa, Mastercard, American Express, and Discover) have more than half a century of experience managing fraud, and have built massive artificial-intelligence-based systems to identify and prevent potentially fraudulent transactions. Pix has no such system. Credit-card networks have also developed a highly effective system for challenging fraudulent transactions called “chargebacks.”

Card networks’ investment in fraud management has enabled them to offer “zero liability” terms to cardholders, which has made credit cards attractive as a means of paying for goods and services, both at brick-and-mortar locations and online. While Pix now has a system to reverse fraudulent transactions, its reliability has yet to be tested, and Pix as yet does not offer zero liability. Thus, given the choice between a credit card and Pix, users are unlikely to use Pix to pay for goods where there is a risk that the business will fail to deliver goods or services as promised.  

Finally, credit cards offer users the ability to defer payment for no fee until their next bill becomes due (usually at least a month). And they offer the ability to defer payment for longer, if necessary, with interest payable on the amount outstanding.

Conclusion: There Ain’t No Such Thing as a Free Lunch

The investments that credit-card networks have made in the identification, prevention, and rectification of fraud have been possible because they are able to charge a (very small) fee to process transactions. Pix also charges merchants a small fee for transactions but, as noted, it is not able to offer the same protections.

Most Pix transactions to date have been person-to-person (P2P), effectively replacing transactions that would otherwise have been made with cash, checks, or online bank-to-bank funds transfers. That makes sense when one thinks about the risks involved. P2P transactions are likely to involve parties that know one another and/or are engaged in repeat business. By contrast, many consumer-to-business and business-to-business transactions involve parties that are relatively less well-known to one another and thus have more incentive to renege on commitments. Consumers are therefore more inclined to use the payment system with protections built in, while merchants—happy for the additional sales—are willing to pay the fees that fund those protections.

The science-fiction writer Robert Heinlein popularized a pithy phrase to describe the idea that it is not possible to get something for nothing: “There Ain’t No Such Thing as a Free Lunch.” If Pix is to challenge credit cards as a real consumer-payments system, it will have to offer similar levels of fraud protection to consumers. That will not be cheap. While the central bank might continue to subsidize Pix transactions, doing so to the degree that would be necessary to offer such fraud protections would be an abuse of its position. Thinking otherwise is science fiction.

European Union lawmakers appear close to finalizing a number of legislative proposals that aim to reform the EU’s financial-regulation framework in response to the rise of cryptocurrencies. Prominent within the package are new anti-money laundering and “countering the financing of terrorism” rules (AML/CFT), including an extension of the so-called “travel rule.” The travel rule, which currently applies to wire transfers managed by global banks, would be extended to require crypto-asset service providers to similarly collect and make available details about the originators and beneficiaries of crypto-asset transfers.

This legislative process proceeded with unusual haste in recent months, which partially explains why legal objections to the proposals have not been adequately addressed. The resulting legislation is fundamentally flawed to such an extent that some of its key features are clearly invalid under EU primary (treaty) law and liable to be struck down by the Court of Justice of the European Union (CJEU). 

In this post, I will offer a brief overview of some of the concerns, which I also discuss in this recent Twitter thread. I focus primarily on the travel rule, which—in the light of EU primary law—constitutes a broad and indiscriminate surveillance regime for personal data. This characterization also applies to much of the broader AML/CFT regime.

The CJEU, the EU’s highest court, established a number of conditions that such legally mandated invasions of privacy must satisfy in order to be valid under EU primary law (the EU Charter of Fundamental Rights). The legal consequences of invalidity are illustrated well by the Digital Rights Ireland judgment, in which the CJEU struck down an entire piece of EU legislation (the Data Retention Directive). Alternatively, the CJEU could decide to interpret EU law as if it complied with primary law, even if that is contrary to the text.

The Travel Rule in the Transfer of Funds Regulation

The EU travel rule is currently contained in the 2015 Wire Transfer Regulation (WTR). But at the end of June, EU legislators reached a likely final deal on its replacement, the Transfer of Funds Regulation (TFR; see the original proposal from July 2021). I focus here on the TFR, but much of the argument also applies to the older WTR now in force. 

The TFR imposes obligations on payment-system providers and providers of crypto-asset transfers (referred to here, collectively, as “service providers”) to collect, retain, transfer to other service providers, and—in some cases—report to state authorities:

…information on payers and payees, accompanying transfers of funds, in any currency, and the information on originators and beneficiaries, accompanying transfers of crypto-assets, for the purposes of preventing, detecting and investigating money laundering and terrorist financing, where at least one of the payment or crypto-asset service providers involved in the transfer of funds or crypto-assets is established in the Union. (Article 1 TFR)

The TFR’s scope extends to money transfers between bank accounts or other payment accounts, as well as transfers of crypto assets other than peer-to-peer transfers without the involvement of a service provider (Article 2 TFR). Hence, the scope of the TFR includes, but is not limited to, all those who send or receive bank transfers. This constitutes the vast majority of adult EU residents.

The information that service providers are obligated to collect and retain (under Articles 4, 10, 14, and 21 TFR) includes data that allow for the identification of both sides of a transfer of funds (the parties’ names, as well as the address, country, official personal document number, customer identification number, or the sender’s date and place of birth) and for linking their identity with the (payment or crypto-asset) account number or crypto-asset wallet address. The TFR also obligates service providers to collect and retain additional data to verify the accuracy of the identifying information “on the basis of documents, data or information obtained from a reliable and independent source” (Articles 4(4), 7(3), 14(5), 16(2) TFR).

The scope of the obligation to collect and retain verification data is vague and is likely to lead service providers to require their customers to provide copies of passports, national ID documents, bank or payment-account statements, and utility bills, as is the case under the WTR and the 5th AML Directive. Such data is overwhelmingly likely to go beyond information on the civil identity of customers and will often, if not almost always, allow even sensitive personal data about the customer to be inferred.
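As a rough illustration only, the categories of identifying information described above could be sketched as a record like the following. The field names and structure are hypothetical; the TFR prescribes categories of information that must accompany a transfer, not a data schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of the identifying data a service provider would
# collect for the originator of a transfer. Illustrative only: the TFR
# lists categories of information, not field names or formats.
@dataclass
class OriginatorRecord:
    name: str
    account_or_wallet: str  # payment-account number or crypto-asset wallet address
    # At least one of the following identifiers accompanies the transfer:
    address: Optional[str] = None
    official_document_no: Optional[str] = None
    customer_id: Optional[str] = None
    birth_date_and_place: Optional[str] = None
    # Verification material ("documents, data or information obtained from
    # a reliable and independent source"), e.g. passport copy, utility bill:
    verification_docs: list = field(default_factory=list)

    def has_required_identifier(self) -> bool:
        """True if at least one of the listed identifiers is present."""
        return any([self.address, self.official_document_no,
                    self.customer_id, self.birth_date_and_place])
```

The point of the sketch is how much identifying material attaches to every transfer, regardless of any suspicion of criminality.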

The data-collection and retention obligations in the TFR are general and indiscriminate. The TFR’s data-collection and retention provisions draw no distinction based on the likelihood of a connection with criminal activity, except for verification data in the case of transfers of funds (an exception not applicable to crypto assets). Even that distinction (“has reasonable grounds for suspecting money laundering or terrorist financing”) arguably lacks the precision required under CJEU case law.

Analogies with the CJEU’s Passenger Name Records Decision

In late June, following its established approach in similar cases, the CJEU handed down its judgment in the Ligue des droits humains case, which challenged the EU and Belgian regimes on passenger name records (PNR). The CJEU decided there that the applicable EU law, the PNR Directive, is valid under EU primary law. But it reached that result by interpreting some of the directive’s provisions in ways contrary to their express language and by deciding that some national legal rules implementing the directive are invalid. Some of the PNR regime’s features that the court found objectionable are strikingly similar to features of the TFR regime.

First, just like the TFR, the PNR rules imposed a five-year data-retention period for the data of all passengers, even where there is no “objective evidence capable of establishing a risk that relates to terrorist offences or serious crime having an objective link, even if only an indirect one, with those passengers’ air travel.” The court decided that this was a disproportionate restriction of the rights to privacy and to the protection of personal data under Articles 7-8 of the EU Charter of Fundamental Rights. Instead of invalidating the relevant article of the PNR Directive, the CJEU reinterpreted it as if it only allowed for five-year retention in cases where there is evidence of a relevant connection to criminality.

Applying analogous reasoning to the TFR, which imposes an indiscriminate five-year data-retention period in its Article 21, the conclusion must be that this TFR provision is invalid under Articles 7-8 of the charter. Article 21 TFR may, at minimum, need to be recast to apply only to transaction data for which there is “objective evidence capable of establishing a risk” of a connection to serious crime.

The court also considered the issue of government access to data that has already been collected. Under the CJEU’s established interpretation of the EU Charter, “it is essential that access to retained data by the competent authorities be subject to a prior review carried out either by a court or by an independent administrative body.” In the PNR regime, at least some countries (such as Belgium) assigned this role to their “passenger information units” (PIUs). The court noted that a PIU is “an authority competent for the prevention, detection, investigation and prosecution of terrorist offences and of serious crime, and that its staff members may be agents seconded from the competent authorities” (e.g., from police or intelligence authorities). But according to the court:

That requirement of independence means that that authority must be a third party in relation to the authority which requests access to the data, in order that the former is able to carry out the review, free from any external influence. In particular, in the criminal field, the requirement of independence entails that the said authority, first, should not be involved in the conduct of the criminal investigation in question and, secondly, must have a neutral stance vis-a-vis the parties to the criminal proceedings …

The CJEU decided that PIUs do not satisfy this requirement of independence and, as such, cannot decide on government access to the retained data.

The TFR (especially its Article 19 on provision of information) does not provide for prior independent review of access to retained data. To the extent that such a review is conducted by Financial Intelligence Units (FIUs) under the AML Directive, concerns arise that are very similar to those raised about PIUs under the PNR regime. While Article 32 of the AML Directive requires FIUs to be independent, that doesn’t necessarily mean they are independent in the ways required of an authority that decides on access to retained data under Articles 7-8 of the EU Charter. For example, the AML Directive does not preclude the possibility of seconding public prosecutors, police, or intelligence officers to FIUs.

It is worth noting that none of the conclusions reached by the CJEU in the PNR case are novel; they are well-grounded in established precedent. 

A General Proportionality Argument

Setting aside specific analogies with previous cases, the TFR clearly has not been accompanied by a more general and fundamental reflection on the proportionality of its basic scheme in the light of the EU Charter. A pressing question is whether the TFR’s far-reaching restrictions of the rights established in Articles 7-8 of the EU Charter (and perhaps other rights, like freedom of expression in Article 11) are strictly necessary and proportionate. 

Arguably, the AML/CFT regime—including the travel rule—is significantly more costly and more rights-restricting than potential alternatives. The basic problem is that there is no reliable data on the relative effectiveness of measures like the travel rule. Defenders of the current AML/CFT regime point to evidence that it contributes to preventing or prosecuting some crime. But this is not the relevant question when it comes to proportionality. The relevant question is whether those measures are as effective as, or more effective than, less costly and more privacy-preserving alternatives. One conservative estimate holds that AML compliance costs in Europe were “120 times the amount successfully recovered from criminals” and exceeded the estimated total of criminal funds (including funds not seized or identified).

The fact that the current AML/CFT regime is a de facto global standard cannot serve as a sufficient justification either, given that EU fundamental-rights law is perfectly comfortable rejecting non-European law-enforcement practices (see the CJEU’s decision in Schrems). The travel rule was imported unquestioningly into EU law from U.S. law (via FATF), where the standards of constitutional privacy protection are very different from those under the EU Charter. This fact would likely be noticed by the Court of Justice in any putative challenge to the TFR or other elements of the AML/CFT regime.

Here, I only flag the possibility of a general proportionality challenge. Much more work needs to be done to flesh it out.

Conclusion

Due to the political and resource constraints of the EU legislative process, it is possible that the legislative proposals in the financial-regulation package did not receive sufficient legal scrutiny from the perspective of their compatibility with the EU Charter of Fundamental Rights. This hypothesis would explain the presence of seemingly clear violations, such as the indiscriminate five-year data-retention period. Given that none of the proposals has, as yet, been voted into law, making the legislators aware of the problem may help to address at least some of the issues.

Legal arguments about the AML/CFT regime’s incompatibility with the EU Charter should be accompanied by concrete alternative proposals to achieve the goals of preventing and combating serious crime that, according to the best evidence, the current AML/CFT regime does ineffectively. We need more regulatory imagination. For example, one part of the solution may be to properly staff and equip government agencies tasked with prosecuting financial crime.

But it’s also possible that the proposals, including the TFR, will be adopted largely without amendment. In that case, the main recourse available to EU citizens (or to any EU government) will be to challenge the legality of the measures before the Court of Justice.

Banco Central do Brasil (BCB), Brazil’s central bank, launched a new real-time payment (RTP) system in November 2020 called Pix. Evangelists at the central bank hoped that Pix would offer a low-cost alternative to existing payments systems and would entice some of the country’s tens of millions of unbanked and underbanked adults into the banking system.

A recent review of Pix, published by the Bank for International Settlements, claims that the payment system has achieved these goals and that it is a model for other jurisdictions. However, the BIS review appears to have been written through rose-tinted spectacles. This is perhaps not surprising, given that the lead author runs the division of the central bank that developed Pix. In a critique published this week, I suggest that, when seen in full color, Pix looks a lot less pretty.

Among other things, the BIS review misconstrues the economics of payment networks. By ignoring the two-sided nature of such networks, the authors erroneously claim that payment cards impose a net economic cost. In fact, evidence shows that payment cards generate net benefits. One study put their value added to the Brazilian economy at 0.17% of GDP.

The report also obscures the costs of the Pix system and fails to explain that, whereas private payment systems must recover their full operational cost, Pix appears to benefit from both direct and indirect subsidies. The direct subsidies come from the BCB, which incurred substantial costs in developing and promoting Pix and, unlike other central banks such as the U.S. Federal Reserve, is not required to recover all operational costs. Indirect subsidies come from the banks and other payment-service providers (PSPs), many of which have been forced by the BCB to provide Pix to their clients, even though doing so cannibalizes their other payment systems, including interchange fees earned from payment cards. 

Moreover, the BIS review mischaracterizes the role of interchange fees, which are often used to encourage participation in the payment-card network. In the case of debit cards, this often includes covering some or all of the operational costs of bank accounts. The availability of “free” bank accounts with relatively low deposit requirements offers customers incentives to open and maintain accounts. 

While the report notes that Pix has “signed up” 67% of adult Brazilians, it fails to mention that most of these were automatically enrolled by their banks, the majority of which were required by the BCB to adopt Pix. It also fails to mention that 33% of adult Brazilians have not “signed up” to Pix, nor that a recent survey found that more than 20% of adult Brazilians remain unbanked or underbanked, nor that the main reason given for not having a bank account was the cost of such accounts. Moreover, by diverting payments away from debit cards, Pix has reduced interchange fees and thereby reduced the ability of banks and other PSPs to subsidize bank accounts, which might otherwise have increased financial inclusion.  

The BIS review falsely asserts that “Big Tech” payment networks are able to establish and maintain market power. In reality, tech firms operate in highly competitive markets and have little to no market power in payment networks. Nonetheless, the report uses this claim regarding Big Tech’s alleged market power to justify imposing restrictions on the WhatsApp payment system. The irony, of course, is that by moving to prohibit the WhatsApp payment service shortly before the rollout of Pix, the BCB unfairly inhibited competition, effectively giving Pix a monopoly on RTP with the full support of the government. 

In acting as both a supplier of a payment service and the regulator of payment-service providers, the BCB has a massive conflict of interest. Indeed, the BIS itself has recommended that, in cases where such conflicts might exist, it is good practice to ensure that the regulator is clearly separated from the supplier. Pix, by contrast, was developed and promoted by the same part of the central bank that serves as the payments regulator.

Finally, the BIS report also fails to address significant security issues associated with Pix, including a dramatic rise in the number of “lightning kidnappings” in which hostages were forced to send funds to Pix addresses. 

Welcome to the FTC UMC Roundup, our new weekly update of news and events relating to antitrust and, more specifically, to the Federal Trade Commission’s (FTC) newfound interest in “revitalizing” the field. Each week we will bring you a brief recap of the week that was and a preview of the week to come. All with a bit of commentary and news of interest to regular readers of Truth on the Market mixed in.

This week’s headline? Of course it’s that Alvaro Bedoya has been confirmed as the FTC’s fifth commissioner—notably breaking the commission’s 2-2 tie between Democrats and Republicans and giving FTC Chair Lina Khan the majority she has been lacking. Politico and Gibson Dunn both offer some thoughts on what to expect next—though none of the predictions are surprising: more aggressive merger review and litigation; UMC rulemakings on a range of topics, including labor, right-to-repair, and pharmaceuticals; and privacy-related consumer protection. The real question is how quickly and aggressively the FTC will implement this agenda. Will we see a flurry of rulemakings in the next week, or will they be rolled out over a period of months or years? Will the FTC risk major litigation questions with a “go big or go home” attitude, or will it take a more incrementalist approach to boiling the frog?

Much of the rest of this week’s action happened on the Hill. Khan, joined by Securities and Exchange Commission (SEC) Chair Gary Gensler, made the regular trip to Congress to ask for a bigger budget to support more hires. (FTC, Law360) Sen. Mike Lee (R-Utah) asked for unanimous consent on his State Antitrust Enforcement Venue Act, but met resistance from Sen. Amy Klobuchar (D-Minn.), who wants that bill paired with her own American Innovation and Choice Online Act. This follows reports that Senate Majority Leader Chuck Schumer (D-N.Y.) is pushing Klobuchar to get support in line for both AICOA and the Open App Markets Act to be brought to the Senate floor. Of course, if they had the needed support, we probably wouldn’t be talking so much about whether they have the needed support.

Questions about the climate at the FTC continue following release of the Office of Personnel Management’s (OPM) Federal Employee Viewpoint Survey. Sen. Roger Wicker (R-Miss.) wants to know what has caused staff satisfaction at the agency to fall precipitously. And former senior FTC staffer Eileen Harrington issued a stern rebuke of the agency at this week’s open meeting, saying of the relationship between leadership and staff: “The FTC is not a failed agency but it’s on the road to becoming one. This is a crisis.”

Perhaps the only thing experiencing greater inflation than the dollar is interest in the FTC doing something about inflation. Alden Abbott and Andrew Mercado remind us that these calls are misplaced. But that won’t stop politicians from demanding the FTC do something about high gas prices. Or beef production. Or utilities. Or baby formula.

A little further afield, the 5th U.S. Circuit Court of Appeals issued an opinion this week in a case involving SEC administrative-law judges that took broad issue with them on delegation, due process, and “take care” grounds. It may come as a surprise that this has led to much overwrought consternation that the opinion would dismantle the administrative state. But given that it is often the case that the SEC and FTC face similar constitutional issues (recall that Kokesh v. SEC was the precursor to AMG Capital), the 5th Circuit case could portend future problems for FTC adjudication. Add this to the queue with the Supreme Court’s pending review of whether federal district courts can consider constitutional challenges to an agency’s structure. The court was already scheduled to consider this question with respect to the FTC this next term in Axon, and agreed this week to hear a similar SEC-focused case next term as well. 

Some Navel-Gazing News! 

Congratulations to recent University of Michigan Law School graduate Kacyn Fujii, winner of our New Voices competition for contributions to our recent symposium on FTC UMC Rulemaking (hey, this post is actually part of that symposium, as well!). Kacyn’s contribution looked at the statutory basis for FTC UMC rulemaking authority and evaluated the use of such authority as a way to address problematic use of non-compete clauses.

And, one for the academics (and others who enjoy writing academic articles): you might be interested in this call for proposals for a research roundtable on Market Structuring Regulation that the International Center for Law & Economics will host in September. If you are interested in writing on topics that include conglomerate business models, market-structuring regulation, vertical integration, or other topics relating to the regulation and economics of contemporary markets, we hope to hear from you!

[Today’s guest post—the 11th entry in our FTC UMC Rulemaking symposium—comes from Ramsi A. Woodcock of the University of Kentucky’s Rosenberg College of Law. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

In an effort to fight inflation, the Federal Open Market Committee raised interest rates to 20% over the course of 1980 and 1981, triggering a recession that threw more than 4 million Americans, many in well-paying manufacturing jobs, out of work.

As it continues to do today, the committee met in secret and explained its rate decisions in a handful of paragraphs.

None of the millions of Americans thrown out of work—or the many businesses driven to bankruptcy—sued the FOMC. No one argued that the FOMC’s power to disrupt the American economy was an unconstitutional delegation of legislative authority. No one argued that, in adopting its rate decisions, the FOMC had failed to comply with any of the notice-and-comment procedures required by the Administrative Procedure Act (APA).

They were wise not to sue, because they would have lost.

There have been only five lawsuits against the FOMC since it was created in 1933. All have failed; none has challenged an FOMC rate decision.

As Judge Augustus Hand put it in a related case: “it would be an unthinkable burden upon any banking system if its open market sales and discount rates were to be subject to judicial review.”

Even if everything Frank Easterbrook has had to say about antitrust is correct, it is unlikely that the Federal Trade Commission (FTC) could ever trigger a recession, much less one as severe as the one the FOMC created 40 years ago. And yet, no FTC commissioner can dream of the agency enjoying anything like the level of deference from the courts enjoyed by the FOMC.

The reality of FTC practice is just too depressing.

The FTC Act of 1914 is an expression of profound ambivalence about the administrative project, denying to the FTC even the authority to carry out internal deliberations other than through an adjudicative process. The FTC must bring an administrative complaint; firms have the right to a hearing; and so on. A Congress that would do that to an agency would certainly subject the agency’s final decisions to review by the federal courts—which, of course, Congress did.

Unlike their francophone peers on the European Court of Justice (ECJ), who have leveraged a culture of judicial deference to administrative action—as well as the fact that the ECJ’s language of business is their native tongue—to give the European Union’s antitrust agency something like carte blanche, American judges have delighted at using their powers to humiliate the FTC.

Take pay-for-delay. The FTC—informed by a staff of 80 PhD economists, not all Democrats—declared the practice to be bad for consumers in the late 1990s. But several courts actually decided that the practice was so good for consumers that it should be per se legal instead. It took more than a decade of litigation before the FTC was able to make a dent in the rate of accumulation of these agreements.

So whipped is the FTC by the courts that even when it dreams of a better life, the commission seems unable to imagine one without judicial review. During a period when bipartisan groups of legislators are seeking to reform the antitrust laws, one might have hoped that the FTC would ask for some of the discretion enjoyed by the FOMC.

Instead, the FTC’s current leadership appears intent on strapping the FTC into the straitjacket of notice-and-comment rulemaking under the APA, which will only extend the FTC’s subjugation to the courts.

Indeed, progressives understood the passage of the APA in 1946 to be a signal defeat, clawing back power for the courts that progressives had fought for two generations to lodge in administrative agencies. The act was literally adopted over FDR’s dead body—he vetoed its forerunner in 1940 and died in 1945. It is consistent with contemporary progressives’ habit of mistaking counterproductive, middle-of-the-road policies for radical interventions (the original progressives of a century ago didn’t think much of the entire antitrust enterprise, either), that they should mistake the APA’s notice-and-comment rulemaking for a recipe for FTC invigoration.

To be sure, the issuance of competition regulations would be a new thing for the FTC. Rather than just enforce existing antitrust rules (and fantasizing that, one day, a court might read the FTC’s power to condemn “unfair methods of competition” more broadly), the FTC would be able actually to make new antitrust law.

But law is a double-edged sword for an administrative agency. It binds the public, but it also binds the agency. Any rule the FTC seeks to adopt, the FTC itself must follow; if a defendant can show that it complied with the rule, the FTC loses its case.

And that’s after the FTC has made it through the hell of the rulemaking process itself—the notice-and-comment periods, the court challenges to the agency’s interpretation of every point of process, along with the substantive basis for the rule—for every single rule the agency wishes to adopt. Or to repeal.

The FOMC suffers no such indignities.

Although Congress calls the FOMC’s decisions “regulations,” they are not subject to the APA. The FOMC can make a rate decision and then change its mind whenever and however it wishes. The FOMC does not need to provide the public with notice and an opportunity to comment—indeed, the FOMC waits five years to release transcripts of its deliberations—and its decisions are never reviewed, even for caprice.

If the FTC wanted real power—if it wanted to get something done—it would want discretion. Discretion has made the FOMC nimble and being nimble has made the FOMC effective. Economists agree that the FOMC’s rate decisions slew inflation in the early 1980s; it could not have done that if, like the FTC and pay-for-delay, it had had to wait a decade for the courts’ approval.

As Judge Hand put it, “the correction of discount rates by judicial decree seems almost grotesque, when we remember that conditions in the money market often change from hour to hour, and the disease would ordinarily be over long before a judicial diagnosis could be made.”

How strange it is to read this as an antitrust scholar and reflect that the single most important attack on antitrust enforcement has always been, in Judge Hand’s words, that “the disease [is] ordinarily … over long before a judicial diagnosis [is] made.”

Is that not the lesson drawn by antitrust’s critics from the Microsoft litigation? Microsoft may well have monopolized operating systems in 1992 or 1994. But by the time the case settled in 2001, Windows’ dominance could not be rolled back. America was already used to a single operating system, a single Office suite, and so on. And mobile, which Microsoft did not dominate, was on the horizon. If there had been a time when antitrust enforcers could have done something to promote competition, it had passed.

Or AT&T. Antitrust managed to break the company up just in time for the cell-phone revolution to render its decades-old landline monopoly irrelevant.

If, as Judge Hand observed, “conditions in the money market change from hour to hour,” so too do conditions in virtually every market—including the markets that the FTC regulates. If that is the argument for FOMC discretion, it is an equally potent argument for FTC discretion.

But to get power, you have to want it, and the current leadership cries out instead only for a more varied servitude.

The case for instead making the FTC more like the FOMC is strong. (Even the name fits.)

Both institutions are charged with using indirect methods to get prices right in fluid market environments—the FOMC by using the purchase and sale of securities to get interest rates right; the FTC by tweaking market structure to get market prices to competitive levels. As has already been observed, this can be done effectively only through the unfettered exercise of administrative discretion.

Independence from all three branches of government (including the courts) is essential to both. Just as an accountable FOMC would probably not have had the will to throw millions out of work and drive many businesses into bankruptcy in order to fight inflation—even though that was ultimately best for the economy—an accountable FTC cannot embark on a campaign of economy-wide deconcentration when that is the right thing for the economy (which is not to say that it always is).

The sort of systemic regulation of the preconditions for a successful capitalism in which both the FOMC and the FTC are engaged creates too many powerful winners and losers for either institution to be able to do its job without complete and utter discretion to act as it sees fit—something the FTC lacks.

Indeed, the last time the FTC tried to flex its muscles, it was smacked down by all three branches of government—attacked by both Jimmy Carter and Ronald Reagan from the campaign trail, threatened with defunding by Congress, and rejected by the courts.

One can distinguish the FOMC from the FTC on the grounds that the FOMC paints with a broader brush than does the FTC. To get interest rates right, the FOMC directs the purchase and sale of securities, often in great volumes, whereas the FTC may need to tell a single, identifiable company how to do a particular, identifiable thing, such as to distribute a particular input on reasonable terms or to excise a particular provision from its contracts. Because of the potential for abuse of the individual that might result from such individualized action, the argument goes, the courts must keep the FTC on a tighter leash.

There is a fictional premise here. The FTC rarely deals with individuals—flesh-and-blood humans—but instead with corporations, often so large that they have thousands of workers and managers, and still more shareholders. The potential for abuse of actual individuals, as opposed to the fictive corporate individual, is low.

But even if we accept this fiction—as, alas, the courts have done—the FTC differs from the FOMC here only because it has so far adhered to an adjudicatory model of decisionmaking. The FTC could, for example, decide instead to target competitive prices by ordering every firm in the economy with an accounting profit in excess of 15% to be broken up, along the lines of the Industrial Reorganization Act considered by Congress in the 1970s.

That would paint with a brush of FOMCian breadth. Indeed, by varying the triggering profit percentage, the FTC would be able to vary, in a rough way, the level of competition and hence the level of prices in the economy, just as, by varying its target interest rate, the FOMC varies, in a rough way, the level of inflation in the economy.
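For illustration, the hypothetical threshold rule can be sketched in a few lines. Nothing like this exists in law; the 15% trigger is simply the figure from the thought experiment above, and the function name and inputs are invented for the example.

```python
def firms_to_break_up(firms: dict[str, float], profit_threshold: float = 0.15) -> list[str]:
    """Toy version of the hypothetical rule: flag every firm whose
    accounting profit margin exceeds the triggering threshold.
    Lowering the threshold 'tightens' policy, much as a rate hike does;
    raising it loosens policy. Purely illustrative, not a real rule."""
    return [name for name, margin in firms.items() if margin > profit_threshold]

# Hypothetical economy of three firms with stated profit margins:
portfolio = {"A Corp": 0.22, "B Corp": 0.08, "C Corp": 0.16}
firms_to_break_up(portfolio)        # A Corp and C Corp exceed 15%
firms_to_break_up(portfolio, 0.20)  # only A Corp exceeds 20%
```

The single tunable parameter is what makes the analogy to the FOMC's target rate: one dial, adjusted over time, rather than case-by-case adjudication.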

(I do not mean to suggest an equivalence between monopoly pricing and inflation; monopoly pricing is a problem of levels whereas inflation is a problem of rates of change; they are two different problems with two different causes, two different institutions to mind them, and two different fixes.)

And although such a broad approach would surely send copious “good” firms that have engaged in no monopolizing activities to their fates, the FOMC’s rate increases doubtless also send to their fates plenty of “good” firms that have not inflated their prices but cannot survive at a 20% cost of capital. The FOMC does that because it is more expedient to discipline every firm than to identify the inflators and coax them into altering their behavior on a case-by-case basis.

We tolerate this sacrifice of innocents because we believe that low inflation confers long-term gains on everyone. If we believe that competitive pricing confers long-term gains on everyone—and that is the premise of competition policy—surely we must tolerate the same from the FTC.

If anything, the case for a broad-brush FTC is stronger than that for the FOMC, because, as already noted, no matter how overzealous the deconcentration program, it is hard to imagine deconcentration plunging the economy into recession and throwing millions of Americans out of work, at least in the short run.

If anything, deconcentration should raise employment, because competition is wasteful and duplicative; all those shards of big firms need their own independent support staffs. And, of course, it is a staple of antitrust theory that when competition increases, output goes up, not down.

One might also seek to distinguish between the FOMC and the FTC on the grounds that what the FTC must do is more complicated, and hence more prone to error, than what the FOMC must do, making oversight more appropriate for the FTC. Both inflation and monopoly power are bad for growth, the argument might go, but the connection between inflation and growth is clear whereas that between monopoly power and growth—not so much.

Indeed, too much inflation prevents firms from planning and, so, from innovating. But while the adversity associated with competition is the mother of invention, many innovations—such as social networks—can be delivered only at scale, suggesting that too much competition can be as bad for growth as too little. It would seem to follow that getting monetary policy right is easy, whereas getting competition policy right is hard.

Except that the FOMC must strike a balance between too much inflation and too little, just as the FTC must strike a balance between too much competition and too little.

Deflation can be just as bad for growth—just as hard on business planning—as inflation, as any Japanese central banker of the previous generation can tell you. The FOMC must, therefore, find the interest rates that produce neither too much nor too little inflation, just as the FTC must find the level of concentration that produces neither too much nor too little competition.

Both the FOMC and the FTC have hard jobs. Why do we trust one to handle its job better than the other?

One reason might be that the FOMC is a friend to big business whereas the FTC is a natural enemy thereof. Inflation, when unexpected, levels the distribution of wealth, because it reduces the real value of debts. If firms tend to be creditors and consumers debtors, and firms’ shareholders tend to be richer than consumers, the wealth gap narrows.

It follows that, in preventing inflation, the FOMC tilts the distribution back toward wealth, and so big business wants the FOMC healthy and free. The FTC, by contrast, levels the distribution, because it eliminates monopoly profits, benefiting consumers at the expense of shareholders. So, big business prefers the FTC shackled.

If that is right, then the FOMC enjoys a level of discretion that the FTC never can, because the power behind government never will give the FTC so loose a leash. Congress has authorized both the FOMC and the FTC to create regulations, but the courts do not read the parallel language the same way for the two agencies: for the FOMC, to “adopt” a “regulation” means to do whatever you like, whereas for the FTC, to “make” a “regulation” means either nothing at all or, at best, notice-and-comment rulemaking under the APA.

But I rather think there is a better explanation for the divergent experiences of the FOMC and the FTC, one that does not turn on class conflict and which has been staring us in the face all along.

Just as competition policy probably cannot cause a recession or throw millions of Americans out of work, it probably cannot much increase growth or employ many more Americans either. The future of an economy may be decided by the variance of an interest rate between 0% and 20%; this is not so for the variance of a market price between the competitive level and the monopoly level. The FOMC is simply more important to the success of the capitalist system than is the FTC.

And both are probably not that important for economic inequality. While unexpected inflation does tend to make debts go away, firms rewrite contracts to account for expected inflation, so inflation’s contribution to equality is blip-like.

The contribution of monopoly profits to inequality is also likely to be small; scarcity profits, which firms generate even in competitive markets, are likely to play a more important role. At least, that’s what Thomas Piketty, the dean of inequality studies, happens to think.

And maybe also what the rich think: there is conservative support for more competition policy, but none for more tax policy, which tells us something about which is likely to have a more radical impact on the distribution of wealth.

So, it is because the FTC is not dangerous, rather than because it is dangerous, that we feel free to hobble it with process. And it is because the FOMC is dangerous that we want it free and maximally effective.

Just so, there is no due process in wartime because there is so much at stake, whereas in peacetime you can’t kill a statue without multiple appeals.

Which takes us back to the real deficit in progressive radicalism. Yes, rulemaking for the FTC is a cop out.

But so is the entire antitrust project.

In Fleites v. MindGeek—currently before the U.S. District Court for the Central District of California, Southern Division—plaintiffs seek to hold MindGeek subsidiary PornHub liable for alleged instances of human trafficking under the Racketeer Influenced and Corrupt Organizations (RICO) Act and the Trafficking Victims Protection Reauthorization Act (TVPRA). Writing for the International Center for Law & Economics (ICLE), we have filed a motion for leave to submit an amicus brief addressing whether it is proper to treat co-defendant Visa Inc. as a liable party under principles of collateral liability.

The proposed brief draws on our previous work on the law & economics of collateral liability, and argues that holding Visa liable as a participant under RICO or TVPRA would amount to stretching collateral liability far beyond what is reasonable. Such a move, we posit, would “generate a massive amount of social cost that would outweigh the potential deterrent or compensatory gains sought.”

Collateral liability can make sense when intermediaries are in a position to effectively monitor and control potential harms. That is, it can be appropriate to apply collateral liability to a party that is what is often referred to as the “least-cost avoider.” As we write:

In some circumstances it is indeed proper to hold third parties liable even though they are not primary actors directly implicated in wrongdoing. Most significantly, such liability may be appropriate when a collateral actor stands in a relationship to the wrongdoing (or wrongdoers or victims) such that the threat of liability can incentivize it to take action (or refrain from taking action) to prevent or mitigate the wrongdoing. That is to say, collateral liability may be appropriate when the third party has a significant enough degree of control over the primary actors such that its actions can cause them to reduce the risk of harm at reasonable cost. Importantly, however, such liability is appropriate only when direct deterrence is insufficient and/or the third party can prevent harm at lower cost or more effectively than direct enforcement… From an economic perspective, liability should be imposed upon the party or parties best positioned to deter the harms in question, such that the costs of enforcement do not exceed the social gains realized.

The common law of negligence and contributory infringement under copyright law both help illustrate this principle. Under the common law, collateral actors have a duty in only limited circumstances: when the harms are “reasonably foreseeable” and the actor has special access to particularized information about the victims or the perpetrators, as well as a special ability to control harmful conditions. Under copyright law, collateral liability is similarly limited to circumstances where collateral actors are best positioned to prevent the harm and the benefits of holding such actors liable exceed the costs.

Neither of these conditions is true in Fleites v. MindGeek: Visa is not the type of collateral actor that has any access to specialized information or the ability to control the actual bad actors. Visa, as a card-payment network, simply processes payments. The only tool at Visa’s disposal is a giant sledgehammer: it can foreclose all transactions to particular sites that run over its network. There is no dispute that the vast majority of content hosted on sites like MindGeek is lawful, however awful one may believe pornography to be. Holding card networks liable here would create incentives to avoid processing payments for such sites altogether in order to avoid legal consequences.

The potential costs of the theory of liability asserted here stretch far beyond Visa or this particular case. The plaintiffs’ theory would hold anyone liable who provides services that “allow[] the alleged principal actors to continue to do business.” This would mean that Federal Express, for example, would be liable for continuing to deliver packages to MindGeek’s address or that a waste-management company could be liable for providing custodial services to the building where MindGeek has an office. 

According to the plaintiffs, even the mere existence of a newspaper article alleging a company is doing something illegal is sufficient to find that professionals who have provided services to that company “participate” in a conspiracy. This would have ripple effects for professionals from many other industries—from accountants to bankers to insurance—who all would see significantly increased risk of liability.

To read the rest of the brief, see here.

[Judge Douglas Ginsburg was invited to respond to the Beesley Lecture given by Andrea Coscelli, chief executive of the U.K. Competition and Markets Authority (CMA). Both the lecture and Judge Ginsburg’s response were broadcast by the BBC on Oct. 28, 2021. The text of Mr. Coscelli’s Beesley lecture is available on the CMA’s website. Judge Ginsburg’s response follows below.]

Thank you, Victoria, for the invitation to respond to Mr. Coscelli and his proposal for a legislatively founded Digital Markets Unit. Mr. Coscelli is one of the most talented, successful, and creative heads a competition agency has ever had. In the case of the DMU [ed., Digital Markets Unit], however, I think he has let hope triumph over experience and prudence. This is often the case with proposals for governmental reform: Indeed, it has a name, the Nirvana Fallacy, which comes from comparing the imperfectly functioning marketplace with the perfectly functioning government agency. Everything we know about the regulation of competition tells us the unintended consequences may dwarf the intended benefits and the result may be a less, not more, competitive economy. The precautionary principle counsels skepticism about such a major and inherently risky intervention.

Mr. Coscelli made a point in passing that highlights the difference in our perspectives: He said the SMS [ed., strategic market status] merger regime would entail “a more cautious standard of proof.” In our shared Anglo-American legal culture, a more cautious standard of proof means the government would intervene in fewer, not more, market activities; proof beyond a reasonable doubt in criminal cases is a more cautious standard than a mere preponderance of the evidence. I, too, urge caution, but of the traditional kind.

I will highlight five areas of concern with the DMU proposal.

I. Chilling Effects

The DMU’s ability to designate a firm as being of strategic market significance—or SMS—will cast a cloud over innovative activity in far more sectors than Mr. Coscelli could mention in his lecture. He views the DMU’s reach as limited to a small number of SMS-designated firms; that may prove true, but there is nothing in the proposal limiting the DMU’s reach.

Indeed, the DMU’s authority to regulate digital markets is surely going to be difficult to confine. Almost every major retail activity or consumer-facing firm involves an increasingly significant digital component, particularly after the pandemic forced many more firms online. Deciding which firms the DMU should cover seems easy in theory, but will prove ever more difficult and cumbersome in practice as digital technology continues to evolve. For instance, now that money has gone digital, a bank is little more than a digital platform bringing together lenders (called depositors) and borrowers, much as Amazon brings together buyers and sellers; so, is every bank with market power and an entrenched position to be subject to rules and remedies laid down by the DMU as well as supervision by the bank regulators? Is Aldi in the crosshairs now that it has developed an online retail platform? Match.com, too? In short, the number of SMS firms will likely grow apace in the next few years.

II. SMS Designations Should Not Apply to the Whole Firm

The CMA’s proposal would apply each SMS designation firm-wide, even if the firm has market power in only a single line of business. This will inhibit investment in further diversification and put an SMS firm at a competitive disadvantage across all its businesses.

Perhaps company-wide SMS designations could be justified if the unintended costs were balanced by expected benefits to consumers, but this will not likely be the case. First, there is little evidence linking consumer harm to lines of business in which large digital firms do not have market power. On the contrary, despite the discussion of Amazon’s supposed threat to competition, consumers enjoy lower prices from many more retailers because of the competitive pressure Amazon brings to bear upon them.

Second, the benefits Mr. Coscelli expects the economy to reap from faster government enforcement are, at best, a mixed blessing. The proposal, you see, reverses the usual legal norm, instead making interim relief the rule rather than the exception. If a firm appeals its SMS designation, then under the CMA’s proposal, the DMU’s SMS designations and pro-competition interventions, or PCIs, will not be stayed pending appeal, raising the prospect that a firm’s activities could be regulated for a significant period even though it was improperly designated. Even prevailing in the courts may be a Pyrrhic victory because opportunities will have slipped away. Making matters worse, the DMU’s designation of a firm as SMS will likely receive a high degree of judicial deference, so that errors may never be corrected.

III. The DMU Cannot Be Evidence-based Given its Goals and Objectives

The DMU’s stated goal is to “further the interests of consumers and citizens in digital markets by promoting competition and innovation.”[1] The DMU’s objectives for developing codes of conduct are: fair trading, open choices, and trust and transparency.[2] Fairness, openness, trust, and transparency are all concepts that are difficult to define and probably impossible to quantify. Therefore, I fear Mr. Coscelli’s aspiration that the DMU will be an evidence-based, tailored, and predictable regime is unrealistic. The CMA’s idea of “an evidence-based regime” seems destined to rely mostly upon qualitative conjecture about the potential for the code of conduct to set “rules of the game” that encourage fair trading, open choices, trust, and transparency. Even if the DMU commits to considering empirical evidence at every step of its process, these fuzzy, qualitative objectives will allow it to come to virtually any conclusion about how a firm should be regulated.

Implementing those broad goals also throws into relief the inevitable tensions among them. Some potential conflicts between DMU’s objectives for developing codes of conduct are clear from the EU’s experience. For example, one of the things DMU has considered already is stronger protection for personal data. The EU’s experience with the GDPR shows that data protection is costly and, like any costly requirement, tends to advantage incumbents and thereby discourage new entry. In other words, greater data protections may come at the expense of start-ups or other new entrants and the contribution they would otherwise have made to competition, undermining open choices in the name of data transparency.

Another example of tension is clear from the distinction between Apple’s iOS and Google’s Android ecosystems. They take different approaches to the trade-off between data privacy and flexibility in app development. Apple emphasizes consumer privacy at the expense of allowing developers flexibility in their design choices and offers its products at higher prices. Android devices have fewer consumer-data protections but allow app developers greater freedom to design their apps to satisfy users and are offered at lower prices. The case of Epic Games v. Apple put on display the purportedly pro-competitive arguments the DMU could use to justify shutting down Apple’s “walled garden,” whereas the EU’s GDPR would cut against Google’s open ecosystem with limited consumer protections. Apple’s model encourages consumer trust and adoption of a single, transparent model for app development, but Google’s model encourages app developers to choose from a broader array of design and payment options and allows consumers to choose between the options; no matter how the DMU designs its code of conduct, it will be creating winners and losers at the cost of either “open choices” or “trust and transparency.” As experience teaches, it is simply not possible for an agency with multiple goals to serve them all at the same time. The result is an unreviewable discretion to choose among them ad hoc.

Finally, notice that none of the DMU’s objectives—fair trading, open choices, and trust and transparency—revolves around quantitative evidence; at bottom, these goals are not amenable to the kind of rigor Mr. Coscelli hopes for.

IV. Speed of Proposals

Mr. Coscelli has emphasized the slow pace of competition law matters; while I empathize, surely forcing merging parties to prove a negative and truncating their due process rights is not the answer.

As I mentioned earlier, it seems a more cautious standard of proof to Mr. Coscelli is one in which an SMS firm’s proposal to acquire another firm is presumed, or all but presumed, to be anticompetitive and unlawful. That is, the DMU would block the transaction unless the firms can prove their deal would not be anticompetitive—an extremely difficult task. The most self-serving version of the CMA’s proposal would require it to prove only that the merger poses a “realistic prospect” of lessening competition, a standard that is vague but may in practice sit well below a 50% probability. Proving that the merged entity would not harm competition will still require a predictive, forward-looking assessment with inherent uncertainty, but the CMA wants the costs of that uncertainty placed upon the firms, rather than upon itself. Given the inherent uncertainty in merger analysis, the CMA’s proposal would impose an unprecedented burden of proof on merging parties.

But it is not only merging parties the CMA would deprive of due process; the DMU’s so-called pro-competitive interventions (PCIs), SMS designations, and code-of-conduct requirements generally would not be stayed pending appeal. Further, an SMS firm could overturn the CMA’s designation only if it could overcome substantial deference to the DMU’s fact-finding. It is difficult to discern, then, any practical difference between the agency’s interim decisions and its final orders.

The DMU would not have to show or even assert an extraordinary need for immediate relief. This is the opposite of current practice in every jurisdiction with which I am familiar. Interim orders should take immediate effect only in exceptional circumstances, when there would otherwise be significant and irreversible harm to consumers, not in the ordinary course of agency decision making.

V. Antitrust Is Not Always the Answer

Although one can hardly disagree with Mr. Coscelli’s premise that the digital economy raises new legal questions and practical challenges, it is far from clear that competition law is the answer to them all. Some commentators of late are proposing to use competition law to solve consumer protection and even labor market problems. Unfortunately, this theme also recurs in Mr. Coscelli’s lecture. He discusses concerns with data privacy and fair and reasonable contract terms, but those have long been the province of consumer protection and contract law; a government does not need to step in and regulate all realms of activity by digital firms and call it competition law. Nor is there reason to confine needed protections of data privacy or fair terms of use to SMS firms.

Competition law remedies are sometimes poorly matched to the problems a government is trying to correct. Mr. Coscelli discusses the possibility of strong interventions, such as forcing the separation of a platform from its participation in retail markets; for example, the DMU could order Amazon to spin off its online business selling and shipping its own brand of products. Such powerful remedies can be a sledgehammer; consider forced data sharing or interoperability mandates intended to make it easier for new competitors to enter. For example, if Apple’s App Store is required to host all apps submitted to it in the interest of consumer choice, then Apple loses its ability to screen for security, privacy, and other consumer benefits, as its refusal to deal is its only way to prevent participation in its store. Further, it is not clear consumers want Apple’s store to change; indeed, many prefer Apple products because of their enhanced security.

Forced data sharing would also be problematic; the hiQ v. LinkedIn case in the United States should serve as a cautionary tale. The trial court granted a preliminary injunction forcing LinkedIn to allow hiQ to scrape its users’ profiles while the suit was ongoing. LinkedIn ultimately won the suit because it did not have market power, much less a monopoly, in any relevant market. The court concluded each theory of anticompetitive conduct was implausible, but meanwhile LinkedIn had been forced to allow hiQ to scrape its data for an extended period before the final decision. There is no simple mechanism to “unshare” the data now that LinkedIn has prevailed. This type of case could be common under the CMA proposal because the DMU’s orders will go into immediate effect.

There is potentially much redeeming power in the Digital Regulation Co-operation Forum as Mr. Coscelli described it, but I take a different lesson from this admirable attempt to coordinate across agencies: Perhaps it is time to look beyond antitrust to solve problems that are not based upon market power. As the DRCF highlights, there are multiple agencies with overlapping authority in the digital market space. ICO and Ofcom each have authority to take action against a firm that disseminates fake news or false advertisements. Mr. Coscelli says it would be too cumbersome to take down individual bad actors, but, if so, then the solution is to adopt broader consumer protection rules, not apply an ill-fitting set of competition law rules. For example, the U.K. could change its notice-and-takedown rules to subject platforms to strict liability if they host fake news, even without knowledge that they are doing so, or perhaps only if they are negligent in discharging their obligation to police against it.

Alternatively, the government could shrink the amount of time platforms have to take down information; France gives platforms only about an hour to remove harmful information. That sort of solution does not raise the same prospect of broadly chilling market activity, but still addresses one of the concerns Mr. Coscelli raises with digital markets.

In sum, although Mr. Coscelli is of course correct that competition authorities and governments worldwide are considering whether to adopt broad reforms to their competition laws, the case against broadening remains strong. Instead of relying upon the self-corrective potential of markets, which is admittedly sometimes slower than anyone would like, the CMA assumes markets need regulation until firms prove otherwise. Although clearly well-intentioned, the DMU proposal is in too many respects not fit for the task of protecting competition in digital markets; at worst, it will inhibit innovation in digital markets to the point of driving startups and other innovators out of the U.K.


[1] See Digital Markets Taskforce, A new pro-competition regime for digital markets, at 22, Dec. 2020, available at: https://assets.publishing.service.gov.uk/media/5fce7567e90e07562f98286c/Digital_Taskforce_-_Advice.pdf; Oliver Dowden & Kwasi Kwarteng, A New Pro-competition Regime for Digital Markets, at ¶ 27, July 2021, available at: https://www.gov.uk/government/consultations/a-new-pro-competition-regime-for-digital-markets.

[2] Sam Bowman, Sam Dumitriu & Aria Babu, Conflicting Missions: The Risks of the Digital Markets Unit to Competition and Innovation, Int’l Center for L. & Econ., June 2021, at 13.

The U.S. economy survived the COVID-19 pandemic and associated government-imposed business shutdowns with a variety of innovations that facilitated online shopping, contactless payments, and reduced use and handling of cash, a known vector of disease transmission.

While many of these innovations were new, they would have been impossible but for their reliance on an established and ubiquitous technological infrastructure: the global credit and debit-card payments system. Not only did consumers prefer plastic to cash, but the number of merchants going completely “cashless” also quadrupled in the first two months of the pandemic alone. From food delivery to online shopping, many small businesses were able to survive largely because of payment cards.

But there are costs to maintain the global payment-card network that processes billions of transactions daily, and those costs are higher for online payments, which present elevated fraud and security risks. As a result, while the boom in online shopping over this past year kept many retailers and service providers afloat, that hasn’t prevented them from grousing about their increased card-processing costs.

So it is that retailers are now lobbying Washington to impose new regulations on payment-card markets, with the goal of forcing down the fees they pay to accept debit and credit cards. Known as interchange fees, these are charged on each transaction by the banks that issue debit cards, as part of a complex process that connects banks, card networks, merchants, and consumers.

Fig. 1: A basic illustration of the 3- and 4-party payment-processing networks that underlie the use of credit cards.

Regulation II—the Federal Reserve rule implementing a provision of 2010’s Dodd–Frank Wall Street Reform and Consumer Protection Act commonly known as the “Durbin amendment,” after its primary sponsor, Senate Majority Whip Richard Durbin (D-Ill.)—placed price controls on interchange fees for debit cards issued by larger banks and credit unions (those with more than $10 billion in assets). It also required all debit-card issuers to offer multiple networks for “routing” and processing card transactions. Merchants now want to expand these routing provisions to credit cards, as well. The consequences for consumers, especially low-income consumers, would be disastrous.

The price controls imposed by the Durbin amendment have led to a 52% decrease in the average per-transaction interchange fee, resulting in billions of dollars in revenue losses for covered depositories. But banks and credit unions have passed on these losses to consumers in the form of fewer free checking accounts, higher fees, and higher monthly minimums required to avoid those fees.

One empirical study found that the share of covered banks offering free checking accounts fell from 60% to 20%, the average monthly checking-account fee increased from $4.34 to $7.44, and the minimum account balance required to avoid those fees increased by roughly 25%. Another study found that fees charged by covered institutions were 15% higher than they would have been absent the price regulation; those increases offset about 90% of the depositories’ lost revenue. Banks and credit unions also largely eliminated cash-back and other rewards on debit cards.
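As a rough sanity check on the magnitudes involved, the study figures quoted above imply a fee increase of roughly 71%; this percentage is my own arithmetic on the quoted numbers, not a figure reported by the study:

```python
# Arithmetic on the study figures quoted above (the ~71% figure is derived
# from the quoted fee levels, not taken from the study itself).
fee_before = 4.34  # average monthly checking-account fee before the rule ($)
fee_after = 7.44   # average monthly checking-account fee after the rule ($)

pct_increase = (fee_after - fee_before) / fee_before * 100
print(f"Average monthly fee rose by about {pct_increase:.0f}%")
```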

In fact, those who have been most harmed by the Durbin amendment’s consequences have been low-income consumers. Middle-class families hardly noticed the higher minimum balance requirements, or used their credit cards more often to offset the disappearance of debit-card rewards. Those with the smallest checking account balances, however, suffered the most from reduced availability of free banking and higher monthly maintenance and other fees. Priced out of the banking system, as many as 1 million people might have lost bank accounts in the wake of the Durbin amendment, forcing them to turn to such alternatives as prepaid cards, payday lenders, and pawn shops to make ends meet. Lacking bank accounts, these needy families weren’t even able to easily access their much-needed government stimulus funds at the onset of the pandemic without paying fees to alternative financial services providers.

In exchange for higher bank fees and reduced benefits, merchants promised lower prices at the pump and register. This has not been the case. Scholarship since implementation of the Federal Reserve’s rule shows that whatever benefits have been gained have gone to merchants, with little pass-through to consumers. For instance, one study found that covered banks saw their interchange revenue drop by 25%, but found little evidence of a corresponding drop in merchants’ prices.

Another study found that the benefits and costs to merchants have been unevenly distributed, with retailers who sell large-ticket items receiving a windfall, while those specializing in small-ticket items have often faced higher effective rates. Discounts previously offered to smaller merchants have been eliminated to offset reduced revenues from big-box stores. According to a 2014 Federal Reserve study, when acceptance fees increased, merchants hiked retail prices; but when fees were reduced, merchants pocketed the windfall.

Moreover, while the Durbin amendment’s proponents claimed it would only apply to big banks, the provisions that determine how transactions are routed on the payment networks apply to cards issued by credit unions and community banks, as well. As a result, smaller players have also seen average interchange fees beaten down, reducing this revenue stream even as they have been forced to cope with higher regulatory costs imposed by Dodd-Frank. Extending the Durbin amendment’s routing provisions to credit cards would further drive down interchange-fee revenue, creating the same negative spiral of higher consumer fees and reduced benefits that the original Durbin amendment spawned for debit cards.

More fundamentally, merchants believe it is their decision—not yours—as to which network will route your transaction. You may prefer Visa or Mastercard because of your confidence in their investments in security and anti-fraud detection, but later discover that the merchant has routed your transaction through a processor you’ve never heard of, simply because that network is cheaper for the merchant.

The resilience of the U.S. economy during this horrible viral contagion is due, in part, to the ubiquitous access of American families to credit and debit cards. That system has proved its mettle this past year, seamlessly adapting to the sudden shift to electronic payments. Yet, in the wake of this American success story, politicians and regulators, egged on by powerful special interests, instead want to meddle with this system just so big-box retailers can transfer their costs onto American families and small banks. As the economy and public health recover, Congress and regulators should resist the impulse to impose new financial harm on working-class families.

In a recent op-ed, Robert Bork Jr. laments the Biden administration’s drive to jettison the Consumer Welfare Standard that has formed nearly half a century of antitrust jurisprudence. The move can be seen in the near-revolution at the Federal Trade Commission, in the president’s executive order on competition enforcement, and in several of the major antitrust bills currently before Congress.

Bork notes the Competition and Antitrust Law Enforcement Reform Act, introduced by Sen. Amy Klobuchar (D-Minn.), would “outlaw any mergers or acquisitions for the more than 80 large U.S. companies valued over $100 billion.”

Bork is correct that the bill would cover more than 80 companies, but the number is likely to be far higher. While the Klobuchar bill does not explicitly outlaw such mergers, it shifts the burden of proof, under certain circumstances, to the merging parties, who must demonstrate that the benefits of the transaction outweigh the potential risks. Under current law, the burden is on the government to demonstrate that the potential costs outweigh the potential benefits.

One of the measure’s specific triggers for this burden-shifting is if the acquiring party has a market capitalization, assets, or annual net revenue of more than $100 billion and seeks a merger or acquisition valued at $50 million or more. Roughly 120 U.S. companies satisfy at least one of these conditions. The end of this post provides a list of publicly traded companies, according to Zacks’ stock screener, that would likely be subject to the shift in burden of proof.
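The mechanical nature of this trigger can be sketched as a simple predicate. The following is an illustrative sketch only: the thresholds are those described above, while the function name and example figures are hypothetical.

```python
# Illustrative sketch of the Klobuchar bill's burden-shifting trigger,
# as described in the text. Function name and example figures are hypothetical.

BIG_FIRM_THRESHOLD = 100_000_000_000  # $100 billion (market cap, assets, or annual net revenue)
DEAL_THRESHOLD = 50_000_000           # $50 million transaction value

def burden_shifts(market_cap: float, assets: float, net_revenue: float,
                  deal_value: float) -> bool:
    """True if the merging parties, rather than the government,
    would bear the burden of proof under this size-based trigger."""
    is_big_firm = max(market_cap, assets, net_revenue) >= BIG_FIRM_THRESHOLD
    return is_big_firm and deal_value >= DEAL_THRESHOLD

# A firm barely over one threshold is treated like one far over all three:
print(burden_shifts(101e9, 10e9, 5e9, deal_value=60e6))    # True
print(burden_shifts(99e9, 99e9, 99e9, deal_value=60e6))    # False
# Large acquirer, but the deal itself is below $50 million:
print(burden_shifts(150e9, 20e9, 30e9, deal_value=40e6))   # False
```

Note that nothing in the trigger looks at market share or competitive effects; firm size and deal size alone flip the burden.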

If the goal is to go after Big Tech, the Klobuchar bill hits the mark. All of the FAANG companies—Facebook, Amazon, Apple, Netflix, and Alphabet (Google’s parent company)—satisfy one or more of the criteria. So do Microsoft and PayPal.

But even some smaller tech firms will be subject to the shift in burden of proof. Zoom and Square have market caps above the bill’s $100 billion threshold, and Snap is hovering around $100 billion in market cap. Twitter and eBay, however, are well under any of the thresholds. Likewise, privately held Advance Publications, owner of Reddit, would also likely fall short of any of the triggers.

Snapchat has a little more than 300 million monthly active users. Twitter and Reddit each have about 330 million monthly active users. Nevertheless, under the Klobuchar bill, Snapchat is presumed to have more market power than either Twitter or Reddit, simply because the market assigns a higher valuation to Snap.

But this bill is about more than Big Tech. Tesla, which sold its first car only 13 years ago, is now considered big enough that it will face the same antitrust scrutiny as the Big 3 automakers. Walmart, Costco, and Kroger would be subject to the shifted burden of proof, while Safeway and Publix would escape such scrutiny. An acquisition by U.S.-based Nike would be put under the microscope, but a similar acquisition by Germany’s Adidas would not fall under the Klobuchar bill’s thresholds.

Tesla accounts for less than 2% of the vehicles sold in the United States. I have no idea what Walmart’s, Costco’s, Kroger’s, or Nike’s market shares are, or even what constitutes “the” market in which these companies compete. What we do know is that the U.S. Department of Justice and Federal Trade Commission excel at crafting market definitions so narrow that just about any company can be defined as dominant.

So much of the recent interest in antitrust has focused on Big Tech. But even the biggest of Big Tech firms operate in dynamic and competitive markets. None of my four children use Facebook or Twitter. My wife and I don’t use Snapchat. We all use Netflix, but we also use Hulu, Disney+, HBO Max, YouTube, and Amazon Prime Video. None of these services have a monopoly on our eyeballs, our attention, or our pocketbooks.

The antitrust bills currently working their way through Congress abandon the long-standing balancing of pro- versus anti-competitive effects of mergers in favor of a “big is bad” approach. While the Klobuchar bill appears to provide clear guidance on the thresholds triggering a shift in the burden of proof, the arbitrary nature of those thresholds will produce arbitrary application. If the bill passes, we will soon face cases in which two firms that differ only in market cap, assets, or sales are subject to very different antitrust scrutiny, resulting in regulatory chaos.

Publicly traded companies with more than $100 billion in market capitalization

3M | Danaher Corp. | PepsiCo
Abbott Laboratories | Deere & Co. | Pfizer
AbbVie | Eli Lilly and Co. | Philip Morris International
Adobe Inc. | ExxonMobil | Procter & Gamble
Advanced Micro Devices | Facebook Inc. | Qualcomm
Alphabet Inc. | General Electric Co. | Raytheon Technologies
Amazon | Goldman Sachs | Salesforce
American Express | Honeywell | ServiceNow
American Tower | IBM | Square Inc.
Amgen | Intel | Starbucks
Apple Inc. | Intuit | Target Corp.
Applied Materials | Intuitive Surgical | Tesla Inc.
AT&T | Johnson & Johnson | Texas Instruments
Bank of America | JPMorgan Chase | The Coca-Cola Co.
Berkshire Hathaway | Lockheed Martin | The Estée Lauder Cos.
BlackRock | Lowe’s | The Home Depot
Boeing | Mastercard | The Walt Disney Co.
Bristol Myers Squibb | McDonald’s | Thermo Fisher Scientific
Broadcom Inc. | Medtronic | T-Mobile US
Caterpillar Inc. | Merck & Co. | Union Pacific Corp.
Charles Schwab Corp. | Microsoft | United Parcel Service
Charter Communications | Morgan Stanley | UnitedHealth Group
Chevron Corp. | Netflix | Verizon Communications
Cisco Systems | NextEra Energy | Visa Inc.
Citigroup | Nike Inc. | Walmart
Comcast | Nvidia | Wells Fargo
Costco | Oracle Corp. | Zoom Video Communications
CVS Health | PayPal

Publicly traded companies with more than $100 billion in current assets

Ally Financial | Freddie Mac
American International Group | KeyBank
BNY Mellon | M&T Bank
Capital One | Northern Trust
Citizens Financial Group | PNC Financial Services
Fannie Mae | Regions Financial Corp.
Fifth Third Bank | State Street Corp.
First Republic Bank | Truist Financial
Ford Motor Co. | U.S. Bancorp

Publicly traded companies with more than $100 billion in sales

AmerisourceBergen | Dell Technologies
Anthem | General Motors
Cardinal Health | Kroger
Centene Corp. | McKesson Corp.
Cigna | Walgreens Boots Alliance

The Biden Administration’s July 9 Executive Order on Promoting Competition in the American Economy is very much a mixed bag—some positive aspects, but many negative ones.

It will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as allowing direct sales of hearing aids in drug stores—and helping to eliminate unnecessary occupational licensing restrictions, to name just two of several examples.

But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order appear to emphasize new regulation—such as Net Neutrality requirements that may reduce investment in broadband by internet service providers—and imposing new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers alike. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)

Antitrust-related proposals to challenge previously cleared mergers, and to impose new antitrust rulemaking, are likely to raise costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.

An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.

In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:

  1. Deals effectively with serious competitive problems; while at the same time
  2. Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.

Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.

Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges and may, in many cases, not succeed in court. This will create unnecessary business uncertainty, on top of the public and private resources wasted on litigation.