
The language of the federal antitrust laws is extremely general. Over more than a century, the federal courts have applied common-law techniques to construe this general language to provide guidance to the private sector as to what does or does not run afoul of the law. The interpretive process has been fraught with some uncertainty, as judicial approaches to antitrust analysis have changed several times over the past century. Nevertheless, until very recently, judges and enforcers had converged toward relying on a consumer welfare standard as the touchstone for antitrust evaluations (see my antitrust primer here, for an overview).

While imperfect and subject to potential error in application—a problem of legal interpretation generally—the consumer welfare principle has worked rather well as the focus both for antitrust-enforcement guidance and judicial decision-making. The general stability and predictability of antitrust under a consumer welfare framework has advanced the rule of law. It has given businesses sufficient information to plan transactions in a manner likely to avoid antitrust liability. It thereby has cabined uncertainty and increased the probability that private parties would enter welfare-enhancing commercial arrangements, to the benefit of society.

In a very thoughtful 2017 speech, then Acting Assistant Attorney General for Antitrust Andrew Finch commented on the importance of the rule of law to principled antitrust enforcement. He noted:

[H]ow do we administer the antitrust laws more rationally, accurately, expeditiously, and efficiently? … Law enforcement requires stability and continuity both in rules and in their application to specific cases.

Indeed, stability and continuity in enforcement are fundamental to the rule of law. The rule of law is about notice and reliance. When it is impossible to make reasonable predictions about how a law will be applied, or what the legal consequences of conduct will be, these important values are diminished. To call our antitrust regime a “rule of law” regime, we must enforce the law as written and as interpreted by the courts and advance change with careful thought.

The reliance fostered by stability and continuity has obvious economic benefits. Businesses invest, not only in innovation but in facilities, marketing, and personnel, and they do so based on the economic and legal environment they expect to face.

Of course, we want businesses to make those investments—and shape their overall conduct—in accordance with the antitrust laws. But to do so, they need to be able to rely on future application of those laws being largely consistent with their expectations. An antitrust enforcement regime with frequent changes is one that businesses cannot plan for, or one that they will plan for by avoiding certain kinds of investments.

That is certainly not to say there has not been positive change in the antitrust laws in the past, or that we would have been better off without those changes. U.S. antitrust law has been refined, and occasionally recalibrated, with the courts playing their appropriate interpretive role. And enforcers must always be on the watch for new or evolving threats to competition.  As markets evolve and products develop over time, our analysis adapts. But as those changes occur, we pursue reliability and consistency in application in the antitrust laws as much as possible.

Indeed, we have enjoyed remarkable continuity and consensus for many years. Antitrust law in the U.S. has not been a “paradox” for quite some time, but rather a stable and valuable law enforcement regime with appropriately widespread support.

Unfortunately, policy decisions taken by the new Federal Trade Commission (FTC) leadership in recent weeks have rejected antitrust continuity and consensus. They have injected substantial uncertainty into the application of competition-law enforcement by the FTC. This abrupt change in emphasis undermines the rule of law and threatens to reduce economic welfare.

As of now, the FTC’s departure from the rule of law has been notable in two areas:

  1. Its rejection of previous guidance on the agency’s “unfair methods of competition” authority, the FTC’s primary non-merger-related enforcement tool; and
  2. Its new advice rejecting time limits for the review of generally routine proposed mergers.

In addition, potential FTC rulemakings directed at “unfair methods of competition” would, if pursued, prove highly problematic.

Rescission of the Unfair Methods of Competition Policy Statement

The FTC on July 1 voted 3-2 to rescind the 2015 FTC Policy Statement Regarding Unfair Methods of Competition under Section 5 of the FTC Act (UMC Policy Statement).

The bipartisan UMC Policy Statement was originally supported by all three Democratic commissioners, including then-Chairwoman Edith Ramirez. The policy statement generally respected and promoted the rule of law by emphasizing that, in applying the facially broad “unfair methods of competition” (UMC) language, the FTC would be guided by the well-established principles of the antitrust rule of reason (including consideration of any associated cognizable efficiencies and business justifications) and the consumer welfare standard. The FTC also explained that it would not apply “standalone” Section 5 theories to conduct that would violate the Sherman or Clayton Acts.

In short, the UMC Policy Statement sent a strong signal that the commission would apply UMC in a manner fully consistent with accepted and well-understood antitrust policy principles. As in the past, the vast bulk of FTC Section 5 prosecutions would be brought against conduct that violated the core antitrust laws. Standalone Section 5 cases would be directed solely at those few practices that harmed consumer welfare and competition, but somehow fell into a narrow crack in the basic antitrust statutes (such as, perhaps, “invitations to collude” that lack plausible efficiency justifications). Although the UMC Statement did not answer all questions regarding what specific practices would justify standalone UMC challenges, it substantially limited business uncertainty by bringing Section 5 within the boundaries of settled antitrust doctrine.

The FTC’s announcement of the UMC Policy Statement rescission unhelpfully proclaimed that “the time is right for the Commission to rethink its approach and to recommit to its mandate to police unfair methods of competition even if they are outside the ambit of the Sherman or Clayton Acts.” As a dissenting statement by Commissioner Christine S. Wilson warned, consumers would be harmed by the commission’s decision to prioritize other unnamed interests. And as Commissioner Noah Joshua Phillips stressed in his dissent, the end result would be reduced guidance and greater uncertainty.

In sum, by suddenly leaving private parties in the dark as to how to conform themselves to Section 5’s UMC requirements, the FTC’s rescission offends the rule of law.

New Guidance to Parties Considering Mergers

For decades, parties proposing mergers that are subject to statutory Hart-Scott-Rodino (HSR) Act pre-merger notification requirements have operated under the understanding that:

  1. The FTC and U.S. Justice Department (DOJ) will routinely grant “early termination” of review (before the end of the initial 30-day statutory review period) to those transactions posing no plausible competitive threat; and
  2. An enforcement agency’s decision not to request more detailed documents (“second requests”) after an initial 30-day pre-merger review effectively serves as an antitrust “green light” for the proposed acquisition to proceed.

Those understandings, though not statutorily mandated, have significantly reduced antitrust uncertainty and related costs in the planning of routine merger transactions. The rule of law has been advanced through an effective assurance that business combinations that appear presumptively lawful will not be the target of future government legal harassment. This has advanced efficiency in government, as well; it is a cost-beneficial optimal use of resources for DOJ and the FTC to focus exclusively on those proposed mergers that present a substantial potential threat to consumer welfare.

Two recent FTC pronouncements (one in tandem with DOJ), however, have generated great uncertainty by disavowing (at least temporarily) those two welfare-promoting review policies. Joined by DOJ, the FTC on Feb. 4 announced that the agencies would temporarily suspend early terminations, citing an “unprecedented volume of filings” and a transition to new leadership. More than six months later, this “temporary” suspension remains in effect.

Citing “capacity constraints” and a “tidal wave of merger filings,” the FTC subsequently published an Aug. 3 blog post that effectively abrogated the 30-day “green lighting” of mergers not subject to a second request. It announced that it was sending “warning letters” to firms reminding them that FTC investigations remain open after the initial 30-day period, and that “[c]ompanies that choose to proceed with transactions that have not been fully investigated are doing so at their own risk.”

The FTC’s actions interject unwarranted uncertainty into merger planning and undermine the rule of law. Withholding early termination from transactions that routinely would have received it not only imposes additional costs on businesses; it also hints that some transactions might be subject to novel theories of liability that fall outside the antitrust consensus.

Perhaps more significantly, as three prominent antitrust practitioners point out, the FTC’s warning letters state that:

[T]he FTC may challenge deals that “threaten to reduce competition and harm consumers, workers, and honest businesses.” Adding in harm to both “workers and honest businesses” implies that the FTC may be considering more ways that transactions can have an adverse impact other than just harm to competition and consumers [citation omitted].

Because consensus antitrust merger analysis centers on consumer welfare, not the protection of labor or business interests, any suggestion that the FTC may be extending its reach to these new areas is inconsistent with established legal principles and generates new business-planning risks.

More generally, the Aug. 3 FTC “blog post could be viewed as an attempt to modify the temporal framework of the HSR Act”—in effect, an effort to displace an implicit statutory understanding in favor of an agency diktat, contrary to the rule of law. Commissioner Wilson sees the blog post as a means to keep investigations open indefinitely and, thus, an attack on the decades-old HSR framework for handling most merger reviews in an expeditious fashion (see here). Commissioner Phillips is concerned that it will chill legal M&A transactions across the board, which is particularly unfortunate when there is no reason to conclude that particular transactions are illegal (see here).

Finally, the historical record raises serious questions about the “resource constraint” justification for the FTC’s new merger review policies:

Through the end of July 2021, more than 2,900 transactions were reported to the FTC. It is not clear, however, whether these record-breaking HSR filing numbers have led (or will lead) to more deals being investigated. Historically, only about 13 percent of all deals reported are investigated in some fashion, and roughly 3 percent of all deals reported receive a more thorough, substantive review through the issuance of a Second Request. Even if more deals are being reported, for the majority of transactions, the HSR process is purely administrative, raising no antitrust concerns, and, theoretically, uses few, if any, agency resources. [Citations omitted.]
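As a rough illustration of why the resource-constraint rationale is questionable, the short sketch below simply applies the historical investigation rates quoted above to the 2021 filing volume; the resulting figures are back-of-the-envelope approximations derived from this post, not agency data.

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
reported_deals = 2_900                   # HSR filings through the end of July 2021
investigated = reported_deals * 0.13     # ~13% historically investigated in some fashion
second_requests = reported_deals * 0.03  # ~3% historically receive a Second Request

print(round(investigated), round(second_requests))  # roughly 377 and 87 deals
```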

Proposed FTC Competition Rulemakings

The new FTC leadership is strongly considering competition rulemakings. As I explained in a recent Truth on the Market post, such rulemakings would fail a cost-benefit test. They raise serious legal risks for the commission and could impose wasted resource costs on the FTC and on private parties. More significantly, they would raise two very serious economic policy concerns:

First, competition rules would generate higher error costs than adjudications. Adjudications cabin error costs by allowing for case-specific analysis of likely competitive harms and procompetitive benefits. In contrast, competition rules inherently would be overbroad and would suffer from a very high rate of false positives. By characterizing certain practices as inherently anticompetitive without allowing for consideration of case-specific facts bearing on actual competitive effects, findings of rule violations inevitably would condemn some (perhaps many) efficient arrangements.

Second, competition rules would undermine the rule of law and thereby reduce economic welfare. FTC-only competition rules could lead to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency. Also, economic efficiency gains could be lost due to the chilling of aggressive efficiency-seeking business arrangements in those sectors subject to rules. [Emphasis added.]

In short, common law antitrust adjudication, focused on the consumer welfare standard, has done a good job of promoting a vibrant competitive economy in an efficient fashion. FTC competition rulemaking would not.

Conclusion

Recent FTC actions have undermined consensus antitrust-enforcement standards and have departed from established merger-review procedures with respect to seemingly uncontroversial consolidations. Those decisions have imposed costly uncertainty on the business sector and are thereby likely to disincentivize efficiency-seeking arrangements. What’s more, by implicitly rejecting consensus antitrust principles, they denigrate the primacy of the rule of law in antitrust enforcement. The FTC’s pursuit of competition rulemaking would further damage the rule of law by imposing arbitrary strictures that ignore matter-specific considerations bearing on the justifications for particular business decisions.

Fortunately, these are early days in the Biden administration. The problematic initial policy decisions delineated in this comment could be reversed based on further reflection and deliberation within the commission. Chairwoman Lina Khan and her fellow Democratic commissioners would benefit by consulting more closely with Commissioners Wilson and Phillips to reach agreement on substantive and procedural enforcement policies that are better tailored to promote consumer welfare and enhance vibrant competition. Such policies would benefit the U.S. economy in a manner consistent with the rule of law.

In a recent op-ed, Robert Bork Jr. laments the Biden administration’s drive to jettison the consumer welfare standard that has informed nearly half a century of antitrust jurisprudence. The move can be seen in the near-revolution at the Federal Trade Commission, in the president’s executive order on competition enforcement, and in several of the major antitrust bills currently before Congress.

Bork notes the Competition and Antitrust Law Enforcement Reform Act, introduced by Sen. Amy Klobuchar (D-Minn.), would “outlaw any mergers or acquisitions for the more than 80 large U.S. companies valued over $100 billion.”

Bork is correct that more than 80 companies would be covered, but the number is likely to be much higher. While the Klobuchar bill does not explicitly outlaw such mergers, under certain circumstances it shifts the burden of proof to the merging parties, who must demonstrate that the benefits of the transaction outweigh the potential risks. Under current law, the burden is on the government to demonstrate that the potential costs outweigh the potential benefits.

One of the measure’s specific triggers for this burden-shifting is if the acquiring party has a market capitalization, assets, or annual net revenue of more than $100 billion and seeks a merger or acquisition valued at $50 million or more. About 120 or more U.S. companies satisfy at least one of these conditions. The end of this post provides a list of publicly traded companies, according to Zacks’ stock screener, that would likely be subject to the shift in burden of proof.
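To make the trigger concrete, here is a minimal sketch of the burden-shifting test as summarized above, assuming the thresholds described in this post (more than $100 billion in market capitalization, assets, or annual net revenue for the acquiring party, and a deal valued at $50 million or more); the function name and structure are purely illustrative and are not taken from the bill itself.

```python
# Illustrative sketch of the Klobuchar bill's burden-shifting trigger as
# summarized above. Thresholds come from this post; names are hypothetical.

ACQUIRER_SIZE_THRESHOLD = 100_000_000_000  # more than $100 billion
DEAL_VALUE_THRESHOLD = 50_000_000          # $50 million or more

def burden_shifts_to_parties(market_cap: float, assets: float,
                             net_revenue: float, deal_value: float) -> bool:
    """Return True if the merging parties would bear the burden of proof."""
    acquirer_is_large = max(market_cap, assets, net_revenue) > ACQUIRER_SIZE_THRESHOLD
    return acquirer_is_large and deal_value >= DEAL_VALUE_THRESHOLD

# Example: a $110 billion acquirer buying a $60 million target would be covered.
print(burden_shifts_to_parties(110e9, 20e9, 15e9, 60e6))  # True
```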

If the goal is to go after Big Tech, the Klobuchar bill hits the mark. All of the FAANG companies—Facebook, Amazon, Apple, Netflix, and Alphabet (formerly known as Google)—satisfy one or more of the criteria. So do Microsoft and PayPal.

But even some smaller tech firms will be subject to the shift in burden of proof. Zoom and Square have market caps that would trigger the bill’s burden-shifting provision, and Snap is hovering around $100 billion in market cap. Twitter and eBay, however, are well under any of the thresholds. Likewise, privately owned Advance Publications, owner of Reddit, would likely fall short of any of the triggers.

Snapchat has a little more than 300 million monthly active users. Twitter and Reddit each have about 330 million monthly active users. Nevertheless, under the Klobuchar bill, Snapchat is presumed to have more market power than either Twitter or Reddit, simply because the market assigns a higher valuation to Snap.

But this bill is about more than Big Tech. Tesla, which sold its first car only 13 years ago, is now considered big enough that it will face the same antitrust scrutiny as the Big 3 automakers. Walmart, Costco, and Kroger would be subject to the shifted burden of proof, while Safeway and Publix would escape such scrutiny. An acquisition by U.S.-based Nike would be put under the microscope, but a similar acquisition by Germany’s Adidas would not fall under the Klobuchar bill’s thresholds.

Tesla accounts for less than 2% of the vehicles sold in the United States. I have no idea what Walmart, Costco, Kroger, or Nike’s market share is, or even what comprises “the” market these companies compete in. What we do know is that the U.S. Department of Justice and Federal Trade Commission excel at narrowly crafting market definitions so that just about any company can be defined as dominant.

So much of the recent interest in antitrust has focused on Big Tech. But even the biggest of Big Tech firms operate in dynamic and competitive markets. None of my four children use Facebook or Twitter. My wife and I don’t use Snapchat. We all use Netflix, but we also use Hulu, Disney+, HBO Max, YouTube, and Amazon Prime Video. None of these services have a monopoly on our eyeballs, our attention, or our pocketbooks.

The antitrust bills currently working their way through Congress abandon the long-standing balancing of pro- versus anti-competitive effects of mergers in favor of a “big is bad” approach. While the Klobuchar bill appears to provide clear guidance on the thresholds triggering a shift in the burden of proof, the arbitrary nature of those thresholds will result in arbitrary application of the burden of proof. If the bill passes, we will soon be faced with a case in which two firms that differ only in market cap, assets, or sales are subject to very different antitrust scrutiny, resulting in regulatory chaos.

Publicly traded companies with more than $100 billion in market capitalization

3M, Abbott Laboratories, AbbVie, Adobe Inc., Advanced Micro Devices, Alphabet Inc., Amazon, American Express, American Tower, Amgen, Apple Inc., Applied Materials, AT&T, Bank of America, Berkshire Hathaway, BlackRock, Boeing, Bristol Myers Squibb, Broadcom Inc., Caterpillar Inc., Charles Schwab Corp., Charter Communications, Chevron Corp., Cisco Systems, Citigroup, Comcast, Costco, CVS Health, Danaher Corp., Deere & Co., Eli Lilly and Co., ExxonMobil, Facebook Inc., General Electric Co., Goldman Sachs, Honeywell, IBM, Intel, Intuit, Intuitive Surgical, Johnson & Johnson, JPMorgan Chase, Lockheed Martin, Lowe’s, Mastercard, McDonald’s, Medtronic, Merck & Co., Microsoft, Morgan Stanley, Netflix, NextEra Energy, Nike Inc., Nvidia, Oracle Corp., PayPal, PepsiCo, Pfizer, Philip Morris International, Procter & Gamble, Qualcomm, Raytheon Technologies, Salesforce, ServiceNow, Square Inc., Starbucks, Target Corp., Tesla Inc., Texas Instruments, The Coca-Cola Co., The Estée Lauder Cos., The Home Depot, The Walt Disney Co., Thermo Fisher Scientific, T-Mobile US, Union Pacific Corp., United Parcel Service, UnitedHealth Group, Verizon Communications, Visa Inc., Walmart, Wells Fargo, Zoom Video Communications

Publicly traded companies with more than $100 billion in current assets

Ally Financial, American International Group, BNY Mellon, Capital One, Citizens Financial Group, Fannie Mae, Fifth Third Bank, First Republic Bank, Ford Motor Co., Freddie Mac, KeyBank, M&T Bank, Northern Trust, PNC Financial Services, Regions Financial Corp., State Street Corp., Truist Financial, U.S. Bancorp

Publicly traded companies with more than $100 billion in sales

AmerisourceBergen, Anthem, Cardinal Health, Centene Corp., Cigna, Dell Technologies, General Motors, Kroger, McKesson Corp., Walgreens Boots Alliance

The Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.

Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services. 

All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses, and which is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising, since they would mean giving broad, discretionary powers to antitrust authorities that are controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also be unpopular with consumers if, for example, they mean that popular features like integrating Maps into relevant Google Search results become prohibited.

The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate creates an alternative that, while still highly interventionist, does not create ex ante regulation of the Internet the same way these proposals do.

In general, the bills are misguided for three main reasons. 

One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars). 

Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.

Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business. 

For more, our “Joint Submission of Antitrust Economists, Legal Scholars, and Practitioners” set out why many of the House Democrats’ assumptions about the state of the economy and antitrust enforcement were mistaken. And my post, “Buck’s “Third Way”: A Different Road to the Same Destination”, argued that House Republicans like Ken Buck were misguided in believing they could support some of the proposals and avoid the massive regulatory oversight that they said they rejected.

Platform Anti-Monopoly Act 

The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms that have at least 500,000 U.S.-based users, have a market capitalization of more than $600 billion, and are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.

Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including: 

  • Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
  • Conditioning access or status on purchasing other products or services from the platform; 
  • Using user data to support the platform’s own products in ways not extended to competitors; 
  • Restricting the platform’s commercial users from using or accessing data generated on the platform from their own customers;
  • Restricting platform users from uninstalling software pre-installed on the platform;
  • Restricting platform users from providing links to facilitate business off of the platform;
  • Preferencing the platform’s own products or services in search results or rankings;
  • Interfering with how a dependent business prices its products; 
  • Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
  • Retaliating against users who raise concerns with law enforcement about potential violations of the act.

On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to manually download each app themselves, if indeed Apple was allowed to include the App Store itself pre-installed on the iPhone, given that this competes with other would-be app stores.

Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results in the Search page increase the competition that Amazon faces, because it presents consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face. 

It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system. 

This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.

For more, check out “Against the Vertical Discrimination Presumption” by Geoffrey Manne and Dirk Auer’s piece “On the Origin of Platforms: An Evolutionary Perspective”.

Ending Platform Monopolies Act 

Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).

Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.

This targets conduct similar to that addressed by the previous bill, but involves the forced separation of different lines of business. It also appears to go even further, implying that companies like Google could not even develop services like Google Maps or Chrome because their existence would create such “substantial incentives” to self-preference them over the products of their competitors. 

Apart from the straightforward loss of innovation and product developments this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell its video-streaming services that compete with Netflix and YouTube.

For more, check out Geoffrey Manne’s written testimony to the House Antitrust Subcommittee and “Platform Self-Preferencing Can Be Good for Consumers and Even Competitors” by Geoffrey and me. 

Platform Competition and Opportunity Act

Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position. 

The two main ways that founders and investors can make a return on a successful startup are to float the company at IPO or to be acquired by another business. The latter of these, acquisitions, is extremely important. Between 2008 and 2019, 90 percent of U.S. startup exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.

So this proposal would probably reduce investment in U.S. startups, since it makes it more difficult for them to be acquired, and would therefore reduce innovation. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple to build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce competition faced by old industries, by preventing tech companies from buying firms that enable them to move into new markets—like Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.

For more, check out Dirk Auer’s piece “Facebook and the Pros and Cons of Ex Post Merger Reviews” and my piece “Cracking down on mergers would leave us all worse off”. 

ACCESS Act

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, sponsored by Rep. Mary Gay Scanlon (D-Pa.), would establish data portability and interoperability requirements for platforms. 

Under terms of the legislation, covered platforms would be required to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The law directs the FTC to establish technical committees to promulgate the standards for portability and interoperability. 

Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing—sometimes, substantially so—the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or activity on a social media page that one user considers to be “their” private data, but that “belongs” to another user under the terms of use, being exported and publicized.

It can also make digital services more buggy and unreliable, by requiring that they are built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is that of Windows vs iOS; Windows is far more interoperable with third-party software than iOS is, but tends to be less stable as a result, and users often prefer the closed, stable system. 

Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator. 

In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.

For more, check out Gus Hurwitz’s piece “Portable Social Media Aren’t Like Portable Phone Numbers” and my piece “Why Data Interoperability Is Harder Than It Looks: The Open Banking Experience”.

Merger Filing Fee Modernization Act

This bill, which mirrors language in the Endless Frontier Act recently passed by the U.S. Senate, would significantly raise filing fees for the largest mergers. Rather than the current cap of $280,000 for mergers valued at more than $500 million, the bill—sponsored by Rep. Joe Neguse (D-Colo.)—would assess fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued at between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.

Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million. 
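For readers who want the proposed schedule in one place, here is a minimal sketch that maps a deal’s value to the filing fee described in the two paragraphs above; how the bill treats values exactly at the bracket boundaries is an assumption made purely for illustration.

```python
# Illustrative sketch of the proposed HSR filing-fee schedule described above.
# Boundary handling is an assumption; consult the bill text for the exact rules.

def proposed_filing_fee(deal_value: float) -> int:
    """Return the proposed filing fee in dollars for a given deal value in dollars."""
    if deal_value > 5_000_000_000:        # more than $5 billion
        return 2_250_000
    if deal_value > 2_000_000_000:        # $2 billion to $5 billion
        return 800_000
    if deal_value > 1_000_000_000:        # $1 billion to $2 billion
        return 400_000
    if deal_value > 500_000_000:          # $500 million to $1 billion
        return 250_000
    if deal_value > 161_500_000:          # $161.5 million to $500 million
        return 100_000
    return 30_000                         # less than $161.5 million

print(proposed_filing_fee(6_000_000_000))   # 2250000
print(proposed_filing_fee(300_000_000))     # 100000
```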

In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether this is actually good or not depends on how the money is spent at those agencies.

It’s hard to object if the money goes toward deepening the agencies’ capacities and knowledge, by hiring and retaining higher-quality staff with salaries that are more competitive with those offered by the private sector, and toward greater efforts to study the effects of the antitrust laws and past cases on the economy. If it instead goes toward broadening the agencies’ activities, enabling them to pursue a more aggressive enforcement agenda and to implement whatever of the above proposals make it into law, then it could be very harmful.

For more, check out my post “Buck’s “Third Way”: A Different Road to the Same Destination” and Thom Lambert’s post “Bad Blood at the FTC”.

Overview

Virtually all countries in the world have adopted competition laws over the last three decades. In a recent Mercatus Center research paper, I argue that the spread of these laws has benefits and risks. The abstract of my paper states:

The United States stood virtually alone when it enacted its first antitrust statute in 1890. Today, almost all nations have adopted competition laws (the term used in most other nations), and US antitrust agencies interact with foreign enforcers on a daily basis. This globalization of antitrust is becoming increasingly important to the economic welfare of many nations, because major businesses (in particular, massive digital platforms like Google and Facebook) face growing antitrust scrutiny by multiple enforcement regimes worldwide. As such, the United States should take the lead in encouraging adoption of antitrust policies, here and abroad, that are conducive to economic growth and innovation. Antitrust policies centered on promoting consumer welfare would be best suited to advancing these desirable aims. Thus, the United States should oppose recent efforts (here and abroad) to turn antitrust into a regulatory system that seeks to advance many objectives beyond consumer welfare. American antitrust enforcers should also work with like-minded agencies—and within multilateral organizations such as the International Competition Network and the Organisation for Economic Cooperation and Development—to promote procedural fairness and the rule of law in antitrust enforcement.

A brief summary of my paper follows.

Discussion

Widespread calls for “reform” of the American antitrust laws are based on the false premises that (1) U.S. economic concentration has increased excessively and competition has diminished in recent decades; and (2) U.S. antitrust enforcers have failed to effectively enforce the antitrust laws (the consumer welfare standard is sometimes cited as the culprit to blame for “ineffective” antitrust enforcement). In fact, sound economic scholarship, some of it cited in chapter 6 of the 2020 Economic Report of the President, debunks these claims. In reality, modern U.S. antitrust enforcement under the economics-based consumer welfare standard (despite being imperfect and subject to error costs) has done a good job overall of promoting competitive and efficient markets.

The adoption of competition laws by foreign nations was promoted by the U.S. Government. The development of European competition law in the 1950s, and its incorporation into treaties that laid the foundation for the European Union (EU), was particularly significant. The EU administrative approach to antitrust, based on civil law (as compared to the U.S. common-law approach), has greatly influenced the contours of most new competition laws. The EU, like the U.S., focuses on anticompetitive joint conduct, single-firm conduct, and mergers. EU enforcement (carried out through the European Commission’s Directorate General for Competition) initially relied more heavily on formal agency guidance than American antitrust law does, but over the last 20 years it has begun to incorporate an economic effects-based, consumer welfare-centric approach. Nevertheless, EU enforcers still pay greater attention to the welfare of competitors than do their American counterparts.

In recent years, the EU prosecutions of digital platforms have begun to adopt a “precautionary antitrust” perspective, which seeks to prevent potential monopoly abuses in their incipiency by sanctioning business conduct without showing that it is causing any actual or likely consumer harm. What’s more, the EU’s recently adopted “Digital Markets Act” for the first time imposes ex ante competition regulation of platforms. These developments reflect a move away from a consumer welfare approach. On the plus side, the EU (unlike the U.S.) subjects state-owned or controlled monopolies to liability for anticompetitive conduct and forbids anticompetitive government subsidies that seriously distort competition (“state aids”).

Developing and former communist-bloc countries rapidly enacted and implemented competition laws over the last three decades. Many newly minted competition agencies suffer from poor institutional capacity. The U.S. Government and the EU have worked to enhance the quality and consistency of competition enforcement in these jurisdictions by providing technical assistance and training.

Various institutions support efforts to improve competition law enforcement and develop support for a “competition culture.” The International Competition Network (ICN), established in 2001, is a “virtual network” comprised of almost all competition agencies. The ICN focuses on discrete projects aimed at procedural and substantive competition law convergence through the development of consensual, nonbinding “best practices” recommendations and reports. It also provides a significant role for nongovernmental advisers from the business, legal, economic, consumer, and academic communities, as well as for experts from other international organizations. ICN member agency staff are encouraged to communicate with each other about the fundamentals of investigations and evaluations and to use ICN-generated documents and podcasts to support training. The application of economic analysis to case-specific facts has been highlighted in ICN work product. The Organization for Economic Cooperation and Development (OECD) and the World Bank (both of which carry out economics-based competition policy research) have joined with the ICN in providing national competition agencies (both new and well established) with the means to advocate effectively for procompetitive, economically beneficial government policies. ICN and OECD “toolkits” provide strategies for identifying and working to dislodge (or not enact) anticompetitive laws and regulations that harm the economy.

While a fair degree of convergence has been realized, substantive uniformity among competition law regimes has not been achieved. This is not surprising, given differences among jurisdictions in economic development, political organization, economic philosophy, history, and cultural heritage—all of which may help generate a multiplicity of policy goals. In addition to consumer welfare, different jurisdictions’ competition laws seek to advance support for small and medium sized businesses, fairness and equality, public interest factors, and empowerment of historically disadvantaged persons, among other outcomes. These many goals may not take center stage in the evaluation of most proposed mergers or restrictive business arrangements, but they may affect the handling of particular matters that raise national sensitivities tied to the goals.

The spread of competition law worldwide has generated various tangible benefits. These include consensus support for combating hard core welfare-reducing cartels, fruitful international cooperation among officials dedicated to a pro-competition mission, and support for competition advocacy aimed at dismantling harmful government barriers to competition.

There are, however, six other factors that raise questions regarding whether competition law globalization has been cost-beneficial overall: (1) effective welfare-enhancing antitrust enforcement is stymied in jurisdictions where the rule of law is weak and private property is poorly protected; (2) high enforcement error costs (particularly in jurisdictions that consider factors other than consumer welfare) may undermine the procompetitive features of antitrust enforcement efforts; (3) enforcement demands by multiple competition authorities substantially increase the costs imposed on firms that are engaging in multinational transactions; (4) differences among national competition law rules create complications for national agencies as they seek to have their laws vindicated while maintaining good cooperative relationships with peer enforcers; (5) anticompetitive rent-seeking by less efficient rivals may generate counterproductive prosecutions of successful companies, thereby disincentivizing welfare-inducing business behavior; and (6) recent developments around the world suggest that antitrust policy directed at large digital platforms (and perhaps other dominant companies as well) may be morphing into welfare-inimical regulation. These factors are discussed at greater length in my paper.

One cannot readily quantify the positive and negative welfare effects of the consequences of competition law globalization. Accordingly, one cannot state with any degree of confidence whether globalization has been “good” or “bad” overall in terms of economic welfare.

Conclusion

The extent to which globalized competition law will be a boon to consumers and the global economy will depend entirely on the soundness of public policy decision-making.  The U.S. Government should take the lead in advancing a consumer welfare-centric competition policy at home and abroad. It should work with multilateral institutions and engage in bilateral and regional cooperation to support the rule of law, due process, and antitrust enforcement centered on the consumer welfare standard.

Despite calls from some NGOs to mandate radical interoperability, the EU’s draft Digital Markets Act (DMA) adopted a more measured approach, requiring full interoperability only in “ancillary” services like identification or payment systems. There remains the possibility, however, that the DMA proposal will be amended to include stronger interoperability mandates, or that such amendments will be introduced in the Digital Services Act. Without the right checks and balances, this could pose grave threats to Europeans’ privacy and security.

At the most basic level, interoperability means a capacity to exchange information between computer systems. Email is an example of an interoperable standard that most of us use today. Expanded interoperability could offer promising solutions to some of today’s difficult problems. For example, it might allow third-party developers to offer different “flavors” of social media news feed, with varying approaches to content ranking and moderation (see Daphne Keller, Mike Masnick, and Stephen Wolfram for more on that idea). After all, in a pluralistic society, someone will always be unhappy with what some others consider appropriate content. Why not let smaller groups decide what they want to see? 

But to achieve that goal using currently available technology, third-party developers would have to be able to access all of a platform’s content that is potentially available to a user. This would include not just content produced by users who explicitly agree for their data to be shared with third parties, but also content—e.g., posts, comments, likes—created by others who may have strong objections to such sharing. It doesn’t require much imagination to see how, without adequate safeguards, mandating this kind of information exchange would inevitably result in something akin to the 2018 Cambridge Analytica data scandal.

It is telling that supporters of this kind of interoperability use services like email as their model examples. Email (more precisely, the SMTP protocol) originally was designed in a notoriously insecure way. It is a perfect example of the opposite of privacy by design. A good analogy for the levels of privacy and security provided by email, as originally conceived, is that of a postcard message sent without an envelope that passes through many hands before reaching the addressee. Even today, email continues to be a source of security concerns due to its prioritization of interoperability.
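To make the “postcard” analogy concrete, the minimal sketch below sends a message over plain, unencrypted SMTP; the server name and addresses are placeholders, and modern deployments typically layer STARTTLS/TLS and authentication on top of the original protocol to mitigate exactly this exposure.

```python
# Illustrative sketch: a message sent over plain SMTP, as originally designed,
# crosses the network in cleartext, readable by every relay along the way.
# The host and addresses below are placeholders, not real services.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Hello"
msg.set_content("Anyone relaying or intercepting this message can read it.")

# Plain SMTP on port 25, with no TLS negotiated: headers and body are sent
# unencrypted, which is the "postcard without an envelope" problem.
with smtplib.SMTP("mail.example.com", 25) as server:
    server.send_message(msg)
```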

It also is telling that supporters of interoperability tend to point to what are small-scale platforms (e.g., Mastodon) or protocols with unacceptably poor usability for most of today’s Internet users (e.g., Usenet). When proposing solutions to potential privacy problems—e.g., that users will adequately monitor how various platforms use their data—they often assume unrealistic levels of user interest or technical acumen.

Interoperability in the DMA

The current draft of the DMA contains several provisions that construe interoperability broadly, though they apply only to “gatekeepers”—i.e., the largest online platforms:

  1. Mandated interoperability of “ancillary services” (Art 6(1)(f)); 
  2. Real-time data portability (Art 6(1)(h)); and
  3. Business-user access to their own and end-user data (Art 6(1)(i)). 

The first provision (Art 6(1)(f)) is meant to force gatekeepers to allow, e.g., third-party payment or identification services—for example, to allow people to create social media accounts without providing an email address, which is possible using services like “Sign in with Apple.” This kind of interoperability doesn’t pose as big a privacy risk as mandated interoperability of “core” services (e.g., messaging on a platform like WhatsApp or Signal), partially due to the more limited scope of data that needs to be exchanged.

However, even here, there may be some risks. For example, users may choose poorly secured identification services and thus become victims of attacks. Therefore, it is important that gatekeepers not be prevented from protecting their users adequately. Of course, there are likely trade-offs between those protections and the interoperability that some want. Proponents of stronger interoperability want this provision amended to cover all “core” services, not just “ancillary” ones, which would constitute precisely the kind of radical interoperability that cannot be safely mandated today.

The other two provisions do not mandate full two-way interoperability, where a third party could both read data from a service like Facebook and modify content on that service. Instead, they provide for one-way “continuous and real-time” access to data—read-only.

The second provision (Art 6(1)(h)) mandates that gatekeepers give users effective “continuous and real-time” access to data “generated through” their activity. It’s not entirely clear whether this provision would be satisfied by, e.g., Facebook’s Graph API, but it likely would not be satisfied simply by being able to download one’s Facebook data, as that is not “continuous and real-time.”

Importantly, the proposed provision explicitly references the General Data Protection Regulation (GDPR), which suggests that—at least as regards personal data—the scope of this portability mandate is not meant to be broader than that from Article 20 GDPR. Given the GDPR reference and the qualification that it applies to data “generated through” the user’s activity, this mandate would not include data generated by other users—which is welcome, but likely will not satisfy the proponents of stronger interoperability.

The third provision (Art 6(1)(i)) mandates only “continuous and real-time” data access and only as regards data “provided for or generated in the context of the use of the relevant core platform services” by business users and by “the end users engaging with the products or services provided by those business users.” This provision is also explicitly qualified with respect to personal data, which are to be shared after GDPR-like user consent and “only where directly connected with the use effectuated by the end user in respect of” the business user’s service. The provision should thus not be a tool for a new Cambridge Analytica to siphon data on users who interact with some Facebook page or app and their unwitting contacts. However, for the same reasons, it will also not be sufficient for the kinds of uses that proponents of stronger interoperability envisage.

Why can’t stronger interoperability be safely mandated today?

Let’s imagine that Art 6(1)(f) is amended to cover all “core” services, so gatekeepers like Facebook end up with a legal duty to allow third parties to read data from and write data to Facebook via APIs. This would go beyond what is currently possible using Facebook’s Graph API, and would lack the current safety valve of Facebook cutting off access because of the legal duty to deal created by the interoperability mandate. As Cory Doctorow and Bennett Cyphers note, there are at least three categories of privacy and security risks in this situation:

  1. Data sharing and mining via new APIs;
  2. New opportunities for phishing and sock puppetry in a federated ecosystem; and
  3. More friction for platforms trying to maintain a secure system.

Unlike some other proponents of strong interoperability, Doctorow and Cyphers are open about the scale of the risk: “[w]ithout new legal safeguards to protect the privacy of user data, this kind of interoperable ecosystem could make Cambridge Analytica-style attacks more common.”

There are bound to be attempts to misuse interoperability through clearly criminal activity. But there also are likely to be more legally ambiguous attempts that are harder to proscribe ex ante. Proposals for strong interoperability mandates need to address this kind of problem.

So, what could be done to make strong interoperability reasonably safe? Doctorow and Cyphers argue that there is a “need for better privacy law,” but don’t say whether they think the GDPR’s rules fit the bill. This may be a matter of reasonable disagreement.

What isn’t up for serious debate is that the current framework and practice of privacy enforcement offers little confidence that misuses of strong interoperability would be detected and prosecuted, much less that they would be prevented (see here and here on GDPR enforcement). This is especially true for smaller and “judgment-proof” rule-breakers, including those from outside the European Union. Addressing the problems of privacy law enforcement is a herculean task, in and of itself.

The day may come when radical interoperability will, thanks to advances in technology and/or privacy enforcement, become acceptably safe. But it would be utterly irresponsible to mandate radical interoperability in the DMA and/or DSA, and simply hope the obvious privacy and security problems will somehow be solved before the law takes force. Instituting such a mandate would likely discredit the very idea of interoperability.

The European Commission this week published its proposed Artificial Intelligence Regulation, setting out new rules for  “artificial intelligence systems” used within the European Union. The regulation—the commission’s attempt to limit pernicious uses of AI without discouraging its adoption in beneficial cases—casts a wide net in defining AI to include essentially any software developed using machine learning. As a result, a host of software may fall under the regulation’s purview.

The regulation categorizes AIs by the kind and extent of risk they may pose to health, safety, and fundamental rights, with the overarching goal to:

  • Prohibit “unacceptable risk” AIs outright;
  • Place strict restrictions on “high-risk” AIs;
  • Place minor restrictions on “limited-risk” AIs;
  • Create voluntary “codes of conduct” for “minimal-risk” AIs;
  • Establish a regulatory sandbox regime for AI systems; 
  • Set up a European Artificial Intelligence Board to oversee regulatory implementation; and
  • Set fines for noncompliance at up to 30 million euros, or 6% of worldwide turnover, whichever is greater.

AIs That Are Prohibited Outright

The regulation prohibits AIs that are used to exploit people’s vulnerabilities or that use subliminal techniques to distort behavior in a way likely to cause physical or psychological harm. Also prohibited are AIs used by public authorities to give people a trustworthiness score, if that score would then be used to treat a person unfavorably in a separate context or in a way that is disproportionate. The regulation also bans the use of “real-time” remote biometric identification (such as facial-recognition technology) in public spaces by law enforcement, with exceptions for specific and limited uses, such as searching for a missing child.

The first prohibition raises some interesting questions. The regulation says that an “exploited vulnerability” must relate to age or disability. In its announcement, the commission says this is targeted toward AIs such as toys that might induce a child to engage in dangerous behavior.

The ban on AIs using “subliminal techniques” is more opaque. The regulation doesn’t give a clear definition of what constitutes a “subliminal technique,” other than that it must be something “beyond a person’s consciousness.” Would this include TikTok’s algorithm, which imperceptibly adjusts the videos shown to the user to keep them engaged on the platform? The notion that this might cause harm is not fanciful, but it’s unclear whether the provision would be interpreted to be that expansive, whatever the commission’s intent might be. There is at least a risk that this provision would discourage innovative new uses of AI, causing businesses to err on the side of caution to avoid the huge penalties that breaking the rules would incur.

The prohibition on AIs used for social scoring is limited to public authorities. That leaves space for socially useful expansions of scoring systems, such as consumers using their Uber rating to show a record of previous good behavior to a potential Airbnb host. The ban is clearly oriented toward more expansive and dystopian uses of social credit systems, which some fear may be used to arbitrarily lock people out of society.

The ban on remote biometric identification AI is similarly limited to its use by law enforcement in public spaces. The limited exceptions (preventing an imminent terrorist attack, searching for a missing child, etc.) would be subject to judicial authorization except in cases of emergency, where ex-post authorization can be sought. The prohibition leaves room for private enterprises to innovate, but all non-prohibited uses of remote biometric identification would be subject to the requirements for high-risk AIs.

Restrictions on ‘High-Risk’ AIs

Some AI uses are not prohibited outright, but instead categorized as “high-risk” and subject to strict rules before they can be used or put to market. AI systems considered to be high-risk include those used for:

  • Safety components for certain types of products;
  • Remote biometric identification, except those uses that are banned outright;
  • Safety components in the management and operation of critical infrastructure, such as gas and electricity networks;
  • Dispatching emergency services;
  • Educational admissions and assessments;
  • Employment, workers management, and access to self-employment;
  • Evaluating credit-worthiness;
  • Assessing eligibility to receive social security benefits or services;
  • A range of law-enforcement purposes (e.g., detecting deepfakes or predicting the occurrence of criminal offenses);
  • Migration, asylum, and border-control management; and
  • Administration of justice.

While the commission considers these AIs to be those most likely to cause individual or social harm, it may not have appropriately balanced those perceived harms with the onerous regulatory burdens placed upon their use.

As Mikołaj Barczentewicz at the Surrey Law and Technology Hub has pointed out, the regulation would discourage even simple uses of logic or machine-learning systems in such settings as education or workplaces. This would mean that any workplace that develops machine-learning tools to enhance productivity—through, for example, monitoring or task allocation—would be subject to stringent requirements. These include requirements to have risk-management systems in place, to use only “high quality” datasets, and to allow human oversight of the AI, as well as other requirements around transparency and documentation.

The obligations would apply to any companies or government agencies that develop an AI (or for whom an AI is developed) with a view toward marketing it or putting it into service under their own name. The obligations could even attach to distributors, importers, users, or other third parties if they make a “substantial modification” to the high-risk AI, market it under their own name, or change its intended purpose—all of which could potentially discourage adaptive use.

Without going into unnecessary detail regarding each requirement, some are likely to have competition- and innovation-distorting effects that are worth discussing.

The rule that data used to train, validate, or test a high-risk AI has to be high quality (“relevant, representative, and free of errors”) assumes that perfect, error-free data sets exist, or that errors in the data can easily be detected. Not only is this not necessarily the case, but the requirement could impose an impossible standard on some activities. Given this high bar, high-risk AIs that use data of merely “good” quality could be precluded. It also would cut against the frontiers of research in artificial intelligence, where sometimes only small and lower-quality datasets are available to train AI. A predictable effect is that the rule would benefit large companies that are more likely to have access to large, high-quality datasets, while rules like the GDPR make it difficult for smaller companies to acquire that data.

Providers of high-risk AIs also must submit technical and user documentation that details voluminous information about the AI system, including descriptions of the AI’s elements, its development, monitoring, functioning, and control. This documentation must demonstrate that the AI complies with all the requirements for high-risk AIs, in addition to documenting its characteristics, capabilities, and limitations. The requirement to produce vast amounts of information represents another potentially significant compliance cost that will be particularly felt by startups and other small and medium-sized enterprises (SMEs). This could further discourage AI adoption within the EU, as European enterprises already consider liability for potential damages and regulatory obstacles to be impediments to AI adoption.

The requirement that the AI be subject to human oversight entails that the AI can be overseen and understood by a human being and that the AI can never override a human user. While it may be important that an AI used in, say, the criminal justice system must be understood by humans, this requirement could inhibit sophisticated uses beyond the reasoning of a human brain, such as determining how to safely operate a national electricity grid. Providers of high-risk AI systems also must establish a post-market monitoring system to evaluate continuous compliance with the regulation, representing another potentially significant ongoing cost for the use of high-risk AIs.

The regulation also places certain restrictions on “limited-risk” AIs, notably deepfakes and chatbots. Such AIs must be labeled to make a user aware they are looking at or listening to manipulated images, video, or audio. AIs must also be labeled to ensure humans are aware when they are speaking to an artificial intelligence, where this is not already obvious.

Taken together, these regulatory burdens may be greater than the benefits they generate, and could chill innovation and competition. The impact on smaller EU firms, which already are likely to struggle to compete with the American and Chinese tech giants, could prompt them to move outside the European jurisdiction altogether.

Regulatory Support for Innovation and Competition

To reduce the costs of these rules, the regulation also includes a new regulatory “sandbox” scheme. The sandboxes would putatively offer environments to develop and test AIs under the supervision of competent authorities, although exposure to liability would remain for harms caused to third parties and AIs would still have to comply with the requirements of the regulation.

SMEs and startups would have priority access to the regulatory sandboxes, although they must meet the same eligibility conditions as larger competitors. There would also be awareness-raising activities to help SMEs and startups to understand the rules; a “support channel” for SMEs within the national regulator; and adjusted fees for SMEs and startups to establish that their AIs conform with requirements.

These measures are intended to prevent the sort of chilling effect that was seen as a result of the GDPR, which led to a 17% increase in market concentration after it was introduced. But it’s unclear that they would accomplish this goal. (Notably, the GDPR contained similar provisions offering awareness-raising activities and derogations from specific duties for SMEs.) Firms operating in the “sandboxes” would still be exposed to liability, and the only significant difference to market conditions appears to be the “supervision” of competent authorities. It remains to be seen how this arrangement would sufficiently promote innovation as to overcome the burdens placed on AI by the significant new regulatory and compliance costs.

Governance and Enforcement

Each EU member state would be expected to appoint a “national competent authority” to implement and apply the regulation, as well as bodies to ensure that high-risk systems conform with rules requiring third-party assessments, such as remote biometric identification AIs.

The regulation establishes the European Artificial Intelligence Board to act as the union-wide regulatory body for AI. The board would be responsible for sharing best practices with member states, harmonizing practices among them, and issuing opinions on matters related to implementation.

As mentioned earlier, maximum penalties for marketing or using a prohibited AI (as well as for failing to use high-quality datasets) would be a steep 30 million euros or 6% of worldwide turnover, whichever is greater. Breaking other requirements for high-risk AIs carries maximum penalties of 20 million euros or 4% of worldwide turnover, while maximums of 10 million euros or 2% of worldwide turnover would be imposed for supplying incorrect, incomplete, or misleading information to the nationally appointed regulator.
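To make the “whichever is greater” structure of these caps concrete, here is a minimal illustrative sketch in Python (the turnover figure below is hypothetical and not drawn from the regulation):

def max_fine(worldwide_turnover_eur, flat_cap_eur, turnover_share):
    # The cap at each tier is the greater of a flat amount and a share of worldwide turnover.
    return max(flat_cap_eur, turnover_share * worldwide_turnover_eur)

turnover = 2_000_000_000  # hypothetical firm with 2 billion euros in worldwide annual turnover

print(max_fine(turnover, 30_000_000, 0.06))  # prohibited AIs and data-quality breaches: 120,000,000.0
print(max_fine(turnover, 20_000_000, 0.04))  # other high-risk requirements: 80,000,000.0
print(max_fine(turnover, 10_000_000, 0.02))  # incorrect or misleading information: 40,000,000.0

For a firm of that size, the turnover-based cap exceeds the flat amount at every tier; the flat figures would bind only for firms with worldwide turnover below 500 million euros.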

Is the Commission Overplaying its Hand?

While the regulation only restricts AIs seen as creating risk to society, it defines that risk so broadly and vaguely that benign applications of AI may be included in its scope, intentionally or unintentionally. Moreover, the commission also proposes voluntary codes of conduct that would apply similar requirements to “minimal” risk AIs. These codes—optional for now—may signal the commission’s intent eventually to further broaden the regulation’s scope and application.

The commission clearly hopes it can rely on the “Brussels Effect” to steer the rest of the world toward tighter AI regulation, but it is also possible that other countries will seek to attract AI startups and investment by introducing less stringent regimes.

For the EU itself, more regulation must be balanced against the need to foster AI innovation. Without European tech giants of its own, the commission must be careful not to stifle the SMEs that form the backbone of the European market, particularly if global competitors are able to innovate more freely in the American or Chinese markets. If the commission has got the balance wrong, it may find that AI development simply goes elsewhere, with the EU fighting the battle for the future of AI with one hand tied behind its back.

The U.S. Supreme Court’s just-published unanimous decision in AMG Capital Management LLC v. FTC—holding that Section 13(b) of the Federal Trade Commission Act does not authorize the commission to obtain court-ordered equitable monetary relief (such as restitution or disgorgement)—is not surprising. Moreover, by dissipating the cloud of litigation uncertainty that has surrounded the FTC’s recent efforts to seek such relief, the court cleared the way for consideration of targeted congressional legislation to address the issue.

But what should such legislation provide? After briefly summarizing the court’s holding, I will turn to the appropriate standards for optimal FTC consumer redress actions, which inform a welfare-enhancing legislative fix.

The Court’s Opinion

Justice Stephen Breyer’s opinion for the court is straightforward, centering on the structure and history of the FTC Act. Section 13(b) makes no direct reference to monetary relief. Its plain language merely authorizes the FTC to seek a “permanent injunction” in federal court against “any person, partnership, or corporation” that it believes “is violating, or is about to violate, any provision of law” that the commission enforces. In addition, by its terms, Section 13(b) is forward-looking, focusing on relief that is prospective, not retrospective (this cuts against the argument that payments for prior harm may be recouped from wrongdoers).

Furthermore, the FTC Act provisions that specifically authorize conditioned and limited forms of monetary relief (Section 5(l) and Section 19) are in the context of commission cease and desist orders, involving FTC administrative proceedings, unlike Section 13(b) actions that avoid the administrative route. In sum, the court concludes that:

[T]o read §13(b) to mean what it says, as authorizing injunctive but not monetary relief, produces a coherent enforcement scheme: The Commission may obtain monetary relief by first invoking its administrative procedures and then §19’s redress provisions (which include limitations). And the Commission may use §13(b) to obtain injunctive relief while administrative proceedings are foreseen or in progress, or when it seeks only injunctive relief. By contrast, the Commission’s broad reading would allow it to use §13(b) as a substitute for §5 and §19. For the reasons we have just stated, that could not have been Congress’ intent.

The court’s opinion concludes by succinctly rejecting the FTC’s arguments to the contrary.

What Comes Next

The Supreme Court’s decision has been anticipated by informed observers. All four sitting FTC Commissioners have already called for a Section 13(b) “legislative fix,” and in an April 20 hearing of the Senate Commerce Committee, Chairwoman Maria Cantwell (D-Wash.) emphasized that “[w]e have to do everything we can to protect this authority and, if necessary, pass new legislation to do so.”

What, however, should be the contours of such legislation? In considering alternative statutory rules, legislators should keep in mind not only the possible consumer benefits of monetary relief, but the costs of error, as well. Error costs are a ubiquitous element of public law enforcement, and this is particularly true in the case of FTC actions. Ideally, enforcers should seek to minimize the sum of the costs attributable to false positives (type I error), false negatives (type II error), administrative costs, and disincentive costs imposed on third parties, which may also be viewed as a subset of false positives. (See my 2014 piece “A Cost-Benefit Framework for Antitrust Enforcement Policy.”)
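Stated schematically (this is my own shorthand, not a formula drawn from that piece), the objective is to choose an enforcement policy E that minimizes the sum of those cost components:

\min_{E} \; C(E) = C_{\mathrm{I}}(E) + C_{\mathrm{II}}(E) + C_{\mathrm{admin}}(E) + C_{\mathrm{third}}(E)

where C_I and C_II are the expected costs of false positives and false negatives, C_admin is the administrative cost of enforcement, and C_third captures the disincentive costs imposed on third parties.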

Monetary relief is most appropriate in cases where error costs are minimal, and the quantum of harm is relatively easy to measure. This suggests a spectrum of FTC enforcement actions that may be candidates for monetary relief. Ideally, selection of targets for FTC consumer redress actions should be calibrated to yield the highest return to scarce enforcement resources, with an eye to optimal enforcement criteria.

Consider consumer protection enforcement. The strongest cases involve hardcore consumer fraud (where fraudulent purpose is clear and error is almost nil); they best satisfy accuracy in measurement and error-cost criteria. Next along the spectrum are cases of non-fraudulent but unfair or deceptive acts or practices that potentially involve some degree of error. In this category, situations involving easily measurable consumer losses (e.g., systematic failure to deliver particular goods requested or poor quality control yielding shipments of ruined goods) would appear to be the best candidates for monetary relief.

Moving along the spectrum, matters involving a higher likelihood of error and severe measurement problems should be the weakest candidates for consumer redress in the consumer protection sphere. For example, cases involving allegedly misleading advertising regarding the nature of goods, or allegedly insufficient advertising substantiation, may generate high rates of false positives and intractable difficulties in estimating consumer harm. As a matter of judgment, given resource constraints, seeking financial recoveries solely in cases of fraud or clear deception where consumer losses are apparent and readily measurable makes the most sense from a cost-benefit perspective.

Consumer redress actions are problematic for a large proportion of FTC antitrust enforcement (“unfair methods of competition”) initiatives. Many of these antitrust cases are “cutting edge” matters involving novel theories and complex fact patterns that pose a significant threat of type I error. (In comparison, type I error is low in hardcore collusion cases brought by the U.S. Justice Department where the existence, nature, and effects of cartel activity are plain). What’s more, they generally raise extremely difficult if not impossible problems in estimating the degree of consumer harm. (Even DOJ price-fixing cases raise non-trivial measurement difficulties.)

For example, consider assigning a consumer welfare loss number to a patent antitrust settlement that may or may not have delayed entry of a generic drug by some length of time (depending upon the strength of the patent) or to a decision by a drug company to modify a drug slightly just before patent expiration in order to obtain a new patent period (raising questions of valuing potential product improvements). These and other examples suggest that only rarely should the FTC pursue requests for disgorgement or restitution in antitrust cases, if error-cost-centric enforcement criteria are to be honored.

Unfortunately, the FTC currently has nothing to say about when it will seek monetary relief in antitrust matters. Commendably, in 2003, the commission issued a Policy Statement on Monetary Equitable Remedies in Competition Cases specifying that it would only seek monetary relief in “exceptional cases” involving a “[c]lear [v]iolation” of the antitrust laws. Regrettably, in 2012, a majority of the FTC (with Commissioner Maureen Ohlhausen dissenting) withdrew that policy statement and the limitations it imposed. As I concluded in a 2012 article:

This action, which was taken without the benefit of advance notice and public comment, raises troubling questions. By increasing business uncertainty, the withdrawal may substantially chill efficient business practices that are not well understood by enforcers. In addition, it raises the specter of substantial error costs in the FTC’s pursuit of monetary sanctions. In short, it appears to represent a move away from, rather than towards, an economically enlightened antitrust enforcement policy.

In a 2013 speech, then-FTC Commissioner Josh Wright also lamented the withdrawal of the 2003 Statement, and stated that he would limit:

… the FTC’s ability to pursue disgorgement only against naked price fixing agreements among competitors or, in the case of single firm conduct, only if the monopolist’s conduct has no plausible efficiency justification. This latter category would include fraudulent or deceptive conduct, or tortious activity such as burning down a competitor’s plant.

As a practical matter, the FTC does not bring cases of this sort. The DOJ brings naked price-fixing cases and the unilateral conduct cases noted are as scarce as unicorns. Given that fact, Wright’s recommendation may rightly be seen as a rejection of monetary relief in FTC antitrust cases. Based on the previously discussed serious error-cost and measurement problems associated with monetary remedies in FTC antitrust cases, one may also conclude that the Wright approach is right on the money.

Finally, a recent article by former FTC Chairman Tim Muris, Howard Beales, and Benjamin Mundel opined that Section 13(b) should be construed to “limit[] the FTC’s ability to obtain monetary relief to conduct that a reasonable person would know was dishonest or fraudulent.” Although such a statutory reading is now precluded by the Supreme Court’s decision, its incorporation in a new statutory “fix” would appear ideal. It would allow for consumer redress in appropriate cases, while avoiding the likely net welfare losses arising from a more expansive approach to monetary remedies.

Conclusion

The AMG Capital decision is sure to generate legislative proposals to restore the FTC’s ability to secure monetary relief in federal court. If Congress adopts a cost-beneficial error-cost framework in shaping targeted legislation, it should limit FTC monetary relief authority (recoupment and disgorgement) to situations of consumer fraud or dishonesty arising under the FTC’s authority to pursue unfair or deceptive acts or practices. Giving the FTC carte blanche to obtain financial recoveries in the full spectrum of antitrust and consumer protection cases would spawn uncertainty and could chill a great deal of innovative business behavior, to the ultimate detriment of consumer welfare.


In the battle of ideas, it is quite useful to be able to brandish clear and concise debating points in support of a proposition, backed by solid analysis. Toward that end, in a recent primer about antitrust law published by the Mercatus Center, I advance four reasons to reject neo-Brandeisian critiques of the consensus (at least, until very recently) consumer welfare-centric approach to antitrust enforcement. My four points, drawn from the primer (with citations deleted and hyperlinks added) are as follows:

First, the underlying assumptions of rising concentration and declining competition on which the neo-Brandeisian critique is largely based (and which are reflected in the introductory legislative findings of the Competition and Antitrust Law Enforcement Reform Act of 2021, introduced by Senator Klobuchar on February 4) lack merit. Chapter 6 of the 2020 Economic Report of the President, dealing with competition policy, summarizes research debunking those assumptions. To begin with, it shows that studies complaining that competition is in decline are fatally flawed. Studies such as one in 2016 by the Council of Economic Advisers rely on overbroad market definitions that say nothing about competition in specific markets, let alone across the entire economy. Indeed, in 2018, Professor Carl Shapiro, chief DOJ antitrust economist in the Obama administration, admitted that a key summary chart in the 2016 study “is not informative regarding overall trends in concentration in well-defined relevant markets that are used by antitrust economists to assess market power, much less trends in concentration in the U.S. economy.” Furthermore, as the 2020 report points out, other literature claiming that competition is in decline rests on a problematic assumption that increases in concentration (even assuming such increases exist) beget softer competition. Problems with this assumption have been understood since at least the 1970s. The most fundamental problem is that there are alternative explanations (such as exploitation of scale economies) for why a market might demonstrate both high concentration and high markups—explanations that are still consistent with procompetitive behavior by firms. (In a related vein, research by other prominent economists has exposed flaws in studies that purport to show a weakening of merger enforcement standards in recent years.) Finally, the 2020 report notes that the real solution to perceived economic problems may be less government, not more: “As historic regulatory reform across American industries has shown, cutting government-imposed barriers to innovation leads to increased competition, strong economic growth, and a revitalized private sector.”

Second, quite apart from the flawed premises that inform the neo-Brandeisian critique, specific neo-Brandeisian reforms appear highly problematic on economic grounds. Breakups of dominant firms or near prohibitions on dominant firm acquisitions would sacrifice major economies of scale and potential efficiencies of integration, harming consumers without offering any proof that the new market structures in reshaped industries would yield consumer or producer benefits. Furthermore, a requirement that merging parties prove a negative (that the merger will not harm competition) would limit the ability of entrepreneurs and market makers to act on information about misused or underutilized assets through the merger process. This limitation would reduce economic efficiency. After-the-fact studies indicating that a large percentage of mergers do not add wealth and do not otherwise succeed as much as projected miss this point entirely. They ignore what the world would be like if mergers were much more difficult to enter into: a world where there would be lower efficiency and dynamic economic growth because there would be less incentive to seek out market-improving opportunities.

Third, one aspect of the neo-Brandeisian approach to antitrust policy is at odds with fundamental notions of fair notice of wrongdoing and equal treatment under neutral principles, notions that are central to the rule of law. In particular, the neo-Brandeisian call for considering a multiplicity of new factors such as fairness, labor, and the environment when enforcing policy is troublesome. There is no neutral principle for assigning weights to such divergent interests, and (even if weights could be assigned) there are no economic tools for accurately measuring how a transaction under review would affect those interests. It follows that abandoning antitrust law’s consumer-welfare standard in favor of an ill-defined multifactor approach would spawn confusion in the private sector and promote arbitrariness in enforcement decisions, undermining the transparency that is a key aspect of the rule of law. Whereas concerns other than consumer welfare may of course be validly considered in setting public policy, they are best dealt with under other statutory schemes, not under antitrust law.

Fourth, and finally, neo-Brandeisian antitrust proposals are not a solution to widely expressed concerns that big companies in general, and large digital platforms in particular, are undermining free speech by censoring content of which they disapprove. Antitrust law is designed to prevent businesses from creating impediments to market competition that reduce economic welfare; it is not well-suited to policing companies’ determinations regarding speech. To the extent that policymakers wish to address speech censorship on large platforms, they should consider other regulatory institutions that would be better suited to the task (such as communications law), while keeping in mind First Amendment limitations on the ability of government to control private speech.

In light of these four points, the primer concludes that the neo-Brandeisian-inspired antitrust “reform” proposals being considered by Congress should be rejected:

[E]fforts to totally reshape antitrust policy into a quasi-regulatory system that arbitrarily blocks and disincentivizes (1) welfare-enhancing mergers and (2) an array of actions by dominant firms are highly troubling. Such interventionist proposals ignore the lack of evidence of serious competitive problems in the American economy and appear arbitrary compared to the existing consumer-welfare-centric antitrust enforcement regime. To use a metaphor, Congress and public officials should avoid a drastic new antitrust cure for an anticompetitive disease that can be handled effectively with existing antitrust medications.

Let us hope that the serious harm associated with neo-Brandeisian legislative “deformation” (a more apt term than reformation) of the antitrust laws is given a full legislative airing before Congress acts.

Amazingly enough, at a time when legislative proposals for new antitrust restrictions are rapidly multiplying—see the Competition and Antitrust Law Enforcement Reform Act (CALERA), for example—Congress simultaneously is seriously considering granting antitrust immunity to a price-fixing cartel among members of the news media. This would thereby authorize what the late Justice Antonin Scalia termed “the supreme evil of antitrust: collusion.” What accounts for this bizarre development?

Discussion

The antitrust exemption in question, embodied in the Journalism Competition and Preservation Act of 2021, was introduced March 10 simultaneously in the U.S. House and Senate. The press release announcing the bill’s introduction portrayed it as a “good government” effort to help struggling newspapers in their negotiations with large digital platforms, and thereby strengthen American democracy:

We must enable news organizations to negotiate on a level playing field with the big tech companies if we want to preserve a strong and independent press[.] …

A strong, diverse, free press is critical for any successful democracy. …

Nearly 90 percent of Americans now get news while on a smartphone, computer, or tablet, according to a Pew Research Center survey conducted last year, dwarfing the number of Americans who get news via television, radio, or print media. Facebook and Google now account for the vast majority of online referrals to news sources, with the two companies also enjoying control of a majority of the online advertising market. This digital ad duopoly has directly contributed to layoffs and consolidation in the news industry, particularly for local news.

This legislation would address this imbalance by providing a safe harbor from antitrust laws so publishers can band together to negotiate with large platforms. It provides a 48-month window for companies to negotiate fair terms that would flow subscription and advertising dollars back to publishers, while protecting and preserving Americans’ right to access quality news. These negotiations would strictly benefit Americans and news publishers at-large; not just one or a few publishers.

The Journalism Competition and Preservation Act only allows coordination by news publishers if it (1) directly relates to the quality, accuracy, attribution or branding, and interoperability of news; (2) benefits the entire industry, rather than just a few publishers, and are non-discriminatory to other news publishers; and (3) is directly related to and reasonably necessary for these negotiations.

Lurking behind this public-spirited rhetoric, however, is the specter of special interest rent seeking by powerful media groups, as discussed in an insightful article by Thom Lambert. The newspaper industry is indeed struggling, but that is true overseas as well as in the United States. Competition from internet websites has greatly reduced revenues from classified and non-classified advertising. As Lambert notes, in “light of the challenges the internet has created for their advertising-focused funding model, newspapers have sought to employ the government’s coercive power to increase their revenues.”

In particular, media groups have successfully lobbied various foreign governments to impose rules requiring that Google and Facebook pay newspapers licensing fees to display content. The Australian government went even further by mandating that digital platforms share their advertising revenue with news publishers and give the publishers advance notice of any algorithm changes that could affect page rankings and displays. Media rent-seeking efforts took a different form in the United States, as Lambert explains (citations omitted):

In the United States, news publishers have sought to extract rents from digital platforms by lobbying for an exemption from the antitrust laws. Their efforts culminated in the introduction of the Journalism Competition and Preservation Act of 2018. According to a press release announcing the bill, it would allow “small publishers to band together to negotiate with dominant online platforms to improve the access to and the quality of news online.” In reality, the bill would create a four-year safe harbor for “any print or digital news organization” to jointly negotiate terms of trade with Google and Facebook. It would not apply merely to “small publishers” but would instead immunize collusive conduct by such major conglomerates as Murdoch’s News Corporation, the Walt Disney Corporation, the New York Times, Gannet Company, Bloomberg, Viacom, AT&T, and the Fox Corporation. The bill would permit news organizations to fix prices charged to digital platforms as long as negotiations with the platforms were not limited to price, were not discriminatory toward similarly situated news organizations, and somehow related to “the quality, accuracy, attribution or branding, and interoperability of news.” Given the ease of meeting that test—since news organizations could always claim that higher payments were necessary to ensure journalistic quality—the bill would enable news publishers in the United States to extract rents via collusion rather than via direct government coercion, as in Australia.

The 2021 version of the JCPA is nearly identical to the 2018 version discussed by Thom. The only substantive change is that the 2021 version strengthens the pro-cartel coalition by adding broadcasters (it applies to “any print, broadcast, or news organization”). While the JCPA plainly targets Facebook and Google (“online content distributors” with “not fewer than 1,000,000,000 monthly active users, in the aggregate, on its website”), Microsoft President Brad Smith noted in a March 12 House Antitrust Subcommittee Hearing on the bill that his company would also come under its collective-bargaining terms. Other online distributors could eventually become subject to the proposed law as well.

Purported justifications for the proposal were skillfully skewered by John Yun in a 2019 article on the substantively identical 2018 JCPA. Yun makes several salient points. First, the bill clearly shields price fixing. Second, the claim that all news organizations (in particular, small newspapers) would receive the same benefit from the bill rings hollow. The bill’s requirement that negotiations be “nondiscriminatory as to similarly situated news content creators” (emphasis added) would allow the cartel to negotiate different terms of trade for different “tiers” of organizations. Thus The New York Times and The Washington Post, say, might be part of a top tier getting the most favorable terms of trade. Third, the evidence does not support the assertion that Facebook and Google are monopolistic gateways for news outlets.

Yun concludes by summarizing the case against this legislation (citations omitted):

Put simply, the impact of the bill is to legalize a media cartel. The bill expressly allows the cartel to fix the price and set the terms of trade for all market participants. The clear goal is to transfer surplus from online platforms to news organizations, which will likely result in higher content costs for these platforms, as well as provisions that will stifle the ability to innovate. In turn, this could negatively impact quality for the users of these platforms.

Furthermore, a stated goal of the bill is to promote “quality” news and to “highlight trusted brands.” These are usually antitrust code words for favoring one group, e.g., those that are part of the News Media Alliance, while foreclosing others who are not “similarly situated.” What about the non-discrimination clause? Will it protect non-members from foreclosure? Again, a careful reading of the bill raises serious questions as to whether it will actually offer protection. The bill only ensures that the terms of the negotiations are available to all “similarly situated” news organizations. It is very easy to carve out provisions that would favor top tier members of the media cartel.

Additionally, an unintended consequence of antitrust exemptions can be that it makes the beneficiaries lax by insulating them from market competition and, ultimately, can harm the industry by delaying inevitable and difficult, but necessary, choices. There is evidence that this is what occurred with the Newspaper Preservation Act of 1970, which provided antitrust exemption to geographically proximate newspapers for joint operations.

There are very good reasons why antitrust jurisprudence reserves per se condemnation to the most egregious anticompetitive acts including the formation of cartels. Legislative attempts to circumvent the federal antitrust laws should be reserved solely for the most compelling justifications. There is little evidence that this level of justification has been met in this present circumstance.

Conclusion

Statutory exemptions to the antitrust laws have long been disfavored, and with good reason. As I explained in my 2005 testimony before the Antitrust Modernization Commission, such exemptions tend to foster welfare-reducing output restrictions. Also, empirical research suggests that industries sheltered from competition perform less well than those subject to competitive forces. In short, both economic theory and real-world data support a standard that requires proponents of an exemption to bear the burden of demonstrating that the exemption will benefit consumers.

This conclusion applies most strongly when an exemption would specifically authorize hard-core price fixing, as in the case with the JCPA. What’s more, the bill’s proponents have not borne the burden of justifying their pro-cartel proposal in economic welfare terms—quite the opposite. Lambert’s analysis exposes this legislation as the product of special interest rent seeking that has nothing to do with consumer welfare. And Yun’s evaluation of the bill clarifies that, not only would the JCPA foster harmful collusive pricing, but it would also harm its beneficiaries by allowing them to avoid taking steps to modernize and render themselves more efficient competitors.

In sum, though the JCPA claims to fly a “public interest” flag, it is just another private interest bill promoted by well-organized rent seekers that would harm consumer welfare and undermine innovation.

Antitrust by Fiat

Jonathan M. Barnett —  23 February 2021

The Competition and Antitrust Law Enforcement Reform Act (CALERA), recently introduced in the U.S. Senate, exhibits a remarkable willingness to cast aside decades of evidentiary standards that courts have developed to uphold the rule of law by precluding factually and economically ungrounded applications of antitrust law. Without those safeguards, antitrust enforcement is prone to be driven by a combination of prosecutorial and judicial fiat. That would place at risk the free play of competitive forces that the antitrust laws are designed to protect.

Antitrust law inherently lends itself to the risk of erroneous interpretations of ambiguous evidence. Outside clear cases of interfirm collusion, virtually all conduct that might appear anti-competitive might just as easily be proven, after significant factual inquiry, to be pro-competitive. This fundamental risk of a false diagnosis has guided antitrust case law and regulatory policy since at least the Supreme Court’s landmark Continental Television v. GTE Sylvania decision in 1977 and arguably earlier. Judicial and regulatory efforts to mitigate this ambiguity, while preserving the deterrent power of the antitrust laws, have resulted in the evidentiary requirements that are targeted by the proposed bill.

Proponents of the legislative “reforms” might argue that modern antitrust case law’s careful avoidance of enforcement error yields excessive caution. To relieve regulators and courts from having to do their homework before disrupting a targeted business and its employees, shareholders, customers and suppliers, the proposed bill empowers plaintiffs to allege and courts to “find” anti-competitive conduct without having to be bound to the reasonably objective metrics upon which courts and regulators have relied for decades. That runs the risk of substituting rhetoric and intuition for fact and analysis as the guiding principles of antitrust enforcement and adjudication.

This dismissal of even a rudimentary commitment to rule-of-law principles is illustrated by two dramatic departures from existing case law in the proposed bill. Each constitutes a largely unrestrained “blank check” for regulatory and judicial overreach.

Blank Check #1

The bill includes a broad prohibition on “exclusionary” conduct, which is defined to include any conduct that “materially disadvantages 1 or more actual or potential competitors” and “presents an appreciable risk of harming competition.” That amorphous language arguably enables litigants to target a firm that offers consumers lower prices but “disadvantages” less efficient competitors that cannot match that price.

In fact, the proposed legislation specifically facilitates this litigation strategy by relieving predatory-pricing plaintiffs from having to show that pricing is below cost or is likely to result ultimately in profits for the defendant. While the bill permits a defendant to escape liability by showing sufficiently countervailing “procompetitive benefits,” the onus rests on the defendant to make that showing. This burden-shifting strategy encourages lagging firms to shift competition from the marketplace to the courthouse.

Blank Check #2

The bill then removes another evidentiary safeguard by relieving plaintiffs from always having to define a relevant market. Rather, it may be sufficient to show that the contested practice gives rise to an “appreciable risk of harming competition … based on the totality of the circumstances.” It is hard to miss the high degree of subjectivity in this standard.

This ambiguous threshold runs counter to antitrust principles that require a credible showing of market power in virtually all cases except horizontal collusion. Those principles make perfect sense. Market power is the gateway concept that enables courts to distinguish between claims that plausibly target alleged harms to competition and those that do not. Without a well-defined market, it is difficult to know whether a particular practice reflects market power or market competition. Removing the market power requirement can remove any meaningful grounds on which a defendant could avoid a nuisance lawsuit or contest or appeal a conclusory allegation or finding of anticompetitive conduct.

Anti-Market Antitrust

The bill’s transparently outcome-driven approach is likely to give rise to a cloud of liability that penalizes businesses that benefit consumers through price and quality combinations that competitors cannot replicate. This obviously runs directly counter to the purpose of the antitrust laws. Certainly, winners can and sometimes do entrench themselves through potentially anticompetitive practices that should be closely scrutinized. However, the proposed legislation seems to reflect a presumption that successful businesses usually win by employing illegitimate tactics, rather than simply being the most efficient firm in the market. Under that assumption, competition law becomes a tool for redoing, rather than enabling, competitive outcomes.

While this populist approach may be popular, it is neither economically sound nor consistent with a market-driven economy in which resources are mostly allocated through pricing mechanisms and government intervention is the exception, not the rule. It would appear that some legislators would like to reverse that presumption. Far from being a victory for consumers, that outcome would constitute a resounding loss.

The European Commission has unveiled draft legislation (the Digital Services Act, or “DSA”) that would overhaul the rules governing the online lives of its citizens. The draft rules are something of a mixed bag. While online markets present important challenges for law enforcement, the DSA would significantly increase the cost of doing business in Europe and harm the very freedoms European lawmakers seek to protect. The draft’s newly proposed “Know Your Business Customer” (KYBC) obligations, however, will enable smoother operation of the liability regimes that currently apply to online intermediaries. 

These reforms come amid a rash of headlines about election meddling, misinformation, terrorist propaganda, child pornography, and other illegal and abhorrent content spread on digital platforms. These developments have galvanized debate about online liability rules.

Existing rules, codified in the e-Commerce Directive, largely absolve “passive” intermediaries that “play a neutral, merely technical and passive role” from liability for content posted by their users so long as they remove it once notified. “Active” intermediaries have more legal exposure. This regime isn’t perfect, but it seems to have served the EU well in many ways.

With its draft regulation, the European Commission is effectively arguing that those rules fail to address the legal challenges posed by the emergence of digital platforms. As the EC’s press release puts it:

The landscape of digital services is significantly different today from 20 years ago, when the eCommerce Directive was adopted. […]  Online intermediaries […] can be used as a vehicle for disseminating illegal content, or selling illegal goods or services online. Some very large players have emerged as quasi-public spaces for information sharing and online trade. They have become systemic in nature and pose particular risks for users’ rights, information flows and public participation.

Online platforms initially hoped lawmakers would agree to some form of self-regulation, but those hopes were quickly dashed. Facebook released a white paper this Spring proposing a more moderate path that would expand regulatory oversight to “ensure companies are making decisions about online speech in a way that minimizes harm but also respects the fundamental right to free expression.” The proposed regime would not impose additional liability for harmful content posted by users, a position that Facebook and other internet platforms reiterated during congressional hearings in the United States.

European lawmakers were not moved by these arguments. EU Commissioner for Internal Market and Services Thierry Breton, among other European officials, dismissed Facebook’s proposal within hours of its publication, saying:

It’s not enough. It’s too slow, it’s too low in terms of responsibility and regulation.

Against this backdrop, the draft DSA includes many far-reaching measures: transparency requirements for recommender systems, content moderation decisions, and online advertising; mandated sharing of data with authorities and researchers; and numerous compliance measures that include internal audits and regular communication with authorities. Moreover, the largest online platforms—so-called “gatekeepers”—will have to comply with a separate regulation that gives European authorities new tools to “protect competition” in digital markets (the Digital Markets Act, or “DMA”).

The upshot is that, if passed into law, the draft rules will place tremendous burdens upon online intermediaries. This would be self-defeating. 

Excessive regulation or liability would significantly increase their cost of doing business, leading to smaller networks and increased barriers to access for many users. Stronger liability rules would also encourage platforms to play it safe, such as by quickly de-platforming and refusing access to anyone who plausibly engaged in illegal activity. Such an outcome would harm the very freedoms European lawmakers seek to protect.

This could prove particularly troublesome for small businesses that find it harder to compete against large platforms due to rising compliance costs. In effect, the new rules will increase barriers to entry, as has already been seen with the GDPR.

In the commission’s defense, some of the proposed reforms are more appealing. This is notably the case with the KYBC requirements, as well as the decision to leave most enforcement to member states, where service providers have their main establishments. The latter is likely to preserve regulatory competition among EU members to attract large tech firms, potentially limiting regulatory overreach.

Indeed, while the existing regime does, to some extent, curb the spread of online crime, it does little for the victims of cybercrime, who ultimately pay the price. Removing illegal content doesn’t prevent it from reappearing in the future, sometimes on the same platform. Importantly, hosts have no obligation to provide the identity of violators to authorities, or even to know their identity in the first place. The result is an endless game of “whack-a-mole”: illegal content is taken down, but immediately reappears elsewhere. This status quo enables malicious users to upload illegal content, such as that which recently led card networks to cut all ties with Pornhub.

Victims arguably need additional tools. This is what the Commission seeks to achieve with the DSA’s “traceability of traders” requirement, a form of KYBC:

Where an online platform allows consumers to conclude distance contracts with traders, it shall ensure that traders can only use its services to promote messages on or to offer products or services to consumers located in the Union if, prior to the use of its services, the online platform has obtained the following information: […]

Instead of rewriting the underlying liability regime—with the harmful unintended consequences that would likely entail—the draft DSA creates parallel rules that require platforms to better protect victims.

Under the proposed rules, intermediaries would be required to obtain the true identity of commercial clients (as opposed to consumers) and to sever ties with businesses that refuse to comply (rather than just take down their content). Such obligations would be, in effect, a version of the “Know Your Customer” regulations that exist in other industries. Banks, for example, are required to conduct due diligence to ensure scofflaws can’t use legitimate financial services to further criminal enterprises. It seems reasonable to expect analogous due diligence from the Internet firms that power so much of today’s online economy.

Obligations requiring platforms to vet their commercial relationships may seem modest, but they’re likely to enable more effective law enforcement against the actual perpetrators of online harms without diminishing platforms’ innovation and the economic opportunity they provide (and that everyone agrees is worth preserving).

There is no silver bullet. Illegal activity will never disappear entirely from the online world, just as it has declined, but not vanished, from other walks of life. But small regulatory changes that offer marginal improvements can have a substantial effect. Modest informational requirements would weed out the most blatant crimes without overly burdening online intermediaries. In short, such requirements would make the Internet a safer place for European citizens.

High-profile cases like those of Michael Brown in Ferguson, Missouri, and Breonna Taylor in Louisville, Kentucky, have garnered attention from the media and the academy alike about decisions by grand juries not to charge police officers with homicide. 

While much of this focus centers on alleged racial bias on the part of police officers and the criminal justice system writ large, it’s also important to examine the perverse incentives faced by local district attorneys tasked with prosecuting police.

District attorneys rely on close professional relationships with police officers and law enforcement departments to prosecute criminal cases. Professional incentives require district attorneys to win cases. They can’t do that without cooperation from the police who investigate and bring criminal complaints. Moreover, police unions have disproportionate influence on district attorney elections.

Applying a law & economics lens to criminal justice offers a way forward that could better align incentives to prosecute police officers who break the law.

The legal profession is regulated largely by the rules of professional conduct developed by bar associations in each jurisdiction. The stated goal of these rules is to promote legal ethics among attorneys admitted to the bar. But these rules can also be understood economically. The organized bar can use legal ethics rules to increase its members’ profits in two main ways: by restricting entry to the practice of law and by adopting efficient rules that reduce the costs of contracting between lawyers and clients.

The bar’s rules can restrict competition in the market by requiring prospective lawyers to have graduated from an accredited law school and passed a bar exam, or to have substantial experience in another jurisdiction before they are allowed to waive in. The ability to practice law in a given jurisdiction without having taken the necessary steps to become a member of the bar is limited to pro hac vice rules that require working with a member of the bar. The result of these limitations is that lawyers can charge higher prices than they could absent the restrictions on competition.

But the rules also can promote economically efficient outcomes. For instance, conflict-of-interest rules prevent lawyers from representing clients who have interests directly adverse to other clients, or where there would be significant risk that representation would be materially limited by responsibilities to other clients or former clients. (See, for example, Rule 1.7 of the American Bar Association’s Model Rules of Professional Conduct.) Many of these conflicts are waivable, but some are not.

It is worth considering why these rules make sense economically. In a world devoid of transaction costs and strategic behavior, lawyers and clients could negotiate complete contracts for each representation, which would include compensation for those who would possibly be hurt by conflicts. But that’s not the real world. Conflict-of-interest rules are designed to overcome the principal-agent problems that arise from representing clients with adverse interests, including the potential use of information from representations to the detriment of those clients. Thus, conflict-of-interest rules supply efficient defaults that generally limit potentially harmful representation. 

Incentives in prosecuting police

Imagine the following scenario: a local district attorney works with a municipal police officer on a number of cases over the years, relying upon that officer’s evidence and testimony to prosecute criminal defendants. A video of the officer is later posted on YouTube showing him beating a non-resisting handcuffed citizen with his baton. The district attorney must now make the decision of whether to charge the officer with potential crimes. 

The bar’s usual conflict-of-interest rules, as described above, do not apply the same way to prosecutors. The prosecutor’s client is presumed to be the public, rather than the police officers with whom they work on a daily basis. Thus, the district attorney is not deemed to face an ethical problem in prosecuting the officer, despite their long-standing professional institutional relationship. The rules of professional conduct don’t require a district attorney to recuse herself from the case.

Following the incentives, it is no surprise that prosecutors often give benefit of the doubt to police officers in allegations of criminal conduct. One of a prosecutor’s primary jobs is to ensure judges and juries believe the testimony of police officers. Future relationships with officers may be impaired by police prosecutions that are perceived by law enforcement to be unfair.  

Elections are ineffective checks on prosecutorial power

While in theory (and sometimes in fact), public elections could serve as a check on district attorneys who fail to live up to their duty to prosecute unlawful behavior by police officers, there are reasons to be skeptical that they successfully do so consistently. Public choice economics helps explain why.

The public as a whole is dispersed and unorganized, especially when it comes to its interest as potential victims of the criminal justice system. On the other hand, police unions and associations are organized to forward the interest of law enforcement officers. Indeed, among the benefits police unions commonly provide to members are lawyers to defend against civil rights lawsuits and criminal prosecutions. Police unions and associations also can exert significant influence on  who is chosen to be district attorney in the first place. Such organized interests often are among the leaders in spending and campaigning for or against district attorney candidates. By contrast, the voting public tends to have far less information about and interest in those elections. 

Getting the incentives right

In pursuing institutional reform, it is important both to get the incentives right and to remain cognizant of trade-offs. The goal should be to align incentives so that there is no disincentive for prosecuting police officers criminally if the facts call for it. Some popular proposed reforms, however, could be legally deficient, could suffer from similar incentive problems, or both.

For instance, a number of California district attorneys and candidates have called for an amendment to the state’s rules of professional conduct to define it as a conflict of interest for a district attorney candidate to receive campaign contributions from a police union. While this calls out the same problem identified here, the proposal would be subject to challenge on First Amendment grounds for targeting political speech, and on equal protection grounds for preferencing other groups over police unions. 

Other possibilities, such as escalating police prosecutions to the state attorney general’s office, face the same public choice and conflict-of-interest problems identified for local district attorneys. 

One way to avoid the conflict of interest inherent in police prosecutions might be to appoint special prosecutors when there are police defendants. Bar associations could create a panel of lawyers for appointment in such cases, much like some jurisdictions have for indigent defendants. The special prosecutor would need investigatory power and the ability to carry out the case on behalf of the public. 

Conclusion

The incentives faced by district attorneys contribute to the problem of insufficient prosecution of police officers who engage in criminal behavior. Prosecutors who generally rely upon close professional relationships with police officers have a conflict of interest when it comes to cases where police officers are the defendants. A new path is needed to get the incentives right.