Welcome to the FTC UMC Roundup, our new weekly update of news and events relating to antitrust and, more specifically, to the Federal Trade Commission’s (FTC) newfound interest in “revitalizing” the field. Each week we will bring you a brief recap of the week that was and a preview of the week to come. All with a bit of commentary and news of interest to regular readers of Truth on the Market mixed in.
This week’s headline? Of course it’s that Alvaro Bedoya has been confirmed as the FTC’s fifth commissioner—notably breaking the commission’s 2-2 tie between Democrats and Republicans and giving FTC Chair Lina Khan the majority she has been lacking. Politico and Gibson Dunn both offer some thoughts on what to expect next—though none of the predictions are surprising: more aggressive merger review and litigation; UMC rulemakings on a range of topics, including labor, right-to-repair, and pharmaceuticals; and privacy-related consumer protection. The real question is how quickly and aggressively the FTC will implement this agenda. Will we see a flurry of rulemakings in the next week, or will they be rolled out over a period of months or years? Will the FTC risk major litigation questions with a “go big or go home” attitude, or will it take a more incrementalist approach to boiling the frog?
Much of the rest of this week’s action happened on the Hill. Khan, joined by Securities and Exchange Commission (SEC) Chair Gary Gensler, made the regular trip to Congress to ask for a bigger budget to support more hires. (FTC, Law360) Sen. Mike Lee (R-Utah) asked for unanimous consent on his State Antitrust Enforcement Venue Act, but met resistance from Sen. Amy Klobuchar (D-Minn.), who wants that bill paired with her own American Innovation and Choice Online Act. This follows reports that Senate Majority Leader Chuck Schumer (D-N.Y.) is pushing Klobuchar to get support in line for both AICOA and the Open App Markets Act to be brought to the Senate floor. Of course, if they had the needed support, we probably wouldn’t be talking so much about whether they have the needed support.
Questions about the climate at the FTC continue following release of the Office of Personnel Management’s (OPM) Federal Employee Viewpoint Survey. Sen. Roger Wicker (R-Miss.) wants to know what has caused staff satisfaction at the agency to fall precipitously. And former senior FTC staffer Eileen Harrington issued a stern rebuke of the agency at this week’s open meeting, saying of the relationship between leadership and staff: “The FTC is not a failed agency but it’s on the road to becoming one. This is a crisis.”
Perhaps the only thing experiencing greater inflation than the dollar is interest in the FTC doing something about inflation. Alden Abbott and Andrew Mercado remind us that these calls are misplaced. But that won’t stop politicians from demanding the FTC do something about high gas prices. Or beef production. Or utilities. Or baby formula.
A little further afield, the 5th U.S. Circuit Court of Appeals issued an opinion this week in a case involving SEC administrative-law judges that took broad issue with them on delegation, due process, and “take care” grounds. It may come as a surprise that this has led to much overwrought consternation that the opinion would dismantle the administrative state. But given that it is often the case that the SEC and FTC face similar constitutional issues (recall that Kokesh v. SEC was the precursor to AMG Capital), the 5th Circuit case could portend future problems for FTC adjudication. Add this to the queue with the Supreme Court’s pending review of whether federal district courts can consider constitutional challenges to an agency’s structure. The court was already scheduled to consider this question with respect to the FTC this next term in Axon, and agreed this week to hear a similar SEC-focused case next term as well.
Some Navel-Gazing News!
Congratulations to recent University of Michigan Law School graduate Kacyn Fujii, winner of our New Voices competition for contributions to our recent symposium on FTC UMC Rulemaking (hey, this post is actually part of that symposium, as well!). Kacyn’s contribution looked at the statutory basis for FTC UMC rulemaking authority and evaluated the use of such authority as a way to address problematic use of non-compete clauses.
And, one for the academics (and others who enjoy writing academic articles): you might be interested in this call for proposals for a research roundtable on Market Structuring Regulation that the International Center for Law & Economics will host in September. If you are interested in writing on topics that include conglomerate business models, market-structuring regulation, vertical integration, or other topics relating to the regulation and economics of contemporary markets, we hope to hear from you!
[The following is a guest post from Andrew Mercado, a research assistant at the Mercatus Center at George Mason University and an adjunct professor and research assistant at George Mason’s Antonin Scalia Law School.]
Barry Schwartz’s seminal work “The Paradox of Choice” has received substantial attention since its publication nearly 20 years ago. In it, Schwartz argued that, faced with an ever-increasing plethora of products to choose from, consumers often feel overwhelmed and seek to limit the number of choices they must make.
In today’s online digital economy, a possible response to this problem is for digital platforms to use consumer data to present consumers with a “manageable” array of choices and thereby simplify their product selection. Appropriate “curation” of product-choice options may substantially benefit consumer welfare, provided that government regulators stay out of the way.
New Research
In a new paper in the American Economic Review, Mark Armstrong and Jidong Zhou—of Oxford and Yale universities, respectively—develop a theoretical framework to understand how companies compete using consumer data. They find that consumer, producer, and total welfare all shift when different privacy regimes change the amount of information a company can use to personalize recommendations.
The authors note that, at least in theory, there is an optimal situation that maximizes total welfare (scenario one). This is when a platform can aggregate information on consumers to such a degree that buyers and sellers are perfectly matched, leading to consumers buying their first-best option. While this can result in marginally higher prices, understandably leading to higher welfare for producers, search and mismatch costs are minimized by the platform, leading to a high level of welfare for consumers.
The highest level of aggregate consumer welfare comes when product differentiation is minimized (scenario two), leading to a high number of substitutes and low prices. This, however, comes with some level of mismatch. Since consumers are not matched with any recommendations, search costs are high and introduce some error. Some consumers may have had a higher level of welfare with an alternative product, but do not feel the negative effects of such mismatch because of the low prices. Therefore, consumer welfare is maximized, but producer welfare is significantly lower.
Finally, the authors identify a nearly welfare-optimal solution in a “top two-best” scheme (scenario three), whereby consumers are shown their two best options without explicit ranking. This nearly maximizes total welfare, since consumers are shown the best options for them and, even if the best match isn’t chosen, the second-best match is close in terms of welfare.
Implications
In cases of platform data aggregation and personalization, scenarios one, two, and three can be represented as different privacy regimes.
Scenario one (a personalized-product regime) is akin to unlimited data gathering, whereby platforms can use as much information as is available to perfectly suggest products based on revealed data. From a competition perspective, interfirm competition will tend to decrease under this regime, since product differentiation will be accentuated and substitutability will be masked. Since a single product will be shown as the “correct” product, the consumer will not want to shift to a different, welfare-inferior product, and firms have an incentive to produce ever more specialized products at relatively higher prices. Total welfare under this regime is maximized, with producers using their information to garner a relatively large share of economic surplus. Producers are effectively matched with consumers, and all gains from trade are realized.
Scenario two (a data-privacy regime) is one of near-perfect data privacy, whereby the platform is only able to recommend products based on general information, such as sales trends, new products, or product specifications. Under this regime, competition is maximized, since consumers consider a large pool of goods to be close substitutes. Differences in offered products are downplayed, which tends to reduce prices and increase quality, at the tradeoff of some consumer-product mismatch. For consumers who want a general product and a low price, this is likely the best option, since prices are low and competition is high. However, consumers who want the best product match for their personal use case will likely incur search costs, increasing their opportunity cost of product acquisition and pushing their total cost toward the cost under a personalized-product regime.
Scenario three (a curated-list regime) represents defined guardrails surrounding the display of information gathered, along the same lines as the personalized-product regime. Platforms remain able to gather as much information as they desire in order to make a personalized recommendation, but they display an array of products that represent the first two (or three to four, with tighter anti-preference rules) best-choice options. These options are displayed without ranking the products, allowing the consumer to choose from a curated list, rather than a single product. The scenario-three regime has two effects on the market:
It will tend to decrease prices through increased competition. Since firms can know only which consumers to target, not which will choose the product, they have to effectively compete with closely related products.
It will likely spur innovation and increase competition from nascent competitors.
From an innovation perspective, firms will have to find better methods to differentiate themselves from the competition, increasing the probability of a consumer acquiring their product. Also, considering nascent competitors, a new product has an increased chance of being picked when ranked sufficiently high to be included on the consumer’s curated list. In contrast, the probability of acquisition under scenario one’s personalized-product regime is low, since the new product must be a better match than other, existing products. Similarly, under scenario two’s data-privacy regime, there is so much product substitutability in the market that the probability of choosing any one new product is low.
Below is a list of how the regimes stack up; a toy simulation sketch follows the list:
Personalized-Product: Total welfare is maximized, but prices are relatively higher and competition is relatively lower than under a data-privacy regime.
Data-Privacy: Consumer welfare and competition are maximized, and prices are theoretically minimized, but at the cost of product mismatch. Consumers will face search costs that are not reflected in the prices paid.
Curated-List: Consumer welfare is higher and prices are lower than under a personalized-product regime and competition is lower than under a data-privacy regime, but total welfare is nearly optimal when considering innovation and nascent-competitor effects.
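To make these rankings concrete, below is a minimal toy simulation in Python. This is my own illustrative sketch, not Armstrong and Zhou’s model: the match-value distributions and the three posted prices are hand-picked assumptions chosen only so that the qualitative orderings described above emerge, and none of the magnitudes is meaningful.

```python
import random

random.seed(1)
N_CONSUMERS, N_PRODUCTS, COST = 100_000, 20, 1.0

def consumer_value(regime: str) -> float:
    """Draw one consumer's realized match value under a stylized regime."""
    if regime == "privacy":
        # Scenario two: differentiation is downplayed, so every product is a
        # decent generic match and the consumer effectively picks at random.
        return random.uniform(1.4, 1.6)
    matches = [random.uniform(1.0, 2.0) for _ in range(N_PRODUCTS)]
    if regime == "personalized":
        return max(matches)                         # scenario one: first-best match
    if regime == "curated":
        return random.choice(sorted(matches)[-2:])  # scenario three: top two, unranked
    raise ValueError(regime)

# Hand-picked illustrative prices: personalization supports the largest
# markup, commodity-like competition drives price to cost, and curation
# sits in between.
PRICES = {"personalized": 1.60, "privacy": 1.00, "curated": 1.45}

for regime, price in PRICES.items():
    values = [consumer_value(regime) for _ in range(N_CONSUMERS)]
    buyers = [v for v in values if v >= price]          # buy only if worthwhile
    cs = sum(v - price for v in buyers) / N_CONSUMERS   # average consumer surplus
    ps = len(buyers) * (price - COST) / N_CONSUMERS     # average producer surplus
    print(f"{regime:>12}: consumer={cs:.2f}  producer={ps:.2f}  total={cs + ps:.2f}")
```

With these assumed parameters, the printout echoes the list above: total welfare is highest under the personalized-product regime, consumer surplus is highest under the data-privacy regime (where price falls to cost but producer surplus vanishes), and the curated list lands close to the personalized-product total at a lower price.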
Policy in Context
Applying these theoretical findings to fashion administrable policy prescriptions is understandably difficult. A far easier task is to evaluate the welfare effects of actual and proposed government privacy regulations in the economy. In that light, I briefly assess a recently enacted European data-platform privacy regime and U.S. legislative proposals that would restrict data usage under the guise of bans on “self-preferencing.” I then briefly note the beneficial implications of self-preferencing associated with the two theoretical data-usage scenarios (scenarios one and three) described above (scenario two, data privacy, effectively renders self-preferencing ineffective).
GDPR
The European Union’s General Data Protection Regulation (GDPR)—among the most ambitious and all-encompassing data-privacy regimes to date—has significant negative ramifications for economic welfare. This regulation is most like the second scenario, whereby data collection and utilization are seriously restricted.
The GDPR diminishes competition through its restrictions on data collection and sharing, which reduce the competitive pressure platforms face. For platforms to gain a complete profile of a consumer for personalization, they cannot rely only on data collected on their own platform. To ensure a level of personalization that effectively reduces search costs for consumers, these platforms must be able to acquire data from a range of sources and aggregate that data to create a complete profile. Restrictions on aggregation are what lead to diminished competition online.
The GDPR grants consumers the right to choose both how their data is collected and how it is distributed. Not only do platforms themselves have obligations to ensure consumers’ wishes are met regarding their privacy, but firms that sell data to the platform are obligated to ensure the platform does not infringe consumers’ privacy through aggregation.
This creates a high regulatory burden for both the platform and the data seller and reduces the incentive to transfer data between firms. Since the data seller can be held liable for actions taken by the platform, the price at which the data seller will transfer the data rises significantly. And because the risk of regulatory liability has increased, the cost of data must now incorporate a risk premium, reducing the demand for outside data.

This has the effect of decreasing the quality of personalization and tilting the scales toward larger platforms, which have more robust data-collection practices and are able to leverage economies of scale to absorb high regulatory-enforcement costs. The quality of personalization is decreased, since the platform has an incentive to create a consumption profile based only on activity it directly observes, without considering behavior occurring outside the platform. Additionally, those platforms that are already entrenched and have large user bases are better able to manage the regulatory burden of the GDPR. One survey of U.S. companies with more than 500 workers found that 68% planned to spend between $1 million and $10 million in upfront costs to prepare for GDPR compliance, a number that will likely pale in comparison to long-term compliance costs. For nascent competitors, this outlay of capital represents a significant barrier to entry.

Additionally, as previously discussed, consumers derive some benefit from platforms that can accurately recommend products. If this is the case, then large platforms with vast amounts of accumulated first-party data will be consumers’ destination of choice. When data cannot be easily transferred between parties, this will tend to reduce smaller firms’ ability to compete, simply because they do not have access to the same scale of data as the large platforms.
Self-Preferencing
Claims of anticompetitive behavior by platforms are abundant (e.g., see here and here), and they often focus on the concept of self-preferencing. Self-preferencing refers to when a company uses its economies of scale, scope, or a combination of the two to offer products at a lower price through an in-house brand. In decrying self-preferencing, many commentators and politicians point to an alleged “unfair advantage” in tech platforms’ ability to leverage data and personalization to drive traffic toward their own products.
It is far from clear, however, that this practice reduces consumer welfare. Indeed, numerous commentaries (e.g., see here and here) circulated since the introduction of anti-preferencing bills in the U.S. Congress (House; Senate) have rejected the notion that self-preferencing is anti-competitive or anti-consumer.
There are good reasons to believe that self-preferencing promotes both competition and consumer welfare. Assume that a company that manufactures or contracts for its own, in-house products can offer them at a marginally lower price for the same relative quality. This decrease in price raises consumer welfare. The in-house brand’s entrance into the market also represents a potent competitive threat to firms already producing products, which in turn now have an incentive to lower their own prices or raise the quality of their own goods (or both) to maintain their consumer base. This creates even more consumer welfare, since all consumers, not just the ones purchasing the in-house goods, are better off from the entrance of an in-house brand.
It therefore follows that the entrance of an in-house brand and self-preferencing in the data-utilizing regimes discussed above has the potential to enhance consumer welfare.
In general, the use of data analysis on the platform can allow for targeted product entrance into certain markets. If the platform believes it can make a product of similar quality for a lower price, then it will enter that market and consumers will be able to choose a comparable product for a lower price. (If the company does not believe it is able to produce such a product, it will not enter the market with an in-house brand, and consumer welfare will stay the same.) Consumer welfare will further rise as firms producing products that compete against the in-house brand will innovate to compete more effectively.
To be sure, under a personalized-product regime (scenario one), platforms may appear to have an incentive to self-preference to the detriment of consumers. If consumers trust the platform to show the greatest welfare-producing product before the emergence of an in-house brand, the platform may use this consumer trust to its advantage and suggest its own, potentially consumer-welfare-inferior product instead of a competitor’s welfare-superior product. In such a case, consumer welfare may decrease in the face of an in-house brand’s entrance.
The extent of any such welfare loss, however, may be ameliorated (or eliminated entirely) by the platform’s concern that an unexpectedly low level of house-brand product quality will diminish its reputation. Such a reputational loss could come about due to consumer disappointment, plus the efforts of platform rivals to highlight the in-house product’s inferiority. As such, the platform might decide to enhance the quality of its “inferior” in-house offering, or refrain from offering an in-house brand at all.
A curated-list regime (scenario three) is unequivocally consumer-welfare beneficial. Under such a regime, consumers will be shown several more options (a “manageable” number intended to minimize consumer-search costs) than under a personalized-product regime. Consumers can actively compare the offerings from different firms to determine the correct product for their individual use. In this case, there is no incentive to self-preference to the detriment of the consumer, as the consumer is able to make value judgments between the in-house brand and the alternatives.
If the in-house brand is significantly lower in price, but also lower in quality, consumers may not see the two as interchangeable and steer away from the in-house brand. The same follows when the in-house brand is higher in both price and quality. The only instance where the in-house brand has a strong chance of success is when its price is lower, and its quality higher, than competing products. This will tend to increase consumer welfare. Additionally, the entrance of consumer-welfare-superior products into a competitive market will encourage competing firms to innovate, lower prices, or raise quality, again increasing welfare for all consumers.
Conclusion
What effects do digital platform-data policies have on consumer welfare? As a matter of theory, if providing an increasing number of product choices does not tend to increase consumer welfare, do reductions in prices or increases in quality? What about precise targeting of personal-product choices? How about curation—the idea that a consumer raises his or her level of certainty by outsourcing decision-making to a platform that chooses a small set of products for the consumer’s consideration at any given moment? Apart from these theoretical questions, is the current U.S. legal treatment of platform data usage doing a generally good job of promoting consumer welfare? Finally, considering this overview, are new government interventions in platform data policy likely to benefit or harm consumers?
Recently published economic research develops theoretical scenarios that demonstrate how digital platform curation of consumer data may facilitate welfare-enhancing consumer-purchase decisions. At least implicitly, this research should give pause to proponents of major new restrictions of platform data usage.
Furthermore, a review of actual and proposed regulatory restrictions underscores the serious welfare harm of government meddling in digital platform-data usage.
After the first four years of GDPR, it is clear that there have been significant negative unintended consequences stemming from omnibus privacy regulation. Competition has decreased, regulatory barriers to entry have increased, and consumers are marginally worse off. Since companies are less able and willing to leverage data in their operations and service offerings—due in large part to the risk of hefty fines—they are less able to curate and personalize services to consumers.
Additionally, anti-preferencing bills in the United States threaten to suppress the proper functioning of platform markets and reduce consumer welfare by making the utilization of data in product-market decisions illegal. More research is needed to determine the aggregate welfare effects of such preferencing on platforms, but all early indications point to the fact that consumers are better off when an in-house brand enters the market and increases competition.
Furthermore, current U.S. government policy, which generally allows platforms to use consumer data freely, is good for consumer welfare. Indeed, the consumer-welfare benefits generated by digital platforms, which depend critically on large volumes of data, are enormous. This is documented in a well-reasoned Harvard Business Review article (by an MIT professor and his student) that utilizes online choice experiments based on digital-survey techniques.
The message is clear. Governments should avoid new regulatory meddling in digital platform consumer-data usage practices. Such meddling would harm consumers and undermine the economy.
The Biden administration finally has taken a public position on parallel House (H.R. 3816) and Senate (S. 2992) bills that would impose new welfare-reducing regulatory constraints on the ability of large digital platforms to engage in innovative business practices that benefit consumers and the economy.
The administration’s articulation of its position—set forth in a March 28 U.S. Justice Department (DOJ) letter to House and Senate Judiciary Committee leadership—is a fine example of draftsmanship. With just a few very minor redline edits, which I suggest below, the letter would advance sound and enlightened procompetitive policy.
I hope the DOJ will accept my modest redlines and incorporate them into a new letter to Congress, superseding the March 28 draft. My edited redline and clean revisions of the current draft follow (in the redline draft, text to be deleted is shown in ~~strikethrough~~ and my insertions read in line; the clean version incorporates all edits):
Redline Version
Dear Chairman Nadler, Chairman Cicilline, Representative Jordan, and Representative Buck:
The Department of Justice (Department) appreciates the considerable attention and resources devoted by the House and Senate Committees on the Judiciary over the past several years to ensuring the competitiveness of our digital economy, and writes today to ~~express support for~~ oppose the American Innovation and Choice Online Act, Senate bill S. 2992, and the American Innovation and Choice Online Act, House bill H.R. 3816, which contain similar prohibitions on discriminatory conduct by dominant platforms (the “bills”). Unfortunately, the legislative efforts expended on these bills have been a waste of time.

The Department views the rise of major digital ~~dominant~~ platforms as presenting a great boon to ~~threat to~~ open markets and competition, ~~with~~ bestowing benefits on ~~risks for~~ consumers, businesses, innovation, resiliency, global competitiveness, and our democracy. By enhancing value ~~controlling~~ transmitted through key arteries of the nation’s commerce and communications, such platforms have promoted a more vibrant and innovative ~~can exercise outsized market power in our~~ modern economy. Vesting in government the power to pick winners and losers across markets through legislative regulation as found in the bills ~~in a small number of corporations~~ contravenes the foundations of our capitalist system, and given the increasing importance of these markets, the economic benefits flowing from ~~power of such~~ platform~~s~~ activity are ~~is~~ likely to be curtailed if the bills are passed ~~continue to grow unless checked~~. Enactment of the bills would ~~This~~ put~~s~~ at risk the nation’s economic progress and prosperity, ultimately threatening the economic liberty that undergirds our democracy.

The legislation, if enacted, would emphasize causes of action prohibiting the largest digital platforms from “discriminating” in favor of their own products or services, or among third parties. In so doing, it would eliminate and disincentivize many ~~provide important clarification from Congress on types of discriminatory conduct~~ efficient business arrangements that can materially enhance ~~harm~~ competition. This would thereby undermine ~~improve upon~~ the system of ex ante enforcement through which the United States maintains competitive markets with legal prohibitions on competitively harmful corporate conduct. By mistakenly characterizing ~~confirming the illegality of~~ as anticompetitive platform behaviors that in reality enhance ~~reduce~~ incentives for vigorous innovation and dynamic competition, ~~smaller or newer firms to innovate and compete,~~ the legislation would undermine ~~supplement~~ the existing antitrust laws. Specifically, the legislation would ~~in~~ prevent~~ing~~ the largest digital companies from managing their business transactions in an efficient welfare-enhancing manner, ~~abusing and exploiting their dominant positions~~ to the detriment of competition and the competitive process. The Department is strongly concerned about ~~supportive of~~ these harmful effects. ~~objectives and~~ As such, it encourages both the Committees and Congress to ~~work to~~ abandon all efforts to finalize this legislation ~~finalize this legislation[1]~~ and pass it into law.

The Department views the legislation’s new prohibitions on discrimination as a harmful detriment ~~helpful complement~~ to, and interference with ~~clarification of~~, existing antitrust authority. In our view, the most significant harm ~~benefits~~ would arise where the legislation seeks to elucidate~~s~~ Congress’ views of anticompetitive conduct—particularly with respect to harmful types of discrimination and self-preferencing by dominant platforms. Enumerating specific “discriminatory” and “self-preferencing” conduct that Congress views as anticompetitive and therefore illegal would undermine the economically informed, fact-specific evaluation of business conduct that lies at the heart of modern antitrust analysis, centered on consumer welfare. Modern economic analysis demonstrates that a great deal of superficially “discriminatory” and “self-preferencing” conduct often represents consumer welfare-enhancing behavior that adds to economic surplus. Deciding whether such conduct is procompetitive (welfare-enhancing) or anticompetitive in a particular instance is the role of existing case-by-case antitrust enforcement. This approach vindicates competition while avoiding the wrongful condemnation of economically beneficial behavior. In contrast, by creating a new class of antitrust “wrongs,” the bills would lead to the incorrect condemnation of many business practices that enhance market efficiency and strengthen the economy. ~~clarify the antitrust laws and supplement the available causes of action and legal frameworks to pursue that conduct. Doing so would enhance the ability of the DOJ and FTC to challenge that conduct efficiently and effectively and better enable them to promote competition in digital markets. The legislation also has the potential to effectively harmonize broad prohibitions with the particularized needs and business practices of individual platforms over time.~~

If enacted, we believe that this legislation has the potential to have a major negative ~~positive~~ effect on dynamism in digital markets going forward. Our future global competitiveness depends on innovators and entrepreneurs having the ability to access markets, free from counterproductive inflexible government market regulation ~~dominant incumbents~~ that impede innovation, competition, resiliency, and widespread prosperity. “Discriminatory” conduct by major ~~dominant~~ platforms, properly understood, often benefits ~~can sap the rewards from other~~ innovators and entrepreneurs, increasing ~~reducing~~ the incentives for entrepreneurship and innovation. Even more importantly, the legislation may undercut ~~support~~ the creation and growth of new tech businesses adjacent to the platforms.~~,~~ Such an unfortunate result would reduce the welfare-enhancing initiatives of new businesses that are complementary to the economically beneficial activity (to consumers and producers) generated by the platforms. ~~which may ultimately pose a critically needed competitive check to the covered platforms themselves.~~ We view reduction of these new business initiatives ~~benefits~~ as a significant harm that would stem from passage of the bills. For these reasons, the Department strongly ~~supports the principles and goals animating~~ opposes the legislation and looks forward to working with Congress to further explain why this undoubtedly well-meaning legislative initiative is detrimental to vigorous competition and a strong American economy. ~~ensure that the final legislation enacted meets these goals.~~
Thank you for the opportunity to present our views. We hope this information is helpful. Please do not hesitate to contact this office if we may be of additional assistance to you.
[1] In other words, ~~As~~ the Department respectfully recommends that members of Congress stop wasting time seeking to revise and enact this legislation. ~~members continue to revise the legislation, the Department will provide under separate cover additional assistance to ensure that the bills achieve their goals.~~
Clean Version (incorporating all redline edits)
Dear Chairman Nadler, Chairman Cicilline, Representative Jordan, and Representative Buck:
The Department of Justice (Department) appreciates the considerable attention and resources devoted by the House and Senate Committees on the Judiciary over the past several years to ensuring the competitiveness of our digital economy, and writes today to oppose the American Innovation and Choice Online Act, Senate bill S. 2992, and the American Innovation and Choice Online Act, House bill H.R. 3816, which contain similar prohibitions on discriminatory conduct by dominant platforms (the “bills”). Unfortunately, the legislative efforts expended on these bills have been a waste of time.
The Department views the rise of major digital platforms as presenting a great boon to open markets and competition, bestowing benefits on consumers, businesses, innovation, resiliency, global competitiveness, and our democracy. By enhancing value transmitted through key arteries of the nation’s commerce and communications, such platforms have promoted a more vibrant and innovative modern economy. Vesting in government the power to pick winners and losers across markets through legislative regulation as found in the bills contravenes the foundations of our capitalist system, and given the increasing importance of these markets, the economic benefits flowing from platform activity are likely to be curtailed if the bills are passed. Enactment of the bills would put at risk the nation’s economic progress and prosperity, ultimately threatening the economic liberty that undergirds our democracy.
The legislation, if enacted, would emphasize causes of action prohibiting the largest digital platforms from “discriminating” in favor of their own products or services, or among third parties. In so doing, it would eliminate and disincentivize many efficient business arrangements that can materially enhance competition. This would thereby undermine the system of ex ante enforcement through which the United States maintains competitive markets with legal prohibitions on competitively harmful corporate conduct. By mistakenly characterizing as anticompetitive platform behaviors that in reality enhance incentives for vigorous innovation and dynamic competition, the legislation would undermine the existing antitrust laws. Specifically, the legislation would prevent the largest digital companies from managing their business transactions in an efficient welfare-enhancing manner, to the detriment of competition and the competitive process. The Department is strongly concerned about these harmful effects. As such, it encourages both the Committees and Congress to abandon all efforts to finalize this legislation and pass it into law.[1]
The Department views the legislation’s new prohibitions on discrimination as a harmful detriment to, and interference with, existing antitrust authority. In our view, the most significant harm would arise where the legislation seeks to elucidate Congress’ views of anticompetitive conduct—particularly with respect to harmful types of discrimination and self-preferencing by dominant platforms. Enumerating specific “discriminatory” and “self-preferencing” conduct that Congress views as anticompetitive and therefore illegal would undermine the economically informed, fact-specific evaluation of business conduct that lies at the heart of modern antitrust analysis, centered on consumer welfare. Modern economic analysis demonstrates that a great deal of superficially “discriminatory” and “self-preferencing” conduct often represents consumer welfare-enhancing behavior that adds to economic surplus. Deciding whether such conduct is procompetitive (welfare-enhancing) or anticompetitive in a particular instance is the role of existing case-by-case antitrust enforcement. This approach vindicates competition while avoiding the wrongful condemnation of economically beneficial behavior. In contrast, by creating a new class of antitrust “wrongs,” the bills would lead to the incorrect condemnation of many business practices that enhance market efficiency and strengthen the economy.
If enacted, we believe that this legislation has the potential to have a major negative effect on dynamism in digital markets going forward. Our future global competitiveness depends on innovators and entrepreneurs having the ability to access markets, free from counterproductive inflexible government market regulation that impede innovation, competition, resiliency, and widespread prosperity. “Discriminatory” conduct by major platforms, properly understood, often benefits innovators and entrepreneurs, increasing the incentives for entrepreneurship and innovation. Even more importantly, the legislation may undercut the creation and growth of new tech businesses adjacent to the platforms. Such an unfortunate result would reduce the welfare-enhancing initiatives of new businesses that are complementary to the economically beneficial activity (to consumers and producers) generated by the platforms. We view reduction of these new business initiatives as a significant harm that would stem from passage of the bills. For these reasons, the Department strongly opposes the legislation and looks forward to working with Congress to further explain why this undoubtedly well-meaning legislative initiative is detrimental to vigorous competition and a strong American economy.
Thank you for the opportunity to present our views. We hope this information is helpful. Please do not hesitate to contact this office if we may be of additional assistance to you.
[1] In other words, the Department respectfully recommends that members of Congress stop wasting time seeking to revise and enact this legislation.
The Autorità Garante della Concorrenza e del Mercato (AGCM), Italy’s competition and consumer-protection watchdog, on Nov. 25 handed down fines against Google and Apple of €10 million each—the maximum penalty contemplated by the law—for alleged unfair commercial practices. Ultimately, the two decisions stand as textbook examples of why regulators should, wherever possible, strongly defer to consumer preferences, rather than substitute their own.
The Alleged Infringements
The AGCM has brought two practically identical cases built around two interrelated claims. The first claim is that the companies have not properly informed users that the data they consent to share will be used for commercial purposes. The second is that, by making users opt out if they don’t want to consent to data sharing, the companies unduly restrict users’ freedom of choice and constrain them to accept terms they would not otherwise have accepted.
According to the AGCM, Apple and Google’s behavior infringes Articles 20, 21, 22, 24 and 25 of the Italian Consumer Code. The first three provisions prohibit misleading business practices, and are typically applied to conduct such as lying, fraud, the sale of unsafe products, or the omission or otherwise deliberate misrepresentation of facts in ways that would deceive the average user. The conduct caught by the first claim would allegedly fall into this category.
The last two provisions, by contrast, refer to aggressive business practices such as coercion, blackmail, verbal threats, and even physical harassment capable of “limiting the freedom of choice of users.” The conduct described in the second claim would fall here.
The First Claim
The AGCM’s first claim does not dispute that the companies informed users about the commercial use of their data. Instead, the authority argues that the companies are not sufficiently transparent in how they inform users.
Let’s start with Google. Upon creating a Google ID, users can click to view the “Privacy and Terms” disclosure, which details the types of data that Google processes and the reasons that it does so. As Figure 1 below demonstrates, the company explains that it processes data: “to publish personalized ads, based on your account settings, on Google services as well as on other partner sites and apps” (translation of the Italian text highlighted in the first red rectangle). Below, under the “data combination” heading, the user is further informed that: “in accordance with the settings of your account, we show you personalized ads based on the information gathered from your combined activity on Google and YouTube” (the section in the second red rectangle).
Figure 1: AGCM Google decision, p. 7
After creating a Google ID, a pop-up once again reminds the user that “this Google account is configured to include the personalization function, which provides tips and personalized ads based on the information saved on your account. [And that] you can select ‘other options’ to change the personalization settings as well as the information saved in your account.”
The AGCM sees two problems with this. First, the user must click on “Privacy and Terms” to be told what Google does with their data and why; viewing this information is not an unavoidable step in the registration process. Second, the AGCM finds it unacceptable that the commercial use of data is listed together with other, non-commercial uses, such as improved quality, security, etc. (the other items listed in Figure 1). The allegation is that this leads to confusion and makes it less likely that users will notice the commercial aspects of data usage.
A similar argument is made in the Apple decision, where the AGCM similarly contends that users are not properly informed that their data may be used for commercial purposes. As shown in Figure 2, upon creating an Apple ID, users are asked to consent to receive “communications” (notifications, tips, and updates on Apple products, services, and software) and “Apps, music, TV, and other” (latest releases, exclusive content, special offers, tips on apps, music, films, TV programs, books, podcasts, Apple Pay and others).
Figure 2: AGCM Apple decision, p. 8
If users click on “see how your data is managed”—located just above the “Continue” button, as shown in Figure 2—they are taken to another page, where they are given more detailed information about what data Apple collects and how it is used. Apple discloses that it may employ user data to send communications and marketing e-mails about new products and services. Categories are clearly delineated and users are reminded that, if they wish to change their marketing email preferences, they can do so by going to appleid.apple.com. The word “data” is used 40 times and the taxonomy of the kind of data gathered by Apple is truly comprehensive. See for yourself.
The App Store, Apple Book Store, and iTunes Store have similar clickable options (“see how your data is managed”) that lead to pages with detailed information about how Apple uses data. This includes unambiguous references to so-called “commercial use” (e.g., “Apple uses information on your purchases, downloads, and other activities to send you tailored ads and notifications relative to Apple marketing campaigns.”)
But these disclosures failed to convince the AGCM that users are sufficiently aware that their data may be used for commercial purposes. The two reasons cited in the opinion mirror those in the Google decision. First, the authority claims that the design of the “see how your data is managed” option does not “induce the user to click on it” (see the marked area in Figure 2). Further, it notes that accessing the “Apple ID Privacy” page requires a “voluntary and eventual [i.e., hypothetical]” action by the user. According to the AGCM, this leads to a situation in which “the average user” is not “directly and intuitively” aware of the magnitude of data used for commercial purposes, and is instead led to believe that data is shared to improve the functionality of the Apple product and the Apple ecosystem.
The Second Claim
The AGCM’s second claim contends that the opt-out mechanism used by both Apple and Google “limits and conditions” users’ freedom of choice by nudging them toward the companies’ preferred option—i.e., granting the widest possible consent to process data for commercial use.
In Google’s case, the AGCM first notes that, when creating a Google ID, a user must take an additional discretionary step before they can opt out of data sharing. This refers to the mechanism by which a user must click the words “OTHER OPTIONS,” in bright blue capitalized font, as shown in Figure 3 below (first blue rectangle, upper right corner).
Figure 3: AGCM Google decision, p. 22
The AGCM’s complaint here is that it is insufficient to grant users merely the possibility of opting out, as Google does. Rather, the authority contends, users must be explicitly asked whether they wish to share their data. As in the first claim, the AGCM holds that questions relating to the commercial use of data must be woven in as unavoidable steps in the registration process.
The AGCM also posits that the opt-out mechanism itself (in the lower left corner of Figure 3) “restricts and conditions” users’ freedom of choice by preventing them from “expressly and preventively” manifesting their real preferences. The contention is that, if presented with an opt-in checkbox, users would choose differently—and thus, from the authority’s point of view, choose correctly. Indeed, from the fact that the vast majority of users (80-100%, according to the authority) have not opted out of data sharing, the AGCM concludes that “a significant number of subscribers have been induced to make a commercial decision without being aware of it.”
A similar argument is made in the Apple decision. Here, the issue is the supposed difficulty of the opt-out mechanism, which the AGCM describes as “intricate and non-immediate.” If a user wishes to opt out of data sharing, he or she would not only have to “uncheck” the checkboxes displayed in Figure 2, but also do the same in the Apple Store with respect to their preferences for other individual Apple products. This “intricate” process generally involves two to three steps. For instance, to opt out of “personalized tips,” a user must first go to Settings, then select their name, then multimedia files, and then “deactivate personalized tips.”
According to the AGCM, the registration process is set up in such a way that the users’ consent is not informed, free, and specific. It concludes:
The consumer, entangled in this system, of which he is not aware, is conditioned in his choices, undergoing the transfer of his data, which the professional can dispose of for his own promotional purposes.
The AGCM’s decisions fail on three fronts: they are speculative, they are paternalistic, and they fall prey to the Nirvana fallacy. They are also underpinned by an extremely uncharitable conception of what the “average user” knows and understands.
Epistemic Modesty Under Uncertainty
The AGCM makes far-reaching and speculative assumptions about user behavior based on incomplete knowledge. For instance, both Google and Apple’s registration processes make clear that they gather users’ data for advertising purposes—which, especially in the relevant context, cannot be interpreted by a user as anything but “commercial” (even under the AGCM’s pessimistic assumptions about the “average user.”) It’s true that the disclosure requires the user to click “see how your data is managed” (Apple) or “Privacy and Terms” (Google). But it’s not at all clear that this is less transparent than, say, the obligatory scroll-text that most users will ignore before blindly clicking to accept.
For example, in registering for a Blizzard account (a gaming service), users are forced to read the company’s lengthy terms and conditions, with information on the “commercial use” of data buried somewhere in a seven-page document of legalese. Does it really follow from this that Blizzard users are better informed about the commercial use of their data? I don’t think so.
Rather than the obligatory scroll-text, the AGCM may have in mind some sort of pop-up screen. But would this mean that companies should also include separate, obligatory pop-ups for every other relevant aspect of their terms and conditions? This would presumably take us back to square one, as the AGCM’s complaint was that Google amalgamated commercial and non-commercial uses of data under the same title. Perhaps the pop-up for the commercial use of data would have to be made more conspicuous. This would presumably require a normative hierarchy of the companies’ terms and conditions, listed in order of relevance for users. That would raise other thorny questions. For instance, should information about the commercial use of data be more prominently displayed than information about safety and security?
A reasonable alternative—especially under conditions of uncertainty—would be to leave Google and Apple alone to determine the best way to inform consumers, because nobody reads the terms and conditions anyway, no matter how they are presented. Moreover, the AGCM offers no evidence to support its contention that companies’ opt-out mechanisms lead more users to share their data than would freely choose to do so.
Whose Preferences?
The AGCM also replaces revealed user preferences with its own view of what those preferences should be. For instance, the AGCM doesn’t explain why opting to share data for commercial purposes would be, in principle, a bad thing. There are a number of plausible and legitimate explanations for why a user would opt for more generous data-sharing arrangements: they may believe that data sharing will improve their experience; may wish to receive tailored ads rather than generic ones; or may simply value a company’s product and see data sharing as a fair exchange. None of these explanations—or, indeed, any others—are ever contemplated in the AGCM decision.
Assuming that opt-outs, facultative terms and conditions screens, and two-to-three-step procedures to change one’s preferences truncate users’ “freedom of choice” is paternalistic and divorced from the reality of the average person, and the average Italian.
Ideal or Illegal?
At the heart of the AGCM decisions is the notion that it is proper to punish market actors wherever the real doesn’t match a regulator’s vision of the ideal—commonly known as “the Nirvana fallacy.” When the AGCM claims that Apple and Google do not properly disclose the commercial use of user data, or that the offered opt-out mechanism is opaque or manipulative, the question is: compared to what? There will always be theoretically “better” ways of granting users the choice to opt out of sharing their data. The test should not be whether a company falls short of some ideal imagined practice, but whether the existing mechanism actually deceives users.
There is nothing in the AGCM’s decisions to suggest that it does. Depending on how precipitously one lowers the bar for what the “average user” would understand, just about any intervention might be justified, in principle. But to justify the AGCM’s intervention in this case requires stretching the plausible ignorance of the average user to its absolute theoretical limits.
Conclusion
Even if a court were to buy the AGCM’s impossibly low view of the “average user” and grant the first claim—which would be unfortunate, but plausible—not even the most liberal reading of Articles 24 and 25 can support the view that “overly complex, non-immediate” opt-outs, as interpreted by the AGCM, limit users’ freedom of choice in any way comparable to the type of conduct described in those provisions (coercion, blackmail, verbal threats, etc.).
The AGCM decisions are shot through with unsubstantiated assumptions about users’ habits and preferences, and risk imposing undue burdens not only on the companies, but on users themselves. With some luck, they will be struck down by a sensible judge. In the meantime, however, the trend of regulatory paternalism and over-enforcement continues. Much like in the United States, where the Federal Trade Commission (FTC) has occasionally engaged in product-design decisions that substitute the commission’s own preferences for those of consumers, regulators around the world continue to think they know better than consumers about what’s in their best interests.
A debate has broken out among the four sitting members of the Federal Trade Commission (FTC) in connection with the recently submitted FTC Report to Congress on Privacy and Security. Chair Lina Khan argues that the commission “must explore using its rulemaking tools to codify baseline protections,” while Commissioner Rebecca Kelly Slaughter has urged the FTC to initiate a broad-based rulemaking proceeding on data privacy and security. By contrast, Commissioners Noah Joshua Phillips and Christine Wilson counsel against a broad-based regulatory initiative on privacy.
Decisions to initiate a rulemaking should be viewed through a cost-benefit lens (see summaries of Thom Lambert’s masterful treatment of regulation, of which rulemaking is a subset, here and here). Unless there is a market failure, rulemaking is not called for. Even in the face of market failure, regulation should not be adopted unless it is more cost-beneficial than reliance on markets (including the ability of public and private litigation to address market-failure problems, such as data theft). For a variety of reasons, it is unlikely that FTC rulemaking directed at privacy and data security would pass a cost-benefit test.
Discussion
As I have previously explained (see here and here), FTC rulemaking pursuant to Section 6(g) of the FTC Act (which authorizes the FTC “to make rules and regulations for the purpose of carrying out the provisions of this subchapter”) is properly read as authorizing mere procedural, not substantive, rules. As such, efforts to enact substantive competition rules would not pass a cost-benefit test. Such rules could well be struck down as beyond the FTC’s authority on constitutional law grounds, and as “arbitrary and capricious” on administrative law grounds. What’s more, they would represent retrograde policy. Competition rules would generate higher error costs than adjudications; they could be deemed to undermine the rule of law, because the U.S. Justice Department (DOJ) could not apply such rules; and they would chill innovative, efficiency-seeking business arrangements.
Accordingly, the FTC likely would not pursue 6(g) rulemaking should it decide to address data security and privacy, a topic that best fits under the “consumer protection” category. Rather, the FTC would most likely initiate a “Magnuson-Moss” rulemaking (MMR) under Section 18 of the FTC Act, which authorizes the commission to prescribe “rules which define with specificity acts or practices which are unfair or deceptive acts or practices in or affecting commerce within the meaning of Section 5(a)(1) of the Act.” Among other things, Section 18 requires that the commission’s rulemaking proceedings provide an opportunity for informal hearings at which interested parties are accorded limited rights of cross-examination. Also, before commencing an MMR proceeding, the FTC must have reason to believe the practices addressed by the rulemaking are “prevalent.” 15 U.S.C. Sec. 57a(b)(3).
MMR proceedings, which are not governed by the Administrative Procedure Act (APA), do not present the same degree of legal problems as Section 6(g) rulemakings (see here). The question of legal authority to adopt a substantive rule is not raised; “rule of law” problems are far less serious (the DOJ is not a parallel enforcer of consumer-protection law); and APA issues of “arbitrariness” and “capriciousness” are not directly presented. Indeed, MMR proceedings include a variety of procedures aimed at promoting fairness (see here, for example). An MMR proceeding directed at data privacy predictably would be based on the claim that the failure to adhere to certain data-protection norms is an “unfair act or practice.”
Nevertheless, MMR rules would be subject to two substantial sources of legal risk.
The first of these arises out of federalism. Three states (California, Colorado, and Virginia) recently have enacted comprehensive data-privacy laws, and a large number of other state legislatures are considering data-privacy bills (see here). The proliferation of state data-privacy statutes would raise the risk of inconsistent and duplicative regulatory norms, potentially chilling business innovations aimed at data protection (a severe problem in the Internet Age, when business data-protection programs typically will have interstate effects).
An FTC MMR data-protection regulation that successfully “occupied the field” and preempted such state provisions could eliminate that source of costs. The Magnuson-Moss Warranty Act, however, does not contain an explicit preemption clause, leaving in serious doubt the ability of an FTC rule to displace state regulations (see here for a summary of the murky state of preemption law, including the skepticism of textualist Supreme Court justices toward implied “obstacle preemption”). In particular, the long history of state consumer-protection and antitrust laws that coexist with federal laws suggests that the case for FTC rule-based displacement of state data protection is a weak one. The upshot, then, of a Section 18 FTC data-protection rule enactment could be “the worst of all possible worlds,” with drawn-out litigation leading to competing federal and state norms that multiplied business costs.
The second source of risk arises out of the statutory definition of “unfair practices,” found in Section 5(n) of the FTC Act. Section 5(n) codifies the meaning of unfair practices, and thereby constrains the FTC’s ability to promulgate rules covering such practices. Section 5(n) states:
The Commission shall have no authority . . . to declare unlawful an act or practice on the grounds that such an act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.
In effect, Section 5(n) subjects unfair practices to a well-defined cost-benefit framework. Thus, in promulgating a data-privacy MMR, the FTC first would have to demonstrate that specific disfavored data-protection practices caused or were likely to cause substantial harm. What’s more, the commission would have to show that any actual or likely harm would not be outweighed by countervailing benefits to consumers or competition. One would expect that a data-privacy rulemaking record would include submissions that pointed to the efficiencies of existing data-protection policies that would be displaced by a rule.
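To see the structure of this test at a glance, it can be stated in stylized form (the formalization is my own, not statutory language). Letting H(a) denote the expected consumer injury caused by an act or practice a, and B(a) the countervailing benefits to consumers or competition, Section 5(n) permits a finding of unfairness only if

\[ \text{Unfair}(a) \;\Longleftrightarrow\; H(a)\ \text{is substantial} \;\wedge\; \neg\,\text{ReasonablyAvoidable}(a) \;\wedge\; H(a) > B(a). \]

Each conjunct is an independent hurdle: a rule that fails to establish any one of the three, for the practices it condemns, exceeds the commission’s unfairness authority.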
Moreover, subsequent federal court challenges to a final FTC rule likely would put forth the consumer and competitive benefits sacrificed by rule requirements. For example, rule challengers might point to the added business costs passed on to consumers that would arise from particular rule mandates, and the diminution in competition among data-protection systems generated by specific rule provisions. Litigation uncertainties surrounding these issues could be substantial and would cast into further doubt the legal viability of any final FTC data-protection rule.
Apart from these legal risk-based costs, an MMR data-privacy rule predictably would generate error-based costs. Given imperfect information in the hands of government and the impossibility of achieving welfare-maximizing nirvana through regulation (see, for example, here), any MMR data-privacy rule would erroneously condemn some economically efficient business protocols and disincentivize some efficiency-seeking behavior. The Section 5(n) cost-benefit framework, though helpful, would not eliminate such error. (For example, even bureaucratic efforts to accommodate some business suggestions during the rulemaking process might tilt the post-rule market in favor of certain business models, thereby distorting competition.) In the abstract, it is difficult to say whether the welfare benefits of a final MMR data-privacy rule (measured by reductions in data-privacy-related consumer harm) would outweigh the costs, even before taking legal costs into account.
Conclusion
At least two FTC commissioners (and likely a third, assuming that President Joe Biden’s highly credentialed nominee Alvaro Bedoya will be confirmed by the U.S. Senate) appear to support FTC data-privacy regulation, even in the absence of new federal legislation. Such regulation, which presumably would be adopted as an MMR pursuant to Section 18 of the FTC Act, would probably not prove cost-beneficial. Not only would adoption of a final data-privacy rule generate substantial litigation costs and uncertainty, it quite possibly would add a layer of regulatory burdens above and beyond the requirements of proliferating state privacy rules. Furthermore, it is impossible to say whether the consumer-privacy benefits stemming from such an FTC rule would outweigh the error costs (manifested through competitive distortions and consumer harm) stemming from the inevitable imperfections of the rule’s requirements. All told, these considerations counsel against the allocation of scarce FTC resources to a Section 18 data-privacy rulemaking initiative.
But what about legislation? New federal privacy legislation that explicitly preempted state law would eliminate costs arising from inconsistencies among state privacy rules. Ideally, if such legislation were to be pursued, it should, to the extent possible, embody a cost-benefit framework designed to minimize the sum of administrative (including litigation) and error costs. The nature of such a possible law, and the role the FTC might play in administering it, are, however, topics for another day.
Federal Trade Commission (FTC) Chair Lina Khan’s Sept. 22 memorandum to FTC commissioners and staff—entitled “Vision and Priorities for the FTC” (VP Memo)—offers valuable insights into the chair’s strategy and policy agenda for the commission. Unfortunately, it lacks an appreciation for the limits of antitrust and consumer-protection law; it also would have benefited from greater regulatory humility. After summarizing the VP Memo’s key sections, I set forth four key takeaways from this rather unusual missive.
Introduction
The VP Memo begins appropriately enough, with praise for commission staff and a call to focus on key FTC strategic priorities and operational objectives. So far, so good. Regrettably, the introductory section is the memo’s strongest feature.
Strategic Approach
The VP Memo’s first substantive section, which lays out Khan’s strategic approach, raises questions that require further clarification.
This section is long on glittering generalities. First, it begins with the need to take a “holistic approach” that recognizes that law violations harm workers and independent businesses, as well as consumers. Legal violations that reflect “power asymmetries” and harm to “marginalized communities” are emphasized, but not defined. Is the chair proposing new enforcement standards that would supplement or displace consumer-welfare enhancement?
Second, similar ambiguity surrounds the need to target enforcement efforts toward “root causes” of unlawful conduct, rather than “one-off effects.” Root causes are said to involve “structural incentives that enable unlawful conduct” (such as conflicts of interest, business models, or structural dominance), as well as “upstream” examination of firms that profit from such conduct. How these observations may be “operationalized” into case-selection criteria (and why they are superior to alternative means of spotting illegal behavior) is left unexplained.
Third, the section endorses a more “rigorous and empiricism-driven approach” to the FTC’s work, a “more interdisciplinary approach” that incorporates “a greater range of analytical tools and skillsets.” This recommendation is not problematic on its face, though it is a bit puzzling. The FTC already relies heavily on economics and empirical work, as well as input from technologists, advertising specialists, and other subject matter experts, as required. What other skillsets are being endorsed? (A more far-reaching application of economic thinking in certain consumer-protection cases would be helpful, but one suspects that is not the point of the paragraph.)
Fourth, the need to be especially attentive to next-generation technologies, innovations, and nascent industries is trumpeted. Fine, but the FTC already does that in its competition and consumer-protection investigations.
Finally, the need to “democratize” the agency is highlighted, to keep the FTC in tune with “the real problems that Americans are facing in their daily lives and using that understanding to inform our work.” This statement seems to imply that the FTC is not adequately dealing with “real problems.” The FTC, however, has not been designated by Congress to be a general-purpose problem solver. Rather, the agency has a specific statutory remit to combat anticompetitive activity and unfair acts or practices that harm consumers. Ironically, under Chair Khan, the FTC has abruptly implemented major changes in key areas (including rulemaking, the withdrawal of guidance, and merger-review practices) without prior public input or consultation among the commissioners (see, for example, here)—actions that could be deemed undemocratic.
Policy Priorities
The memo’s brief discussion of Khan’s policy priorities raises three significant concerns.
First, Khan stresses the “need to address rampant consolidation and the dominance that it has enabled across markets” in the areas of merger enforcement and dominant-firm scrutiny. The claim that competition has substantially diminished has been critiqued by leading economists, and is dubious at best (see, for example, here). This flat assertion is jarring, and in tension with the earlier call for more empirical analysis. Khan’s call for revision of the merger guidelines (presumably both horizontal and vertical), in tandem with the U.S. Justice Department (DOJ), will be headed for trouble if it departs from the economic reasoning that has informed prior revisions of those guidelines. (The memo’s critical and cryptic reference to the “narrow and outdated framework” of recent guidelines provides no clue as to the new guidelines format that Chair Khan might deem acceptable.)
Second, the chair supports prioritizing “dominant intermediaries” and “extractive business models,” while raising concerns about “private equity and other investment vehicles” that “strip productive capacity” and “target marginalized communities.” No explanation is given as to why such prioritization will best utilize the FTC’s scarce resources to root out harmful anticompetitive behavior and consumer-protection harms. By assuming from the outset that certain “unsavory actors” merit prioritization, this discussion also is in tension with an empirical approach that dispassionately examines the facts in determining how resources should best be allocated to maximize the benefits of enforcement.
Third, the chair wants to direct special attention to “one-sided contract provisions” that place “[c]onsumers, workers, franchisees, and other market participants … at a significant disadvantage.” Non-competes, repair restrictions, and exclusionary clauses are mentioned as examples. What is missing is a realistic acknowledgement of the legal complications that would be involved in challenging such provisions, and a recognition of the possible welfare benefits that such restraints could generate under many circumstances. In that vein, the perceived inequalities in bargaining power alluded to in the discussion do not, in and of themselves, constitute antitrust or consumer-protection violations.
Operational Objectives
The closing section, on “operational objectives,” is not particularly troublesome. It supports an “integrated approach” to enforcement and policy tools, and endorses “breaking down silos” between competition (BC) and consumer-protection (BCP) staff. (Of course, while greater coordination between BC and BCP occasionally may be desirable, competition and consumer-protection cases will continue to feature significant subject matter and legal differences.) It also calls for greater diversity in recruitment and a greater staffing emphasis on regional offices. Finally, it endorses bringing in more experts from “outside disciplines” and more rigorous analysis of conduct, remedies, and market studies. These points, although not controversial, do not directly come to grips with questions of optimal resource allocation within the agency, which the FTC will have to address.
Evaluating the VP Memo: 4 Key Takeaways
The VP Memo is a highly aggressive call-to-arms that embodies Chair Khan’s full-blown progressive vision for the FTC. There are four key takeaways:
Promoting the consumer interest, which for decades has been the overarching principle in both FTC antitrust and consumer-protection cases (which address different sources of consumer harm), is passé. Protecting consumers is only referred to in passing. Rather, the concerns of workers, “honest businesses,” and “marginalized communities” are emphasized. Courts will, however, continue to focus on established consumer-welfare and consumer-harm principles in ruling on antitrust and consumer-protection cases. If the FTC hopes to have any success in winning future cases based on novel forms of harm, it will have to ensure that its new case-selection criteria also emphasize behavior that harms consumers.
Despite multiple references to empiricism and analytical rigor, the VP Memo ignores the potential economic-welfare benefits of the categories of behavior it singles out for condemnation. The memo’s critiques of “middlemen,” “gatekeepers,” “extractive business models,” “private equity,” and various types of vertical contracts reference conduct that frequently promotes efficiency, generating welfare benefits for producers and consumers. Even if FTC lawsuits or regulations directed at these practices fail, the business uncertainty generated by the critiques could well disincentivize efficient forms of conduct that spark innovation and economic growth.
The VP Memo in effect calls for new enforcement initiatives that challenge conduct different in nature from FTC cases brought in recent decades. This implicit support for lawsuits that would go well beyond existing judicial interpretations of the FTC’s competition and consumer-protection authority reflects unwarranted hubris. This April, in the AMG case, the U.S. Supreme Court unanimously rejected the FTC’s argument that it had implicit authority to obtain monetary relief under Section 13(b) of the FTC Act, which authorizes permanent injunctions, despite the fact that several appellate courts had found such authority existed. The Court stated that the FTC could go to Congress if it wanted broader authority. This decision bodes ill for any future FTC efforts to expand its authority into new realms of “unfair” activity through “creative” lawyering.
Chair Khan’s unilateral statement of her policy priorities embodied in the VP Memo bespeaks a lack of humility. It ignores a long history of consensus FTC statements on agency priorities, reflected in numerous commission submissions to congressional committees in connection with oversight hearings. Although commissioners have disagreed on specific policy statements or enforcement complaints, general “big picture” policy statements to congressional overseers typically have been issued by unanimous vote. By ignoring this practice, the VP Memo departs from a longstanding bipartisan tradition, in a way that will tend to undermine the FTC’s image as a serious deliberative body that seeks to reconcile varying viewpoints (while recognizing that, at times, different positions will be expressed on particular matters). If the FTC acts more and more like a one-person executive agency, why does it need to be “independent,” and, indeed, what special purpose does it serve as a second voice on federal antitrust matters? Under seemingly unilateral rule, the prestige of the FTC before federal courts may suffer, undermining its effectiveness in defending enforcement actions and promulgating rules. This will particularly be the case if more and more FTC decisions are taken by a 3-2 vote and appear to reflect little or no consultation with minority commissioners.
Conclusion
The VP Memo reflects a lack of humility and strategic insight. It sets forth priorities that are disconnected from the traditional core of the FTC’s consumer-welfare-centric mission. It emphasizes new sorts of initiatives that are likely to “crash and burn” in the courts, unless they are better anchored to established case law and FTC enforcement principles. As a unilateral missive announcing an unprecedented change in policy direction, the memo also undermines the tradition of collegiality and reasoned debate that generally has characterized the commission’s activities in recent decades.
As such, the memo will undercut, not advance, the effectiveness of FTC advocacy before the courts. It will also undermine the FTC’s reputation as a truly independent deliberative body. Accordingly, one may hope that Chair Khan will rethink her approach, withdraw the VP Memo, and work with all of her fellow commissioners to recraft a new consensus policy document.
From Sen. Elizabeth Warren (D-Mass.) to Sen. Josh Hawley (R-Mo.), populist calls to “fix” our antitrust laws and the underlying Consumer Welfare Standard have found a foothold on Capitol Hill. At the same time, there are calls to “fix” the Supreme Court by packing it with new justices. The court’s unanimous decision in NCAA v. Alston demonstrates that neither needs repair. To the contrary, clearly anti-competitive conduct—like the NCAA’s compensation rules—is proscribed under the Consumer Welfare Standard, and every justice from Samuel Alito to Sonia Sotomayor can agree on that.
In 1984, the court in NCAA v. Board of Regents suggested that “courts should take care when assessing the NCAA’s restraints on student-athlete compensation.” After all, joint ventures like sports leagues are entitled to rule-of-reason treatment. But while times change, the Consumer Welfare Standard is sufficiently flexible to meet those changes.
Where a competitive restraint exists primarily to ensure that “enormous sums of money flow to seemingly everyone except the student athletes,” the court rightly calls it out for what it is. As Associate Justice Brett Kavanaugh wrote in his concurrence:
Nowhere else in America can businesses get away with agreeing not to pay their workers a fair market rate on the theory that their product is defined by not paying their workers a fair market rate. And under ordinary principles of antitrust law, it is not evident why college sports should be any different. The NCAA is not above the law.
Disturbing these “ordinary principles”—whether through legislation, administrative rulemaking, or the common law—is simply unnecessary. For example, the Open Markets Institute filed an amicus brief arguing that the rule of reason should be “bounded” and willfully blind to the pro-competitive benefits some joint ventures can create (an argument that has been used, unsuccessfully, to attack ridesharing services like Uber and Lyft). Sen. Amy Klobuchar (D-Minn.) has proposed shifting the burden of proof so that merging parties are guilty until proven innocent. Sen. Warren would go further, deeming Amazon’s acquisition of Whole Foods anti-competitive simply because the company is “big,” and ignoring the merger’s myriad pro-competitive benefits. Sen. Hawley has gone further still: calling on Amazon to be investigated criminally for the crime of being innovative and successful.
Several of the current proposals, including those from Sens. Klobuchar and Hawley (and those recently introduced in the House that essentially single out firms for disfavored treatment), would replace the Consumer Welfare Standard that has underpinned antitrust law for decades with a policy that effectively punishes firms for being politically unpopular.
These examples demonstrate we should be wary when those in power assert that things are so irreparably broken that they need a complete overhaul. The “solutions” peddled usually increase politicians’ power by enabling them to pick winners and losers through top-down approaches that stifle the bottom-up innovations that make consumers’ lives better.
Are antitrust law and the Supreme Court perfect? Hardly. But in a 9-0 decision, the court proved this week that there’s nothing broken about either.
Questions concerning the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on General Equilibrium Theory is widely acknowledged as one of the most important achievements of modern economics.
But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.
This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.
Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.
Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.
Bees
Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.
The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers, and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure: bees fly where they please and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both orchards and hives. This led James Meade to conclude:
[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.
Other economists echoed the point:
If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.
It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?
The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3 km). This left economic agents with ample scope to prevent free-riding.
Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:
Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.
But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:
Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.
In short, not only did the bee/orchard externality model fail, but it failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. The bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.
The Lighthouse
Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.
Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is mostly impossible to determine whether a ship has paid fees and to turn off the lighthouse if that is not the case). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:
Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.
He added that:
[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.
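Samuelson’s welfare argument can be restated compactly (the notation here is a stylized gloss of my own, not his). If the marginal cost of serving one more ship is zero, the socially optimal toll is \( \tau^{*} = 0 \); any positive toll \( \tau > 0 \) excludes exactly those ships whose value of passage v falls in \( (0, \tau) \), generating a deadweight loss of

\[ DWL(\tau) \;=\; \int_{0}^{\tau} v \, dF(v), \]

where F is the distribution of ships’ per-passage valuations. Note that the size of this loss is an empirical matter: it depends entirely on how many ships actually value passage at less than the toll, a point that becomes important below.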
More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.
What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:
[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.
In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.
Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:
The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.
Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on this arrangement. It also entailed some level of market power: the ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero), though it is worth noting that tying port fees and light dues might also have decreased double marginalization, to the benefit of sailors.
Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:
Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?
However, as Coase explained, light fees represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:
[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.
Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and lack of competition (and the information it generates) tend to stem from the latter.
Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.
The Tragedy of the Commons
Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.
The most famous formulation of this problem is Garrett Hardin’s highly influential (over 47,000 citations) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:
The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.
In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
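Hardin’s parable maps onto a textbook formalization (the notation here is standard in the literature, not Hardin’s own). Suppose G animals graze a common, each yielding value v(G), with v'(G) < 0 as the pasture degrades, and let c be the cost of an additional animal. A social planner adds animals until

\[ v(G^{*}) + G^{*}\,v'(G^{*}) = c, \]

internalizing the damage the marginal animal inflicts on the entire herd. An individual herdsman with g_i animals bears only his own share of that damage, so he adds animals until

\[ v(\hat{G}) + g_i\,v'(\hat{G}) = c. \]

Since each g_i is far smaller than G, the externality term nearly vanishes and \( \hat{G} > G^{*} \): the common is overgrazed. Whether real-world users actually remain trapped in this equilibrium, or contract around it, is the empirical question taken up below.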
Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:
The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.
As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel Prize-winning work, Elinor Ostrom showed that economic agents often found ways to markedly mitigate these potential externalities. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.
Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard-essential patent industry.
These bottom-up solutions are certainly not perfect. Many commons institutions fail—for example, Elinor Ostrom documents several problematic fisheries, groundwater basins, and forests, although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:
Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:
Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.
In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: what works better in each case: government intervention, propertization, or emergent rules and norms?
More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:
The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.
Dvorak Keyboards
In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history then became a dominant narrative in the field of network economics, informing works by Joseph Farrell and Garth Saloner, as well as Jean Tirole.
The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:
Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]
Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.
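The lock-in logic these papers formalize can be sketched in a few lines (a minimal sketch, with notation of my own choosing). Let a user’s utility from standard k be

\[ u_k = q_k + \beta\, n_k, \]

where q_k is the standard’s intrinsic quality, n_k its installed base, and \( \beta \) the strength of the network effect. Even if Dvorak were intrinsically better (q_D > q_Q), no individual user switches so long as

\[ q_D - q_Q \;<\; \beta\,(n_Q - n_D) + s, \]

with s the private switching cost. On this account, society can remain coordinated on the inferior standard indefinitely. The model is internally coherent; the question, taken up below, is whether its premises (above all, that q_D > q_Q) were ever satisfied.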
Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard layout market. They almost entirely rejected the notion that QWERTY prevailed despite being the inferior standard:
Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.
In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.
Killzones, Zoom, and TikTok
If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.
For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:
If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.
Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence to support the contention that it occurs in real-world settings. Admittedly, the paper does present evidence of reduced venture-capital investments after mergers involving large tech firms. But even on its own terms, this evidence simply does not support the authors’ behavioral assumption.
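One rough way to restate the assumption at issue (my simplification, not the authors’ formal model): a user adopts the entrant platform when its quality advantage \( \Delta q \) exceeds the switching cost s; but if the user expects the entrant to be absorbed by the incumbent with probability p (in which case the improvement arrives anyway, without switching), adoption requires

\[ (1 - p)\,\Delta q \;>\; s, \quad \text{i.e.,} \quad \Delta q \;>\; \frac{s}{1-p}, \]

so a higher perceived merger probability raises the quality bar an entrant must clear. The point of the critique is that nothing in the paper shows real-world users actually reason this way.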
And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).
But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.
Zoom is one of the most salient instances. As I have written previously:
To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.
Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples: the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline, among others. In all these cases, outcomes do not match the predictions of theoretical models.
More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.
While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.
In Conclusion
My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.
In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.
For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.
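In reduced form (a stylized restatement of the Coasean point, not a formula from Coase himself): an externality gets internalized through private agreement whenever the joint surplus S the parties can capture by contracting exceeds the transaction costs T of negotiating and enforcing the deal,

\[ \text{private internalization} \;\Longleftrightarrow\; S > T. \]

Bees’ short foraging range kept T low (a farmer needed to deal only with nearby apiaries), while the joint gains in fruit and honey yields made S large, so pollination contracts emerged.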
Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.
Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much of the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.
All of this has implications that deserve far more attention than they currently receive in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, European Union and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.
This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate problems. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.
The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures as to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:
This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].
Lina Khan’s appointment as chair of the Federal Trade Commission (FTC) is a remarkable accomplishment. At 32 years old, she is the youngest chair ever. Her longstanding criticisms of the Consumer Welfare Standard and alignment with the neo-Brandeisean school of thought make her appointment a significant achievement for proponents of those viewpoints.
Her appointment also comes as House Democrats are preparing to mark up five bills designed to regulate Big Tech and, in the process, vastly expand the FTC’s powers. This expansion may combine with Khan’s appointment in ways that lawmakers weighing the bills may not have anticipated.
As things stand, the FTC under Khan’s leadership is likely to push for more extensive regulatory powers, akin to those held by the Federal Communications Commission (FCC). But these expansions would be trivial compared to what is proposed by many of the bills currently being prepared for a June 23 mark-up in the House Judiciary Committee.
The flagship bill—Rep. David Cicilline’s (D-R.I.) American Innovation and Choice Online Act—is described as a platform “non-discrimination” bill. I have already discussed what the real-world effects of this bill would likely be. Briefly, it would restrict platforms’ ability to offer richer, more integrated services, since any integration that came at the cost of would-be competitors’ offerings could be challenged as “discrimination.” Things like free shipping on Amazon Prime, pre-installed apps on iPhones, or even including links to Gmail and Google Calendar at the top of a Google Search page could be precluded under the bill’s terms; in each case, there is a potential competitor being undermined.
But this shifts the focus to the FTC itself: under these proposals, the agency would have potentially enormous discretionary power to enforce the law selectively.
Companies found guilty of breaching the bill’s terms would be liable for civil penalties of up to 15 percent of annual U.S. revenue, a potentially significant sum. And though the Supreme Court recently ruled unanimously against the FTC’s power to levy civil fines unilaterally—a ruling the FTC opposed vociferously, and a power it may get restored by other means—there are two scenarios through which the agency could end up with extraordinarily extensive control over the platforms covered by the bill.
The first path is through selective enforcement. What Singer above describes as a positive—the fact that enforcers would just let “benign” violations of the law be—would mean that the FTC itself would have tremendous scope to choose which cases it brings, and might do so for idiosyncratic, politicized reasons.
Obviously, that’s far more sinister than what we’re talking about here. But these examples highlight how excessively broad laws applied at the enforcer’s discretion give broad powers to the enforcer to penalize defendants for other, unrelated things. Or, to quote Jay-Z: “Am I under arrest or should I guess some more? / ‘Well, you was doing 55 in a 54.’”
The second path would be to use these powers as leverage to get broad consent decrees to govern the conduct of covered platforms. These occur when a lawsuit is settled, with the defendant company agreeing to change its business practices under supervision of the plaintiff agency (in this case, the FTC). The Cambridge Analytica lawsuit ended this way, with Facebook agreeing to change its data-sharing practices under the supervision of the FTC.
This path would mean the FTC creating bespoke, open-ended regulation for each covered platform. Like the first path, this could create significant scope for discretionary decision-making by the FTC and potentially allow FTC officials to impose their own, non-economic goals on these firms. And it would require costly monitoring of each firm subject to bespoke regulation to ensure that no breaches of that regulation occurred.
Khan herself has been less explicit about the goals she has in mind, but she has given some hints. In her essay “The Ideological Roots of America’s Market Power Problem,” Khan approvingly highlights former Associate Justice William O. Douglas’s account of:
“economic power as inextricably political. Power in industry is the power to steer outcomes. It grants outsized control to a few, subjecting the public to unaccountable private power—and thereby threatening democratic order. The account also offers a positive vision of how economic power should be organized (decentralized and dispersed), a recognition that forms of economic power are not inevitable and instead can be restructured.” [italics added]
Though I have focused on Cicilline’s flagship bill, others grant significant new powers to the FTC, as well. The data portability and interoperability bill doesn’t actually define what “data” is; it leaves it to the FTC to “define the term ‘data’ for the purpose of implementing and enforcing this Act.” And, as I’ve written elsewhere, data interoperability needs significant ongoing regulatory oversight to work at all, a responsibility that this bill also hands to the FTC. Even a move as apparently narrow as data portability will involve a significant expansion of the FTC’s powers and give it a greater role as an ongoing economic regulator.
The Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.
Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services.
All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses, and which is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising, since they would mean giving broad, discretionary powers to antitrust authorities that are controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also be unpopular with consumers if, for example, they mean that popular features like integrating Maps into relevant Google Search results become prohibited.
The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate creates an alternative that, while still highly interventionist, does not create ex ante regulation of the Internet the same way these proposals do.
In general, the bills are misguided for three main reasons.
One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars).
Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.
Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business.
The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms with at least 50 million U.S.-based monthly active users and a market capitalization of more than $600 billion that are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.
Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including:
Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
Conditioning access or status on purchasing other products or services from the platform;
Using user data to support the platform’s own products in ways not extended to competitors;
Restricting the platform’s commercial users from using or accessing data generated on the platform from their own customers;
Restricting platform users from uninstalling software pre-installed on the platform;
Restricting platform users from providing links to facilitate business off of the platform;
Preferencing the platform’s own products or services in search results or rankings;
Interfering with how a dependent business prices its products;
Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
Retaliating against users who raise concerns with law enforcement about potential violations of the act.
On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to manually download each app themselves, if indeed Apple was allowed to include the App Store itself pre-installed on the iPhone, given that this competes with other would-be app stores.
Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results on the Search page increase the competition that Amazon faces, because they present consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face.
It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system.
This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.
A second bill, sponsored by Rep. Pramila Jayapal (D-Wash.), would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil-litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).
Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.
This targets similar conduct as the previous bill, but involves the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome because their existence would create such “substantial incentives” to self-preference them over the products of their competitors.
Apart from the straightforward loss of innovation and product developments this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell its video-streaming services that compete with Netflix and YouTube.
Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position.
The two main ways that founders and investors can make a return on a successful startup are to float the company at IPO or to be acquired by another business. The latter of these, acquisitions, is extremely important. Between 2008 and 2019, 90 percent of U.S. start-up exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.
So this proposal would probably reduce investment in U.S. startups, since it makes it more difficult for them to be acquired, and would reduce innovation as a result. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce the competition faced by older industries, by preventing tech companies from buying firms that enable them to move into new markets, like Amazon’s acquisitions of the health-care companies it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.
Another of the bills would require covered platforms to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The bill directs the FTC to establish technical committees to promulgate standards for portability and interoperability.
Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing, sometimes substantially, the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or activity on a social media page that one user considers “their” private data being exported and publicized by another user to whom it “belongs” under the terms of use.
It can also make digital services more buggy and unreliable, by requiring that they are built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is that of Windows vs iOS; Windows is far more interoperable with third-party software than iOS is, but tends to be less stable as a result, and users often prefer the closed, stable system.
Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator.
In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.
A bill that mirrors language in the Endless Frontier Act, recently passed by the U.S. Senate, would significantly raise filing fees for the largest mergers. Sponsored by Rep. Joe Neguse (D-Colo.), the bill would replace the current cap of $280,000 for mergers valued at more than $500 million with a new schedule that assesses fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued at between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.
Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million.
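For concreteness, the proposed schedule is a simple tiered lookup on deal value. The sketch below encodes the thresholds as summarized above; it is an illustration of the bill’s fee structure, not legal guidance:

```python
def merger_filing_fee(deal_value: float) -> int:
    """Proposed filing fee, in dollars, for a merger of the given value."""
    if deal_value > 5_000_000_000:    # more than $5 billion
        return 2_250_000
    if deal_value > 2_000_000_000:    # $2 billion to $5 billion
        return 800_000
    if deal_value > 1_000_000_000:    # $1 billion to $2 billion
        return 400_000
    if deal_value > 500_000_000:      # $500 million to $1 billion
        return 250_000
    if deal_value > 161_500_000:      # $161.5 million to $500 million
        return 100_000
    return 30_000                     # less than $161.5 million

# A $6 billion deal would pay $2.25 million, up from today's $280,000 cap.
assert merger_filing_fee(6_000_000_000) == 2_250_000
```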
In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether more funding is actually good or bad depends on how it is spent.
It’s hard to object if the money goes toward deepening the agencies’ capacities and knowledge: hiring and retaining higher-quality staff with salaries that are more competitive with those offered by the private sector, and making greater efforts to study the effects of the antitrust laws and past cases on the economy. If it goes toward broadening the agencies’ activities, enabling them to pursue a more aggressive enforcement agenda and to administer whatever of the above proposals make it into law, then it could be very harmful.
Despite calls from some NGOs to mandate radical interoperability, the EU’s draft Digital Markets Act (DMA) adopted a more measured approach, requiring full interoperability only in “ancillary” services like identification or payment systems. There remains the possibility, however, that the DMA proposal will be amended to include stronger interoperability mandates, or that such amendments will be introduced in the Digital Services Act. Without the right checks and balances, this could pose grave threats to Europeans’ privacy and security.
At the most basic level, interoperability means a capacity to exchange information between computer systems. Email is an example of an interoperable standard that most of us use today. Expanded interoperability could offer promising solutions to some of today’s difficult problems. For example, it might allow third-party developers to offer different “flavors” of social media news feed, with varying approaches to content ranking and moderation (see Daphne Keller, Mike Masnick, and Stephen Wolfram for more on that idea). After all, in a pluralistic society, someone will always be unhappy with what some others consider appropriate content. Why not let smaller groups decide what they want to see?
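As a rough sketch of how such “flavors” might work, imagine the platform exposing its content through a common interface and letting third parties supply only the ranking logic. Everything below is hypothetical; the Post type, the flavor signature, and both example rankers are illustrations, not any real platform’s API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    likes: int
    timestamp: float  # seconds since the epoch

# A "flavor" is simply a third-party ranking function over the same content.
FeedFlavor = Callable[[List[Post]], List[Post]]

def chronological(posts: List[Post]) -> List[Post]:
    """Newest first, with no engagement optimization at all."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_weighted(posts: List[Post]) -> List[Post]:
    """Rank by likes, roughly the approach incumbent feeds are criticized for."""
    return sorted(posts, key=lambda p: p.likes, reverse=True)

def render_feed(posts: List[Post], flavor: FeedFlavor) -> None:
    for post in flavor(posts):
        print(f"{post.author}: {post.text}")
```

The catch, as the next paragraph explains, is the input to those functions: any useful flavor needs access to all the content a user’s feed could contain, much of it created by people who never agreed to third-party access.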
But to achieve that goal using currently available technology, third-party developers would have to be able to access all of a platform’s content that is potentially available to a user. This would include not just content produced by users who explicitly agree to their data being shared with third parties, but also content (e.g., posts, comments, likes) created by others who may have strong objections to such sharing. It doesn’t require much imagination to see how, without adequate safeguards, mandating this kind of information exchange would inevitably result in something akin to the 2018 Cambridge Analytica data scandal.
It is telling that supporters of this kind of interoperability use services like email as their model examples. Email (more precisely, the SMTP protocol) originally was designed in a notoriously insecure way. It is a perfect example of the opposite of privacy by design. A good analogy for the levels of privacy and security provided by email, as originally conceived, is that of a postcard message sent without an envelope that passes through many hands before reaching the addressee. Even today, email continues to be a source of security concerns due to its prioritization of interoperability.
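The “postcard” analogy is easy to see in code. Using Python’s standard smtplib over plain SMTP (port 25, with no STARTTLS upgrade), the sender, recipient, and message body all cross the network in cleartext, readable by every relay along the route; the addresses and hostname below are placeholders:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.net"
msg["Subject"] = "Like a postcard"
msg.set_content("Every relay between us can read this message.")

# Plain SMTP: unless the client explicitly upgrades the connection with
# starttls(), the entire dialogue, headers and body included, is unencrypted.
with smtplib.SMTP("mail.example.net", 25) as smtp:
    smtp.send_message(msg)
```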
It also is telling that supporters of interoperability tend to point to small-scale platforms (e.g., Mastodon) or to protocols with unacceptably poor usability for most of today’s Internet users (e.g., Usenet). When proposing solutions to potential privacy problems (e.g., that users will adequately monitor how various platforms use their data), they often assume unrealistic levels of user interest or technical acumen.
Interoperability in the DMA
The current draft of the DMA contains several provisions that touch on broadly construed interoperability, all of which apply only to “gatekeepers,” i.e., the largest online platforms:
Mandated interoperability of “ancillary services” (Art 6(1)(f));
Real-time data portability (Art 6(1)(h)); and
Business-user access to their own and end-user data (Art 6(1)(i)).
The first provision (Art 6(1)(f)) is meant to force gatekeepers to allow third-party payment or identification services; it would, for example, allow people to create social media accounts without providing an email address, which is possible using services like “Sign in with Apple.” This kind of interoperability doesn’t pose as big a privacy risk as mandated interoperability of “core” services (e.g., messaging on a platform like WhatsApp or Signal), partially due to the more limited scope of data that needs to be exchanged.
However, even here, there may be some risks. For example, users may choose poorly secured identification services and thus become victims of attacks. Therefore, it is important that gatekeepers not be prevented from protecting their users adequately. Of course, there are likely trade-offs between those protections and the interoperability that some want. Proponents of stronger interoperability want this provision amended to cover all “core” services, not just “ancillary” ones, which would constitute precisely the kind of radical interoperability that cannot be safely mandated today.
The other two provisions do not mandate full two-way interoperability, where a third party could both read data from a service like Facebook and modify content on that service. Instead, they provide for one-way “continuous and real-time” access to data—read-only.
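In API terms, this is the difference between a client that can only read from a platform and one that can also write to it. Below is a minimal sketch of what one-way, “continuous and real-time” access might look like, using Python’s standard library and an entirely hypothetical endpoint and token; the DMA text specifies no concrete technical interface:

```python
import json
import time
import urllib.request

# Hypothetical endpoint and token, for illustration only; the DMA mandates
# "continuous and real-time" access but leaves the interface unspecified.
ENDPOINT = "https://gatekeeper.example/api/v1/user-data"
TOKEN = "user-granted-access-token"

def poll_user_data(interval_seconds: int = 60):
    """One-way access: the third party can read data the user generated,
    but there is no corresponding write path back into the platform."""
    while True:
        req = urllib.request.Request(
            ENDPOINT, headers={"Authorization": f"Bearer {TOKEN}"}
        )
        with urllib.request.urlopen(req) as resp:
            yield json.load(resp)
        time.sleep(interval_seconds)
```

Full two-way interoperability would add write endpoints to that sketch, and with them most of the security problems discussed below.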
The second provision (Art 6(1)(h)) mandates that gatekeepers give users effective “continuous and real-time” access to data “generated through” their activity. It’s not entirely clear whether this provision would be satisfied by, e.g., Facebook’s Graph API, but it likely would not be satisfied simply by being able to download one’s Facebook data, as that is not “continuous and real-time.”
Importantly, the proposed provision explicitly references the General Data Protection Regulation (GDPR), which suggests that—at least as regards personal data—the scope of this portability mandate is not meant to be broader than that from Article 20 GDPR. Given the GDPR reference and the qualification that it applies to data “generated through” the user’s activity, this mandate would not include data generated by other users—which is welcome, but likely will not satisfy the proponents of stronger interoperability.
The third provision (Art 6(1)(i)) mandates only “continuous and real-time” data access, and only as regards data “provided for or generated in the context of the use of the relevant core platform services” by business users and by “the end users engaging with the products or services provided by those business users.” This provision is also explicitly qualified with respect to personal data, which are to be shared after GDPR-like user consent and “only where directly connected with the use effectuated by the end user in respect of” the business user’s service. The provision should thus not be a tool for a new Cambridge Analytica to siphon data on users who interact with some Facebook page or app, or on their unwitting contacts. However, for the same reasons, it also will not be sufficient for the kinds of uses that proponents of stronger interoperability envisage.
Why can’t stronger interoperability be safely mandated today?
Let’s imagine that Art 6(1)(f) is amended to cover all “core” services, so gatekeepers like Facebook end up with a legal duty to allow third parties to read data from and write data to Facebook via APIs. This would go beyond what is currently possible using Facebook’s Graph API, and would lack the current safety valve of Facebook cutting off access because of the legal duty to deal created by the interoperability mandate. As Cory Doctorow and Bennett Cyphers note, there are at least three categories of privacy and security risks in this situation:
1. Data sharing and mining via new APIs;
2. New opportunities for phishing and sock puppetry in a federated ecosystem; and
3. More friction for platforms trying to maintain a secure system.
Unlike some other proponents of strong interoperability, Doctorow and Cyphers are open about the scale of the risk: “[w]ithout new legal safeguards to protect the privacy of user data, this kind of interoperable ecosystem could make Cambridge Analytica-style attacks more common.”
There are bound to be attempts to misuse interoperability through clearly criminal activity. But there also are likely to be more legally ambiguous attempts that are harder to proscribe ex ante. Proposals for strong interoperability mandates need to address this kind of problem.
So, what could be done to make strong interoperability reasonably safe? Doctorow and Cyphers argue that there is a “need for better privacy law,” but don’t say whether they think the GDPR’s rules fit the bill. This may be a matter of reasonable disagreement.
What isn’t up for serious debate is that the current framework and practice of privacy enforcement offers little confidence that misuses of strong interoperability would be detected and prosecuted, much less that they would be prevented (see here and here on GDPR enforcement). This is especially true for smaller and “judgment-proof” rule-breakers, including those from outside the European Union. Addressing the problems of privacy law enforcement is a herculean task, in and of itself.
The day may come when radical interoperability will, thanks to advances in technology and/or privacy enforcement, become acceptably safe. But it would be utterly irresponsible to mandate radical interoperability in the DMA and/or DSA, and simply hope the obvious privacy and security problems will somehow be solved before the law takes force. Instituting such a mandate would likely discredit the very idea of interoperability.
The European Commission this week published its proposed Artificial Intelligence Regulation, setting out new rules for “artificial intelligence systems” used within the European Union. The regulation—the commission’s attempt to limit pernicious uses of AI without discouraging its adoption in beneficial cases—casts a wide net in defining AI to include essentially any software developed using machine learning. As a result, a host of software may fall under the regulation’s purview.
The regulation categorizes AIs by the kind and extent of risk they may pose to health, safety, and fundamental rights, with the overarching goal to:
Prohibit “unacceptable risk” AIs outright;
Place strict restrictions on “high-risk” AIs;
Place minor restrictions on “limited-risk” AIs;
Create voluntary “codes of conduct” for “minimal-risk” AIs;
Establish a regulatory sandbox regime for AI systems;
Set up a European Artificial Intelligence Board to oversee regulatory implementation; and
Set fines for noncompliance at up to 30 million euros, or 6% of worldwide turnover, whichever is greater.
AIs That Are Prohibited Outright
The regulation prohibits AI systems that are used to exploit people’s vulnerabilities or that use subliminal techniques to distort behavior in a way likely to cause physical or psychological harm. Also prohibited are AIs used by public authorities to give people a trustworthiness score, if that score would then be used to treat a person unfavorably in a separate context or in a way that is disproportionate. The regulation also bans the use of “real-time” remote biometric identification (such as facial-recognition technology) in public spaces by law enforcement, with exceptions for specific and limited uses, such as searching for a missing child.
The first prohibition raises some interesting questions. The regulation says that an “exploited vulnerability” must relate to age or disability. In its announcement, the commission says this is targeted toward AIs such as toys that might induce a child to engage in dangerous behavior.
The ban on AIs using “subliminal techniques” is more opaque. The regulation doesn’t give a clear definition of what constitutes a “subliminal technique,” other than that it must be something “beyond a person’s consciousness.” Would this include TikTok’s algorithm, which imperceptibly adjusts the videos shown to the user to keep them engaged on the platform? The notion that this might cause harm is not fanciful, but it’s unclear whether the provision would be interpreted to be that expansive, whatever the commission’s intent might be. There is at least a risk that this provision would discourage innovative new uses of AI, causing businesses to err on the side of caution to avoid the huge penalties that breaking the rules would incur.
The prohibition on AIs used for social scoring is limited to public authorities. That leaves space for socially useful expansions of scoring systems, such as consumers using their Uber rating to show a record of previous good behavior to a potential Airbnb host. The ban is clearly oriented toward more expansive and dystopian uses of social credit systems, which some fear may be used to arbitrarily lock people out of society.
The ban on remote biometric identification AI is similarly limited to its use by law enforcement in public spaces. The limited exceptions (preventing an imminent terrorist attack, searching for a missing child, etc.) would be subject to judicial authorization except in cases of emergency, where ex-post authorization can be sought. The prohibition leaves room for private enterprises to innovate, but all non-prohibited uses of remote biometric identification would be subject to the requirements for high-risk AIs.
Restrictions on ‘High-Risk’ AIs
Some AI uses are not prohibited outright, but instead categorized as “high-risk” and subject to strict rules before they can be used or put to market. AI systems considered to be high-risk include those used for:
Safety components for certain types of products;
Remote biometric identification, except those uses that are banned outright;
Safety components in the management and operation of critical infrastructure, such as gas and electricity networks;
Dispatching emergency services;
Educational admissions and assessments;
Employment, workers management, and access to self-employment;
Evaluating credit-worthiness;
Assessing eligibility to receive social security benefits or services;
A range of law-enforcement purposes (e.g., detecting deepfakes or predicting the occurrence of criminal offenses);
Migration, asylum, and border-control management; and
Administration of justice.
While the commission considers these AIs to be those most likely to cause individual or social harm, it may not have appropriately balanced those perceived harms with the onerous regulatory burdens placed upon their use.
As Mikołaj Barczentewicz at the Surrey Law and Technology Hub has pointed out, the regulation would discourage even simple uses of logic or machine-learning systems in such settings as education or workplaces. This would mean that any workplace that develops machine-learning tools to enhance productivity—through, for example, monitoring or task allocation—would be subject to stringent requirements. These include requirements to have risk-management systems in place, to use only “high quality” datasets, and to allow human oversight of the AI, as well as other requirements around transparency and documentation.
The obligations would apply to any companies or government agencies that develop an AI (or for whom an AI is developed) with a view toward marketing it or putting it into service under their own name. The obligations could even attach to distributors, importers, users, or other third parties if they make a “substantial modification” to the high-risk AI, market it under their own name, or change its intended purpose—all of which could potentially discourage adaptive use.
Without going into unnecessary detail regarding each requirement, some are likely to have competition- and innovation-distorting effects that are worth discussing.
The rule that data used to train, validate, or test a high-risk AI must be high quality (“relevant, representative, and free of errors”) assumes that perfect, error-free datasets exist, or that errors can easily be detected. Not only is this not necessarily the case, but the requirement could impose an impossible standard on some activities. Given this high bar, high-risk AIs that use data of merely “good” quality could be precluded. It also would cut against the frontiers of artificial-intelligence research, where sometimes only small, lower-quality datasets are available for training. A predictable effect is that the rule would benefit large companies that are more likely to have access to large, high-quality datasets, even as rules like the GDPR make it difficult for smaller companies to acquire such data.
Providers of high-risk AIs also must submit technical and user documentation that details voluminous information about the AI system, including descriptions of its elements and of its development, monitoring, functioning, and control. The documentation must demonstrate that the AI complies with all the requirements for high-risk AIs, in addition to recording its characteristics, capabilities, and limitations. The requirement to produce vast amounts of information represents another potentially significant compliance cost that will be particularly felt by startups and other small and medium-sized enterprises (SMEs). This could further discourage AI adoption within the EU, where enterprises already cite liability for potential damages and regulatory obstacles as impediments to AI adoption.
The requirement that the AI be subject to human oversight entails that the AI can be overseen and understood by a human being and that it can never override a human user. While it may be important that an AI used in, say, the criminal-justice system be understandable to humans, this requirement could inhibit sophisticated uses whose reasoning exceeds what a human brain can follow, such as determining how to safely operate a national electricity grid. Providers of high-risk AI systems also must establish a post-market monitoring system to evaluate continuous compliance with the regulation, representing another potentially significant ongoing cost of using high-risk AIs.
The regulation also places certain restrictions on “limited-risk” AIs, notably deepfakes and chatbots. Deepfakes must be labeled to make users aware they are looking at or listening to manipulated images, video, or audio, and chatbots must make clear, where it is not already obvious, that the user is speaking to an artificial intelligence.
Taken together, these regulatory burdens may be greater than the benefits they generate, and could chill innovation and competition. The impact on smaller EU firms, which already are likely to struggle to compete with the American and Chinese tech giants, could prompt them to move outside the European jurisdiction altogether.
Regulatory Support for Innovation and Competition
To reduce the costs of these rules, the regulation also includes a new regulatory “sandbox” scheme. The sandboxes would putatively offer environments to develop and test AIs under the supervision of competent authorities, although exposure to liability would remain for harms caused to third parties and AIs would still have to comply with the requirements of the regulation.
SMEs and startups would have priority access to the regulatory sandboxes, although they must meet the same eligibility conditions as larger competitors. There would also be awareness-raising activities to help SMEs and startups to understand the rules; a “support channel” for SMEs within the national regulator; and adjusted fees for SMEs and startups to establish that their AIs conform with requirements.
These measures are intended to prevent the sort of chilling effect that was seen as a result of the GDPR, which led to a 17% increase in market concentration after it was introduced. But it’s unclear that they would accomplish this goal. (Notably, the GDPR contained similar provisions offering awareness-raising activities and derogations from specific duties for SMEs.) Firms operating in the “sandboxes” would still be exposed to liability, and the only significant difference to market conditions appears to be the “supervision” of competent authorities. It remains to be seen how this arrangement would sufficiently promote innovation as to overcome the burdens placed on AI by the significant new regulatory and compliance costs.
Governance and Enforcement
Each EU member state would be expected to appoint a “national competent authority” to implement and apply the regulation, as well as bodies to assess whether high-risk systems that require third-party assessment, such as remote biometric identification AIs, conform with the rules.
The regulation establishes the European Artificial Intelligence Board to act as the union-wide regulatory body for AI. The board would be responsible for sharing best practices with member states, harmonizing practices among them, and issuing opinions on matters related to implementation.
As mentioned earlier, maximum penalties for marketing or using a prohibited AI (as well as for failing to use high-quality datasets) would be a steep 30 million euros or 6% of worldwide turnover, whichever is greater. Breaking other requirements for high-risk AIs carries maximum penalties of 20 million euros or 4% of worldwide turnover, while maximums of 10 million euros or 2% of worldwide turnover would be imposed for supplying incorrect, incomplete, or misleading information to the nationally appointed regulator.
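Because the cap is “whichever is greater,” the effective maximum scales with firm size. A back-of-the-envelope sketch, with a purely illustrative turnover figure:

```python
def max_fine(fixed_cap_eur: float, turnover_share: float, turnover_eur: float) -> float:
    """Maximum fine: the greater of a fixed cap and a share of worldwide turnover."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)

# For a firm with 50 billion euros in worldwide turnover, the top tier
# (30 million euros or 6%) works out to 3 billion euros, 100x the fixed cap.
print(max_fine(30e6, 0.06, 50e9))  # 3000000000.0
```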
Is the Commission Overplaying its Hand?
While the regulation only restricts AIs seen as creating risk to society, it defines that risk so broadly and vaguely that benign applications of AI may be included in its scope, intentionally or unintentionally. Moreover, the commission also proposes voluntary codes of conduct that would apply similar requirements to “minimal” risk AIs. These codes—optional for now—may signal the commission’s intent eventually to further broaden the regulation’s scope and application.
The commission clearly hopes it can rely on the “Brussels Effect” to steer the rest of the world toward tighter AI regulation, but it is also possible that other countries will seek to attract AI startups and investment by introducing less stringent regimes.
For the EU itself, more regulation must be balanced against the need to foster AI innovation. Without European tech giants of its own, the commission must be careful not to stifle the SMEs that form the backbone of the European market, particularly if global competitors are able to innovate more freely in the American or Chinese markets. If the commission has got the balance wrong, it may find that AI development simply goes elsewhere, with the EU fighting the battle for the future of AI with one hand tied behind its back.