Archives For Data Privacy & Security

[The following is a guest post from Andrew Mercado, a research assistant at the Mercatus Center at George Mason University and an adjunct professor and research assistant at George Mason’s Antonin Scalia Law School.]

Barry Schwartz’s seminal work “The Paradox of Choice” has received substantial attention since its publication nearly 20 years ago. In it, Schwartz argued that, faced with an ever-increasing plethora of products to choose from, consumers often feel overwhelmed and seek to limit the number of choices they must make.

In today’s digital economy, a possible response to this problem is for digital platforms to use consumer data to present consumers with a “manageable” array of choices and thereby simplify their product selection. Appropriate “curation” of product-choice options may substantially benefit consumer welfare, provided that government regulators stay out of the way.

New Research

In a new paper in the American Economic Review, Mark Armstrong and Jidong Zhou—of Oxford and Yale universities, respectively—develop a theoretical framework to understand how companies compete using consumer data. They find that consumer, producer, and total welfare all depend on the privacy regime in force, that is, on how much information a company may use to personalize its recommendations.

The authors note that, at least in theory, there is an optimal situation that maximizes total welfare (scenario one). This occurs when a platform can aggregate information on consumers to such a degree that buyers and sellers are perfectly matched, so that each consumer buys his or her first-best option. While this can result in marginally higher prices, which raises producer welfare, the platform minimizes search and mismatch costs, leaving consumers with a high level of welfare as well.

The highest level of aggregate consumer welfare comes when product differentiation is minimized (scenario two), leading to a high number of substitutes and low prices. This, however, comes with some level of mismatch. Because consumers receive no personalized recommendations, search costs are high and matches are imperfect. Some consumers would have enjoyed higher welfare with an alternative product, but the low prices blunt the effects of such mismatch. Consumer welfare is therefore maximized, while producer welfare is significantly lower.

Finally, the authors propose a nearly welfare-optimal alternative: a “top-two” scheme (scenario three), whereby consumers are shown their two best options without explicit ranking. This nearly maximizes total welfare, since consumers are shown the options that best match them and, even if the best match isn’t chosen, the second-best match is close in welfare terms.
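To make the comparison concrete, below is a minimal simulation sketch of the three scenarios. It is a toy model rather than Armstrong and Zhou’s actual specification: the uniform valuations, zero marginal costs, and the three price levels are all illustrative assumptions, chosen only to reproduce the qualitative rankings described above.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_consumers, n_products = 100_000, 20

# Toy assumption: each consumer's gross valuation of each product is an
# independent uniform draw on [0, 1]; marginal cost is zero throughout.
v = rng.uniform(size=(n_consumers, n_products))

def report(label, value, price):
    cs = (value - price).mean()  # average consumer surplus per consumer
    ps = price                   # producer surplus per sale (zero marginal cost)
    print(f"{label:16s} CS={cs:.3f}  PS={ps:.3f}  total={cs + ps:.3f}")

# Scenario 1 (personalized product): every consumer gets a perfect match,
# but data-enabled differentiation supports the highest price.
report("1 personalized", v.max(axis=1), price=0.65)

# Scenario 2 (data privacy): no personalization, so the pick is effectively
# random (mismatch), while competition over close substitutes drives the
# price to its lowest level.
random_pick = v[np.arange(n_consumers), rng.integers(0, n_products, n_consumers)]
report("2 data privacy", random_pick, price=0.05)

# Scenario 3 (curated list): the two best matches are shown unranked; the
# consumer picks either with equal probability, at an intermediate price.
top2 = np.sort(v, axis=1)[:, -2:]
curated_pick = top2[np.arange(n_consumers), rng.integers(0, 2, n_consumers)]
report("3 curated list", curated_pick, price=0.50)
```

With these assumptions, total welfare is highest under personalization, consumer surplus is highest under data privacy, and the curated list lands near the total-welfare optimum while leaving consumers better off than full personalization does.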

Implications

In cases of platform data aggregation and personalization, scenarios one, two, and three can be represented as different privacy regimes.

Scenario one (a personalized-product regime) is akin to unlimited data gathering, whereby platforms can use as much information as is available to perfectly suggest products based on revealed data. Interfirm competition will tend to decrease under this regime, since product differentiation will be accentuated and substitutability will be masked. Because a single product is presented as the “correct” one, the consumer will not want to shift to a different, welfare-inferior product, and firms have an incentive to produce ever-more-specialized products at relatively higher prices. Total welfare under this regime is maximized, with producers using their information to garner a relatively large share of economic surplus. Producers are effectively matched with consumers, and all gains from trade are realized.

Scenario two (a data-privacy regime) is one of near-perfect data privacy, whereby the platform can recommend products based only on general information, such as sales trends, new products, or product specifications. Under this regime, competition is maximized, since consumers consider a large pool of goods to be close substitutes. Differences among offered products are downplayed, which tends to reduce prices and increase quality, at the cost of some consumer-product mismatch. For consumers who want a general product at a low price, this is likely the best option, since prices are low and competition is high. Consumers who want the best match for their personal use case, however, will likely incur search costs, pushing their all-in cost of product acquisition closer to what it would be under a personalized-product regime.

Scenario three (a curated-list regime) represents defined guardrails surrounding the display of information gathered, along the same lines as the personalized-product regime. Platforms remain able to gather as much information as they desire in order to make a personalized recommendation, but they display an array of products that represent the first two (or three to four, with tighter anti-preference rules) best-choice options. These options are displayed without ranking the products, allowing the consumer to choose from a curated list, rather than a single product. The scenario-three regime has two effects on the market:

  1. It will tend to decrease prices through increased competition. Since firms know only which consumers to target, not which consumers will ultimately choose their product, they must compete head-to-head with closely related products.
  2. It will likely spur innovation and increase competition from nascent competitors.

From an innovation perspective, firms will have to find better methods to differentiate themselves from the competition, increasing the probability of a consumer acquiring their product. Also, considering nascent competitors, a new product has an increased chance of being picked when ranked sufficiently high to be included on the consumer’s curated list. In contrast, the probability of acquisition under scenario one’s personalized-product regime is low, since the new product must be a better match than other, existing products. Similarly, under scenario two’s data-privacy regime, there is so much product substitutability in the market that the probability of choosing any one new product is low.

Below is a list of how the regimes stack up:

  • Personalized-Product: Total welfare is maximized, but prices are relatively higher and competition is relatively lower than under a data-privacy regime.
  • Data-Privacy: Consumer welfare and competition are maximized, and prices are theoretically minimized, but at the cost of product mismatch. Consumers will face search costs that are not reflected in the prices paid.
  • Curated-List: Consumer welfare is higher and prices are lower than under a personalized-product regime and competition is lower than under a data-privacy regime, but total welfare is nearly optimal when considering innovation and nascent-competitor effects.

Policy in Context

Applying these theoretical findings to fashion administrable policy prescriptions is understandably difficult. A far easier task is to evaluate the welfare effects of actual and proposed government privacy regulations in the economy. In that light, I briefly assess a recently enacted European data-platform privacy regime and U.S. legislative proposals that would restrict data usage under the guise of bans on “self-preferencing.” I then briefly note the beneficial implications of self-preferencing associated with the two theoretical data-usage scenarios (scenarios one and three) described above (scenario two, data privacy, effectively renders self-preferencing ineffective). 

GDPR

The European Union’s General Data Protection Regulation (GDPR)—among the most ambitious and all-encompassing data-privacy regimes to date—has significant negative ramifications for economic welfare. This regulation is most like the second scenario, whereby data collection and utilization are seriously restricted.

The GDPR diminishes competition through its restrictions on data collection and sharing, which reduce the competitive pressure platforms face. To build a complete consumer profile for personalization, platforms cannot rely solely on data collected on their own services. To ensure a level of personalization that effectively reduces search costs for consumers, they must be able to acquire data from a range of sources and aggregate it into a complete profile. It is these restrictions on aggregation that diminish competition online.

The GDPR grants consumers the right to choose both how their data is collected and how it is distributed. Not only do platforms themselves have obligations to ensure consumers’ wishes are met regarding their privacy, but firms that sell data to the platform are obligated to ensure the platform does not infringe consumers’ privacy through aggregation.

This creates a high regulatory burden for both the platform and the data seller, and it reduces the incentive to transfer data between firms. Because the data seller can be held liable for actions taken by the platform, the price at which the seller will transfer the data rises significantly: the cost of data must now incorporate a premium for regulatory risk, reducing the demand for outside data.

This has the effect of decreasing the quality of personalization and tilting the scales toward larger platforms, which have more robust data-collection practices and can leverage economies of scale to absorb high regulatory-enforcement costs. Personalization quality declines because the platform has an incentive to build a consumption profile from the activity it directly observes, without considering behavior occurring off the platform. Meanwhile, platforms that are already entrenched, with large user bases, are better able to manage the GDPR’s regulatory burden. One survey of U.S. companies with more than 500 workers found that 68% planned to spend between $1 million and $10 million in upfront costs to prepare for GDPR compliance, a figure that will likely pale in comparison to long-term compliance costs. For nascent competitors, such an outlay of capital represents a significant barrier to entry.

Additionally, as previously discussed, consumers derive real benefit from platforms that can accurately recommend products. If so, large platforms with vast stores of accumulated first-party data will be consumers’ destination of choice. When data cannot easily be transferred between parties, this reduces the ability of smaller firms to compete, simply because they lack access to data at the same scale as the large platforms.

Self-Preferencing

Claims of anticompetitive behavior by platforms are abundant (e.g., see here and here), and they often focus on the concept of self-preferencing. Self-preferencing occurs when a company uses its economies of scale, economies of scope, or a combination of the two to offer products at a lower price through an in-house brand. In decrying self-preferencing, many commentators and politicians point to an alleged “unfair advantage” in tech platforms’ ability to leverage data and personalization to drive traffic toward their own products.

It is far from clear, however, that this practice reduces consumer welfare. Indeed, numerous commentaries (e.g., see here and here) circulated since the introduction of anti-preferencing bills in the U.S. Congress (House; Senate) have rejected the notion that self-preferencing is anti-competitive or anti-consumer.

There are good reasons to believe that self-preferencing promotes both competition and consumer welfare. Assume that a company that manufactures or contracts for its own in-house products can offer them at a marginally lower price for the same relative quality. This decrease in price raises consumer welfare. The in-house brand’s entrance into the market also represents a potent competitive threat to incumbent producers, which in turn now have an incentive to lower their own prices, raise the quality of their goods, or both, to maintain their consumer base. This further raises consumer welfare, since all consumers, not just those purchasing the in-house goods, are better off from the entrance of an in-house brand.

It therefore follows that the entrance of an in-house brand and self-preferencing in the data-utilizing regimes discussed above has the potential to enhance consumer welfare.

In general, the use of data analysis on the platform can allow for targeted product entrance into certain markets. If the platform believes it can make a product of similar quality for a lower price, then it will enter that market and consumers will be able to choose a comparable product for a lower price. (If the company does not believe it is able to produce such a product, it will not enter the market with an in-house brand, and consumer welfare will stay the same.) Consumer welfare will further rise as firms producing products that compete against the in-house brand will innovate to compete more effectively.

To be sure, under a personalized-product regime (scenario one), platforms may appear to have an incentive to self-preference to the detriment of consumers. If consumers trust the platform to show the greatest welfare-producing product before the emergence of an in-house brand, the platform may use this consumer trust to its advantage and suggest its own, potentially consumer-welfare-inferior product instead of a competitor’s welfare-superior product. In such a case, consumer welfare may decrease in the face of an in-house brand’s entrance.

The extent of any such welfare loss, however, may be ameliorated (or eliminated entirely) by the platform’s concern that an unexpectedly low level of house-brand product quality will diminish its reputation. Such a reputational loss could come about due to consumer disappointment, plus the efforts of platform rivals to highlight the in-house product’s inferiority. As such, the platform might decide to enhance the quality of its “inferior” in-house offering, or refrain from offering an in-house brand at all.

A curated-list regime (scenario three) is unequivocally consumer-welfare beneficial. Under such a regime, consumers will be shown several more options (a “manageable” number intended to minimize consumer-search costs) than under a personalized-product regime. Consumers can actively compare the offerings from different firms to determine the correct product for their individual use. In this case, there is no incentive to self-preference to the detriment of the consumer, as the consumer is able to make value judgments between the in-house brand and the alternatives.

If the in-house brand is significantly lower in price but also lower in quality, consumers may not see the two as interchangeable and will steer away from the in-house brand. The same follows when the in-house brand is higher in both price and quality. The only instance in which the in-house brand has a strong chance of success is when its price is lower, and its quality higher, than those of competing products. This will tend to increase consumer welfare. Additionally, the entrance of consumer-welfare-superior products into a competitive market will encourage competing firms to innovate and lower prices or raise quality, again increasing welfare for all consumers.

Conclusion

What effects do digital platform-data policies have on consumer welfare? As a matter of theory, if providing an increasing number of product choices does not tend to increase consumer welfare, then do reductions in prices or increases in quality? What about precise targeting of personal-product choices? How about curation—the idea that a consumer raises his or her level of certainty by outsourcing decision-making to a platform that chooses a small set of products for the consumer’s consideration at any given moment? Apart from these theoretical questions, is the current U.S. legal treatment of platform data usage doing a generally good job of promoting consumer welfare? Finally, considering this overview, are new government interventions in platform data policy likely to benefit or harm consumers?

Recently published economic research develops theoretical scenarios that demonstrate how digital platform curation of consumer data may facilitate welfare-enhancing consumer-purchase decisions. At least implicitly, this research should give pause to proponents of major new restrictions of platform data usage.

Furthermore, a review of actual and proposed regulatory restrictions underscores the serious welfare harm of government meddling in digital platform-data usage.   

After the first four years of GDPR, it is clear that there have been significant negative unintended consequences stemming from omnibus privacy regulation. Competition has decreased, regulatory barriers to entry have increased, and consumers are marginally worse off. Since companies are less able and willing to leverage data in their operations and service offerings—due in large part to the risk of hefty fines—they are less able to curate and personalize services to consumers.

Additionally, anti-preferencing bills in the United States threaten to suppress the proper functioning of platform markets and reduce consumer welfare by making the utilization of data in product-market decisions illegal. More research is needed to determine the aggregate welfare effects of such preferencing on platforms, but all early indications are that consumers are better off when an in-house brand enters the market and increases competition.

Furthermore, current U.S. government policy, which generally allows platforms to use consumer data freely, is good for consumer welfare. Indeed, the consumer-welfare benefits generated by digital platforms, which depend critically on large volumes of data, are enormous. This is documented in a well-reasoned Harvard Business Review article (by an MIT professor and his student) that utilizes online choice experiments based on digital-survey techniques.

The message is clear. Governments should avoid new regulatory meddling in digital platform consumer-data usage practices. Such meddling would harm consumers and undermine the economy.

The Federal Trade Commission (FTC) is at it again, threatening new sorts of regulatory interventions in the legitimate welfare-enhancing activities of businesses—this time in the realm of data collection by firms.

Discussion

In an April 11 speech at the International Association of Privacy Professionals’ Global Privacy Summit, FTC Chair Lina Khan set forth a litany of harms associated with companies’ data-acquisition practices. Certainly, fraud and deception with respect to the use of personal data has the potential to cause serious harm to consumers and is the legitimate target of FTC enforcement activity. At the same time, the FTC should take into account the substantial benefits that private-sector data collection may bestow on the public (see, for example, here, here, and here) in order to formulate economically beneficial law-enforcement protocols.

Chair Khan’s speech, however, paid virtually no attention to the beneficial side of data collection. To the contrary, after highlighting specific harmful data practices, Khan then waxed philosophical in condemning private data-collection activities (citations omitted):

Beyond these specific harms, the data practices of today’s surveillance economy can create and exacerbate deep asymmetries of information—exacerbating, in turn, imbalances of power. As numerous scholars have noted, businesses’ access to and control over such vast troves of granular data on individuals can give those firms enormous power to predict, influence, and control human behavior. In other words, what’s at stake with these business practices is not just one’s subjective preference for privacy, but—over the long term—one’s freedom, dignity, and equal participation in our economy and society.

Even if one accepts that private-sector data practices have such transcendent social implications, are the FTC’s philosopher kings ideally equipped to devise optimal policies that promote “freedom, dignity, and equal participation in our economy and society”? Color me skeptical. (Indeed, one could argue that the true transcendent threat to society from fast-growing data collection comes not from businesses but, rather, from the government, which unlike private businesses holds a legal monopoly on the right to use or authorize the use of force. This question is, however, beyond the scope of my comments.)

Chair Khan turned from these highfalutin musings to a more prosaic and practical description of her plans for “adapting the commission’s existing authority to address and rectify unlawful data practices.” She stressed “focusing on firms whose business practices cause widespread harm”; “assessing data practices through both a consumer protection and competition lens”; and “designing effective remedies that are informed by the business strategies that specific markets favor and reward.” These suggestions are not inherently problematic, but they need to be fleshed out in far greater detail. For example, there are potentially major consumer-protection risks posed by applying antitrust to “big data” problems (see here, here and here, for example).

Khan ended her presentation by inviting us “to consider how we might need to update our [FTC] approach further yet.” Her suggested “updates” raise significant problems.

First, she stated that the FTC “is considering initiating a rulemaking to address commercial surveillance and lax data security practices.” Even assuming such a rulemaking could withstand legal scrutiny (its best shot would be to frame it as a consumer protection rule, not a competition rule), it would pose additional serious concerns. One-size-fits-all rules prevent consideration of possible economic efficiencies associated with specific data-security and surveillance practices. Thus, some beneficial practices would be wrongly condemned. Such rules would also likely deter firms from experimenting and innovating in ways that could have led to improved practices. In both cases, consumer welfare would suffer.

Second, Khan asserted “the need to reassess the frameworks we presently use to assess unlawful conduct. Specifically, I am concerned that present market realities may render the ‘notice and consent’ paradigm outdated and insufficient.” Accordingly, she recommended that “we should approach data privacy and security protections by considering substantive limits rather than just procedural protections, which tend to create process requirements while sidestepping more fundamental questions about whether certain types of data collection should be permitted in the first place.”  

In support of this startling observation, Khan approvingly cites Daniel Solove’s article “The Myth of the Privacy Paradox,” which claims that “[t]he fact that people trade their privacy for products or services does not mean that these transactions are desirable in their current form. … [T]he mere fact that people make a tradeoff doesn’t mean that the tradeoff is fair, legitimate, or justifiable.”

Khan provides no economic justification for a data-collection ban. The implication that the FTC would consider banning certain types of otherwise legal data collection is at odds with free-market principles and would have disastrous economic consequences for both consumers and producers. It strikes at voluntary exchange, a basic principle of market economics that benefits transactors and enables markets to thrive.

Businesses monetize information provided by consumers to offer a host of goods and services that satisfy consumer interests. This is particularly true in the case of digital platforms. Preventing the voluntary transfer of data from consumers to producers based on arbitrary government concerns about “fairness” (for example) would strike at firms’ ability to monetize data and thereby generate additional consumer and producer surplus. The arbitrary destruction of such potential economic value by government fiat would be the essence of “unfairness.”

In particular, the consumer welfare benefits generated by digital platforms, which depend critically on large volumes of data, are enormous. As Erik Brynjolfsson of the Massachusetts Institute of Technology and his student Avinash Collis explained in a December 2019 article in the Harvard Business Review, such benefits far exceed those measured by conventional GDP. Online choice experiments based on digital-survey techniques enabled the authors “to estimate the consumer surplus for a great variety of goods, including free ones that are missing from GDP statistics.” Brynjolfsson and Collis found, for example, that U.S. consumers derived $231 billion in value from Facebook since its inception in 2004. Furthermore:

[O]ur estimates indicate that the [Facebook] platform generates a median consumer surplus of about $500 per person annually in the United States, and at least that much for users in Europe. In contrast, average revenue per user is only around $140 per year in the United States and $44 per year in Europe. In other words, Facebook operates one of the most advanced advertising platforms, yet its ad revenues represent only a fraction of the total consumer surplus it generates. This reinforces research by NYU Stern School’s Michael Spence and Stanford’s Bruce Owen that shows that advertising revenues and consumer surplus are not always correlated: People can get a lot of value from content that doesn’t generate much advertising, such as Wikipedia or email. So it is a mistake to use advertising revenues as a substitute for consumer surplus…
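Using only the figures quoted above, the gap is straightforward to make explicit:

\[
\frac{\text{ad revenue per U.S. user}}{\text{median U.S. consumer surplus}} \approx \frac{\$140/\text{year}}{\$500/\text{year}} = 0.28
\]

That is, the revenue Facebook actually captures amounts to a bit more than a quarter of the consumer surplus the authors estimate it creates per user.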

In a similar vein, the authors found that various user-fee-based digital services yield consumer surplus five to ten times what users paid to access them. What’s more:

The effect of consumer surplus is even stronger when you look at categories of digital goods. We conducted studies to measure it for the most popular categories in the United States and found that search is the most valued category (with a median valuation of more than $17,000 a year), followed by email and maps. These categories do not have comparable off-line substitutes, and many people consider them essential for work and everyday life. When we asked participants how much they would need to be compensated to give up an entire category of digital goods, we found that the amount was higher than the sum of the value of individual applications in it. That makes sense, since goods within a category are often substitutes for one another.

In sum, the authors found:

To put the economic contributions of digital goods in perspective, we find that including the consumer surplus value of just one digital good—Facebook—in GDP would have added an average of 0.11 percentage points a year to U.S. GDP growth from 2004 through 2017. During this period, GDP rose by an average of 1.83% a year. Clearly, GDP has been substantially underestimated over that time.

Although far from definitive, this research illustrates how a digital-services model, based on voluntary data transfer and accumulation, has brought about enormous economic welfare benefits. Accordingly, FTC efforts to tamper with such a success story on abstruse philosophical grounds not only would be unwarranted, but would be economically disastrous. 

Conclusion

The FTC clearly plans to focus on “abuses” in private-sector data collection and usage. In so doing, it should home in on those practices that impose clear harm to consumers, particularly in the areas of deception and fraud. It is not, however, the FTC’s role to restructure data-collection activities by regulatory fiat, through far-reaching inflexible rules and, worst of all, through efforts to ban collection of “inappropriate” information.

Such extreme actions would predictably impose substantial harm on consumers and producers. They would also slow innovation in platform practices and retard efficient welfare-generating business initiatives tied to the availability of broad collections of data. Eventually, the courts would likely strike down most harmful FTC data-related enforcement and regulatory initiatives, but substantial welfare losses (including harm due to a chilling effect on efficient business conduct) would be borne by firms and consumers in the interim. In short, the enforcement “updates” Khan recommends would reduce economic welfare—the opposite of what (one assumes) is intended.

For these reasons, the FTC should reject the chair’s overly expansive “updates.” It should instead make use of technologists, economists, and empirical research to unearth and combat economically harmful data practices. In doing so, the commission should pay attention to cost-benefit analysis and error-cost minimization. One can only hope that Khan’s fellow commissioners promptly endorse this eminently reasonable approach.   

Though details remain scant (and thus, any final judgment would be premature), initial word on the new Trans-Atlantic Data Privacy Framework agreed to, in principle, by the White House and the European Commission suggests that it could be a workable successor to the Privacy Shield agreement that was invalidated by the Court of Justice of the European Union (CJEU) in 2020.

This new framework agreement marks the third attempt to create a lasting and stable legal regime to permit the transfer of EU citizens’ data to the United States. In the wake of the 2013 revelations by former National Security Agency contractor Edward Snowden about the extent of the United States’ surveillance of foreign nationals, the CJEU struck down (in its 2015 Schrems decision) the then-extant “safe harbor” agreement that had permitted transatlantic data flows. 

In the 2020 Schrems II decision (both cases were brought by Austrian privacy activist Max Schrems), the CJEU similarly invalidated the Privacy Shield, which had served as the safe harbor’s successor agreement. In Schrems II, the court found that U.S. foreign surveillance laws were not strictly proportional to the intelligence community’s needs and that those laws also did not give EU citizens adequate judicial redress.  

This new “Privacy Shield 2.0” agreement, announced during President Joe Biden’s recent trip to Brussels, is intended to address the issues raised in the Schrems II decision. In relevant part, the joint statement from the White House and European Commission asserts that the new framework will: “[s]trengthen the privacy and civil liberties safeguards governing U.S. signals intelligence activities; Establish a new redress mechanism with independent and binding authority; and Enhance its existing rigorous and layered oversight of signals intelligence activities.”

In short, the parties believe that the new framework will ensure that U.S. intelligence gathering is proportional and that there is an effective forum for EU citizens caught up in U.S. intelligence-gathering to vindicate their rights.

As I and my co-authors (my International Center for Law & Economics colleague Mikołaj Barczentewicz and Michael Mandel of the Progressive Policy Institute) detailed in an issue brief last fall, the stakes are huge. While the issue is often framed in terms of social-media use, transatlantic data transfers are implicated in an incredibly large swath of cross-border trade:

According to one estimate, transatlantic trade generates upward of $5.6 trillion in annual commercial sales, of which at least $333 billion is related to digitally enabled services. Some estimates suggest that moderate increases in data-localization requirements would result in a €116 billion reduction in exports from the EU.

The agreement will be implemented on this side of the Atlantic by a forthcoming executive order from the White House, at which point it will be up to EU courts to determine whether the agreement adequately restricts U.S. intelligence activities and protects EU citizens’ rights. For now, however, it appears at a minimum that the White House took the CJEU’s concerns seriously and made the right kind of concessions to reach agreement.

And now, once the framework is finalized, we just have to sit tight and wait for Mr. Schrems’ next case.

European Union (EU) legislators are now considering an Artificial Intelligence Act (AIA)—the original draft of which was published by the European Commission in April 2021—that aims to ensure AI systems are safe in a number of uses designated as “high risk.” One of the big problems with the AIA is that, as originally drafted, it is not at all limited to AI, but would be sweeping legislation covering virtually all software. The EU governments seem to have realized this and are trying to fix the proposal. However, some pressure groups are pushing in the opposite direction. 

While there can be reasonable debate about what constitutes AI, almost no one would intuitively consider most of the software covered by the original AIA draft to be artificial intelligence. Ben Mueller and I covered this in more detail in our report “More Than Meets The AI: The Hidden Costs of a European Software Law.” Among other issues, the proposal would seriously undermine the legitimacy of the legislative process: the public is told that a law is meant to cover one sphere of life, but it mostly covers something different. 

It also does not appear that the Commission drafters seriously considered the costs that would arise from imposing the AIA’s regulatory regime on virtually all software across a sphere of “high-risk” uses that include education, employment, and personal finance.

The following example illustrates how the AIA would work in practice: A school develops a simple logic-based expert system to assist in making decisions related to admissions. It could be as basic as a Microsoft Excel macro that checks if a candidate is in the school’s catchment area based on the candidate’s postal code, by comparing the content of one column of a spreadsheet with another column. 
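To see just how little “intelligence” such a system involves, here is a minimal sketch of it in code; the postal codes and the Excel formula are hypothetical illustrations:

```python
# A toy version of the school's admissions "expert system": a single
# set-membership check per candidate, with no learning of any kind.
CATCHMENT_CODES = {"22030", "22031", "22032"}  # hypothetical postal codes

def in_catchment(postal_code: str) -> bool:
    """The entire 'inference engine' of this would-be 'AI system'."""
    return postal_code.strip() in CATCHMENT_CODES

# Roughly equivalent in spirit to an Excel formula such as:
#   =IF(COUNTIF(CatchmentCodes, A2) > 0, "eligible", "not eligible")
print(in_catchment("22031"))  # True -> candidate meets this criterion
```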

Under the AIA’s current definitions, this would not only be an “AI system,” but also a “high-risk AI system” (because it is “intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions” – Annex III of the AIA). Hence, to use this simple Excel macro, the school would be legally required to, among other things:

  1. put in place a quality management system;
  2. prepare detailed “technical documentation”;
  3. create a system for logging and audit trails;
  4. conduct a conformity assessment (likely requiring costly legal advice);
  5. issue an “EU declaration of conformity”; and
  6. register the “AI system” in the EU database of high-risk AI systems.

This does not sound like proportionate regulation. 

Some governments of EU member states have been pushing for a narrower definition of an AI system, drawing rebuke from the pressure groups Access Now and Algorithm Watch, who issued a statement effectively defending the “all-software” approach. For its part, the Council of the European Union, which represents member-state governments, unveiled compromise text in November 2021 that changed the general provisions around the AIA’s scope (Article 3), but not the list of in-scope techniques (Annex I).

While the new definition appears slightly narrower, it remains overly broad and will create significant uncertainty. It is likely that many software developers and users will require costly legal advice to determine whether a particular piece of software in a particular use case is in scope or not. 

The “core” of the new definition is found in Article 3(1)(ii), according to which an AI system is one that: “infers how to achieve a given set of human-defined objectives using learning, reasoning or modeling implemented with the techniques and approaches listed in Annex I.” This redefinition does precious little to solve the AIA’s original flaws. A legal inquiry focused on an AI system’s capacity for “reasoning” and “modeling” will tend either toward overinclusion or toward imagining a software reality that doesn’t exist at all.

The revised text can still be interpreted so broadly as to cover virtually all software. Given that the list of in-scope techniques (Annex I) was not changed, any “reasoning” implemented with “Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems” (i.e., all software) remains in scope. In practice, the combined effect of those two provisions will be hard to distinguish from the original Commission draft. In other words, we still have an all-software law, not an AI-specific law. 

The AIA deliberations highlight two basic difficulties in regulating AI. First, it is likely that many activists and legislators have in mind science-fiction scenarios of strong AI (or “artificial general intelligence”) when pushing for regulations that will apply in a world where only weak AI exists. Strong AI is AI that is at least equal to human intelligence and is therefore capable of some form of agency. Weak AI is akin to software techniques that augment human processing of information. For as long as computer scientists have been thinking about AI, there have been serious doubts that software systems can ever achieve generalized intelligence or become strong AI.

Thus, what’s really at stake in regulating AI is regulating software-enabled extensions of human agency. But leaving aside the activists who explicitly do want controls on all software, lawmakers who promote the AIA have something in mind conceptually distinct from “software.” This raises the question of whether the “AI” that lawmakers imagine they are regulating is actually a null set. These laws can only regulate the equivalent of Excel spreadsheets at scale, and lawmakers need to think seriously about how they intervene. For such interventions to be deemed necessary, there should at least be quantifiable consumer harms that require redress. Focusing regulation on such broad topics as “AI” or “software” is almost certain to generate unacceptable unseen costs.

Even if we limit our concern to the real, weak AI, settling on an accepted “scientific” definition will be a challenge. Lawmakers inevitably will include either too much or too little. Overly inclusive regulation may seem like a good way to “future proof” the rules, but such future-proofing comes at the cost of significant legal uncertainty. It will also come at the cost of making some uses of software too costly to be worthwhile.

There has been a wave of legislative proposals on both sides of the Atlantic that purport to improve consumer choice and the competitiveness of digital markets. In a new working paper published by the Stanford-Vienna Transatlantic Technology Law Forum, I analyzed five such bills: the EU Digital Services Act, the EU Digital Markets Act, and U.S. bills sponsored by Rep. David Cicilline (D-R.I.), Rep. Mary Gay Scanlon (D-Pa.), Sen. Amy Klobuchar (D-Minn.) and Sen. Richard Blumenthal (D-Conn.). I concluded that all those bills would have negative and unaddressed consequences in terms of information privacy and security.

In this post, I present the main points from the working paper regarding two regulatory solutions: (1) mandating interoperability; and (2) mandating device neutrality (which opens the possibility of sideloading applications, a special case of interoperability). The full working paper also covers the risks of compulsory data access (by vetted researchers or by authorities).

Interoperability

Interoperability is increasingly presented as a potential solution to some of the alleged problems associated with digital services and with large online platforms, in particular (see, e.g., here and here). For example, interoperability might allow third-party developers to offer different “flavors” of social-media newsfeeds, with varying approaches to content ranking and moderation. This way, it might matter less than it does now what content moderation decisions Facebook or other platforms make. Facebook users could choose alternative content moderators, delivering the kind of news feed that those users expect.

The concept of interoperability is popular not only among thought leaders, but also among legislators. The DMA, as well as the U.S. bills by Rep. Scanlon, Rep. Cicilline, and Sen. Klobuchar, all include interoperability mandates.

At the most basic level, interoperability means a capacity to exchange information between computer systems. Email is an example of an interoperable standard that most of us use today. It is telling that supporters of interoperability mandates use services like email as their model examples. Email (more precisely, the SMTP protocol) originally was designed in a notoriously insecure way. It is a perfect example of the opposite of privacy by design. A good analogy for the levels of privacy and security provided by email, as originally conceived, is that of a postcard message sent without an envelope that passes through many hands before reaching the addressee. Even today, email continues to be a source of security concerns, due to its prioritization of interoperability (see, e.g., here).
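A minimal sketch of what that “postcard” looks like in practice, using Python’s standard-library SMTP client. The host and addresses are illustrative, and most modern servers now insist on the STARTTLS extension that was bolted on years after SMTP was designed:

```python
import smtplib

# Classic SMTP on port 25, as originally designed: no transport encryption
# is negotiated, so every relay and every network hop along the path can
# read the sender, the recipient, and the full message body.
with smtplib.SMTP("mail.example.com", 25) as smtp:
    smtp.sendmail(
        "alice@example.net",
        ["bob@example.com"],
        "Subject: Readable in transit\r\n"
        "\r\n"
        "This entire message crosses the network as plain text.",
    )
```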

To provide alternative interfaces or moderation services for social-media platforms using currently available technology, third-party developers would have to be able to access much of the platform content that is potentially available to a user. This would include not just content produced by users who explicitly agree to share their data with third parties, but also content—e.g., posts, comments, likes—created by others who may have strong objections to such sharing. It does not require much imagination to see how, without adequate safeguards, mandating this kind of information exchange would inevitably result in something akin to the 2018 Cambridge Analytica data scandal.

Several constraints must be built into interoperability frameworks if they are to safeguard privacy and security effectively.

First, solutions should be targeted toward real users of digital services, without assuming away some common but inconvenient characteristics. In particular, solutions should not assume unrealistic levels of user interest and technical acumen.

Second, solutions must address the issue of effective enforcement. Even the best information privacy and security laws do not, in and of themselves, solve any problems. Such rules must be followed, which requires addressing the problems of procedure and enforcement. In both the EU and the United States, the current framework and practice of privacy law enforcement offers little confidence that misuses of broadly construed interoperability would be detected and prosecuted, much less that they would be prevented. This is especially true for smaller and “judgment-proof” rulebreakers, including those from foreign jurisdictions.

If the service providers are placed under a broad interoperability mandate with non-discrimination provisions (preventing effective vetting of third parties, unilateral denials of access, and so on), then the burden placed on law enforcement will be mammoth. Just one bad actor, perhaps working from Russia or North Korea, could cause immense damage by taking advantage of interoperability mandates to exfiltrate user data or to execute a hacking (e.g., phishing) campaign. Of course, such foreign bad actors would be in violation of the EU GDPR, but that is unlikely to have any practical significance.

It would not be sufficient to allow (or require) service providers to enforce merely technical filters, such as a requirement to check whether the interoperating third parties’ IP address comes from a jurisdiction with sufficient privacy protections. Working around such technical limitations does not pose a significant difficulty to motivated bad actors.

Article 6(1) of the original DMA proposal included some general interoperability provisions applicable to “gatekeepers”—i.e., the largest online platforms. Those interoperability mandates were somewhat limited, applying only to “ancillary services” (e.g., payment or identification services) or requiring only one-way data portability. Even here, however, there are risks. For example, users may choose poorly secured identification services and thus become victims of attacks. It is therefore important that gatekeepers not be prevented from protecting their users adequately.

The drafts of the DMA adopted by the Council of the EU and by the European Parliament attempt to address this, but they allow gatekeepers to do only what is “strictly necessary” (Council) or “indispensable” (Parliament). This standard may be too high: it could push gatekeepers to offer lower security to avoid liability for adopting measures that EU institutions and courts might judge to go beyond what is strictly necessary or indispensable.

The more recent DMA proposal from the European Parliament goes significantly beyond the original proposal, mandating full interoperability of “number-independent interpersonal communication services” and of social-networking services. The Parliament’s proposals are good examples of overly broad and irresponsible interoperability mandates. They would cover “any providers” wanting to interconnect with gatekeepers, without adequate vetting. The safeguard proviso mentioning a “high level of security and personal data protection” does not come close to addressing the seriousness of the risks created by the mandate. Instead of facing up to those risks and limiting the mandate itself in ways that minimize them, the proposal seems simply to expect that the gatekeepers can solve the problems if they only “nerd harder.”

All U.S. bills considered here introduce some interoperability mandates and none of them do so in a way that would effectively safeguard information privacy and security. For example, Rep. Cicilline’s American Choice and Innovation Online Act (ACIOA) would make it unlawful (in Section 2(b)(1)) to:

restrict or impede the capacity of a business user to access or interoperate with the same platform, operating system, hardware and software features that are available to the covered platform operator’s own products, services, or lines of business.

The language of the prohibition in Sen. Klobuchar’s American Innovation and Choice Online Act (AICOA) is similar (also in Section 2(b)(1)). Both ACIOA and AICOA allow for affirmative defenses that a service provider could use if sued under the statute. While those defenses mention privacy and security, they are narrow (“narrowly tailored, could not be achieved through a less discriminatory means, was nonpretextual, and was necessary”) and would not prevent service providers from incurring significant litigation costs. Hence, just like the provisions of the DMA, they would heavily incentivize covered service providers not to adopt the most effective protections of privacy and security.

Device Neutrality (Sideloading)

Article 6(1)(c) of the DMA contains specific provisions about “sideloading”—i.e., allowing installation of third-party software through app stores other than the one provided by the device manufacturer (e.g., Apple’s App Store for iOS devices). A similar express provision for sideloading is included in Sen. Blumenthal’s Open App Markets Act (Section 3(d)(2)). Moreover, the broad interoperability provisions in the other U.S. bills discussed above may also be interpreted to require permitting sideloading.

A sideloading mandate aims to give users more choice. It can only achieve this, however, by taking away the option of choosing a device with a “walled garden” approach to privacy and security (such as is taken by Apple with iOS). By taking away the choice of a walled garden environment, a sideloading mandate will effectively force users to use whatever alternative app stores are preferred by particular app developers. App developers would have strong incentive to set up their own app stores or to move their apps to app stores with the least friction (for developers, not users), which would also mean the least privacy and security scrutiny.

This is not to say that Apple’s app scrutiny is perfect, but it is reasonable for an ordinary user to prefer Apple’s approach because it provides greater security (see, e.g., here and here). Thus, a legislative choice to override the revealed preference of millions of users for a “walled garden” approach should not be made lightly. 

Privacy and security safeguards in the DMA’s sideloading provisions, as amended by the Council of the EU and by the European Parliament, as well as in Sen. Blumenthal’s Open App Markets Act, share the same problem of narrowness as the safeguards discussed above.

There is a more general privacy and security issue here, however, that those safeguards cannot address. The proposed sideloading mandate would prohibit outright a privacy and security-protection model that many users rationally choose today. Even with broader exemptions, this loss will be genuine. It is unclear whether taking away this choice from users is justified.

Conclusion

All the U.S. and EU legislative proposals considered here betray a policy preference for privileging uncertain and speculative competition gains at the expense of introducing a new and clear danger to information privacy and security. The proponents of these (or even stronger) legislative interventions seem much more concerned, for example, that privacy safeguards are “not abused by Apple and Google to protect their respective app store monopoly in the guise of user security” (source).

Given the problems with ensuring effective enforcement of privacy protections (especially with respect to actors coming from outside the EU, the United States, and other broadly privacy-respecting jurisdictions), the lip service paid by the legislative proposals to privacy and security is not much more than that. Policymakers should be expected to offer a much more detailed vision of concrete safeguards and mechanisms of enforcement when proposing rules that come with significant and entirely predictable privacy and security risks. Such vision is lacking on both sides of the Atlantic.

I do not want to suggest that interoperability is undesirable. The argument of this paper was focused on legally mandated interoperability. Firms experiment with interoperability all the time—the prevalence of open APIs on the Internet is testament to this. My aim, however, is to highlight that interoperability is complex and exposes firms and their users to potentially large-scale cyber vulnerabilities.

Generalized obligations on firms to open their data, or to create service interoperability, can short-circuit the private-ordering processes that seek out those forms of interoperability and sharing that pass a cost-benefit test. The result will likely be both overinclusive and underinclusive. It would be overinclusive to require all firms in the regulated class to broadly open their services and data to all interested parties, even where doing so wouldn’t make sense for privacy, security, or other efficiency reasons. It would be underinclusive in that the broad mandate will necessarily sap regulated firms’ resources and deter them from pursuing innovative new uses that might make sense but fall outside the mandate. The likely result is thus less security and privacy, more expense, and less innovation.

Others already have noted that the Federal Trade Commission’s (FTC) recently released 6(b) report on the privacy practices of Internet service providers (ISPs) fails to comprehend that widespread adoption of privacy-enabling technology—in particular, Hypertext Transfer Protocol Secure (HTTPS) and DNS over HTTPS (DoH), but also the use of virtual private networks (VPNs)—largely precludes ISPs from seeing what their customers do online.
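To illustrate, here is a minimal sketch of a DNS-over-HTTPS lookup against Cloudflare’s public resolver; the queried hostname is illustrative, and the requests library is a third-party dependency:

```python
import requests  # pip install requests

# With DoH, the DNS question travels inside a TLS connection to the
# resolver. The ISP carrying the traffic sees an encrypted session to
# cloudflare-dns.com, not which hostname the customer is resolving; with
# HTTPS browsing on top, page contents and full URLs are hidden as well.
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=10,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```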

But a more fundamental problem with the report lies in its underlying assumption that targeted advertising is inherently nefarious. Indeed, much of the report highlights not actual violations of the law by the ISPs, but “concerns” that they could use customer data for targeted advertising much like Google and Facebook already do. The final subheading before the report’s conclusion declares: “Many ISPs in Our Study Can Be At Least As Privacy-Intrusive as Large Advertising Platforms.”

The report does not elaborate on why it would be bad for ISPs to enter the targeted advertising market, which is particularly strange given the public focus regulators have shone in recent months on the supposed dominance of Google, Facebook, and Amazon in online advertising. As the International Center for Law & Economics (ICLE) has argued in past filings on the issue, there simply is no justification to apply sector-specific regulations to ISPs for the mere possibility that they will use customer data for targeted advertising.

ISPs Could Be Competition for the Digital Advertising Market

It is ironic to witness FTC warnings about ISPs engaging in targeted advertising even as there are open antitrust cases against Google for its alleged dominance of the digital advertising market. In fact, news reports suggest the U.S. Justice Department (DOJ) is preparing to join the antitrust suits against Google brought by state attorneys general. An obvious upshot of ISPs engaging in more targeted advertising is that they could serve as a source of competition for Google, Facebook, and Amazon.

Despite the fears raised in the 6(b) report of rampant data collection for targeted ads, ISPs are, in fact, just a very small part of the $152.7 billion U.S. digital advertising market. As the report itself notes: “in 2020, the three largest players, Google, Facebook, and Amazon, received almost two-thirds of all U.S. digital advertising,” while Verizon pulled in just 3.4% of U.S. digital advertising revenues in 2018.

If the 6(b) report is correct that ISPs have access to troves of consumer data, that raises the question of why they don’t enjoy a bigger share of the digital advertising market. It could be that ISPs have other reasons not to engage in extensive advertising. Internet service provision is a two-sided market, and ISPs could rely on advertising to subsidize Internet access (over the years, in various markets, some have). That they instead rely primarily on charging users directly for subscriptions may tell us something about prevailing demand on either side of the market.

Regardless of the reasons, the fact that ISPs have little presence in digital advertising suggests that it would be a misplaced focus for regulators to pursue industry-specific privacy regulation to crack down on ISP data collection for targeted advertising.

What’s the Harm in Targeted Advertising, Anyway?

At the heart of the FTC report is the commission’s contention that “advertising-driven surveillance of consumers’ online activity presents serious risks to the privacy of consumer data.” In Part V.B of the report, five of the six risks the FTC lists as associated with ISP data collection are related to advertising. But the only argument the report puts forth for why targeted advertising would be inherently pernicious is the assertion that it is contrary to user expectations and preferences.

As noted earlier, in a two-sided market, targeted ads could allow one side of the market to subsidize the other side. In other words, ISPs could engage in targeted advertising in order to reduce the price of access to consumers on the other side of the market. This is, indeed, one of the dominant models throughout the Internet ecosystem, so it wouldn’t be terribly unusual.
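A stylized numerical sketch of that cross-subsidy follows; every figure is invented for illustration:

```python
# Toy two-sided pricing arithmetic for a hypothetical ISP.
monthly_cost_per_subscriber = 50.00  # network, support, overhead
target_margin = 5.00                 # desired margin per subscriber
ad_revenue_per_subscriber = 12.00    # monthly targeted-ad revenue

price_without_ads = monthly_cost_per_subscriber + target_margin
price_with_ads = price_without_ads - ad_revenue_per_subscriber

print(f"subscription, no ad revenue:   ${price_without_ads:.2f}/month")  # $55.00
print(f"subscription, with ad revenue: ${price_with_ads:.2f}/month")     # $43.00
```

Remove the ad-revenue stream, as a ban on ISP targeted advertising effectively would, and the full cost shifts back onto the subscription price.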

Taking away ISPs’ ability to engage in targeted advertising—particularly if it is paired with rumored net neutrality regulations from the Federal Communications Commission (FCC)—would necessarily put upward pricing pressure on the sector’s remaining revenue stream: subscriber fees. With bridging the so-called “digital divide” (i.e., building out broadband to rural and other unserved and underserved markets) a major focus of the recently enacted infrastructure spending package, it would be counterproductive to simultaneously take steps that would make Internet access more expensive and less accessible.

Even if the FTC were right that data collection for targeted advertising poses the risk of consumer harm, the report fails to justify why a regulatory scheme should apply solely to ISPs when they are such a small part of the digital advertising marketplace. Sector-specific regulation only makes sense if the FTC believes that ISPs are uniquely opaque among data collectors with respect to their collection practices.

Conclusion

The sector-specific approach implicitly endorsed by the 6(b) report would limit competition in the digital advertising market, even as there are already legal and regulatory inquiries into whether that market is sufficiently competitive. The report also fails to make the case that data collection for targeted advertising is inherently bad, or uniquely bad when done by an ISP.

There may or may not be cause for comprehensive federal privacy legislation, depending on whether it would pass cost-benefit analysis, but there is no reason to focus on ISPs alone. The FTC needs to go back to the drawing board.

[Judge Douglas Ginsburg was invited to respond to the Beesley Lecture given by Andrea Coscelli, chief executive of the U.K. Competition and Markets Authority (CMA). Both the lecture and Judge Ginsburg’s response were broadcast by the BBC on Oct. 28, 2021. The text of Mr. Coscelli’s Beesley lecture is available on the CMA’s website. Judge Ginsburg’s response follows below.]

Thank you, Victoria, for the invitation to respond to Mr. Coscelli and his proposal for a legislatively founded Digital Markets Unit. Mr. Coscelli is one of the most talented, successful, and creative heads a competition agency has ever had. In the case of the DMU [ed., Digital Markets Unit], however, I think he has let hope triumph over experience and prudence. This is often the case with proposals for governmental reform: Indeed, it has a name, the Nirvana Fallacy, which comes from comparing the imperfectly functioning marketplace with the perfectly functioning government agency. Everything we know about the regulation of competition tells us the unintended consequences may dwarf the intended benefits and the result may be a less, not more, competitive economy. The precautionary principle counsels skepticism about such a major and inherently risky intervention.

Mr. Coscelli made a point in passing that highlights the difference in our perspectives: He said the SMS [ed., strategic market status] merger regime would entail “a more cautious standard of proof.” In our shared Anglo-American legal culture, a more cautious standard of proof means the government would intervene in fewer, not more, market activities; proof beyond a reasonable doubt in criminal cases is a more cautious standard than a mere preponderance of the evidence. I, too, urge caution, but of the traditional kind.

I will highlight five areas of concern with the DMU proposal.

I. Chilling Effects

The DMU’s ability to designate a firm as having strategic market status—or SMS—will place a potential cloud over innovative activity in far more sectors than Mr. Coscelli could mention in his lecture. He views the DMU’s reach as limited to a small number of SMS-designated firms; that may prove true, but there is nothing in the proposal limiting the DMU’s reach.

Indeed, the DMU’s authority to regulate digital markets is surely going to be difficult to confine. Almost every major retail activity or consumer-facing firm involves an increasingly significant digital component, particularly after the pandemic forced many more firms online. Deciding which firms the DMU should cover seems easy in theory, but will prove ever more difficult and cumbersome in practice as digital technology continues to evolve. For instance, now that money has gone digital, a bank is little more than a digital platform bringing together lenders (called depositors) and borrowers, much as Amazon brings together buyers and sellers; so, is every bank with market power and an entrenched position to be subject to rules and remedies laid down by the DMU as well as supervision by the bank regulators? Is Aldi in the crosshairs now that it has developed an online retail platform? Match.com, too? In short, the number of SMS firms will likely grow apace in the next few years.

II. SMS Designations Should Not Apply to the Whole Firm

The CMA’s proposal would apply each SMS designation firm-wide, even if the firm has market power in only a single line of business. This will inhibit investment in further diversification and put an SMS firm at a competitive disadvantage across all its businesses.

Perhaps company-wide SMS designations could be justified if the unintended costs were balanced by expected benefits to consumers, but this will not likely be the case. First, there is little evidence linking consumer harm to lines of business in which large digital firms do not have market power. On the contrary, despite the discussion of Amazon’s supposed threat to competition, consumers enjoy lower prices from many more retailers because of the competitive pressure Amazon brings to bear upon them.

Second, the benefits Mr. Coscelli expects the economy to reap from faster government enforcement are, at best, a mixed blessing. The proposal, you see, reverses the usual legal norm, instead making interim relief the rule rather than the exception. If a firm appeals its SMS designation, then under the CMA’s proposal, the DMU’s SMS designations and pro-competition interventions, or PCIs, will not be stayed pending appeal, raising the prospect that a firm’s activities could be regulated for a significant period even though it was improperly designated. Even prevailing in the courts may be a Pyrrhic victory because opportunities will have slipped away. Making matters worse, the DMU’s designation of a firm as SMS will likely receive a high degree of judicial deference, so that errors may never be corrected.

III. The DMU Cannot Be Evidence-Based Given Its Goals and Objectives

The DMU’s stated goal is to “further the interests of consumers and citizens in digital markets by promoting competition and innovation.”[1] The DMU’s objectives for developing codes of conduct are: fair trading, open choices, and trust and transparency.[2] Fairness, openness, trust, and transparency are all concepts that are difficult to define and probably impossible to quantify. Therefore, I fear Mr. Coscelli’s aspiration that the DMU will be an evidence-based, tailored, and predictable regime seems unrealistic. The CMA’s idea of “an evidence-based regime” seems destined to rely mostly upon qualitative conjecture about the potential for the code of conduct to set “rules of the game” that encourage fair trading, open choices, trust, and transparency. Even if the DMU commits to considering empirical evidence at every step of its process, these fuzzy, qualitative objectives will allow it to come to virtually any conclusion about how a firm should be regulated.

Implementing those broad goals also throws into relief the inevitable tensions among them. Some potential conflicts between DMU’s objectives for developing codes of conduct are clear from the EU’s experience. For example, one of the things DMU has considered already is stronger protection for personal data. The EU’s experience with the GDPR shows that data protection is costly and, like any costly requirement, tends to advantage incumbents and thereby discourage new entry. In other words, greater data protections may come at the expense of start-ups or other new entrants and the contribution they would otherwise have made to competition, undermining open choices in the name of data transparency.

Another example of tension is clear from the distinction between Apple’s iOS and Google’s Android ecosystems. They take different approaches to the trade-off between data privacy and flexibility in app development. Apple emphasizes consumer privacy at the expense of allowing developers flexibility in their design choices and offers its products at higher prices. Android devices have fewer consumer-data protections but allow app developers greater freedom to design their apps to satisfy users and are offered at lower prices. The case of Epic Games v. Apple put on display the purportedly pro-competitive arguments the DMU could use to justify shutting down Apple’s “walled garden,” whereas the EU’s GDPR would cut against Google’s open ecosystem with limited consumer protections. Apple’s model encourages consumer trust and adoption of a single, transparent model for app development, but Google’s model encourages app developers to choose from a broader array of design and payment options and allows consumers to choose between the options; no matter how the DMU designs its code of conduct, it will be creating winners and losers at the cost of either “open choices” or “trust and transparency.” As experience teaches, it is simply not possible for an agency with multiple goals to serve them all at the same time. The result is an unreviewable discretion to choose among them ad hoc.

Finally, notice that none of the DMU’s objectives—fair trading, open choices, and trust and transparency—revolves around quantitative evidence; at bottom, these goals are not amenable to the kind of rigor Mr. Coscelli hopes for.

IV. Speed of Proposals

Mr. Coscelli has emphasized the slow pace of competition law matters; while I empathize, surely forcing merging parties to prove a negative and truncating their due process rights is not the answer.

As I mentioned earlier, it seems a more cautious standard of proof to Mr. Coscelli is one in which an SMS firm’s proposal to acquire another firm is presumed, or all but presumed, to be anticompetitive and unlawful. That is, the DMU would block the transaction unless the firms can prove their deal would not be anticompetitive—an extremely difficult task. The most self-serving version of the CMA’s proposal would require it to prove only that the merger poses a “realistic prospect” of lessening competition, which is vague, but may in practice be well below a 50% chance. Proving that the merged entity does not harm competition will still require a predictive, forward-looking assessment with inherent uncertainty, but the CMA wants the costs of uncertainty placed upon firms, rather than itself. Given the inherent uncertainty in merger analysis, the CMA’s proposal would impose an unprecedented burden of proof on merging parties.

But it is not only merging parties the CMA would deprive of due process; the DMU’s so-called pro-competition interventions, or PCIs, SMS designations, and code-of-conduct requirements generally would not be stayed pending appeal. Further, an SMS firm could overturn the CMA’s designation only if it could overcome substantial deference to the DMU’s fact-finding. It is difficult to discern, then, the difference between agency decisions and final orders.

The DMU would not have to show or even assert an extraordinary need for immediate relief. This is the opposite of current practice in every jurisdiction with which I am familiar.  Interim orders should take immediate effect only in exceptional circumstances, when there would otherwise be significant and irreversible harm to consumers, not in the ordinary course of agency decision making.

V. Antitrust Is Not Always the Answer

Although one can hardly disagree with Mr. Coscelli’s premise that the digital economy raises new legal questions and practical challenges, it is far from clear that competition law is the answer to them all. Some commentators of late are proposing to use competition law to solve consumer protection and even labor market problems. Unfortunately, this theme also recurs in Mr. Coscelli’s lecture. He discusses concerns with data privacy and fair and reasonable contract terms, but those have long been the province of consumer protection and contract law; a government does not need to step in and regulate all realms of activity by digital firms and call it competition law. Nor is there reason to confine needed protections of data privacy or fair terms of use to SMS firms.

Competition law remedies are sometimes poorly matched to the problems a government is trying to correct. Mr. Coscelli discusses the possibility of strong interventions, such as forcing the separation of a platform from its participation in retail markets; for example, the DMU could order Amazon to spin off its online business selling and shipping its own brand of products. Such powerful remedies can be a sledgehammer; consider forced data sharing or interoperability to make it easier for new competitors to enter. For example, if Apple’s App Store is required to host all apps submitted to it in the interest of consumer choice, then Apple loses its ability to screen for security, privacy, and other consumer benefits, as its refusal to deal is its only way to prevent participation in its store. Further, it is not clear consumers want Apple’s store to change; indeed, many prefer Apple products because of their enhanced security.

Forced data sharing would also be problematic; the hiQ v. LinkedIn case in the United States should serve as a cautionary tale. The trial court granted a preliminary injunction forcing LinkedIn to allow hiQ to scrape its users’ profiles while the suit was ongoing. LinkedIn ultimately won the suit because it did not have market power, much less a monopoly, in any relevant market. The court concluded each theory of anticompetitive conduct was implausible, but meanwhile LinkedIn had been forced to allow hiQ to scrape its data for an extended period before the final decision. There is no simple mechanism to “unshare” the data now that LinkedIn has prevailed. This type of case could be common under the CMA proposal because the DMU’s orders will go into immediate effect.

There is potentially much redeeming power in the Digital Regulation Co-operation Forum as Mr. Coscelli described it, but I take a different lesson from this admirable attempt to coordinate across agencies: Perhaps it is time to look beyond antitrust to solve problems that are not based upon market power. As the DRCF highlights, there are multiple agencies with overlapping authority in the digital market space. ICO and Ofcom each have authority to take action against a firm that disseminates fake news or false advertisements. Mr. Coscelli says it would be too cumbersome to take down individual bad actors, but, if so, then the solution is to adopt broader consumer protection rules, not apply an ill-fitting set of competition law rules. For example, the U.K. could change its notice-and-takedown rules to subject platforms to strict liability if they host fake news, even without knowledge that they are doing so, or perhaps only if they are negligent in discharging their obligation to police against it.

Alternatively, the government could shrink the amount of time platforms have to take down information; France gives platforms only about an hour to remove harmful information. That sort of solution does not raise the same prospect of broadly chilling market activity, but still addresses one of the concerns Mr. Coscelli raises with digital markets.

In sum, although Mr. Coscelli is of course correct that competition authorities and governments worldwide are considering whether to adopt broad reforms to their competition laws, the case against broad reform remains strong. Instead of relying upon the self-corrective potential of markets, which is admittedly sometimes slower than anyone would like, the CMA assumes markets need regulation until firms prove otherwise. Although clearly well-intentioned, the DMU proposal is in too many respects not up to the task of protecting competition in digital markets; at worst, it will inhibit innovation in digital markets to the point of driving startups and other innovators out of the U.K.


[1] See Digital Markets Taskforce, A New Pro-Competition Regime for Digital Markets, at 22, Dec. 2020, available at: https://assets.publishing.service.gov.uk/media/5fce7567e90e07562f98286c/Digital_Taskforce_-_Advice.pdf; Oliver Dowden & Kwasi Kwarteng, A New Pro-Competition Regime for Digital Markets, at ¶ 27, July 2021, available at: https://www.gov.uk/government/consultations/a-new-pro-competition-regime-for-digital-markets.

[2] Sam Bowman, Sam Dumitriu & Aria Babu, Conflicting Missions: The Risks of the Digital Markets Unit to Competition and Innovation, Int’l Center for L. & Econ., June 2021, at 13.

A debate has broken out among the four sitting members of the Federal Trade Commission (FTC) in connection with the recently submitted FTC Report to Congress on Privacy and Security. Chair Lina Khan argues that the commission “must explore using its rulemaking tools to codify baseline protections,” while Commissioner Rebecca Kelly Slaughter has urged the FTC to initiate a broad-based rulemaking proceeding on data privacy and security. By contrast, Commissioners Noah Joshua Phillips and Christine Wilson counsel against a broad-based regulatory initiative on privacy.

Decisions to initiate a rulemaking should be viewed through a cost-benefit lens (See summaries of Thom Lambert’s masterful treatment of regulation, of which rulemaking is a subset, here and here). Unless there is a market failure, rulemaking is not called for. Even in the face of market failure, regulation should not be adopted unless it is more cost-beneficial than reliance on markets (including the ability of public and private litigation to address market-failure problems, such as data theft). For a variety of reasons, it is unlikely that FTC rulemaking directed at privacy and data security would pass a cost-benefit test.
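
To make the two-step screen concrete, here is a minimal sketch in Python; the function name and the numeric inputs are my own illustrative constructs, not anything drawn from the commission's practice:

```python
def rulemaking_warranted(market_failure: bool,
                         net_benefits_of_rule: float,
                         net_benefits_of_markets: float) -> bool:
    """Stylized two-step screen: absent a market failure, no rule; even
    with one, regulate only if the rule beats reliance on markets
    (including public and private litigation) on net benefits."""
    if not market_failure:
        return False
    return net_benefits_of_rule > net_benefits_of_markets

# Even where a market failure exists, the screen rejects a rule that is
# less cost-beneficial than market (and litigation) responses.
print(rulemaking_warranted(True, net_benefits_of_rule=40.0,
                           net_benefits_of_markets=55.0))  # False
```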

Discussion

As I have previously explained (see here and here), FTC rulemaking pursuant to Section 6(g) of the FTC Act (which authorizes the FTC “to make rules and regulations for the purpose of carrying out the provisions of this subchapter”) is properly read as authorizing mere procedural, not substantive, rules. As such, efforts to enact substantive competition rules would not pass a cost-benefit test. Such rules could well be struck down as beyond the FTC’s authority on constitutional law grounds, and as “arbitrary and capricious” on administrative law grounds. What’s more, they would represent retrograde policy. Competition rules would generate higher error costs than adjudications; could be deemed to undermine the rule of law, because the U.S. Justice Department (DOJ) could not apply such rules; and would chill innovative, efficiency-seeking business arrangements.

Accordingly, the FTC likely would not pursue 6(g) rulemaking should it decide to address data security and privacy, a topic which best fits under the “consumer protection” category. Rather, the FTC presumably would most likely initiate a “Magnuson-Moss” rulemaking (MMR) under Section 18 of the FTC Act, which authorizes the commission to prescribe “rules which define with specificity acts or practices which are unfair or deceptive acts or practices in or affecting commerce within the meaning of Section 5(a)(1) of the Act.” Among other things, Section 18 requires that the commission’s rulemaking proceedings provide an opportunity for informal hearings at which interested parties are accorded limited rights of cross-examination. Also, before commencing an MMR proceeding, the FTC must have reason to believe the practices addressed by the rulemaking are “prevalent.” 15 U.S.C. Sec. 57a(b)(3).

MMR proceedings, which are not governed by the Administrative Procedure Act (APA), do not present the same degree of legal problems as Section 6(g) rulemakings (see here). The question of legal authority to adopt a substantive rule is not raised; “rule of law” problems are far less serious (the DOJ is not a parallel enforcer of consumer-protection law); and APA issues of “arbitrariness” and “capriciousness” are not directly presented. Indeed, MMR proceedings include a variety of procedures aimed at promoting fairness (see here, for example). An MMR proceeding directed at data privacy predictably would be based on the claim that the failure to adhere to certain data-protection norms is an “unfair act or practice.”

Nevertheless, MMR rules would be subject to two substantial sources of legal risk.

The first of these arises out of federalism. Three states (California, Colorado, and Virginia) recently have enacted comprehensive data-privacy laws, and a large number of other state legislatures are considering data-privacy bills (see here). The proliferation of state data-privacy statutes would raise the risk of inconsistent and duplicative regulatory norms, potentially chilling business innovations aimed at data protection (a severe problem in the Internet Age, when business data-protection programs typically will have interstate effects).

An FTC MMR data-protection regulation that successfully “occupied the field” and preempted such state provisions could eliminate that source of costs. The Magnuson-Moss Warranty Act, however, does not contain an explicit preemption clause, leaving in serious doubt the ability of an FTC rule to displace state regulations (see here for a summary of the murky state of preemption law, including the skepticism of textualist Supreme Court justices toward implied “obstacle preemption”). In particular, the long history of state consumer-protection and antitrust laws that coexist with federal laws suggests that the case for FTC rule-based displacement of state data protection is a weak one. The upshot, then, of a Section 18 FTC data-protection rule enactment could be “the worst of all possible worlds,” with drawn-out litigation leading to competing federal and state norms that multiply business costs.

The second source of risk arises out of the statutory definition of “unfair practices,” found in Section 5(n) of the FTC Act. Section 5(n) codifies the meaning of unfair practices, and thereby constrains the FTC’s application of rulemakings covering such practices. Section 5(n) states:

The Commission shall have no authority . . . to declare unlawful an act or practice on the grounds that such an act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

In effect, Section 5(n) implicitly subjects unfair practices to a well-defined cost-benefit framework. Thus, in promulgating a data-privacy MMR, the FTC first would have to demonstrate that specific disfavored data-protection practices caused or were likely to cause substantial harm. What’s more, the commission would have to show that any actual or likely harm would not be outweighed by countervailing benefits to consumers or competition. One would expect that a data-privacy rulemaking record would include submissions that pointed to the efficiencies of existing data-protection policies that would be displaced by a rule.
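
To see the structure of that framework, consider a minimal sketch that encodes the three statutory prongs as a single check. The variable names and figures are mine, and reducing a legal standard to booleans and dollar values is obviously a simplification:

```python
def unfair_under_5n(substantial_injury: bool,
                    reasonably_avoidable: bool,
                    consumer_harm: float,
                    countervailing_benefits: float) -> bool:
    """Stylized Section 5(n) test: a practice is 'unfair' only if it
    (1) causes or is likely to cause substantial injury, (2) is not
    reasonably avoidable by consumers themselves, and (3) is not
    outweighed by countervailing benefits to consumers or competition."""
    return (substantial_injury
            and not reasonably_avoidable
            and consumer_harm > countervailing_benefits)

# A practice causing real harm that consumers could reasonably avoid
# fails prong (2) and is not "unfair" within the statutory meaning:
print(unfair_under_5n(True, True, 100.0, 20.0))  # False
```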

Moreover, subsequent federal court challenges to a final FTC rule likely would put forth the consumer and competitive benefits sacrificed by rule requirements. For example, rule challengers might point to the added business costs passed on to consumers that would arise from particular rule mandates, and the diminution in competition among data-protection systems generated by specific rule provisions. Litigation uncertainties surrounding these issues could be substantial and would cast into further doubt the legal viability of any final FTC data protection rule.

Apart from these legal risk-based costs, an MMR data-privacy rule predictably would generate error-based costs. Given imperfect information in the hands of government and the impossibility of achieving welfare-maximizing nirvana through regulation (see, for example, here), any MMR data-privacy rule would erroneously condemn some economically efficient business protocols and disincentivize some efficiency-seeking behavior. The Section 5(n) cost-benefit framework, though helpful, would not eliminate such error. (For example, even bureaucratic efforts to accommodate some business suggestions during the rulemaking process might tilt the post-rule market in favor of certain business models, thereby distorting competition.) In the abstract, it is difficult to say whether the welfare benefits of a final MMR data-privacy rule (measured by reductions in data-privacy-related consumer harm) would outweigh the costs, even before taking legal costs into account.

Conclusion

At least two FTC commissioners (and likely a third, assuming that President Joe Biden’s highly credentialed nominee Alvaro Bedoya will be confirmed by the U.S. Senate) appear to support FTC data-privacy regulation, even in the absence of new federal legislation. Such regulation, which presumably would be adopted as an MMR pursuant to Section 18 of the FTC Act, would probably not prove cost-beneficial. Not only would adoption of a final data-privacy rule generate substantial litigation costs and uncertainty, it would quite possibly add an additional layer of regulatory burdens above and beyond the requirements of proliferating state privacy rules. Furthermore, it is impossible to say whether the consumer-privacy benefits stemming from such an FTC rule would outweigh the error costs (manifested through competitive distortions and consumer harm) stemming from the inevitable imperfections of the rule’s requirements. All told, these considerations counsel against the allocation of scarce FTC resources to a Section 18 data-privacy rulemaking initiative.

But what about legislation? New federal privacy legislation that explicitly preempted state law would eliminate costs arising from inconsistencies among state privacy rules. Ideally, if such legislation were to be pursued, it should to the extent possible embody a cost-benefit framework designed to minimize the sum of administrative (including litigation) and error costs. The nature of such a possible law, and the role the FTC might play in administering it, however, is a topic for another day.

In recent years, a diverse cross-section of advocates and politicians have leveled criticisms at Section 230 of the Communications Decency Act and its grant of legal immunity to interactive computer services. Proposed legislative changes to the law have been put forward by both Republicans and Democrats.

It remains unclear whether Congress (or the courts) will amend Section 230, but any changes are bound to expand the scope, uncertainty, and expense of content risks. That’s why it’s important that such changes be developed and implemented in ways that minimize their potential to significantly disrupt and harm online activity. This piece focuses on those insurable content risks that most frequently result in litigation and considers the effect of the direct and indirect costs caused by frivolous suits and lawfare, not just the ultimate potential for a court to find liability. The experience of the 1980s asbestos-litigation crisis offers a warning of what could go wrong.

Enacted in 1996, Section 230 was intended to promote the Internet as a diverse medium for discourse, cultural development, and intellectual activity by shielding interactive computer services from legal liability when blocking or filtering access to obscene, harassing, or otherwise objectionable content. Absent such immunity, a platform hosting content produced by third parties could be held equally responsible as the creator for claims alleging defamation or invasion of privacy.

In the current legislative debates, Section 230’s critics on the left argue that the law does not go far enough to combat hate speech and misinformation. Critics on the right claim the law protects censorship of dissenting opinions. Legal challenges to the current wording of Section 230 arise primarily from what constitutes an “interactive computer service,” “good faith” restriction of content, and the grant of legal immunity, regardless of whether the restricted material is constitutionally protected. 

While Congress and various stakeholders debate various alternate statutory frameworks, several test cases simultaneously have been working their way through the judicial system and some states have either passed or are considering legislation to address complaints with Section 230. Some have suggested passing new federal legislation classifying online platforms as common carriers as an alternate approach that does not involve amending or repealing Section 230. Regardless of the form it may take, change to the status quo is likely to increase the risk of litigation and liability for those hosting or publishing third-party content.

The Nature of Content Risk

The class of individuals and organizations exposed to content risk has never been broader. Any information, content, or communication that is created, gathered, compiled, or amended can be considered “material” which, when disseminated to third parties, may be deemed “publishing.” Liability can arise from any step in that process. Those who republish material are generally held to the same standard of liability as if they were the original publisher. (See, e.g., Rest. (2d) of Torts § 578 with respect to defamation.)

Digitization has simultaneously reduced the cost and expertise required to publish material and increased the potential reach of that material. Where it was once limited to books, newspapers, and periodicals, “publishing” now encompasses such activities as creating and updating a website; creating a podcast or blog post; or even posting to social media. Much of this activity is performed by individuals and businesses who have only limited experience with the legal risks associated with publishing.

This is especially true regarding the use of third-party material, which both sophisticated and unsophisticated platforms use extensively. Platforms that host third-party-generated content—e.g., social media or websites with comment sections—have historically engaged in only limited vetting of that content, although this is changing. Add to this the potential for material to reach consumers far beyond the original platform and target audience; the lasting digital traces that are difficult to identify and remove; and the need to comply with privacy and other statutory requirements, and the potential for all manner of “publishers” to incur legal liability has never been higher.

Even sophisticated legacy publishers struggle with managing the litigation that arises from these risks. There are a limited number of specialist counsel, which results in higher hourly rates. Oversight of legal bills is not always effective, as internal counsel often have limited resources to manage their daily responsibilities and litigation. As a result, legal fees often make up as much as two-thirds of the average claims cost. Accordingly, defense spending and litigation management are indirect, but important, risks associated with content claims.

Effective risk management is any publisher’s first line of defense. The type and complexity of content risk management vary significantly by organization, based on its size, resources, activities, risk appetite, and sophistication. Traditional publishers typically have a formal set of editorial guidelines specifying policies governing the creation of content, pre-publication review, editorial-approval authority, and referral to internal and external legal counsel. They often maintain a library of standardized contracts, a process to periodically review and update those wordings, and a process to verify the validity of a potential licensor’s rights. Most have formal controls to respond to complaints and to retraction/takedown requests.

Insuring Content Risks

Insurance is integral to most publishers’ risk-management plans. Content coverage is present, to some degree, in most general liability policies (i.e., for “advertising liability”). Specialized coverage—commonly referred to as “media” or “media E&O”—is available on a standalone basis or may be packaged with cyber-liability coverage. Terms of specialized coverage can vary significantly, but such policies generally provide at least basic coverage for the three primary content risks: defamation, copyright infringement, and invasion of privacy.

Insureds typically retain the first dollars of loss, up to a specific threshold. They may also retain a coinsurance percentage of every dollar thereafter in partnership with their insurer. For example, an insured may be responsible for the first $25,000 of loss, and for 10% of every dollar of loss above that threshold. Such coinsurance structures often are used by insurers as a non-monetary tool to help control legal spending and to incentivize an organization to employ effective oversight of counsel’s billing practices.
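
The arithmetic is straightforward. Here is a short sketch using the figures from the example above (a $25,000 retention and 10% coinsurance); the $100,000 loss is hypothetical:

```python
def insured_share(loss: float,
                  retention: float = 25_000.0,
                  coinsurance: float = 0.10) -> float:
    """Amount the insured bears: the full retention, plus its
    coinsurance share of every dollar of loss above the retention."""
    if loss <= retention:
        return loss
    return retention + coinsurance * (loss - retention)

loss = 100_000.0
retained = insured_share(loss)   # 25,000 + 10% of 75,000 = 32,500.0
insurer_pays = loss - retained   # 67,500.0
print(retained, insurer_pays)
```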

The type and amount of loss retained will depend on the insured’s size, resources, risk profile, risk appetite, and insurance budget. Generally, but not always, increases in an insured’s retention or an insurer’s attachment (e.g., raising the threshold to $50,000, or raising the insured’s coinsurance to 15%) will result in lower premiums. Most insureds will seek the smallest retention feasible within their budget. 

Contract limits (the maximum coverage payout available) will vary based on the same factors. Larger policyholders often build a “tower” of insurance made up of multiple layers of the same or similar coverage issued by different insurers. Two or more insurers may partner on the same “quota share” layer and split any loss incurred within that layer on a pre-agreed proportional basis.  
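
To illustrate how such a tower responds to a loss, here is a simple sketch with a hypothetical three-layer tower in which the second layer is a 60/40 quota share. All figures and insurer names are invented, and the insured's retention is ignored for simplicity:

```python
# Hypothetical three-layer tower; the second layer is a 60/40 quota
# share between two insurers.
TOWER = [
    ("Layer 1", 10_000_000, {"Insurer A": 1.0}),
    ("Layer 2", 10_000_000, {"Insurer B": 0.6, "Insurer C": 0.4}),
    ("Layer 3", 10_000_000, {"Insurer D": 1.0}),
]

def allocate_loss(loss: float) -> dict:
    """Fill each layer in order until the loss is exhausted, splitting
    quota-share layers pro rata among their participants."""
    payouts = {}
    remaining = loss
    for _name, limit, shares in TOWER:
        in_layer = min(remaining, limit)
        for insurer, share in shares.items():
            payouts[insurer] = payouts.get(insurer, 0.0) + in_layer * share
        remaining -= in_layer
        if remaining <= 0:
            break
    return payouts

# An $18M loss exhausts Layer 1 and reaches $8M into the quota share:
print(allocate_loss(18_000_000))
# {'Insurer A': 10000000.0, 'Insurer B': 4800000.0, 'Insurer C': 3200000.0}
```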

Navigating the strategic choices involved in developing an insurance program can be complex, depending on an organization’s risks. Policyholders often use commercial brokers to aid them in developing an appropriate risk-management and insurance strategy that maximizes coverage within their budget and to assist with claims recoveries. This is particularly important for small and mid-sized insureds, which may lack the sophistication or budget of larger organizations. Policyholders and brokers try to minimize the gaps in coverage between layers and among quota-share participants, but such gaps can occur, leaving a policyholder partially self-insured.

An organization’s options to insure its content risk may also be influenced by the dynamics of the overall insurance market or within specific content lines. Underwriters are not all created equal; underwriting is a challenging discipline that requires a measure of prediction, and some underwriters may fail to adequately identify and account for certain risks. It can also be challenging to accurately measure risk aggregation and set appropriate reserves. An insurer’s appetite for certain lines and the availability of supporting reinsurance can fluctuate based on trends in the general capital markets. Specialty media/content coverage is a small niche within the global commercial insurance market, which makes insurers in this line more sensitive to these general trends.

Litigation Risks from Changes to Section 230

A full repeal or judicial invalidation of Section 230 generally would make every platform responsible for all the content it disseminates, regardless of who created the material, and would require at least some additional editorial review. This would significantly disadvantage platforms that host a large volume of third-party content. Internet service providers, cable companies, social media, and product/service review companies would be put under tremendous strain, given the daily volume of content produced. To reduce the risk that they serve as a “deep pocket” target for plaintiffs, they would likely adopt more robust pre-publication screening of content and of authorized third parties; limit public interfaces; require registration before a user may publish content; employ more reactive complaint response/takedown policies; and ban problem users more frequently. Small and mid-sized enterprises (SMEs), as well as those not focused primarily on the business of publishing, would likely avoid many interactive functions altogether.

A full repeal would be, in many ways, a blunderbuss approach to dealing with criticisms of Section 230, and would cause as many problems as it solves, or more. In the current polarized environment, it also appears unlikely that Congress will reach bipartisan agreement on amended language for Section 230, or on classifying interactive computer services as common carriers, given that the changes desired by the political left and right are so divergent. What may be more likely is that courts encounter a test case that prompts them to clarify the application of the existing statutory language—i.e., whether an entity was acting as a neutral platform or a content creator, whether its conduct was in “good faith,” and whether the material is “objectionable” within the meaning of the statute.

A relatively greater frequency of litigation is almost inevitable in the wake of any changes to the status quo, whether made by Congress or the courts. Major litigation would likely focus on those social-media platforms at the center of the Section 230 controversy, such as Facebook and Twitter, given their active role in these issues, deep pockets and, potentially, various admissions against interest helpful to plaintiffs regarding their level of editorial judgment. SMEs could also be affected in the immediate wake of a change to the statute or its interpretation. While SMEs are likely to be implicated on a smaller scale, the impact of litigation could be even more damaging to their viability if they are not adequately insured.

Over time, the boundaries of an amended Section 230’s application and any consequential effects should become clearer as courts develop application criteria and precedent is established for different fact patterns. Exposed platforms will likely make changes to their activities and risk-management strategies consistent with such developments. Operationally, some interactive features—such as comment sections or product and service reviews—may become less common.

In the short and medium term, however, a period of increased and unforeseen litigation to resolve these issues is likely to prove expensive and damaging. Insurers of content risks are likely to bear the brunt of any changes to Section 230, because these risks and their financial costs would be new, uncertain, and not incorporated into historical pricing of content risk. 

Remembering the Asbestos Crisis

The introduction of a new exposure or legal risk can have significant financial effects on commercial insurance carriers. New and revised risks must be accounted for in the assumptions, probabilities, and load factors used in insurance pricing and reserving models. Even small changes in those values can have large aggregate effects, which may undermine confidence in those models, complicate obtaining reinsurance, or harm an insurer’s overall financial health.

For example, in the 1980s, certain courts adopted the triple-trigger and continuous trigger methods[1] of determining when a policyholder could access coverage under an “occurrence” policy for asbestos claims. As a result, insurers paid claims under policies dating back to the early 1900s and, in some cases, under all policies from that date until the date of the claim. Such policies were written when mesothelioma related to asbestos was unknown and not incorporated into the policy pricing.
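
A rough sketch of why the trigger rules mattered so much: relative to a rule under which a single policy year responds, the continuous trigger multiplies the number of policy years exposed to a single claim. The years below are hypothetical:

```python
def triggered_policy_years(first_exposure: int, manifestation: int,
                           continuous: bool) -> list[int]:
    """Single trigger: only the manifestation-year policy responds.
    Continuous trigger: every policy year from first exposure through
    manifestation responds."""
    if continuous:
        return list(range(first_exposure, manifestation + 1))
    return [manifestation]

# Hypothetical claim: asbestos exposure beginning in 1948, mesothelioma
# manifesting in 1985. The continuous trigger reaches 38 policy years.
print(len(triggered_policy_years(1948, 1985, continuous=True)))   # 38
print(len(triggered_policy_years(1948, 1985, continuous=False)))  # 1
```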

Insurers had long since released reserves from the decades-old policy years, so those resources were not available to pay claims. Nor could underwriters retroactively increase premiums for the intervening years and smooth out the cost of these claims. This created extreme financial stress for impacted insurers and reinsurers, with some ultimately rendered insolvent. Surviving carriers responded by drastically reducing coverage and increasing prices, which resulted in a major capacity shortage that resolved only after the creation of the Bermuda insurance and reinsurance market.

The asbestos-related liability crisis represented a perfect storm that is unlikely to be replicated. Given the ubiquitous nature of digital content, however, any drastic or misconceived changes to Section 230 protections could still cause significant disruption to the commercial insurance market. 

Content risk is covered, at least in part, by general liability and many cyber policies, but it is not currently a primary focus for underwriters. Specialty media underwriters are more likely to be monitoring Section 230 risk, but the highly competitive market will make it difficult for them to respond to any changes with significant price increases. In addition, the current market environment for U.S. property and casualty insurance generally is in the midst of correcting for years of inadequate pricing, expanding coverage, developing exposures, and claims inflation. It would be extremely difficult to charge an adequate premium increase if the potential severity of content risk were to increase suddenly.

In the face of such risk uncertainty and challenges to adequately increasing premiums, underwriters would likely seek to reduce their exposure to online content risks, i.e., by reducing the scope of coverage, reducing limits, and increasing retentions. How these changes would manifest, and how much pain they would cause for all involved, would likely depend on how quickly policyholders’ risk profiles change.

Small or specialty carriers caught unprepared could be forced to exit the market if they experienced a sharp spike in claims or unexpected increase in needed reserves. Larger, multiline carriers may respond by voluntarily reducing or withdrawing their participation in this space. Insurers exposed to ancillary content risk may simply exclude it from cover if adequate price increases are impractical. Such reactions could result in content coverage becoming harder to obtain or unavailable altogether. This, in turn, would incentivize organizations to limit or avoid certain digital activities.

Finding a More Thoughtful Approach

The tension between calls for reform of Section 230 and the potential for disrupting online activity does not mean that political leaders and courts should ignore these issues. Rather, it means that what’s required is a thoughtful, clear, and predictable approach to any changes, with the goal of maximizing the clarity of the changes and their application and minimizing any resulting litigation. Regardless of whether accomplished through legislation or the judicial process, addressing the following issues could minimize the duration and severity of any period of harmful disruption regarding content risk:

  1. Presumptive immunity – Including an express statement in the definition of “interactive computer service,” or inferring one judicially, to clarify that platforms hosting third-party content enjoy a rebuttable presumption that statutory immunity applies would discourage frivolous litigation as courts establish precedent defining the applicability of any other revisions.
  2. Specify the grounds for losing immunity – Clarify, at a minimum, what constitutes “good faith” with respect to content restrictions and further clarify what material is or is not “objectionable,” as it relates to newsworthy content or actions that trigger loss of immunity.
  3. Specify the scope and duration of any loss of immunity – Clarify whether the loss of immunity is total, categorical, or specific to the situation under review and the duration of that loss of immunity, if applicable.
  4. Reinstatement of immunity, subject to burden-shifting – Clarify what a platform must do to reinstate statutory immunity on a go-forward basis and clarify that it bears the burden of proving its go-forward conduct entitles it to statutory protection.
  5. Address associated issues – Any clarification or interpretation should address other issues likely to arise, such as the effect and weight to be given to a platform’s application of its community standards, adherence to neutral takedown/complaint procedures, etc. Care should be taken to avoid overcorrecting and creating a “heckler’s veto.”
  6. Deferred effect – If change is made legislatively, the effective date should be deferred for a reasonable time to allow platforms sufficient opportunity to adjust their current risk-management policies, contractual arrangements, content publishing and storage practices, and insurance arrangements in a thoughtful, orderly fashion that accounts for the new rules.

Ultimately, legislative and judicial stakeholders will chart their own course to address the widespread dissatisfaction with Section 230. More important than any of these specific policy suggestions is the principle that underpins them: that any changes incorporate due consideration for the potential direct and downstream harm that can be caused if policy is not clear, comprehensive, and designed to minimize unnecessary litigation.

It is no surprise that, in the years since Section 230 of the Communications Decency Act was passed, the environment and risks associated with digital platforms have evolved, or that those changes have created a certain amount of friction in the law’s application. Policymakers should employ a holistic approach when evaluating their legislative and judicial options to revise or clarify the application of Section 230. Doing so in a targeted, predictable fashion should help to mitigate or avoid the risk of increased litigation and other unintended consequences that might otherwise prove harmful to online platforms and to the commercial insurance market.

Aaron Tilley is a senior insurance executive with more than 16 years of commercial insurance experience in executive management, underwriting, legal, and claims, working in or with the U.S., Bermuda, and London markets. He has served as chief underwriting officer of a specialty media E&O and cyber-liability insurer and as coverage counsel representing international insurers with respect to a variety of E&O and advertising liability claims.


[1] The triple-trigger method allowed a policy to be accessed based on the date of the injury-in-fact, manifestation of injury, or exposure to substances known to cause injury. The continuous trigger allowed all policies issued by an insurer, not just one, to be accessed if a triggering event could be established during the policy period.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Nicolas Petit himself, the Joint Chair in Competition Law at the Department of Law at European University Institute in Fiesole, Italy, and at EUI’s Robert Schuman Centre for Advanced Studies. He is also invited professor at the College of Europe in Bruges.]

A lot of water has gone under the bridge since my book was published last year. To close this symposium, I thought I would discuss the new phase of antitrust statutorification taking place before our eyes. In the United States, Congress is working on five antitrust bills that propose to subject platforms to stringent obligations, including a ban on mergers and acquisitions, required data portability and interoperability, and line-of-business restrictions. In the European Union (EU), lawmakers are examining the proposed Digital Markets Act (“DMA”) that sets out a complicated regulatory system for digital “gatekeepers,” with per se behavioral limitations of their freedom over contractual terms, technological design, monetization, and ecosystem leadership.

Proponents of legislative reform on both sides of the Atlantic appear to share the common view that ongoing antitrust adjudication efforts are both instrumental and irrelevant. They are instrumental because government (or plaintiff) losses build the evidence needed to support the view that antitrust doctrine is exceedingly conservative, and that legal reform is needed. Two weeks ago, antitrust reform activists ran to Twitter to point out that the U.S. District Court dismissal of the Federal Trade Commission’s (FTC) complaint against Facebook was one more piece of evidence supporting the view that the antitrust pendulum needed to swing. They are instrumental because, again, government (or plaintiff) wins will support scaling antitrust enforcement in the marginal case by adoption of governmental regulation. In the EU, antitrust cases follow each other almost as night follows day, lending credence to the view that regulation will bring much-needed coordination and economies of scale.

But both instrumentalities are, at the end of the line, irrelevant, because they lead to the same conclusion: legislative reform is long overdue. With this in mind, the logic of lawmakers is that they need not await the courts, and they can advance with haste and confidence toward the promulgation of new antitrust statutes.

The antitrust reform process that is unfolding gives cause for questioning. The issue is not legal reform in itself. There is no suggestion here that statutory reform is necessarily inferior, and no correlative reification of the judge-made-law method. Legislative intervention can occur for good reason, as when it breaks judicial inertia caused by ideological logjam.

The issue is rather one of precipitation. There is a lot of learning in the cases. The point, simply put, is that a supplementary court-legislative dialogue would yield additional information—or what Guido Calabresi has called “starting points” for regulation—that premature legislative intervention is sweeping under the rug. This issue is important because specification errors (see Doug Melamed’s symposium piece on this) in statutory legislation are not uncommon. Feedback from court cases creates a factual record that will often be missing when lawmakers act too precipitously.

Moreover, a court-legislative iteration is useful when the issues in discussion are cross-cutting. The digital economy brings an abundance of them. As tech analyst Ben Evans has observed, data-sharing obligations raise tradeoffs between contestability and privacy. Chapter VI of my book shows that breakups of social networks or search engines might promote rivalry and, at the same time, increase the leverage of advertisers to extract more user data and conduct more targeted advertising. In such cases, Calabresi said, judges who know the legal topography are well-placed to elicit the preferences of society. He added that they are better placed than government agencies’ officials or delegated experts, who often attend to the immediate problem without the big picture in mind (all the more when officials are denied opportunities to engage with civil society and the press, as per the policy announced by the new FTC leadership).

Of course, there are three objections to this. The first consists of arguing that statutes are needed now because courts are too slow to deal with problems. The argument is not dissimilar to Frank Easterbrook’s concerns about irreversible harms to the economy, though with a tweak. Where Easterbrook’s concern was one of ossification of Type I errors due to stare decisis, the concern here is one of entrenchment of durable monopoly power in the digital sector due to Type II errors. The concern, however, fails the test of evidence. The available data in both the United States and Europe shows unprecedented vitality in the digital sector. Venture capital funding cruises at historical heights, fueling new firm entry, business creation, and economic dynamism in the U.S. and EU digital sectors, topping all other industries. Unless we require higher levels of entry from digital markets than from other industries—or discount the social value of entry in the digital sector—this should give us reason to push pause on lawmaking efforts.

The second objection is that following an incremental process of updating the law through the courts creates intolerable uncertainty. But this objection, too, is unconvincing, at best. One may ask which brings more uncertainty: an abrupt legislative change of the law after decades of legal stability, or an experimental process of judicial renovation.

Besides, ad hoc statutes, such as the ones in discussion, are likely to pose quickly and dramatically the problem of their own legal obsolescence. Detailed and technical statutes specify rights, requirements, and procedures that often do not stand the test of time. For example, the DMA likely captures Windows as a core platform service subject to gatekeeping. But is the market power of Microsoft over Windows still relevant today, and isn’t it constrained in effect by existing antitrust rules? In antitrust, vagueness in critical statutory terms allows room for change.[1] The best way to give meaning to buzzwords like “smart” or “future-proof” regulation consists of building in first principles, not in creating discretionary opportunities for permanent adaptation of the law. In reality, it is hard to see how the methods of future-proof regulation currently discussed in the EU create less uncertainty than a court process.

The third objection is that we do not need more information, because we now benefit from economic knowledge showing that existing antitrust laws are too permissive of anticompetitive business conduct. But is the economic literature actually supportive of stricter rules against defendants than the rule-of-reason framework that applies in many unilateral conduct cases and in merger law? The answer is surely no. The theoretical economic literature has travelled a long way in the past 50 years. Of particular interest are works on network externalities, switching costs, and multi-sided markets. But the progress achieved in the economic understanding of markets is more descriptive than normative.

Take the celebrated multi-sided market theory. The main contribution of the theory is its advice to decision-makers to take the periscope out, so as to consider all possible welfare tradeoffs, not to be more or less defendant friendly. Payment cards provide a good example. Economic research suggests that any antitrust or regulatory intervention on prices affects tradeoffs between, and payoffs to, cardholders and merchants, cardholders and cash users, cardholders and banks, and banks and card systems. Equally numerous tradeoffs arise in many sectors of the digital economy, like ridesharing, targeted advertisement, or social networks. Multi-sided market theory renders these tradeoffs visible. But it does not come with a clear recipe for how to solve them. For that, one needs to follow first principles. A system of measurement that is flexible and welfare-based helps, as Kelly Fayne observed in her critical symposium piece on the book.

Another example might be worth considering. The theory of increasing returns suggests that markets subject to network effects tend to converge around the selection of a single technology standard, and it is not a given that the selected technology is the best one. One policy implication is that social planners might be justified in keeping a second option on the table. As I discuss in Chapter V of my book, the theory may support an M&A ban against platforms in tipped markets, on the conjecture that the assets of fringe firms might be efficiently repositioned to offer product differentiation to consumers. But the theory of increasing returns does not say under what conditions we can know that the selected technology is suboptimal. Moreover, if the selected technology is the optimal one, or if the suboptimal technology quickly obsolesces, are policy efforts at all needed?

Last, as Bo Heiden’s thought-provoking symposium piece argues, it is not a given that antitrust enforcement of rivalry in markets is the best way to keep an alternative technology alive, let alone to supply the innovation needed to deliver economic prosperity. Government procurement, science and technology policy, and intellectual-property policy might be equally effective (note that the fathers of the theory, like Brian Arthur or Paul David, have been very silent on antitrust reform).

There are, of course, exceptions to the limited normative content of modern economic theory. In some areas, economic theory is more predictive of consumer harms, like in relation to algorithmic collusion, interlocking directorates, or “killer” acquisitions. But the applications are discrete and industry-specific. All are insufficient to declare that the antitrust apparatus is dated and that it requires a full overhaul. When modern economic research turns normative, it is often way more subtle in its implications than some wild policy claims derived from it. For example, the emerging studies that claim to identify broad patterns of rising market power in the economy in no way lead to an implication that there are no pro-competitive mergers.

Similarly, the empirical picture of digital markets is incomplete. The past few years have seen a proliferation of qualitative research reports on industry structure in the digital sectors. Most suggest that industry concentration has risen, particularly in those sectors. As with any research exercise, these reports’ findings deserve to be subjected to critical examination before they can be deemed supportive of a claim of “sufficient experience.” Moreover, there is no reason to hold these reports to a lower standard of accountability on grounds that they have often been drafted by experts at the request of antitrust agencies. After all, we academics are ethically obliged to be at least as exacting with policy-based research as we are with science-based research.

Now, with healthy skepticism in the back of one’s mind, one can see immediately that the findings of expert reports to date have tended to downplay behavioral observations that counterbalance findings of monopoly power, such as intense business anxiety, technological innovation, and demand-expanding investments in digital markets. This was, I believe, the main takeaway from Chapter IV of my book. And less than six months ago, The Economist ran its lead story on the new marketplace reality of “Tech’s Big Dust-Up.”

More importantly, the findings of the various expert reports never seriously contemplate the possibility of competition by differentiation in business models among the platforms. Take privacy, for example. As Peter Klein reasonably writes in his symposium article, we should not be quick to assume market failure. After all, we might have more choice than meets the eye, with Google free but ad-based, and Apple pricey but less targeted. More generally, Richard Langlois makes a very convincing point that diversification is at the heart of competition between the large digital gatekeepers. We might just be too short-termist (digital communications technology might itself create a false sense of urgency) to wait for the end state of the Big Tech moligopoly.

Similarly, the expert reports did not seriously examine the real possibility of competition for the purchase of regulation. As in the classic George Stigler paper, where the railroad industry fought motor-trucking competition with state regulation, the businesses that stand to lose most from the digital transformation might be rationally jockeying to convince lawmakers that not all business models are equal, and to steer regulation toward specific business models. Again, though we do not yet know how to assess this issue rigorously, there are signs that a coalition of large news corporations and the publishing oligopoly is behind many antitrust initiatives against digital firms.

As should be clear from these few lines, my cautionary note against antitrust statutorification might be more relevant to the U.S. market. In the EU, sunk investments have been made, expectations have been created, and regulation has now become inevitable. The United States, however, has a chance to get this right. Court cases are the way to go. And contrary to what the popular coverage suggests, the recent district court dismissal of the FTC case comes nowhere close to ruling out the applicability of U.S. antitrust laws to Facebook’s alleged killer acquisitions. On the contrary, the ruling contains an invitation to rework a rushed complaint. Perhaps, as Shane Greenstein observed in his retrospective analysis of the U.S. Microsoft case, we would all benefit if we studied more carefully the learning that lies in the cases, rather than rushing to produce instant antitrust analysis on Twitter that fits within 280 characters.


[1] But some threshold conditions like agreement or dominance might also become dated. 

ICLE at the Oxford Union

Sam Bowman —  13 July 2021

Earlier this year, the International Center for Law & Economics (ICLE) hosted a conference with the Oxford Union on the themes of innovation, competition, and economic growth with some of our favorite scholars. Though attendance at the event itself was reserved for Oxford Union members, videos from that day are now available for everyone to watch.

Charles Goodhart and Manoj Pradhan on demographics and growth

Charles Goodhart, of Goodhart’s Law fame, and Manoj Pradhan discussed the relationship between demographics and growth, and argued that an aging global population could mean higher inflation and interest rates sooner than many imagine.

Catherine Tucker on privacy and innovation — is there a trade-off?

Catherine Tucker of the Massachusetts Institute of Technology discussed the costs and benefits of privacy regulation with ICLE’s Sam Bowman, and considered whether we face a trade-off between privacy and innovation online and in the fight against COVID-19.

Don Rosenberg on the political and economic challenges facing a global tech company in 2021

Qualcomm’s General Counsel Don Rosenberg, formerly of Apple and IBM, discussed the political and economic challenges facing a global tech company in 2021, including how to deal with China while operating in one of the most strategically vital industries in the world.

David Teece on the dynamic capabilities framework

David Teece explained the dynamic capabilities framework, a way of understanding business strategy and behavior in an uncertain world.

Vernon Smith in conversation with Shruti Rajagopalan on what we still have to learn from Adam Smith

Nobel laureate Vernon Smith discussed the enduring insights of Adam Smith with the Mercatus Center’s Shruti Rajagopalan.

Samantha Hoffman, Robert Atkinson and Jennifer Huddleston on American and Chinese approaches to tech policy in the 2020s

The final panel, with the Information Technology and Innovation Foundation’s President Robert Atkinson, the Australian Strategic Policy Institute’s Samantha Hoffman, and the American Action Forum’s Jennifer Huddleston, discussed the role that tech policy in the U.S. and China plays in the geopolitics of the 2020s.

The Biden Administration’s July 9 Executive Order on Promoting Competition in the American Economy is very much a mixed bag—some positive aspects, but many negative ones.

It will have some positive effects on economic welfare to the extent that it succeeds in lifting artificial barriers to competition that harm consumers and workers, such as by allowing direct sales of hearing aids in drug stores and by helping to eliminate unnecessary occupational-licensing restrictions, to name just two of several examples.

But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order emphasize new regulation, such as net-neutrality requirements that may reduce investment in broadband by internet service providers, and the imposition of new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)

Antitrust-related proposals to challenge previously cleared mergers, and to impose new antitrust rulemaking, are likely to raise costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.

An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.

In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:

  1. Deals effectively with serious competitive problems; while at the same time
  2. Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.

Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.

Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges and may, in many cases, not succeed in court. This will impose unnecessary business uncertainty, in addition to wasting public and private resources on litigation.