[This post is a contribution to Truth on the Market’s continuing digital symposium “FTC Rulemaking on Unfair Methods of Competition.” You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]
In a 3-2 vote in July 2021, the Federal Trade Commission (FTC) rescinded the nuanced statement it had issued in 2015 concerning the scope of unfair methods of competition under Section 5 of the FTC Act. At the same time, the FTC rejected the applicability of the balancing test set forth in the rule of reason (and with it, several decades of case law, agency guidance, and legal and economic scholarship).
The July 2021 statement not only rejected these long-established guiding principles for Section 5 enforcement but left in its place nothing but regulatory fiat. In the statement the FTC issued Nov. 10, 2022 (again, by a divided 3-1 vote), the agency has now adopted this “just trust us” approach as a permanent operating principle.
The November 2022 statement purports to provide a standard under which the agency will identify unfair methods of competition under Section 5. As Commissioner Christine Wilson explains in her dissent, however, it clearly fails to do so. Rather, it delivers a collection of vaguely described principles and pejorative rhetoric that encompass loosely defined harms to competition, competitors, workers and a catch-all group of “other market participants.”
The methodology for identifying these harms is comparably vague. The agency not only again rejects the rule of reason but asserts the authority to take action against a variety of “non-quantifiable harms,” all of which can be addressed at the most “incipient” stages. Moreover, and perhaps most remarkably, the statement specifically rejects any form of “net efficiencies” or “numerical cost-benefit analysis” to guide its enforcement decisions or provide even a modicum of predictability to the business community.
The November 2022 statement amounts to regulatory fiat on overdrive, presented with a thin veneer of legality derived from a medley of dormant judicial decisions, incomplete characterizations of precedent, and truncated descriptions of legislative history. Under the agency’s dubious understanding of Section 5, Congress in 1914 elected to provide the FTC with the authority to declare any business practice “unfair” subject to no principle other than the agency’s subjective understanding of that term (and, apparently, never to be informed by “numerical cost-benefit analysis”).
Moreover, any enforcement action that targeted a purportedly “unfair” practice would then be adjudicated within the agency and appealable in the first instance to the very same commissioners who authorized the action. This institutional hall of mirrors would establish the FTC as the national “fairness” arbiter subject to virtually no constraining principles under which the exercise of such powers could ever be deemed to have exceeded its scope. The license for abuse is obvious and the departure from due process inherent.
The views reflected in the November 2022 statement would almost certainly lead to a legal dead-end. If the agency takes action under its idiosyncratic understanding of the scope of unfair methods of competition under Section 5, it would elicit a legal challenge likely to produce one of two outcomes, both of them adverse to the agency.
First, it is likely that a judge would reject the agency’s understanding of Section 5, since it is irreconcilable with a well-developed body of case law requiring that the FTC (just like any other administrative agency) act under principles that provide businesses with, as described by the 2nd U.S. Circuit Court of Appeals, at least “an inkling as to what they can lawfully do rather than be left in a state of complete unpredictability.”
Any legally defensible interpretation of the scope of unfair methods of competition under Section 5 must take into account not only legislative intent at the time the FTC Act was enacted but more than a century’s worth of case law that courts have developed to govern the actions of administrative powers. Contrary to suggestions made in the November 2022 statement, neither the statute nor the relevant body of case law mandates unqualified deference by courts to the presumed wisdom of expert regulators.
Second, even if a court accepted the agency’s interpretation of the statute (or did so provisionally), there is a strong likelihood that it would then be compelled to strike down Section 5 as an unconstitutional delegation of lawmaking powers from the legislative to the executive branch. A majority of the Supreme Court has expressed increasing concern over actions by regulatory agencies that do not clearly fall within the legislatively specified scope of an agency’s authority—with respect to the FTC specifically in AMG Capital Management LLC v. FTC (2021) and now again in the pending case, Axon Enterprise Inc. v. FTC, as well as in other recent decisions concerning the U.S. Securities and Exchange Commission, the Occupational Safety and Health Administration, the U.S. Environmental Protection Agency, and the U.S. Patent and Trademark Office. Given that trend, this would seem to be a high-probability outcome.
In short: any enforcement action taken under the agency’s newly expanded understanding of Section 5 is unlikely to withstand judicial scrutiny, either as a matter of statutory construction or as a matter of constitutional principle. Given this legal forecast, the November 2022 statement could be viewed as mere theatrics, unlikely to have a long legal life or much practical impact (although, until judicial intervention occurs, it could impose significant costs on firms that must defend against agency-enforcement actions brought under the unilaterally expanded scope of Section 5).
Even if that were the case, however, the November 2022 statement and, in particular, its expanded understanding of the harms that the agency is purportedly empowered to target, is nonetheless significant because it should leave little doubt concerning the lack of any meaningful commitment by agency leadership to the FTC’s historical mission to preserve market competition. Rather, it has become increasingly clear that agency leadership seeks to deploy the powerful remedies of the FTC Act (and the rest of the antitrust-enforcement apparatus) to displace a market-driven economy governed by the free play of competitive forces with an administered economy in which regulators continuously intervene to reengineer economic outcomes on grounds of fairness to favored constituencies, rather than to preserve the competitive process.
Reengineering Section 5 of the FTC Act as a “shadow” antitrust statute that operates outside the rule of reason (or any other constraining objective principle) provides a strategic detour around the inconvenient evidentiary and other legal obstacles that the agency would struggle to overcome when seeking to achieve these policy objectives under the Sherman and Clayton Acts. This intentionally unstructured and inherently politicized approach to antitrust enforcement threatens not only the institutional preconditions for a market economy but ultimately the rule of law itself.
[This post from Jonathan M. Barnett, the Torrey H. Webb Professor of Law at the University of Southern California’s Gould School of Law, is an entry in Truth on the Market’s continuing FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]
In its Advance Notice for Proposed Rulemaking (ANPR) on Commercial Surveillance and Data Security, the Federal Trade Commission (FTC) has requested public comment on an unprecedented initiative to promulgate and implement wide-ranging rules concerning the gathering and use of consumer data in digital markets. In this contribution, I will assume, for the sake of argument, that the commission has the legal authority to exercise its purported rulemaking powers for this purpose without a specific legislative mandate (a question on which I recognize there is great uncertainty, heightened by the fact that Congress is concurrently considering legislation in the same policy area).
In considering whether to use these powers for the purposes of adopting and implementing privacy-related regulations in digital markets, the commission would be required to undertake a rigorous assessment of the expected costs and benefits of any such regulation. Any such cost-benefit analysis must comprise at least two critical elements that are omitted from, or addressed in highly incomplete form in, the ANPR.
The Hippocratic Oath of Regulatory Intervention
There is a longstanding consensus that regulatory intervention is warranted only if a market failure can be identified with reasonable confidence. This principle is especially relevant in the case of the FTC, which is entrusted with preserving competitive markets and, therefore, should be hesitant about intervening in market transactions without a compelling evidentiary basis. As a corollary to this proposition, it is also widely agreed that implementing any intervention to correct a market failure would only be warranted to the extent that such intervention would be reasonably expected to correct any such failure at a net social gain.
This prudent approach tracks the “economic effect” analysis that the commission must apply in the rulemaking process contemplated under the Federal Trade Commission Act and the analysis of “projected benefits and … adverse economic effects” of proposed and final rules contemplated by the commission’s rules of practice. Consistent with these requirements, the commission has exhibited a longstanding commitment to thorough cost-benefit analysis. As observed by former Commissioner Julie Brill in 2016, “the FTC conducts its rulemakings with the same level of attention to costs and benefits that is required of other agencies.” Former Commissioner Brill also observed that the “FTC combines our broad mandate to protect consumers with a rigorous, empirical approach to enforcement matters.”
This demanding, fact-based protocol enhances the likelihood that regulatory interventions result in a net improvement relative to the status quo, an uncontroversial goal of any rational public policy. Unfortunately, the ANPR does not make clear that the commission remains committed to this methodology.
Assessing Market Failure in the Use of Consumer Data
To even “get off the ground,” any proposed privacy regulation would be required to identify a market failure arising from a particular use of consumer data. This requires a rigorous and comprehensive assessment of the full range of social costs and benefits that can be reasonably attributed to any such practice.
The ANPR’s Oversights
In contrast to the approach described by former Commissioner Brill, several elements of the ANPR raise significant doubts concerning the current commission’s willingness to assess evidence relevant to the potential necessity of privacy-related regulations in a balanced, rigorous, and comprehensive manner.
First, while the ANPR identifies a plethora of social harms attributable to data-collection practices, it merely acknowledges the possibility that consumers enjoy benefits from such practices “in theory.” This skewed perspective is not empirically serious. Focusing almost entirely on the costs of data collection and dismissing as conjecture any possible gains defies market realities, especially given the fact that (as discussed below) those gains are clearly significant and, in some cases, transformative.
Second, the ANPR’s choice of the normatively charged term “data surveillance” to encompass all uses of consumer data conveys the impression that all data collection through digital services is surreptitious or coerced, whereas (as discussed below) some users may knowingly provide such data to enable certain data-reliant functionalities.
Third, there is no mention in the ANPR that online providers widely provide users with notices concerning certain uses of consumer data and often require users to select among different levels of data collection.
Fourth, the ANPR relies unusually heavily on news websites and non-peer-reviewed publications in the style of policy briefs or advocacy papers, rather than the empirical social-science research on which the commission has historically based policy determinations.
This apparent indifference to analytical balance is particularly exhibited in the ANPR’s failure to address the economic gains generated through the use of consumer data in online markets. As was recognized in a 2014 White House report, many valuable digital services could not function effectively without engaging in some significant level of data collection. The examples are numerous and diverse, including traffic-navigation services, which rely on data concerning a user’s geographic location (as well as other users’ locations); personalized ad delivery, which relies on data concerning a user’s search history and other disclosed characteristics; and search services, which rely on user data to offer search at no charge while delivering targeted advertisements to paying advertisers.
There are equally clear gains on the “supply” side of the market. Data-collection practices can expand market access by enabling smaller vendors to leverage digital intermediaries to attract consumers that are most likely to purchase those vendors’ goods or services. The commission has recognized this point in the past, observing in a 2014 report:
Data brokers provide the information they compile to clients, who can use it to benefit consumers … [C]onsumers may benefit from increased and innovative product offerings fueled by increased competition from small businesses that are able to connect with consumers that they may not have otherwise been able to reach.
Given the commission’s statutory mission under the FTC Act to protect consumers’ interests and preserve competitive markets, these observations should be of special relevance.
Data Protection v. Data-Reliant Functionality
Data-reliant services yield social gains by substantially lowering transaction costs and, in the process, enabling services that would not otherwise be feasible, with favorable effects for consumers and vendors. This observation does not exclude the possibility that specific uses of consumer data may constitute a potential market failure that merits regulatory scrutiny and possible intervention (assuming there is sufficient legal authority for the relevant agency to undertake any such intervention). That depends on whether the social costs reasonably attributable to a particular use of consumer data exceed the social gains reasonably attributable to that use. This basic principle seems to be recognized by the ANPR, which states that the commission can only deem a practice “unfair” under the FTC Act if “it causes or is likely to cause substantial injury” and “the injury is not outweighed by benefits to consumers or competition.”
In implementing this principle, it is important to keep in mind that a market failure could only arise if the costs attributable to any particular use of consumer data are not internalized by the parties to the relevant transaction. This requires showing either that a particular use of consumer data imposes harms on third parties (a plausible scenario in circumstances implicating risks to data security) or that consumers are not aware of, or do not adequately assess or foresee, the costs they incur as a result of such use (a plausible scenario in circumstances implicating risks to consumer data). For the sake of brevity, I will focus on the latter scenario.
Many scholars have taken the view that consumers do not meaningfully read privacy notices or consider privacy risks, although the academic literature has also recognized efforts by private entities to develop notice methodologies that can improve consumers’ ability to do so. Even accepting this view, however, it does not necessarily follow (as the ANPR appears to assume) that a more thorough assessment of privacy risks would inevitably lead consumers to elect higher levels of data privacy even where that would degrade functionality or require paying a positive price for certain services. That is a tradeoff that will vary across consumers. It is therefore difficult to predict and easy to get wrong.
As the ANPR indirectly acknowledges in questions 26 and 40, interventions that bar certain uses of consumer data may therefore harm consumers by compelling the modification, positive pricing, or removal from the market of popular data-reliant services. For this reason, some scholars and commentators have favored the informed-consent approach that provides users with the option to bar or limit certain uses of their data. This approach minimizes error costs since it avoids overestimating consumer preferences for privacy. Unlike a flat prohibition of certain uses of consumer data, it also can reflect differences in those preferences across consumers. The ANPR appears to dismiss this concern, asking in question 75 whether certain practices should be made illegal “irrespective of whether consumers consent to them” (my emphasis added).
Addressing the still-uncertain body of evidence concerning the tradeoff between privacy protections on the one hand and data-reliant functionalities on the other (as well as the still-unresolved extent to which users can meaningfully make that tradeoff) lies outside the scope of this discussion. However, the critical observation is that any determination of market failure concerning any particular use of consumer data must identify the costs (and specifically, identify non-internalized costs) attributable to any such use and then offset those costs against the gains attributable to that use.
This balancing analysis is critical. As the commission recognized in a 2015 report, it is essential to strike a balance between safeguarding consumer privacy without suppressing the economic gains that arise from data-reliant services that can benefit consumers and vendors alike. This even-handed approach is largely absent from the ANPR—which, as noted above, focuses almost entirely on costs while largely overlooking the gains associated with the uses of consumer data in online markets. This suggests a one-sided approach to privacy regulation that is incompatible with the cost-benefit analysis that the commission recognizes it must follow in the rulemaking process.
Private-Ordering Approaches to Consumer-Data Regulation
Suppose that a rigorous and balanced cost-benefit analysis determines that a particular use of consumer data would likely yield social costs that exceed social gains. It would still remain to be determined whether and how a regulator should intervene to yield a net social gain. As regulators make this determination, it is critical that they consider the full range of possible mechanisms to address a particular market failure in the use of consumer data.
Consistent with this approach, the FTC Act specifically requires that the commission specify in an ANPR “possible regulatory alternatives under consideration,” a requirement that is replicated at each subsequent stage of the rulemaking process, as provided in the rules of practice. The range of alternatives should include the possibility of taking no action, if no feasible intervention can be identified that would likely yield a net gain.
In selecting among those alternatives, it is imperative that the commission consider the possibility of unnecessary or overly burdensome rules that could impede the efficient development and supply of data-reliant services, either degrading the quality or raising the price of those services. In the past, the commission has emphasized this concern, stating in 2011 that “[t]he FTC actively looks for means to reduce burdens while preserving the effectiveness of a rule.”
This consideration (which appears to be acknowledged in question 24 of the ANPR) is of special importance to privacy-related regulation, given that the estimated annual costs to the U.S. economy (as calculated by the Information Technology and Innovation Foundation) of compliance with the most extensive proposed forms of privacy-related regulations would exceed $100 billion. Those costs would be especially burdensome for smaller entities, effectively raising entry barriers and reducing competition in online markets (a concern that appears to be acknowledged in question 27 of the ANPR).
Given the exceptional breadth of the rules that the ANPR appears to contemplate—covering an ambitious range of activities that would typically be the subject of a landmark piece of federal legislation, rather than administrative rulemaking—it is not clear that the commission has seriously considered this vital point of concern.
In the event that the FTC does move forward with any of these proposed rulemakings (which would be required to rest on a factually supported finding of market failure), it would confront a range of possible interventions in markets for consumer data. That range is typically viewed as being bounded, on the least-interventionist side, by notice-and-consent requirements to facilitate informed user choice, and on the most interventionist side, by prohibitions that specifically bar certain uses of consumer data.
This is well-traveled ground within the academic and policy literature and the relative advantages and disadvantages of each regulatory approach are well-known (and differ depending on the type of consumer data and other factors). Within the scope of this contribution, I wish to address an alternative regulatory approach that lies outside this conventional range of policy options.
Bottom-Up v. Top-Down Regulation
Any cost-benefit analysis concerning potential interventions to modify or bar a particular use of consumer data, or to mandate notice-and-consent requirements in connection with any such use, must contemplate not only government-implemented solutions but also market-implemented solutions, including hybrid mechanisms in which government action facilitates or complements market-implemented solutions.
This is not a merely theoretical proposal (and is referenced indirectly in questions 36, 51, and 87 of the ANPR). As I have discussed in previously published research, the U.S. economy has a long-established record of having adopted, largely without government intervention, collective solutions to the information asymmetries that can threaten the efficient operation of consumer goods and services markets.
Examples abound: Underwriters Laboratories (UL), which establishes product-safety standards in hundreds of markets; large accounting firms, which confirm compliance with Generally Accepted Accounting Principles (GAAP), which are in turn established and updated by the Financial Accounting Standards Board, a private entity subject to oversight by the Securities and Exchange Commission; and intermediaries in other markets, such as consumer credit, business credit, insurance carriers, bond issuers, and content ratings in the entertainment and gaming industries. Collectively, these markets encompass thousands of providers, hundreds of millions of customers, and billions of dollars in value.
A collective solution is often necessary to resolve information asymmetries efficiently because establishing an industrywide standard of product or service quality, together with a trusted mechanism for demonstrating compliance with that standard, generates gains that cannot be fully internalized by any single provider.
Jurisdictions outside the United States have tended to address this collective-action problem through the top-down imposition of standards by government mandate and enforcement by regulatory agencies, as illustrated by the jurisdictions referenced by the ANPR that have imposed restrictions on the use of consumer data through direct regulatory intervention. By contrast, the U.S. economy has tended to favor the bottom-up development of voluntary standards, accompanied by certification and audit services, all accomplished by a mix of industry groups and third-party intermediaries. In certain markets, this may be a preferred model to address the information asymmetries between vendors and customers that are the key sources of potential market failure in the use of consumer data.
Privately organized initiatives to set quality standards and monitor compliance benefit the market by supplying a reliable standard that reduces information asymmetries and transaction costs between consumers and vendors. This, in turn, yields economic gains in the form of increased output, since consumers have reduced uncertainty concerning product quality. These quality standards are generally implemented through certification marks (for example, the “UL” certification mark) or ranking mechanisms (for example, consumer-credit or business-credit scores), which induce adoption and compliance through the opportunity to accrue reputational goodwill that, in turn, translates into economic gains.
These market-implemented voluntary mechanisms are a far less costly means to reduce information asymmetries in consumer-goods markets than regulatory interventions, which require significant investments of public funds in rulemaking, detection, investigation, enforcement, and adjudication activities.
Hybrid Policy Approaches
Private-ordering solutions to collective-action failures in markets that suffer from information asymmetries can sometimes benefit from targeted regulatory action, resulting in a hybrid policy approach. In particular, regulators can sometimes play two supplemental functions in this context.
First, regulators can require that providers in certain markets comply with (or can provide a liability safe harbor for providers that comply with) the quality standards developed by private intermediaries that have developed track records of efficiently establishing those standards and reliably confirming compliance. This mechanism is anticipated by the ANPR, which asks in question 51 whether the commission should “require firms to certify that their commercial surveillance practices meet clear standards concerning collection, use, retention, transfer, or monetization of consumer data” and further asks whether those standards should be set by “the Commission, a third-party organization, or some other entity.”
Other regulatory agencies already follow this model. For example, federal and state regulatory agencies in the fields of health care and education rely on accreditation by designated private entities for purposes of assessing compliance with applicable licensing requirements.
Second, regulators can supervise and review the quality standards implemented, adjusted, and enforced by private intermediaries. This is illustrated by the example of securities markets, in which the major exchanges institute and enforce certain governance, disclosure, and reporting requirements for listed companies but are subject to regulatory oversight by the SEC, which must approve all exchange rules and amendments. Similarly, major accounting firms monitor compliance by public companies with GAAP but must register with, and are subject to oversight by, the Public Company Accounting Oversight Board (PCAOB), a nonprofit entity subject to SEC oversight.
These types of hybrid mechanisms shift to private intermediaries most of the costs involved in developing, updating, and enforcing quality standards (in this context, standards for the use of consumer data) and harness private intermediaries’ expertise, capacities, and incentives to execute these functions efficiently and rapidly, while using targeted forms of regulatory oversight as a complementary policy tool.
Certain uses of consumer data in digital markets may impose net social harms that can be mitigated through appropriately crafted regulation. Assuming, for the sake of argument, that the commission has the legal power to enact regulation to address such harms (again, a point as to which there is great doubt), any specific steps must be grounded in rigorous and balanced cost-benefit analysis.
As a matter of law and sound public policy, it is imperative that the commission meaningfully consider the full range of reliable evidence to identify any potential market failures in the use of consumer data and how to formulate rules to rectify or mitigate such failures at a net social gain. Given the extent to which business models in digital environments rely on the use of consumer data, and the substantial value those business models confer on consumers and businesses, the potential “error costs” of regulatory overreach are high. It is therefore critical to engage in a thorough balancing of costs and gains concerning any such use.
Privacy regulation is a complex and economically consequential policy area that demands careful diagnosis and targeted remedies grounded in analysis and evidence, rather than sweeping interventions accompanied by rhetoric and anecdote.
The wave of populist antitrust that has been embraced by regulators and legislators in the United States, United Kingdom, European Union, and other jurisdictions rests on the assumption that currently dominant platforms occupy entrenched positions that only government intervention can dislodge. Following this view, Facebook will forever dominate social networking, Amazon will forever dominate cloud computing, Uber and Lyft will forever dominate ridesharing, and Amazon and Netflix will forever dominate streaming. This assumption of platform invincibility is so well-established that some policymakers advocate significant interventions without making any meaningful inquiry into whether a seemingly dominant platform actually exercises market power.
Yet this assumption is not supported by historical patterns in platform markets. It is true that network effects drive platform markets toward “winner-take-most” outcomes. But the winner is often toppled quickly and without much warning. There is no shortage of examples.
In 2007, a columnist in The Guardian observed that “it may already be too late for competitors to dislodge MySpace” and quoted an economist as authority for the proposition that “MySpace is well on the way to becoming … a natural monopoly.” About one year later, Facebook had overtaken the MySpace “monopoly” in the social-networking market. Similarly, it was once thought that Blackberry would forever dominate the mobile-communications device market, eBay would always dominate the online e-commerce market, and AOL would always dominate the internet-service-portal market (a market that no longer even exists). The list of digital dinosaurs could go on.
All those tech leaders were challenged by entrants and descended into irrelevance (or reduced relevance, in eBay’s case). This occurred through the force of competition, not government intervention.
Why This Time is Probably Not Different
Given this long line of market precedents, current legislative and regulatory efforts to “restore” competition through extensive intervention in digital-platform markets require that we assume that “this time is different.” Just as that slogan has been repeatedly rebutted in the financial markets, so too is it likely to be rebutted in platform markets.
There is already supporting evidence.
In the cloud market, Amazon’s AWS now faces vigorous competition from Microsoft Azure and Google Cloud. In the streaming market, Amazon and Netflix face stiff competition from Disney+ and Apple TV+, just to name a few well-resourced rivals. In the social-networking market, Facebook now competes head-to-head with TikTok and seems to be losing. The market power once commonly attributed to leading food-delivery platforms such as Grubhub, UberEats, and DoorDash is implausible after persistent losses in most cases, and the continuous entry of new services into a rich variety of local and product-market niches.
Those who have advocated antitrust intervention on a fast-track schedule may remain unconvinced by these inconvenient facts. But the market is not.
Investors have already recognized Netflix’s vulnerability to competition, as reflected by a 35% fall in its stock price on April 20 and a decline of more than 60% over the past 12 months. Meta, Facebook’s parent, also experienced a reappraisal, falling more than 26% on Feb. 3 and more than 35% in the past 12 months. Uber, the pioneer of the ridesharing market, has declined by almost 50% over the past 12 months, while Lyft, its principal rival, has lost more than 60% of its value. These price freefalls suggest that antitrust populists may be pursuing solutions to a problem that market forces are already starting to address.
The Forgotten Curse of the Incumbent
For some commentators, the sharp downturn in the fortunes of the so-called “Big Tech” firms would not have come as a surprise.
It has long been observed by some scholars and courts that a dominant firm “carries the seeds of its own destruction”—a phrase used by then-professor and later-Judge Richard Posner, writing in the University of Chicago Law Review in 1971. The reason: a dominant firm is liable to exhibit high prices, mediocre quality, or lackluster innovation, which then invites entry by more adept challengers. However, this view has been dismissed as outdated in digital-platform markets, where incumbents are purportedly protected by network effects and switching costs that make it difficult for entrants to attract users. Depending on the assumptions an economic modeler selects, either contingency is plausible in theory.
The plunging values of leading platforms supply real-world evidence that favors the self-correction hypothesis. It is often overlooked that network effects can work in both directions, resulting in a precipitous fall from market leader to laggard. Once users start abandoning a dominant platform for a new competitor, network effects operating in reverse can cause a “run for the exits” that leaves the leader with little time to recover. Just ask Nokia, the world’s leading (and seemingly unbeatable) smartphone brand until the Apple iPhone came along.
Market self-correction inherently outperforms regulatory correction: it operates far more rapidly and relies on consumer preferences to reallocate market leadership—a result perfectly consistent with antitrust’s mission to preserve “competition on the merits.” In contrast, policymakers can misdiagnose the competitive effects of business practices; are susceptible to the influence of private interests (especially those that are unable to compete on the merits); and often mispredict the market’s future trajectory. For Exhibit A, see the protracted antitrust litigation by the U.S. Department of Justice against IBM, which was filed in 1969 and ended with the government’s withdrawal of the suit in 1982. Given the launch of the Apple II in 1977, the IBM PC in 1981, and the entry of multiple “PC clones,” the forces of creative destruction swiftly displaced IBM from market leadership in the computing industry.
Regulators and legislators around the world have emphasized the urgency of taking dramatic action to correct claimed market failures in digital environments, casting aside prudential concerns over the consequences if any such failure proves to be illusory or temporary.
But the costs of regulatory failure can be significant and long-lasting. Markets must operate under unnecessary compliance burdens that are difficult to modify. Regulators’ enforcement resources are diverted, and businesses are barred from adopting practices that would benefit consumers. In particular, proposed breakup remedies advocated by some policymakers would undermine the scale economies that have enabled platforms to push down prices, an important consideration in a time of accelerating inflation.
The high concentration levels and certain business practices in digital-platform markets certainly raise important concerns as a matter of antitrust (as well as privacy, intellectual property, and other bodies of) law. These concerns merit scrutiny and may necessitate appropriately targeted interventions. Yet, any policy steps should be anchored in the factually grounded analysis that has characterized decades of regulatory and judicial action to implement the antitrust laws with appropriate care. Abandoning this nuanced framework for a blunt approach based on reflexive assumptions of market power is likely to undermine, rather than promote, the public interest in competitive markets.
[The ideas in this post from Truth on the Market regular Jonathan M. Barnett of USC Gould School of Law—the eighth entry in our FTC UMC Rulemaking symposium—are developed in greater detail in “Regulatory Rents: An Agency-Cost Analysis of the FTC Rulemaking Initiative,” a chapter in the forthcoming book FTC’s Rulemaking Authority, which will be published by Concurrences later this year. This is the first of two posts we are publishing today; see also this related post from Aaron Nielsen of BYU Law. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]
In December 2021, the Federal Trade Commission (FTC) released its statement of regulatory priorities for 2022, which describes its intention to expand the agency’s rulemaking activities to target “unfair methods of competition” (UMC) under Section 5 of the Federal Trade Commission Act (FTC Act), in addition to (and in some cases, presumably in place of) the conventional mechanism of case-by-case adjudication. Agency leadership (meaning, the FTC chair and the majority commissioners) largely characterizes the rulemaking initiative as a logistical improvement to enable the agency to more efficiently execute its statutory commitment to preserve competitive markets. Unburdened by the costs and delays inherent to the adjudicative process (which, in the antitrust context, typically requires evidence of actual or likely competitive harm), the agency will be able to take expedited action against UMCs based on rules preemptively set forth by the agency.
This shift from enforcement by adjudication to enforcement by rulemaking is far from a mechanical adjustment. Rather, it is best understood as part of an initiative to make fundamental changes to the substance and methodology of antitrust enforcement. Substantively, the initiative appears to be part of a broader effort to alter the goals of antitrust enforcement so that it promotes what are deemed to be “equitable” market outcomes, rather than preserving the competitive process through which outcomes are determined by market forces. Methodologically, the initiative appears to be part of a broader effort to displace rule-of-reason treatment with the practical equivalent of per se prohibitions across a wide range of putatively “unfair” practices. Both steps would be inconsistent with the agency’s statutory mission to safeguard the competitive process, and with a meaningful commitment to a market-driven economy and the rule of law.
Abandoning Competitive Markets
Little steps sometimes portend bigger changes.
In July 2021, FTC leadership removed the following words from the mission description of the agency’s Bureau of Competition: “The Bureau’s work aims to preserve the free market system and assure the unfettered operation of the forces of supply and demand.” This omitted statement had tracked what remains the standard characterization by federal courts and agency guidelines of the core objective of the antitrust laws. Following this characterization, the antitrust laws seek to preserve the “rules of the game” for market competition, while remaining indifferent to the outcomes of such competition in any particular market. It is the competitive process, not the fortunes of particular competitors, that matters.
Other statements by FTC leadership suggest that they seek to abandon this outcome-agnostic perspective. A memo from the FTC chair to staff, distributed in September 2021, states that the agency’s actions “shape the distribution of power and opportunity” and encourages staff “to take a holistic approach to identifying harms, recognizing that antitrust and consumer protection violations harm workers and independent businesses as well as consumers.” In a draft strategic plan distributed by FTC leadership in October 2021, the agency described its mission as promoting “fair competition” for the “benefit of the public.” In contrast, the agency’s previously released strategic plan had described the agency’s mission as promoting “competition” for the benefit of consumers, consistent with the case law’s commitment to protecting consumer welfare, dating at least to the Supreme Court’s 1979 decision in Reiter v. Sonotone Corp. et al. The change in language suggests that the agency’s objectives encompass a broad range of stakeholders and policies (including distributive objectives) that extends beyond, and could conflict with, its commitment to preserve the integrity of the competitive process.
These little steps are part of a broader package of “big steps” undertaken during 2021 by FTC leadership.
In July 2021, the agency abandoned decades of federal case law and agency guidelines by rejecting the consumer-welfare standard for purposes of enforcement of Section 5 of the FTC Act against UMCs. Relatedly, FTC leadership asserted in the same statement that Congress had delegated to the agency authority under Section 5 “to determine which practices fell into the category of ‘unfair methods of competition’”. Remarkably, the agency’s claimed ambit of prosecutorial discretion to identify “unfair” practices is apparently only limited by a commitment to exercise such power “responsibly.”
This largely unbounded redefinition of the scope of Section 5 divorces the FTC’s enforcement authority from the concepts and methods as embodied in decades of federal case law and agency guidelines interpreting the Sherman and Clayton Acts. Those concepts and methods are in turn anchored in the consumer-welfare principle, which ensures that regulatory and judicial actions promote the public interest in the competitive process, rather than the private interests of any particular competitor or other policy goals not contemplated by the antitrust laws. Effectively, agency leadership has unilaterally converted Section 5 into an empty vessel into which enforcers may insert a fluid range of business practices that are deemed by fiat to pose a risk to “fair” competition.
Abandoning the Rule of Reason
In the same statement in which FTC leadership rejected the consumer-welfare principle for purposes of Section 5 enforcement, it rejected the relevance of the rule of reason for these same purposes. In that statement, agency leadership castigated the rule of reason as a standard that “leads to soaring enforcement costs” and asserted that it is incompatible with Section 5 of the FTC Act. In March 2021 remarks delivered to the House Judiciary Committee’s Antitrust Subcommittee, Commissioner Rebecca Kelly Slaughter similarly lamented “[t]he effect of cramped case law,” specifically viewing as problematic the fact that “[u]nder current Section 5 jurisprudence, courts have to consider conduct under the ‘rule of reason,’ a fact-intensive investigation into whether the anticompetitive effects of the conduct outweigh the procompetitive justifications.” Hence, it appears that the FTC, in exercising its purported rulemaking powers against UMCs under Section 5, does not intend to undertake the balancing of competitive harms and gains that is the signature element of rule-of-reason analysis. Tellingly, the agency’s draft strategic plan, released in October 2021, omits language that it would execute its enforcement mission “without unduly burdening legitimate business activity” (language that had appeared in the previously released strategic plan)—again, suggesting that it plans to take little account of the offsetting competitive gains attributable to a particular business practice.
This change in methodology has two profound and concerning implications.
First, it means that any “unfair” practice targeted by the agency under Section 5 is effectively subject to a per se prohibition—that is, the agency can prevail merely by identifying that the defendant engaged in a particular practice, rather than having to show competitive harm. Note that this would represent a significant step beyond the per se rule that Sherman Act case law applies to certain cases of horizontal collusion. In those cases, a per se rule has been adopted because economic analysis indicates that these types of practices in general pose such a high risk of net anticompetitive harm that a rule-of-reason inquiry is likely to fail a cost-benefit test almost all of the time. By contrast, there is no indication that FTC leadership plans to confine its rulemaking activities to practices that systematically pose an especially high risk of anticompetitive harm, in part because it is not clear that agency leadership still views harm to the competitive process as being the determinative criterion in antitrust analysis.
Second, without further clarification from agency leadership, this means that the agency appears to place substantially reduced weight on the possibility of “false positive” error costs. This would be a dramatic departure from the conventional approach to error costs as reflected in federal antitrust case law. Antitrust scholars have long argued, and many courts have adopted the view, that “false positive” costs should be weighted more heavily relative to “false negative” error costs, principally on the ground that, as Judge Richard Posner once put it, “a cartel . . . carries within it the seeds of its own destruction.” To be clear, this weighted approach should still meaningfully assess the false-negative error costs that arise from mistaken failures to intervene. By contrast, the agency’s blanket rejection of the rule of reason in all circumstances for Section 5 purposes raises doubt as to whether it would assign any material weight to false-positive error costs in exercising its purported rulemaking power under Section 5 against UMCs. Consistent with this possibility, the agency’s July 2021 statement—which rejected the rule of reason specifically—adopted the view that Section 5 enforcement should target business practices in their “incipiency,” even absent evidence of a “likely” anticompetitive effect.
While there may be reasonable arguments in favor of an equal weighting of false-positive and false-negative error costs (on the grounds that markets are sometimes slow to correct anticompetitive conduct, as compared to the speed with which courts correct false-positive interventions), it is hard to fathom a reasonable policy argument in favor of placing no material weight on the former cost category. Under conditions of uncertainty, the net economic effect of any particular enforcement action, or failure to take such action, gives rise to a mix of probability-adjusted false-positive and false-negative error costs. Hence, any sound policy framework seeks to minimize the sum of those costs. Moreover, the wholesale rejection of a balancing analysis overlooks extensive scholarship identifying cases in which federal courts, especially during the period prior to the Supreme Court’s landmark 1977 decision in Continental TV Inc. v. GTE Sylvania Inc., applied per se rules that erroneously targeted business practices that were almost certainly generating net-positive competitive gains. Any such mistaken intervention counterproductively penalizes the efforts and ingenuity of the most efficient firms, which then harms consumers, who are compelled to suffer higher prices, lower quality, or fewer innovations than would otherwise have been the case.
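The error-cost calculus described above can be made concrete with a stylized calculation. The sketch below is purely illustrative—the probabilities and cost figures are hypothetical numbers chosen to show the framework, not empirical estimates—but it captures why a policy that places no weight on false positives treats intervention as costless by construction:

```python
# Stylized error-cost calculus for a single enforcement decision.
# All numbers are hypothetical, chosen only to illustrate the framework.

p_anticompetitive = 0.3       # assumed probability the practice is actually harmful
cost_false_negative = 100.0   # assumed harm if a harmful practice goes unchallenged
cost_false_positive = 100.0   # assumed harm if a benign practice is blocked

def expected_error_cost(intervene: bool) -> float:
    """Probability-adjusted expected error cost of the chosen action."""
    if intervene:
        # An error occurs only if the practice was actually benign (false positive).
        return (1 - p_anticompetitive) * cost_false_positive
    # An error occurs only if the practice was actually harmful (false negative).
    return p_anticompetitive * cost_false_negative

# A sound framework compares the two expected costs and picks the smaller;
# a framework that zeroes out cost_false_positive would always favor intervention.
print(expected_error_cost(True), expected_error_cost(False))
```

With these illustrative inputs, intervening carries an expected error cost of 70 while declining to intervene carries 30, so the cost-minimizing choice is to stay the agency's hand; setting the false-positive cost to zero flips that result regardless of the underlying probabilities, which is the analytical consequence of the agency's apparent position.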
The dismissal of efficiency considerations and false-positive error costs is difficult to reconcile with an economically informed approach that seeks to take enforcement actions only where there is a high likelihood of improving economic welfare based on available evidence. On this point, it is worth quoting Oliver Williamson’s well-known critique of 1960s-era antitrust: “[I]f neither the courts nor the enforcement agencies are sensitive to these [efficiency] considerations, the system fails to meet a basic test of economic rationality. And without this the whole enforcement system lacks defensible standards and becomes suspect.”
Abandoning the Rule of Law
In a liberal democratic system of government, the market relies on the state’s commitment to set forth governing laws with adequate notice and specificity, and then to enforce those laws in a manner that is reasonably amenable to judicial challenge in case of prosecutorial error or malfeasance. Without that commitment, investors are exposed to arbitrary enforcement and would be reluctant to place capital at stake. In light of the agency’s concurrent rejection of the consumer-welfare and rule-of-reason principles, any future attempt by the FTC to exercise its purported Section 5 rulemaking powers against UMCs under what currently appears to be a regime of largely unbounded regulatory discretion is likely to violate these elementary conditions for a rule-of-law jurisdiction.
Having dismissed decades of learning and precedent embodied in federal case law and agency guidelines, FTC leadership has declined to adopt any substitute guidelines to govern its actions under Section 5 and, instead, has stated (in its July 2021 statement rejecting the consumer-welfare principle) that there are few bounds on its authority to specify and target practices that it deems to be “unfair.” This blunt approach contrasts sharply with the measured approach reflected in existing agency guidelines and federal case law, which seek to delineate reasonably objective standards to govern enforcers’ and courts’ decision making when evaluating the competitive merits of a particular business practice.
This approach can be observed, even if imperfectly, in the application of the Herfindahl-Hirschman Index (HHI) metric in the merger-review process and the use of “safety zones” (defined principally by reference to market-share thresholds) in the agencies’ Antitrust Guidelines for the Licensing of Intellectual Property, Horizontal Merger Guidelines, and Antitrust Guidelines for Collaborations Among Competitors. This nuanced and evidence-based approach can also be observed in a decision such as California Dental Association v. FTC (1999), which provides a framework for calibrating the intensity of a rule-of-reason inquiry based on a preliminary assessment of the likely net competitive effect of a particular practice. In making these efforts to develop reasonably objective thresholds for triggering closer scrutiny, regulators and courts have sought to reconcile the open-ended language of the offenses described in the antitrust statutes—“restraint of trade” (Sherman Act Section 1) or “monopolization” (Sherman Act Section 2)—with a meaningful commitment to providing the market with adequate notice of the inherently fuzzy boundary between competitive and anti-competitive practices in most cases (and especially, in cases involving single-firm conduct that is most likely to be targeted by the agency under its Section 5 authority).
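For readers unfamiliar with the HHI, it is simply the sum of the squared market shares of all firms in a market, and the concentration bands below are those stated in the 2010 DOJ/FTC Horizontal Merger Guidelines; the four-firm market in the example is hypothetical:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares_pct)

def concentration_band(index):
    """Concentration bands from the 2010 Horizontal Merger Guidelines."""
    if index < 1500:
        return "unconcentrated"
    if index <= 2500:
        return "moderately concentrated"
    return "highly concentrated"

# Hypothetical market with four firms holding 30%, 30%, 20%, and 20% shares.
index = hhi([30, 30, 20, 20])
print(index, concentration_band(index))  # 2600 highly concentrated
```

The point of such thresholds is not that they are precise, but that they give the market an objective, publicly known reference point for when closer scrutiny is likely—exactly the kind of notice that an unbounded “unfairness” standard does not provide.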
It does not appear that agency leadership intends to adopt this calibrated approach in implementing its rulemaking initiative, in light of its largely unbounded understanding of its Section 5 enforcement authority and wholesale rejection of the rule-of-reason methodology. If Section 5 is understood to encompass a broad and fluid set of social goals, including distributive objectives that can conflict with a commitment to the competitive process, then there is no analytical reference point by which markets can reliably assess the likelihood of antitrust liability and plan transactions accordingly. If enforcement under Section 5, including exercise of any purported rulemaking powers, does not require the agency to consider offsetting efficiencies attributable to any particular practice, then a chilling effect on everyday business activity and, more broadly, economic growth can easily ensue. In particular, firms may abstain from practices that may have mostly or even entirely procompetitive effects simply because there is some material likelihood that any such practice will be subject to investigation and enforcement under the agency’s understanding of its Section 5 authority and its adoption of a per se approach for which even strong evidence of predominantly procompetitive effects would be moot.
From Free Markets to Administered Markets
The FTC’s proposed rulemaking initiative, when placed within the context of other fundamental changes in substance and methodology adopted by agency leadership, is not easily reconciled with a market-driven economy in which resources are principally directed by the competitive forces of supply and demand. FTC leadership has reserved for the agency discretion to deem a business practice as “unfair,” while defining fairness by reference to an agglomeration of loosely described policy goals that include—but go beyond, and in some cases may conflict with—the agency’s commitment to preserve market competition. Concurrently, FTC leadership has rejected the rule-of-reason balancing approach and, by implication, may place no material weight on (or even fail to consider entirely) the efficiencies attributable to a particular business practice.
In the aggregate, any rulemaking activity undertaken within this unstructured framework would make it challenging for firms and investors to assess whether any particular action is likely to trigger agency scrutiny. Faced with this predicament, firms could only substantially reduce exposure to antitrust liability by seeking various forms of preclearance with FTC staff, who would in turn be led to issue supplemental guidance, rules, and regulations to handle the high volume of firm inquiries. Contrary to the advertised advantages of enforcement by rulemaking, this unavoidable cycle of rule interpretation and adjustment would likely increase aggregate transaction and compliance costs substantially as compared to enforcement by adjudication. While enforcement by adjudication occurs only periodically and impacts a limited number of firms, enforcement by rulemaking is a continuous activity that impacts all firms. The ultimate result: the free play of the forces of supply and demand would be replaced by a continuously regulated environment where market outcomes are constantly being reviewed through the administrative process, rather than being worked out through the competitive process.
This is a state of affairs substantially removed from the “free market system” to which the FTC’s Bureau of Competition had once been committed. Of course, that may be exactly what current agency leadership has in mind.
President Joe Biden’s July 2021 executive order set forth a commitment to reinvigorate U.S. innovation and competitiveness. The administration’s efforts to pass the America COMPETES Act would appear to further demonstrate a serious intent to pursue these objectives.
Yet several actions taken by federal agencies threaten to undermine the intellectual-property rights and transactional structures that have driven the exceptional performance of U.S. firms in key areas of the global innovation economy. These regulatory missteps together represent a policy “lose-lose” that lacks any sound basis in innovation economics and threatens U.S. leadership in mission-critical technology sectors.
Life Sciences: USTR Campaigns Against Intellectual-Property Rights
In the pharmaceutical sector, the administration’s signature action has been an unprecedented campaign by the Office of the U.S. Trade Representative (USTR) to block enforcement of patents and other intellectual-property rights held by companies that have broken records in the speed with which they developed and manufactured COVID-19 vaccines on a mass scale.
Patents were not an impediment in this process. To the contrary: they were necessary predicates to induce venture-capital investment in a small firm like BioNTech, which undertook drug development and then partnered with the much larger Pfizer to execute testing, production, and distribution. If success in vaccine development is rewarded with expropriation, this vital public-health sector is unlikely to attract investors in the future.
Contrary to increasingly common assertions that the Bayh-Dole Act (which enables universities to seek patents arising from research funded by the federal government) “robs” taxpayers of intellectual property they funded, the development of COVID-19 vaccines by scientist-founded firms illustrates how the combination of patents and private capital is essential to convert academic research into life-saving medical solutions. The biotech ecosystem has long relied on patents to structure partnerships among universities, startups, and large firms. The costly path from lab to market relies on a secure property-rights infrastructure to ensure exclusivity, without which no investor would put capital at stake in what is already a high-risk, high-cost enterprise.
This is not mere speculation. During the decades prior to the Bayh-Dole Act, the federal government placed strict limitations on the ability to patent or exclusively license innovations arising from federally funded research projects. The result: the market showed little interest in making the investment needed to convert those innovations into commercially viable products that might benefit consumers. This history casts great doubt on the wisdom of the USTR’s campaign to limit the ability of biopharmaceutical firms to maintain legal exclusivity over certain life sciences innovations.
Genomics: FTC Attempts to Block the Illumina/GRAIL Acquisition
In the genomics industry, the Federal Trade Commission (FTC) has devoted extensive resources to oppose the acquisition by Illumina—the market leader in next-generation DNA-sequencing equipment—of a medical-diagnostics startup, GRAIL (an Illumina spinoff), that has developed an early-stage cancer screening test.
It is hard to see the competitive threat. GRAIL is a pre-revenue company that operates in a novel market segment and its diagnostic test has not yet received approval from the Food and Drug Administration (FDA). To address concerns over barriers to potential competitors in this nascent market, Illumina has committed to 12-year supply contracts that would bar price increases or differential treatment for firms that develop oncology-detection tests requiring use of the Illumina platform.
The FTC’s case against Illumina’s re-acquisition of GRAIL relies on theoretical predictions of consumer harm in a market that is not yet operational. Hypothetical market failure scenarios may suit an academic seminar but fall well below the probative threshold for antitrust intervention.
Most critically, the Illumina enforcement action places at risk a key element of well-functioning innovation ecosystems. Economies of scale and network effects lead technology markets to converge on a handful of leading platforms, which then often outsource research and development by funding and sometimes acquiring smaller firms that develop complementary technologies. This symbiotic relationship encourages entry and benefits consumers by bringing new products to market as efficiently as possible.
If antitrust interventions based on regulatory fiat, rather than empirical analysis, disrupt settled expectations in the M&A market that innovations can be monetized through acquisition transactions by larger firms, venture capital may be unwilling to fund such startups in the first place. Independent development or an initial public offering are often not feasible exit options. It is likely that innovation will then retreat to the confines of large incumbents that can fund research internally but often execute it less effectively.
Wireless Communications: DOJ Takes Aim at Standard-Essential Patents
Wireless communications stand at the heart of the global transition to a 5G-enabled “Internet of Things” that will transform business models and unlock efficiencies in myriad industries. It is therefore of paramount importance that policy actions in this sector rest on a rigorous economic basis. Unfortunately, a recent policy shift proposed by the U.S. Department of Justice’s (DOJ) Antitrust Division does not meet this standard.
In December 2021, the Antitrust Division released a draft policy statement that would largely bar owners of standard-essential patents from seeking injunctions against infringers, which are usually large device manufacturers. These patents cover wireless functionalities that enable transformative solutions in myriad industries, ranging from communications to transportation to health care. A handful of U.S. and European firms lead in wireless chip design and rely on patent licensing to disseminate technology to device manufacturers and to fund billions of dollars in research and development. The result is a technology ecosystem that has enjoyed continuous innovation, widespread user adoption, and declining quality-adjusted prices.
Rather than promoting competition or innovation, the proposed policy would simply transfer wealth from firms that develop new technologies at great cost and risk to firms that prefer to use those technologies at no cost at all. This does not benefit anyone other than device manufacturers that already capture the largest portion of economic value in the smartphone supply chain.
From international trade to antitrust to patent policy, the administration’s actions imply little appreciation for the property rights and contractual infrastructure that support real-world innovation markets. In particular, the administration’s policies endanger the intellectual-property rights and monetization pathways that support market incentives to invest in the development and commercialization of transformative technologies.
This creates an inviting vacuum for strategic rivals that are vigorously pursuing leadership positions in global technology markets. In industries that stand at the heart of the knowledge economy—life sciences, genomics, and wireless communications—the administration is on a counterproductive trajectory that overlooks the business realities of technology markets and threatens to push capital away from the entrepreneurs that drive a robust innovation ecosystem. It is time to reverse course.
During the exceptional rise in stock-market valuations from March 2020 to January 2022, equity investors and antitrust regulators alike implicitly agreed that so-called “Big Tech” firms enjoyed unbeatable competitive advantages as gatekeepers with largely unmitigated power over the digital ecosystem.
Investors bid up the value of tech stocks to exceptional levels, anticipating no competitive threat to incumbent platforms. Antitrust enforcers and some legislators have exhibited belief in the same underlying assumption. In their case, it has spurred advocacy of dramatic remedies—including breaking up the Big Tech platforms—as necessary interventions to restore competition.
Other voices in the antitrust community have been more circumspect. A key reason is the theory of contestable markets, developed in the 1980s by the late William Baumol and other economists, which holds that even extremely large market shares are at best a potential indicator of market power. To illustrate, consider the extreme case of a market occupied by a single firm. Intuitively, the firm would appear to have unqualified pricing power. Not so fast, say contestable market theorists. Suppose entry costs into the market are low and consumers can easily move to other providers. This means that the apparent monopolist will act as if the market is populated by other competitors. The takeaway: market share alone cannot demonstrate market power without evidence of sufficiently strong barriers to market entry.
While regulators and some legislators have overlooked this inconvenient principle, it appears the market has not. To illustrate, look no further than the Feb. 3 $230 billion crash in the market value of Meta Platforms—parent company of Facebook, Instagram, and WhatsApp, among other services.
In its antitrust suit against Meta, the Federal Trade Commission (FTC) has argued that Meta’s Facebook service enjoys a social-networking monopoly, a contention that the judge in the case initially rejected in June 2021 as so lacking in factual support that the suit was provisionally dismissed. The judge’s ruling (which he withdrew last month, allowing the suit to go forward after the FTC submitted a revised complaint) has been portrayed as evidence for the view that existing antitrust law sets overly demanding evidentiary standards that unfairly shelter corporate defendants.
Yet, the record-setting single-day loss in Meta’s value suggests the evidentiary standard is set just about right and the judge’s skepticism was fully warranted. Consider one of the principal reasons behind Meta’s plunge in value: its service had suffered substantial losses of users to TikTok, a formidable rival in a social-networking market in which the FTC claims that Facebook faces no serious competition. The market begs to differ. In light of the obvious competitive threat posed by TikTok and other services, investors reassessed Facebook’s staying power, which was then reflected in its owner Meta’s downgraded stock price.
Just as the investment bubble that had supported the stock market’s case for Meta has popped, so too must the regulatory bubble that had supported the FTC’s antitrust case against it. Investors’ reevaluation rebuts the FTC’s strained market definition that had implausibly excluded TikTok as a competitor.
Even more fundamentally, the market’s assessment shows that Facebook’s users face nominal switching costs—in which case, its leadership position is contestable and the Facebook “monopoly” is not much of a monopoly. While this conclusion might seem surprising, Facebook’s vulnerability is hardly exceptional: Nokia, BlackBerry, AOL, Yahoo, Netscape, and PalmPilot illustrate how often seemingly unbeatable tech leaders have been toppled with remarkable speed.
The unraveling of the FTC’s case against what would appear to be an obviously dominant platform should be a wake-up call for those policymakers who have embraced populist antitrust’s view that existing evidentiary requirements, which minimize the risk of “false positive” findings of anticompetitive conduct, should be set aside as an inconvenient obstacle to regulatory and judicial intervention.
None of this should be interpreted to deny that concentration levels in certain digital markets raise significant antitrust concerns that merit close scrutiny. In particular, regulators have overlooked how some leading platforms have devalued intellectual-property rights in a manner that distorts technology and content markets by advantaging firms that operate integrated product and service ecosystems while disadvantaging firms that specialize in supplying the technological and creative inputs on which those ecosystems rely.
The fundamental point is that potential risks to competition posed by any leading platform’s business practices can be assessed through rigorous fact-based application of the existing toolkit of antitrust analysis. This is critical to evaluate whether a given firm likely occupies a transitory, rather than durable, leadership position. The plunge in Meta’s stock in response to a revealed competitive threat illustrates the perils of discarding that surgical toolkit in favor of a blunt “big is bad” principle.
Contrary to what has become an increasingly common narrative in policy discussions and political commentary, the existing framework of antitrust analysis was not designed by scholars strategically acting to protect “big business.” Rather, this framework was designed and refined by scholars dedicated to rationalizing, through the rigorous application of economic principles, an incoherent body of case law that had often harmed consumers by shielding incumbents against threats posed by more efficient rivals. The legal shortcuts being pursued by antitrust populists to detour around appropriately demanding evidentiary requirements are writing a “back to the future” script that threatens to return antitrust law to that unfortunate predicament.
Over the past decade and a half, virtually every branch of the federal government has taken steps to weaken the patent system. As reflected in President Joe Biden’s July 2021 executive order, these restraints on patent enforcement are now being coupled with antitrust policies that, in large part, adopt a “big is bad” approach in place of decades of economically grounded case law and agency guidelines.
This policy bundle is nothing new. It largely replicates the innovation policies pursued during the late New Deal and the postwar decades. That historical experience suggests that a “weak-patent/strong-antitrust” approach is likely to encourage neither innovation nor competition.
The Overlooked Shortfalls of New Deal Innovation Policy
Starting in the early 1930s, the U.S. Supreme Court issued a sequence of decisions that raised obstacles to patent enforcement. The Franklin Roosevelt administration sought to take this policy a step further, advocating compulsory licensing for all patents. While Congress did not adopt this proposal, it was partially implemented as a de facto matter through antitrust enforcement. Starting in the early 1940s and continuing throughout the postwar decades, the antitrust agencies secured judicial precedents that treated a broad range of licensing practices as per se illegal. Perhaps most dramatically, the U.S. Justice Department (DOJ) secured more than 100 compulsory licensing orders against some of the nation’s largest companies.
The rationale behind these policies was straightforward. By compelling access to incumbents’ patented technologies, courts and regulators would lower barriers to entry and competition would intensify. The postwar economy declined to comply with policymakers’ expectations. Implementation of a weak-IP/strong-antitrust innovation policy over the course of four decades yielded the opposite of its intended outcome.
Market concentration did not diminish, turnover in market leadership was slow, and private research and development (R&D) was confined mostly to the research labs of the largest corporations (which often relied on generous infusions of federal defense funding). These tendencies are illustrated by the dramatically unequal allocation of innovation capital in the postwar economy. As of the late 1950s, small firms represented approximately 7% of all private U.S. R&D expenditures. Two decades later, that figure had fallen even further. By the late 1970s, patenting rates had plunged, and entrepreneurship and innovation were in a state of widely lamented decline.
Why Weak IP Raises Entry Costs and Promotes Concentration
The decline in entrepreneurial innovation under a weak-IP regime was not accidental. Rather, this outcome can be derived logically from the economics of information markets.
Without secure IP rights to establish exclusivity, engage securely with business partners, and deter imitators, potential innovator-entrepreneurs had little hope to obtain funding from investors. In contrast, incumbents could fund R&D internally (or with federal funds that flowed mostly to the largest computing, communications, and aerospace firms) and, even under a weak-IP regime, were protected by difficult-to-match production and distribution efficiencies. As a result, R&D mostly took place inside the closed ecosystems maintained by incumbents such as AT&T, IBM, and GE.
Paradoxically, the antitrust campaign against patent “monopolies” most likely raised entry barriers and promoted industry concentration by removing a critical tool that smaller firms might have used to challenge incumbents that could outperform on every competitive parameter except innovation. While the large corporate labs of the postwar era are rightly credited with technological breakthroughs, incumbents such as AT&T were often slow in transforming breakthroughs in basic research into commercially viable products and services for consumers. Without an immediate competitive threat, there was no rush to do so.
Back to the Future: Innovation Policy in the New New Deal
Policymakers are now at work reassembling almost the exact same policy bundle that ended in the innovation malaise of the 1970s, accompanied by a similar reliance on public R&D funding disbursed through administrative processes. However well-intentioned, these processes are inherently exposed to political distortions that are absent in an innovation environment that relies mostly on private R&D funding governed by price signals.
This policy bundle has emerged incrementally since approximately the mid-2000s, through a sequence of complementary actions by every branch of the federal government.
In 2011, Congress enacted the America Invents Act, which enables any party to challenge the validity of an issued patent through the U.S. Patent and Trademark Office’s (USPTO) Patent Trial and Appeal Board (PTAB). Since PTAB’s establishment, large information-technology companies that advocated for the act have been among the leading challengers.
In May 2021, the Office of the U.S. Trade Representative (USTR) declared its support for a worldwide suspension of IP protections over Covid-19-related innovations (rather than adopting the more nuanced approach of preserving patent protections and expanding funding to accelerate vaccine distribution).
President Biden’s July 2021 executive order states that “the Attorney General and the Secretary of Commerce are encouraged to consider whether to revise their position on the intersection of the intellectual property and antitrust laws, including by considering whether to revise the Policy Statement on Remedies for Standard-Essential Patents Subject to Voluntary F/RAND Commitments.” This suggests that the administration has already determined to retract or significantly modify the 2019 joint policy statement in which the DOJ, USPTO, and the National Institute of Standards and Technology (NIST) had rejected the view that standard-essential patent owners posed a high risk of patent holdup, which would therefore justify special limitations on enforcement and licensing activities.
The history of U.S. technology markets and policies casts great doubt on the wisdom of this weak-IP policy trajectory. The repeated devaluation of IP rights is likely to be a “lose-lose” approach that does little to promote competition, while endangering the incentive and transactional structures that sustain robust innovation ecosystems. A weak-IP regime is particularly likely to disadvantage smaller firms in biotech, medical devices, and certain information-technology segments that rely on patents to secure funding from venture capital and to partner with larger firms that can accelerate progress toward market release. The BioNTech/Pfizer alliance in the production and distribution of a Covid-19 vaccine illustrates how patents can enable such partnerships to accelerate market release.
The innovative contribution of BioNTech is hardly a one-off occurrence. The restoration of robust patent protection in the early 1980s was followed by a sharp increase in the percentage of private R&D expenditures attributable to small firms, which jumped from about 5% as of 1980 to 21% by 1992. This contrasts sharply with the unequal allocation of R&D activities during the postwar period.
Remarkably, the resurgence of small-firm innovation following the strong-IP policy shift, starting in the late 20th century, mimics tendencies observed during the late-19th and early-20th centuries, when U.S. courts provided a hospitable venue for patent enforcement; there were few antitrust constraints on licensing activities; and innovation was often led by small firms in partnership with outside investors. This historical pattern, encompassing more than a century of U.S. technology markets, strongly suggests that strengthening IP rights tends to yield a policy “win-win” that bolsters both innovative and competitive intensity.
An Alternate Path: ‘Bottom-Up’ Innovation Policy
To be clear, the alternative to the policy bundle of weak-IP/strong antitrust does not consist of a simple reversion to blind enforcement of patents and lax administration of the antitrust laws. A nuanced innovation policy would couple modern antitrust’s commitment to evidence-based enforcement—which, in particular cases, supports vigorous intervention—with a renewed commitment to protecting IP rights for innovator-entrepreneurs. That would promote competition from the “bottom up” by bolstering maverick innovators who are well-positioned to challenge (or sometimes partner with) incumbents and maintaining the self-starting engine of creative disruption that has repeatedly driven entrepreneurial innovation environments. Tellingly, technology incumbents have often been among the leading advocates for limiting patent and copyright protections.
Advocates of a weak-patent/strong-antitrust policy believe it will enhance competitive and innovative intensity in technology markets. History suggests that this combination is likely to produce the opposite outcome.
Advocates of legislative action to “reform” antitrust law have already pointed to the U.S. District Court for the District of Columbia’s dismissal of the state attorneys general’s case and the “conditional” dismissal of the Federal Trade Commission’s case against Facebook as evidence that federal antitrust case law is lax and demands correction. In fact, the court’s decisions support the opposite implication.
The Risks of Antitrust by Anecdote
The failure of a well-resourced federal regulator, and more than 45 state attorney-general offices, to avoid dismissal at an early stage of the litigation testifies to the dangers posed by a conclusory approach toward antitrust enforcement that seeks to unravel acquisitions consummated almost a decade ago without even demonstrating the factual predicates to support consideration of such far-reaching interventions. The dangers to the rule of law are self-evident. Irrespective of one’s views on the appropriate direction of antitrust law, this shortcut approach would substitute prosecutorial fiat, ideological predilection, and popular sentiment for decades of case law and agency guidelines grounded in the rigorous consideration of potential evidence of competitive harm.
The paucity of empirical support for the exceptional remedial action sought by the FTC is notable. As the district court observed, there was little systematic effort made to define the economically relevant market or provide objective evidence of market power, beyond the assertion that Facebook has a market share of “in excess of 60%.” Remarkably, the denominator behind that 60%-plus assertion is not precisely defined, since the FTC’s brief does not supply any clear metric by which to measure market share. As the court pointed out, this is a nontrivial task in multi-sided environments in which one side of the potentially relevant market delivers services to users at no charge.
While the point may seem uncontroversial, it is important to re-appreciate why insisting on a rigorous demonstration of market power is critical to preserving a coherent body of law that provides the market with a basis for reasonably anticipating the likelihood of antitrust intervention. At least since the late 1970s, courts have recognized that “big is not always bad” and can often yield cost savings that ultimately redound to consumers’ benefit. That is: firm size and consumer welfare do not stand in inherent opposition. If courts were to abandon safeguards against suits that cannot sufficiently define the relevant market and plausibly show market power, antitrust litigation could easily be used as a tool to punish successful firms that prevail over competitors simply by being more efficient. In other words: antitrust law could become a tool to preserve competitor welfare at the expense of consumer welfare.
The Specter of No-Fault Antitrust Liability
The absence of any specific demonstration of market power suggests either deficient lawyering or an inability to gather supporting evidence. Giving the FTC litigation team the benefit of the doubt, the latter seems the stronger possibility. If that is the case, this implies an effort to persuade courts to adopt a de facto rule of per se illegality for any firm that achieves a certain market share. (The same concept lies behind legislative proposals to bar acquisitions for firms that cross a certain revenue or market capitalization threshold.) Effectively, any firm that reached a certain size would operate under the presumption that it has market power and has secured or maintained such power due to anticompetitive practices, rather than business prowess. This would effectively convert leading digital platforms into quasi-public utilities subject to continuous regulatory intervention. Such an approach runs counter to antitrust law’s mission to preserve, rather than displace, private ordering by market forces.
Even at the high-water point of post-World War II antitrust zealotry (a period that ultimately ended in economic malaise), proposals to adopt a rule of no-fault liability for alleged monopolization were rejected. This was for good reason. Any such rule would likely injure consumers by precluding them from enjoying the cost savings that result from the “sweet spot” scenario in which the scale and scope economies of large firms are combined with sufficiently competitive conditions to yield reduced prices and increased convenience for consumers. Additionally, any such rule would eliminate incumbents’ incentives to work harder to offer consumers reduced prices and increased convenience, since any market share preserved or acquired as a result would simply invite antitrust scrutiny as a reward.
Remembering Why Market Power Matters
To be clear, this is not to say that “Big Tech” does not deserve close antitrust scrutiny, does not wield market power in certain segments, or has not potentially engaged in anticompetitive practices. The fundamental point is that assertions of market power and anticompetitive conduct must be demonstrated, rather than being assumed or “proved” based largely on suggestive anecdotes.
Perhaps market power will be shown sufficiently in Facebook’s case if the FTC elects to respond to the court’s invitation to resubmit its brief with a plausible definition of the relevant market and indication of market power at this stage of the litigation. If that threshold is satisfied, then thorough consideration of the allegedly anticompetitive effect of Facebook’s WhatsApp and Instagram acquisitions may be merited. However, given the policy interest in preserving the market’s confidence in relying on the merger-review process under the Hart-Scott-Rodino Act, the burden of proof on the government should be appropriately enhanced to reflect the significant time that has elapsed since regulatory decisions not to intervene in those transactions.
It would once have seemed mundane to reiterate that market power must be reasonably demonstrated to support a monopolization claim that could lead to a major divestiture remedy. Given the populist thinking that now leads much of the legislative and regulatory discussion on antitrust policy, it is imperative to reiterate the rationale behind this elementary principle.
This principle reflects the fact that, outside collusion scenarios, antitrust law is typically engaged in a complex exercise to balance the advantages of scale against the risks of anticompetitive conduct. At its best, antitrust law weighs competing facts in a good faith effort to assess the net competitive harm posed by a particular practice. While this exercise can be challenging in digital markets that naturally converge upon a handful of leading platforms or multi-dimensional markets that can have offsetting pro- and anti-competitive effects, these are not reasons to treat such an exercise as an anachronistic nuisance. Antitrust cases are inherently challenging and proposed reforms to make them easier to win are likely to endanger, rather than preserve, competitive markets.
AT&T’s $102 billion acquisition of Time Warner in 2019 will go down in M&A history as an exceptionally ill-advised transaction, resulting in the loss of tens of billions of dollars of shareholder value. It should also go down in history as an exceptionally ill-chosen target of antitrust intervention. The U.S. Department of Justice, with support from many academic and policy commentators, asserted with confidence that the vertical combination of these content and distribution powerhouses would result in an entity that could exercise market power to the detriment of competitors and consumers.
The chorus of condemnation continued with vigor even after the DOJ’s loss in court and AT&T’s consummation of the transaction. With AT&T’s May 17 announcement that it will unwind the two-year-old acquisition and therefore abandon its strategy to integrate content and distribution, it is clear these predictions of impending market dominance were unfounded.
This widely shared overstatement of antitrust risk derives from a simple but fundamental error: regulators and commentators were looking at the wrong market.
The DOJ’s Antitrust Case against the Transaction
The business case for the AT&T/Time Warner transaction was straightforward: it promised to generate synergies by combining a leading provider of wireless, broadband, and satellite television services with a leading supplier of video content. The DOJ’s antitrust case against the transaction was similarly straightforward: the combined entity would have the ability to foreclose “must have” content from other “pay TV” (cable and satellite television) distributors, resulting in adverse competitive effects.
This foreclosure strategy was expected to take two principal forms. First, AT&T could temporarily withhold (or threaten to withhold) content from rival distributors absent payment of a higher carriage fee, which would then translate into higher fees for subscribers. Second, AT&T could permanently withhold content from rival distributors, who would then lose subscribers to AT&T’s DirecTV satellite television service, further enhancing AT&T’s market power.
Many commentators, in both the trade press and significant portions of the scholarly community, characterized the transaction as posing a high-risk threat to competitive conditions in the pay TV market. These assertions reflected the view that the new entity would exercise a bottleneck position over video-content distribution in the pay TV market and would exercise that power to impose one-sided terms to the detriment of content distributors and consumers.
Notwithstanding this bevy of endorsements, the DOJ’s case was rejected by the district court and the decision was upheld by the D.C. appellate court. The district judge concluded that the DOJ had failed to show that the combined entity would pose any credible threat to withhold “must have” content from distributors. A key reason: the lost carriage fees AT&T would incur if it did withhold content were so high, and the migration of subscribers from rival pay TV services so speculative, that it would represent an obviously irrational business strategy. In short: no sophisticated business party would ever take AT&T’s foreclosure threat seriously, in which case the DOJ’s predictions of market power were insufficiently compelling to justify the use of government power to block the transaction.
The Fundamental Flaws in the DOJ’s Antitrust Case
The logical and factual infirmities of the DOJ’s foreclosure hypothesis have been extensively and ably covered elsewhere and I will not repeat that analysis. Following up on my previous TOTM commentary on the transaction, I would like to emphasize the point that the DOJ’s case against the transaction was flawed from the outset for two more fundamental reasons.
False Assumption #1
The assumption that the combined entity could withhold so-called “must have” content to cause significant and lasting competitive injury to rival distributors flies in the face of market realities. Content is an abundant, renewable, and mobile resource. There are few entry barriers to the content industry: a commercially promising idea will likely attract capital, which will in turn secure the necessary equipment and personnel for production purposes. Any rival distributor can access a rich menu of valuable content from a plethora of sources, both domestically and worldwide, each of which can provide new content, as required. Even if the combined entity held a license to distribute purportedly “must have” content, that content would be up for sale (more precisely, re-licensing) to the highest bidder as soon as the applicable contract term expired. This is not mere theorizing: it is a widely recognized feature of the entertainment industry.
False Assumption #2
Even assuming the combined entity could wield a portfolio of “must have” content to secure a dominant position in the pay TV market and raise content acquisition costs for rival pay TV services, it still would lack any meaningful pricing power in the relevant consumer market. The reason: significant portions of the viewing population do not want any pay TV or only want dramatically “slimmed-down” packages. Instead, viewers increasingly consume content primarily through video-streaming services—a market in which platforms such as Amazon and Netflix already enjoyed leading positions at the time of the transaction. Hence, even accepting the DOJ’s theory that the combined entity could somehow monopolize the pay TV market consisting of cable and satellite television services, the theory still fails to show any reasonable expectation of anticompetitive effects in the broader and economically relevant market comprising pay TV and streaming services. Any attempt to exercise pricing power in the pay TV market would be economically self-defeating, since it would likely prompt a significant portion of consumers to switch to (or rely exclusively on) streaming services.
The Antitrust Case for the Transaction
When properly situated within the market that was actually being targeted in the AT&T/Time Warner acquisition, the combined entity posed little credible threat of exercising pricing power. To the contrary, the combined entity was best understood as an entrant that sought to challenge the two pioneer entities—Amazon and Netflix—in the “over the top” content market.
Each of these incumbent platforms individually had (and still has) a multi-billion-dollar content-production budget that rivals or exceeds the budgets of major Hollywood studios, along with a worldwide subscriber base numbering in the hundreds of millions. If that’s not enough, AT&T was not the only entity that observed the displacement of pay TV by streaming services, as illustrated by the roughly concurrent entry of Disney’s Disney+ service, Apple’s Apple TV+ service, Comcast NBCUniversal’s Peacock service, and others. Both the existing and new competitors are formidable entities operating in a market with formidable capital requirements. In 2019, Netflix, Amazon, and Apple TV+ expended approximately $15 billion, $6 billion, and $6 billion, respectively, on content; by contrast, HBO Max, AT&T’s streaming service, expended approximately $3.5 billion.
In short, the combined entity faced stiff competition from existing and reasonably anticipated competitors, requiring several billion dollars of “content spend” just to stay in the running. Far from being able to exercise pricing power in an imaginary market defined by DOJ litigators for strategic purposes, the AT&T/Time Warner entity faced the challenge of merely surviving in a real-world market populated by several exceptionally well-financed competitors. At best, the combined entity “threatened” to deliver incremental competitive benefits by adding a robust new platform to the video-streaming market; at worst, it would fail in this objective and cause no incremental competitive harm. As it turns out, the latter appears to be the case.
The Enduring Virtues of Antitrust Prudence
AT&T’s M&A fiasco has important lessons for broader antitrust debates about the evidentiary standards that should be applied by courts and agencies when assessing alleged antitrust violations, in general, and vertical restraints, in particular.
Among some scholars, regulators, and legislators, it has become increasingly received wisdom that prevailing evidentiary standards, as reflected in federal case law and agency guidelines, are excessively demanding, and have purportedly induced chronic underenforcement. It has been widely asserted that the courts’ and regulators’ focus on avoiding “false positives” and the associated costs of disrupting innocuous or beneficial business practices has resulted in an overly cautious enforcement posture, especially with respect to mergers and vertical restraints.
In fact, these views were expressed by some commentators in endorsing the antitrust case against the AT&T/Time-Warner transaction. Some legislators have gone further and argued for substantial amendments to the antitrust law to provide enforcers and courts with greater latitude to block or re-engineer combinations that would not pose sufficiently demonstrated competitive risks under current statutory or case law.
The swift downfall of the AT&T/Time-Warner transaction casts great doubt on this critique and accompanying policy proposals. It was precisely the district court’s rigorous application of those “overly” demanding evidentiary standards that avoided what would have been a clear false-positive error. The failure of the “blockbuster” combination to achieve not only market dominance, but even reasonably successful entry, validates the wisdom of retaining those standards.
The fundamental mismatch between the widely supported antitrust case against the transaction and the widely overlooked business realities of the economically relevant consumer market illustrates the ease with which largely theoretical and decontextualized economic models of competitive harm can lead to enforcement actions that lack any reasonable basis in fact.
In current discussions of technology markets, few words are heard more often than “platform.” Initial public offering (IPO) prospectuses use “platform” to describe a service that is bound to dominate a digital market. Antitrust regulators use “platform” to describe a service that dominates a digital market or threatens to do so. In either case, “platform” denotes power over price. For investors, that implies exceptional profits; for regulators, that implies competitive harm.
Conventional wisdom holds that platforms enjoy high market shares, protected by high barriers to entry, which yield high returns. This simple logic drives the market’s attribution of dramatically high valuations to dramatically unprofitable businesses and regulators’ eagerness to intervene in digital platform markets characterized by declining prices, increased convenience, and expanded variety, often at zero out-of-pocket cost. In both cases, “burning cash” today is understood as the path to market dominance and the ability to extract a premium from consumers in the future.
This logic is usually wrong.
The Overlooked Basics of Platform Economics
To appreciate this perhaps surprising point, it is necessary to go back to the increasingly overlooked basics of platform economics. A platform can refer to any service that matches two complementary populations. A search engine matches advertisers with consumers, an online music service matches performers and labels with listeners, and a food-delivery service matches restaurants with home diners. A platform benefits everyone by facilitating transactions that otherwise might never have occurred.
A platform’s economic value derives from its ability to lower transaction costs by funneling a multitude of individual transactions into a single convenient hub. In pursuit of minimum costs and maximum gains, users on one side of the platform will tend to favor the most popular platforms that offer the largest number of users on the other side of the platform. (There are partial exceptions to this rule when users value being matched with certain types of other users, rather than just with more users.) These “network effects” mean that any successful platform market will tend to converge toward a handful of winners. This positive feedback effect drives investors’ exuberance and regulators’ concerns.
There is a critical point, however, that often seems to be overlooked.
Market share only translates into market power to the extent the incumbent is protected against entry within some reasonable time horizon. If Warren Buffett’s moat requirement is not met, market share is immaterial. If XYZ.com owns 100% of the online pet-food delivery market but the costs of entry are negligible, then its market power is negligible as well. There is another important limiting principle. In platform markets, the depth of the moat depends not only on competitors’ costs of entering the market, but also on users’ costs of switching from one platform to another or of alternating among multiple platforms. If users can easily hop across platforms, then market share cannot confer market power given the continuous threat of user defection. Put differently: churn limits power over price.
Contrary to natural intuition, this is why a platform market consisting of only a few leaders can still be intensely competitive, keeping prices low (down to and including $0). It is often asserted, however, that users are typically locked into the dominant platform and therefore face high switching costs, implicitly satisfying the moat requirement. If that were true, the “high churn” scenario would be a theoretical curiosity and a leading platform’s high market share would be a reliable signal of market power. In fact, this common assumption likely describes the atypical case.
AWS and the Cloud Data-Storage Market
This point can be illustrated by considering the cloud data-storage market. This would appear to be an easy case where high switching costs (due to the difficulty in shifting data among storage providers) insulate the market leader against entry threats. Yet the real world does not conform to these expectations.
While Amazon Web Services pioneered the $100 billion-plus market and is still the clear market leader, it now faces vigorous competition from Microsoft Azure, Google Cloud, and other data-storage and cloud-related services. This may reflect the fact that the data-storage market is far from saturated, so new users are up for grabs and existing customers can mitigate lock-in by diversifying across multiple storage providers. Or it may reflect the fact that the market’s structure is fluid as a function of technological change, enabling entry at formerly bundled portions of the cloud data-services package. While such diversification is not always technologically feasible, the cloud-storage market suggests that users’ resistance to platform capture can represent a competitive opportunity for entrants to challenge dominant vendors on price, quality, and innovation.
The Surprising Instability of Platform Dominance
The instability of leadership positions in the cloud storage market is not exceptional.
Consider a handful of once-powerful platforms that were rapidly dethroned once challenged by a more efficient or innovative rival: Yahoo and AltaVista in the search-engine market (displaced by Google); Netscape in the browser market (displaced by Microsoft’s Internet Explorer, then displaced by Google Chrome); Nokia and then BlackBerry in the mobile wireless-device market (displaced by Apple and Samsung); and Friendster in the social-networking market (displaced by Myspace, then displaced by Facebook). AOL was once thought to be indomitable; now it is mostly referenced as a vintage email address. The list could go on.
Overestimating platform dominance—or more precisely, assuming platform dominance without close factual inquiry—matters because it promotes overestimates of market power. That, in turn, cultivates both market and regulatory bubbles: investors inflate stock valuations while regulators inflate the risk of competitive harm.
DoorDash and the Food-Delivery Services Market
Consider the DoorDash IPO that launched in early December 2020. The market’s current valuation of approximately $50 billion for a business that has been almost uniformly unprofitable implicitly assumes that DoorDash will maintain and expand its position as the largest U.S. food-delivery platform, which will in turn yield power over price and exceptional returns for investors.
There are reasons to be skeptical. Even where DoorDash captures and holds a dominant market share in certain metropolitan areas, it still faces actual and potential competition from other food-delivery services, in-house delivery services (especially by well-resourced national chains), and grocery and other delivery services already offered by regional and national providers. There is already evidence of these expected responses to DoorDash’s perceived high delivery fees, a classic illustration of the disciplinary effect of competitive forces on the pricing choices of an apparently dominant market leader. These “supply-side” constraints imposed by competitors are compounded by “demand-side” constraints imposed by customers. Home diners incur no more than minimal costs when swiping across food-delivery icons on a smartphone interface, casting doubt on whether high market share is likely to translate into market power in this context.
Deliveroo and the Costs of Regulatory Autopilot
Just as the stock market can suffer from delusions of platform grandeur, so too some competition regulators appear to have fallen prey to the same malady.
A vivid illustration is provided by the 2019 decision by the Competition and Markets Authority (CMA), the British competition regulator, to challenge Amazon’s purchase of a 16% stake in Deliveroo, one of three major competitors in the British food-delivery services market. This intervention provides perhaps the clearest illustration of policy action based on a reflexive assumption of market power, even in the face of little to no indication that the predicate conditions for that assumption could plausibly be satisfied.
Far from being a dominant platform, Deliveroo was (and is) a money-losing venture lagging behind money-losing Just Eat (now Just Eat Takeaway) and Uber Eats in the U.K. food-delivery services market. Even Amazon had previously closed its own food-delivery service in the U.K. due to lack of profitability. Despite Deliveroo’s distressed economic circumstances and the implausibility of any market power arising from Amazon’s investment, the CMA nonetheless elected to pursue the fullest level of investigation. While the transaction was ultimately approved in August 2020, this intervention imposed a 15-month delay and associated costs in connection with an investment that almost certainly bolstered competition in a concentrated market by funding a firm reportedly at risk of insolvency. This is the equivalent of a competition regulator driving in reverse.
There seems to be an increasingly common assumption in commentary by the press, policymakers, and even some scholars that apparently dominant platforms usually face little competition and can set, at will, the terms of exchange. For investors, this is a reason to buy; for regulators, this is a reason to intervene. The assumption is sometimes borne out, and in those cases antitrust intervention is appropriate whenever there is reasonable evidence that market power is being secured through something other than “competition on the merits.” Several conditions must be met, however, before the market-power assumption can be supported; absent those conditions, any such inquiry would be imprudent. Contrary to conventional wisdom, the economics and history of platform markets suggest that those conditions are infrequently satisfied.
Without closer scrutiny, reflexively equating market share with market power is prone to lead both investors and regulators astray.
The Competition and Antitrust Law Enforcement Reform Act (CALERA), recently introduced in the U.S. Senate, exhibits a remarkable willingness to cast aside decades of evidentiary standards that courts have developed to uphold the rule of law by precluding factually and economically ungrounded applications of antitrust law. Without those safeguards, antitrust enforcement is prone to be driven by a combination of prosecutorial and judicial fiat. That would place at risk the free play of competitive forces that the antitrust laws are designed to protect.
Antitrust law inherently lends itself to the risk of erroneous interpretations of ambiguous evidence. Outside clear cases of interfirm collusion, virtually all conduct that might appear anti-competitive might just as easily be proven, after significant factual inquiry, to be pro-competitive. This fundamental risk of a false diagnosis has guided antitrust case law and regulatory policy since at least the Supreme Court’s landmark Continental Television v. GTE Sylvania decision in 1977 and arguably earlier. Judicial and regulatory efforts to mitigate this ambiguity, while preserving the deterrent power of the antitrust laws, have resulted in the evidentiary requirements that are targeted by the proposed bill.
Proponents of the legislative “reforms” might argue that modern antitrust case law’s careful avoidance of enforcement error yields excessive caution. To relieve regulators and courts of having to do their homework before disrupting a targeted business and its employees, shareholders, customers, and suppliers, the proposed bill empowers plaintiffs to allege, and courts to “find,” anti-competitive conduct without being bound to the reasonably objective metrics upon which courts and regulators have relied for decades. That runs the risk of substituting rhetoric and intuition for fact and analysis as the guiding principles of antitrust enforcement and adjudication.
This dismissal of even a rudimentary commitment to rule-of-law principles is illustrated by two dramatic departures from existing case law in the proposed bill. Each constitutes a largely unrestrained “blank check” for regulatory and judicial overreach.
Blank Check #1
The bill includes a broad prohibition on “exclusionary” conduct, which is defined to include any conduct that “materially disadvantages 1 or more actual or potential competitors” and “presents an appreciable risk of harming competition.” That amorphous language arguably enables litigants to target a firm that offers consumers lower prices but “disadvantages” less efficient competitors that cannot match that price.
In fact, the proposed legislation specifically facilitates this litigation strategy by relieving predatory-pricing claims of the requirement to show that pricing is below cost or likely ultimately to yield profits for the defendant. While the bill permits a defendant to escape liability by showing sufficiently countervailing “procompetitive benefits,” the onus rests on the defendant to make that showing. This burden-shifting encourages lagging firms to shift competition from the marketplace to the courthouse.
Blank Check #2
The bill then removes another evidentiary safeguard by relieving plaintiffs from always having to define a relevant market. Rather, it may be sufficient to show that the contested practice gives rise to an “appreciable risk of harming competition … based on the totality of the circumstances.” It is hard to miss the high degree of subjectivity in this standard.
This ambiguous threshold runs counter to antitrust principles that require a credible showing of market power in virtually all cases except horizontal collusion. Those principles make perfect sense. Market power is the gateway concept that enables courts to distinguish between claims that plausibly target alleged harms to competition and those that do not. Without a well-defined market, it is difficult to know whether a particular practice reflects market power or market competition. Removing the market-power requirement can eliminate any meaningful grounds on which a defendant could avoid a nuisance lawsuit or contest or appeal a conclusory allegation or finding of anticompetitive conduct.
The bill’s transparently outcome-driven approach is likely to give rise to a cloud of liability that penalizes businesses that benefit consumers through price and quality combinations that competitors cannot replicate. This obviously runs directly counter to the purpose of the antitrust laws. Certainly, winners can and sometimes do entrench themselves through potentially anticompetitive practices that should be closely scrutinized. However, the proposed legislation seems to reflect a presumption that successful businesses usually win by employing illegitimate tactics, rather than simply being the most efficient firm in the market. Under that assumption, competition law becomes a tool for redoing, rather than enabling, competitive outcomes.
While this populist approach may be popular, it is neither economically sound nor consistent with a market-driven economy in which resources are mostly allocated through pricing mechanisms and government intervention is the exception, not the rule. It would appear that some legislators would like to reverse that presumption. Far from being a victory for consumers, that outcome would constitute a resounding loss.
In a constructive development, the Federal Trade Commission has joined its British counterpart in investigating Nvidia’s proposed $40 billion acquisition of chip designer Arm, a subsidiary of SoftBank. Arm provides the technological blueprints for wireless communications devices and, subject to a royalty fee, makes those crown-jewel assets available to all interested firms. Notwithstanding Nvidia’s stated commitment to keep the existing policy in place, there is an obvious risk that the new parent, one of the world’s leading chip makers, would at some time modify this policy with adverse competitive effects.
Ironically, the FTC is likely part of the reason that the Nvidia-Arm transaction is taking place.
Since the mid-2000s, the FTC and other leading competition regulators (except for the U.S. Department of Justice’s Antitrust Division under the leadership of former Assistant Attorney General Makan Delrahim) have intervened extensively in licensing arrangements in wireless device markets, culminating in the FTC’s recent failed suit against Qualcomm. The Nvidia-Arm transaction suggests that these actions may simply lead chip designers to abandon the licensing model and shift toward structures that monetize chip-design R&D through integrated hardware and software ecosystems. Amazon and Apple are already undertaking chip innovation through this model. Antitrust action that accelerates this movement toward in-house chip design is likely to have adverse effects for the competitive health of the wireless ecosystem.
How IP Licensing Promotes Market Access
Since its inception, the wireless communications market has relied on a handful of IP licensors to supply device producers and other intermediate users with a common suite of technology inputs. The result has been an efficient division of labor between firms that specialize in upstream innovation and firms that specialize in production and other downstream functions. Contrary to the standard assumption that IP rights limit access, this licensing-based model ensures technology access to any firm willing to pay the royalty fee.
Efforts by regulators to reengineer existing relationships between innovators and implementers endanger this market structure by inducing innovators to abandon licensing-based business models, which now operate under a cloud of legal insecurity, for integrated business models in which returns on R&D investments are captured internally through hardware and software products. Rather than expanding technology access and intensifying competition, antitrust restraints on licensing freedom are liable to limit technology access and increase market concentration.
Regulatory Intervention and Market Distortion
This interventionist approach has relied on the assertion that innovators can “lock in” producers and extract a disproportionate fee in exchange for access. This prediction has never found support in fact. Contrary to theoretical arguments that patent owners can impose double-digit “royalty stacks” on device producers, empirical researchers have repeatedly found that the estimated range of aggregate rates lies in the single digits. These findings are unsurprising given market performance over more than two decades: adoption has accelerated as quality-adjusted prices have fallen and innovation has never ceased. If rates had been exorbitant, market growth would have been slow, and the smartphone would be a luxury for the rich.
Despite these empirical infirmities, the FTC and other competition regulators have persisted in taking action to mitigate “holdup risk” through policy statements and enforcement actions designed to preclude IP licensors from seeking injunctive relief. The result is a one-sided legal environment in which the world’s largest device producers can effectively infringe patents at will, knowing that the worst-case scenario is a “reasonable royalty” award determined by a court, plus attorneys’ fees. Without any credible threat to deny access even after a favorable adjudication on the merits, any IP licensor’s ability to negotiate a royalty rate that reflects the value of its technology contribution is constrained.
Assuming no change in IP licensing policy on the horizon, it is therefore not surprising that an IP licensor would seek to shift toward an integrated business model in which IP is not licensed but embedded within an integrated suite of products and services. Or alternatively, an IP licensor entity might seek to be acquired by a firm that already has such a model in place. Hence, FTC v. Qualcomm leads Arm to Nvidia.
The Error Costs of Non-Evidence-Based Antitrust
These counterproductive effects of antitrust intervention demonstrate the error costs that arise when regulators act based on unverified assertions of impending market failure. Relying on the somewhat improbable assumption that chip suppliers can dictate licensing terms to device producers that are among the world’s largest companies, competition regulators have placed at risk the legal predicates of IP rights and enforceable contracts that have made the wireless-device market an economic success. As antitrust risk intensifies, the return on licensing strategies falls and competitive advantage shifts toward integrated firms that can monetize R&D internally through stand-alone product and service ecosystems.
Far from increasing competitiveness, regulators’ current approach toward IP licensing in wireless markets is likely to reduce it.