
The wave of populist antitrust that has been embraced by regulators and legislators in the United States, United Kingdom, European Union, and other jurisdictions rests on the assumption that currently dominant platforms occupy entrenched positions that only government intervention can dislodge. Following this view, Facebook will forever dominate social networking, Amazon will forever dominate cloud computing, Uber and Lyft will forever dominate ridesharing, and Amazon and Netflix will forever dominate streaming. This assumption of platform invincibility is so well-established that some policymakers advocate significant interventions without making any meaningful inquiry into whether a seemingly dominant platform actually exercises market power.

Yet this assumption is not supported by historical patterns in platform markets. It is true that network effects drive platform markets toward “winner-take-most” outcomes. But the winner is often toppled quickly and without much warning. There is no shortage of examples.

In 2007, a columnist in The Guardian observed that “it may already be too late for competitors to dislodge MySpace” and quoted an economist as authority for the proposition that “MySpace is well on the way to becoming … a natural monopoly.” About one year later, Facebook had overtaken the MySpace “monopoly” in the social-networking market. Similarly, it was once thought that Blackberry would forever dominate the mobile-communications device market, eBay would always dominate the online e-commerce market, and AOL would always dominate the internet-service-portal market (a market that no longer even exists). The list of digital dinosaurs could go on.

All those tech leaders were challenged by entrants and descended into irrelevance (or reduced relevance, in eBay’s case). This occurred through the force of competition, not government intervention.

Why This Time Is Probably Not Different

Given this long line of market precedents, current legislative and regulatory efforts to “restore” competition through extensive intervention in digital-platform markets require that we assume that “this time is different.” Just as that slogan has been repeatedly rebutted in the financial markets, so too is it likely to be rebutted in platform markets. 

There is already supporting evidence. 

In the cloud market, Amazon’s AWS now faces vigorous competition from Microsoft Azure and Google Cloud. In the streaming market, Amazon and Netflix face stiff competition from Disney+ and Apple TV+, to name just a few well-resourced rivals. In the social-networking market, Facebook now competes head-to-head with TikTok and seems to be losing. The market power once commonly attributed to leading food-delivery platforms such as Grubhub, UberEats, and DoorDash is implausible in light of persistent losses in most cases and the continuous entry of new services into a rich variety of local and product-market niches.

Those who have advocated antitrust intervention on a fast-track schedule may remain unconvinced by these inconvenient facts. But the market is not. 

Investors have already recognized Netflix’s vulnerability to competition, as reflected by a 35% fall in its stock price on April 20 and a decline of more than 60% over the past 12 months. Meta, Facebook’s parent, also experienced a reappraisal, falling more than 26% on Feb. 3 and more than 35% in the past 12 months. Uber, the pioneer of the ridesharing market, has declined by almost 50% over the past 12 months, while Lyft, its principal rival, has lost more than 60% of its value. These price freefalls suggest that antitrust populists may be pursuing solutions to a problem that market forces are already starting to address.

The Forgotten Curse of the Incumbent

For some commentators, the sharp downturn in the fortunes of the so-called “Big Tech” firms came as no surprise.

It has long been observed by some scholars and courts that a dominant firm “carries the seeds of its own destruction”—a phrase used by then-professor and later-Judge Richard Posner, writing in the University of Chicago Law Review in 1971. The reason: a dominant firm is liable to exhibit high prices, mediocre quality, or lackluster innovation, which then invites entry by more adept challengers. However, this view has been dismissed as outdated in digital-platform markets, where incumbents are purportedly protected by network effects and switching costs that make it difficult for entrants to attract users. In theory, either outcome is plausible, depending on the assumptions the economic modeler selects.

The plunging values of leading platforms supply real-world evidence that favors the self-correction hypothesis. It is often overlooked that network effects can work in both directions, resulting in a precipitous fall from market leader to laggard. Once users start abandoning a dominant platform for a new competitor, network effects operating in reverse can cause a “run for the exits” that leaves the leader with little time to recover. Just ask Nokia, the world’s leading (and seemingly unbeatable) smartphone brand until the Apple iPhone came along.

Why Market Self-Correction Outperforms Regulatory Correction

Market self-correction inherently outperforms regulatory correction: it operates far more rapidly and relies on consumer preferences to reallocate market leadership—a result perfectly consistent with antitrust’s mission to preserve “competition on the merits.” In contrast, policymakers can misdiagnose the competitive effects of business practices; are susceptible to the influence of private interests (especially those that are unable to compete on the merits); and often mispredict the market’s future trajectory. For Exhibit A, see the protracted antitrust litigation by the U.S. Department of Justice against IBM, which was filed in 1969 and ended in withdrawal of the suit in 1982. Given the launch of the Apple II in 1977, the IBM PC in 1981, and the entry of multiple “PC clones,” the forces of creative destruction swiftly displaced IBM from market leadership in the computing industry.

Regulators and legislators around the world have emphasized the urgency of taking dramatic action to correct claimed market failures in digital environments, casting aside prudential concerns over the consequences if any such failure proves to be illusory or temporary. 

But the costs of regulatory failure can be significant and long-lasting. Markets must operate under unnecessary compliance burdens that are difficult to modify. Regulators’ enforcement resources are diverted, and businesses are barred from adopting practices that would benefit consumers. In particular, proposed breakup remedies advocated by some policymakers would undermine the scale economies that have enabled platforms to push down prices, an important consideration in a time of accelerating inflation.

Conclusion

The high concentration levels and certain business practices in digital-platform markets certainly raise important concerns as a matter of antitrust (as well as privacy, intellectual property, and other bodies of) law. These concerns merit scrutiny and may necessitate appropriately targeted interventions. Yet, any policy steps should be anchored in the factually grounded analysis that has characterized decades of regulatory and judicial action to implement the antitrust laws with appropriate care. Abandoning this nuanced framework for a blunt approach based on reflexive assumptions of market power is likely to undermine, rather than promote, the public interest in competitive markets.

Federal Trade Commission (FTC) Chair Lina Khan missed the mark once again in her May 6 speech on merger policy, delivered at the annual meeting of the International Competition Network (ICN). At a time when the FTC and U.S. Justice Department (DOJ) are presumably evaluating responses to the agencies’ “request for information” on possible merger-guideline revisions (see here, for example), Khan’s recent remarks suggest a predetermination that merger policy must be “toughened” significantly to disincentivize a larger portion of mergers than under present guidance. A brief discussion of Khan’s substantively flawed remarks follows.

Discussion

Khan’s remarks begin with a favorable reference to the tendentious statement from President Joe Biden’s executive order on competition that “broad government inaction has allowed far too many markets to become uncompetitive, with consolidation and concentration now widespread across our economy, resulting in higher prices, lower wages, declining entrepreneurship, growing inequality, and a less vibrant democracy.” The claim that “government inaction” has enabled increased market concentration and reduced competition has been shown to be inaccurate, and therefore cannot serve as a defensible justification for a substantive change in antitrust policy. Accordingly, Khan’s statement that the executive order “underscores a deep mandate for change and a commitment to creating the enabling environment for reform” rests on foundations of sand.

Khan then shifts her narrative to a consideration of merger policy, stating:

Merger investigations invite us to make a set of predictive assessments, and for decades we have relied on models that generally assumed markets are self-correcting and that erroneous enforcement is more costly than erroneous non-enforcement. Both the experience of the U.S. antitrust agencies and a growing set of empirical research is showing that these assumptions appear to have been at odds with market realities.

Digital Markets

Khan argues, without explanation, that “the guidelines must better account for certain features of digital markets—including zero-price dynamics, the competitive significance of data, and the network externalities that can swiftly lead markets to tip.” She fails to make any showing that consumer welfare has been harmed by mergers involving digital markets, or that the “zero-price” feature is somehow troublesome. Moreover, the reference to “data” as being particularly significant to antitrust analysis appears to ignore research (see here) indicating there is an insufficient basis for having an antitrust presumption involving big data, and that big data (like R&D) may be associated with innovation, which enhances competitive vibrancy.

Khan also fails to note that network externalities are beneficial; when users are added to a digital platform, the platform’s value to other users increases (see here, for example). What’s more (see here), “gateways and multihoming can dissipate any monopoly power enjoyed by large networks[,] … provid[ing] another reason” why network effects may not raise competitive problems. In addition, the implicit notion that “tipping” is a particular problem is belied by the ability of new competitors to “knock off” supposed entrenched digital monopolists (think, for example, of Yahoo being displaced by Google, and Myspace being displaced by Facebook). Finally, a bit of regulatory humility is in order. Given the huge amount of consumer surplus generated by digital platforms (see here, for example), enforcers should be particularly cautious about avoiding more aggressive merger (and antitrust in general) policies that could detract from, rather than enhance, welfare.
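The double-edged character of network effects can be captured in a stylized, Metcalfe-style model (a textbook illustration offered purely for intuition; the quadratic value function and the 20% figure are assumptions, not drawn from the sources cited above):

```latex
% Stylized network value: each of n users gains value v from every
% other user, so total value grows roughly with the square of n.
V(n) = v \cdot \frac{n(n-1)}{2} \;\approx\; \frac{v}{2}\, n^{2}
% The same arithmetic runs in reverse. If 20\% of users defect,
\frac{V(0.8\,n)}{V(n)} \;\approx\; (0.8)^{2} = 0.64 ,
% so a 20\% loss of users erases roughly 36\% of network value.
```

On this sketch, each departing user lowers the platform’s value to everyone who remains, which is why tipping can cut against an incumbent as quickly as it once cut in its favor.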

Labor Markets

Khan argues that guidelines drafters should “incorporate new learning” embodied in “empirical research [that] has shown that labor markets are highly concentrated” and a “U.S. Treasury [report] recently estimating that a lack of competition may be costing workers up to 20% of their wages.” Unfortunately for Khan’s argument, these claims have been convincingly debunked in a new study by former FTC economist Julie Carlson (see here). As Carlson carefully explains, labor markets are not highly concentrated, and labor-market power is largely due to market frictions (such as occupational licensing) rather than concentration. In a similar vein, a recent article by Richard Epstein stresses that heightened antitrust enforcement in labor markets would involve “high administrative and compliance costs to deal with a largely nonexistent threat.” Epstein points out:

[T]raditional forms of antitrust analysis can perfectly deal with labor markets. … What is truly needed is a close examination of the other impediments to labor, including the full range of anticompetitive laws dealing with minimum wage, overtime, family leave, anti-discrimination, and the panoply of labor union protections, where the gains to deregulation should be both immediate and large.

Nonhorizontal Mergers

Khan notes:

[W]e are looking to sharpen our insights on non-horizontal mergers, including deals that might be described as ecosystem-driven, concentric, or conglomerate. While the U.S. antitrust agencies energetically grappled with some of these dynamics during the era of industrial-era conglomerates in the 1960s and 70s, we must update that thinking for the current economy. We must examine how a range of strategies and effects, including extension strategies and portfolio effects, may warrant enforcement action.

Khan’s statement on non-horizontal mergers once again is fatally flawed.

With regard to vertical mergers (not specifically mentioned by Khan), the FTC abruptly withdrew, without explanation, its approval of the carefully crafted 2020 vertical-merger guidelines. That action offends the rule of law, creating unwarranted and costly business-sector confusion. Khan’s lack of specific reference to vertical mergers does nothing to solve this problem.

With regard to other nonhorizontal mergers, there is no sound economic basis to oppose mergers involving unrelated products. Doing so would have no procompetitive rationale and would threaten to reduce welfare by preventing the potential realization of efficiencies. In a 2020 OECD paper drafted principally by DOJ and FTC economists, the U.S. government meticulously assessed the case for challenging such mergers and rejected it on economic grounds. The OECD paper is noteworthy for its entirely negative assessment of the 1960s and 1970s conglomerate cases, which Khan implicitly praises in suggesting they merely should be “updated” to deal with the current economy (citations omitted):

Today, the United States is firmly committed to the core values that antitrust law protects: competition, efficiency, and consumer welfare, rather than individual competitors. During the ten-year period from 1965 to 1975, however, the Agencies challenged several mergers of unrelated products under theories that were antithetical to those values. The “entrenchment” doctrine, in particular, condemned mergers if they strengthened an already dominant firm through greater efficiencies, or gave the acquired firm access to a broader line of products or greater financial resources, thereby making life harder for smaller rivals. This approach is no longer viewed as valid under U.S. law or economic theory. …

These cases stimulated a critical examination, and ultimate rejection, of the theory by legal and economic scholars and the Agencies. In their Antitrust Law treatise, Phillip Areeda and Donald Turner showed that to condemn conglomerate mergers because they might enable the merged firm to capture cost savings and other efficiencies, thus giving it a competitive advantage over other firms, is contrary to sound antitrust policy, because cost savings are socially desirable. It is now recognized that efficiency and aggressive competition benefit consumers, even if rivals that fail to offer an equally “good deal” suffer loss of sales or market share. Mergers are one means by which firms can improve their ability to compete. It would be illogical, then, to prohibit mergers because they facilitate efficiency or innovation in production. Unless a merger creates or enhances market power or facilitates its exercise through the elimination of competition—in which case it is prohibited under Section 7—it will not harm, and more likely will benefit, consumers.

Given the well-reasoned rejection of conglomerate theories by leading antitrust scholars and modern jurisprudence, it would be highly wasteful for the FTC and DOJ to consider covering purely conglomerate (nonhorizontal and nonvertical) mergers in new guidelines. Absent new legislation, challenges of such mergers could be expected to fail in court. Regrettably, Khan appears oblivious to that reality.

Khan’s speech ends with a hat tip to internationalism and the ICN:

The U.S., of course, is far from alone in seeing the need for a course correction, and in certain regards our reforms may bring us in closer alignment with other jurisdictions. Given that we are here at ICN, it is worth considering how we, as an international community, can or should react to the shifting consensus.

Antitrust laws have been adopted worldwide, in large part at the urging of the United States (see here). They remain, however, national laws. One would hope that the United States, which in the past was the world leader in developing antitrust economics and enforcement policy, would continue to seek to retain this role, rather than merely emulate other jurisdictions to join an “international community” consensus. Regrettably, this does not appear to be the case. (Indeed, European Commissioner for Competition Margrethe Vestager made specific reference to a “coordinated approach” and convergence between U.S. and European antitrust norms in a widely heralded October 2021 speech at the annual Fordham Antitrust Conference in New York. And Vestager specifically touted European ex ante regulation as well as enforcement in a May 5 ICN speech that emphasized multinational antitrust convergence.)

Conclusion

Lina Khan’s recent ICN speech on merger policy sends all the wrong signals on merger-guideline revisions. It strongly hints that new guidelines will embody preconceived interventionist notions at odds with sound economics. By calling for a dramatically new direction in merger policy, it injects uncertainty into merger planning. Given their interventionist bent, Khan’s remarks, combined with prior statements by U.S. Assistant Attorney General Jonathan Kanter (see here), may further serve to deter potentially welfare-enhancing consolidations. Whether the federal courts will be willing to defer to a drastically different approach to mergers by the agencies (one at odds with several decades of a careful evolutionary approach, rooted in consumer-welfare-oriented economics) is, of course, another story. Stay tuned.

Biden administration enforcers at the U.S. Justice Department (DOJ) and the Federal Trade Commission (FTC) have prioritized labor-market monopsony issues for antitrust scrutiny (see, for example, here and here). This heightened interest comes in light of claims that labor markets are highly concentrated and are rife with largely neglected competitive problems that depress workers’ income. Such concerns are reflected in a March 2022 U.S. Treasury Department report on “The State of Labor Market Competition.”

Monopsony is the “flip side” of monopoly, and U.S. antitrust law clearly condemns agreements designed to undermine the “buyer side” competitive process (see, for example, this U.S. government submission to the OECD). But is a special new emphasis on labor markets warranted, given that antitrust enforcers ideally should seek to allocate their scarce resources to the most pressing (highest-valued) areas of competitive concern?

A May 2022 Information Technology and Innovation Foundation (ITIF) study from ITIF Associate Director (and former FTC economist) Julie Carlson indicates that the degree of emphasis the administration’s antitrust enforcers are placing on labor issues may be misplaced. In particular, the ITIF study debunks the Treasury report’s findings of high levels of labor-market concentration and the claim that workers face a “decrease in wages [due to labor market power] at roughly 20 percent relative to the level in a fully competitive market.” Furthermore, while noting the importance of DOJ antitrust prosecutions of hard-core anticompetitive agreements among employers (wage-fixing and no-poach agreements), the ITIF report emphasizes policy reforms unrelated to antitrust as key to improving workers’ lot.

Key takeaways from the ITIF report include:

  • Labor markets are not highly concentrated. Local labor-market concentration has been declining for decades, with the most concentrated markets seeing the largest declines.
  • Labor-market power is largely due to labor-market frictions, such as worker preferences, search costs, bargaining, and occupational licensing, rather than concentration.
  • As a case study, changes in concentration in the labor market for nurses have little to no effect on wages, whereas nurses’ preferences over job location are estimated to lead to wage markdowns of 50%.
  • Firms are not profiting at the expense of workers. The decline in the labor share of national income is primarily due to rising home values, not increased labor-market concentration.
  • Policy reform should focus on reducing labor-market frictions and strengthening workers’ ability to collectively bargain. Policies targeting concentration are misguided and will be ineffective at improving outcomes for workers.

The ITIF report also throws cold water on the notion of emphasizing labor-market issues in merger reviews, which was teed up in the January 2022 joint DOJ/FTC request for information (RFI) on merger enforcement. The ITIF report explains:

Introducing the evaluation of labor market effects unnecessarily complicates merger review and needlessly ties up agency resources at a time when the agencies are facing severe resource constraints. As discussed previously, labor markets are not highly concentrated, nor is labor market concentration a key factor driving down wages.

A proposed merger that is reportable to the agencies under the Hart-Scott-Rodino Act and likely to have an anticompetitive effect in a relevant labor market is also likely to have an anticompetitive effect in a relevant product market. … Evaluating mergers for labor market effects is unnecessary and costly for both firms and the agencies. The current merger guidelines adequately address competition concerns in input markets, so any contemplated revision to the guidelines should not incorporate a “framework to analyze mergers that may lessen competition in labor markets.” [Citation to Request for Information on Merger Enforcement omitted.]

In sum, the administration’s recent pronouncements about highly anticompetitive labor markets that have resulted in severely underpaid workers—used as the basis to justify heightened antitrust emphasis on labor issues—appear to be based on false premises. As such, they are a species of government misinformation, which, if acted upon, threatens to misallocate scarce enforcement resources and thereby undermine efficient government antitrust enforcement. What’s more, an unnecessary overemphasis on labor-market antitrust questions could impose unwarranted investigative costs on companies and chill potentially efficient business transactions. (Think of a proposed merger that would reduce production costs and benefit consumers but result in a workforce reduction by the merged firm.)

Perhaps the administration will take heed of the ITIF report and rethink its plans to ramp up labor-market antitrust-enforcement initiatives. Promoting pro-market regulatory reforms that benefit both labor and consumers (for instance, paring back excessive occupational-licensing restrictions) would be a welfare-superior and cheaper alternative to misbegotten antitrust actions.

A new scholarly study of economic concentration sheds further light on the flawed nature of the Neo-Brandeisian claim that the United States has a serious “competition problem” due to decades of increasing concentration and ineffective antitrust enforcement (see here and here, for example). In a recent article, economist Yueran Ma—assistant professor at the University of Chicago’s Booth School of Business—found that economies of scale (an efficiency) were associated with a U.S. economy-wide rise in concentration in economic activities (not antitrust markets) and a growth in output over the last century. In particular, Ma explained (emphasis added):

New research observing 100 years of concentration in economic activities and investment in research and development shows that the dominance of large businesses has been increasing for at least a century and, as Marx conjectured, may be a feature of the increasingly stronger economies of scale that accompany industrial development. . . .

To understand the broad historical currents of concentration, we collected financial information of all US corporations by size groups for the past 100 years. . . .

To be clear, our focus is not market concentration for a particular product, which would require defining markets based on consumption activities. Instead, our focus is the business size distribution in the US, namely the extent to which larger businesses dominate in the total volume of production activities across the economy. . . .

The data reveals a persistent rise in the dominance of the top 1 percent and top 0.1 percent of businesses in the US. From 1918 to 1975, the SOI provided size groups sorted by net income (green line with circles). Starting in 1959, the SOI also provided size groups sorted by sales (red line with diamonds). The longest and most comprehensive size groups are sorted by assets, available since 1931 (blue line with triangles). No matter the measure you choose, the long-run increase in corporate concentration is clear. . . .

Just as Stigler, Marx and Lenin had predicted, the reason for increased concentration appears to be economies of scale. Among different industries, we find that the timing and the degree of rising concentration align closely with rising investment in research and development (R&D) and information technology (IT), measured using additional data from the Bureau of Economic Analysis (BEA). These types of investments usually require a certain degree of scale due to upfront spending, while also producing technological changes that enhance economies of scale. Accordingly, we use investment intensity in R&D and IT as a general indicator of firms exploiting economies of scale. . . .

We also find that increases in concentration are positively associated with industry growth. In particular, over the medium term (e.g., twenty years), industries that experience higher increases in concentration are also the ones that experience higher growth in real gross output. Correspondingly, their shares of economic output expand as well. . . .

A natural question is whether regulatory policies and antitrust enforcement drive the main trends we find. For instance, regulatory restrictions on interstate banking could have a direct impact on the size of banks (and we indeed observe rising concentration in banking when these restrictions were lifted). In most other sectors, we are not aware of policies that align with the patterns of rising concentration in our data. The past century witnessed several regimes of antitrust enforcement—however, rising corporate concentration has been a secular trend throughout these different antitrust regimes. We do not observe a significant relationship between corporate concentration in our data and standard aggregate antitrust enforcement measures, such as the number of antitrust cases filed by the Department of Justice (DOJ) or the budget of the DOJ’s antitrust division. Overall, we do not find evidence that antitrust shapes the economy-wide business size distribution, although it could have a more visible impact on the market for a particular product (which is closer to the domain of antitrust analyses).

Even if higher concentration in production activities comes from economies of scale, some contemporary observers fear that economies of scale will ultimately weaken competition and cultivate monopoly power (Lenin highlighted such concerns as well). Analyzing this question requires reliable measurement of market power. So far, most studies do not find rising markups (the standard measure of market power) before the 1980s, and some argue that markups have increased since the 1980s. Combined with our findings, the evidence suggests that stronger economies of scale does not always lead to stronger market power. It is possible that such a link may exist under certain conditions, and future research could shed more light on this topic.

In broad terms, Ma’s study describes a long-term rise in economic concentration (again, something entirely different from antitrust-relevant market concentration) in tandem with substantial increases in economies of scale and output expansion—overall, a story of long-term welfare enhancement. Antitrust-enforcement levels are not portrayed as significantly related to this trend, and there is no showing that rising economies of scale inevitably enhance market power. (Even possible increases in markups, whose existence has been contested, do not necessarily reflect an increase in market power, see here and here.)

Admittedly, Ma was engaged in a positive analysis of concentration, not a normative assessment. But her research certainly lends no support to the normative neo-Brandeisian notion that a drastic interventionist-minded overhaul of antitrust is required to address major competitive ills. To the contrary, one could logically infer that a dramatic rise in antitrust interventionism is not only uncalled for, but it could threaten the beneficial nature of rising economies of scale and output that have been shown to characterize the U.S. economy. One would hope that this inference would give Congress and U.S. antitrust enforcers pause before they embark on a novel interventionist path.

For decades, consumer-welfare enhancement appeared to be a key enforcement goal of competition policy (antitrust, in the U.S. usage) in most jurisdictions:

  • The U.S. Supreme Court famously proclaimed American antitrust law to be a “consumer welfare prescription” in Reiter v. Sonotone Corp. (1979).
  • A study by the current adviser to the European Competition Commission’s chief economist found that there are “many statements indicating that, seen from the European Commission, modern EU competition policy to a large extent is about protecting consumer welfare.”
  • A comprehensive international survey presented at the 2011 Annual International Competition Network Conference found that a majority of competition authorities state that “their national [competition] legislation refers either directly or indirectly to consumer welfare,” and that most competition authorities “base their enforcement efforts on the premise that they enlarge consumer welfare.”

Recently, however, the notion that a consumer welfare standard (CWS) should guide antitrust enforcement has come under attack (see here). In the United States, this movement has been led by populist “neo-Brandeisians” who have “call[ed] instead for enforcement that takes into account firm size, fairness, labor rights, and the protection of smaller enterprises.” (Interestingly, there appear to be more direct and strident published attacks on the CWS from American critics than from European commentators, perhaps reflecting an unspoken European assumption that “ordoliberal” strong government oversight of markets advances the welfare of consumers and society in general.) The neo-Brandeisian critique is badly flawed and should be rejected.

Assuming that the focus on consumer welfare in U.S. antitrust enforcement survives this latest populist challenge, what considerations should inform the design and application of a CWS? Before considering this question, one must confront the context in which it arises—the claim that the U.S. economy has become far less competitive in recent decades and that antitrust enforcement has been ineffective at addressing this problem. After dispensing with this flawed claim, I advance four principles aimed at properly incorporating consumer-welfare considerations into antitrust-enforcement analysis.

Does the U.S. Suffer from Poor Antitrust Enforcement and Declining Competition?

Antitrust interventionists assert that lax U.S. antitrust enforcement has coincided with a serious decline in competition—a claim deployed to argue that, even if one assumes that promoting consumer welfare remains an overarching goal, U.S. antitrust policy nonetheless requires a course correction. After all, basic price theory indicates that a reduction in market competition raises deadweight loss and reduces consumers’ relative share of total surplus. As such, it might seem to follow that “ramping up antitrust” would lead to more vigorously competitive markets, featuring less deadweight loss and relatively more consumer surplus.

This argument, of course, avoids error-cost, rent-seeking, and public-choice issues that raise serious questions about the welfare effects of more aggressive “invigorated” enforcement (see here, for example). But more fundamentally, the argument is based on two incorrect premises:

  1. That competition has declined; and
  2. That U.S. trustbusters have applied the CWS in a manner too narrow to address competitive problems effectively.

Those premises (which also underlie President Joe Biden’s July 2021 Executive Order on Promoting Competition in the American Economy) do not stand up to scrutiny.

In a recent article in the Stigler Center journal ProMarket, Yale University economics professor Fiona Scott Morton and Yale Law student Leah Samuel accepted those premises in complaining about poor antitrust enforcement and substandard competition (hyperlinks omitted and emphasis in the original):

In recent years, the [CWS] term itself has become the target of vocal criticism in light of mounting evidence that recent enforcement—and what many call the “consumer welfare standard era” of antitrust enforcement—has been a failure. …

This strategy of non-enforcement has harmed markets and consumers. Today we see the evidence of this under-enforcement in a range of macroeconomic measures, studies of markups, as well as in merger post-mortems and studies of anticompetitive behavior that agencies have not pursued. Non-economist observers – journalists, advocates, and lawyers – who have noticed the lack of enforcement and the pernicious results have learned to blame “economics” and the CWS. They are correct that using CWS, as defined and warped by Chicago-era jurists and economists, has been a failure. That kind of enforcement—namely, insufficient enforcement—does not protect competition. But we argue that the “economics” at fault are the corporate-sponsored Chicago School assumptions, which are at best outdated, generally unjustified, and usually incorrect.

While the Chicago School caused the “consumer welfare standard” to become associated with an anti-enforcement philosophy in the legal community, it has never changed its meaning among PhD-trained economists.

To an economist, consumer welfare is a well-defined concept. Price, quality, and innovation are all part of the demand curve and all form the basis for the standard academic definition of consumer welfare. CW is the area under the demand curve and above the quality-adjusted price paid. … Quality-adjusted price represents all the value consumers get from the product less the price they paid, and therefore encapsulates the role of quality of any kind, innovation, and price on the welfare of the consumer.
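The definition quoted above has a standard textbook formalization. As a sketch in conventional notation (my gloss, not the authors’): with demand curve $D(p)$ and quality-adjusted price $p^\ast$, consumer surplus is the area under the demand curve and above the price paid:

```latex
CS(p^\ast) = \int_{p^\ast}^{\bar{p}} D(p)\, dp
```

where $\bar{p}$ is the choke price at which demand falls to zero. On this definition, a practice that lowers the quality-adjusted price, or that shifts demand outward through higher quality or innovation, enlarges the surplus area.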

In my published response to Scott-Morton and Samuel, I summarized recent economic literature that contradicts the “competition is declining” claim. I also demonstrated that antitrust enforcement has been robust and successful, refuting the authors’ claim to the contrary (cross links to economic literature omitted):

There are only two problems with the [authors’] argument. First, it is not clear at all that competition has declined during the reign of this supposedly misused [CWS] concept. Second, the consumer welfare standard has not been misapplied at all. Indeed, as antitrust scholars and enforcement officials have demonstrated … modern antitrust enforcement has not adopted a narrow “Chicago School” view of the world. To the contrary, it has incorporated the more sophisticated analysis the authors advocate, and enforcement initiatives have been vigorous and largely successful. Accordingly, the authors’ call for an adjustment in antitrust enforcement is a solution in search of a non-existent problem.

In short, competitive conditions in U.S. markets are robust and have not been declining. Moreover, U.S. antitrust enforcement has been sophisticated and aggressive, fully attuned to considerations of quality and innovation.

A Suggested Framework for Consumer Welfare Analysis

Although recent claims of “weak” U.S. antitrust enforcement are baseless, they do, nevertheless, raise “front and center” the nature of the CWS. The CWS is a worthwhile concept, but it eludes a precise definition. That is as it should be. In our common law system, fact-specific analyses of particular competitive practices are key to determining whether welfare is or is not being advanced in the case at hand. There is no simple talismanic CWS formula that is readily applicable to diverse cases.

While Scott Morton argues that the area under the demand curve (consumer surplus) is essentially coincident with the CWS, other leading commentators take account of the interests of producers as well. For example, the leading antitrust treatise writer, Herbert Hovenkamp, suggests thinking about consumer welfare in terms of “maxim[izing] output that is consistent with sustainable competition. Output includes quantity, quality, and improvements in innovation. As an aside, it is worth noting that high output favors suppliers, including labor, as well as consumers because job opportunities increase when output is higher.” (Hovenkamp, Federal Antitrust Policy 102 (6th ed. 2020).)

Federal Trade Commission (FTC) Commissioner Christine Wilson (like Ken Heyer and other scholars) advocates a “total welfare standard” (consumer plus producer surplus). She stresses that it would beneficially:

  1. Make efficiencies more broadly cognizable, capturing cost reductions not passed through in the short run;
  2. Better enable the agencies to consider multi-market effects (whether consumer welfare gains in one market swamp consumer welfare losses in another market); and
  3. Better capture dynamic efficiencies (such as firm-specific efficiencies that are emulated by other “copycat” firms in the market).
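In conventional notation (again my gloss, not Wilson’s), the total welfare standard simply adds producer surplus to the consumer surplus measure discussed above:

```latex
W = CS + PS, \qquad PS = (p - c)\, q
```

assuming for simplicity a constant marginal cost $c$, price $p$, and quantity $q$. An efficiency that lowers $c$ raises $W$ immediately even if none of the cost saving is passed through to consumers in the short run, which is the point of item 1 above.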

Hovenkamp and Wilson point to the fact that efficiency-enhancing business conduct often has positive ramifications for both consumers and producers. As such, a CWS that focuses narrowly on short-term consumer surplus may prompt antitrust challenges to conduct that, properly understood, will prove beneficial to both consumers and producers over time.

With this in mind, I will now suggest four general “framework principles” to inform a CWS analysis that properly accounts for innovation and dynamic factors. These principles are tentative and merely suggestive, intended to prompt a further dialogue on CWS among interested commentators. (Also, many practical details will need to be filled in, based on further analysis.)

  1. Enforcers should consider all effects on consumer welfare in evaluating a transaction. Under the rule of reason, a reduction in surplus to particular defined consumers should not condemn a business practice (merger or non-merger) if other consumers are likely to enjoy accretions to surplus and if aggregate consumer surplus appears unlikely to decline, on net, due to the practice. Surplus need not be quantified—the likely direction of change in surplus is all that is required. In other words, “actual welfare balancing” is not required, consistent with the practical impossibility of quantifying net welfare effects in almost all cases (see, e.g., Hovenkamp, here). This principle is unaffected by market definition—all affected consumers should be assessed, whether they are “in” or “out” of a hypothesized market.
  2. Vertical intellectual-property-licensing contracts should not be subject to antitrust scrutiny unless there is substantial evidence that they are being used to facilitate horizontal collusion. This principle draws on the “New Madison Approach” associated with former Assistant Attorney General for Antitrust Makan Delrahim. It applies to a set of practices that further the interests of both consumers and producers. Vertical IP licensing (particularly patent licensing) “is highly important to the dynamic and efficient dissemination of new technologies throughout the economy, which, in turn, promotes innovation and increased welfare (consumer and producer surplus).” (See here, for example.) The 9th U.S. Circuit Court of Appeals’ refusal to condemn Qualcomm’s patent-licensing contracts (which had been challenged by the FTC) is consistent with this principle; it “evinces a refusal to find anticompetitive harm in licensing markets without hard empirical support.” (See here.)
  3. Enforcers should carefully assess the ability of “non-standard” commercial contracts—horizontal and vertical—to overcome market failures, as described by transaction-cost economics (see here, and here, for example). Non-standard contracts may be designed to deal with problems (for instance) of contractual incompleteness and opportunism that stymie efforts to advance new commercial opportunities. To the extent that such contracts create opportunities for transactions that expand or enhance market offerings, they generate new consumer surplus (new or “shifted out” demand curves) and enhance consumer welfare. Thus, they should enjoy a general (though rebuttable) presumption of legality.
  4. Most fundamentally, enforcers should take account of cost-benefit analysis, rooted in error-cost considerations, in their enforcement initiatives, in order to further consumer welfare. As I have previously written:

Assuming that one views modern antitrust enforcement as an exercise in consumer welfare maximization, what does that tell us about optimal antitrust enforcement policy design? In order to maximize welfare, enforcers must have an understanding of – and seek to maximize the difference between – the aggregate costs and benefits that are likely to flow from their policies. It therefore follows that cost-benefit analysis should be applied to antitrust enforcement design. Specifically, antitrust enforcers first should ensure that the rules they propagate create net welfare benefits. Next, they should (to the extent possible) seek to calibrate those rules so as to maximize net welfare. (Significantly, Federal Trade Commissioner Josh Wright also has highlighted the merits of utilizing cost-benefit analysis in the work of the FTC.) [Eight specific suggestions for implementing cost-beneficial antitrust evaluations are then put forth in this article.]

Conclusion

One must hope that efforts to eliminate consumer welfare as the focal point of U.S. antitrust will fail. But even if they do, market-oriented commentators should be alert to any efforts to “hijack” the CWS by interventionist market-skeptical scholars. A particular threat may involve efforts to define the CWS as merely involving short-term consumer surplus maximization in narrowly defined markets. Such efforts could, if successful, justify highly interventionist enforcement protocols deployed against a wide variety of efficient (though too often mischaracterized) business practices.

To counter interventionist antitrust proposals, it is important to demonstrate that claims of faltering competition and inadequate antitrust enforcement under current norms simply are inaccurate. Such an effort, though necessary, is not enough.

In order to win the day, it will be important for market mavens to explain that novel business practices aimed at promoting producer surplus tend to increase consumer surplus as well. That is because efficiency-enhancing stratagems (often embodied in restrictive IP-licensing agreements and non-standard contracts) that overcome transaction-cost difficulties frequently pave the way for innovation and the dissemination of new technologies throughout the economy. Those effects, in turn, expand and create new market opportunities, yielding huge additions to consumer surplus—accretions that swamp short-term static effects.

Enlightened enforcers should apply enforcement protocols that allow such benefits to be taken into account. They should also focus on the interests of all consumers affected by a practice, not just a narrow subset of targeted potentially “harmed” consumers. Finally, public officials should view their enforcement mission through a cost-benefit lens, which is designed to promote welfare. 

President Joe Biden’s July 2021 executive order set forth a commitment to reinvigorate U.S. innovation and competitiveness. The administration’s efforts to pass the America COMPETES Act would appear to further demonstrate a serious intent to pursue these objectives.

Yet several actions taken by federal agencies threaten to undermine the intellectual-property rights and transactional structures that have driven the exceptional performance of U.S. firms in key areas of the global innovation economy. These regulatory missteps together represent a policy “lose-lose” that lacks any sound basis in innovation economics and threatens U.S. leadership in mission-critical technology sectors.

Life Sciences: USTR Campaigns Against Intellectual-Property Rights

In the pharmaceutical sector, the administration’s signature action has been an unprecedented campaign by the Office of the U.S. Trade Representative (USTR) to block enforcement of patents and other intellectual-property rights held by companies that have broken records in the speed with which they developed and manufactured COVID-19 vaccines on a mass scale.

Patents were not an impediment in this process. To the contrary: they were necessary predicates to induce venture-capital investment in a small firm like BioNTech, which undertook drug development and then partnered with the much larger Pfizer to execute testing, production, and distribution. If success in vaccine development is rewarded with expropriation, this vital public-health sector is unlikely to attract investors in the future. 

Contrary to increasingly common assertions that the Bayh-Dole Act (which enables universities to seek patents arising from research funded by the federal government) “robs” taxpayers of intellectual property they funded, the development of COVID-19 vaccines by scientist-founded firms illustrates how the combination of patents and private capital is essential to convert academic research into life-saving medical solutions. The biotech ecosystem has long relied on patents to structure partnerships among universities, startups, and large firms. The costly path from lab to market relies on a secure property-rights infrastructure to ensure exclusivity, without which no investor would put capital at stake in what is already a high-risk, high-cost enterprise.

This is not mere speculation. During the decades prior to the Bayh-Dole Act, the federal government placed strict limitations on the ability to patent or exclusively license innovations arising from federally funded research projects. The result: the market showed little interest in making the investment needed to convert those innovations into commercially viable products that might benefit consumers. This history casts great doubt on the wisdom of the USTR’s campaign to limit the ability of biopharmaceutical firms to maintain legal exclusivity over certain life sciences innovations.

Genomics: FTC Attempts to Block the Illumina/GRAIL Acquisition

In the genomics industry, the Federal Trade Commission (FTC) has devoted extensive resources to oppose the acquisition by Illumina—the market leader in next-generation DNA-sequencing equipment—of a medical-diagnostics startup, GRAIL (an Illumina spinoff), that has developed an early-stage cancer screening test.

It is hard to see the competitive threat. GRAIL is a pre-revenue company that operates in a novel market segment, and its diagnostic test has not yet received approval from the Food and Drug Administration (FDA). To address concerns over barriers to potential competitors in this nascent market, Illumina has committed to 12-year supply contracts that would bar price increases or differential treatment for firms that develop oncology-detection tests requiring use of the Illumina platform.

One of Illumina’s few competitors in the global market is the BGI Group, a China-based company that, in 2013, acquired Complete Genomics, a U.S. target that Illumina pursued but relinquished due to anticipated resistance from the FTC in the merger-review process. The transaction was then cleared by the Committee on Foreign Investment in the United States (CFIUS).

The FTC’s case against Illumina’s re-acquisition of GRAIL relies on theoretical predictions of consumer harm in a market that is not yet operational. Hypothetical market failure scenarios may suit an academic seminar but fall well below the probative threshold for antitrust intervention. 

Most critically, the Illumina enforcement action places at risk a key element of well-functioning innovation ecosystems. Economies of scale and network effects lead technology markets to converge on a handful of leading platforms, which then often outsource research and development by funding and sometimes acquiring smaller firms that develop complementary technologies. This symbiotic relationship encourages entry and benefits consumers by bringing new products to market as efficiently as possible.

If antitrust interventions based on regulatory fiat, rather than empirical analysis, disrupt settled expectations in the M&A market that innovations can be monetized through acquisition transactions by larger firms, venture capital may be unwilling to fund such startups in the first place. Independent development or an initial public offering are often not feasible exit options. It is likely that innovation will then retreat to the confines of large incumbents that can fund research internally but often execute it less effectively. 

Wireless Communications: DOJ Takes Aim at Standard-Essential Patents

Wireless communications stand at the heart of the global transition to a 5G-enabled “Internet of Things” that will transform business models and unlock efficiencies in myriad industries.  It is therefore of paramount importance that policy actions in this sector rest on a rigorous economic basis. Unfortunately, a recent policy shift proposed by the U.S. Department of Justice’s (DOJ) Antitrust Division does not meet this standard.

In December 2021, the Antitrust Division released a draft policy statement that would largely bar owners of standard-essential patents from seeking injunctions against infringers, which are usually large device manufacturers. These patents cover wireless functionalities that enable transformative solutions in myriad industries, ranging from communications to transportation to health care. A handful of U.S. and European firms lead in wireless chip design and rely on patent licensing to disseminate technology to device manufacturers and to fund billions of dollars in research and development. The result is a technology ecosystem that has enjoyed continuous innovation, widespread user adoption, and declining quality-adjusted prices.

The inability to block infringers disrupts this equilibrium by signaling to potential licensees that wireless technologies developed by others can be used at will, with the terms of use to be negotiated through costly and protracted litigation. A no-injunction rule would discourage innovation while encouraging delaying tactics favored by well-resourced device manufacturers (including some of the world’s largest companies by market capitalization) that occupy bottleneck pathways to lucrative retail markets in the United States, China, and elsewhere.

Rather than promoting competition or innovation, the proposed policy would simply transfer wealth from firms that develop new technologies at great cost and risk to firms that prefer to use those technologies at no cost at all. This does not benefit anyone other than device manufacturers that already capture the largest portion of economic value in the smartphone supply chain.

Conclusion

From international trade to antitrust to patent policy, the administration’s actions imply little appreciation for the property rights and contractual infrastructure that support real-world innovation markets. In particular, the administration’s policies endanger the intellectual-property rights and monetization pathways that support market incentives to invest in the development and commercialization of transformative technologies.

This creates an inviting vacuum for strategic rivals that are vigorously pursuing leadership positions in global technology markets. In industries that stand at the heart of the knowledge economy—life sciences, genomics, and wireless communications—the administration is on a counterproductive trajectory that overlooks the business realities of technology markets and threatens to push capital away from the entrepreneurs that drive a robust innovation ecosystem. It is time to reverse course.

The Jan. 18 Request for Information on Merger Enforcement (RFI)—issued jointly by the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ)—sets forth 91 sets of questions (subsumed under 15 headings) that provide ample opportunity for public comment on a large range of topics.

Before chasing down individual analytic rabbit holes related to specific questions, it would be useful to reflect on the “big picture” policy concerns raised by this exercise (but not hinted at in the questions). Viewed from a broad policy perspective, the RFI initiative risks undermining the general respect that courts have accorded merger guidelines over the years, as well as disincentivizing economically beneficial business consolidations.

Policy concerns that flow from various features of the RFI, which could undermine effective merger enforcement, are highlighted below. These concerns counsel against producing overly detailed guidelines that adopt a merger-skeptical orientation.

The RFI Reflects the False Premise that Competition is Declining in the United States

The FTC press release that accompanied the RFI’s release made clear that a supposed weakening of competition under the current merger-guidelines regime is a key driver of the FTC and DOJ interest in new guidelines:

Today, the Federal Trade Commission (FTC) and the Justice Department’s Antitrust Division launched a joint public inquiry aimed at strengthening enforcement against illegal mergers. Recent evidence indicates that many industries across the economy are becoming more concentrated and less competitive – imperiling choice and economic gains for consumers, workers, entrepreneurs, and small businesses.

This premise is not supported by the facts. Based on a detailed literature review, Chapter 6 of the 2020 Economic Report of the President concluded that “the argument that the U.S. economy is suffering from insufficient competition is built on a weak empirical foundation and questionable assumptions.” More specifically, the 2020 Economic Report explained:

Research purporting to document a pattern of increasing concentration and increasing markups uses data on segments of the economy that are far too broad to offer any insights about competition, either in specific markets or in the economy at large. Where data do accurately identify issues of concentration or supercompetitive profits, additional analysis is needed to distinguish between alternative explanations, rather than equating these market indicators with harmful market power.

Soon-to-be-published quantitative research by Robert Kulick of NERA Economic Consulting and the American Enterprise Institute, presented at the Jan. 26 Mercatus Antitrust Forum, is consistent with the 2020 Economic Report’s findings. Kulick stressed that there was no general trend toward increasing industrial concentration in the U.S. economy from 2002 to 2017. In particular, industrial concentration has been declining since 2007; the Herfindahl–Hirschman index (HHI) for manufacturing has declined significantly since 2002; and the economywide four-firm concentration ratio (CR4) in 2017 was approximately the same as in 2002.
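For reference, the two concentration measures cited above are simple functions of firm market shares. A minimal sketch of how each is computed (using hypothetical shares, not Kulick’s data):

```python
def hhi(shares):
    """Herfindahl-Hirschman index: the sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares)

def cr4(shares):
    """Four-firm concentration ratio: the combined share of the four largest firms."""
    return sum(sorted(shares, reverse=True)[:4])

# Hypothetical market shares (percent) for a five-firm industry
shares = [30, 25, 20, 15, 10]
print(hhi(shares))  # 2250
print(cr4(shares))  # 90
```

Under the 2010 Horizontal Merger Guidelines, a market with an HHI between 1,500 and 2,500 (like this hypothetical one) is deemed “moderately concentrated,” and one above 2,500 “highly concentrated”; a post-merger HHI increase of more than 200 points in a highly concentrated market triggers a presumption of enhanced market power.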

Even in industries where concentration may have risen, “the evidence does not support claims that concentration is persistent or harmful.” In that regard, Kulick’s research finds that higher-concentration industries tend to become less concentrated, while lower-concentration industries tend to become more concentrated over time; increases in industrial concentration are associated with economic growth and job creation, particularly for high-growth industries; and rising industrial concentration may be driven by increasing market competition.

In short, the strongest justification for issuing new merger guidelines rests on a false premise: an alleged decline in competition within the United States. Given this reality, the adoption of revised guidelines designed to “ratchet up” merger enforcement would appear highly questionable.

The RFI Strikes a Merger-Skeptical Tone Out of Touch with Modern Mainstream Antitrust Scholarship

The overall tone of the RFI reflects a skeptical view of the potential benefits of mergers. It ignores overarching beneficial aspects of mergers, which include reallocating scarce resources to higher-valued uses (through the market for corporate control) and realizing standard efficiencies of various sorts (including cost-based efficiencies and incentive effects, such as the elimination of double marginalization through vertical integration). Mergers also generate benefits by bringing together complementary assets and by generating synergies of various sorts, including the promotion of innovation and scaling up the fruits of research and development. (See here, for example.)

What’s more, as the Organisation for Economic Co-operation and Development (OECD) has explained, “[e]vidence suggests that vertical mergers are generally pro-competitive, as they are driven by efficiency-enhancing motives such as improving vertical co-ordination and realizing economies of scope.”

Given the manifold benefits of mergers in general, the negative and merger-skeptical tone of the RFI is regrettable. It not only ignores sound economics, but it is at odds with recent pronouncements by the FTC and DOJ. Notably, the 2010 DOJ-FTC Horizontal Merger Guidelines (issued by Obama administration enforcers) struck a neutral tone. Those guidelines recognized the duty to challenge anticompetitive mergers while noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (“[t]he Agencies seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral”). The same neutral approach is found in the 2020 DOJ-FTC Vertical Merger Guidelines (“the Agencies use a consistent set of facts and assumptions to evaluate both the potential competitive harm from a vertical merger and the potential benefits to competition”).

The RFI, however, expresses no concern about unnecessary government interference, and strongly emphasizes the potential shortcomings of the existing guidelines in questioning whether they “adequately equip enforcers to identify and proscribe unlawful, anticompetitive mergers.” Merger-skepticism is also reflected throughout the RFI’s 91 sets of questions. A close reading reveals that they are generally phrased in ways that implicitly assume competitive problems or reject potential merger justifications.

For example, the questions addressing efficiencies, under RFI heading 14, cast efficiencies in a generally negative light. Thus, the RFI asks whether “the [existing] guidelines’ approach to efficiencies [is] consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts,” citing the statement in FTC v. Procter & Gamble (1967) that “[p]ossible economies cannot be used as a defense to illegality.”

The view that antitrust disfavors mergers that enhance efficiencies (the “efficiencies offense”) has been roundly rejected by mainstream antitrust scholarship (see, for example, here, here, and here). It may be assumed that today’s Supreme Court (which has deemed consumer welfare to be the lodestone of antitrust enforcement since Reiter v. Sonotone (1979)) would give short shrift to an “efficiencies offense” justification for a merger challenge.

Another efficiencies-related question, under RFI heading 14.d, may in application fly in the face of sound market-oriented economics: “Where a merger is expected to generate cost savings via the elimination of ‘excess’ or ‘redundant’ capacity or workers, should the guidelines treat these savings as cognizable ‘efficiencies’?”

Consider a merger that generates synergies and thereby expands and/or raises the quality of goods and services produced with reduced capacity and fewer workers. This merger would allow these resources to be allocated to higher-valued uses elsewhere in the economy, yielding greater economic surplus for consumers and producers. But there is the risk that such a merger could be viewed unfavorably under new merger guidelines that were revised in light of this question. (Although heading 14.d includes a separate question regarding capacity reductions that have the potential to reduce supply resilience or product or service quality, it is not stated that this caveat limits the question quoted above.)

The RFI’s discussion of topics other than efficiencies similarly sends the message that existing guidelines are too “pro-merger.” Thus, for example, under RFI heading 5 (“presumptions”), one finds the rhetorical question: “[d]o the [existing] guidelines adequately identify mergers that are presumptively unlawful under controlling case law?”

This question answers itself by citing to the Philadelphia National Bank (1963) statement that “[w]ithout attempting to specify the smallest market share which would still be considered to threaten undue concentration, we are clear that 30% presents that threat.” This statement predates all of the merger guidelines and is out of step with the modern economic analysis of mergers, which the existing guidelines embody. It would, if taken seriously, threaten a huge number of proposed mergers that, until now, have not been subject to second-request review by the DOJ and FTC. As Judge Douglas Ginsburg and former Commissioner Joshua Wright have explained:

The practical effect of the PNB presumption is to shift the burden of proof from the plaintiff, where it rightfully resides, to the defendant, without requiring evidence – other than market shares – that the proposed merger is likely to harm competition. . . . The presumption ought to go the way of the agencies’ policy decision to drop reliance upon the discredited antitrust theories approved by the courts in such cases as Brown Shoe, Von’s Grocery, and Utah Pie. Otherwise, the agencies will ultimately have to deal with the tension between taking advantage of a favorable presumption in litigation and exerting a reformative influence on the direction of merger law.

By inviting support for PNB-style thinking, RFI heading 5’s lead question effectively rejects the economic effects-based analysis that has been central to agency merger analysis for decades. Guideline revisions that downplay effects in favor of mere concentration would likely be viewed askance by reviewing courts (and almost certainly would be rejected by the Supreme Court, as currently constituted, if the occasion arose).

These particularly striking examples are illustrative of the questioning tone regarding existing merger analysis that permeates the RFI.

New Merger Guidelines, if Issued, Should Not Incorporate the Multiplicity of Issues Embodied in the RFI

The 91 sets of questions in the RFI read, in large part, like a compendium of theoretical harms to the working of markets that might be associated with mergers. While these questions may be of general academic interest, and may shed some light on particular merger investigations, most of them should not be incorporated into guidelines.

As Justice Stephen Breyer has pointed out, antitrust is a legal regime that must account for administrative practicalities. Then-Judge Breyer described the nature of the problem in his 1983 Barry Wright opinion (affirming the dismissal of a Sherman Act Section 2 complaint based on “unreasonably low” prices):

[W]hile technical economic discussion helps to inform the antitrust laws, those laws cannot precisely replicate the economists’ (sometimes conflicting) views. For, unlike economics, law is an administrative system the effects of which depend upon the content of rules and precedents only as they are applied by judges and juries in courts and by lawyers advising their clients. Rules that seek to embody every economic complexity and qualification may well, through the vagaries of administration, prove counter-productive, undercutting the very economic ends they seek to serve.

It follows that any effort to include every theoretical merger-related concern in new merger guidelines would undercut their (presumed) overarching purpose, which is providing useful guidance to the private sector. All-inclusive “guidelines” in reality provide no guidance at all. Faced with a laundry list of possible problems that might prompt the FTC or DOJ to oppose a merger, private parties would face enormous uncertainty, which could deter them from proposing a large number of procompetitive, welfare-enhancing or welfare-neutral consolidations. This would “undercut the very economic ends” of promoting competition that are served by Section 7 enforcement.

Furthermore, all-inclusive merger guidelines could be seen by judges as undermining the rule of law (see here, for example). If DOJ and FTC were able to “pick and choose” at will from an enormously wide array of considerations to justify opposing a proposed merger, they could be seen as engaged in arbitrary enforcement, rather than in a careful weighing of evidence aimed at condemning only anticompetitive transactions. This would be at odds with the promise of fair and dispassionate enforcement found in the 2010 Horizontal Merger Guidelines, namely, to “seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral.”

Up until now, federal courts have virtually always implicitly deferred to (and not questioned) the application of merger-guideline principles by the DOJ and FTC. The agencies have won or lost cases based on courts’ weighing of particular factual and economic evidence, not on whether guideline principles should have been applied by the enforcers.

One would expect courts to react very differently, however, to cases brought in light of ridiculously detailed “guidelines” that did not provide true guidance (particularly if they were heavy on competitive harm possibilities and discounted efficiencies). The agencies’ selective reliance on particular anticompetitive theories could be seen as exercises in arbitrary “pre-cooked” condemnations, not dispassionate enforcement. As such, the courts would tend to be far more inclined to reject (or accord far less deference to) the new guidelines in evaluating agency merger challenges. Even transactions that would have been particularly compelling candidates for condemnation under prior guidelines could be harder to challenge successfully, due to the taint of the new guidelines.

In short, the adoption of highly detailed guidelines that emphasize numerous theories of harm would likely undermine the effectiveness of DOJ and FTC merger enforcement, the precise opposite of what the agencies would have intended.

New Merger Guidelines, if Issued, Should Avoid Relying on Outdated Case Law and Novel Section 7 Theories, and Should Give Due Credit to Economic Efficiencies

The DOJ and FTC could, of course, acknowledge the problem of administrability and issue more straightforward guideline revisions, of comparable length and detail to prior guidelines. If they choose to do so, they would be well-advised to eschew relying on dated precedents and novel Section 7 theories. They should also give due credit to efficiencies. Seemingly biased guidelines would undermine merger enforcement, not strengthen it.

As discussed above, the RFI’s implicitly favorable references to Philadelphia National Bank and Procter & Gamble are at odds with contemporary economics-based antitrust thinking, which has been accepted by the federal courts. The favorable treatment of those antediluvian holdings, and Brown Shoe Co. v. United States (1962) (another horribly dated case cited multiple times in the RFI), would do much to discredit new guidelines.

In that regard, the suggestion in RFI heading 1 that existing merger guidelines may not “faithfully track the statutory text, legislative history, and established case law around merger enforcement” touts the Brown Shoe and PNB concerns with a “trend toward concentration” and “the danger of subverting congressional intent by permitting a too-broad economic investigation.”

New guidelines that focus on (or even give lip service to) a “trend” toward concentration and eschew overly detailed economic analyses (as opposed, perhaps, to purely concentration-based negative rules of thumb?) would predictably come in for judicial scorn as economically unfounded. Such references would do as much (if not more) to ensure judicial rejection of enforcement-agency guidelines as endless lists of theoretically possible sources of competitive harm, discussed previously.

Of particular concern are those references that implicitly reject the need to consider efficiencies, which is key to modern enlightened merger evaluations. It is ludicrous to believe that a majority of the current Supreme Court would have a merger-analysis epiphany and decide that the RFI’s preferred interventionist reading of Section 7 statutory language and legislative history trumps decades of economically centered consumer-welfare scholarship and agency guidelines.

Herbert Hovenkamp, author of the leading American antitrust treatise and a scholar who has been cited countless times by the Supreme Court, recently put it well (in an article coauthored with Carl Shapiro):

When the FTC investigates vertical and horizontal mergers will it now take the position that efficiencies are irrelevant, even if they are proven? If so, the FTC will face embarrassing losses in court.

Reviewing courts would no doubt take heed of this statement in assessing any future merger guidelines that rely on dated and discredited cases or that minimize efficiencies.

New Guidelines, if Issued, Should Give Due Credit to Efficiencies

Heading 14 of the RFI—listing seven sets of questions that deal with efficiencies—is in line with the document’s implicitly negative portrayal of mergers. The heading begins inauspiciously, with a question that cites Procter & Gamble in suggesting that the current guidelines’ approach to efficiencies is “[in]consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts.” As explained above, such an anti-efficiencies reference would be viewed askance by most, if not all, reviewing judges.

Other queries in heading 14 also view efficiencies as problematic. They suggest that efficiency claims should be treated negatively because such claims are not always realized after the fact. But merger activity is a private-sector search process, and the inability to predict ex post effects with perfect accuracy is an inevitable part of market activity. Using such a natural feature of markets as an excuse to ignore efficiencies would prevent many economically desirable consolidations from being achieved.

Furthermore, the suggestion under heading 14 that parties should have to show with certainty that cognizable efficiencies could not have been achieved through alternative means asks the impossible. Theoreticians may be able to dream up alternative means by which efficiencies might have been achieved (say, through convoluted contracts), but such constructs may not be practical in real-world settings. Requiring businesses to follow dubious theoretical approaches to achieve legitimate business ends, rather than allowing them to enter into arrangements they favor that appear efficient, would manifest inappropriate government interference in markets. (It would be just another example of the “pretense of knowledge” that Friedrich Hayek brilliantly described in his 1974 Nobel Prize lecture.)

Other questions under heading 14 raise concerns about the lack of discussion of possible “inefficiencies” in current guidelines, and speculate about possible losses of “product or service quality” due to otherwise efficient reductions in physical capacity and employment. Such theoretical musings offer little guidance to the private sector, and further cast in a negative light potential real resource savings.

Rather than incorporate the unhelpful theoretical efficiencies critiques under heading 14, the agencies should consider a more constructive approach to clarifying the evaluation of efficiencies in new guidelines. Such a clarification could draw on Commissioner Christine Wilson’s discussion of merger efficiencies in recent writings (see, for example, here and here). Wilson has appropriately called for the symmetric treatment of both the potential harms and benefits arising from mergers, explaining that “the agencies readily credit harms but consistently approach potential benefits with extreme skepticism.”

She and Joshua Wright have also explained (see here, here, and here) that overly narrow product-market definitions may sometimes preclude consideration of substantial “out-of-market” efficiencies that arise from certain mergers. The consideration of offsetting “out-of-market” efficiencies that greatly outweigh competitive harms might warrant inclusion in new guidelines.

The FTC and DOJ could be heading for a merger-enforcement train wreck if they adopt new guidelines that incorporate the merger-skeptical tone and excruciating level of detail found in the RFI. This approach would yield a lengthy and uninformative laundry list of potential competitive problems that would allow the agencies to selectively pick competitive harm “stories” best adapted to oppose particular mergers, in tension with the rule of law.

Far from “strengthening” merger enforcement, such new guidelines would lead to economically harmful business uncertainty and would severely undermine judicial respect for the federal merger-enforcement process. The end result would be a “lose-lose” for businesses, for enforcers, and for the American economy.

Conclusion

If the agencies enact new guidelines, they should be relatively short and straightforward, designed to give private parties the clearest possible picture of general agency enforcement intentions. In particular, new guidelines should:

  1. Eschew references to dated and discredited case law;
  2. Adopt a neutral tone that acknowledges the beneficial aspects of mergers;
  3. Recognize the duty to challenge anticompetitive mergers, while at the same time noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (consistent with the 2010 Horizontal Merger Guidelines); and
  4. Acknowledge the importance of efficiencies, treating them symmetrically with competitive harm and according appropriate weight to countervailing out-of-market efficiencies (a distinct improvement over existing enforcement policy).

Merger enforcement should continue to be based on fact-based case-specific evaluations, informed by sound economics. Populist nostrums that treat mergers with suspicion and that ignore their beneficial aspects should be rejected. Such ideas are at odds with current scholarly thinking and judicial analysis, and should be relegated to the scrap heap of outmoded and bad public policies.

Intermediaries may not be the consumer welfare hero we want, but more often than not, they are one that we need.

In policy discussions about the digital economy, a background assumption that frequently underlies the discourse is that intermediaries and centralization always and only serve as a cost to consumers, and to society more generally. Thus, one commonly sees arguments that consumers would be better off if they could freely combine products from different trading partners. According to this logic, bundled goods, walled gardens, and other intermediaries are always to be regarded with suspicion, while interoperability, open source, and decentralization are laudable features of any market.

However, as with all economic goods, intermediation offers both costs and benefits. The challenge for market players is to assess these tradeoffs and, ultimately, to produce the optimal level of intermediation.

As one example, some observers assume that purchasing food directly from a producer benefits consumers because intermediaries no longer take a cut of the final purchase price. But this overlooks the tremendous efficiencies supermarkets can achieve in terms of cost savings, reduced carbon emissions (because consumers make fewer store trips), and other benefits that often outweigh the costs of intermediation.

The same anti-intermediary fallacy is plain to see in countless other markets. For instance, critics readily assume that insurance, mortgage, and travel brokers are just costly middlemen.

This unduly negative perception is perhaps even more salient in the digital world. Policymakers are quick to conclude that consumers are always better off when provided with “more choice.” Draft regulations of digital platforms have been introduced on both sides of the Atlantic that repeat this faulty argument ad nauseam, as do some antitrust decisions.

Even the venerable Tyler Cowen recently appeared to sing the praises of decentralization, when discussing the future of Web 3.0:

One person may think “I like the DeFi options at Uniswap,” while another may say, “I am going to use the prediction markets over at Hedgehog.” In this scenario there is relatively little intermediation and heavy competition for consumer attention. Thus most of the gains from competition accrue to the users. …

… I don’t know if people are up to all this work (or is it fun?). But in my view this is the best-case scenario — and the most technologically ambitious. Interestingly, crypto’s radical ability to disintermediate, if extended to its logical conclusion, could bring about a radical equalization of power that would lower the prices and values of the currently well-established crypto assets, companies and platforms.

While disintermediation certainly has its benefits, critics often gloss over its costs. For example, scams are practically nonexistent on Apple’s “centralized” App Store but are far more prevalent with Web3 services. Apple’s “power” to weed out nefarious actors certainly contributes to this difference. Similarly, there is a reason that “middlemen” like supermarkets and travel agents exist in the first place. They notably perform several complex tasks (e.g., searching for products, negotiating prices, and controlling quality) that leave consumers with a manageable selection of goods.

Returning to the crypto example, besides being a renowned scholar, Tyler Cowen is also an extremely savvy investor. What he sees as fun investment choices may be nightmarish (and potentially dangerous) decisions for less sophisticated consumers. The upshot is that intermediaries are far more valuable than they are usually given credit for.

Bringing People Together

The reason intermediaries (including online platforms) exist is to reduce transaction costs that suppliers and customers would face if they tried to do business directly. As Daniel F. Spulber argues convincingly:

Markets have two main modes of organization: decentralized and centralized. In a decentralized market, buyers and sellers match with each other and determine transaction prices. In a centralized market, firms act as intermediaries between buyers and sellers.

[W]hen there are many buyers and sellers, there can be substantial transaction costs associated with communication, search, bargaining, and contracting. Such transaction costs can make it more difficult to achieve cross-market coordination through direct communication. Intermediary firms have various means of reducing transaction costs of decentralized coordination when there are many buyers and sellers.
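Spulber’s point about decentralized transaction costs is, at bottom, simple arithmetic: with many buyers and sellers, the number of potential bilateral relationships grows multiplicatively, while routing trade through a single intermediary keeps it additive. A minimal sketch of this counting argument (the function names are my own and purely illustrative):

```python
# Toy illustration of the transaction-cost arithmetic behind intermediation
# (my own sketch, not from the quoted text): with n buyers and m sellers,
# decentralized trade can require up to n * m bilateral relationships, each
# carrying its own search, bargaining, and contracting costs, while trade
# routed through a single intermediary requires only n + m relationships.

def decentralized_links(buyers: int, sellers: int) -> int:
    """Potential bilateral links if every buyer may deal with every seller."""
    return buyers * sellers

def intermediated_links(buyers: int, sellers: int) -> int:
    """Links needed when all trade passes through one intermediary (hub-and-spoke)."""
    return buyers + sellers

for n in (10, 100, 1000):
    print(f"{n} buyers, {n} sellers: "
          f"direct={decentralized_links(n, n):,} "
          f"via intermediary={intermediated_links(n, n):,}")
```

With 1,000 buyers and 1,000 sellers, direct matching implies up to a million potential relationships to search, bargain, and contract over, versus 2,000 through an intermediary, which is precisely the kind of coordination saving Spulber describes.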

This echoes the findings of Nobel laureate Ronald Coase, who observed that firms emerge when they offer a cheaper alternative to multiple bilateral transactions:

The main reason why it is profitable to establish a firm would seem to be that there is a cost of using the price mechanism. The most obvious cost of “organising” production through the price mechanism is that of discovering what the relevant prices are. […] The costs of negotiating and concluding a separate contract for each exchange transaction which takes place on a market must also be taken into account.

Economists generally agree that online platforms also serve this cost-reduction function. For instance, David Evans and Richard Schmalensee observe that:

Multi-sided platforms create value by bringing two or more different types of economic agents together and facilitating interactions between them that make all agents better off.

It’s easy to see the implications for today’s competition-policy debates, and for the online intermediaries that many critics would like to see decentralized. Particularly salient examples include app store platforms (such as the Apple App Store and the Google Play Store); online retail platforms (such as Amazon Marketplace); and online travel agents (like Booking.com and Expedia). Competition policymakers have embarked on countless ventures to “open up” these platforms to competition, essentially moving them further toward disintermediation. In most of these cases, however, policymakers appear to be fighting these businesses’ very raison d’être.

For example, the purpose of an app store is to curate the software that users can install and to offer payment solutions; in exchange, the store receives a cut of the proceeds. If performing these tasks created no value, then to a first approximation, these services would not exist. Users would simply download apps via their web browsers, and the most successful smartphones would be those that allowed users to directly install apps (“sideloading,” to use the more technical term). Forcing these platforms to “open up” and become neutral is antithetical to the value proposition they offer.

Calls for retail and travel platforms to stop offering house brands or displaying certain products more favorably are equally paradoxical. Consumers turn to these platforms because they want a selection of goods. If that were not the case, users could simply bypass the platforms and purchase directly from independent retailers or hotels. Critics sometimes retort that some commercial arrangements, such as “most favored nation” clauses, discourage consumers from doing exactly this. But that claim only reinforces the point that online platforms must create significant value, or they would not be able to obtain such arrangements in the first place.

All of this explains why characterizing these firms as imposing a “tax” on their respective ecosystems is so deeply misleading. The implication is that platforms are merely passive rent extractors that create no value. Yet, barring market failures, their very existence and success are proof to the contrary. To argue otherwise places no faith in the ability of firms and consumers to act in their own self-interest.

A Little Evolution

This last point is even more salient when seen from an evolutionary standpoint. Today’s most successful intermediaries—be they online platforms or more traditional brick-and-mortar firms like supermarkets—mostly had to outcompete the alternative represented by disintermediated bilateral contracts.

Critics of intermediaries rarely contemplate why the app-store model outpaced the more heavily disintermediated software distribution of the desktop era. Or why hotel-booking sites exist, despite consumers’ ability to use search engines, hotel websites, and other product-search methods that offer unadulterated product selections. Or why mortgage brokers are so common when borrowers can call local banks directly. The list is endless.

Indeed, as I have argued previously:

Digital markets could have taken a vast number of shapes, so why have they systematically gravitated towards those very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones? Indeed, if recent commentary is to be believed, it is the latter that should succeed because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see [other] intermediaries step into the breach – i.e. arbitrage. This does not seem to be happening in the digital economy. The naïve answer is to say that this is precisely the problem, the harder one is to actually understand why.

Fiat Versus Emergent Disintermediation

All of this is not to say that intermediaries are perfect, or that centralization always beats decentralization. Instead, the critical point is about the competitive process. There are vast differences between centralization that stems from government fiat and that which emerges organically.

(Dis)intermediation is an economic good. Markets thus play a critical role in deciding how much or little of it is provided. Intermediaries must charge fees that cover their costs, while bilateral contracts entail transaction costs. In typically Hayekian fashion, suppliers and buyers will weigh the costs and benefits of these options.

Intermediaries are most likely to emerge in markets prone to excessive transaction costs, and competitive processes ensure that only valuable intermediaries survive. Accordingly, there is no guarantee that government-mandated disintermediation would generate net benefits in any given case.

Of course, the market does not always work perfectly. Sometimes, market failures give rise to excessive (or insufficient) centralization. And policymakers should certainly be attentive to these potential problems and address them on a case-by-case basis. But there is little reason to believe that today’s most successful intermediaries are the result of market failures, and it is thus critical that policymakers do not undermine the valuable role they perform.

For example, few believe that supermarkets exist merely because government failures (such as excessive regulation) or market failures (such as monopolization) prevent the emergence of smaller rivals. Likewise, the app-store model is widely perceived as an improvement over previous software platforms; few consumers appear favorably disposed toward its replacement with sideloading of apps (for example, few Android users choose to sideload apps rather than purchase them via the Google Play Store). In fact, markets appear to be moving in the opposite direction: even traditional software platforms such as Windows OS increasingly rely on closed stores to distribute software on their platforms.

More broadly, this same reasoning can (and has) been applied to other social institutions, such as the modern family. For example, the late Steven Horwitz observed that family structures have evolved in order to adapt to changing economic circumstances. Crucially, this process is driven by the same cost-benefit tradeoff that we see in markets. In both cases, agents effectively decide which functions are better performed within a given social structure, and which ones are more efficiently completed outside of it.

Returning to Tyler Cowen’s point about the future of Web3, the case can be made that whatever level of centralization ultimately emerges is most likely the best-case scenario. Sure, there may be some market failures and suboptimal outcomes along the way, but they ultimately pale in comparison to the most pervasive force: namely, economic agents’ ability to act in what they perceive to be their best interest. To put it differently, if Web3 spontaneously becomes as centralized as Web 2.0 has been, that would be a testament to the tremendous role that intermediaries play throughout the economy.

Even as delivery services work to ship all of those last-minute Christmas presents that consumers bought this season from digital platforms and other e-commerce sites, the U.S. House and Senate are contemplating Grinch-like legislation that looks to stop or limit how Big Tech companies can “self-preference” or “discriminate” on their platforms.

A platform “self-preferences” when it blends various services into the delivery of a given product in ways that third parties couldn’t do themselves. For example, Google self-preferences when it puts a Google Shopping box at the top of a Search page for Adidas sneakers. Amazon self-preferences when it offers its own AmazonBasics USB cables alongside those offered by Apple or Anker. Costco’s placement of its own Kirkland brand of paper towels on store shelves can also be a form of self-preferencing.

Such purportedly “discriminatory” behavior constitutes much of what platforms are designed to do. Virtually every platform that offers a suite of products and services will combine them in ways that users find helpful, even if competitors find it infuriating. It surely doesn’t help Yelp if Google Search users can see a Maps results box next to a search for showtimes at a local cinema. It doesn’t help other manufacturers of charging cables if Amazon sells a cheaper version under a brand that consumers trust. But do consumers really care about Yelp or Apple’s revenues, when all they want are relevant search results and less expensive products?

Until now, competition authorities have judged this type of conduct under the consumer welfare standard: does it hurt consumers in the long run, or does it help them? This test does seek to evaluate whether the conduct deprives consumers of choice by foreclosing rivals, which could ultimately allow the platform to exploit its customers. But it doesn’t treat harm to competitors—in the form of reduced traffic and profits for Yelp, for example—as a problem in and of itself.

“Non-discrimination” bills introduced this year in both the House and Senate aim to change that, but they would do so in ways that differ in important respects.

The House bill would impose a blanket ban on virtually all “discrimination” by platforms. This means that even such benign behavior as Facebook linking to Facebook Marketplace on its homepage would become presumptively unlawful. The measure would, as I’ve written before, break a lot of the Internet as we know it, but it has the virtue of being explicit and clear about its effects.

The Senate bill is, in this sense, a lot more circumspect. Instead of a blanket ban, it would prohibit what the bill refers to as “unfair” discrimination that “materially harm[s] competition on the covered platform,” with a carve-out exception for discrimination that was “necessary” to maintain or enhance the “core functionality” of the platform. In theory, this would avoid a lot of the really crazy effects of the House bill. Apple likely still could, for example, pre-install a Camera app on the iPhone.

But this greater degree of reasonableness comes at the price of ambiguity. The bill does not define “unfair discrimination,” nor what it would mean for something to be “necessary” to improve the core functionality of a platform. Faced with this ambiguity, companies would be wise to be overly cautious, given the steep penalties they would face for conduct found to be “unfair”: 15% of total U.S. revenues earned during the period when the conduct was ongoing. That’s a lot of money to risk over a single feature!

Also unlike the House legislation, the Senate bill would not create a private right of action, thereby limiting litigation to enforce the bill’s terms to actions brought by the Federal Trade Commission (FTC), U.S. Justice Department (DOJ), or state attorneys general.

Put together, these features create the perfect recipe for extensive discretionary power held by a handful of agencies. With such vague criteria and such massive penalties for lawbreaking, the mere threat of a lawsuit could force a company to change its behavior. The rules are so murky that companies might even be threatened with a lawsuit over conduct in one area in order to make them change their behavior in another.

It’s hardly unprecedented for powers like this to be misused. During the Obama administration, the Internal Revenue Service (IRS) was alleged to have targeted conservative groups for investigation, for which the agency eventually had to apologize (and settle a lawsuit brought by some of the targeted groups). More than a decade ago, the Bank Secrecy Act was used to uncover then-New York Attorney General Eliot Spitzer’s involvement in an international prostitution ring. Back in 2008, the British government used anti-terrorism powers to seize the assets of some Icelandic banks that had become insolvent and couldn’t repay their British depositors. To this day, municipal governments in Britain use anti-terrorism powers to investigate things like illegal waste dumping and people who wrongly park in spots reserved for the disabled.

The FTC itself has a history of abusing its authority. As Commissioners Noah Phillips and Christine Wilson remind us, the commission was nearly shut down in the 1970s after trying to use its powers to “protect” children from seeing ads for sugary foods, interpreting its consumer-protection mandate so broadly that it considered tooth decay as falling within its scope.

As I’ve written before, both Chair Lina Khan and Commissioner Rebecca Kelly Slaughter appear to believe that the FTC ought to take a broad vision of its goals. Slaughter has argued that antitrust ought to be “antiracist.” Khan believes that “the dispersion of political and economic control” is the proper goal of antitrust, not consumer welfare or some other economic goal.

Khan in particular does not appear especially bound by the usual norms that might constrain this sort of regulatory overreach. In recent weeks, she has pushed through contentious decisions by relying on more than 20 “zombie votes” cast by former Commissioner Rohit Chopra on the final day before he left the agency. While it has been FTC policy since 1984 to count votes cast by departed commissioners unless they are superseded by their successors, Khan’s FTC has invoked this relatively obscure rule to swing more decisions than every single predecessor combined.

Thus, while the Senate bill may avoid immediately breaking large portions of the Internet in ways the House bill would, it would instead place massive discretionary powers into the hands of authorities who have expansive views about the goals those powers ought to be used to pursue.

This ought to be concerning to anyone who disapproves of public policy being made by unelected bureaucrats, rather than the people’s chosen representatives. If Republicans find an empowered Khan-led FTC worrying today, surely Democrats ought to feel the same about an FTC run by Trump-style appointees in a few years. Both sides may come to regret creating an agency with so much unchecked power.

On both sides of the Atlantic, 2021 has seen legislative and regulatory proposals to mandate that various digital services be made interoperable with others. Several bills to do so have been proposed in Congress; the EU’s proposed Digital Markets Act would mandate interoperability in certain contexts for “gatekeeper” platforms; and the UK’s competition regulator will be given powers to require interoperability as part of a suite of “pro-competitive interventions” that are hoped to increase competition in digital markets.

The European Commission plans to require Apple to use USB-C charging ports on iPhones to allow interoperability among different chargers (to save, the Commission estimates, two grams of waste per European per year). Demands for various forms of interoperability have been at the center of at least two major lawsuits: Epic’s case against Apple and a separate suit against Apple by the maker of an app called Coronavirus Reporter. In July, a group of pro-intervention academics published a white paper calling interoperability “the ‘Super Tool’ of Digital Platform Governance.”

What is meant by the term “interoperability” varies widely. It can refer to relatively narrow interventions in which user data from one service is made directly portable to other services, rather than the user having to download and later re-upload it. At the other end of the spectrum, it could mean regulations requiring that virtually any vertical integration be unwound. (Should a Tesla’s engine be “interoperable” with the chassis of a Land Rover?) In between are various proposals for specific applications of interoperability: one company’s product working with another company’s.

Why Isn’t Everything Interoperable?

The world is filled with examples of interoperability that arose through the (often voluntary) adoption of standards. Credit card companies oversee massive interoperable payments networks; screwdrivers are interoperable with screws made by other manufacturers, although different standards exist; many U.S. colleges accept credits earned at other accredited institutions. The containerization revolution in shipping is an example of interoperability leading to enormous efficiency gains, with a government subsidy to encourage the adoption of a single standard.

And interoperability can emerge over time. Microsoft Word used to be maddeningly non-interoperable with other word processors. Once OpenOffice entered the market, Microsoft patched its product to support OpenOffice files; Word documents now work slightly better with products like Google Docs, as well.

But there are also lots of things that could be interoperable but aren’t, like the Tesla motors that can’t easily be removed and added to other vehicles. The charging cases for Apple’s AirPods and Sony’s wireless earbuds could, in principle, be shaped to be interoperable. Medical records could, in principle, be standardized and made interoperable among healthcare providers, and it’s easy to imagine some of the benefits that could come from being able to plug your medical history into apps like MyFitnessPal and Apple Health. Keurig pods could, in principle, be interoperable with Nespresso machines. Your front door keys could, in principle, be made interoperable with my front door lock.

The reason not everything is interoperable is that interoperability comes with costs as well as benefits. It may be worth letting different earbuds have different designs because, while we sacrifice easy interoperability, we gain the ability for better designs to be brought to market and for consumers to choose among different kinds. We may find that, while digital health records are wonderful in theory, the compliance costs of a standardized format might outweigh those benefits.

Manufacturers may choose to sell an expensive device with a relatively cheap upfront price tag, relying on consumer “lock in” for a stream of supplies and updates to finance the “full” price over time, provided the consumer likes it enough to keep using it.

Interoperability can remove a layer of security. I don’t want my bank account to be interoperable with any payments app, because it increases the risk of getting scammed. What I like about my front door lock is precisely that it isn’t interoperable with anyone else’s key. Lots of people complain about popular Twitter accounts being obnoxious, rabble-rousing, and stupid; it’s not difficult to imagine the benefits of a new, similar service that wanted everyone to start from the same level and so did not allow users to carry their old Twitter following with them.

There may thus be particular costs that prevent interoperability from being worth the trade-off, such as that:

  1. It might be too costly to implement and/or maintain.
  2. It might prescribe a certain product design and prevent experimentation and innovation.
  3. It might add too much complexity and/or confusion for users, who may prefer not to have certain choices.
  4. It might increase the risk of something not working, or of security breaches.
  5. It might prevent certain pricing models that increase output.
  6. It might compromise some element of the product or service that benefits specifically from not being interoperable.

In a market that is functioning reasonably well, we should be able to assume that competition and consumer choice will discover the desirable degree of interoperability among different products. If there are benefits to making your product interoperable with others that outweigh the costs of doing so, that should give you an advantage over competitors and allow you to compete them away. If the costs outweigh the benefits, the opposite will happen—consumers will choose products that are not interoperable with each other.

In short, we cannot infer from the absence of interoperability that something is wrong, since we frequently observe that the costs of interoperability outweigh the benefits.

Of course, markets do not always lead to optimal outcomes. In cases where a market is “failing”—e.g., because competition is obstructed, or because there are important externalities that are not accounted for by the market’s prices—certain goods may be under-provided. In the case of interoperability, this can happen if firms struggle to coordinate upon a single standard, or because firms’ incentives to establish a standard are not aligned with the social optimum (i.e., interoperability might be optimal and fail to emerge, or vice versa).

But the analysis cannot stop there: just because a market might not be functioning well and does not currently provide some form of interoperability, we cannot assume that, if it were functioning well, it would provide interoperability.

Interoperability for Digital Platforms

Since we know that many clearly functional markets and products do not provide all forms of interoperability that we could imagine them providing, it is perfectly possible that many badly functioning markets and products would still not provide interoperability, even if they did not suffer from whatever has obstructed competition or effective coordination in that market. In these cases, imposing interoperability would destroy value.

It would therefore be a mistake to assume that more interoperability in digital markets would be better, even if you believe that those digital markets suffer from too little competition. Let’s say, for the sake of argument, that Facebook/Meta has market power that allows it to keep its subsidiary WhatsApp from being interoperable with other competing services. Even then, we still would not know if WhatsApp users would want that interoperability, given the trade-offs.

A look at smaller competitors like Telegram and Signal, which we have no reason to believe have market power, demonstrates that they also are not interoperable with other messaging services. Signal is run by a nonprofit, and thus has little incentive to obstruct users for the sake of market power. Why does it not provide interoperability? I don’t know, but I would speculate that the security risks and technical costs of doing so outweigh the expected benefit to Signal’s users. If that is true, it seems strange to assume away the potential costs of making WhatsApp interoperable, especially if those costs may relate to things like security or product design.

Interoperability and Contact-Tracing Apps

A full consideration of the trade-offs is also necessary to evaluate the lawsuit that Coronavirus Reporter filed against Apple. Coronavirus Reporter was a COVID-19 contact-tracing app that Apple rejected from the App Store in March 2020. Its makers are now suing Apple for, they say, stifling competition in the contact-tracing market. Apple’s defense is that it only allowed COVID-19 apps from “recognised entities such as government organisations, health-focused NGOs, companies deeply credentialed in health issues, and medical or educational institutions.” In effect, by barring it from the App Store, and offering no other way to install the app, Apple denied Coronavirus Reporter interoperability with the iPhone. Coronavirus Reporter argues that Apple should be punished for doing so.

No doubt, Apple’s decision did reduce competition among COVID-19 contact tracing apps. But increasing competition among COVID-19 contact-tracing apps via mandatory interoperability might have costs in other parts of the market. It might, for instance, confuse users who would like a very straightforward way to download their country’s official contact-tracing app. Or it might require access to certain data that users might not want to share, preferring to let an intermediary like Apple decide for them. Narrowing choice like this can be valuable, since it means individual users don’t have to research every single possible option every time they buy or use some product. If you don’t believe me, turn off your spam filter for a few days and see how you feel.

In this case, the potential costs of the access that Coronavirus Reporter wants are obvious: while it may have had the best contact-tracing service in the world, sorting it from other, less reliable and/or less scrupulous apps may have been difficult, and the risk to users may have outweighed the benefits. As Apple and Facebook/Meta constantly point out, the security risks involved in making their services more interoperable are not trivial.

It isn’t competition among COVID-19 apps that is important, per se. As ever, competition is a means to an end, and maximizing it in one context—via, say, mandatory interoperability—cannot be judged without knowing the trade-offs that maximization requires. Even if we thought of Apple as a monopolist over iPhone users—ignoring the fact that Apple’s iPhones obviously are substitutable with Android devices to a significant degree—it wouldn’t follow that the more interoperability, the better.

A ‘Super Tool’ for Digital Market Intervention?

The Coronavirus Reporter example may feel like an “easy” case for opponents of mandatory interoperability. Of course we don’t want anything calling itself a COVID-19 app to have totally open access to people’s iPhones! But what’s vexing about mandatory interoperability is that it’s very hard to sort the sensible applications from the silly ones, and most proposals don’t even try. The leading U.S. House proposal for mandatory interoperability, the ACCESS Act, would require that platforms “maintain a set of transparent, third-party-accessible interfaces (including application programming interfaces) to facilitate and maintain interoperability with a competing business or a potential competing business,” based on APIs designed by the Federal Trade Commission.

The only nods to the costs of this requirement are provisions that require platforms to set “reasonably necessary” security standards, and a provision allowing the removal of third-party apps that don’t “reasonably secure” user data. No other costs of mandatory interoperability are acknowledged at all.

The same goes for the even more substantive proposals for mandatory interoperability. Released in July 2021, “Equitable Interoperability: The ‘Super Tool’ of Digital Platform Governance” is co-authored by some of the most esteemed competition economists in the business. While it details obscure points about matters like how chat groups might work across interoperable chat services, it is virtually silent on any of the costs or trade-offs of its proposals. Indeed, the first “risk” the report identifies is that regulators might be too slow to impose interoperability in certain cases! It reads like interoperability has been asked what its biggest weaknesses are in a job interview.

Where the report does acknowledge trade-offs—for example, interoperability making it harder for a service to monetize its user base, who can just bypass ads on the service by using a third-party app that blocks them—it just says that the overseeing “technical committee or regulator may wish to create conduct rules” to decide.

Ditto with the objection that mandatory interoperability might limit differentiation among competitors—like, for example, how imposing the old micro-USB standard on Apple might have stopped us from getting the Lightning port. Again, they punt: “We recommend that the regulator or the technical committee consult regularly with market participants and allow the regulated interface to evolve in response to market needs.”

But if we could entrust this degree of product design to regulators, weighing the costs of a feature against its benefits, we wouldn’t need markets or competition at all. And the report just assumes away many other obvious costs: “the working hypothesis we use in this paper is that the governance issues are more of a challenge than the technical issues.” Despite its illustrious panel of co-authors, the report fails to grapple with the most basic counterargument possible: its proposals have costs as well as benefits, and it’s not straightforward to decide which is bigger than which.

Strangely, the report includes a section that “looks ahead” to “Google’s Dominance Over the Internet of Things.” This, the report says, stems from the company’s “market power in device OS’s [that] allows Google to set licensing conditions that position Google to maintain its monopoly and extract rents from these industries in future.” The report claims this inevitability can only be avoided by imposing interoperability requirements.

The authors completely ignore that a smart home interoperability standard has already been developed, backed by a group of 170 companies that include Amazon, Apple, and Google, as well as SmartThings, IKEA, and Samsung. It is open source and, in principle, should allow a Google Home speaker to work with, say, an Amazon Ring doorbell. In markets where consumers really do want interoperability, it can emerge without a regulator requiring it, even if some companies have apparent incentive not to offer it.

If You Build It, They Still Might Not Come

Much of the case for interoperability interventions rests on the presumption that the benefits will be substantial. It’s hard to know how powerful network effects really are in preventing new competitors from entering digital markets, and none of the more substantial reports cited by the “Super Tool” report really try.

In reality, the cost of switching among services or products is never zero. Simply pointing out that particular costs—such as network effect-created switching costs—happen to exist doesn’t tell us much. In practice, many users are happy to multi-home across different services. I use at least eight different messaging apps every day (Signal, WhatsApp, Twitter DMs, Slack, Discord, Instagram DMs, Google Chat, and iMessage/SMS). I don’t find it particularly costly to switch among them, and have been happy to adopt new services that seemed to offer something new. Discord has built a thriving 150-million-user business, despite these switching costs. What if people don’t actually care if their Instagram DMs are interoperable with Slack?

None of this is to argue that interoperability cannot be useful. But it is often overhyped, and it is difficult to do in practice (because of those annoying trade-offs). After nearly five years, Open Banking in the UK—cited by the “Super Tool” report as an example of what it wants for other markets—still isn’t really finished in terms of functionality. It has required an enormous amount of time and investment by all parties involved and has yet to deliver obvious benefits in terms of consumer outcomes, let alone greater competition among the current accounts that have been made interoperable with other services. (My analysis of the lessons of Open Banking for other services is here.) Phone number portability, which is also cited by the “Super Tool” report, is another example of how hard even simple interventions can be to get right.

The world is filled with cases where we could imagine some benefits from interoperability but choose not to have them, because the costs are greater still. None of this is to say that interoperability mandates can never work, but their benefits can be oversold, especially when their costs are ignored. Many of mandatory interoperability’s more enthusiastic advocates should remember that such trade-offs exist—even for policies they really, really like.

The leading contribution to sound competition policy made by former Assistant U.S. Attorney General Makan Delrahim was his enunciation of the “New Madison Approach” to patent-antitrust enforcement—and, in particular, to the antitrust treatment of standard essential patent licensing (see, for example, here, here, and here). In short (citations omitted):

The New Madison Approach (“NMA”) advanced by former Assistant Attorney General for Antitrust Makan Delrahim is a simple analytical framework for understanding the interplay between patents and antitrust law arising out of standard setting. A key aspect of the NMA is its rejection of the application of antitrust law to the “hold-up” problem, whereby patent holders demand supposedly supra-competitive licensing fees to grant access to their patents that “read on” a standard – standard essential patents (“SEPs”). This scenario is associated with an SEP holder’s prior commitment to a standard setting organization (“SSO”), that is: if its patented technology is included in a proposed new standard, it will license its patents on fair, reasonable, and non-discriminatory (“FRAND”) terms. “Hold-up” is said to arise subsequently, when the SEP holder reneges on its FRAND commitment and demands that a technology implementer pay higher-than-FRAND licensing fees to access its SEPs.

The NMA has four basic premises that are aimed at ensuring that patent holders have adequate incentives to innovate and create welfare-enhancing new technologies, and that licensees have appropriate incentives to implement those technologies:

1. Hold-up is not an antitrust problem. Accordingly, an antitrust remedy is not the correct tool to resolve patent licensing disputes between SEP-holders and implementers of a standard.

2. SSOs should not allow collective actions by standard-implementers to disfavor patent holders in setting the terms of access to patents that cover a new standard.

3. A fundamental element of patent rights is the right to exclude. As such, SSOs and courts should be hesitant to restrict SEP holders’ right to exclude implementers from access to their patents by, for example, seeking injunctions.

4. Unilateral and unconditional decisions not to license a patent should be per se legal.

Delrahim emphasizes that the threat of antitrust liability, specifically treble damages, distorts the incentives associated with good faith negotiations with SSOs over patent inclusion. Contract law, he goes on to note, is perfectly capable of providing an ex post solution to licensing disputes between SEP holders and implementers of a standard. Unlike antitrust law, a contract law framework allows all parties equal leverage in licensing negotiations.

As I have explained elsewhere, the NMA is best seen as a set of policies designed to spark dynamic economic growth:

[P]atented technology serves as a catalyst for the wealth-creating diffusion of innovation. This occurs through numerous commercialization methods; in the context of standardized technologies, the development of standards is a process of discovery. At each [SSO], the process of discussion and negotiation between engineers, businesspersons, and all other relevant stakeholders reveals the relative value of alternative technologies and tends to result in the best patents being integrated into a standard.

The NMA supports this process of discovery and implementation of the best patented technology born of the labors of the innovators who created it. As a result, the NMA ensures SEP valuations that allow SEP holders to obtain an appropriate return for the new economic surplus that results from the commercialization of standard-engendered innovations. It recognizes that dynamic economic growth is fostered through the incentivization of innovative activities backed by patents.

In sum, the NMA seeks to promote innovation by offering incentives for SEP-driven technological improvements. As such, it rejects as ill-founded prior Federal Trade Commission (FTC) litigation settlements and Obama-era U.S. Justice Department (DOJ) Antitrust Division policy statements that artificially favored implementer licensees’ interests over those of SEP licensors (see here).

In light of the NMA, DOJ cooperated with the U.S. Patent and Trademark Office and National Institute of Standards and Technology (NIST) in issuing a 2019 SEP Policy Statement clarifying that an SEP holder’s promise to license a patent on fair, reasonable, and non-discriminatory (FRAND) terms does not bar it from seeking any available remedy for patent infringement, including an injunction. This signaled that SEPs and non-SEP patents enjoy equivalent legal status.

DOJ also issued a 2020 supplement to its 2015 Institute of Electrical and Electronics Engineers (IEEE) business review letter. The 2015 letter had found no legal fault with revised IEEE standard-setting policies that implicitly favored implementers of standardized technology over SEP holders. The 2020 supplement characterized key elements of the 2015 letter as “outdated,” and noted that the anti-SEP bias of that document could “harm competition and chill innovation.”   

Furthermore, DOJ issued a July 2019 Statement of Interest before the 9th U.S. Circuit Court of Appeals in FTC v. Qualcomm, explaining that unilateral and unconditional decisions not to license a patent are legal under the antitrust laws. In August 2020, the 9th Circuit reversed a district court decision and rejected the FTC’s monopolization suit against Qualcomm. The circuit court, among other findings, held that Qualcomm had no antitrust duty to license its SEPs to competitors.

Regrettably, the Biden administration appears to be close to rejecting the NMA and reinstituting the SEP-skeptical, anti-strong-patent views of the Obama administration (see here and here). DOJ already has effectively repudiated the 2020 supplement to the 2015 IEEE letter and the 2019 SEP Policy Statement. Furthermore, written responses to Senate Judiciary Committee questions by assistant attorney general nominee Jonathan Kanter suggest support for renewed antitrust scrutiny of SEP licensing. These developments are highly problematic if one supports dynamic economic growth.

Conclusion

The NMA represents a pro-American, pro-growth innovation policy prescription. Its abandonment would reduce incentives to invest in patents and standard-setting activities, to the detriment of the U.S. economy. Such a development would be particularly unfortunate at a time when U.S. Supreme Court decisions have weakened American patent rights (see here); China is taking steps to strengthen Chinese patents and raise incentives to obtain Chinese patents (see here); and China is engaging in litigation to weaken key U.S. patents and undermine American technological leadership (see here).

The rejection of the NMA would also be in tension with the logic of the 5th U.S. Circuit Court of Appeals’ 2021 HTC v. Ericsson decision, which held that the non-discrimination portion of the FRAND commitment did not require Ericsson to give HTC the same licensing terms as those given to larger mobile-device manufacturers. Furthermore, recent important European court decisions are generally consistent with NMA principles (see here).

Given the importance of dynamic competition in an increasingly globalized world economy, Biden administration officials may wish to take a closer look at the economic arguments supporting the NMA before taking final action to condemn it. Among other things, the administration might take note that major U.S. digital platforms, which are the subject of multiple U.S. and foreign antitrust enforcement investigations, tend to firmly oppose strong patent rights. As one major innovation economist recently pointed out:

If policymakers and antitrust gurus are so concerned about stemming the rising power of Big Tech platforms, they should start by first stopping the relentless attack on IP. Without the IP system, only the big and powerful have the privilege to innovate[.]

Federal Trade Commission (FTC) Chair Lina Khan’s Sept. 22 memorandum to FTC commissioners and staff—entitled “Vision and Priorities for the FTC” (VP Memo)—offers valuable insights into the chair’s strategy and policy agenda for the commission. Unfortunately, it lacks an appreciation for the limits of antitrust and consumer-protection law; it also would have benefited from greater regulatory humility. After summarizing the VP Memo’s key sections, I set forth four key takeaways from this rather unusual missive.

Introduction

The VP Memo begins appropriately enough, with praise for commission staff and a call to focus on key FTC strategic priorities and operational objectives. So far, so good. Regrettably, the introductory section is the memo’s strongest feature.

Strategic Approach

The VP Memo’s first substantive section, which lays out Khan’s strategic approach, raises questions that require further clarification.

This section is long on glittering generalities. First, it invokes the need to take a “holistic approach” that recognizes that law violations harm workers and independent businesses, as well as consumers. Legal violations that reflect “power asymmetries” and harm to “marginalized communities” are emphasized, but not defined. Are new enforcement standards being proposed to supplement or displace consumer-welfare enhancement?

Second, similar ambiguity surrounds the need to target enforcement efforts toward “root causes” of unlawful conduct, rather than “one-off effects.” Root causes are said to involve “structural incentives that enable unlawful conduct” (such as conflicts of interest, business models, or structural dominance), as well as “upstream” examination of firms that profit from such conduct. How these observations may be “operationalized” into case-selection criteria (and why these observations are superior to alternative means for spotting illegal behavior) is left unexplained.

Third, the section endorses a more “rigorous and empiricism-driven approach” to the FTC’s work, a “more interdisciplinary approach” that incorporates “a greater range of analytical tools and skillsets.” This recommendation is not problematic on its face, though it is a bit puzzling. The FTC already relies heavily on economics and empirical work, as well as input from technologists, advertising specialists, and other subject matter experts, as required. What other skillsets are being endorsed? (A more far-reaching application of economic thinking in certain consumer-protection cases would be helpful, but one suspects that is not the point of the paragraph.)

Fourth, the need to be especially attentive to next-generation technologies, innovations, and nascent industries is trumpeted. Fine, but the FTC already does that in its competition and consumer-protection investigations.

Finally, the need to “democratize” the agency is highlighted, to keep the FTC in tune with “the real problems that Americans are facing in their daily lives and using that understanding to inform our work.” This statement seems to imply that the FTC is not adequately dealing with “real problems.” The FTC, however, has not been designated by Congress to be a general-purpose problem solver. Rather, the agency has a specific statutory remit to combat anticompetitive activity and unfair acts or practices that harm consumers. Ironically, under Chair Khan, the FTC has abruptly implemented major changes in key areas (including rulemaking, the withdrawal of guidance, and merger-review practices) without prior public input or consultation among the commissioners (see, for example, here)—actions that could be deemed undemocratic.

Policy Priorities

The memo’s brief discussion of Khan’s policy priorities raises three significant concerns.

First, Khan stresses the “need to address rampant consolidation and the dominance that it has enabled across markets” in the areas of merger enforcement and dominant-firm scrutiny. The claim that competition has substantially diminished has been critiqued by leading economists, and is dubious at best (see, for example, here). This flat assertion is jarring, and in tension with the earlier call for more empirical analysis. Khan’s call for revision of the merger guidelines (presumably both horizontal and vertical), in tandem with the U.S. Justice Department (DOJ), will be headed for trouble if it departs from the economic reasoning that has informed prior revisions of those guidelines. (The memo’s critical and cryptic reference to the “narrow and outdated framework” of recent guidelines provides no clue as to the new guidelines format that Chair Khan might deem acceptable.) 

Second, the chair supports prioritizing “dominant intermediaries” and “extractive business models,” while raising concerns about “private equity and other investment vehicles” that “strip productive capacity” and “target marginalized communities.” No explanation is given as to why such prioritization will best utilize the FTC’s scarce resources to root out harmful anticompetitive behavior and consumer-protection harms. By assuming from the outset that certain “unsavory actors” merit prioritization, this discussion also is in tension with an empirical approach that dispassionately examines the facts in determining how resources should best be allocated to maximize the benefits of enforcement.

Third, the chair wants to direct special attention to “one-sided contract provisions” that place “[c]onsumers, workers, franchisees, and other market participants … at a significant disadvantage.” Non-competes, repair restrictions, and exclusionary clauses are mentioned as examples. What is missing is a realistic acknowledgement of the legal complications involved in challenging such provisions, and a recognition of the possible welfare benefits that such restraints could generate in many circumstances. In that vein, the mere perceived inequalities in bargaining power alluded to in the discussion do not, in and of themselves, constitute antitrust or consumer-protection violations.

Operational Objectives

The closing section, on “operational objectives,” is not particularly troublesome. It supports an “integrated approach” to enforcement and policy tools, and endorses “breaking down silos” between competition (BC) and consumer-protection (BCP) staff. (Of course, while greater coordination between BC and BCP occasionally may be desirable, competition and consumer-protection cases will continue to feature significant subject matter and legal differences.) It also calls for greater diversity in recruitment and a greater staffing emphasis on regional offices. Finally, it endorses bringing in more experts from “outside disciplines” and more rigorous analysis of conduct, remedies, and market studies. These points, although not controversial, do not directly come to grips with questions of optimal resource allocation within the agency, which the FTC will have to address.

Evaluating the VP Memo: 4 Key Takeaways

The VP Memo is a highly aggressive call-to-arms that embodies Chair Khan’s full-blown progressive vision for the FTC. There are four key takeaways:

  1. Promoting the consumer interest, which for decades has been the overarching principle in both FTC antitrust and consumer-protection cases (which address different sources of consumer harm), is passé. Protecting consumers is only referred to in passing. Rather, the concerns of workers, “honest businesses,” and “marginalized communities” are emphasized. Courts will, however, continue to focus on established consumer-welfare and consumer-harm principles in ruling on antitrust and consumer-protection cases. If the FTC hopes to have any success in winning future cases based on novel forms of harm, it will have to ensure that its new case-selection criteria also emphasize behavior that harms consumers.
  2. Despite multiple references to empiricism and analytical rigor, the VP Memo ignores the potential economic-welfare benefits of the categories of behavior it singles out for condemnation. The memo’s critiques of “middlemen,” “gatekeepers,” “extractive business models,” “private equity,” and various types of vertical contracts, reference conduct that frequently promotes efficiency, generating welfare benefits for producers and consumers. Even if FTC lawsuits or regulations directed at these practices fail, the business uncertainty generated by the critiques could well disincentivize efficient forms of conduct that spark innovation and economic growth.
  3. The VP Memo in effect calls for new enforcement initiatives that challenge conduct different in nature from FTC cases brought in recent decades. This implicit support for lawsuits that would go well beyond existing judicial interpretations of the FTC’s competition and consumer-protection authority reflects unwarranted hubris. This April, in the AMG case, the U.S. Supreme Court unanimously rejected the FTC’s argument that it had implicit authority to obtain monetary relief under Section 13(b) of the FTC Act, which authorizes permanent injunctions – despite the fact that several appellate courts had found such authority existed. The Court stated that the FTC could go to Congress if it wanted broader authority. This decision bodes ill for any future FTC efforts to expand its authority into new realms of “unfair” activity through “creative” lawyering.
  4. Chair Khan’s unilateral statement of her policy priorities embodied in the VP Memo bespeaks a lack of humility. It ignores a long history of consensus FTC statements on agency priorities, reflected in numerous commission submissions to congressional committees in connection with oversight hearings. Although commissioners have disagreed on specific policy statements or enforcement complaints, general “big picture” policy statements to congressional overseers typically have been adopted by unanimous vote. The VP Memo’s departure from this longstanding bipartisan practice will tend to undermine the FTC’s image as a serious deliberative body that seeks to reconcile varying viewpoints (while recognizing that, at times, different positions will be expressed on particular matters). If the FTC acts more and more like a one-person executive agency, why does it need to be “independent,” and, indeed, what special purpose does it serve as a second voice on federal antitrust matters? Under seeming unilateral rule, the prestige of the FTC before federal courts may suffer, undermining its effectiveness in defending enforcement actions and promulgating rules. This will particularly be the case if more and more FTC decisions are taken by a 3-2 vote and appear to reflect little or no consultation with minority commissioners.

Conclusion

The VP Memo reflects a lack of humility and strategic insight. It sets forth priorities that are disconnected from the traditional core of the FTC’s consumer-welfare-centric mission. It emphasizes new sorts of initiatives that are likely to “crash and burn” in the courts, unless they are better anchored to established case law and FTC enforcement principles. As a unilateral missive announcing an unprecedented change in policy direction, the memo also undermines the tradition of collegiality and reasoned debate that generally has characterized the commission’s activities in recent decades.

As such, the memo will undercut, not advance, the effectiveness of FTC advocacy before the courts. It will also undermine the FTC’s reputation as a truly independent deliberative body. Accordingly, one may hope that Chair Khan will rethink her approach, withdraw the VP Memo, and work with all of her fellow commissioners to craft a new consensus policy document.