Archives For Monopolization

The Biden administration’s antitrust reign of error continues apace. The U.S. Justice Department’s (DOJ) Antitrust Division has indicated in recent months that criminal prosecutions may be forthcoming under Section 2 of the Sherman Antitrust Act, but refuses to provide any guidance regarding enforcement criteria.

Earlier this month, Deputy Assistant Attorney General Richard Powers stated that “there’s ample case law out there to help inform those who have concerns or questions” regarding Section 2 criminal enforcement, conveniently ignoring the fact that criminal Section 2 cases have not been brought in almost half a century. Needless to say, those ancient Section 2 cases (which are relatively few in number) antedate the modern era of economic reasoning in antitrust analysis. What’s more, unlike Section 1 price-fixing and market-division precedents, they yield no clear rule as to what constitutes criminal unilateral behavior. Thus, DOJ’s suggestion that old cases be consulted for guidance is disingenuous at best. 

It follows that DOJ criminal-monopolization prosecutions would be sheer folly. They would spawn substantial confusion and uncertainty and disincentivize dynamic economic growth.

Aggressive unilateral business conduct is a key driver of the competitive process. It brings about “creative destruction” that transforms markets, generates innovation, and thereby drives economic growth. As such, one wants to be particularly careful before condemning such conduct on grounds that it is anticompetitive. Accordingly, the error costs of false condemnations are especially high and damaging to economic prosperity.

Moreover, errors in assessing unilateral conduct are more likely than errors in assessing joint conduct, because it is very hard to distinguish between procompetitive and anticompetitive single-firm conduct, as DOJ’s 2008 Report on Single Firm Conduct Under Section 2 explains (citations omitted):

Courts and commentators have long recognized the difficulty of determining what means of acquiring and maintaining monopoly power should be prohibited as improper. Although many different kinds of conduct have been found to violate section 2, “[d]efining the contours of this element … has been one of the most vexing questions in antitrust law.” As Judge Easterbrook observes, “Aggressive, competitive conduct by any firm, even one with market power, is beneficial to consumers. Courts should prize and encourage it. Aggressive, exclusionary conduct is deleterious to consumers, and courts should condemn it. The big problem lies in this: competitive and exclusionary conduct look alike.”

The problem is not simply one that demands drawing fine lines separating different categories of conduct; often the same conduct can both generate efficiencies and exclude competitors. Judicial experience and advances in economic thinking have demonstrated the potential procompetitive benefits of a wide variety of practices that were once viewed with suspicion when engaged in by firms with substantial market power. Exclusive dealing, for example, may be used to encourage beneficial investment by the parties while also making it more difficult for competitors to distribute their products.

If DOJ does choose to bring a Section 2 criminal case soon, would it target one of the major digital platforms? Notably, a U.S. House Judiciary Committee letter recently called on DOJ to launch a criminal investigation of Amazon (see here). Also, current Federal Trade Commission (FTC) Chair Lina Khan launched her academic career with an article focusing on Amazon’s “predatory pricing” and attacking the consumer welfare standard (see here).

Khan’s “analysis” has been totally discredited. As a trenchant scholarly article by Timothy Muris and Jonathan Nuechterlein explains:

[DOJ’s criminal Section 2 prosecution of A&P, begun in 1944,] bear[s] an eerie resemblance to attacks today on leading online innovators. Increasingly integrated and efficient retailers—first A&P, then “big box” brick-and-mortar stores, and now online retailers—have challenged traditional retail models by offering consumers lower prices and greater convenience. For decades, critics across the political spectrum have reacted to such disruption by urging Congress, the courts, and the enforcement agencies to stop these American success stories by revising antitrust doctrine to protect small businesses rather than the interests of consumers. Using antitrust law to punish pro-competitive behavior makes no more sense today than it did when the government attacked A&P for cutting consumers too good a deal on groceries. 

Before bringing criminal Section 2 charges against Amazon, or any other “dominant” firm, DOJ leaders should read and absorb the sobering Muris and Nuechterlein assessment. 

Finally, not only would DOJ Section 2 criminal prosecutions represent bad public policy—they would also undermine the rule of law. In a very thoughtful 2017 speech, then-Acting Assistant Attorney General for Antitrust Andrew Finch succinctly summarized the importance of the rule of law in antitrust enforcement:

[H]ow do we administer the antitrust laws more rationally, accurately, expeditiously, and efficiently? … Law enforcement requires stability and continuity both in rules and in their application to specific cases.

Indeed, stability and continuity in enforcement are fundamental to the rule of law. The rule of law is about notice and reliance. When it is impossible to make reasonable predictions about how a law will be applied, or what the legal consequences of conduct will be, these important values are diminished. To call our antitrust regime a “rule of law” regime, we must enforce the law as written and as interpreted by the courts and advance change with careful thought.

The reliance fostered by stability and continuity has obvious economic benefits. Businesses invest, not only in innovation but in facilities, marketing, and personnel, and they do so based on the economic and legal environment they expect to face.

Of course, we want businesses to make those investments—and shape their overall conduct—in accordance with the antitrust laws. But to do so, they need to be able to rely on future application of those laws being largely consistent with their expectations. An antitrust enforcement regime with frequent changes is one that businesses cannot plan for, or one that they will plan for by avoiding certain kinds of investments.

Bringing criminal monopolization cases now, after a half-century of inaction, would be antithetical to the stability and continuity that underlie the rule of law. What’s worse, the failure to provide prosecutorial guidance would be squarely at odds with the concerns of notice and reliance that inform the rule of law. As such, a DOJ decision to target firms for Section 2 criminal charges would offend the rule of law (and, sadly, follow the FTC’s recent example of flouting the rule of law, see here and here).

In sum, the case against criminal Section 2 prosecutions is overwhelming. At a time when DOJ is facing difficulties winning “slam dunk” criminal Section 1 prosecutions targeting facially anticompetitive joint conduct (see here, here, and here), the notion that it would criminally pursue unilateral conduct that may generate substantial efficiencies is ludicrous. Hopefully, DOJ leadership will come to its senses and drop any and all plans to bring criminal Section 2 cases.

[The following is a guest post from Andrew Mercado, a research assistant at the Mercatus Center at George Mason University and an adjunct professor and research assistant at George Mason’s Antonin Scalia Law School.]

The Competition and Transparency in Digital Advertising Act (CTDAA), introduced May 19 by Sens. Mike Lee (R-Utah), Ted Cruz (R-Texas), Amy Klobuchar (D-Minn.), and Richard Blumenthal (D-Conn.), is the latest manifestation of the congressional desire to “do something” legislatively about big digital platforms. Although different in substance from the other antitrust bills introduced this Congress, it shares one key characteristic: it is fatally flawed and should not be enacted.  

Restrictions

In brief, the CTDAA imposes revenue-based restrictions on the ownership structure of firms engaged in digital advertising. The CTDAA bars a firm with more than $20 billion in annual advertising revenue (adjusted annually for inflation) from:

  1. owning a digital-advertising exchange if it owns either a sell-side ad brokerage or a buy-side ad brokerage; and
  2. owning a sell-side brokerage if it owns a buy-side brokerage, or from owning a buy-side or sell-side brokerage if it is also a buyer or seller of advertising space.

The proposal’s ownership restrictions present the clearest harm to the future of the digital-advertising market. From an efficiency perspective, vertical integration of both sides of the market can lead to enormous gains. Since, for example, Google owns and operates an ad exchange, a sell-side broker, and a buy-side broker, very few frictions exist among the different sides of the market. All of the systems are integrated, and the supply of advertising space, the demand for that space, and the marketplace conducting price-discovery auctions are automatically updated in real time.

While this instantaneous updating is not unique to Google’s system, and other buy- and sell-side firms can integrate into the system, the benefit to advertisers and publishers can be found in the cost savings that come from the integration. Since Google is able to create synergies on all sides of the market, the fees on any given transaction are lower. Further, incorporating Google’s vast trove of data allows for highly relevant and targeted ads. All of this means that advertisers spend less for the same quality of ad; publishers get more for each ad they place; and consumers see higher-quality, more relevant ads.

Without the ability to own an integrated platform and invest in its efficiency and transaction-cost reductions, there will likely be less innovation and lower quality on all sides of the market. Further, advertisers and publishers will have to shoulder the burden of using non-integrated marketplaces and would likely pay higher fees to less-efficient brokers. Since Google is a one-stop shop for all of a company’s needs—whether on the advertising side or the publishing side—companies can move seamlessly from one side of the market to the other, all while paying lower costs per transaction, because of the integrated nature of the platform.

In the absence of such integration, a company would have to seek out one buy-side brokerage to place ads and another, separate sell-side brokerage to receive ads. These two brokers would then have to go to an ad exchange to facilitate the deal, bringing three different brokers into the mix. Each of these middlemen would take a proportionate cut of the deal. When comparing the situation between an integrated and non-integrated market, the fees associated with serving ads in a non-integrated market are almost certainly higher.

Additionally, under this proposal, the innovative potential of each individual firm is capped. If a firm grows big enough and gains sufficient revenue by integrating different sides of the market, it will be forced to break up its efficiency-inducing operations. Marginal improvements on each side of the market may be possible, but without integrating different sides of the market, the scale required to justify those improvements would be unattainable.

Assumptions

The CTDAA assumes that:

  1. there is a serious competitive problem in digital advertising; and
  2. the structural separation and regulation of advertising brokerages run by huge digital-advertising platforms (as specified in the CTDAA) would enhance competition and benefit digital advertising customers and consumers.

The first assumption has not been proven and is subject to debate, while the second assumption is likely to be false.

Fundamental to the bill’s assumption that the digital-advertising market lacks competition is a misunderstanding of competitive forces and the idea that revenue and profit are inversely related to competition. While it is true that high profits can be a sign of consolidation and anticompetitive outcomes, the dynamic nature of the internet economy makes this theory unlikely.

As Christopher Kaiser and I have discussed, competition in the internet economy is incredibly dynamic. Vigorous competition can be achieved with just a handful of firms, despite claims from some quarters that four competitors are necessarily too few. Even in highly concentrated markets, there is the omnipresent threat that new entrants will emerge to usurp an incumbent’s reign. Additionally, while some studies may show unusually large profits in those markets, when adjusted for the consumer welfare created by large tech platforms, profits should actually be significantly higher than they are.

Evidence of dynamic entry in digital markets can be found in a recently announced product offering from a small (but more than $6 billion in revenue) competitor in digital advertising. Following the outcry associated with Google’s alleged abuses in Project Bernanke, the Trade Desk developed OpenPath, which allows the Trade Desk, a buy-side broker, to handle some of the functions of a sell-side broker and, to better serve its clients, sidestep the harms of Google’s alleged bid-rigging.

In developing the platform, the Trade Desk said it would discontinue serving any Google-based customers, effectively severing ties with the largest advertising exchange on the market. While this runs afoul of the letter of the law spelled out in the CTDAA, it is well within the spirit of its sponsors’ stated goal: businesses engaging in robust free-market competition. If Google’s market power were as omnipresent and suffocating as the sponsors allege, then eliminating traffic from Google would have been a death sentence for the Trade Desk.

While various theories of vertical and horizontal competitive harm have been put forward, there has not been an empirical showing that consumers and advertising customers have failed to benefit from the admittedly efficient aspects of digital-brokerage auctions administered by Google, Facebook, and a few other platforms. The rapid and dramatic growth of digital advertising and associated commerce strongly suggests that this has been an innovative and welfare-enhancing development. Moreover, the introduction of a new integrated brokerage platform by a “small” player in the advertising market indicates there is ample opportunity to increase this welfare further.  

Interfering in brokerage operations on the unproven assumptions that “monopoly rents” are being charged and that customers are being “exploited” would rest on rhetoric unmoored from hard evidence. Furthermore, if specific platform practices are shown to inefficiently exclude potential entrants, existing antitrust law can be deployed on a case-specific basis. This approach is currently being pursued by a coalition of state attorneys general against Google (the merits of which are not relevant to this commentary).

Even assuming for the sake of argument that there are serious competition problems in the digital-advertising market, there is no reason to believe that the arbitrary provisions and definitions found in the CTDAA would enhance welfare. Indeed, it is likely that the act would have unforeseen consequences:

  • It would lead to divestitures supervised by the U.S. Justice Department (DOJ) that could destroy efficiencies derived from efficient targeting by brokerages integrated into platforms;
  • It would disincentivize improvements in advertising brokerages and likely would reduce future welfare on both the buy and sell sides of digital advertising;
  • It would require costly recordkeeping and disclosures by covered platforms that could have unforeseen consequences for privacy and potentially reduce the efficiency of bidding practices;
  • It would establish a fund for damage payments that would encourage wasteful litigation (see next two points);
  • It would spawn a great deal of wasteful private rent-seeking litigation that would discourage future platform and brokerage innovations; and
  • It would likely generate wasteful lawsuits by rent-seeking state attorneys general (and perhaps the DOJ as well).

The legislation would ultimately harm consumers who currently benefit from a highly efficient form of targeted advertising (for more on the welfare benefits of targeted advertising, see here). Since Google continually invests in creating a better search engine (to deliver ads directly to consumers) and collects more data to better target ads (to deliver ads to specific consumers), the value to advertisers of displaying ads on Google constantly increases.

Proposing a new regulatory structure that would directly affect the operations of highly efficient auction markets is the height of folly. It ignores the findings of Nobel laureate James M. Buchanan (among others) that, to justify regulation, there should first be a provable serious market failure and that, even if such a failure can be shown, the net welfare costs of government intervention should be smaller than the net welfare costs of non-intervention.

Given the likely substantial costs of government intervention and the lack of proven welfare costs from the present system (which clearly has been associated with a growth in output), the second prong of the Buchanan test clearly has not been met.

Conclusion

While there are allegations of abuses in the digital-advertising market, it is not at all clear that these abuses have had a long-term negative economic impact. As shown in a study by Erik Brynjolfsson and his student Avinash Collis—recently summarized in the Harvard Business Review (Alden Abbott offers commentary here)—the consumer surplus generated by digital platforms has far outstripped the advertising and services revenues received by the platforms. The CTDAA would unwind many of these gains.

If the goal is to create a multitude of small, largely inefficient advertising companies that charge high fees and provide low-quality service, this bill will deliver. The market for advertising will have a far greater number of players but it will be far less competitive, since no companies will be willing to exceed the $20 billion revenue threshold that would leave them subject to the proposal’s onerous ownership standards.

If, however, the goal is to increase consumer welfare, foster rigorous competition, and secure better outcomes for advertisers and publishers, then the bill is likely to fail. The ownership requirements laid out in the proposal will lead to a stagnant advertising market, higher fees for all involved, and lower-quality, less-relevant ads. Government regulatory interference in highly successful and efficient platform markets is a terrible idea.

Sens. Amy Klobuchar (D-Minn.) and Chuck Grassley (R-Iowa)—cosponsors of the American Innovation and Choice Online Act (AICOA), which seeks to “rein in” tech companies like Apple, Google, Meta, and Amazon—contend that “everyone acknowledges the problems posed by dominant online platforms.”

In their framing, it is simply an acknowledged fact that U.S. antitrust law has not kept pace with developments in the digital sector, allowing a handful of Big Tech firms to exploit consumers and foreclose competitors from the market. To address the issue, the senators’ bill would bar “covered platforms” from engaging in a raft of conduct, including self-preferencing, tying, and limiting interoperability with competitors’ products.

That’s what makes the open letter to Congress published late last month by the usually staid American Bar Association’s (ABA) Antitrust Law Section so eye-opening. The letter is nothing short of a searing critique of the legislation, which the section finds to be poorly written, vague, and departing from established antitrust-law principles.

The ABA, of course, has a reputation as an independent, highly professional, and heterogeneous group. The antitrust section’s membership includes not only in-house corporate counsel, but also lawyers from nonprofits, consulting firms, and federal and state agencies, as well as judges and legal academics. Given this context, the comments must be read as a high-level judgment that recent legislative and regulatory efforts to “discipline” tech fall outside the legal mainstream and would come at the cost of established antitrust principles, legal precedent, transparency, sound economic analysis, and ultimately consumer welfare.

The Antitrust Section’s Comments

As the ABA Antitrust Law Section observes:

The Section has long supported the evolution of antitrust law to keep pace with evolving circumstances, economic theory, and empirical evidence. Here, however, the Section is concerned that the Bill, as written, departs in some respects from accepted principles of competition law and in so doing risks causing unpredicted and unintended consequences.

Broadly speaking, the section’s criticisms fall into two interrelated categories. The first relates to deviations from antitrust orthodoxy and the principles that guide enforcement. The second is a critique of the AICOA’s overly broad language and ambiguous terminology.

Departing from established antitrust-law principles

Substantively, the overarching concern expressed by the ABA Antitrust Law Section is that AICOA departs from the traditional role of antitrust law, which is to protect the competitive process rather than to favor some competitors at the expense of others. Indeed, the section’s open letter observes that, of the 10 categories of prohibited conduct spelled out in the legislation, only three require a “material harm to competition.”

Take, for instance, the prohibition on “discriminatory” conduct. As it stands, the bill’s language does not require a showing of harm to the competitive process. It instead appears to enshrine a freestanding prohibition of discrimination. The bill also targets tying practices that are already prohibited by U.S. antitrust law, while similarly eschewing the traditionally required showings of market power and harm to the competitive process. The same can be said, mutatis mutandis, for “self-preferencing” and the “unfair” treatment of competitors.

The problem, the section’s letter to Congress argues, is not only that this increases the teleological chasm between AICOA and the overarching goals and principles of antitrust law, but that it can also easily lead to harmful unintended consequences. For instance, as the ABA Antitrust Law Section previously observed in comments to the Australian Competition and Consumer Commission, a prohibition of pricing discrimination can limit the extent of discounting generally. Similarly, self-preferencing conduct on a platform can be welfare-enhancing, while forced interoperability—which is also contemplated by AICOA—can increase prices for consumers and dampen incentives to innovate. Furthermore, some of these blanket prohibitions are arguably at loggerheads with established antitrust doctrine, such as Trinko, which established that even monopolists are generally free to decide with whom they will deal.

In response to the above, the ABA Antitrust Law Section (reasonably) urges Congress explicitly to require an effects-based showing of harm to the competitive process as a prerequisite for all 10 of the infringements contemplated in the AICOA. This also means disclaiming generalized prohibitions of “discrimination” and of “unfairness” and replacing blanket prohibitions (such as the one for self-preferencing) with measured case-by-case analysis.

Arguably, the reason the Klobuchar-Grassley bill can so seamlessly exclude or redraw such a central element of antitrust law as competitive harm is that it deliberately chooses to ignore another, preceding one. Namely, the bill omits market power as a requirement for a finding of infringement or for the legislation’s equally crucial designation as a “covered platform.” It instead prescribes size metrics—number of users, market capitalization—to define which platforms are subject to intervention. Such definitions cast an overly wide net that can potentially capture consumer-facing conduct that doesn’t have the potential to harm competition at all.

It is precisely for this reason that existing antitrust laws are tethered to market power—i.e., because it long has been recognized that only companies with market power can harm competition. As John B. Kirkwood of Seattle University School of Law has written:

Market power’s pivotal role is clear… This concept is central to antitrust because it distinguishes firms that can harm competition and consumers from those that cannot.

Opaque language for opaque ideas

Another underlying issue is that the Klobuchar-Grassley bill is shot through with indeterminate language and fuzzy concepts that have no clear limiting principles. For instance, in order either to establish liability or to mount a successful defense to an alleged violation, the bill relies heavily on inherently amorphous terms such as “fairness,” “preferencing,” and “materiality,” or the “intrinsic” value of a product. But as the ABA Antitrust Law Section letter rightly observes, these concepts are not defined in the bill, nor by existing antitrust case law. As such, they inject variability and indeterminacy into how the legislation would be administered.

Moreover, it is also unclear how some incommensurable concepts will be weighed against one another. For example, how would concerns about safety and security be weighed against prohibitions on self-preferencing or requirements for interoperability? What is a “core function,” and when would the law deem it sufficiently “enhanced” or “maintained”—requirements the bill sets out to exempt certain otherwise-prohibited behavior? The lack of linguistic and conceptual clarity not only undermines legal certainty, but also invites judicial second-guessing of business decisions, something against which the U.S. Supreme Court has long warned.

Finally, the bill’s choice of language and recent amendments to its terminology seem to confirm the dynamic discussed in the previous section. Most notably, the latest version of AICOA replaces earlier language invoking “harm to the competitive process” with “material harm to competition.” As the ABA Antitrust Law Section observes, this “suggests a shift away from protecting the competitive process towards protecting individual competitors.” Indeed, “material harm to competition” deviates from established categories such as “undue restraint of trade” or “substantial lessening of competition,” which have a clear focus on the competitive process. As a result, it is not unreasonable to expect that the new terminology might be interpreted as meaning that the actionable standard is material harm to competitors.

In its letter, the antitrust section urges Congress not only to define more clearly the novel terminology used in the bill, but also to do so in a manner consistent with existing antitrust law. Indeed:

The Section further recommends that these definitions direct attention to analysis consistent with antitrust principles: effects-based inquiries concerned with harm to the competitive process, not merely harm to particular competitors.

Conclusion

The AICOA is a poorly written, misguided, and rushed piece of regulation that contravenes both basic antitrust-law principles and mainstream economic insights in the pursuit of a pre-established populist political goal: punishing the success of tech companies. If left uncorrected by Congress, these mistakes could have potentially far-reaching consequences for innovation in digital markets and for consumer welfare. They could also set antitrust law on a regressive course back toward a policy of picking winners and losers.

Biden administration enforcers at the U.S. Justice Department (DOJ) and the Federal Trade Commission (FTC) have prioritized labor-market monopsony issues for antitrust scrutiny (see, for example, here and here). This heightened interest comes in light of claims that labor markets are highly concentrated and are rife with largely neglected competitive problems that depress workers’ income. Such concerns are reflected in a March 2022 U.S. Treasury Department report on “The State of Labor Market Competition.”

Monopsony is the “flip side” of monopoly, and U.S. antitrust law clearly condemns agreements designed to undermine the “buyer side” competitive process (see, for example, this U.S. government submission to the OECD). But is a special new emphasis on labor markets warranted, given that antitrust enforcers ideally should allocate their scarce resources to the most pressing (highest-valued) areas of competitive concern?

A May 2022 Information Technology and Innovation Foundation (ITIF) study by ITIF Associate Director (and former FTC economist) Julie Carlson indicates that the degree of emphasis the administration’s antitrust enforcers are placing on labor issues may be misplaced. In particular, the ITIF study debunks the Treasury report’s findings of high levels of labor-market concentration and the claim that workers face a “decrease in wages [due to labor market power] at roughly 20 percent relative to the level in a fully competitive market.” Furthermore, while noting the importance of DOJ antitrust prosecutions of hard-core anticompetitive agreements among employers (wage-fixing and no-poach agreements), the ITIF report emphasizes policy reforms unrelated to antitrust as key to improving workers’ lot.

Key takeaways from the ITIF report include:

  • Labor markets are not highly concentrated. Local labor-market concentration has been declining for decades, with the most concentrated markets seeing the largest declines.
  • Labor-market power is largely due to labor-market frictions, such as worker preferences, search costs, bargaining, and occupational licensing, rather than concentration.
  • As a case study, changes in concentration in the labor market for nurses have little to no effect on wages, whereas nurses’ preferences over job location are estimated to lead to wage markdowns of 50%.
  • Firms are not profiting at the expense of workers. The decline in the labor share of national income is primarily due to rising home values, not increased labor-market concentration.
  • Policy reform should focus on reducing labor-market frictions and strengthening workers’ ability to collectively bargain. Policies targeting concentration are misguided and will be ineffective at improving outcomes for workers.

The ITIF report also throws cold water on the notion of emphasizing labor-market issues in merger reviews, which was teed up in the January 2022 joint DOJ/FTC request for information (RFI) on merger enforcement. The ITIF report explains:

Introducing the evaluation of labor market effects unnecessarily complicates merger review and needlessly ties up agency resources at a time when the agencies are facing severe resource constraints. As discussed previously, labor markets are not highly concentrated, nor is labor market concentration a key factor driving down wages.

A proposed merger that is reportable to the agencies under the Hart-Scott-Rodino Act and likely to have an anticompetitive effect in a relevant labor market is also likely to have an anticompetitive effect in a relevant product market. … Evaluating mergers for labor market effects is unnecessary and costly for both firms and the agencies. The current merger guidelines adequately address competition concerns in input markets, so any contemplated revision to the guidelines should not incorporate a “framework to analyze mergers that may lessen competition in labor markets.” [Citation to Request for Information on Merger Enforcement omitted.]

In sum, the administration’s recent pronouncements about highly anticompetitive labor markets that have resulted in severely underpaid workers—used as the basis to justify heightened antitrust emphasis on labor issues—appear to be based on false premises. As such, they are a species of government misinformation, which, if acted upon, threatens to misallocate scarce enforcement resources and thereby undermine efficient government antitrust enforcement. What’s more, an unnecessary overemphasis on labor-market antitrust questions could impose unwarranted investigative costs on companies and chill potentially efficient business transactions. (Think of a proposed merger that would reduce production costs and benefit consumers but result in a workforce reduction by the merged firm.)

Perhaps the administration will take heed of the ITIF report and rethink its plans to ramp up labor-market antitrust-enforcement initiatives. Promoting pro-market regulatory reforms that benefit both labor and consumers (for instance, paring back excessive occupational-licensing restrictions) would be a welfare-superior and cheaper alternative to misbegotten antitrust actions.

[Continuing our FTC UMC Rulemaking symposium, today’s first guest post is from Richard J. Pierce Jr., the Lyle T. Alverson Professor of Law at George Washington University Law School. We are also publishing a related post today from Andrew K. Magloughlin and Randolph J. May of the Free State Foundation. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

FTC Rulemaking Power

In 2021, President Joe Biden appointed a prolific young scholar, Lina Khan, to chair the Federal Trade Commission (FTC). Khan strongly dislikes almost every element of antitrust law. She has stated her intention to use notice-and-comment rulemaking to change antitrust law in many ways. She was unable to begin this process for almost a year because the FTC was evenly divided between Democratic and Republican appointees, and she has not been able to elicit any support for her agenda from the Republican members. She will finally get the majority she needs to act in the next few days, as the U.S. Senate appears set to confirm Alvaro Bedoya to the fifth spot on the commission.

Chair Khan has argued that the FTC has the power to use notice-and-comment rulemaking to define the term “unfair methods of competition” as that term is used in Section 5 of the Federal Trade Commission Act. Section 5 authorizes the FTC to define and to prohibit both “unfair acts” and “unfair methods of competition.” For more than 50 years after the 1914 enactment of the statute, the FTC, Congress, courts, and scholars interpreted it to empower the FTC to use adjudication to implement Section 5, but not to use rulemaking for that purpose.

In 1973, the U.S. Court of Appeals for the D.C. Circuit held that the FTC has the power to use notice-and-comment rulemaking to implement Section 5. Congress responded by amending the statute in 1975 and 1980 to add many time-consuming and burdensome procedures to the notice-and-comment process. Those added procedures had the effect of making the rulemaking process so long that the FTC gave up on its attempts to use rulemaking to implement Section 5.

Khan claims that the FTC has the power to use notice-and-comment rulemaking to define “unfair methods of competition,” even though it must use the extremely burdensome procedures that Congress added in 1975 and 1980 to define “unfair acts.” Her claim rests on two beliefs: first, that the current U.S. Supreme Court would uphold the 1973 D.C. Circuit decision holding that the FTC has the power to use notice-and-comment rulemaking to implement Section 5; and second, that a peculiarly worded provision of the 1975 amendment to the FTC Act allows the FTC to use notice-and-comment rulemaking to define “unfair methods of competition,” even as it requires the extremely burdensome procedure for rules that define “unfair acts.” The FTC has not attempted to use notice-and-comment rulemaking to define “unfair methods of competition” since Congress amended the statute in 1975.

I am skeptical of Khan’s argument. I doubt that the Supreme Court would uphold the 1973 D.C. Circuit opinion, because the D.C. Circuit used a method of statutory interpretation that no modern court uses and that is inconsistent with the methods of statutory interpretation that the Supreme Court uses today. I also doubt that the Supreme Court would interpret the 1975 statutory amendment to distinguish between “unfair acts” and “unfair methods of competition” for purposes of the procedures that the FTC is required to use to issue rules to implement Section 5.

Even if the FTC has the power to use notice-and-comment rulemaking to define “unfair methods of competition,” I am confident that the Supreme Court would not uphold an exercise of that power that has the effect of making a significant change in antitrust law. That would be a perfect candidate for application of the major questions doctrine. The court will not uphold an “unprecedented” action of “vast economic or political significance” unless it has “unmistakable legislative support.” I will now describe four hypothetical exercises of the rulemaking power that Khan believes that the FTC possesses to illustrate my point.

Hypothetical Exercises of FTC Rulemaking Power

Creation of a Right to Repair

President Biden has urged the FTC to create a right for an owner of any product to repair the product or to have it repaired by an independent service organization (ISO). The Supreme Court’s 1992 opinion in Eastman Kodak v. Image Technical Services tells us all we need to know about the likelihood that it would uphold a rule that confers a right to repair. When Kodak took actions that made it impossible for ISOs to repair Kodak photocopiers, the ISOs argued that Kodak’s action violated both Section 1 and Section 2 of the Sherman Act. The Court held that Kodak could prevail only if it could persuade a jury that its view of the facts was accurate. The Court remanded the case for a jury trial to address three contested issues of fact.

The Court’s reasoning in Kodak is inconsistent with any version of a right to repair that the FTC might attempt to create through rulemaking. The Court expressed its view that allowing an ISO to repair a product sometimes has good effects and sometimes has bad effects. It concluded that it could not decide whether Kodak’s new policy was good or bad without first resolving the three issues of fact on which the parties disagreed. In a 2021 report to Congress, the FTC agreed with the Supreme Court. It identified seven factual contingencies that can cause a prohibition on repair of a product by an ISO to have good effects or bad effects. It is naïve to expect the Supreme Court to change its approach to repair rights in response to a rule in which the FTC attempts to create a right to repair, particularly when the FTC told Congress that it agrees with the Court’s approach immediately prior to Khan’s arrival at the agency.

Prohibition of Reverse-Payment Settlements of Patent Disputes Involving Prescription Drugs

Some people believe that settlements of patent-infringement disputes in which the manufacturer of a generic drug agrees not to market the drug in return for a cash payment from the manufacturer of the brand-name drug are thinly disguised agreements to create a monopoly and to share the monopoly rents. Khan has argued that the FTC could issue a rule that prohibits such reverse-payment settlements. Her belief that a court would uphold such a rule is contradicted by the Supreme Court’s 2013 opinion in FTC v. Actavis. The Court unanimously rejected the FTC’s argument in support of a rebuttable presumption that reverse payments are illegal. Three justices argued that reverse-payment settlements can never be illegal if they are within the scope of the patent. The five-justice majority held that a court can determine that a reverse-payment settlement is illegal only after a hearing in which it applies the rule of reason to determine whether the payment was reasonable.

A Prohibition on Below-Cost Pricing When the Firm Cannot Recoup Its Losses

Khan believes that illegal predatory pricing by dominant firms is widespread and extremely harmful to competition. She particularly dislikes the Supreme Court’s test for identifying predatory pricing. That test requires proof that a firm that engages in below-cost pricing has a reasonable prospect of recouping its losses. She wants the FTC to issue a rule in which it defines predatory pricing as below-cost pricing without any prospect that the firm will be able to recoup its losses.

The history of the Court’s predatory-pricing test shows how unrealistic it is to expect the Court to uphold such a rule. The Court first announced the test in a Sherman Act case in 1986. Plaintiffs attempted to avoid the precedential effect of that decision by filing complaints based on predatory pricing under the Robinson-Patman Act. The Court rejected that attempt in a 1993 opinion. The Court made it clear that the test for determining whether a firm is engaged in illegal predatory pricing is the same no matter whether the case arises under the Sherman Act or the Robinson-Patman Act. The Court undoubtedly would reject the FTC’s effort to change the definition of predatory pricing by relying on the FTC Act instead of the Sherman Act or the Robinson-Patman Act.

A Prohibition of Noncompete Clauses in Contracts to Employ Low-Wage Employees

President Biden has expressed concern about the increasing prevalence of noncompete clauses in employment contracts applicable to low-wage employees. He wants the FTC to issue a rule that prohibits inclusion of noncompete clauses in contracts to employ low-wage employees. The Supreme Court would be likely to uphold such a rule.

A rule that prohibits inclusion of noncompete clauses in employment contracts applicable to low-wage employees would differ from the other three rules I discussed in many respects. First, it has long been the law that noncompete clauses can be included in employment contracts only in narrow circumstances, none of which have any conceivable application to low-wage contracts. The only reason that competition authorities did not bring actions against firms that include noncompete clauses in low-wage employment contracts was their belief that state labor law would be effective in deterring firms from engaging in that practice. Thus, the rule would be entirely consistent with existing antitrust law.

Second, there are many studies that have found that state labor law has not been effective in deterring firms from including noncompete clauses in low-wage employment contracts and many studies that have found that the increasing use of noncompete clauses in low-wage contracts is causing a lot of damage to the performance of labor markets. Thus, the FTC would be able to support its rule with high-quality evidence.

Third, the Supreme Court’s unanimous 2021 opinion in NCAA v. Alston indicates that the Court is receptive to claims that a practice that harms the performance of labor markets is illegal. Thus, I predict that the Court would uphold a rule that prohibits noncompete clauses in employment contracts applicable to low-wage employees if it holds that the FTC can use notice-and-comment rulemaking to define “unfair methods of competition,” as that term is used in Section 5 of the FTC Act. That caveat is important, however. As I indicated at the beginning of this essay, I doubt that the FTC has that power.

I would urge the FTC not to use notice-and-comment rulemaking to address the problems that are caused by the increasing use of noncompete clauses in low-wage contracts. There is no reason for the FTC to put a lot of time and effort into a notice-and-comment rulemaking in the hope that the Court will conclude that the FTC has the power to use notice-and-comment rulemaking to implement Section 5. The FTC can implement an effective prohibition on the inclusion of noncompete clauses in employment contracts applicable to low-wage employees by using a combination of legal tools that it has long used and that it clearly has the power to use—issuance of interpretive rules and policy statements coupled with a few well-chosen enforcement actions.

Alternative Ways to Improve Antitrust Law       

There are many other ways in which Khan can move antitrust law in the directions that she prefers. She can make common cause with the many mainstream antitrust scholars who have urged incremental changes in antitrust law and who have conducted the studies needed to support those proposed changes. Thus, for instance, she can move aggressively against other practices that harm the performance of labor markets, change the criteria that the FTC uses to decide whether to challenge proposed mergers and acquisitions, and initiate actions against large platform firms that favor their products over the products of third parties that they sell on their platforms.     

As the European Union’s Digital Markets Act (DMA) has entered the final stage of its approval process, one matter the inter-institutional negotiations appear likely to leave unresolved is how the DMA’s relationship with competition law affects the very rationale and legal basis for the intervention.

The DMA is explicitly grounded on the questionable assumption that competition law alone is insufficient to rein in digital gatekeepers. Accordingly, EU lawmakers have declared the proposal to be a necessary regulatory intervention that will complement antitrust rules by introducing a set of ex ante obligations.

To support this line of reasoning, the DMA’s drafters insist that it protects a different legal interest from antitrust. Indeed, the intervention is ostensibly grounded in Article 114 of the Treaty on the Functioning of the European Union (TFEU), rather than Article 103—the article that spells out the implementation of competition law. Pursuant to Article 114, the DMA opts for centralized enforcement at the EU level to ensure harmonized implementation of the new rules.

It has nonetheless been clear from the very beginning that the DMA lacks a distinct purpose. Indeed, the interests it nominally protects (the promotion of fairness and contestability) do not differ from the substance and scope of competition law. The European Parliament has even suggested that the law’s aims should also include fostering innovation and increasing consumer welfare, which also are within the purview of competition law. Moreover, the DMA’s obligations focus on practices that have already been the subject of past and ongoing antitrust investigations.

Where the DMA differs in substance from competition law is simply that it would free enforcers from the burden of standard antitrust analysis. The law is essentially a convenient shortcut that would dispense with the need to define relevant markets, prove dominance, and measure market effects (see here). It essentially dismisses economic analysis and the efficiency-oriented consumer welfare test in order to lower the legal standards and evidentiary burdens needed to bring an investigation.

Acknowledging the continuum between competition law and the DMA, the European Competition Network and some member states (self-appointed as “friends of an effective DMA”) have proposed empowering national competition authorities (NCAs) to enforce DMA obligations.

Against this background, my new ICLE working paper pursues a twofold goal. First, it aims to show how, because of its ambiguous relationship with competition law, the DMA falls short of its goal of preventing regulatory fragmentation. Second, despite having significant doubts about the DMA’s content and rationale, I argue that fully centralized enforcement at the EU level should be preserved and that frictions with competition law would be better confined by limiting the law’s application to a few large platforms that are demonstrably able to orchestrate an ecosystem.

Welcome to the (Regulatory) Jungle

The DMA will not replace competition rules. It will instead be implemented alongside them, creating several overlapping layers of regulation. Indeed, my paper broadly illustrates how the very same practices that are targeted by the DMA may also be investigated by NCAs under European and national-level competition laws, under national competition laws specific to digital markets, and under national rules on economic dependence.

While the DMA nominally prohibits EU member states from imposing additional obligations on gatekeepers, member states remain free to adapt their competition laws to digital markets in accordance with the leeway granted by Article 3(3) of the Modernization Regulation. Moreover, NCAs may be eager to exploit national rules on economic dependence to tackle perceived imbalances of bargaining power between online platforms and their business counterparties.

The risk of overlap with competition law is also fostered by the DMA’s designation process, which may further widen the law’s scope in the future in terms of what sorts of digital services and firms may fall under the law’s rubric. As more and more industries explore platform business models, the DMA would—without some further constraints on its scope—be expected to cover a growing number of firms, including those well outside Big Tech or even native tech companies.

As a result, the European regulatory landscape could become even more fragmented in the post-DMA world. The parallel application of the DMA and antitrust rules poses the risks of double jeopardy (see here) and of conflicting decisions.

A Fully Centralized and Ecosystem-Based Regulatory Regime

To counter the risk that digital-market activity will be subject to regulatory double jeopardy and conflicting decisions across EU jurisdictions, DMA enforcement should not only be fully centralized at the EU level, but that centralization should be strengthened. This could be accomplished by empowering the Commission with veto rights, as was requested by the European Parliament.

This veto power should certainly extend to national measures targeting gatekeepers that run counter to the DMA or to decisions adopted by the Commission under the DMA. But it should also include prohibiting national authorities from carrying out investigations on their own initiative without prior authorization by the Commission.

Moreover, it will also likely be necessary to significantly redefine the DMA’s scope. Notably, EU leaders could mitigate the risk of fragmentation from the DMA’s frictions with competition law by circumscribing the law to ecosystem-related issues. This would effectively limit its application to a few large platforms that are demonstrably able to orchestrate an ecosystem. It also would reinstate the DMA’s original justification, which was to address the emergence of a few large platforms that are able to act as gatekeepers and enjoy an entrenched position as a result of conglomerate ecosystems.

Changes to the designation process should also be accompanied by confining the list of ex ante obligations the law imposes. These should reflect relevant differences in platforms’ business models and be tailored to the specific firm under scrutiny, rather than implementing a one-size-fits-all approach.

There are compelling arguments against the policy choice to regulate platforms and their ecosystems like utilities. The suggested adaptations would at least acknowledge the regulatory nature of the DMA, removing the suspicion that it is just an antitrust intervention dressed up as regulation.

During the exceptional rise in stock-market valuations from March 2020 to January 2022, both equity investors and antitrust regulators have implicitly agreed that so-called “Big Tech” firms enjoyed unbeatable competitive advantages as gatekeepers with largely unmitigated power over the digital ecosystem.

Investors bid up the value of tech stocks to exceptional levels, anticipating no competitive threat to incumbent platforms. Antitrust enforcers and some legislators have exhibited belief in the same underlying assumption. In their case, it has spurred advocacy of dramatic remedies—including breaking up the Big Tech platforms—as necessary interventions to restore competition. 

Other voices in the antitrust community have been more circumspect. A key reason is the theory of contestable markets, developed in the 1980s by the late William Baumol and other economists, which holds that even extremely large market shares are at best a potential indicator of market power. To illustrate, consider the extreme case of a market occupied by a single firm. Intuitively, the firm would appear to have unqualified pricing power. Not so fast, say contestable market theorists. Suppose entry costs into the market are low and consumers can easily move to other providers. This means that the apparent monopolist will act as if the market is populated by other competitors. The takeaway: market share alone cannot demonstrate market power without evidence of sufficiently strong barriers to market entry.

While regulators and some legislators have overlooked this inconvenient principle, it appears the market has not. To illustrate, look no further than the Feb. 3 $230 billion crash in the market value of Meta Platforms—parent company of Facebook, Instagram, and WhatsApp, among other services.

In its antitrust suit against Meta, the Federal Trade Commission (FTC) has argued that Meta’s Facebook service enjoys a social-networking monopoly, a contention that the judge in the case initially rejected in June 2021 as so lacking in factual support that the suit was provisionally dismissed. The judge’s ruling (which he withdrew last month, allowing the suit to go forward after the FTC submitted a revised complaint) has been portrayed as evidence for the view that existing antitrust law sets overly demanding evidentiary standards that unfairly shelter corporate defendants. 

Yet, the record-setting single-day loss in Meta’s value suggests the evidentiary standard is set just about right and the judge’s skepticism was fully warranted. Consider one of the principal reasons behind Meta’s plunge in value: its service had suffered substantial losses of users to TikTok, a formidable rival in a social-networking market in which the FTC claims that Facebook faces no serious competition. The market begs to differ. In light of the obvious competitive threat posed by TikTok and other services, investors reassessed Facebook’s staying power, which was then reflected in its owner Meta’s downgraded stock price.

Just as the investment bubble that had supported the stock market’s case for Meta has popped, so too must the regulatory bubble that had supported the FTC’s antitrust case against it. Investors’ reevaluation rebuts the FTC’s strained market definition that had implausibly excluded TikTok as a competitor.

Even more fundamentally, the market’s assessment shows that Facebook’s users face nominal switching costs—in which case, its leadership position is contestable and the Facebook “monopoly” is not much of a monopoly. While this conclusion might seem surprising, Facebook’s vulnerability is hardly exceptional: Nokia, Blackberry, AOL, Yahoo, Netscape, and PalmPilot illustrate how often seemingly unbeatable tech leaders have been toppled with remarkable speed.

The unraveling of the FTC’s case against what would appear to be an obviously dominant platform should be a wake-up call for those policymakers who have embraced populist antitrust’s view that existing evidentiary requirements, which minimize the risk of “false positive” findings of anticompetitive conduct, should be set aside as an inconvenient obstacle to regulatory and judicial intervention. 

None of this should be interpreted to deny that concentration levels in certain digital markets raise significant antitrust concerns that merit close scrutiny. In particular, regulators have overlooked how some leading platforms have devalued intellectual-property rights in a manner that distorts technology and content markets by advantaging firms that operate integrated product and service ecosystems while disadvantaging firms that specialize in supplying the technological and creative inputs on which those ecosystems rely.  

The fundamental point is that potential risks to competition posed by any leading platform’s business practices can be assessed through rigorous fact-based application of the existing toolkit of antitrust analysis. This is critical to evaluate whether a given firm likely occupies a transitory, rather than durable, leadership position. The plunge in Meta’s stock in response to a revealed competitive threat illustrates the perils of discarding that surgical toolkit in favor of a blunt “big is bad” principle.

Contrary to what has become an increasingly common narrative in policy discussions and political commentary, the existing framework of antitrust analysis was not designed by scholars strategically acting to protect “big business.” Rather, this framework was designed and refined by scholars dedicated to rationalizing, through the rigorous application of economic principles, an incoherent body of case law that had often harmed consumers by shielding incumbents against threats posed by more efficient rivals. The legal shortcuts being pursued by antitrust populists to detour around appropriately demanding evidentiary requirements are writing a “back to the future” script that threatens to return antitrust law to that unfortunate predicament.

This post is the second in a planned series. The first installment can be found here.

In just over a century since its dawn, liberalism had reshaped much of the world along the lines of individualism, free markets, private property, contract, trade, and competition. A modest laissez-faire political philosophy that had begun to germinate in the minds of French Physiocrats in the mid-18th century had, scarcely half a century later, inspired the constitution of the world’s nascent leading power, the United States. But it wasn’t all plain sailing, as liberalism’s expansion eventually galvanized strong social, political, cultural, economic and even spiritual opposition, which coalesced around two main ideologies: socialism and fascism.

In this post, I explore the collectivist backlash against liberalism, its deeper meaning from the perspective of political philosophy, and the main features of its two main antagonists—especially as they relate to competition and competition regulation. Ultimately, the purpose is to show that, in trying to respond to the collectivist threat, successive iterations of neoliberalism integrated some of collectivism’s key postulates in an attempt to create a synthesis between opposing philosophical currents. Yet this “mostly” liberal synthesis, which serves as the philosophical basis of many competition systems today, is afflicted with the same collectivist flaws that the synthesis purported to overthrow (as I will elaborate in subsequent posts).

The Collectivist Backlash

By the early 20th century, two deeply illiberal movements bent on exposing and demolishing the fallacies and contradictions of liberalism had succeeded in capturing the imagination and support of the masses. These collectivist ideologies were Marxian socialism/communism on the left and fascism/Nazism on the right. Although ultimately distinct, they both rejected the basic postulates of classical liberalism. 

Socially, both agreed that liberalism uprooted traditional ways of life and dissolved the bonds of solidarity that had hitherto governed social relationships. This is the view expressed, e.g., in Karl Polanyi’s influential book The Great Transformation, in which the Christian socialist Polanyi contends that “disembedded” liberal markets would inevitably come to be governed again by the principles of solidarity and reciprocity (under socialism/communism). Similarly, although not technically a work on political economy or philosophy, Knut Hamsun’s 1917 novel Growth of the Soil perfectly captures the right’s rejection of liberal progress, materialism, industrialization, and the idealization of traditional bucolic life. The Norwegian Hamsun, winner of the 1920 Nobel Prize in Literature, later became an enthusiastic supporter of the Third Reich. 

Politically and culturally, Marxist historical materialism posited that liberal democracy (individual freedoms, periodic elections, etc.) and liberal culture (literature, art, cinema) served the interests of the economically dominant class: the bourgeoisie, i.e., the owners of the means of production. Fascists and Nazis likewise deplored liberal democracy as a sign of decadence and weakness and viewed liberal culture as an oxymoron: a hotbed of degeneracy built on the dilution of national and racial identities. 

Economically, the more theoretically robust leftist critiques rallied around Marx’s scientific socialism, which held that capitalism—the economic system that served as the embodiment of a liberal social order built on private property, contract, and competition—was exploitative and doomed to consume itself. From the right, it was argued that liberalism enabled individual interest to override what was good for the collective—an unpardonable sin in the eyes of an ideology built around robust nodes of collectivist identity, such as nation, race, and history.

A Recurrent Civilizational Struggle

The rise of socialism and fascism marked the beginning of a civilizational shift that many have referred to as the lowest ebb of liberalism. By the 1930s, totalitarian regimes utterly incompatible with a liberal worldview were in place in several European countries, such as Italy, Russia, Germany, Portugal, Spain, and Romania. As Austrian economist Ludwig Von Mises lamented, liberals and liberal ideas—at least, in the classical sense—had been driven to the fringes of society and academia, the objects of scorn and ridicule. Even the liberally oriented, like economist John Maynard Keynes, were declaring the “end of laissez-faire.”

At its most basic level, I believe that the conflict can be understood, from a philosophical perspective, as an iteration of the recurrent struggle between individualism and collectivism.

For instance, the German sociologist Ferdinand Tonnies has described the perennial tension between two elementary ways of conceiving the social order: Gesellschaft and Gemeinschaft. Gesellschaft refers to societies made up of individuals held together by formal bonds, such as contracts, whereas Gemeinschaft refers to communities held together by organic bonds, such as kinship, which function together as parts of an integrated whole. American law professor David Gerber explains that, from the Gemeinschaft perspective, competition was seen as an enemy:

Gemeinschaft required co-operation and the accommodation of individual interests to the commonwealth, but competition, in contrast, demanded that individuals be concerned first and foremost with their own self-interest. From this communitarian perspective, competition looked suspiciously like exploitation. The combined effect of competition and of political and economic inequality was that the strong would get stronger, the weak would get weaker, and the strong would use their strength to take from the weak.

Tonnies himself thought that dominant liberal notions of Gesellschaft would inevitably give way to greater integration of a socialist Gemeinschaft. This was somewhat reminiscent of Polanyi’s distinction between embedded and disembedded markets; Karl Popper’s “open” and “closed” societies; and possibly, albeit somewhat more remotely, David Hume’s distinction between “concord” and “union.” While we should be wary of reductivism, a common theme underlying these works (at least two of which are not liberal) is the conflict between opposing views of society: one that posits the subordination of the individual to some larger community or group versus another that anoints the individual’s well-being as the ultimate measure of the value of social arrangements. That basic tension, in turn, reverberates across social and economic questions, including as they relate to markets, competition, and the functions of the state.

Competition Under Marxism

Karl Marx argued that the course of history was determined by material relations among the social classes under any given system of production (the doctrines of historical and dialectical materialism). Under that view, communism was not a desirable “state of affairs,” but the inevitable consequence of social forces as they then existed. As Marx and Friedrich Engels wrote in The Communist Manifesto:

Communism is for us not a state of affairs which is to be established, an ideal to which reality [will] have to adjust itself. We call communism the real movement which abolishes the present state of things. The conditions of this movement result from the premises now in existence.

Thus, following the ineluctable laws of history, which Marx claimed to have discovered, capitalism would inevitably come to be replaced by socialism and, subsequently, communism. Under socialism, the means of production would be controlled not by individuals interacting in a free market, but by the political process under the aegis of the state, with the corollary that planning would come to substitute for competition as the economy’s steering mechanism. This would then give way to communism: a stateless utopia in which everything would be owned by the community and where there would be no class divisions. This would come about as a result of the interplay of several factors inherent to capitalism, such as the exploitation of the working class and the impossibility of sustained competition.

Per Marx, under capitalism, owners of the means of production (i.e., the capitalists or the bourgeoisie) appropriate the surplus value (i.e., the difference between the sale price of a product and the cost to produce it) generated by workers. Thus, the lower the wages and the longer the working hours of the worker, the greater the profit accrued to the capitalist. This was not an unfortunate byproduct that could be reformed, Marx posited, but a central feature of the system, one that could be remedied only through revolution. Moreover, the laws, culture, media, politics, faith, and other institutions that might ordinarily open alternative avenues to nonviolent resolution of class tensions (the “super-structure”) were themselves byproducts of the underlying material relations of production (“structure” or “base”), and thus served to justify and uphold them.

The Marxian position further held that competition—the lodestar and governing principle of the capitalist economy—was, like the system itself, unsustainable. It would inevitably end up cannibalizing itself. But the claim is a bit more subtle than critics of communism often assume. As Leon Trotsky wrote in the 1939 pamphlet Marxism in Our Time:

Relations between capitalists, who exploit the workers, are defined by competition, which for long endures as the mainspring of capitalist progress.

Two notions embedded in Trotsky’s statement are essential to understanding the Marxian perception of competition. The first is that, since capitalism is exploitative of workers and competition among capitalists is the engine of capitalism, competition is itself effectively a mechanism of exploitation. Capitalists compete through the cheapening of commodities and the subsequent reinvestment of the surplus appropriated from labor into the expansion of productivity. The most exploitative capitalist, therefore, generally has the advantage (this hinges, of course, largely on the validity of the labor theory of value).

At the same time, however, Marxists (including Marx himself) recognized the economic and technological progress brought about through capitalism and competition. This is what Trotsky means when he refers to competition as the “mainspring of capitalist progress” and, by extension, the “historical justification of the capitalist.” The implication is that, if competition were to cease, the entire capitalist edifice and the political philosophy undergirding it (liberalism) would crumble, as well.

Whereas liberalism and competition were intertwined, liberalism and monopoly could not coexist. Instead, monopolists demanded—and, due to their political clout, were able to obtain—an increasingly powerful central state capable of imposing protective tariffs and other measures for their benefit and protection. Trotsky again:

The elimination of competition by monopoly marks the beginning of the disintegration of capitalist society. Competition was the creative mainspring of capitalism and the historical justification of the capitalist. By the same token the elimination of competition marks the transformation of stockholders into social parasites. Competition had to have certain liberties, a liberal atmosphere, a regime of democracy, of commercial cosmopolitanism. Monopoly needs as authoritative government as possible, tariff walls, “its own” sources of raw materials and arenas of marketing (colonies). The last word in the disintegration of monopolistic capital is fascism.

Marxian theory posited that this outcome was destined to happen for two reasons. First, because:

The battle of competition is fought by cheapening of commodities. The cheapness of commodities depends, ceteris paribus, on the productiveness of labor, and this again on the scale of production. Therefore, the larger capital beats the smaller.

In other words, competition stimulated the progressive development of productivity, which depended on the scale of production, which depended, in turn, on firm size. Ultimately, therefore, competition ended up producing a handful of large companies that would subjugate competitors and cannibalize competition. Thus, the more wealth that capitalism generated—and Marx had no doubts that capitalism was a wealth-generating machine—the more it sowed the seeds of its own destruction. Hence:

While stimulating the progressive development of technique, competition gradually consumes, not only the intermediary layers but itself as well. Over the corpses and the semi-corpses of small and middling capitalists, emerges an ever-decreasing number of ever more powerful capitalist overlords. Thus, out of “honest”, “democratic”, “progressive” competition grows irrevocably “harmful”, “parasitic”, “reactionary” monopoly.

The second reason Marxists believed the downfall of capitalism was inevitable is that the capitalists squeezed out of the market by the competitive process would become proletarians, which would create a glut of labor (“a growing reserve army of the unemployed”), which would in turn depress wages. This process of proletarianization, combined with the “revolutionary combination by association” of workers in factories, would raise class consciousness and ultimately lead to the toppling of capitalism and the ushering in of socialism.

Thus, there is a clear nexus in Marxian theory between the end of competition and the end of capitalism (and therefore liberalism), whereby monopoly is deduced from the inherent tendencies of capitalism, and the end of capitalism, in turn, is deduced from the ineluctable advent of monopoly. What follows (i.e., socialism and communism) are collectivist systems that purport to be run according to the principles of solidarity and cooperation (“from each according to his abilities, to each according to his needs”), where there is therefore no place (and no need) for competition. Instead, the Marxian Gemeinschaft would organize the economy along rationalistic lines, substituting centralized command by the state (later, the community) for cut-throat competition, thereby reining in hitherto uncontrollable economic forces in a heroic victory over the chaos and unpredictability of capitalism. This would, of course, also bring about the end of liberalism, with individualism, private property, and other liberal freedoms jettisoned as expressions of bourgeois class interests. Chairman Mao Zedong put it succinctly:

We must affirm anew the discipline of the Party, namely:

1. The individual is subordinate to the organization;

2. The minority is subordinate to the majority.

Competition Under Fascism/Nazism

Formidable as it was, the Marxian attack on liberalism was just one side of the coin. Decades after the articulation of Marxian theory in the mid-19th century, fascism—founded by former socialist Benito Mussolini in 1915—emerged as a militant alternative to both liberalism and socialism/communism.

In essence, fascism was, like communism, unapologetically collectivist. But whereas socialists considered class to be the relevant building block of society, fascists viewed the individual as part of a greater national, racial, and historical entity embodied in the state and its leadership. As Mussolini wrote in his 1932 pamphlet The Doctrine of Fascism:

Anti-individualistic, the Fascist conception of life stresses the importance of the State and accepts the individual only in so far as his interests coincide with those of the State, which stands for the conscience and the universal will of man as a historic entity. It is opposed to classical liberalism […] liberalism denied the State in the name of the individual; Fascism reasserts.

Accordingly, fascism leads to an amalgamation of state and individual that is not just a politico-economic arrangement where the latter formally submits to the former, but a conception of life. This worldview is, of course, diametrically opposed to core liberal principles, such as personal freedom, individualism, and the minimal state. And sure enough, fascists saw these liberal values as signs of civilizational decadence (as expressed most notably by Oswald Spengler in The Decline of the West—a book that greatly inspired Nazi ideology). Instead, they posited that the only freedom worthy of the name existed within the state; that peace and cosmopolitanism were illusory; and that man was man only by virtue of his membership and contribution to nation and race.

But fascism was also opposed to Marxian socialism. At its most basic, the schism between the two worldviews can be understood in terms of the fascist rejection of materialism, which was a centerpiece of Marxian thought. Fascists denied the equivalence of material well-being and happiness, instead viewing man as fulfilled by hardship, war, and by playing his part in the grand tapestry of history, whose real protagonists were nation-states. While admitting the importance of economic life—e.g., of efficiency and technological innovation—fascists denied that material relations unequivocally determined the course of history, insisting instead on the preponderance of spiritual and heroic acts (i.e., acts with no economic motive) as drivers of social change. “Sanctity and heroism,” Mussolini wrote, are at the root of the fascist belief system, not material self-interest.  

This belief system also extended to economic matters, including competition. The Third Reich respected private property rights to some degree—among other reasons, because Adolf Hitler believed it would encourage creative competition and innovation. The Nazis’ overarching principle, however, was that all economic activity and all private property ultimately be subordinated to the “common good,” as interpreted by the state. In the words of Hitler:

I want everyone to keep what he has earned subject to the principle that the good of the community takes priority over that of the individual. But the State should retain control; every owner should feel himself to be an agent of the State. […] The Third Reich will always retain the right to control property owners.

The solution was a totalitarian system of government control that maintained private enterprise and profit incentives as spurs to efficient management, but narrowly circumscribed the traditional freedom of entrepreneurs. Economic historians Christoph Buchheim and Jonas Scherner have characterized the Nazis’ economic system as a “state-directed private ownership economy,” a partnership in which the state was the principal and the business was the agent. Economic activity would be judged according to the criteria of “strategic necessity and social utility,” encompassing an array of social, political, practical, and ideological goals. Some have referred to this as the “primacy of politics over economics” approach.

For instance, in supervising cross-border acquisitions (today’s mergers), the state “sought to suppress purely economic motives and to substitute some rough notion of ‘racial political’ priority when supervising industrial acquisitions or controlling existing German subsidiaries.” The Reich selectively applied the 1933 Act for the Formation of Compulsory Cartels in regulating cartels that had been formed under the Weimar Republic with the Cartel Act of 1923. But the legislation also appears to have been applied to protect small and medium-sized enterprises, an important source of the party’s political support, from ruinous competition. This is reminiscent of German industrialist and Nazi supporter Gustav Krupp’s “Third Form”: 

Between “free” economy and state capitalism there is a third form: the economy that is free from obligations, but has a sense of inner duty to the state. 

In short, competition and individual achievement had to be balanced with cooperation, mediated by the self-appointed guardians of the “general interest.” In contrast with Marxian socialism/communism, the long-term goal of the Nazi regime was not to abolish competition, but to harness it to serve the aims of the regime. As Franz Böhm—cofounder, with Walter Eucken, of the Freiburg School and its theory of “ordoliberalism”—wrote in his advice to the Nazi government:

The state regulatory framework gives the Reich economic leadership the power to make administrative commands applying either the indirect or the direct steering competence according to need, functionality, and political intent. The leadership may go as far as it wishes in this regard, for example, by suspending competition-based economic steering and returning to it when appropriate. 

Conclusion

After a century of expansion, opposition to classical liberalism started to coalesce around two nodes: Marxism on the left, and fascism/Nazism on the right. What ensued was a civilizational crisis of material, social, and spiritual proportions that, at its most basic level, can be understood as an iteration of the perennial struggle between individualism and collectivism. On the one hand, liberals like J.S. Mill had argued forcefully that “the only freedom which deserves the name, is that of pursuing our own good in our own way.” In stark contrast, Mussolini wrote that “fascism stands for liberty, and for the only liberty worth having, the liberty of the state and of the individual within the state.” The former position is rooted in a humanist view that enshrines the individual at the center of the social order; the latter in a communitarian ideal that sees him as subordinate to forces that supersede him.

As I have explained in the previous post, the philosophical undercurrents of both positions are ancient. A more immediate precursor of the collectivist standpoint, however, can be found in German idealism and particularly in Georg Wilhelm Friedrich Hegel. In The Philosophy of Right, he wrote:

A single person, I need hardly say, is something subordinate, and as such he must dedicate himself to the ethical whole. Hence, if the state claims life, the individual must surrender it. All the worth which the human being possesses […] he possesses only through the state.

This broader clash is reflected, directly and indirectly, in notions of competition and competition regulation. Classical liberals sought to liberate competition from regulatory fetters. Marxism “predicted” its downfall and envisioned a social order without it. Fascism/Nazism sought to wrest it from the hands of greedy self-interest and mold it to serve the many and the fluctuating objectives of the state and its vision of the common good.

In the next post, I will discuss how this has influenced the neoliberal philosophy that is still at the heart of many competition systems today. I will argue that two strands of neoliberalism emerged, each of which attempted to resolve the challenge of collectivism in distinct ways.

One strand, associated with a continental understanding of liberalism and epitomized by the Freiburg School, sought to strike a “mostly liberal” compromise between liberalism and collectivism—a “Third Way” between opposites. In doing so, however, it may have indulged in some of the same collectivist vices that it initially sought to avoid, such as vast government discretion and the imposition of myriad “higher” goals on society.

The other strand, represented by Anglo-American liberalism of the sort espoused by Friedrich Hayek and Milton Friedman, was less conciliatory. It attempted to reform, rather than reinvent, liberalism. Their prescriptions involved creating a strong legal framework conducive to economic efficiency against a background of limited government discretion, freedom, and the rule of law.

In a new paper, Giuseppe Colangelo and Oscar Borgogno investigate whether antitrust policy is sufficiently flexible to keep up with the dynamics of digital app stores, and whether regulatory interventions are required in order to address their unique features. The authors summarize their findings in this blog post.

App stores are at the forefront of policy debates surrounding digital markets. The gatekeeping position of Apple and Google in the App Store and Google Play Store, respectively, and related concerns about the companies’ rule-setting and dual role, have been the subject of market studies launched by the Australian Competition and Consumer Commission (ACCC), the Netherlands Authority for Consumers & Markets (ACM), the U.K. Competition and Markets Authority (CMA), the Japan Federal Trade Commission (JFTC), and the U.S. House of Representatives.

Likewise, the terms and conditions for accessing app stores—such as in-app purchasing rules, restrictions on freedom of choice for smartphone payment apps, and near field communication (NFC) limitations—face scrutiny from courts and antitrust authorities around the world.

Finally, legislative initiatives envisage obligations explicitly addressed to app stores. Notably, the European Digital Markets Act (DMA) and some U.S. bills (e.g., the American Innovation and Choice Online Act and the Open App Markets Act, both of which are scheduled to be marked up Jan. 20 by the Senate Judiciary Committee) prohibit designated platforms from, for example: discriminating among users by engaging in self-preferencing and applying unfair access conditions; preventing users from sideloading and uninstalling pre-installed apps; impeding data portability and interoperability; or imposing anti-steering provisions. Likewise, South Korea has recently prohibited app-store operators in dominant market positions from forcing payment systems upon content providers and inappropriately delaying the review of, or deleting, mobile content from app markets.

Despite their differences, these international legislative initiatives do share the same aims and concerns. By and large, they question the role of competition law in the digital economy. In the case of app stores, these regulatory interventions attempt to introduce a neutrality regime, with the aim of increasing contestability, facilitating the possibility of switching by users, tackling conflicts of interests, and addressing imbalances in the commercial relationship. Ultimately, these proposals would treat online platforms as akin to common carriers or public utilities.

All of these initiatives assume antitrust is currently failing, because competition rules apply ex post and require an extensive investigation on a case-by-case basis. But is that really the case?

Platform and Device Neutrality Regime

Focusing on the content of the European, German, and U.S. legislative initiatives, the neutrality regime envisaged for app stores would introduce obligations in terms of both device and platform neutrality. The former includes provisions on app uninstalling, sideloading, app switching, access to technical functionality, and the possibility of changing default settings. The latter entails data portability and interoperability obligations, and the ban on self-preferencing, Sherlocking, and unfair access conditions.

App Store Obligations: Comparison of EU, German, and U.S. Initiatives

Antitrust v. Regulation

Despite the growing consensus regarding the need to rely on ex ante regulation to govern digital markets and tackle the practices of large online platforms, recent and ongoing antitrust investigations demonstrate that standard competition law still provides a flexible framework to scrutinize several practices sometimes described as new and peculiar to app stores.

This is particularly true in Europe, where the antitrust framework grants significant leeway to antitrust enforcers relative to the U.S. scenario, as illustrated by the recent Google Shopping decision.

Indeed, considering legislative proposals to modernize antitrust law and to strengthen its enforcement, the U.S. House Judiciary Antitrust Subcommittee, along with some authoritative scholars, has suggested emulating the European model—imposing particular responsibility on dominant firms through the notion of abuse of dominant position and overriding several Supreme Court decisions in order to clarify the prohibitions on monopoly leveraging, predatory pricing, denial of essential facilities, refusals to deal, and tying.

By contrast, regulation appears better suited to support interventions intended to implement industrial-policy objectives. This applies, in particular, to provisions prohibiting app stores from impeding or restricting sideloading, app uninstalling, the possibility of choosing third-party apps and app stores as defaults, as well as provisions that would mandate data portability and interoperability.

However, such regulatory proposals may ultimately harm consumers. Indeed, by questioning the core of digital platform business models and affecting their governance design, these interventions entrust public authorities with mammoth tasks that could ultimately jeopardize the profitability of app-store ecosystems. They also overlook the differences that may exist between the business models of different platforms, such as Google and Apple’s app stores.

To make matters worse, the difficulties encountered by regulators that have imposed product-design remedies on firms suggest that regulators may struggle to craft feasible and effective solutions. For instance, when the European General Court found that Google favored its own services in the Google Shopping case, it noted that this finding rested on the differential positioning and display of Shopping Units when compared to generic results. As a consequence, it could be argued that Google’s proposed auction remedy (whereby Google would compete with rivals for Shopping box placement) is compliant with the Court’s ruling because there is no discrimination, regardless of the fact that Google might ultimately outbid its rivals (see here).

Finally, the neutrality principle cannot be transposed perfectly to all online platforms. Indeed, the workings of the app-discovery and distribution markets differ from broadband networks, as rankings and mobile services by definition involve some form of continuous selection and differentiated treatment to optimize the mobile-customer experience.

For all these reasons, our analysis suggests that antitrust law provides a less intrusive and more individualized approach, which would eventually benefit consumers by safeguarding quality and innovation.

As a new year dawns, the Biden administration remains fixated on illogical, counterproductive “big is bad” nostrums.

Noted economist and former Clinton Treasury Secretary Larry Summers correctly stressed recently that using antitrust to fight inflation represents “science denial.”

In his extended Twitter thread, Summers notes that labor shortages are the primary cause of inflation over time and that lowering tariffs, paring back import restrictions (such as the Buy America Act), and reducing regulatory delays are vital to combat inflation.

Summers’ points, of course, are right on the mark. Indeed, labor shortages, supply-chain issues, and a dramatic increase in regulatory burdens have been key to the dramatic run-up of prices during the Biden administration’s first year. Reducing the weight of government on the private sector and thereby enhancing incentives for increased investment, labor participation, and supply are the appropriate weapons to slow price rises and incentivize economic growth.

More specifically, administration policies can be pinpointed as the cause, not the potential solution to, rapid price increases in specific sectors, particularly the oil and gas industry. As I recently commented, policies that disincentivize new energy production, and fail to lift excessive regulatory burdens, have been a key factor in sparking rises in gasoline prices. Administration claims that anticompetitive activity is behind these price increases should be discounted. New Federal Trade Commission (FTC) investigations of oil and gas companies would waste resources and increase already large governmental burdens on those firms.

The administration, nevertheless, appears committed to using antitrust as an anti-inflationary “tool” against “big business” (or perhaps, really, as a symbolic hammer to shift blame to the private sector for rising prices). Recent pronouncements about combatting “big meat” are a case in point.

The New ‘Big Meat’ Crusade

Part of the administration’s crusade against “big meat” involves providing direct government financial support for favored firms. A U.S. Department of Agriculture (USDA) plan to spend up to $1 billion to assist smaller meat processors is a subsidy that artificially favors one group of competitors. This misguided policy, which bears the scent of special-interest favoritism, wastes taxpayer dollars and distorts free-market outcomes. It will do nothing to cure supply and regulatory problems that affect rising meat prices. It will, however, misallocate resources.

The other key aspect of the big meat initiative smacks more of problematic, old-style, economics-free antitrust. It centers on: (1) threatening possible antitrust actions against four large meat processors based principally on their size and market share; and (2) initiating a planned rulemaking under the Packers and Stockyards Act. (That rulemaking was foreshadowed by language in the July 2021 Biden Administration Executive Order on Competition.)

The administration’s apparent focus on the “dominance” of four large meatpacking firms (which have the temerity to collectively hold greater than 50% market shares in the hog, cattle, and chicken sectors) and the 120% jump in their gross profits since the pandemic began is troubling. It echoes the structuralist “big is bad” philosophy of the 1950s and 1960s. In and of itself, large market share is not, of course, an antitrust problem, nor are large gross profits. Rather, those metrics typically signal a particular firm’s superior efficiency relative to the competition. (Gross profit “reflects the efficiency of a business in terms of making use of its labor, raw material and other supplies.”) Antitrust investigations of firms merely because they are large would inefficiently bloat those companies’ costs and discourage them from investing in new capacity and cost-reducing production improvements. This would tend to raise, not lower, prices charged by major firms. It thus would lower consumer welfare, a result at odds with the guiding policy goal of antitrust, which is to promote consumer welfare.

The administration’s announcement that the USDA “will also propose rules this year to strengthen enforcement of the Packers and Stockyards Act” is troublesome. That act, dating back to 1921, uses broad terms that extend beyond antitrust law (such as a prohibition on “giv[ing] any undue or unreasonable preference or advantage to any particular person”) and threatens to penalize efficient conduct by individual competitors. “Ratcheting up” enforcement under this act also could undermine business efficiency and paradoxically raise, not lower, prices.

Obviously, the specifics of the forthcoming proposed rules have not yet been revealed. Nevertheless, the administration’s “big is bad” approach to “big meat” strongly signals that one may expect rules to generate new costly and inefficient restrictions on meat-packer conduct. Such restrictions, of course, would be at odds with vibrant competition and consumer-welfare enhancement.    

This is not to say, of course, that meat packing should be immune from antitrust attention. Such scrutiny, however, should not be transfixed by “big is bad” concerns. Rather, it should center on the core antitrust goal of combatting harmful business conduct that unreasonably restrains competition and reduces consumer welfare. A focus on ferreting out collusive agreements among meat processors, such as price-fixing schemes, should have pride of place. The U.S. Justice Department’s already successful ongoing investigation into price fixing in the broiler-chicken industry is precisely the sort of antitrust initiative on which the administration should expend its scarce enforcement resources.

Conclusion

In sum, the Biden administration could do a lot of good in antitrust land if it would only set aside its nostalgic “big is bad” philosophy. It should return to the bipartisan enlightened understanding that antitrust is a consumer-welfare prescription that is based on sound and empirically based economics and is concerned with economically inefficient conduct that softens or destroys competition.

If it wants to stray beyond mere enforcement, the administration could turn its focus toward dismantling welfare-reducing anticompetitive federal regulatory schemes, rather than adding to private-sector regulatory burdens. For more about how to do this, we recommend that the administration consult a just-released Mercatus Center policy brief that Andrew Mercado and I co-authored.

Others already have noted that the Federal Trade Commission’s (FTC) recently released 6(b) report on the privacy practices of Internet service providers (ISPs) fails to comprehend that widespread adoption of privacy-enabling technology—in particular, Hypertext Transfer Protocol Secure (HTTPS) and DNS over HTTPS (DoH), but also the use of virtual private networks (VPNs)—largely precludes ISPs from seeing what their customers do online.
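The visibility point can be made concrete with a toy model of what a passive on-path observer sees under each configuration. This is a deliberately simplified sketch with invented field names; it ignores traffic-analysis inference, Encrypted Client Hello, and VPNs (which would hide even the destination from the ISP):

```python
def observer_view(https: bool, doh: bool) -> set:
    """Simplified model of what a passive on-path observer, such as an
    ISP, can read from a subscriber's web traffic."""
    visible = {"destination_ip"}             # routing metadata is always exposed
    if not doh:
        visible.add("hostname_from_dns")     # plaintext DNS reveals every lookup
    if https:
        # TLS encrypts the request and response, but the hostname still
        # appears in the handshake's SNI field (absent Encrypted Client Hello).
        visible.add("hostname_from_sni")
    else:
        # Plain HTTP: the full URL and page content travel in the clear.
        visible |= {"hostname_from_headers", "url_path", "page_content"}
    return visible

# With HTTPS plus DoH, the ISP still learns which servers a subscriber
# contacts, but not which pages were requested or what they contained.
assert observer_view(https=True, doh=True) == {"destination_ip", "hostname_from_sni"}
assert "page_content" in observer_view(https=False, doh=False)
```

Even in the most private configuration, the ISP can still see which hosts a subscriber contacts; what it loses is visibility into specific pages and content, which is precisely the granular behavioral data that targeted advertising typically relies on.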

But a more fundamental problem with the report lies in its underlying assumption that targeted advertising is inherently nefarious. Indeed, much of the report highlights not actual violations of the law by the ISPs, but “concerns” that they could use customer data for targeted advertising much like Google and Facebook already do. The final subheading before the report’s conclusion declares: “Many ISPs in Our Study Can Be At Least As Privacy-Intrusive as Large Advertising Platforms.”

The report does not elaborate on why it would be bad for ISPs to enter the targeted advertising market, which is particularly strange given the public focus regulators have shone in recent months on the supposed dominance of Google, Facebook, and Amazon in online advertising. As the International Center for Law & Economics (ICLE) has argued in past filings on the issue, there simply is no justification to apply sector-specific regulations to ISPs for the mere possibility that they will use customer data for targeted advertising.

ISPs Could Be Competition for the Digital Advertising Market

It is ironic to witness FTC warnings about ISPs engaging in targeted advertising even as there are open antitrust cases against Google for its alleged dominance of the digital advertising market. In fact, news reports suggest the U.S. Justice Department (DOJ) is preparing to join the antitrust suits against Google brought by state attorneys general. An obvious upshot of ISPs engaging in a larger amount of targeted advertising is that they could serve as a potential source of competition for Google, Facebook, and Amazon.

Despite the fears raised in the 6(b) report of rampant data collection for targeted ads, ISPs are, in fact, just a very small part of the $152.7 billion U.S. digital advertising market. As the report itself notes: “in 2020, the three largest players, Google, Facebook, and Amazon, received almost two-thirds of all U.S. digital advertising,” while Verizon pulled in just 3.4% of U.S. digital advertising revenues in 2018.

If the 6(b) report is correct that ISPs have access to troves of consumer data, it raises the question of why they don’t enjoy a bigger share of the digital advertising market. It could be that ISPs have other reasons not to engage in extensive advertising. Internet service provision is a two-sided market. ISPs could (and, over the years in various markets, some have) rely on advertising to subsidize Internet access. That they instead rely primarily on charging users directly for subscriptions may tell us something about prevailing demand on either side of the market.

Regardless of the reasons, the fact that ISPs have little presence in digital advertising suggests that it would be a misplaced focus for regulators to pursue industry-specific privacy regulation to crack down on ISP data collection for targeted advertising.

What’s the Harm in Targeted Advertising, Anyway?

At the heart of the FTC report is the commission’s contention that “advertising-driven surveillance of consumers’ online activity presents serious risks to the privacy of consumer data.” In Part V.B of the report, five of the six risks the FTC lists as associated with ISP data collection are related to advertising. But the only argument the report puts forth for why targeted advertising would be inherently pernicious is the assertion that it is contrary to user expectations and preferences.

As noted earlier, in a two-sided market, targeted ads could allow one side of the market to subsidize the other side. In other words, ISPs could engage in targeted advertising in order to reduce the price of access to consumers on the other side of the market. This is, indeed, one of the dominant models throughout the Internet ecosystem, so it wouldn’t be terribly unusual.

Taking away ISPs’ ability to engage in targeted advertising—particularly if it is paired with rumored net neutrality regulations from the Federal Communications Commission (FCC)—would necessarily put upward pricing pressure on the sector’s remaining revenue stream: subscriber fees. With bridging the so-called “digital divide” (i.e., building out broadband to rural and other unserved and underserved markets) a major focus of the recently enacted infrastructure spending package, it would be counterproductive to simultaneously take steps that would make Internet access more expensive and less accessible.
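The upward pricing pressure described above is simple arithmetic, and can be sketched in a few lines of Python (all figures below are hypothetical, chosen only to illustrate the mechanism, not drawn from any ISP's actual books):

```python
# Back-of-the-envelope sketch of the two-sided logic: if regulation removes
# per-user ad revenue, the subscription fee must rise for the ISP to recover
# the same per-user cost. All numbers are hypothetical.

def breakeven_subscription(cost_per_user, ad_revenue_per_user):
    """Subscription fee needed to cover per-user cost net of ad revenue."""
    return max(cost_per_user - ad_revenue_per_user, 0.0)

cost = 50.0  # hypothetical monthly cost of serving one subscriber
with_ads = breakeven_subscription(cost, ad_revenue_per_user=10.0)
without_ads = breakeven_subscription(cost, ad_revenue_per_user=0.0)
print(with_ads, without_ads)  # 40.0 50.0
```

The point is not the specific numbers but the direction of the effect: every dollar of foreclosed ad revenue must be recovered from subscribers or absorbed as losses.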

Even if the FTC were right that data collection for targeted advertising poses the risk of consumer harm, the report fails to justify why a regulatory scheme should apply solely to ISPs when they are such a small part of the digital advertising marketplace. Sector-specific regulation only makes sense if the FTC believes that ISPs are uniquely opaque among data collectors with respect to their collection practices.

Conclusion

The sector-specific approach implicitly endorsed by the 6(b) report would limit competition in the digital advertising market, even as there are already legal and regulatory inquiries into whether that market is sufficiently competitive. The report also fails to make the case that data collection for targeted advertising is inherently bad, or uniquely bad when done by an ISP.

There may or may not be cause for comprehensive federal privacy legislation, depending on whether it would pass cost-benefit analysis, but there is no reason to focus on ISPs alone. The FTC needs to go back to the drawing board.

Why do digital industries routinely lead to one company having a very large share of the market (at least if one defines markets narrowly)? To anyone familiar with competition policy discussions, the answer might seem obvious: network effects, scale-related economies, and other barriers to entry lead to winner-take-all dynamics in platform industries. Accordingly, it is believed that the first platform to successfully unlock a given online market enjoys a decisive first-mover advantage.

This narrative has become ubiquitous in policymaking circles. Thinking of this sort notably underpins high-profile reports on competition in digital markets (here, here, and here), as well as ensuing attempts to regulate digital platforms, such as the draft American Innovation and Choice Online Act and the EU’s Digital Markets Act.

But are network effects and the like the only ways to explain why these markets look like this? While there is no definitive answer, scholars routinely overlook an alternative explanation that tends to undercut the narrative that tech markets have become non-contestable.

The alternative model is simple: faced with zero prices and the almost complete absence of switching costs, users have every reason to join their preferred platform. If user preferences are relatively uniform and one platform has a meaningful quality advantage, then there is every reason to expect that most consumers will join the same one—even though the market remains highly contestable. On the other side of the equation, because platforms face very few capacity constraints, there are few limits to a given platform’s growth. As will be explained throughout this piece, this intuition is as old as economics itself.
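This tipping intuition can be made concrete with a minimal sketch (the platform names and quality scores below are invented for illustration): when prices are zero, switching is free, and preferences are uniform, every user makes the same choice, and the market tips to a single platform while remaining perfectly contestable.

```python
# Toy model: with zero prices and no switching costs, each user simply joins
# the platform they rate highest. Platform names and quality scores are
# hypothetical.

def choose_platform(qualities):
    """Return the platform with the highest quality score."""
    return max(qualities, key=qualities.get)

qualities = {"A": 0.90, "B": 0.85, "C": 0.80}  # hypothetical quality scores
users = 1_000
shares = {name: 0 for name in qualities}
# Uniform preferences: every user makes the same choice, so the market
# "tips" to platform A even though nothing stops users from leaving tomorrow.
for _ in range(users):
    shares[choose_platform(qualities)] += 1
print(shares)  # {'A': 1000, 'B': 0, 'C': 0}
```

Note that concentration here is an outcome of frictionless choice, not of any barrier: if B's quality score edged above A's, the whole market would tip the other way just as quickly.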

The Bertrand Paradox

In 1883, French mathematician Joseph Bertrand published a powerful critique of two of the most high-profile economic thinkers of his time: the late Antoine Augustin Cournot and Léon Walras (it would be another seven years before Alfred Marshall published his famous principles of economics).

Bertrand criticized several of Cournot and Walras’ widely accepted findings. This included Cournot’s conclusion that duopoly competition would lead to prices above marginal cost—or, in other words, that duopolies were imperfectly competitive.

By reformulating the problem slightly, Bertrand arrived at the opposite conclusion. He argued that each firm’s incentive to undercut its rival would ultimately lead to marginal cost pricing, and one seller potentially capturing the entire market:

There is a decisive objection [to Cournot’s model]: According to his hypothesis, no [supracompetitive] equilibrium is possible. There is no limit to price decreases; whatever the joint price being charged by firms, a competitor could always undercut this price and, with few exceptions, attract all consumers. If the competitor is allowed to get away with this [i.e. the rival does not react], it will double its profits.

This result is mainly driven by the assumption that, unlike in Cournot’s model, firms can immediately respond to their rival’s chosen price/quantity. In other words, Bertrand implicitly framed the competitive process as price competition, rather than quantity competition (under price competition, firms do not face any capacity constraints and they cannot commit to producing given quantities of a good):

If Cournot’s calculations mask this result, it is because of a remarkable oversight. Referring to them as D and D’, Cournot deals with the quantities sold by each of the two competitors and treats them as independent variables. He assumes that if one were to change by the will of one of the two sellers, the other one could remain fixed. The opposite is evidently true.

This later came to be known as the “Bertrand paradox”—the notion that duopoly-market configurations can produce the same outcome as perfect competition (i.e., P=MC).
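Bertrand's undercutting dynamic can be illustrated with a short simulation (the starting price, marginal cost, and price increment are hypothetical): each firm shaves the prevailing price so long as the cut remains profitable, and the process bottoms out exactly at marginal cost.

```python
# Sketch of Bertrand-style undercutting: two symmetric firms take turns
# shaving the market price until no further profitable cut exists. All
# numbers are illustrative.

def bertrand_undercutting(start_price, marginal_cost, step=0.01):
    """Return the price path as firms alternately undercut each other."""
    price = start_price
    path = [price]
    # A firm will undercut as long as the resulting price still covers cost.
    while price - step >= marginal_cost:
        price = round(price - step, 2)
        path.append(price)
    return path

path = bertrand_undercutting(start_price=1.10, marginal_cost=1.00)
print(path[-1])  # the final price settles at marginal cost: 1.0
```

With just two firms and no capacity constraints, the simulated price lands at P=MC—the perfectly competitive outcome that gives the paradox its name.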

But while Bertrand’s critique was ostensibly directed at Cournot’s model of duopoly competition, his underlying point was much broader. Above all, Bertrand seemed preoccupied with the notion that expressing economic problems mathematically merely gives them a veneer of accuracy. In that sense, he was one of the first economists (at least to my knowledge) to argue that the choice of assumptions has a tremendous influence on the predictions of economic models, potentially rendering them unreliable:

On other occasions, Cournot introduces assumptions that shield his reasoning from criticism—scholars can always present problems in a way that suits their reasoning.

All of this is not to say that Bertrand’s predictions regarding duopoly competition necessarily hold in real-world settings; evidence from experimental settings is mixed. Instead, the point is epistemological. Bertrand’s reasoning was groundbreaking because he ventured that market structures are not the sole determinants of consumer outcomes. More broadly, he argued that assumptions regarding the competitive process hold significant sway over the results that a given model may produce (and, as a result, over normative judgments concerning the desirability of given market configurations).

The Theory of Contestable Markets

Bertrand is certainly not the only economist to have suggested market structures alone do not determine competitive outcomes. In the early 1980s, William Baumol (and various co-authors) went one step further. Baumol argued that, under certain conditions, even monopoly market structures could deliver perfectly competitive outcomes. This thesis thus rejected the Structure-Conduct-Performance (“SCP”) Paradigm that dominated policy discussions of the time.

Baumol’s main point was that industry structure is not the main driver of market “contestability,” which is the key determinant of consumer outcomes. In his words:

In the limit, when entry and exit are completely free, efficient incumbent monopolists and oligopolists may in fact be able to prevent entry. But they can do so only by behaving virtuously, that is, by offering to consumers the benefits which competition would otherwise bring. For every deviation from good behavior instantly makes them vulnerable to hit-and-run entry.

For instance, it is widely accepted that “perfect competition” leads to low prices because firms are price-takers; if one does not sell at marginal cost, it will be undercut by rivals. Observers often assume this is due to the number of independent firms on the market. Baumol suggests this is wrong. Instead, the result is driven by the sanction that firms face for deviating from competitive pricing.

In other words, numerous competitors are a sufficient, but not necessary, condition for competitive pricing. Monopolies can produce the same outcome when there is a credible threat of entry and an incumbent’s deviation from competitive pricing would be sanctioned. This is notably the case when barriers to entry are extremely low.

Take this hypothetical example from the world of cryptocurrencies. It is largely irrelevant to a user whether there are few or many crypto exchanges on which to trade coins, nonfungible tokens (NFTs), etc. What does matter is that there is at least one exchange that meets one’s needs in terms of both price and quality of service. This could happen because there are many competing exchanges, or because a failure to meet those needs by the few (or even one) exchange that does exist would attract the entry of others to which users could readily switch—thus keeping the behavior of the existing exchanges in check.
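The same entry-threat logic can be put in a toy model (the candidate prices and cost figures are invented; this is a sketch of the intuition, not Baumol's formal analysis): when entry is costless, the only entry-proof price even a monopolist can sustain is the competitive one.

```python
# Toy model of a contestable monopoly: a single incumbent chooses a price,
# but a rival can enter and capture the whole market by undercutting.
# Candidate prices, unit cost, and entry costs are all hypothetical.

def incumbent_best_price(candidate_prices, unit_cost, entry_cost=0.0):
    """Pick the profit-maximizing price that does not invite entry.

    An entrant undercuts profitably whenever the incumbent's price exceeds
    unit_cost + entry_cost, so sustainable prices are capped there.
    """
    sustainable = [p for p in candidate_prices if p <= unit_cost + entry_cost]
    # Among entry-proof prices, the incumbent prefers the highest margin.
    return max(sustainable)

prices = [1.00, 1.25, 1.50, 2.00]
# With negligible entry costs, even a monopolist prices at cost.
print(incumbent_best_price(prices, unit_cost=1.00, entry_cost=0.0))   # 1.0
# With a high entry barrier, supracompetitive prices become sustainable.
print(incumbent_best_price(prices, unit_cost=1.00, entry_cost=0.60))  # 1.5
```

The market structure (one firm) is identical in both runs; only the cost of entry changes, and with it the incumbent's pricing discipline.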

This has far-reaching implications for antitrust policy, as Baumol was quick to point out:

This immediately offers what may be a new insight on antitrust policy. It tells us that a history of absence of entry in an industry and a high concentration index may be signs of virtue, not of vice. This will be true when entry costs in our sense are negligible.

Given all this, Baumol surmised that industry structure must be driven by endogenous factors—such as firms’ cost structures—rather than the intensity of competition that they face. For instance, scale economies might make monopoly (or another structure) the most efficient configuration in some industries. But so long as rivals can sanction incumbents for failing to compete, the market remains contestable. Accordingly, at least in some industries, both the most efficient and the most contestable market configuration may entail some level of concentration.

To put this last point in even more concrete terms, online platform markets may have features that make scale (and large market shares) efficient. If so, there is every reason to believe that competition could lead to more, not less, concentration. 

How Contestable Are Digital Markets?

The insights of Bertrand and Baumol have important ramifications for contemporary antitrust debates surrounding digital platforms. Indeed, it is critical to ascertain whether the (relatively) concentrated market structures we see in these industries are a sign of superior efficiency (and are consistent with potentially intense competition), or whether they are merely caused by barriers to entry.

The barrier-to-entry explanation has been repeated ad nauseam in recent scholarly reports, competition decisions, and pronouncements by legislators. There is thus little need to restate that thesis here. On the other hand, the contestability argument is almost systematically ignored.

Several factors suggest that online platform markets are far more contestable than critics routinely make them out to be.

First and foremost, consumer switching costs are extremely low for most online platforms. To cite but a few examples: Changing your default search engine requires at most a couple of clicks; joining a new social network can be done by downloading an app and importing your contacts; and buying from an alternative online retailer is almost entirely frictionless, thanks to intermediaries such as PayPal.

These zero or near-zero switching costs are compounded by consumers’ ability to “multi-home.” In simple terms, joining TikTok does not require users to close their Facebook account. And the same applies to other online services. As a result, there is almost no opportunity cost to join a new platform. This further reduces the already tiny cost of switching.

Decades of app development have greatly improved the quality of applications’ graphical user interfaces (GUIs), to such an extent that costs to learn how to use a new app are mostly insignificant. Nowhere is this more apparent than for social media and sharing-economy apps (it may be less true for productivity suites that enable more complex operations). For instance, remembering a couple of intuitive swipe motions is almost all that is required to use TikTok. Likewise, ridesharing and food-delivery apps merely require users to be familiar with the general features of other map-based applications. It is almost unheard of for users to complain about usability—something that would have seemed impossible in the early 21st century, when complicated interfaces still plagued most software.

A second important argument in favor of contestability is that, by and large, online platforms face only limited capacity constraints. In other words, platforms can expand output rapidly (though not necessarily costlessly).

Perhaps the clearest example of this is the sudden rise of the Zoom service in early 2020. As a result of the COVID pandemic, Zoom went from around 10 million daily active users in early 2020 to more than 300 million by late April 2020. Despite being a relatively data-intensive service, Zoom did not struggle to meet this new demand from a more than 30-fold increase in its user base. The service never had to turn down users, reduce call quality, or significantly increase its price. In short, capacity largely followed demand for its service. Online industries thus seem closer to the Bertrand model of competition, where the best platform can almost immediately serve any consumers that demand its services.

Conclusion

Of course, none of this should be construed as a claim that online markets are perfectly contestable. The central point is, instead, that critics are too quick to assume they are not. Take the following examples.

Scholars routinely cite the putatively strong concentration of digital markets to argue that big tech firms do not face strong competition, but this is a non sequitur. As Bertrand and Baumol (and others) show, what matters is not whether digital markets are concentrated, but whether they are contestable. If a superior rival could rapidly gain user traction, this alone will discipline the behavior of incumbents.

Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because they are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.

Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable that they constitute barriers to entry).

Finally, and perhaps most importantly, this piece has argued that many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and capacity constraints are but two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.

In short, critics’ failure to meaningfully grapple with these issues serves to shape the prevailing zeitgeist in tech-policy debates. Cournot and Bertrand’s intuitions about oligopoly competition may be more than a century old, but they continue to be tested empirically. It is about time those same standards were applied to tech-policy debates.