
Federal Trade Commission (FTC) Chair Lina Khan recently joined with FTC Commissioner Rebecca Slaughter to file a “written submission on the public interest” in the U.S. International Trade Commission (ITC) Section 337 proceeding concerning imports of certain cellular-telecommunications equipment covered by standard essential patents (SEPs). SEPs are patents that “read on” technology adopted for inclusion in a standard. Regrettably, the commissioners’ filing embodies advice that, if followed, would effectively preclude Section 337 relief to SEP holders. Such a result would substantially reduce the value of U.S. SEPs and thereby discourage investments in standards that help drive American innovation.

Section 337 of the Tariff Act authorizes the ITC to issue “exclusion orders” blocking the importation of products that infringe U.S. patents, subject to certain “public interest” exceptions. Specifically, before issuing an exclusion order, the ITC must consider:

  1. the public health and welfare;
  2. competitive conditions in the U.S. economy;
  3. production of like or directly competitive articles in the United States; and
  4. U.S. consumers.

The Khan-Slaughter filing urges the ITC to consider the impact that issuing an exclusion order against a willing licensee implementing a standard would have on competition and consumers in the United States. The filing concludes that “where a complainant seeks to license and can be made whole through remedies in a different U.S. forum [a federal district court], an exclusion order barring standardized products from the United States will harm consumers and other market participants without providing commensurate benefits.”

Khan and Slaughter’s filing takes a one-dimensional view of the competitive effects of SEP rights. In short, it emphasizes that:

  1. standardization empowers SEP owners to “hold up” licensees by demanding more for a technology than it would have been worth, absent the standard;
  2. “hold ups” lead to higher prices and may discourage standard-setting activities and collaboration, which can delay innovation;
  3. many standard-setting organizations require FRAND (fair, reasonable, and non-discriminatory) licensing commitments from SEP holders to preclude hold-up and encourage standards adoption;
  4. FRAND commitments ensure that SEP licenses will be available at rates limited to the SEP’s “true” value;
  5. the threat of ITC exclusion orders would empower SEP holders to coerce licensees into paying “anticompetitively high” supra-FRAND licensing rates, discouraging investments in standard-compliant products;
  6. inappropriate exclusion orders harm consumers in the short term by depriving them of desired products and, in the longer run, through reduced innovation, competition, quality, and choice;
  7. thus, where the standard implementer is a “willing licensee,” an exclusion order would be contrary to the public interest; and
  8. as a general matter, exclusionary relief is incongruent and against the public interest where a court has been asked to resolve FRAND terms and can make the SEP holder whole.

In essence, Khan and Slaughter recite a parade of theoretical horribles, centered on anticompetitive hold-ups, to call for denying exclusion orders to SEP owners on public-interest grounds. Their filing’s analysis, however, fails as a matter of empirics, law, and sound economics.

First, the filing fails to note that there is a lack of empirical support for anticompetitive hold-up being a problem at all (see, for example, here, here, and here). Indeed, a far more serious threat is “hold-out,” whereby the ability of implementers to infringe SEPs without facing serious consequences leads to an inefficient undervaluation of SEP rights (see, for example, here). (At worst, implementers will eventually have to pay a “reasonable” licensing fee if held to be infringers in court, since U.S. case law (unlike foreign case law) has essentially eliminated SEP holders’ ability to obtain an injunction.)

Second, as a legal matter, the filing’s logic would undercut the central statutory purpose of Section 337, which is to provide all U.S. patent holders a right to exclude infringing imports. Section 337 does not distinguish between SEPs and other patents—all are entitled to full statutory protection. Former ITC Chair Deanna Tanner Okun, in critiquing a draft administration policy statement that would severely curtail the rights of SEP holders, assessed the denigration of Section 337 statutory protections in a manner that is equally applicable to the Khan-Slaughter filing:

The Draft Policy Statement also circumvents Congress by upending the statutory framework and purpose of Section 337, which includes the ITC’s practice of evaluating all unfair acts equally. Although the draft disclaims any “unique set of legal rules for SEPs,” it does, in fact, create a special and unequal analysis for SEPs. The draft also implies that the ITC should focus on whether the patents asserted are SEPs when judging whether an exclusion order would adversely affect the public interest. The draft fundamentally misunderstands the ITC’s purpose, statutory mandates, and overriding consideration of safeguarding the U.S. public interest and would — again, without statutory approval — elevate SEP status of a single patent over other weighty public interest considerations. The draft also overlooks Presidential review requirements, agency consultation opportunities and the ITC’s ability to issue no remedies at all.

[Notably,] Section 337’s statutory language does not distinguish the types of relief available to patentees when SEPs are asserted.

Third, Khan and Slaughter not only assert theoretical competitive harms from hold-ups that have not been shown to exist (while ignoring the far more real threat of hold-out), but also ignore the dynamic economic gains that would be foregone under limitations on SEP rights (see, generally, here). Denying SEP holders the right to obtain a Section 337 exclusion order, as advocated by the filing, deprives them of a key property right. It thereby makes an SEP “liability rule” (under which the SEP holder is relegated to seeking damages), rather than a “property rule” (under which the SEP holder may seek injunctive relief), the SEP holder’s sole means of obtaining recompense for patent infringement. As my colleague Andrew Mercado and I have explained, a liability-rule approach denies society the substantial economic benefits achievable through an SEP property rule:

[U]nder a property rule, as contrasted to a liability rule, innovation will rise and drive an increase in social surplus, to the benefit of innovators, implementers, and consumers. 

Innovators’ welfare will rise. … First, innovators already in the market will be able to receive higher licensing fees due to their improved negotiating position. Second, new innovators enticed into the market by the “demonstration effect” of incumbent innovators’ success will in turn engage in profitable R&D (to them) that brings forth new cycles of innovation.

Implementers will experience welfare gains as the flood of new innovations enhances their commercial opportunities. New technologies will enable implementers to expand their product offerings and decrease their marginal cost of production. Additionally, new implementers will enter the market as innovation accelerates. Seeing the opportunity to earn high returns, new implementers will be willing to pay innovators a high licensing fee in order to produce novel and improved products.

Finally, consumers will benefit from expanded product offerings and lower quality-adjusted prices. Initial high prices for new goods and services entering the market will fall as companies compete for customers and scale economies are realized. As such, more consumers will have access to new and better products, raising consumers’ surplus.

In conclusion, the ITC should accord zero weight to Khan and Slaughter’s fundamentally flawed filing in determining whether ITC exclusion orders should be available to SEP holders. Denying SEP holders a statutorily provided right to exclude would tend to undermine the value of their property, diminish investment in improved standards, reduce innovation, and ultimately harm consumers—all to the detriment, not the benefit, of the public interest.  

[Wrapping up the first week of our FTC UMC Rulemaking symposium is a post from Truth on the Market’s own Justin (Gus) Hurwitz, director of law & economics programs at the International Center for Law & Economics and an assistant professor of law and co-director of the Space, Cyber, and Telecom Law program at the University of Nebraska College of Law. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

Introduction

In 2014, I published a pair of articles—“Administrative Antitrust” and “Chevron and the Limits of Administrative Antitrust”—that argued that the U.S. Supreme Court’s recent antitrust and administrative-law jurisprudence was pushing antitrust law out of the judicial domain and into the domain of regulatory agencies. The first article focused on the Court’s then-recent antitrust cases, arguing that the Court, which had long since moved away from federal common law, had shown a clear preference that common-law-like antitrust law be handled on a statutory or regulatory basis where possible. The second article evaluated and rejected the Federal Trade Commission’s (FTC) long-held belief that its interpretations of the FTC Act do not receive Chevron deference.

Together, these articles made the case (as a descriptive, not normative, matter) that we were moving towards a period of what I called “administrative antitrust.” From today’s perspective, it surely seems that I was right, with the FTC set to embrace Section 5’s broad ambiguities to redefine modern understandings of antitrust law. Indeed, those articles have been cited by both former FTC Commissioner Rohit Chopra and current FTC Chair Lina Khan in speeches and other materials that have led up to our current moment.

This essay revisits those articles, in light of the past decade of Supreme Court precedent. It comes as no surprise to anyone familiar with recent cases that the Court is increasingly viewing the broad deference characteristic of administrative law with what, charitably, can be called skepticism. While I stand by the analysis offered in my previous articles—and, indeed, believe that the Court maintains a preference for administratively defined antitrust law over judicially defined antitrust law—I find it less likely today that the Court would defer to any agency interpretation of antitrust law that represents more than an incremental move away from extant law.

I will approach this discussion in four parts. First, I will offer some reflections on the setting of my prior articles. The piece on Chevron and the FTC, in particular, argued that the FTC had misunderstood how Chevron would apply to its interpretations of the FTC Act because it was beholden to out-of-date understandings of administrative law. I will make the point below that the same thing can be said today. I will then briefly recap the essential elements of the arguments made in both of those prior articles, to the extent needed to evaluate how administrative approaches to antitrust will be viewed by the Court today. The third part of the discussion will then summarize some key elements of administrative law that have changed over roughly the past decade. And, finally, I will bring these elements together to look at the viability of administrative antitrust today, arguing that the FTC’s broad embrace of power anticipated by many is likely to meet an ill fate at the hands of the courts on both antitrust and administrative law grounds.

In reviewing these past articles in light of the past decade’s case law, this essay reaches an important conclusion: for the same reasons that the Court seemed likely in 2013 to embrace an administrative approach to antitrust, today it is likely to view such approaches with great skepticism unless they are undertaken on an incrementalist basis. Others are currently developing arguments that sound primarily in current administrative law: the major questions doctrine and the potential turn away from National Petroleum Refiners. My conclusion is based primarily on the Court’s expectation that administrative antitrust would prove less indeterminate than judicially defined antitrust law. If the FTC shows that not to be the case, the Court seems likely to close the door on administrative antitrust for reasons sounding in both administrative and antitrust law.

Setting the Stage, Circa 2013

It is useful to start by visiting the stage as it was set when I wrote “Administrative Antitrust” and “Limits of Administrative Antitrust” in 2013. I wrote these articles while doing a fellowship at the University of Pennsylvania Law School, prior to which I had spent several years working at the U.S. Justice Department Antitrust Division’s Telecommunications Section. This was a great time to be involved on the telecom side of antitrust, especially for someone with an interest in administrative law, as well. Recent important antitrust cases included Pacific Bell v. linkLine and Verizon v. Trinko, and recent important administrative-law cases included Brand-X, Fox v. FCC, and City of Arlington v. FCC. Telecommunications law was defining the center of both fields.

I started working on “Administrative Antitrust” first, prompted by what I admit today was an overreading of the Court’s 2011 American Electric Power Co. Inc. v. Connecticut opinion, in which the Court held, broadly, that a decision by Congress to regulate displaces judicial common law. In Trinko and Credit Suisse, the Court had held something similar: roughly, that regulation displaces antitrust law. Indeed, in linkLine, the Court had stated that regulation is preferable to antitrust, which is known for its vicissitudes and adherence to the extra-judicial development of economic theory. “Administrative Antitrust” tied these strands together, arguing that antitrust law, long discussed as one of the few remaining bastions of federal common law, would—and in the Court’s eyes, should—be displaced by regulation.

Antitrust and administrative law also came together, and remain together, in the debates over net neutrality. It was this nexus that gave rise to “Limits of Administrative Antitrust,” which I started in 2013 while working on “Administrative Antitrust” and waiting for the U.S. Court of Appeals for the D.C. Circuit’s opinion in Verizon v. FCC.

Some background on the net-neutrality debate is useful. In 2007, the Federal Communications Commission (FCC) attempted to put in place net-neutrality rules by adopting a policy statement on the subject. This approach was rejected by the D.C. Circuit in 2010, on grounds that a mere policy statement lacked the force of law. The FCC then adopted similar rules through a rulemaking process, finding authority to issue those rules in its interpretation of the ambiguous language of Section 706 of the Telecommunications Act. In January 2014, the D.C. Circuit again rejected the specific rules adopted by the FCC, on grounds that those rules violated the Communications Act’s prohibition on treating internet service providers (ISPs) as common carriers. But critically, the court affirmed the FCC’s interpretation of Section 706 as allowing it, in principle, to adopt rules regulating ISPs.

Unsurprisingly, whether the language of Section 706 was either ambiguous or subject to the FCC’s interpretation was a central debate within the regulatory community during 2012 and 2013. The broad consensus, at least among my peers, was that it was neither: the FCC and industry had long read Section 706 as not giving the FCC authority to regulate ISP conduct and, to the extent that it did confer legislative authority, that authority was expressly deregulatory. I was the lone voice arguing that the D.C. Circuit was likely to find that Chevron applied to Section 706 and that the FCC’s reading was permissible on its own terms (that is, not taking into account such restrictions as the prohibition on treating non-common carriers as common carriers).

I actually had thought this conclusion quite obvious. The past decade of the Court’s Chevron case law followed a trend of increasing deference. Starting with Mead, then Brand-X, Fox v. FCC, and City of Arlington, the safe money was consistently placed on deference to the agency.

This was the setting in which I started thinking about what became “Chevron and the Limits of Administrative Antitrust.” If my argument in “Administrative Antitrust”was right—that the courts would push development of antitrust law from the courts to regulatory agencies—this would most clearly happen through the FTC’s Section 5 authority over unfair methods of competition (UMC). But there was longstanding debate about the limits of the FTC’s UMC authority. These debates included whether it was necessarily coterminous with the Sherman Act (so limited by the judicially defined federal common law of antitrust).

And there was discussion about whether the FTC would receive Chevron deference to its interpretations of its UMC authority. As with the question of the FCC receiving deference to its interpretation of Section 706, there was widespread understanding that the FTC would not receive Chevron deference to its interpretations of its Section 5 UMC authority. “Chevron and the Limits of Administrative Antitrust” explored that issue, ultimately concluding that the FTC likely would indeed be given the benefit of Chevron deference, tracing the commission’s belief to the contrary back to longstanding institutional memory of pre-Chevron judicial losses.

The Administrative Antitrust Argument

The discussion above is more than mere historical navel-gazing. The context and setting in which those prior articles were written is important to understanding both their arguments and the continual currents that propel us across antitrust’s sea of doubt. But we should also look at the specific arguments from each paper in some detail, as well.

Administrative Antitrust

The opening lines of this paper capture the curious judicial status of antitrust law:

Antitrust is a peculiar area of law, one that has long been treated as exceptional by the courts. Antitrust cases are uniquely long, complicated, and expensive; individual cases turn on case-specific facts, giving them limited precedential value; and what precedent there is changes on a sea of economic—rather than legal—theory. The principal antitrust statutes are minimalist and have left the courts to develop their meaning. As Professor Thomas Arthur has noted, “in ‘the anti-trust field the courts have been accorded, by common consent, an authority they have in no other branch of enacted law.’” …


This Article argues that the Supreme Court is moving away from this exceptionalist treatment of antitrust law and is working to bring antitrust within a normalized administrative law jurisprudence.

Much of this argument is based on the arguments framed above: Trinko and Credit Suisse prioritize regulation over the federal common law of antitrust, and American Electric Power emphasizes the general displacement of common law by regulation. The article adds, as well, the Court’s hostility, at the time, to domain-specific “exceptionalism.” Its opinion in Mayo had rejected the longstanding view that tax law was “exceptional” in some way that excluded it from the Administrative Procedure Act (APA) and other standard administrative-law doctrine. And thus, the argument went, the Court’s longstanding treatment of antitrust as exceptional must also fall.

Those arguments can all be characterized as pulling antitrust law toward an administrative approach. But there was a push as well. In his majority opinion in linkLine, Chief Justice John Roberts expressed substantial concern about the difficulties that antitrust law poses for courts and litigants alike. His opinion for the majority notes that “it is difficult enough for courts to identify and remedy an alleged anticompetitive practice” and laments “[h]ow is a judge or jury to determine a ‘fair price?’” And Justice Stephen Breyer writes in concurrence that “[w]hen a regulatory structure exists [as it does in this case] to deter and remedy anticompetitive harm, the costs of antitrust enforcement are likely to be greater than the benefits.”

In other words, the argument in “Administrative Antitrust” goes, the Court is motivated both to bring antitrust law into a normalized administrative-law framework and also to remove responsibility for the messiness inherent in antitrust law from the courts’ dockets. This latter point will be of particular importance as we turn to how the Court is likely to think about the FTC’s potential use of its UMC authority to develop new antitrust rules.

Chevron and the Limits of Administrative Antitrust

The core argument in “Limits of Administrative Antitrust” is more doctrinal and institutionally focused. In its simplest statement, I merely applied Chevron as it was understood circa 2013 to the FTC’s UMC authority. There is little dispute that “unfair methods of competition” is inherently ambiguous—indeed, the term was used, and the power granted to the FTC, expressly to give the agency flexibility and to avoid the limits the Court was placing on antitrust law in the early 20th century.

There are various arguments against application of Chevron to Section 5; the article goes through and rejects them all. Section 5 has long been recognized as including, but being broader than, the Sherman Act. National Petroleum Refiners has long stood for the proposition that the FTC has substantive-rulemaking authority—a conclusion made even more forceful by the Supreme Court’s more recent opinion in Iowa Utilities Board. Other arguments are (or were) unavailing.

The real puzzle the paper unpacks is why the FTC ever believed it wouldn’t receive the benefit of Chevron deference. The article traces it back to a series of cases the FTC lost in the 1980s, contemporaneous with the development of the Chevron doctrine. The commission had big losses in cases like E.I. Du Pont and Ethyl Corp. Perhaps most important, in its 1986 Indiana Federation of Dentists opinion (two years after Chevron was decided), the Court seemed to adopt a de novo standard for review of Section 5 cases. But, “Limits of Administrative Antitrust” argues, this is a misreading and overreading of Indiana Federation of Dentists (a close reading of which actually suggests that it is entirely in line with Chevron), and it misunderstands the case’s relationship with Chevron (the importance of which did not start to come into focus for another several years).

The curious conclusion of the argument is, in effect, that a generation of FTC lawyers, “shell-shocked by its treatment in the courts,” internalized the lesson that they would not receive the benefits of Chevron deference and that Section 5 was subject to de novo review, but also that this would start to change as a new generation of lawyers, trained in the modern Chevron era, came to practice within the halls of the FTC. Today, that prediction appears to have borne out.

Things Change

The conclusion from “Limits of Administrative Antitrust” that FTC lawyers failed to recognize that the agency would receive Chevron deference because they were half a generation behind the development of administrative-law doctrine is an important one. As much as antitrust law may be adrift in a sea of change, administrative law is even more so. From today’s perspective, it feels as though I wrote those articles at Chevron’s zenith—and watching the FTC consider aggressive use of its UMC authority feels like watching a commission that, once again, is half a generation behind the development of administrative law.

The tide against Chevron’s expansive deference was already beginning to grow at the time I was writing. City of Arlington, though affirming application of Chevron to agencies’ interpretations of their own jurisdictional statutes in a 6-3 opinion, generated substantial controversy at the time. And a short while later, the Court decided a case that many in the telecom space view as a sea change: Utility Air Regulatory Group (UARG). In UARG, Justice Antonin Scalia, writing for the majority, struck down an Environmental Protection Agency (EPA) regulation related to greenhouse gases. In doing so, he invoked language evocative of what today is being debated as the major questions doctrine—that the Court “expect[s] Congress to speak clearly if it wishes to assign to an agency decisions of vast economic and political significance.” Two years after that, the Court decided Encino Motorcars, in which the Court acted upon a limit expressed in Fox v. FCC that agencies face heightened procedural requirements when changing regulations that “may have engendered serious reliance interests.”

And just like that, the dams holding back concern over the scope of Chevron have burst. Justices Clarence Thomas and Neil Gorsuch have openly expressed their views that Chevron needs to be curtailed or eliminated. Justice Brett Kavanaugh has written extensively in favor of the major questions doctrine. Chief Justice Roberts invoked the major questions doctrine in King v. Burwell. Each term, litigants bring ever more aggressive cases to probe and tighten the limits of the Chevron doctrine. As I write this, we await the Court’s opinion in American Hospital Association v. Becerra—which, it is widely believed, could dramatically curtail the scope of the Chevron doctrine.

Administrative Antitrust, Redux

The prospects for administrative antitrust look very different today than they did a decade ago. While the basic argument continues to hold—the Court will likely encourage and welcome a transition of antitrust law to a normalized administrative jurisprudence—the Court seems likely to afford administrative agencies (viz., the FTC) much less flexibility in how they administer antitrust law than it would have a decade ago. This shift operates through the administrative-law vector, with the Court reconsidering how it views delegations of congressional authority to agencies (for instance, through the major questions doctrine and limits on agency rulemaking authority), as well as through the Court’s thinking about how agencies develop and enforce antitrust law.

Major Questions and Major Rules

Two hotly debated areas where we see this trend: the major questions doctrine and the ongoing vitality of National Petroleum Refiners. These are only briefly recapitulated here. The major questions doctrine is an evolving doctrine, seemingly of great interest to many current justices on the Court, that requires Congress to speak clearly when delegating authority to agencies to address major questions—that is, questions of vast economic and political significance. So, while the Court may allow an agency to develop rules governing mergers when tasked by Congress to prohibit acquisitions likely to substantially lessen competition, it is unlikely to allow that agency to categorically prohibit mergers based upon a general congressional command to prevent unfair methods of competition. The first of those is a narrow rule based upon a specific grant of authority; the other is a very broad rule based upon a very general grant of authority.

The major questions doctrine has been a major topic of discussion in administrative-law circles for the past several years. Interest in the National Petroleum Refiners question has been more muted, mostly confined to those focused on the FTC and FCC. National Petroleum Refiners is a 1973 D.C. Circuit case that found that the FTC Act’s grant of power to make rules to implement the act confers broad rulemaking power relating to the act’s substantive provisions. In 1999, the Supreme Court reached a similar conclusion in Iowa Utilities Board, finding that a provision in Section 201(b) of the Communications Act allowing the FCC to create rules seemingly for the implementation of that section conferred substantive rulemaking power running throughout the Communications Act.

Both National Petroleum Refiners and Iowa Utilities Board reflect previous generations’ understanding of administrative law—and, in particular, the relationship between the courts and Congress in empowering and policing agency conduct. That understanding is best captured in the evolution of the non-delegation doctrine, and the courts’ broad acceptance of broad delegations of congressional power to agencies in the latter half of the 20th century. National Petroleum Refiners and Iowa Utilities Board are not non-delegation cases—but, like the major questions doctrine, they go to the issue of how specific Congress must be when delegating broad authority to an agency.

In theory, there is little difference between, on the one hand, an agency that can develop legal norms through case-by-case adjudications backstopped by substantive and procedural judicial review and, on the other, an agency with authority to develop substantive rules backstopped by procedural judicial review and by Congress as a check on substantive errors. In practice, there is a world of difference between these approaches. As with the concerns underlying the major questions doctrine, were the Court to review National Petroleum Refiners Association or Iowa Utilities Board today, it seems at least possible, if not likely, that most of the Justices would not so readily find agencies to have such broad rulemaking authority without clear congressional intent supporting such a finding.

Both of these ideas—the major questions doctrine and limits on broad rules made using thin grants of rulemaking authority—present potential limits on the scope of rules the FTC might make using its UMC authority.

Limits on the Antitrust Side of Administrative Antitrust

The potential limits on FTC UMC rulemaking discussed above sound in administrative-law concerns. But administrative antitrust may also find a tepid judicial reception on antitrust grounds.

Many of the arguments advanced in “Administrative Antitrust” and the Court’s opinions on the antitrust-regulation interface echo traditional administrative-law ideas. For instance, much of the Court’s preference that agencies granted authority to engage in antitrust or antitrust-adjacent regulation take precedence over the application of judicially defined antitrust law tracks the same separation-of-powers and expertise concerns that are central to the Chevron doctrine itself.

But the antitrust-focused cases—linkLine, Trinko, Credit Suisse—also express concerns specific to antitrust law. Chief Justice Roberts notes that the justices “have repeatedly emphasized the importance of clear rules in antitrust law,” and the need for antitrust rules to “be clear enough for lawyers to explain them to clients.” And the Court and antitrust scholars have long noted the curiosity that antitrust law has evolved over time following developments in economic theory. This extra-judicial development of the law runs contrary to basic principles of due process and the stability of the law.

The Court’s cases in this area express hope that an administrative approach to antitrust could give a clarity and stability to the law that is currently lacking. These are rules of vast economic significance: they are “the Magna Carta of free enterprise”; our economy organizes itself around them; substantial changes to these rules could have a destabilizing effect that runs far deeper than Congress is likely to have anticipated when tasking an agency with enforcing antitrust law. Empowering agencies to develop these rules could, the Court’s opinions suggest, allow for a more thoughtful, expert, and deliberative approach to incorporating incremental developments in economic knowledge into the law.

If an agency’s administrative implementation of antitrust law does not follow this path—and especially if the agency takes a disruptive approach to antitrust law that deviates substantially from established antitrust norms—this defining rationale for an administrative approach to antitrust would not hold.

The courts could respond to such overreach in several ways. They could invoke the major questions or similar doctrines, as above. They could raise due-process concerns, tracking Fox v. FCC and Encino Motorcars, to argue that any change to antitrust law must not be unduly disruptive to engendered reliance interests. They could argue that the FTC’s UMC authority, while broader than the Sherman Act, must be compatible with the Sherman Act. That is, while the FTC has authority for the larger circle in the antitrust Venn diagram, the courts continue to define the inner core of conduct regulated by the Sherman Act.

A final aspect of the Court’s likely approach to administrative antitrust follows from the Roberts Court’s decision-theoretic approach to antitrust law. First articulated in Judge Frank Easterbrook’s “The Limits of Antitrust,” the decision-theoretic approach to antitrust law focuses on the error costs of incorrect judicial decisions and the likelihood that those decisions will be corrected. The Roberts Court has strongly adhered to this framework in its antitrust decisions. This can be seen, for instance, in Justice Breyer’s statement that “When a regulatory structure exists to deter and remedy anticompetitive harm, the costs of antitrust enforcement are likely to be greater than the benefits.”

The error-costs framework described by Judge Easterbrook focuses on the relative costs of errors, and of correcting those errors, between judicial and market mechanisms. In the administrative-antitrust setting, the relevant comparison is between judicial and administrative error costs. The question on this front is whether an administrative agency, should it get things wrong, is likely to correct course. Here there are two models, both of concern. The first is one in which law is policy or political preference. Here, the FCC’s approach to net neutrality and the National Labor Relations Board’s (NLRB) approach to labor law loom large; there have been dramatic swings between binary policy preferences held by different political parties as control of agencies shifts between administrations. The second model is one in which Congress responds to agency rules by refining, rejecting, or replacing them through statute. Here, again, net neutrality and the FCC loom large, with nearly two decades of calls for Congress to clarify the FCC’s authority and statutory mandate, while the agency swings between policies with changing administrations.
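To make that comparison concrete, here is a minimal sketch of how the error-cost comparison might be written down. The notation is mine and purely illustrative (it is not Easterbrook’s); it assumes only that each institution produces false positives and false negatives at some rate, and that the social cost of an error persists until the institution corrects it:

\[
E[C_i] \;=\; T_i \left( p_i^{FP}\, c^{FP} + p_i^{FN}\, c^{FN} \right), \qquad i \in \{\text{judicial},\ \text{administrative}\},
\]

where \(p_i^{FP}\) and \(p_i^{FN}\) are institution \(i\)’s probabilities of condemning benign conduct and excusing harmful conduct, \(c^{FP}\) and \(c^{FN}\) are the social costs of those errors, and \(T_i\) is the expected time before institution \(i\) corrects its own mistakes. On this stylized account, administrative antitrust is preferable only if \(E[C_{\text{administrative}}] < E[C_{\text{judicial}}]\); the two models just described are worrying precisely because both suggest that \(T_{\text{administrative}}\) may be large when agency policy swings with each administration and Congress declines to intervene.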

Both of these models reflect poorly on the prospects for administrative antitrust and suggest a strong likelihood that the Court would reject any ambitious use of administrative authority to remake antitrust law. The stability of these rules is simply too important to leave to change with changing political wills. And, indeed, concern that Congress no longer does its job of providing agencies with clear direction—that Congress has abdicated its job of making important policy decisions and let them fall instead to agency heads—is one of the animating concerns behind the major questions doctrine.

Conclusion

Writing in 2013, it seemed clear that the Court was pushing antitrust law in an administrative direction, as well as that the FTC would likely receive broad Chevron deference in its interpretations of its UMC authority to shape and implement antitrust law. Roughly a decade later, the sands have shifted and continue to shift. Administrative law is in the midst of a retrenchment, with skepticism of broad deference and agency claims of authority.

Many of the underlying rationales behind the idea of administrative antitrust remain sound. Indeed, I expect the FTC will play an increasingly large role in defining the contours of antitrust law, and that the Court and the lower courts will welcome this role. But that role will be limited. Administrative antitrust is a preferred vehicle for administering antitrust law, not for changing it. Should the FTC use its power aggressively, in ways that disrupt longstanding antitrust principles or seem more grounded in policy better created by Congress, it is likely to find itself on the losing side of judicial opinions.

[This guest post from Lawrence J. Spiwak of the Phoenix Center for Advanced Legal & Economic Public Policy Studies is the second in our FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

While antitrust and regulation are supposed to be different sides of the same coin, there has always been a healthy debate over which enforcement paradigm is the more efficient. Those who have long suffered under the zealous hand of ex ante regulation would gladly prefer the more dispassionate, case-specific oversight of antitrust. Conversely, those dissatisfied with the current state of antitrust enforcement have increasingly called for abandoning the ex post approach of antitrust and returning to some form of active, “always on” regulation.

While the “antitrust versus regulation” debate has raged for some time, the election of President Joe Biden has brought a new wrinkle: Lina Khan, the controversial chair of the Federal Trade Commission (FTC), has made it very clear that she would like to expand the commission’s role from that of a mere enforcer of the nation’s antitrust laws to that of an agency that also promulgates ex ante “bright line” rules. Thus, the “antitrust versus regulation” debate is no longer academic.

Khan’s efforts to convert the FTC into a de facto regulator should surprise no one, however. Even before she was nominated, Khan was quite vocal about her policy vision for the FTC. For example, in 2020, she co-authored an essay with her former boss (and later briefly her FTC colleague) Rohit Chopra in the University of Chicago Law Review titled “The Case for ‘Unfair Methods of Competition’ Rulemaking.” In it, Khan and Chopra lay out both legal and policy arguments to support “unfair methods of competition” (UMC) rulemaking. But as I explain in a law review article published last year in the Federalist Society Review titled “A Change in Direction for the Federal Trade Commission?”, Khan and Chopra’s arguments simply do not hold up to scrutiny. While I encourage those interested in the bounds of the FTC’s UMC rulemaking authority to read my paper in full, for purposes of this symposium, I include a brief summary of my analysis below.

At the outset of their essay, Chopra and Khan lay out what they believe to be the shortcomings of modern antitrust enforcement. As they correctly note, “[a]ntitrust law today is developed exclusively through adjudication,” which is designed to “facilitate[] nuanced and fact-specific analysis of liability and well-tailored remedies.” However, the authors contend that, while a case-by-case approach may sound great in theory, “in practice, the reliance on case-by-case adjudication yields a system of enforcement that generates ambiguity, unduly drains resources from enforcers, and deprives individuals and firms of any real opportunity to democratically participate in the process.” Chopra and Khan blame this alleged policy failure on the abandonment of per se rules in favor of the use of the “rule-of-reason” approach in antitrust jurisprudence. In their view, a rule-of-reason approach is nothing more than “a broad and open-ended inquiry into the overall competitive effects of particular conduct [which] asks judges to weigh the circumstances to decide whether the practice at issue violates the antitrust laws.” To remedy this perceived analytical shortcoming, they argue that the commission should step into the breach and promulgate ex ante bright-line rules to better enforce the prohibition against “unfair methods of competition” (UMC) outlined in Section 5 of the Federal Trade Commission Act.

As a threshold matter, while courts have traditionally provided guidance as to what exactly constitutes “unfair methods of competition,” Chopra and Khan argue that it should be the FTC that has that responsibility in the first instance. According to Chopra and Khan, because Congress set up the FTC as the independent expert agency to implement the FTC Act and because the phrase “unfair methods of competition” is ambiguous, courts must accord great deference to “FTC interpretations of ‘unfair methods of competition’” under the Supreme Court’s Chevron doctrine.

The authors then argue that the FTC has statutory authority to promulgate substantive rules to enforce the FTC’s interpretation of UMC. In particular, they point to the broad catch-all provision in Section 6(g) of the FTC Act. Section 6(g) provides, in relevant part, that the FTC may “[f]rom time to time . . . make rules and regulations for the purpose of carrying out the provisions of this subchapter.” Although this catch-all rulemaking provision is far from the detailed statutory scheme Congress set forth in the Magnuson-Moss Act to govern rulemaking to deal with Section 5’s other prohibition against “unfair or deceptive acts and practices” (UDAP), Chopra and Khan argue that the D.C. Circuit’s 1973 ruling in National Petroleum Refiners Association v. FTC—a case that predates the Magnuson-Moss Act—provides judicial affirmation that the FTC has the authority to “promulgate substantive rules, not just procedural rules” under Section 6(g). Stating Khan’s argument differently: although there may be no affirmative specific grant of authority for the FTC to engage in UMC rulemaking, in the absence of any limit on such authority, the FTC may engage in UMC rulemaking subject to the constraints of the Administrative Procedure Act.

As I point out in my paper, while there are certainly strong arguments that the FTC lacks UMC rulemaking authority (see, e.g., Ohlhausen & Rill, “Pushing the Limits? A Primer on FTC Competition Rulemaking”), it is my opinion that, given the current state of administrative law—in particular, the high level of judicial deference accorded to agencies under both Chevron and the “arbitrary and capricious” standard—whether the FTC can engage in UMC rulemaking remains a very open question.

That said, even if we assume arguendo that the FTC does, in fact, have UMC rulemaking authority, the case law nonetheless reveals that, despite Khan’s hopes and desires, the FTC cannot unilaterally abandon the consumer welfare standard. As I explain in detail in my paper, even with great judicial deference, it is well-established that independent agencies simply cannot ignore antitrust terms of art (especially when that agency is specifically charged with enforcing the antitrust laws).  Thus, Khan may get away with initiating UMC rulemaking, but, for example, attempting to impose a mandatory common carrier-style non-discrimination rule may be a bridge too far.

Khan’s Policy Arguments in Favor of UMC Rulemaking

Separate from the legal debate over whether the FTC can engage in UMC rulemaking, it is also important to ask whether the FTC should engage in UMC rulemaking. Khan essentially posits that the American economy needs a generic business regulator possessed of plenary power and expansive jurisdiction. Given the United States’ well-documented (and sordid) experience with public-utility regulation, that’s probably not a good idea.

Indeed, to Khan and Chopra, ex ante regulation is superior to ex post antitrust enforcement. First, they submit that UMC “rulemaking would enable the Commission to issue clear rules to give market participants sufficient notice about what the law is, helping ensure that enforcement is predictable.” Second, they argue that “establishing rules could help relieve antitrust enforcement of steep costs and prolonged trials.” In particular, “[t]argeting conduct through rulemaking, rather than adjudication, would likely lessen the burden of expert fees or protracted litigation, potentially saving significant resources on a present-value basis.” And third, they contend that rulemaking “would enable the Commission to establish rules through a transparent and participatory process, ensuring that everyone who may be affected by a new rule has the opportunity to weigh in on it, granting the rule greater legitimacy.”

Khan’s published writings argue forcefully for greater regulatory power, but they suffer from analytical omissions that render her judgment questionable. For example, it is axiomatic that, while it is easy to imagine or theorize about the many benefits of regulation, regulation imposes significant costs of both the intended and unintended sorts. These costs can include compliance costs, reductions of innovation and investment, and outright entry deterrence that protects incumbents. Yet nowhere in her co-authored essay does Khan contemplate a cost-benefit analysis before promulgating a new regulation; she appears to assume that regulation is always costless, easy, and beneficial, on net. Unfortunately, history shows that we cannot always count on FTC commissioners to engage in wise policymaking.

Khan also fails to contemplate the possibility that changing market circumstances or inartful drafting might call for the removal of regulations previously imposed. Among other things, this failure calls into question her rationale that “clear rules” would make “enforcement … predictable.” Why, then, does the government not always use clear rules, instead of the ham-handed approach typical of regulatory interventions? More importantly, enforcement of rules requires adjudication on a case-by-case basis that is governed by precedent from prior applications of the rule and by due process.

Taken together, Khan’s analytical omissions reveal a lack of historical awareness about (and apparently any personal experience with) the realities of modern public-utility regulation. Indeed, Khan offers up as an example of purported rulemaking success the Federal Communications Commission’s 2015 Open Internet Order, which imposed legacy common-carrier regulations designed for the old Ma Bell monopoly on the internet. But as I detail extensively in my paper, the history of net-neutrality regulation bears witness that Khan’s assertions that this process provided “clear rules,” was faster and cheaper, and allowed for meaningful public participation simply are not true.

President Joe Biden’s July 2021 executive order set forth a commitment to reinvigorate U.S. innovation and competitiveness. The administration’s efforts to pass the America COMPETES Act would appear to further demonstrate a serious intent to pursue these objectives.

Yet several actions taken by federal agencies threaten to undermine the intellectual-property rights and transactional structures that have driven the exceptional performance of U.S. firms in key areas of the global innovation economy. These regulatory missteps together represent a policy “lose-lose” that lacks any sound basis in innovation economics and threatens U.S. leadership in mission-critical technology sectors.

Life Sciences: USTR Campaigns Against Intellectual-Property Rights

In the pharmaceutical sector, the administration’s signature action has been an unprecedented campaign by the Office of the U.S. Trade Representative (USTR) to block enforcement of patents and other intellectual-property rights held by companies that have broken records in the speed with which they developed and manufactured COVID-19 vaccines on a mass scale.

Patents were not an impediment in this process. To the contrary: they were necessary predicates to induce venture-capital investment in a small firm like BioNTech, which undertook drug development and then partnered with the much larger Pfizer to execute testing, production, and distribution. If success in vaccine development is rewarded with expropriation, this vital public-health sector is unlikely to attract investors in the future. 

Contrary to increasingly common assertions that the Bayh-Dole Act (which enables universities to seek patents arising from research funded by the federal government) “robs” taxpayers of intellectual property they funded, the development of COVID-19 vaccines by scientist-founded firms illustrates how the combination of patents and private capital is essential to convert academic research into life-saving medical solutions. The biotech ecosystem has long relied on patents to structure partnerships among universities, startups, and large firms. The costly path from lab to market relies on a secure property-rights infrastructure to ensure exclusivity, without which no investor would put capital at stake in what is already a high-risk, high-cost enterprise.

This is not mere speculation. During the decades prior to the Bayh-Dole Act, the federal government placed strict limitations on the ability to patent or exclusively license innovations arising from federally funded research projects. The result: the market showed little interest in making the investment needed to convert those innovations into commercially viable products that might benefit consumers. This history casts great doubt on the wisdom of the USTR’s campaign to limit the ability of biopharmaceutical firms to maintain legal exclusivity over certain life sciences innovations.

Genomics: FTC Attempts to Block the Illumina/GRAIL Acquisition

In the genomics industry, the Federal Trade Commission (FTC) has devoted extensive resources to oppose the acquisition by Illumina—the market leader in next-generation DNA-sequencing equipment—of a medical-diagnostics startup, GRAIL (an Illumina spinoff), that has developed an early-stage cancer screening test.

It is hard to see the competitive threat. GRAIL is a pre-revenue company that operates in a novel market segment, and its diagnostic test has not yet received approval from the Food and Drug Administration (FDA). To address concerns over barriers to potential competitors in this nascent market, Illumina has committed to 12-year supply contracts that would bar price increases or differential treatment for firms that develop oncology-detection tests requiring use of the Illumina platform.

One of Illumina’s few competitors in the global market is the BGI Group, a China-based company that, in 2013, acquired Complete Genomics, a U.S. target that Illumina pursued but relinquished due to anticipated resistance from the FTC in the merger-review process. That transaction was then cleared by the Committee on Foreign Investment in the United States (CFIUS).

The FTC’s case against Illumina’s re-acquisition of GRAIL relies on theoretical predictions of consumer harm in a market that is not yet operational. Hypothetical market failure scenarios may suit an academic seminar but fall well below the probative threshold for antitrust intervention. 

Most critically, the Illumina enforcement action places at risk a key element of well-functioning innovation ecosystems. Economies of scale and network effects lead technology markets to converge on a handful of leading platforms, which then often outsource research and development by funding and sometimes acquiring smaller firms that develop complementary technologies. This symbiotic relationship encourages entry and benefits consumers by bringing new products to market as efficiently as possible.

If antitrust interventions based on regulatory fiat, rather than empirical analysis, disrupt settled expectations in the M&A market that innovations can be monetized through acquisition by larger firms, venture capital may be unwilling to fund such startups in the first place. Independent development or an initial public offering is often not a feasible exit option. It is likely that innovation will then retreat to the confines of large incumbents that can fund research internally but often execute it less effectively.

Wireless Communications: DOJ Takes Aim at Standard-Essential Patents

Wireless communications stand at the heart of the global transition to a 5G-enabled “Internet of Things” that will transform business models and unlock efficiencies in myriad industries.  It is therefore of paramount importance that policy actions in this sector rest on a rigorous economic basis. Unfortunately, a recent policy shift proposed by the U.S. Department of Justice’s (DOJ) Antitrust Division does not meet this standard.

In December 2021, the Antitrust Division released a draft policy statement that would largely bar owners of standard-essential patents from seeking injunctions against infringers, which are usually large device manufacturers. These patents cover wireless functionalities that enable transformative solutions in myriad industries, ranging from communications to transportation to health care. A handful of U.S. and European firms lead in wireless chip design and rely on patent licensing to disseminate technology to device manufacturers and to fund billions of dollars in research and development. The result is a technology ecosystem that has enjoyed continuous innovation, widespread user adoption, and declining quality-adjusted prices.

The inability to block infringers disrupts this equilibrium by signaling to potential licensees that wireless technologies developed by others can be used at will, with the terms of use to be negotiated through costly and protracted litigation. A no-injunction rule would discourage innovation while encouraging delaying tactics favored by well-resourced device manufacturers (including some of the world’s largest companies by market capitalization) that occupy bottleneck pathways to lucrative retail markets in the United States, China, and elsewhere.

Rather than promoting competition or innovation, the proposed policy would simply transfer wealth from firms that develop new technologies at great cost and risk to firms that prefer to use those technologies at no cost at all. This does not benefit anyone other than device manufacturers that already capture the largest portion of economic value in the smartphone supply chain.

Conclusion

From international trade to antitrust to patent policy, the administration’s actions imply little appreciation for the property rights and contractual infrastructure that support real-world innovation markets. In particular, the administration’s policies endanger the intellectual-property rights and monetization pathways that support market incentives to invest in the development and commercialization of transformative technologies.

This creates an inviting vacuum for strategic rivals that are vigorously pursuing leadership positions in global technology markets. In industries that stand at the heart of the knowledge economy—life sciences, genomics, and wireless communications—the administration is on a counterproductive trajectory that overlooks the business realities of technology markets and threatens to push capital away from the entrepreneurs that drive a robust innovation ecosystem. It is time to reverse course.

President Joe Biden’s nomination of Gigi Sohn to serve on the Federal Communications Commission (FCC)—scheduled for a second hearing before the Senate Commerce Committee Feb. 9—has been met with speculation that it presages renewed efforts at the FCC to enforce net neutrality. A veteran of tech policy battles, Sohn served as counselor to former FCC Chairman Tom Wheeler at the time of the commission’s 2015 net-neutrality order.

The political prospects for Sohn’s confirmation remain uncertain, but it’s probably fair to assume a host of associated issues—such as whether to reclassify broadband as a Title II service; whether to ban paid prioritization; and whether the FCC ought to exercise forbearance in applying some provisions of Title II to broadband—are likely to be on the FCC’s agenda once the full complement of commissioners is seated. Among these is an issue that doesn’t get the attention it merits: rate regulation of broadband services. 

History has, by now, definitively demonstrated that the FCC’s January 2018 repeal of the Open Internet Order didn’t produce the parade of horribles that net-neutrality advocates predicted. Most notably, paid prioritization—creating so-called “fast lanes” and “slow lanes” on the Internet—has proven a non-issue. Prioritization is a longstanding and widespread practice and, as discussed at length in this piece from The Verge on Netflix’s Open Connect technology, the Internet can’t work without some form of it. 

Indeed, the Verge piece makes clear that even paid prioritization can be an essential tool for edge providers. As we’ve previously noted, paid prioritization offers an economically efficient means to distribute the costs of network optimization. As Greg Sidak and David Teece put it:

Superior QoS is a form of product differentiation, and it therefore increases welfare by increasing the production choices available to content and applications providers and the consumption choices available to end users…. [A]s in other two-sided platforms, optional business-to-business transactions for QoS will allow broadband network operators to reduce subscription prices for broadband end users, promoting broadband adoption by end users, which will increase the value of the platform for all users.

The Perennial Threat of Price Controls

Although only hinted at during Sohn’s initial confirmation hearing in December, the real action in the coming net-neutrality debate is likely to be over rate regulation. 

Pressed at that December hearing by Sen. Marsha Blackburn (R-Tenn.) to provide a yes or no answer as to whether she supports broadband rate regulation, Sohn said no, before adding “That was an easy one.” Current FCC Chair Jessica Rosenworcel has similarly testified that she wants to continue an approach that “expressly eschew[s] future use of prescriptive, industry-wide rate regulation.” 

But, of course, rate regulation is among the defining features of most Title II services. While then-Chairman Wheeler promised to forbear from rate regulation at the time of the FCC’s 2015 Open Internet Order (OIO), stating flatly that “we are not trying to regulate rates,” this was a small consolation. At the time, the agency decided to waive “the vast majority of rules adopted under Title II” (¶ 51), but it also made clear that the commission would “retain adequate authority to” rescind such forbearance (¶ 538) in the future. Indeed, one could argue that the reason the 2015 order needed to declare resolutely that “we do not and cannot envision adopting new ex ante rate regulation of broadband Internet access service in the future” (¶ 451) is precisely because of how equally resolute it was that the commission would retain basic Title II authority, including the authority to impose rate regulation (“we are not persuaded that application of sections 201 and 202 is not necessary to ensure just, reasonable, and nondiscriminatory conduct by broadband providers and for the protection of consumers” (¶ 446)).

This was no mere parsing of words. The 2015 order takes pains to assert repeatedly that forbearance was conditional and temporary, including with respect to rate regulation (¶ 497). As then-Commissioner Ajit Pai pointed out in his dissent from the OIO:

The plan is quite clear about the limited duration of its forbearance decisions, stating that the FCC will revisit them in the future and proceed in an incremental manner with respect to additional regulation. In discussing additional rate regulation, tariffs, last-mile unbundling, burdensome administrative filing requirements, accounting standards, and entry and exit regulation, the plan repeatedly states that it is only forbearing “at this time.” For others, the FCC will not impose rules “for now.” (p. 325)

For broadband providers, the FCC having the ability even to threaten rate regulation could disrupt massive amounts of investment in network buildout. And there is good reason for the sector to be concerned about the prevailing political winds, given the growing (and misguided) focus on price controls and their potential to be used to stem inflation.

Indeed, politicians’ interest in controls on broadband rates predates the recent supply-chain-driven inflation. For example, President Biden’s American Jobs Plan called on Congress to reduce broadband prices:

President Biden believes that building out broadband infrastructure isn’t enough. We also must ensure that every American who wants to can afford high-quality and reliable broadband internet. While the President recognizes that individual subsidies to cover internet costs may be needed in the short term, he believes continually providing subsidies to cover the cost of overpriced internet service is not the right long-term solution for consumers or taxpayers. Americans pay too much for the internet – much more than people in many other countries – and the President is committed to working with Congress to find a solution to reduce internet prices for all Americans. (emphasis added)

Senate Majority Leader Chuck Schumer (D-N.Y.) similarly suggested in a 2018 speech that broadband affordability should be ensured: 

[We] believe that the Internet should be kept free and open like our highways, accessible and affordable to every American, regardless of ability to pay. It’s not that you don’t pay, it’s that if you’re a little guy or gal, you shouldn’t pay a lot more than the bigshots. We don’t do that on highways, we don’t do that with utilities, and we shouldn’t do that on the Internet, another modern, 21st century highway that’s a necessity.

And even Sohn herself has a history of somewhat equivocal statements regarding broadband rate regulation. In a 2018 article referencing the Pai FCC’s repeal of the 2015 rules, Sohn lamented in particular that removing the rules from Title II’s purview meant losing the “power to constrain ‘unjust and unreasonable’ prices, terms, and practices by [broadband] providers” (p. 345).

Rate Regulation by Any Other Name

Even if Title II regulation does not end up taking the form of explicit price setting by regulatory fiat, that doesn’t necessarily mean the threat of rate regulation will have been averted. Perhaps even more insidious is de facto rate regulation, in which agencies use their regulatory leverage to shape the pricing policies of providers. Indeed, Tim Wu—the progenitor of the term “net neutrality” and now an official in the Biden White House—has explicitly endorsed the use of threats by regulatory agencies in order to obtain policy outcomes: 

The use of threats instead of law can be a useful choice—not simply a procedural end run. My argument is that the merits of any regulative modality cannot be determined without reference to the state of the industry being regulated. Threat regimes, I suggest, are important and are best justified when the industry is undergoing rapid change—under conditions of “high uncertainty.” Highly informal regimes are most useful, that is, when the agency faces a problem in an environment in which facts are highly unclear and evolving. Examples include periods surrounding a newly invented technology or business model, or a practice about which little is known. Conversely, in mature, settled industries, use of informal procedures is much harder to justify.

The broadband industry is not new, but it is characterized by rapid technological change, shifting consumer demands, and experimental business models. Thus, under Wu’s reasoning, it appears ripe for regulation via threat.

What’s more, backdoor rate regulation is already practiced by the U.S. Department of Agriculture (USDA) in how it distributes emergency broadband funds to Internet service providers (ISPs) that commit to net-neutrality principles. The USDA prioritizes funding for applicants that operate “their networks pursuant to a ‘wholesale’ (in other words, ‘open access’) model and provid[e] a ‘low-cost option,’ both of which unnecessarily and detrimentally inject government rate regulation into the competitive broadband marketplace.”

States have also been experimenting with broadband rate regulation in the form of “affordable broadband” mandates. For example, New York State passed the Affordable Broadband Act (ABA) in 2021, which sought to assist low-income consumers by capping the price of service and mandating provision of a low-cost service tier. As the federal district court noted in striking down the law:

In Defendant’s words, the ABA concerns “Plaintiffs’ pricing practices” by creating a “price regime” that “set[s] a price ceiling,” which flatly contradicts [New York Attorney General Letitia James’] simultaneous assertion that “the ABA does not ‘rate regulate’ broadband services.” “Price ceilings” regulate rates.

The 2015 Open Internet Order’s ban on paid prioritization, couched at the time in terms of “fairness,” was itself effectively a rate regulation that set wholesale prices at zero. The order even empowered the FCC to decide the rates ISPs could charge to edge providers for interconnection or peering agreements on an individual, case-by-case basis. As we wrote at the time:

[T]he first complaint under the new Open Internet rule was brought against Time Warner Cable by a small streaming video company called Commercial Network Services. According to several news stories, CNS “plans to file a peering complaint against Time Warner Cable under the Federal Communications Commission’s new network-neutrality rules unless the company strikes a free peering deal ASAP.” In other words, CNS is asking for rate regulation for interconnection. Under the Open Internet Order, the FCC can rule on such complaints, but it can only rule on a case-by-case basis. Either TWC assents to free peering, or the FCC intervenes and sets the rate for them, or the FCC dismisses the complaint altogether and pushes such decisions down the road…. While the FCC could reject this complaint, it is clear that they have the ability to impose de facto rate regulation through case-by-case adjudication

The FCC’s ability under the OIO to ensure that prices were “fair” contemplated an enormous degree of discretionary power:

Whether it is rate regulation according to Title II (which the FCC ostensibly didn’t do through forbearance) is beside the point. This will have the same practical economic effects and will be functionally indistinguishable if/when it occurs.

The Economics of Price Controls

Economists from across the political spectrum have long decried the use of price controls. In a recent (now partially deleted) tweet, Nobel laureate and liberal New York Times columnist Paul Krugman lambasted calls for price controls in response to inflation as “truly stupid.” In a recent survey of top economists on issues related to inflation, University of Chicago economist Austan Goolsbee, a former chair of the Council of Economic Advisers under President Barack Obama, strongly disagreed that 1970s-style price controls could successfully reduce U.S. inflation over the next 12 months, stating simply: “Just stop. Seriously.”

The reason for the bipartisan consensus is clear: both history and economics have demonstrated that price caps lead to shortages by artificially stimulating demand for a good, while also creating downward pressure on supply for that good.
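To see the mechanics, consider a deliberately stylized linear market (the numbers are illustrative only and are not drawn from any broadband data):

\[
Q_d(p) = 100 - p, \qquad Q_s(p) = p, \qquad 100 - p^* = p^* \;\Rightarrow\; p^* = 50, \quad Q^* = 50.
\]
\[
\text{With a binding ceiling } \bar{p} = 30: \quad Q_d(30) = 70, \quad Q_s(30) = 30, \quad \text{shortage} = 70 - 30 = 40.
\]

The cap raises the quantity demanded, lowers the quantity supplied, and the gap between the two is the shortage; the mechanics are the same whether the capped good is gasoline, apartments, or broadband capacity.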

Broadband rate regulation, whether implicit or explicit, will have similarly negative effects on investment and deployment. Limiting returns on investment reduces the incentive to make those investments. Broadband markets subject to price caps would see particularly large dislocations, given the massive upfront investment required, the extended period over which returns are realized, and the elevated risk of under-recoupment for quality improvements. Not only would existing broadband providers make fewer and less intensive investments to maintain their networks, but they would also invest less in improving quality:

When it faces a binding price ceiling, a regulated monopolist is unable to capture the full incremental surplus generated by an increase in service quality. Consequently, when the firm bears the full cost of the increased quality, it will deliver less than the surplus-maximizing level of quality. As Spence (1975, p. 420, note 5) observes, “where price is fixed… the firm always sets quality too low.” (p 9-10)
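The logic behind the quoted result can be sketched in stripped-down form (a hedged simplification in the spirit of the Spence-style model the passage cites, not that paper’s own derivation). With price fixed at \( \bar{p} \), constant marginal cost \( c \), quality \( s \), and quality-investment cost \( k(s) \), the firm values quality only through the extra units it sells at the capped margin, while total surplus also counts the gain to inframarginal subscribers:

\[
\pi(s) = (\bar{p} - c)\,D(\bar{p}, s) - k(s) \quad\Rightarrow\quad \text{firm's choice: } (\bar{p} - c)\,D_s(\bar{p}, s) = k'(s),
\]
\[
W(s) = \pi(s) + CS(\bar{p}, s) \quad\Rightarrow\quad \text{surplus-maximizing: } (\bar{p} - c)\,D_s(\bar{p}, s) + CS_s(\bar{p}, s) = k'(s),
\]
\[
CS_s > 0 \;\Rightarrow\; s^{\text{firm}} < s^{\text{opt}} \quad (\text{given } k'' > 0 \text{ and standard concavity assumptions}).
\]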

Quality suffers under price regulation not just because firms can’t capture the full value of their investments, but also because it is often difficult to account for quality improvements in regulatory pricing schemes:

The design and enforcement of service quality regulations is challenging for at least three reasons. First, it can be difficult to assess the benefits and the costs of improving service quality. Absent accurate knowledge of the value that consumers place on elevated levels of service quality and the associated costs, it is difficult to identify appropriate service quality standards. It can be particularly challenging to assess the benefits and costs of improved service quality in settings where new products and services are introduced frequently. Second, the level of service quality that is actually delivered sometimes can be difficult to measure. For example, consumers may value courteous service representatives, and yet the courtesy provided by any particular representative may be difficult to measure precisely. When relevant performance dimensions are difficult to monitor, enforcing desired levels of service quality can be problematic. Third, it can be difficult to identify the party or parties that bear primary responsibility for realized service quality problems. To illustrate, a customer may lose telephone service because an underground cable is accidentally sliced. This loss of service could be the fault of the telephone company if the company fails to bury the cable at an appropriate depth in the ground or fails to notify appropriate entities of the location of the cable. Alternatively, the loss of service might reflect a lack of due diligence by field workers from other companies who slice a telephone cable that is buried at an appropriate depth and whose location has been clearly identified. (p 10)

Firms are also less likely to enter new markets, where entry is risky and competition with a price-regulated monopolist can be a bleak prospect. Over time, price caps would degrade network quality and availability. Price caps in sectors characterized by large capital investment requirements also tend to exacerbate the need for an exclusive franchise, in order to provide some level of predictable returns for the regulated provider. Thus, “managed competition” of this sort may actually have the effect of reducing competition.

None of these concerns are dissipated where regulators use indirect, rather than direct, means to cap prices. Interconnection mandates and bans on paid prioritization both set wholesale prices at zero. Broadband is a classic multi-sided market. If the price on one side of the market is set at zero through rate regulation, then there will be upward pricing pressure on the other side of the market. This means higher prices for consumers (or else, it will require another layer of imprecise and complex regulation and even deeper constraints on investment). 
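A back-of-the-envelope sketch of that pricing pressure (purely hypothetical quantities, using a simple break-even condition that ignores demand responses on both sides): suppose a network must recover cost \( F \) from end users (side A) and edge providers (side B).

\[
p_A N_A + p_B N_B = F \quad\Rightarrow\quad p_A = \frac{F - p_B N_B}{N_A}.
\]
\[
\text{Forcing } p_B = 0 \text{ by rule pushes the break-even user price up to } p_A = \frac{F}{N_A}.
\]

Suppressing that pass-through would require exactly the additional layer of rate-setting described above.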

Similarly, implicit rate regulation under an amorphous “general conduct standard” like that included in the 2015 order would allow the FCC to effectively ban practices like zero rating on mobile data plans. At the time, the OIO restricted ISPs’ ability to “unreasonably interfere with or disadvantage”: 

  1. consumer access to lawful content, applications, and services; or
  2. content providers’ ability to distribute lawful content, applications or services.

The FCC thus signaled quite clearly that it would deem many zero-rating arrangements as manifestly “unreasonable.” Yet, for mobile customers who want to consume only a limited amount of data, zero rating of popular apps or other data uses is, in most cases, a net benefit for consumer welfare:

These zero-rated services are not typically designed to direct users’ broad-based internet access to certain content providers ahead of others; rather, they are a means of moving users from a world of no access to one of access….

…This is a business model common throughout the internet (and the rest of the economy, for that matter). Service providers often offer a free or low-cost tier that is meant to facilitate access—not to constrain it.

Economics has long recognized the benefits of such pricing mechanisms, which is why competition authorities always scrutinize such practices under a rule of reason, requiring a showing of substantial exclusionary effect and lack of countervailing consumer benefit before condemning such practices. The OIO’s Internet conduct rule, however, encompassed no such analytical limits, instead authorizing the FCC to forbid such practices in the name of a nebulous neutrality principle and with no requirement to demonstrate net harm. Again, although marketed under a different moniker, banning zero rating outright is a de facto price regulation—and one that is particularly likely to harm consumers.

Conclusion

Ultimately, it’s important to understand that rate regulation, whatever the imagined benefits, is not a costless endeavor. Costs and risk do not disappear under rate regulation; they are simply shifted in one direction or another—typically with costs borne by consumers through some mix of reduced quality and innovation. 

While more can be done to expand broadband access in the United States, the Internet has worked just fine without Title II regulation. It’s a bit trite to repeat, but it remains relevant to consider how well U.S. networks fared during the COVID-19 pandemic. That performance was thanks to ongoing investment from broadband companies over the last 20 years, suggesting the market for broadband is far more competitive than net-neutrality advocates often claim.

Government policy may well be able to help accelerate broadband deployment to the unserved portions of the country where it is most needed. But the way to get there is not by imposing price controls on broadband providers. Instead, we should be removing costly, government-erected barriers to buildout and subsidizing and educating consumers where necessary.

Activists who railed against the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA) a decade ago today celebrate the 10th anniversary of their day of protest, which they credit with sending the bills down to defeat.

Much of the anti-SOPA/PIPA campaign was based on a gauzy notion of “realizing [the] democratizing potential” of the Internet. Which is fine, until it isn’t.

But despite the activists’ temporary legislative victory, the methods of combating digital piracy that SOPA/PIPA contemplated have been employed successfully around the world. It may, indeed, be time for the United States to revisit that approach, as the very real problems the legislation sought to combat haven’t gone away.

From the perspective of rightsholders, the bill’s most important feature was also its most contentious: the ability to enforce judicial “site-blocking orders.” A site-blocking order is a type of remedy sometimes referred to as a no-fault injunction. Under SOPA/PIPA, a court would have been permitted to issue orders that could be used to force a range of firms—from financial providers to ISPs—to cease doing business with or suspend the service of a website that hosted infringing content.

Under current U.S. law, even when a court finds that a site has willfully engaged in infringement, stopping the infringement can be difficult, especially when the parties and their facilities are located outside the country. While Section 512 of the Digital Millennium Copyright Act does allow courts to issue injunctions, there is ambiguity as to whether it allows courts to issue injunctions that obligate online service providers (“OSP”) not directly party to a case to remove infringing material.

Section 512(j), for instance, provides for issuing injunctions “against a service provider that is not subject to monetary remedies under this section.” The “not subject to monetary remedies under this section” language could be construed to mean that such injunctions may be obtained even against OSPs that have not been found at fault for the underlying infringement. But as Motion Picture Association President Stanford K. McCoy testified in 2020:

In more than twenty years … these provisions of the DMCA have never been deployed, presumably because of uncertainty about whether it is necessary to find fault against the service provider before an injunction could issue, unlike the clear no-fault injunctive remedies available in other countries.

But while no-fault injunctions for copyright infringement have not materialized in the United States, this remedy has been used widely around the world. In fact, more than 40 countries—including Denmark, Finland, France, India, and England and Wales—have enacted or are under some obligation to enact rules allowing for no-fault injunctions that direct ISPs to disable access to websites that predominantly promote copyright infringement.

In short, precisely the approach to controlling piracy that SOPA/PIPA envisioned has been in force around the world over the last decade. This demonstrates that, if properly tailored, no-fault injunctions are an ideal tool for courts to use in the fight against piracy.

If anything, we should be using the anniversary of SOPA/PIPA as an occasion to reflect on a missed opportunity. Congress should amend Section 512 to grant U.S. courts authority to issue no-fault injunctions that require OSPs to block access to sites that willfully engage in mass infringement.

Others already have noted that the Federal Trade Commission’s (FTC) recently released 6(b) report on the privacy practices of Internet service providers (ISPs) fails to comprehend that widespread adoption of privacy-enabling technology—in particular, Hypertext Transfer Protocol Secure (HTTPS) and DNS over HTTPS (DoH), but also the use of virtual private networks (VPNs)—largely precludes ISPs from seeing what their customers do online.

But a more fundamental problem with the report lies in its underlying assumption that targeted advertising is inherently nefarious. Indeed, much of the report highlights not actual violations of the law by the ISPs, but “concerns” that they could use customer data for targeted advertising much like Google and Facebook already do. The final subheading before the report’s conclusion declares: “Many ISPs in Our Study Can Be At Least As Privacy-Intrusive as Large Advertising Platforms.”

The report does not elaborate on why it would be bad for ISPs to enter the targeted advertising market, which is particularly strange given the public focus regulators have shone in recent months on the supposed dominance of Google, Facebook, and Amazon in online advertising. As the International Center for Law & Economics (ICLE) has argued in past filings on the issue, there simply is no justification to apply sector-specific regulations to ISPs for the mere possibility that they will use customer data for targeted advertising.

ISPs Could Be Competition for the Digital Advertising Market

It is ironic to witness FTC warnings about ISPs engaging in targeted advertising even as there are open antitrust cases against Google for its alleged dominance of the digital advertising market. In fact, news reports suggest the U.S. Justice Department (DOJ) is preparing to join the antitrust suits against Google brought by state attorneys general. An obvious upshot of ISPs engaging in more targeted advertising is that they could serve as a potential source of competition for Google, Facebook, and Amazon.

Despite the fears raised in the 6(b) report of rampant data collection for targeted ads, ISPs are, in fact, just a very small part of the $152.7 billion U.S. digital advertising market. As the report itself notes: “in 2020, the three largest players, Google, Facebook, and Amazon, received almost two-thirds of all U.S. digital advertising,” while Verizon pulled in just 3.4% of U.S. digital advertising revenues in 2018.

If the 6(b) report is correct that ISPs have access to troves of consumer data, it raises the question of why they don’t enjoy a bigger share of the digital advertising market. It could be that ISPs have other reasons not to engage in extensive advertising. Internet service provision is a two-sided market. ISPs could (and, over the years in various markets, some have) rely on advertising to subsidize Internet access. That they instead rely primarily on charging users directly for subscriptions may tell us something about prevailing demand on either side of the market.

Regardless of the reasons, the fact that ISPs have little presence in digital advertising suggests that it would be a misplaced focus for regulators to pursue industry-specific privacy regulation to crack down on ISP data collection for targeted advertising.

What’s the Harm in Targeted Advertising, Anyway?

At the heart of the FTC report is the commission’s contention that “advertising-driven surveillance of consumers’ online activity presents serious risks to the privacy of consumer data.” In Part V.B of the report, five of the six risks the FTC lists as associated with ISP data collection are related to advertising. But the only argument the report puts forth for why targeted advertising would be inherently pernicious is the assertion that it is contrary to user expectations and preferences.

As noted earlier, in a two-sided market, targeted ads could allow one side of the market to subsidize the other side. In other words, ISPs could engage in targeted advertising in order to reduce the price of access to consumers on the other side of the market. This is, indeed, one of the dominant models throughout the Internet ecosystem, so it wouldn’t be terribly unusual.

Taking away ISPs’ ability to engage in targeted advertising—particularly if it is paired with rumored net neutrality regulations from the Federal Communications Commission (FCC)—would necessarily put upward pricing pressure on the sector’s remaining revenue stream: subscriber fees. With bridging the so-called “digital divide” (i.e., building out broadband to rural and other unserved and underserved markets) a major focus of the recently enacted infrastructure spending package, it would be counterproductive to simultaneously take steps that would make Internet access more expensive and less accessible.

Even if the FTC were right that data collection for targeted advertising poses the risk of consumer harm, the report fails to justify why a regulatory scheme should apply solely to ISPs when they are such a small part of the digital advertising marketplace. Sector-specific regulation only makes sense if the FTC believes that ISPs are uniquely opaque among data collectors with respect to their collection practices.

Conclusion

The sector-specific approach implicitly endorsed by the 6(b) report would limit competition in the digital advertising market, even as there are already legal and regulatory inquiries into whether that market is sufficiently competitive. The report also fails to make the case that data collection for targeted advertising is inherently bad, or uniquely bad when done by an ISP.

There may or may not be cause for comprehensive federal privacy legislation, depending on whether it would pass cost-benefit analysis, but there is no reason to focus on ISPs alone. The FTC needs to go back to the drawing board.

Capping months of inter-chamber legislative wrangling, President Joe Biden on Nov. 15 signed the $1 trillion Infrastructure Investment and Jobs Act (also known as the bipartisan infrastructure framework, or BIF), which sets aside $65 billion of federal funding for broadband projects. While there is much to praise about the package’s focus on broadband deployment and adoption, whether that money will be well spent depends substantially on how the law is implemented and whether the National Telecommunications and Information Administration (NTIA) adopts adequate safeguards to avoid waste, fraud, and abuse.

The primary aim of the bill’s broadband provisions is to connect the truly unconnected—what the bill refers to as the “unserved” (those lacking a connection of at least 25/3 Mbps) and “underserved” (those lacking a connection of at least 100/20 Mbps). In seeking to realize this goal, it’s important to bear in mind that dynamic analysis demonstrates that the broadband market is overwhelmingly healthy, even in locales with relatively few market participants. According to the Federal Communications Commission’s (FCC) latest Broadband Progress Report, approximately 5% of U.S. consumers have no options for at least 25/3 Mbps broadband, and slightly more than 8% have no options for at least 100/10 Mbps.

Reaching the truly unserved portions of the country will require targeting subsidies toward areas that are currently uneconomic to reach. Without properly targeted subsidies, there is a risk of dampening incentives for private investment and slowing broadband buildout. These tradeoffs must be considered. As we wrote previously in our Broadband Principles issue brief:

  • To move forward successfully on broadband infrastructure spending, Congress must take seriously the roles of both the government and the private sector in reaching the unserved.
  • Current U.S. broadband infrastructure is robust, as demonstrated by the way it met the unprecedented surge in demand for bandwidth during the recent COVID-19 pandemic.
  • To the extent it is necessary at all, public investment in broadband infrastructure should focus on providing Internet access to those who don’t have it, rather than subsidizing competition in areas that already do.
  • Highly prescriptive mandates—like requiring a particular technology or requiring symmetrical speeds—will be costly and likely to skew infrastructure spending away from those in unserved areas.
  • There may be very limited cases where municipal broadband is an effective and efficient solution to a complete absence of broadband infrastructure, but policymakers must narrowly tailor any such proposals to avoid displacing private investment or undermining competition.
  • Consumer-directed subsidies should incentivize broadband buildout and, where necessary, guarantee the availability of minimum levels of service reasonably comparable to those in competitive markets.
  • Firms that take government funding should be subject to reasonable obligations. Competitive markets should be subject to lighter-touch obligations.

The Good

The BIF’s broadband provisions ended up in a largely positive place, at least as written. There are two primary ways it seeks to achieve its goals of promoting adoption and deploying broadband to unserved/underserved areas. First, it makes permanent the Emergency Broadband Benefit program that had been created to provide temporary aid to households who struggled to afford Internet service during the COVID-19 pandemic, though it does lower the monthly user subsidy from $50 to $30. The renamed Affordable Connectivity Program can be used to pay for broadband on its own, or as part of a bundle of other services (e.g., a package that includes telephone, texting, and the rental fee on equipment).

Relatedly, the bill also subsidizes the cost of equipment by extending a one-time reimbursement of up to $100 to broadband providers when a consumer takes advantage of the provider’s discounted sale of connected devices, such as laptops, desktops, or tablet computers capable of Wi-Fi and video conferencing. 

The decision to make the emergency broadband benefit a permanent program broadly comports with recommendations we have made to employ user subsidies (such as connectivity vouchers) to encourage broadband adoption.

The second and arguably more important of the bill’s broadband provisions is its creation of the $42 billion Broadband Equity, Access and Deployment (BEAD) Program. Under the direction of the NTIA, BEAD will direct grants to state governments to help the states expand access to and use of high-speed broadband.  

On the bright side, BEAD does appear to be designed to connect the country’s truly unserved regions—which, as noted above, account for about 8% of the nation’s households. The law explicitly requires prioritizing unserved areas before underserved areas. Even where the text references underserved areas as an additional priority, it does so in a way that won’t necessarily distort private investment.  The bill also creates preferences for projects in persistent and high-poverty areas. Thus, the targeted areas are very likely to fall on the “have-not” side of the digital divide.

On its face, the subsidy and grant approach taken in the bill is, all things considered, commendable. As we note in our broadband report, care must be taken to avoid interventions that distort private investment incentives, particularly in a successful industry like broadband. The goal, after all, is more broadband deployment. If policy interventions only replicate private options (usually at higher cost) or, worse, drive private providers from a market, broadband deployment will be slowed or reversed. The approach taken in this bill attempts to line up private incentives with regulatory goals.

As we discuss below, however, the devil is in the details. In particular, BEAD’s structure could theoretically allow enough discretion in execution that a large amount of waste, fraud, and abuse could end up frustrating the program’s goals.

The Bad

While the bill largely keeps the right focus on building out broadband in unserved areas, there are reasons to question some of its preferences and solutions. For instance, the state subgrant process puts for-profit and government-run broadband solutions on a level playing field for the purposes of receiving funds, even though the two types of entities exist in very different institutional environments with very different incentives.

There is also a requirement that projects provide broadband of at least 100/20 Mbps speed, even though the bill defines “unserved” as lacking at least 25/3 Mbps. While this is not terribly objectionable, the preference for 100/20 Mbps could have downstream effects on the hardest-to-connect areas. It may only be economically feasible to connect some very remote areas with a 25/3 Mbps connection. Requiring higher speeds in such areas may, despite the best intentions, slow deployment and push providers to prioritize areas that are relatively easier to connect.

For comparison, the FCC’s Connect America Fund and Rural Digital Opportunity Fund programs do give greater weight in bidding to providers that can deploy higher-speed connections. But in areas where a lower speed tier is cost-justified, a provider can still bid and win. This sort of approach would have been preferable in the infrastructure bill.

But the bill’s largest infirmity is not in its terms or aims, but in the potential for mischief in its implementation. In particular, the BEAD grant program lacks the safeguards that have traditionally been applied to this sort of funding at the FCC. 

Typically, an aid program of this sort would be administered by the FCC under rulemaking bound by the Administrative Procedure Act (APA). As cumbersome as that process may sometimes be, APA rulemaking provides a high degree of transparency that results in fairly reliable public accountability. BEAD, by contrast, eschews this process, and instead permits NTIA to work directly with governors and other relevant state officials to dole out the money.  The funds will almost certainly be distributed more quickly, but with significantly less accountability and oversight. 

A large amount of the implementation detail will be driven at the state level. By definition, this will make it more difficult to monitor how well the program’s aims are being met. It also creates a process with far more opportunities for highly interested parties to lobby state officials to direct funding to their individual pet projects. None of this is to say that BEAD funding will necessarily be misdirected, but NTIA will need to be very careful in how it proceeds.

Conclusion: The Opportunity

Although the BIF’s broadband funds are slated to be distributed next year, we may soon be able to see whether there are warning signs that the legitimate goal of broadband deployment is being derailed for political favoritism. BEAD initially grants a flat $100 million to each state; it is only additional monies over that initial amount that need to be sought through the grant program. Thus, it is highly likely that some states will begin to enact legislation and related regulations in the coming year based on that guaranteed money. This early regulatory and legislative activity could provide insight into the pitfalls the full BEAD grantmaking program will face.

The larger point, however, is that the program needs safeguards. Where Congress declined to adopt them, NTIA would do well to implement them. Obviously, this will be something short of full APA rulemaking, but the NTIA will need to make accountability and reliability a top priority to ensure that the digital divide is substantially closed.

In the U.S. system of dual federal and state sovereigns, normative analysis reveals principles that could guide state antitrust-enforcement priorities, promote complementarity between federal and state antitrust policy, and thereby advance consumer welfare.

Discussion

Positive analysis reveals that state antitrust enforcement is a firmly entrenched feature of American antitrust policy. The U.S. Supreme Court (1) has consistently held that federal antitrust law does not displace state antitrust law (see, for example, California v. ARC America Corp. (U.S., 1989) (“Congress intended the federal antitrust laws to supplement, not displace, state antitrust remedies”)); and (2) has upheld state antitrust laws even when they have some impact on interstate commerce (see, for example, Exxon Corp. v. Governor of Maryland (U.S., 1978)).

The normative question remains, however, as to what the appropriate relationship between federal and state antitrust enforcement should be. Should federal and state antitrust regimes be complementary, with state law enforcement enhancing the effectiveness of federal enforcement? Or should state antitrust enforcement compete with federal enforcement, providing an alternative “vision” of appropriate antitrust standards?

The generally accepted (until very recently) modern American consumer-welfare-centric antitrust paradigm (see here) points to the complementary approach as most appropriate. In other words, if antitrust is indeed the “magna carta” of American free enterprise (see United States v. Topco Associates, Inc. (U.S., 1972)), and if consumer welfare is the paramount goal of antitrust (a position consistently held by the Supreme Court since Reiter v. Sonotone Corp. (U.S., 1979)), it follows that federal and state antitrust enforcement coexist best as complements, directed jointly at maximizing consumer-welfare enhancement. In recent decades it also generally has made sense for state enforcers to defer to U.S. Justice Department (DOJ) and Federal Trade Commission (FTC) matter-specific consumer-welfare assessments. This conclusion follows from the federal agencies’ specialized resource advantage, reflected in large staffs of economic experts and attorneys with substantial industry knowledge.

The reality, nevertheless, is that while state enforcers often have cooperated with their federal colleagues on joint enforcement, state enforcement approaches historically have been imperfectly aligned with federal policy. That imperfect alignment has been at odds with consumer welfare in key instances. Certain state antitrust schemes, for example, continue to treat resale price maintenance (RPM) as per se illegal (see, for example, here), a position inconsistent with the federal consumer-welfare-centric rule-of-reason approach (see Leegin Creative Leather Products, Inc. v. PSKS, Inc. (U.S., 2007)). The disparate treatment of RPM has a substantial national impact on business conduct, because commercially important states such as California and New York are among those that continue to flatly condemn RPM.

State enforcers also have from time to time sought to oppose major transactions that received federal antitrust clearance, such as several states’ unsuccessful opposition to the Sprint/T-Mobile merger (see here). Although the states failed to block the merger, they did extract settlement concessions that imposed burdens on the merging parties, in addition to the divestiture requirements imposed by the DOJ in settling the matter (see here). Inconsistencies between federal and state antitrust-enforcement decisions on cases of nationwide significance generate litigation waste and may detract from final resolutions that optimize consumer welfare.

If consumer-welfare optimization is their goal (which I believe it should be in an ideal world), state attorneys general should seek to direct their limited antitrust resources to their highest-valued uses, rather than second-guessing federal antitrust policy and enforcement decisions.

An optimal approach might focus first and foremost on allocating state resources to combat primarily intrastate competitive harms that are clear and unequivocal (such as intrastate bid rigging, hard-core price fixing, and horizontal market division). This could free up federal resources to focus on matters that are primarily interstate in nature, consistent with federalism. (In this regard, see a thoughtful proposal by D. Bruce Johnsen and Moin A. Yahya.)

Second, state enforcers could also devote some resources to assist federal enforcers in developing state-specific evidence in support of major national cases. (This would allow state attorneys general to publicize their “big case” involvement in a productive manner.)

Third, but not least, competition advocacy directed at the removal of anticompetitive state laws and regulations could prove an effective means of improving the competitive climate within individual states (see, for example, here). State antitrust enforcers could advance such advocacy through amicus curiae briefs and (where politically feasible) through interventions (perhaps informal) with peer officials who oversee regulation. Subject to this general guidance, the nature of state antitrust resource allocations would depend upon the specific competitive problems particular to each state.

Of course, in the real world, public-choice considerations and rent seeking may at times influence antitrust enforcement decision-making by state (and federal) officials. Nonetheless, this capsule normative summary of a suggested ideal state antitrust-enforcement protocol is useful in that it highlights how state enforcers could usefully complement (assumed) sound federal antitrust initiatives.

Great minds think alike. A well-crafted and much more detailed normative exploration of ideal state antitrust enforcement is found in a recently released Pelican Institute policy brief by Ted Bolema and Eric Peterson. Entitled The Proper Role for States in Antitrust Lawsuits, the brief concludes (in a manner consistent with my observations):

This review of cases and leading commentaries shows that states should focus their involvement in antitrust cases on instances where:

· they have unique interests, such as local price-fixing

· play a unique role, such as where they can develop evidence about how alleged anticompetitive behavior uniquely affects local markets

· they can bring additional resources to bear on existing federal litigation.

States can also provide a useful check on overly aggressive federal enforcement by providing courts with a traditional perspective on antitrust law — a role that could become even more important as federal agencies aggressively seek to expand their powers. All of these are important roles for states to play in antitrust enforcement, and translate into positive outcomes that directly benefit consumers.

Conversely, when states bring significant, novel antitrust lawsuits on their own, they don’t tend to benefit either consumers or constituents. These novel cases often move resources away from where they might be used more effectively, and states usually lose (as with the recent dismissal with prejudice of a state case against Facebook). Through more strategic antitrust engagement, with a focus on what states can do well and where they can make a positive difference in antitrust enforcement, states would best serve the interests of their consumers, constituents, and taxpayers.

Conclusion

Under a consumer-welfare-centric regime, an appropriate role can be identified for state antitrust enforcement that would helpfully complement federal efforts in an optimal fashion. Unfortunately, in this tumultuous period of federal antitrust policy shifts, in which the central role of the consumer welfare standard has been called into question, it might appear fatuous to speculate on the ideal melding of federal and state approaches to antitrust administration. One should, however, prepare for the time when a more enlightened, economically informed approach will be reinstituted. In anticipation of that day, serious thinking about antitrust federalism should not be neglected.

Why do digital industries routinely lead to one company having a very large share of the market (at least if one defines markets narrowly)? To anyone familiar with competition policy discussions, the answer might seem obvious: network effects, scale-related economies, and other barriers to entry lead to winner-take-all dynamics in platform industries. Accordingly, it is believed that the first platform to successfully unlock a given online market enjoys a decisive first-mover advantage.

This narrative has become ubiquitous in policymaking circles. Thinking of this sort notably underpins high-profile reports on competition in digital markets (here, here, and here), as well as ensuing attempts to regulate digital platforms, such as the draft American Innovation and Choice Online Act and the EU’s Digital Markets Act.

But are network effects and the like the only way to explain why these markets look the way they do? While there is no definitive answer, scholars routinely overlook an alternative explanation that tends to undercut the narrative that tech markets have become non-contestable.

The alternative model is simple: faced with zero prices and the almost complete absence of switching costs, users have every reason to join their preferred platform. If user preferences are relatively uniform and one platform has a meaningful quality advantage, then there is every reason to expect that most consumers will join the same one—even though the market remains highly contestable. On the other side of the equation, because platforms face very few capacity constraints, there are few limits to a given platform’s growth. As will be explained throughout this piece, this intuition is as old as economics itself.
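A minimal simulation can make this intuition concrete (an illustrative sketch with hypothetical platform names and parameter values, not an empirical model): with zero prices and zero switching costs, each user simply picks the platform they perceive as best, so even a modest quality edge yields a dominant share; and the moment a higher-quality entrant appears, the same logic flips the market.

import random

# Illustrative sketch only: platform names, quality levels, and the noise scale are hypothetical.
def choose_platform(qualities, noise=0.05):
    """Each user picks the platform with the highest perceived quality, where
    perceived quality = true quality + a small idiosyncratic taste shock.
    Prices are zero and switching is free, so nothing else enters the choice."""
    perceived = {name: q + random.gauss(0, noise) for name, q in qualities.items()}
    return max(perceived, key=perceived.get)

def market_shares(qualities, n_users=20_000):
    counts = {name: 0 for name in qualities}
    for _ in range(n_users):
        counts[choose_platform(qualities)] += 1
    return {name: round(c / n_users, 3) for name, c in counts.items()}

random.seed(42)

# A modest quality edge yields a dominant share, even though the market stays contestable.
print(market_shares({"incumbent": 1.10, "rival": 1.00}))

# A higher-quality entrant appears; with no switching costs, users simply re-choose.
print(market_shares({"incumbent": 1.10, "rival": 1.00, "entrant": 1.25}))

Nothing in the sketch depends on network effects or entry barriers; concentration follows from uniform tastes and frictionless choice alone.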

The Bertrand Paradox

In 1883, French mathematician Joseph Bertrand published a powerful critique of two of the most high-profile economic thinkers of his time: the late Antoine Augustin Cournot and Léon Walras (it would be another seven years before Alfred Marshall published his famous principles of economics).

Bertrand criticized several of Cournot and Walras’ widely accepted findings. This included Cournot’s conclusion that duopoly competition would lead to prices above marginal cost—or, in other words, that duopolies were imperfectly competitive.

By reformulating the problem slightly, Bertrand arrived at the opposite conclusion. He argued that each firm’s incentive to undercut its rival would ultimately lead to marginal cost pricing, and one seller potentially capturing the entire market:

There is a decisive objection [to Cournot’s model]: According to his hypothesis, no [supracompetitive] equilibrium is possible. There is no limit to price decreases; whatever the joint price being charged by firms, a competitor could always undercut this price and, with few exceptions, attract all consumers. If the competitor is allowed to get away with this [i.e. the rival does not react], it will double its profits.

This result is mainly driven by the assumption that, unlike in Cournot’s model, firms can immediately respond to their rival’s chosen price/quantity. In other words, Bertrand implicitly framed the competitive process as price competition, rather than quantity competition (under price competition, firms do not face any capacity constraints and they cannot commit to producing given quantities of a good):

If Cournot’s calculations mask this result, it is because of a remarkable oversight. Referring to them as D and D’, Cournot deals with the quantities sold by each of the two competitors and treats them as independent variables. He assumes that if one were to change by the will of one of the two sellers, the other one could remain fixed. The opposite is evidently true.

This later came to be known as the “Bertrand paradox”—the notion that duopoly-market configurations can produce the same outcome as perfect competition (i.e., P=MC).
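The undercutting argument can be stated compactly (a textbook sketch under the standard assumptions of two sellers with identical constant marginal cost \( c \), a homogeneous good, and no capacity constraints):

\[
D_i(p_i, p_j) =
\begin{cases}
D(p_i) & \text{if } p_i < p_j, \\
\tfrac{1}{2}\,D(p_i) & \text{if } p_i = p_j, \\
0 & \text{if } p_i > p_j.
\end{cases}
\]
\[
\text{If } p_j > c, \text{ firm } i \text{ gains by charging } p_j - \varepsilon \text{ and serving the whole market, so no price above } c \text{ survives;}
\]
\[
\text{the unique equilibrium is } p_1 = p_2 = c, \text{ i.e., } P = MC, \text{ with only two sellers.}
\]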

But while Bertrand’s critique was ostensibly directed at Cournot’s model of duopoly competition, his underlying point was much broader. Above all, Bertrand seemed preoccupied with the notion that expressing economic problems mathematically merely gives them a veneer of accuracy. In that sense, he was one of the first economists (at least to my knowledge) to argue that the choice of assumptions has a tremendous influence on the predictions of economic models, potentially rendering them unreliable:

On other occasions, Cournot introduces assumptions that shield his reasoning from criticism—scholars can always present problems in a way that suits their reasoning.

All of this is not to say that Bertrand’s predictions regarding duopoly competition necessarily hold in real-world settings; evidence from experimental settings is mixed. Instead, the point is epistemological. Bertrand’s reasoning was groundbreaking because he ventured that market structures are not the sole determinants of consumer outcomes. More broadly, he argued that assumptions regarding the competitive process hold significant sway over the results that a given model may produce (and, as a result, over normative judgements concerning the desirability of given market configurations).

The Theory of Contestable Markets

Bertrand is certainly not the only economist to have suggested market structures alone do not determine competitive outcomes. In the early 1980s, William Baumol (and various co-authors) went one step further. Baumol argued that, under certain conditions, even monopoly market structures could deliver perfectly competitive outcomes. This thesis thus rejected the Structure-Conduct-Performance (“SCP”) Paradigm that dominated policy discussions of the time.

Baumol’s main point was that industry structure is not the main driver of market “contestability,” which is the key determinant of consumer outcomes. In his words:

In the limit, when entry and exit are completely free, efficient incumbent monopolists and oligopolists may in fact be able to prevent entry. But they can do so only by behaving virtuously, that is, by offering to consumers the benefits which competition would otherwise bring. For every deviation from good behavior instantly makes them vulnerable to hit-and-run entry.

For instance, it is widely accepted that “perfect competition” leads to low prices because firms are price-takers; if a firm does not sell at marginal cost, it will be undercut by rivals. Observers often assume this is due to the number of independent firms on the market. Baumol suggests this is wrong. Instead, the result is driven by the sanction that firms face for deviating from competitive pricing.

In other words, numerous competitors are a sufficient, but not necessary condition for competitive pricing. Monopolies can produce the same outcome when there is a present threat of entry and an incumbent’s deviation from competitive pricing would be sanctioned. This is notably the case when there are extremely low barriers to entry.

Take this hypothetical example from the world of cryptocurrencies. It is largely irrelevant to a user whether there are few or many crypto exchanges on which to trade coins, nonfungible tokens (NFTs), etc. What matters is that there is at least one exchange that meets the user’s needs in terms of both price and quality of service. This could happen because there are many competing exchanges, or because any failure by the few (or even lone) existing exchanges to meet those needs would attract the entry of others to which users could readily switch—thus keeping the behavior of the existing exchanges in check.

This has far-reaching implications for antitrust policy, as Baumol was quick to point out:

This immediately offers what may be a new insight on antitrust policy. It tells us that a history of absence of entry in an industry and a high concentration index may be signs of virtue, not of vice. This will be true when entry costs in our sense are negligible.

Given what precedes, Baumol surmised that industry structure must be driven by endogenous factors—such as firms’ cost structures—rather than the intensity of competition that they face. For instance, scale economies might make monopoly (or another structure) the most efficient configuration in some industries. But so long as rivals can sanction incumbents for failing to compete, the market remains contestable. Accordingly, at least in some industries, both the most efficient and the most contestable market configuration may entail some level of concentration.

To put this last point in even more concrete terms, online platform markets may have features that make scale (and large market shares) efficient. If so, there is every reason to believe that competition could lead to more, not less, concentration. 

How Contestable Are Digital Markets?

The insights of Bertrand and Baumol have important ramifications for contemporary antitrust debates surrounding digital platforms. Indeed, it is critical to ascertain whether the (relatively) concentrated market structures we see in these industries are a sign of superior efficiency (and are consistent with potentially intense competition), or whether they are merely caused by barriers to entry.

The barrier-to-entry explanation has been repeated ad nauseam in recent scholarly reports, competition decisions, and pronouncements by legislators. There is thus little need to restate that thesis here. On the other hand, the contestability argument is almost systematically ignored.

Several factors suggest that online platform markets are far more contestable than critics routinely make them out to be.

First and foremost, consumer switching costs are extremely low for most online platforms. To cite but a few examples: Changing your default search engine requires at most a couple of clicks; joining a new social network can be done by downloading an app and importing your contacts to the app; and buying from an alternative online retailer is almost entirely frictionless, thanks to intermediaries such as PayPal.

These zero or near-zero switching costs are compounded by consumers’ ability to “multi-home.” In simple terms, joining TikTok does not require users to close their Facebook account. And the same applies to other online services. As a result, there is almost no opportunity cost to join a new platform. This further reduces the already tiny cost of switching.

Decades of app development have greatly improved the quality of applications’ graphical user interfaces (GUIs), to such an extent that costs to learn how to use a new app are mostly insignificant. Nowhere is this more apparent than for social media and sharing-economy apps (it may be less true for productivity suites that enable more complex operations). For instance, remembering a couple of intuitive swipe motions is almost all that is required to use TikTok. Likewise, ridesharing and food-delivery apps merely require users to be familiar with the general features of other map-based applications. It is almost unheard of for users to complain about usability—something that would have seemed impossible in the early 21st century, when complicated interfaces still plagued most software.

A second important argument in favor of contestability is that, by and large, online platforms face only limited capacity constraints. In other words, platforms can expand output rapidly (though not necessarily costlessly).

Perhaps the clearest example of this is the sudden rise of the Zoom service in early 2020. As a result of the COVID pandemic, Zoom went from around 10 million daily active users in early 2020 to more than 300 million by late April 2020. Despite being a relatively data-intensive service, Zoom did not struggle to meet this new demand from a more than 30-fold increase in its user base. The service never had to turn down users, reduce call quality, or significantly increase its price. In short, capacity largely followed demand for its service. Online industries thus seem closer to the Bertrand model of competition, where the best platform can almost immediately serve any consumers that demand its services.

Conclusion

Of course, none of this should be construed as a claim that online markets are perfectly contestable. The central point is, instead, that critics are too quick to assume they are not. Take the following examples.

Scholars routinely cite the putatively strong concentration of digital markets to argue that big tech firms do not face strong competition, but this is a non sequitur. As Bertrand and Baumol (and others) show, what matters is not whether digital markets are concentrated, but whether they are contestable. If a superior rival could rapidly gain user traction, that prospect alone would discipline the behavior of incumbents.

Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because incumbents are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.

Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable that they constitute barriers to entry).

Finally, and perhaps most importantly, this piece has argued that many factors can reconcile the relatively concentrated market structures we see in digital industries with vigorous competition. Low switching costs and the near-absence of capacity constraints are but two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.

In short, critics’ failure to grapple meaningfully with these issues has shaped the prevailing zeitgeist in tech-policy debates. Cournot and Bertrand’s intuitions about oligopoly competition may be more than a century old, but they continue to be tested empirically. It is about time the same empirical discipline were applied to claims about digital markets.

Still from Squid Game, Netflix and Siren Pictures Inc., 2021

Recent commentary on the proposed merger between WarnerMedia and Discovery, as well as Amazon’s acquisition of MGM, often has included the suggestion that the online content-creation and video-streaming markets are excessively consolidated, or that they will become so absent regulatory intervention. For example, in a recent letter to the U.S. Justice Department (DOJ), the American Antitrust Institute and Public Knowledge opine that:

Slow and inadequate oversight risks the streaming market going the same route as cable—where consumers have little power, few options, and where consolidation and concentration reign supreme. A number of threats to competition are clear, as discussed in this section, including: (1) market power issues surrounding content and (2) the role of platforms in “gatekeeping” to limit competition.

But the AAI/PK assessment overlooks key facts about the video-streaming industry, some of which suggest that, if anything, these markets currently suffer from too much fragmentation.

The problem is well-known: any individual video-streaming service will offer only a fraction of the content that viewers want, but budget constraints limit the number of services that a household can afford to subscribe to. It may be counterintuitive, but consolidation in the market for video-streaming can solve both problems at once.

One subscription is not enough

Surveys find that U.S. households currently maintain, on average, four video-streaming subscriptions. This explains why even critics concede that a plethora of streaming services compete for consumer eyeballs. For instance, the AAI and PK point out that:

Today, every major media company realizes the value of streaming and a bevy of services have sprung up to offer different catalogues of content.

These companies have challenged the market leader, Netflix and include: Prime Video (2006), Hulu (2007), Paramount+ (2014), ESPN+ (2018), Disney+ (2019), Apple TV+ (2019), HBO Max (2020), Peacock (2020), and Discovery+ (2021).

With content scattered across several platforms, multiple subscriptions are the only way for households to access all (or most) of the programs they desire. Indeed, other than price, library sizes and the availability of exclusive content are reportedly the main drivers of consumer purchase decisions.

Of course, there is nothing inherently wrong with the current equilibrium, in which consumers multi-home across multiple platforms. One potential explanation for this fragmentation is demand for high-quality exclusive content, which requires tremendous investment to develop and promote. Production costs for TV series routinely run in the tens of millions of dollars per episode (see here and here). Economic theory predicts that these relationship-specific investments, made by both producers and distributors, will cause producers to opt for exclusive distribution or vertical integration. The most sought-after content is thus exclusive to each platform. In other words, exclusivity is likely the price that users must pay to ensure that high-quality entertainment continues to be produced.

But while this paradigm has many strengths, the ensuing fragmentation can be detrimental to consumers, as this may lead to double marginalization or mundane issues like subscription fatigue. Consolidation can be a solution to both.

Substitutes, complements, or unrelated?

As Hal Varian explains in his seminal book, the relationship between two goods can fall anywhere along a spectrum bounded by three polar cases: perfect substitutes (i.e., two goods are perfectly interchangeable); perfect complements (i.e., there is no value to owning one good without the other); and goods in independent markets (i.e., the price of one good does not affect demand for the other).

These distinctions are critical when it comes to market concentration. All else equal—which is obviously not the case in reality—increased concentration leads to lower prices for complements and higher prices for substitutes. And if demand for two goods is unrelated, bringing them under common ownership should not affect their prices.
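A stylized calculation (all demand parameters below are assumptions chosen only to keep the arithmetic clean) makes the complements point concrete. Two monopolists each sell one of two perfectly complementary goods, and consumers buy them only as a pair. Pricing independently, each firm ignores the demand it destroys for its partner’s good, so the combined price is higher, and output lower, than what a single merged owner would choose; this is the classic Cournot-complements, or double-marginalization, result.

```python
# Illustrative sketch: two perfectly complementary goods, linear demand for the
# pair Q = A - B * (p1 + p2), and zero marginal cost. Parameter values are
# assumptions chosen only to make the arithmetic clean.

A, B = 12.0, 1.0

def bundle_demand(total_price: float) -> float:
    return max(A - B * total_price, 0.0)

def best_response(rival_price: float) -> float:
    # Each separate owner maximizes p * (A - B * (p + rival)), taking the
    # rival's price as given:  p = (A - B * rival) / (2 * B).
    return max((A - B * rival_price) / (2 * B), 0.0)

# Case 1: separate owners -- iterate best responses to the Nash equilibrium.
p1 = p2 = 0.0
for _ in range(100):
    p1 = best_response(p2)
    p2 = best_response(p1)
separate_total = p1 + p2

# Case 2: merged owner -- a single firm maximizes P * (A - B * P).
integrated_total = A / (2 * B)

print(f"Separate owners: total price = {separate_total:.2f}, "
      f"output = {bundle_demand(separate_total):.2f}")
print(f"Merged owner:    total price = {integrated_total:.2f}, "
      f"output = {bundle_demand(integrated_total):.2f}")
# Separate owners: total price = 8.00, output = 4.00
# Merged owner:    total price = 6.00, output = 6.00
# Bringing complements under common ownership lowers the total price and raises
# output -- the opposite of what a merger between close substitutes does.
```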

To at least some extent, streaming services should be seen as complements rather than substitutes—or, at the very least, as services with unrelated demand. If they were perfect substitutes, consumers would be indifferent between holding two Netflix subscriptions and holding one Netflix plan plus one Amazon Prime plan. That is obviously not the case. Nor are they perfect complements, which would mean that Netflix is worthless without Amazon Prime, Disney+, and other services.

However, there is reason to believe there exists some complementarity between streaming services, or at least that demand for them is independent. Most consumers subscribe to multiple services, and almost no one subscribes to the same service twice:

SOURCE: Finance Buzz

This assertion is also supported by the ubiquitous bundling of subscriptions in the cable-distribution industry, a practice that has more recently appeared in video-streaming markets. For example, in the United States, Disney+ can be purchased in a bundle with Hulu and ESPN+.

The key question is whether each service is more valuable, less valuable, or just as valuable on its own as it is when bundled with others. If households place some additional value on having a complete video offering (one that includes children’s entertainment, sports, more mature content, etc.), and if they value the convenience of accessing more of their content via a single app, then we can infer these services are to some extent complementary.

Finally, it is worth noting that any complementarity between these services would be largely endogenous. If the industry suddenly switched to a paradigm of non-exclusive content—as is broadly the case for audio streaming—the above analysis would be altered (though, as explained above, such a move would likely be detrimental to users). Streaming services would become substitutes if they offered identical catalogues.

In short, the extent to which streaming services are complements ultimately boils down to an empirical question that may fluctuate with industry practices. As things stand, there is reason to believe that these services feature some complementarities, or at least that demand for them is independent. In turn, this suggests that further consolidation within the industry would not lead to price increases and might even reduce prices.

Consolidation can enable price discrimination

It is well-established that bundling entertainment goods can enable firms to better engage in price discrimination, often increasing output and reducing deadweight loss in the process.

Take George Stigler’s famous explanation for the practice of “block booking,” in which movie studios sold multiple films to independent movie theaters as a unit. Stigler assumes the underlying goods are neither substitutes nor complements.

Stigler, George J. (1963), “United States v. Loew’s Inc.: A Note on Block-Booking,” Supreme Court Review, Vol. 1963, No. 1, Article 2.

The upshot is that, when consumer tastes for content are idiosyncratic—as is almost certainly the case for movies and television series—it can counterintuitively make sense to sell differing content as a bundle. In doing so, the distributor avoids pricing consumers out of the content upon which they place a lower value. Moreover, this solution is more efficient than price discriminating on an unbundled basis, as doing so would require far more information on the seller’s part and would be vulnerable to arbitrage.
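A simple numerical sketch, in the spirit of Stigler’s block-booking argument, illustrates the mechanism; the valuations below are hypothetical and are not Stigler’s own figures. Two theaters rank two films in opposite order, so uniform per-film pricing leaves surplus on the table, while a single bundle price extracts more revenue without excluding either buyer.

```python
# Hypothetical illustration in the spirit of the block-booking argument.
# The valuations are assumptions chosen to show the mechanism, not data.

# Each buyer's willingness to pay for each film (tastes are negatively correlated).
valuations = {
    "Theater 1": {"Film A": 8000, "Film B": 2500},
    "Theater 2": {"Film A": 7000, "Film B": 3000},
}

def revenue_selling_separately() -> int:
    """Best uniform per-film price: for each film, try each buyer's valuation
    as the price and keep whichever price yields more revenue."""
    total = 0
    for film in ("Film A", "Film B"):
        candidate_prices = [v[film] for v in valuations.values()]
        total += max(price * sum(1 for v in valuations.values() if v[film] >= price)
                     for price in candidate_prices)
    return total

def revenue_bundling() -> int:
    """Best uniform price for the two films sold only as a bundle."""
    bundle_values = [sum(v.values()) for v in valuations.values()]
    return max(price * sum(1 for bv in bundle_values if bv >= price)
               for price in bundle_values)

print("Selling the films separately:", revenue_selling_separately())  # 19000
print("Selling the films as a bundle:", revenue_bundling())           # 20000
# The bundle, priced at the lower of the two combined valuations, raises revenue
# because the buyers' opposite rankings cancel out. With more dispersed
# valuations, bundling also keeps low-value buyers in the market for content
# they would otherwise be priced out of.
```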

In short, bundling enables each consumer to access a much wider variety of content. This, in turn, provides a powerful rationale for mergers in the video-streaming space—particularly where they can bring together varied content libraries. Put differently, it cuts in favor of more, not less, concentration in video-streaming markets (at least, up to a certain point).

Finally, a wide array of scale-related economies further supports the case for concentration in video-streaming markets. These include potential economies of scale, network effects, and reduced transaction costs.

The simplest of these ideas is that the cost of video streaming may decrease at the margin (i.e., serving each marginal viewer might be cheaper than the previous one). In other words, mergers of video-streaming services may enable platforms to operate at a more efficient scale. There has notably been some discussion of whether Netflix benefits from scale economies of this sort. But this is, of course, ultimately an empirical question. As I have written with Geoffrey Manne, we should not assume that this is the case for all digital platforms, or that these increasing returns are present at all ranges of output.

Likewise, the fact that content can earn greater revenues by reaching a wider audience (or a greater number of small niches) may increase a producer’s incentive to create high-quality content. For example, Netflix’s recent hit series Squid Game reportedly cost $16.8 million for its nine-episode season, a significant sum for a Korean-language thriller. These expenditures were likely only possible because of Netflix’s vast network of viewers. Video-streaming mergers can jump-start these effects by bringing previously fragmented audiences onto a single platform.
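As a purely illustrative piece of arithmetic (the audience sizes below are hypothetical; only the production budget is the figure cited above), spreading a fixed content budget over a larger subscriber base shows why scale can make ambitious productions viable.

```python
# Illustrative arithmetic only: the production budget is the reported figure
# cited above; the audience sizes are hypothetical, not Netflix data.

PRODUCTION_BUDGET = 16_800_000  # reported cost of the nine-episode season

for audience in (1_000_000, 10_000_000, 100_000_000):
    cost_per_viewer = PRODUCTION_BUDGET / audience
    print(f"Audience of {audience:>11,}: content cost per viewer = ${cost_per_viewer:,.2f}")

# Audience of   1,000,000: content cost per viewer = $16.80
# Audience of  10,000,000: content cost per viewer = $1.68
# Audience of 100,000,000: content cost per viewer = $0.17
# The larger the audience a platform can reach, the smaller the per-viewer cost
# of any given production budget, and the easier it is to justify spending on
# high-quality content.
```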

Finally, operating at a larger scale may enable firms and consumers to economize on various transaction and search costs. For instance, consumers don’t need to manage several subscriptions, and searching for content is easier within a single ecosystem.

Conclusion

In short, critics could hardly be more wrong in assuming that consolidation in the video-streaming industry will necessarily harm consumers. To the contrary, these mergers should be presumptively welcomed because, to a first approximation, they are likely to engender lower prices and reduce deadweight loss.

Critics routinely draw parallels between video streaming and the wave of consolidation that previously swept through the cable industry. They cite that history as evidence that consolidation was (and still is) inefficient and exploitative of consumers. As AAI and PK frame it:

Moreover, given the broader competition challenges that reside in those markets, and the lessons learned from a failure to ensure competition in the traditional MVPD markets, enforcers should be particularly vigilant.

But while it might not have been ideal for all consumers, the comparatively laissez-faire approach to competition in the cable industry arguably facilitated the United States’ emergence as a global leader for TV programming. We are now witnessing what appears to be a similar trend in the online video-streaming market.

This is mostly a good thing. While a single streaming service might not be the optimal industry configuration from a welfare standpoint, it would be equally misguided to assume that fragmentation necessarily benefits consumers. In fact, as argued throughout this piece, there are important reasons to believe that the status quo—with at least 10 significant players—is too fragmented and that consumers would benefit from additional consolidation.

[This post adapts elements of “Should ASEAN Antitrust Laws Emulate European Competition Policy?”, published in the Singapore Economic Review (2021). Open access working paper here.]

U.S. and European competition laws diverge in numerous ways that have important real-world effects. Understanding these differences is vital, particularly as lawmakers in the United States, and the rest of the world, consider adopting a more “European” approach to competition.

In broad terms, the European approach is more centralized and political. The European Commission’s Directorate General for Competition (DG Comp) has significant de facto discretion over how the law is enforced. This contrasts with the common law approach of the United States, in which courts elaborate upon open-ended statutes through an iterative process of case law. In other words, the European system was built from the top down, while U.S. antitrust relies on a bottom-up approach, derived from arguments made by plaintiffs (including the government antitrust agencies) and defendants (usually businesses).

This procedural divergence has significant ramifications for substantive law. European competition law includes more provisions akin to de facto regulation. This is notably the case for the “abuse of dominance” standard, in which a “dominant” business can be prosecuted for “abusing” its position by charging high prices or refusing to deal with competitors. By contrast, the U.S. system places more emphasis on actual consumer outcomes, rather than the nature or “fairness” of an underlying practice.

The American system thus affords firms more leeway to exclude their rivals, so long as this entails superior benefits for consumers. This may make the U.S. system more hospitable to innovation, since there is no built-in regulation of conduct for innovators who acquire a successful market position fairly and through normal competition.

In this post, we discuss some key differences between the two systems—including in areas like predatory pricing and refusals to deal—as well as the discretionary power the European Commission enjoys under the European model.

Exploitative Abuses

U.S. antitrust is, by and large, unconcerned with companies charging what some might consider “excessive” prices. The late Associate Justice Antonin Scalia, writing for the Supreme Court majority in the 2004 case Verizon v. Trinko, observed that:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.

This contrasts with European competition-law cases, where firms may be found to have infringed competition law because they charged excessive prices. As the European Court of Justice (ECJ) held in 1978’s United Brands case: “In this case charging a price which is excessive because it has no reasonable relation to the economic value of the product supplied would be such an abuse.”

United Brands remains the EU’s foundational case on excessive pricing, and the European Commission reiterated that such allegedly exploitative abuses were actionable when it published its guidance paper on abuse-of-dominance cases in 2009. For some time, however, the commission demonstrated little apparent interest in bringing such cases. In recent years, both the European Commission and some national authorities have shown renewed interest in excessive-pricing cases, most notably in the pharmaceutical sector.

European competition law also penalizes so-called “margin squeeze” abuses, in which a dominant upstream supplier charges a price to distributors that is too high for them to compete effectively with that same dominant firm downstream:

[I]t is for the referring court to examine, in essence, whether the pricing practice introduced by TeliaSonera is unfair in so far as it squeezes the margins of its competitors on the retail market for broadband connection services to end users. (Konkurrensverket v TeliaSonera Sverige, 2011)

As Scalia observed in Trinko, forcing firms to charge prices below a market’s natural equilibrium undermines their incentives to enter markets, notably with innovative products and more efficient means of production. But the problem is not just one of market entry and innovation. Also relevant is the degree to which competition authorities are competent to determine the “right” prices or margins.

As Friedrich Hayek demonstrated in his influential 1945 essay The Use of Knowledge in Society, economic agents use information gleaned from prices to guide their business decisions. It is this distributed activity of thousands or millions of economic actors that enables markets to put resources to their most valuable uses, thereby leading to more efficient societies. By comparison, the efforts of central regulators to set prices and margins are necessarily inferior; there is simply no reasonable way for competition regulators to make such judgments in a consistent and reliable manner.

Given the substantial risk that investigations into purportedly excessive prices will deter market entry, such investigations should be circumscribed. But the EU courts’ precedents, with their myopic focus on ex post prices, do not impose such constraints on the commission. The temptation to “correct” high prices—especially in the politically contentious pharmaceutical industry—may thus induce economically unjustified and ultimately deleterious intervention.

Predatory Pricing

A second important area of divergence concerns predatory-pricing cases. U.S. antitrust law subjects allegations of predatory pricing to two strict conditions:

  1. Monopolists must charge prices that are below some measure of their incremental costs; and
  2. There must be a realistic prospect that they will be able to recoup these initial losses.

In laying out its approach to predatory pricing, the U.S. Supreme Court has identified the risk of false positives and the clear cost of such errors to consumers. It thus has particularly stressed the importance of the recoupment requirement. As the court found in 1993’s Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”

Accordingly, U.S. authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme, or that the scheme itself would effectively foreclose rivals from entering the market in the first place. Otherwise, the predator would be undercut by competitors as soon as it attempts to recoup its losses by charging supra-competitive prices.

Without the strong likelihood that a monopolist will be able to recoup lost revenue from underpricing, the overwhelming weight of economic evidence (to say nothing of simple logic) is that predatory pricing is not a rational business strategy. Thus, apparent cases of predatory pricing are most likely not, in fact, predatory; deterring or punishing them would actually harm consumers.
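A back-of-the-envelope sketch helps illustrate why recoupment is the linchpin of this analysis. Every figure below is an assumption chosen purely for illustration: predation means sacrificing profits now in the hope of monopoly profits later, and if easy re-entry cuts the recoupment phase short, the strategy’s expected present value is negative and a rational firm would not pursue it.

```python
# Back-of-the-envelope sketch of the recoupment logic. Every number below is an
# illustrative assumption, not an estimate for any real firm or case.

def npv_of_predation(losses_per_period: float,
                     predation_periods: int,
                     monopoly_profit_per_period: float,
                     recoupment_periods: int,
                     discount_rate: float) -> float:
    """Present value of a predation campaign: certain losses now,
    hoped-for monopoly profits later."""
    npv, t = 0.0, 0
    for _ in range(predation_periods):
        npv -= losses_per_period / (1 + discount_rate) ** t
        t += 1
    for _ in range(recoupment_periods):
        npv += monopoly_profit_per_period / (1 + discount_rate) ** t
        t += 1
    return npv

# If re-entry is easy, supra-competitive prices attract rivals almost at once,
# so the recoupment phase is short.
easy_entry = npv_of_predation(losses_per_period=100, predation_periods=3,
                              monopoly_profit_per_period=60,
                              recoupment_periods=1, discount_rate=0.10)

# Only if something durably blocks re-entry can the firm harvest monopoly
# profits long enough to cover the losses it incurred.
blocked_entry = npv_of_predation(losses_per_period=100, predation_periods=3,
                                 monopoly_profit_per_period=60,
                                 recoupment_periods=10, discount_rate=0.10)

print(f"NPV with easy re-entry:    {easy_entry:8.1f}")    # negative: predation is irrational
print(f"NPV with blocked re-entry: {blocked_entry:8.1f}") # positive: the scenario antitrust worries about
```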

By contrast, the EU employs a more expansive legal standard to define predatory pricing, and almost certainly risks injuring consumers as a result. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory. Even when a firm charges prices that are between its average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of a plan to eliminate a competitor. Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.

[I]t does not follow from the case‑law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive. (France Télécom v Commission, 2009).

This aspect of the legal standard has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant Chicago School understanding of predatory pricing. Indeed, strategic predatory pricing still requires some form of recoupment, and the refutation of any convincing business justification offered in response. For example, in a 2017 piece for the Antitrust Law Journal, Steven Salop lays out the “raising rivals’ costs” analysis of predation and notes that recoupment still occurs, just at the same time as predation:

[T]he anticompetitive conditional pricing practice does not involve discrete predatory and recoupment periods, as in the case of classical predatory pricing. Instead, the recoupment occurs simultaneously with the conduct. This is because the monopolist is able to maintain its current monopoly power through the exclusionary conduct.

The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in American antitrust law serves to differentiate aggressive pricing behavior that improves consumer welfare—because it leads to overall price decreases—from predatory pricing that reduces welfare with higher prices. It is, in other words, entirely focused on the welfare of consumers.

The European approach, by contrast, reflects structuralist considerations far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could use aggressive pricing to engender more concentrated markets. It is simply presumed that these more concentrated markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the ECJ’s reasoning on this point:

[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors. (Tetra Pak v Commission, 1996).

Similarly:

[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them.  (France Télécom v Commission, 2009).

In short, the European approach leaves less room to analyze the concrete effects of a given pricing scheme, leaving it more prone to false positives than the U.S. standard explicated in the Brooke Group decision. Worse still, the European approach ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory pricing standards may exert on firms that would otherwise seek to use aggressive pricing schemes to attract consumers.

Refusals to Deal

U.S. and EU antitrust law also differ greatly when it comes to refusals to deal. While the United States has limited the ability of either enforcement authorities or rivals to bring such cases, EU competition law sets a far lower threshold for liability.

As Justice Scalia wrote in Trinko:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. (Verizon v Trinko, 2004.)

This highlights two key features of American antitrust law with regard to refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine. Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market. Moreover, as Justice Scalia observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.

While even this is not likely the economically appropriate limitation on liability, its impetus—ensuring that liability is found only in situations where procompetitive explanations for the challenged conduct are unlikely—is completely appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.

As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.

In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must not be objective reasons that justify the refusal. Moreover, if the refusal to deal involves intellectual property, it must also prevent the appearance of a new good.

In practice, however, all of these conditions have been relaxed significantly by EU courts and the commission’s decisional practice. This is best evidenced by the Court of First Instance’s Microsoft ruling, in which, as John Vickers notes:

[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.

EU competition law thus shows far less concern for its potential chilling effect on firms’ investments than does U.S. antitrust law.

Vertical Restraints

There are vast differences between U.S. and EU competition law relating to vertical restraints—that is, contractual restraints between firms that operate at different levels of the production process.

On the one hand, since the Supreme Court’s Leegin ruling in 2007, even price-related vertical restraints (such as resale price maintenance (RPM), under which a manufacturer can stipulate the prices at which retailers must sell its products) are assessed under the rule of reason in the United States. Some commentators have gone so far as to say that, in practice, U.S. case law on RPM almost amounts to per se legality.

Conversely, EU competition law treats RPM as severely as it treats cartels. Both RPM and cartels are considered to be restrictions of competition “by object”—the EU’s equivalent of a per se prohibition. This severe treatment also applies to non-price vertical restraints that tend to partition the European internal market.

Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist, and economically grounded, principle that inter-brand competition is the appropriate framework to assess vertical restraints:

Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former. (Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, 1966).

This treatment of vertical restrictions flies in the face of longstanding mainstream economic analysis of the subject. As Patrick Rey and Jean Tirole conclude:

Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.

Unlike the EU, the U.S. Supreme Court in Leegin took account of the weight of the economic literature, and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits, writing: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.” Further, the court found that the prior approach to resale price maintenance restraints “hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”

The EU’s continued per se treatment of RPM, by contrast, strongly reflects its “precautionary principle” approach to antitrust. European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, exceedingly unlikely. The U.S. approach, which rests on likelihood rather than mere possibility, is far less likely to condemn beneficial conduct erroneously.

Political Discretion in European Competition Law

EU competition law lacks a coherent analytical framework of the sort that U.S. law derives from the consumer welfare standard. The EU process is driven by a number of laterally equivalent—and sometimes mutually exclusive—goals, including industrial policy and the perceived need to counteract foreign state ownership and subsidies. Such a wide array of conflicting aims produces a lack of clarity for firms seeking to conduct business. Moreover, the discretion that attends this fluid arrangement of goals yields an even larger problem.

The Microsoft case illustrates this problem well. In Microsoft, the commission could have chosen to base its decision on various potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.”

The commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains, because it determined “consumer choice” among media players was more important:

Another argument relating to reduced transaction costs consists in saying that the economies made by a tied sale of two products saves resources otherwise spent for maintaining a separate distribution system for the second product. These economies would then be passed on to customers who could save costs related to a second purchasing act, including selection and installation of the product. Irrespective of the accuracy of the assumption that distributive efficiency gains are necessarily passed on to consumers, such savings cannot possibly outweigh the distortion of competition in this case. This is because distribution costs in software licensing are insignificant; a copy of a software programme can be duplicated and distributed at no substantial effort. In contrast, the importance of consumer choice and innovation regarding applications such as media players is high. (Commission Decision No. COMP. 37792 (Microsoft)).

It may be true that tying the products in question was unnecessary. But merely dismissing Microsoft’s tying decision on the ground that distribution costs are near zero is hardly an analytically satisfactory response. There are many more costs involved in creating and distributing complementary software than those associated with hosting and downloading. The commission also simply asserts that consumer choice among some arbitrary number of competing products is necessarily a benefit. This, too, is not necessarily true, and the decision’s implication that any marginal increase in choice is more valuable than any gains from product design or innovation is analytically incoherent.

The Court of First Instance was only too happy to give the commission a pass on this breezy analysis; it saw no objection to the commission’s findings and, offering little substantive reasoning of its own, fully endorsed the commission’s assessment:

As the Commission correctly observes (see paragraph 1130 above), by such an argument Microsoft is in fact claiming that the integration of Windows Media Player in Windows and the marketing of Windows in that form alone lead to the de facto standardisation of the Windows Media Player platform, which has beneficial effects on the market. Although, generally, standardisation may effectively present certain advantages, it cannot be allowed to be imposed unilaterally by an undertaking in a dominant position by means of tying.

The Court further notes that it cannot be ruled out that third parties will not want the de facto standardisation advocated by Microsoft but will prefer it if different platforms continue to compete, on the ground that that will stimulate innovation between the various platforms. (Microsoft Corp. v Commission, 2007)

Pointing to these conflicting effects of Microsoft’s bundling decision, without weighing either, is a weak basis to uphold the commission’s decision that consumer choice outweighs the benefits of standardization. Moreover, actions undertaken by other firms to enhance consumer choice at the expense of standardization are, on these terms, potentially just as problematic. The dividing line becomes solely which theory the commission prefers to pursue.

What such a practice does is vest the commission with immense discretionary power. Any given case sets up a “heads, I win; tails, you lose” situation in which defendants are easily outflanked by a commission that can change the rules of its analysis as it sees fit. Defendants can play only the cards that they are dealt. Accordingly, Microsoft could not successfully challenge a conclusion that its behavior harmed consumers’ choice by arguing that it improved consumer welfare, on net.

By selecting, in this instance, “consumer choice” as the standard to be judged, the commission was able to evade the constraints that might have been imposed by a more robust welfare standard. Thus, the commission can essentially pick and choose the objectives that best serve its interests in each case. This vastly enlarges the scope of potential antitrust liability, while also substantially decreasing the ability of firms to predict when their behavior may be viewed as problematic. It leads to what, in U.S. courts, would be regarded as an untenable risk of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms.