Archives For Richard Posner

The wave of populist antitrust that has been embraced by regulators and legislators in the United States, United Kingdom, European Union, and other jurisdictions rests on the assumption that currently dominant platforms occupy entrenched positions that only government intervention can dislodge. Following this view, Facebook will forever dominate social networking, Amazon will forever dominate cloud computing, Uber and Lyft will forever dominate ridesharing, and Amazon and Netflix will forever dominate streaming. This assumption of platform invincibility is so well-established that some policymakers advocate significant interventions without making any meaningful inquiry into whether a seemingly dominant platform actually exercises market power.

Yet this assumption is not supported by historical patterns in platform markets. It is true that network effects drive platform markets toward “winner-take-most” outcomes. But the winner is often toppled quickly and without much warning. There is no shortage of examples.

In 2007, a columnist in The Guardian observed that “it may already be too late for competitors to dislodge MySpace” and quoted an economist as authority for the proposition that “MySpace is well on the way to becoming … a natural monopoly.” About one year later, Facebook had overtaken the MySpace “monopoly” in the social-networking market. Similarly, it was once thought that Blackberry would forever dominate the mobile-communications device market, eBay would always dominate the e-commerce market, and AOL would always dominate the internet-service-portal market (a market that no longer even exists). The list of digital dinosaurs could go on.

All those tech leaders were challenged by entrants and descended into irrelevance (or reduced relevance, in eBay’s case). This occurred through the force of competition, not government intervention.

Why This Time Is Probably Not Different

Given this long line of market precedents, current legislative and regulatory efforts to “restore” competition through extensive intervention in digital-platform markets require that we assume that “this time is different.” Just as that slogan has been repeatedly rebutted in the financial markets, so too is it likely to be rebutted in platform markets. 

There is already supporting evidence. 

In the cloud market, Amazon’s AWS now faces vigorous competition from Microsoft Azure and Google Cloud. In the streaming market, Amazon and Netflix face stiff competition from Disney+ and Apple TV+, just to name a few well-resourced rivals. In the social-networking market, Facebook now competes head-to-head with TikTok and seems to be losing. The market power once commonly attributed to leading food-delivery platforms such as Grubhub, UberEats, and DoorDash is implausible after persistent losses in most cases, and the continuous entry of new services into a rich variety of local and product-market niches.

Those who have advocated antitrust intervention on a fast-track schedule may remain unconvinced by these inconvenient facts. But the market is not. 

Investors have already recognized Netflix’s vulnerability to competition, as reflected by a 35% fall in its stock price on April 20 and a decline of more than 60% over the past 12 months. Meta, Facebook’s parent, also experienced a reappraisal, falling more than 26% on Feb. 3 and more than 35% in the past 12 months. Uber, the pioneer of the ridesharing market, has declined by almost 50% over the past 12 months, while Lyft, its principal rival, has lost more than 60% of its value. These price freefalls suggest that antitrust populists may be pursuing solutions to a problem that market forces are already starting to address.

The Forgotten Curse of the Incumbent

For some commentators, the sharp downturn in the fortunes of the so-called “Big Tech” firms would not come as a surprise.

It has long been observed by some scholars and courts that a dominant firm “carries the seeds of its own destruction”—a phrase used by then-professor and later-Judge Richard Posner, writing in the University of Chicago Law Review in 1971. The reason: a dominant firm is liable to exhibit high prices, mediocre quality, or lackluster innovation, which then invites entry by more adept challengers. However, this view has been dismissed as outdated in digital-platform markets, where incumbents are purportedly protected by network effects and switching costs that make it difficult for entrants to attract users. Depending on the assumptions an economic modeler selects, either outcome (self-correction or entrenchment) is plausible in theory.

The plunging values of leading platforms supply real-world evidence that favors the self-correction hypothesis. It is often overlooked that network effects can work in both directions, resulting in a precipitous fall from market leader to laggard. Once users start abandoning a dominant platform for a new competitor, network effects operating in reverse can cause a “run for the exits” that leaves the leader with little time to recover. Just ask Nokia, the world’s leading (and seemingly unbeatable) smartphone brand until the Apple iPhone came along.

Why Market Self-Correction Outperforms Regulatory Correction

Market self-correction inherently outperforms regulatory correction: it operates far more rapidly and relies on consumer preferences to reallocate market leadership—a result perfectly consistent with antitrust’s mission to preserve “competition on the merits.” In contrast, policymakers can misdiagnose the competitive effects of business practices; are susceptible to the influence of private interests (especially those that are unable to compete on the merits); and often mispredict the market’s future trajectory. For Exhibit A, see the protracted antitrust litigation by the U.S. Department of Justice against IBM, which was filed in 1969, went to trial in 1975, and ended with the withdrawal of the suit in 1982. Given the launch of the Apple II in 1977, the IBM PC in 1981, and the entry of multiple “PC clones,” the forces of creative destruction swiftly displaced IBM from market leadership in the computing industry.

Regulators and legislators around the world have emphasized the urgency of taking dramatic action to correct claimed market failures in digital environments, casting aside prudential concerns over the consequences if any such failure proves to be illusory or temporary. 

But the costs of regulatory failure can be significant and long-lasting. Markets must operate under unnecessary compliance burdens that are difficult to modify. Regulators’ enforcement resources are diverted, and businesses are barred from adopting practices that would benefit consumers. In particular, proposed breakup remedies advocated by some policymakers would undermine the scale economies that have enabled platforms to push down prices, an important consideration in a time of accelerating inflation.

Conclusion

The high concentration levels and certain business practices in digital-platform markets raise important concerns as a matter of antitrust (as well as privacy, intellectual property, and other bodies of) law. These concerns merit scrutiny and may necessitate appropriately targeted interventions. Yet, any policy steps should be anchored in the factually grounded analysis that has characterized decades of regulatory and judicial action to implement the antitrust laws with appropriate care. Abandoning this nuanced framework for a blunt approach based on reflexive assumptions of market power is likely to undermine, rather than promote, the public interest in competitive markets.

[The ideas in this post from Truth on the Market regular Jonathan M. Barnett of USC Gould School of Law—the eighth entry in our FTC UMC Rulemaking symposium—are developed in greater detail in “Regulatory Rents: An Agency-Cost Analysis of the FTC Rulemaking Initiative,” a chapter in the forthcoming book FTC’s Rulemaking Authority, which will be published by Concurrences later this year. This is the first of two posts we are publishing today; see also this related post from Aaron Nielsen of BYU Law. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

In December 2021, the Federal Trade Commission (FTC) released its statement of regulatory priorities for 2022, which describes its intention to expand the agency’s rulemaking activities to target “unfair methods of competition” (UMC) under Section 5 of the Federal Trade Commission Act (FTC Act), in addition to (and in some cases, presumably in place of) the conventional mechanism of case-by-case adjudication. Agency leadership (meaning, the FTC chair and the majority commissioners) largely characterizes the rulemaking initiative as a logistical improvement to enable the agency to more efficiently execute its statutory commitment to preserve competitive markets. Unburdened by the costs and delays inherent to the adjudicative process (which, in the antitrust context, typically requires evidence of actual or likely competitive harm), the agency will be able to take expedited action against UMCs based on rules preemptively set forth by the agency. 

This shift from enforcement by adjudication to enforcement by rulemaking is far from a mechanical adjustment. Rather, it is best understood as part of an initiative to make fundamental changes to the substance and methodology of antitrust enforcement.  Substantively, the initiative appears to be part of a broader effort to alter the goals of antitrust enforcement so that it promotes what are deemed to be “equitable” market outcomes, rather than preserving the competitive process through which outcomes are determined by market forces. Methodologically, the initiative appears to be part of a broader effort to displace rule-of-reason treatment with the practical equivalent of per se prohibitions in a wide range of putatively “unfair” practices. Both steps would be inconsistent with the agency’s statutory mission to safeguard the competitive process or a meaningful commitment to a market-driven economy and the rule of law.

Abandoning Competitive Markets

Little steps sometimes portend bigger changes. 

In July 2021, FTC leadership removed the following words from the mission description of the agency’s Bureau of Competition: “The Bureau’s work aims to preserve the free market system and assure the unfettered operation of the forces of supply and demand.” This omitted statement had tracked what remains the standard characterization by federal courts and agency guidelines of the core objective of the antitrust laws. Following this characterization, the antitrust laws seek to preserve the “rules of the game” for market competition, while remaining indifferent to the outcomes of such competition in any particular market. It is the competitive process, not the fortunes of particular competitors, that matters.

Other statements by FTC leadership suggest that they seek to abandon this outcome-agnostic perspective. A memo from the FTC chair to staff, distributed in September 2021, states that the agency’s actions “shape the distribution of power and opportunity” and encourages staff “to take a holistic approach to identifying harms, recognizing that antitrust and consumer protection violations harm workers and independent businesses as well as consumers.” In a draft strategic plan distributed by FTC leadership in October 2021, the agency described its mission as promoting “fair competition” for the “benefit of the public.”  In contrast, the agency’s previously released strategic plan had described the agency’s mission as promoting “competition” for the benefit of consumers, consistent with the case law’s commitment to protecting consumer welfare, dating at least to the Supreme Court’s 1979 decision in Reiter v. Sonotone Corp. et al. The change in language suggests that the agency’s objectives encompass a broad range of stakeholders and policies (including distributive objectives) that extends beyond, and could conflict with, its commitment to preserve the integrity of the competitive process.

These little steps are part of a broader package of “big steps” undertaken during 2021 by FTC leadership. 

In July 2021, the agency abandoned decades of federal case law and agency guidelines by rejecting the consumer-welfare standard for purposes of enforcement of Section 5 of the FTC Act against UMCs. Relatedly, FTC leadership asserted in the same statement that Congress had delegated to the agency authority under Section 5 “to determine which practices fell into the category of ‘unfair methods of competition’”. Remarkably, the agency’s claimed ambit of prosecutorial discretion to identify “unfair” practices is apparently only limited by a commitment to exercise such power “responsibly.”

This largely unbounded redefinition of the scope of Section 5 divorces the FTC’s enforcement authority from the concepts and methods as embodied in decades of federal case law and agency guidelines interpreting the Sherman and Clayton Acts. Those concepts and methods are in turn anchored in the consumer-welfare principle, which ensures that regulatory and judicial actions promote the public interest in the competitive process, rather than the private interests of any particular competitor or other policy goals not contemplated by the antitrust laws. Effectively, agency leadership has unilaterally converted Section 5 into an empty vessel into which enforcers may insert a fluid range of business practices that are deemed by fiat to pose a risk to “fair” competition. 

Abandoning the Rule of Reason

In the same statement in which FTC leadership rejected the consumer-welfare principle for purposes of Section 5 enforcement, it rejected the relevance of the rule of reason for these same purposes. In that statement, agency leadership castigated the rule of reason as a standard that “leads to soaring enforcement costs” and asserted that it is incompatible with Section 5 of the FTC Act. In March 2021 remarks delivered to the House Judiciary Committee’s Antitrust Subcommittee, Commissioner Rebecca Kelly Slaughter similarly lamented “[t]he effect of cramped case law,” specifically viewing as problematic the fact that “[u]nder current Section 5 jurisprudence, courts have to consider conduct under the ‘rule of reason,’ a fact-intensive investigation into whether the anticompetitive effects of the conduct outweigh the procompetitive justifications.” Hence, it appears that the FTC, in exercising its purported rulemaking powers against UMCs under Section 5, does not intend to undertake the balancing of competitive harms and gains that is the signature element of rule-of-reason analysis. Tellingly, the agency’s draft strategic plan, released in October 2021, omits language that it would execute its enforcement mission “without unduly burdening legitimate business activity” (language that had appeared in the previously released strategic plan)—again, suggesting that it plans to take little account of the offsetting competitive gains attributable to a particular business practice.

This change in methodology has two profound and concerning implications. 

First, it means that any “unfair” practice targeted by the agency under Section 5 is effectively subject to a per se prohibition—that is, the agency can prevail merely by identifying that the defendant engaged in a particular practice, rather than having to show competitive harm. Note that this would represent a significant step beyond the per se rule that Sherman Act case law applies to certain cases of horizontal collusion. In those cases, a per se rule has been adopted because economic analysis indicates that these types of practices in general pose such a high risk of net anticompetitive harm that a rule-of-reason inquiry is likely to fail a cost-benefit test almost all of the time. By contrast, there is no indication that FTC leadership plans to confine its rulemaking activities to practices that systematically pose an especially high risk of anticompetitive harm, in part because it is not clear that agency leadership still views harm to the competitive process as being the determinative criterion in antitrust analysis.  

Second, without further clarification from agency leadership, this means that the agency appears to place substantially reduced weight on the possibility of “false positive” error costs. This would be a dramatic departure from the conventional approach to error costs as reflected in federal antitrust case law. Antitrust scholars have long argued, and many courts have adopted the view, that “false positive” costs should be weighted more heavily relative to “false negative” error costs, principally on the ground that, as Judge Richard Posner once put it, “a cartel . . . carries within it the seeds of its own destruction.” To be clear, this weighted approach should still meaningfully assess the false-negative error costs that arise from mistaken failures to intervene. By contrast, the agency’s blanket rejection of the rule of reason in all circumstances for Section 5 purposes raises doubt as to whether it would assign any material weight to false-positive error costs in exercising its purported rulemaking power under Section 5 against UMCs. Consistent with this possibility, the agency’s July 2021 statement—which rejected the rule of reason specifically—adopted the view that Section 5 enforcement should target business practices in their “incipiency,” even absent evidence of a “likely” anticompetitive effect.

While there may be reasonable arguments in favor of an equal weighting of false-positive and false-negative error costs (on the grounds that markets are sometimes slow to correct anticompetitive conduct, as compared to the speed with which courts correct false-positive interventions), it is hard to fathom a reasonable policy argument in favor of placing no material weight on the former cost category. Under conditions of uncertainty, the net economic effect of any particular enforcement action, or failure to take such action, gives rise to a mix of probability-adjusted false-positive and false-negative error costs. Hence, any sound policy framework seeks to minimize the sum of those costs. Moreover, the wholesale rejection of a balancing analysis overlooks extensive scholarship identifying cases in which federal courts, especially during the period prior to the Supreme Court’s landmark 1977 decision in Continental TV Inc. v. GTE Sylvania Inc., applied per se rules that erroneously targeted business practices that were almost certainly generating net-positive competitive gains. Any such mistaken intervention counterproductively penalizes the efforts and ingenuity of the most efficient firms, which then harms consumers, who are compelled to suffer higher prices, lower quality, or fewer innovations than would otherwise have been the case.
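
To make that arithmetic concrete, here is a minimal sketch in Python (all probabilities and cost figures are hypothetical, chosen purely for illustration) of the expected-error-cost comparison that a balancing framework implicitly performs: a sound rule minimizes the probability-weighted sum of both error categories rather than ignoring either one.

```python
# Minimal sketch of the error-cost framework described above.
# All probabilities and cost figures are hypothetical illustrations,
# not estimates drawn from any actual enforcement matter.

def expected_error_cost(p_false_positive, cost_false_positive,
                        p_false_negative, cost_false_negative):
    """Probability-weighted sum of both error-cost categories."""
    return (p_false_positive * cost_false_positive
            + p_false_negative * cost_false_negative)

# A rule that intervenes aggressively: more false positives, fewer false negatives.
aggressive = expected_error_cost(0.30, 100.0, 0.05, 80.0)

# A rule that intervenes cautiously: fewer false positives, more false negatives.
cautious = expected_error_cost(0.05, 100.0, 0.20, 80.0)

# A rule that assigns no weight at all to false positives compares only the
# second term, which is why it cannot minimize the sum of the two error costs.
print(f"aggressive rule expected error cost: {aggressive:.1f}")  # 34.0
print(f"cautious rule expected error cost:   {cautious:.1f}")    # 21.0
```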

The dismissal of efficiency considerations and false-positive error costs is difficult to reconcile with an economically informed approach that seeks to take enforcement actions only where there is a high likelihood of improving economic welfare based on available evidence. On this point, it is worth quoting Oliver Williamson’s well-known critique of 1960s-era antitrust: “[I]f neither the courts nor the enforcement agencies are sensitive to these [efficiency] considerations, the system fails to meet a basic test of economic rationality. And without this the whole enforcement system lacks defensible standards and becomes suspect.”

Abandoning the Rule of Law

In a liberal democratic system of government, the market relies on the state’s commitment to set forth governing laws with adequate notice and specificity, and then to enforce those laws in a manner that is reasonably amenable to judicial challenge in case of prosecutorial error or malfeasance. Without that commitment, investors are exposed to arbitrary enforcement and would be reluctant to place capital at stake. In light of the agency’s concurrent rejection of the consumer-welfare and rule-of-reason principles, any future attempt by the FTC to exercise its purported Section 5 rulemaking powers against UMCs under what currently appears to be a regime of largely unbounded regulatory discretion is likely to violate these elementary conditions for a rule-of-law jurisdiction. 

Having dismissed decades of learning and precedent embodied in federal case law and agency guidelines, FTC leadership has declined to adopt any substitute guidelines to govern its actions under Section 5 and, instead, has stated (in its July 2021 statement rejecting the consumer-welfare principle) that there are few bounds on its authority to specify and target practices that it deems to be “unfair.” This blunt approach contrasts sharply with the measured approach reflected in existing agency guidelines and federal case law, which seek to delineate reasonably objective standards to govern enforcers’ and courts’ decision making when evaluating the competitive merits of a particular business practice.  

This approach can be observed, even if imperfectly, in the application of the Herfindahl-Hirschman Index (HHI) metric in the merger-review process and the use of “safety zones” (defined principally by reference to market-share thresholds) in the agencies’ Antitrust Guidelines for the Licensing of Intellectual Property, Horizontal Merger Guidelines, and Antitrust Guidelines for Collaborations Among Competitors. This nuanced and evidence-based approach can also be observed in a decision such as California Dental Association v. FTC (1999), which provides a framework for calibrating the intensity of a rule-of-reason inquiry based on a preliminary assessment of the likely net competitive effect of a particular practice. In making these efforts to develop reasonably objective thresholds for triggering closer scrutiny, regulators and courts have sought to reconcile the open-ended language of the offenses described in the antitrust statutes—“restraint of trade” (Sherman Act Section 1) or “monopolization” (Sherman Act Section 2)—with a meaningful commitment to providing the market with adequate notice of the inherently fuzzy boundary between competitive and anti-competitive practices in most cases (and especially in cases involving single-firm conduct that is most likely to be targeted by the agency under its Section 5 authority).
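
As a rough illustration of what such a “reasonably objective” threshold looks like in practice, the HHI screen used in merger review is simply the sum of squared market shares. The short sketch below (with entirely hypothetical shares) computes the index and the change a merger would produce, against the concentration bands described in the 2010 Horizontal Merger Guidelines.

```python
# Minimal sketch of the HHI screen used in merger review.
# The market shares below are hypothetical; the concentration bands follow
# the 2010 Horizontal Merger Guidelines (unconcentrated below 1,500;
# moderately concentrated 1,500-2,500; highly concentrated above 2,500).

def hhi(shares_in_percent):
    """Herfindahl-Hirschman Index: sum of squared market shares (in points)."""
    return sum(s ** 2 for s in shares_in_percent)

pre_merger_shares = [30, 25, 20, 15, 10]      # hypothetical five-firm market
post_merger_shares = [30 + 25, 20, 15, 10]    # the two largest firms combine

pre, post = hhi(pre_merger_shares), hhi(post_merger_shares)
print(f"pre-merger HHI:  {pre}")              # 2250 (moderately concentrated)
print(f"post-merger HHI: {post}")             # 3750 (highly concentrated)
print(f"delta:           {post - pre}")       # 1500
```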

It does not appear that agency leadership intends to adopt this calibrated approach in implementing its rulemaking initiative, in light of its largely unbounded understanding of its Section 5 enforcement authority and wholesale rejection of the rule-of-reason methodology. If Section 5 is understood to encompass a broad and fluid set of social goals, including distributive objectives that can conflict with a commitment to the competitive process, then there is no analytical reference point by which markets can reliably assess the likelihood of antitrust liability and plan transactions accordingly. If enforcement under Section 5, including exercise of any purported rulemaking powers, does not require the agency to consider offsetting efficiencies attributable to any particular practice, then a chilling effect on everyday business activity and, more broadly, economic growth can easily ensue. In particular, firms may abstain from practices that may have mostly or even entirely procompetitive effects simply because there is some material likelihood that any such practice will be subject to investigation and enforcement under the agency’s understanding of its Section 5 authority and its adoption of a per se approach for which even strong evidence of predominantly procompetitive effects would be moot.

From Free Markets to Administered Markets

The FTC’s proposed rulemaking initiative, when placed within the context of other fundamental changes in substance and methodology adopted by agency leadership, is not easily reconciled with a market-driven economy in which resources are principally directed by the competitive forces of supply and demand. FTC leadership has reserved for the agency discretion to deem a business practice as “unfair,” while defining fairness by reference to an agglomeration of loosely described policy goals that include—but go beyond, and in some cases may conflict with—the agency’s commitment to preserve market competition. Concurrently, FTC leadership has rejected the rule-of-reason balancing approach and, by implication, may place no material weight on (or even fail to consider entirely) the efficiencies attributable to a particular business practice. 

In the aggregate, any rulemaking activity undertaken within this unstructured framework would make it challenging for firms and investors to assess whether any particular action is likely to trigger agency scrutiny. Faced with this predicament, firms could only substantially reduce exposure to antitrust liability by seeking various forms of preclearance with FTC staff, who would in turn be led to issue supplemental guidance, rules, and regulations to handle the high volume of firm inquiries. Contrary to the advertised advantages of enforcement by rulemaking, this unavoidable cycle of rule interpretation and adjustment would likely substantially increase aggregate transaction and compliance costs as compared to enforcement by adjudication. While enforcement by adjudication occurs only periodically and impacts a limited number of firms, enforcement by rulemaking is a continuous activity that impacts all firms. The ultimate result: the free play of the forces of supply and demand would be replaced by a continuously regulated environment where market outcomes are constantly being reviewed through the administrative process, rather than being worked out through the competitive process.

This is a state of affairs substantially removed from the “free market system” to which the FTC’s Bureau of Competition had once been committed. Of course, that may be exactly what current agency leadership has in mind.

U.S. antitrust law is designed to protect competition, not individual competitors. That simple observation lies at the heart of the Consumer Welfare Standard that for years has been the cornerstone of American antitrust policy. An alternative enforcement policy focused on protecting individual firms would discourage highly efficient and innovative conduct by a successful entity, because such conduct, after all, would threaten to weaken or displace less efficient rivals. The result would be markets characterized by lower overall levels of business efficiency and slower innovation, yielding less consumer surplus and, thus, reduced consumer welfare, as compared to the current U.S. antitrust system.

The U.S. Supreme Court gets it. In Reiter v. Sonotone (1979), the court stated plainly that “Congress designed the Sherman Act as a ‘consumer welfare prescription.’” Consistent with that understanding, the court subsequently stressed in Spectrum Sports v. McQuillan (1993) that “[t]he purpose of the [Sherman] Act is not to protect businesses from the working of the market, it is to protect the public from the failure of the market.” This means that a market leader does not have an antitrust duty to assist its struggling rivals, even if it is flouting a regulatory duty to deal. As a unanimous Supreme Court held in Verizon v. Trinko (2004): “Verizon’s alleged insufficient assistance in the provision of service to rivals [in defiance of an FCC-imposed regulatory obligation] is not a recognized antitrust claim under this Court’s existing refusal-to-deal precedents.”

Unfortunately, the New York State Senate seems to have lost sight of the importance of promoting vigorous competition and consumer welfare, not competitor welfare, as the hallmark of American antitrust jurisprudence. The chamber on June 7 passed the ill-named 21st Century Antitrust Act (TCAA), legislation that, if enacted and signed into law, would seriously undermine consumer welfare and innovation. Let’s take a quick look at the TCAA’s parade of horribles.

The TCAA makes it unlawful for any person “with a dominant position in the conduct of any business, trade or commerce, in any labor market, or in the furnishing of any service in this state to abuse that dominant position.”

A “dominant position” may be established through “direct evidence” that “may include, but is not limited to, the unilateral power to set prices, terms, power to dictate non-price contractual terms without compensation; or other evidence that a person is not constrained by meaningful competitive pressures, such as the ability to degrade quality without suffering reduction in profitability. In labor markets, direct evidence of a dominant position may include, but is not limited to, the use of non-compete clauses or no-poach agreements, or the unilateral power to set wages.”

The “direct evidence” language is unbounded and hopelessly vague. What does it mean to not be “constrained by meaningful competitive pressures”? Such an inherently subjective characterization would give prosecutors carte blanche to find dominance. What’s more, since “no court shall require definition of a relevant market” to find liability in the face of “direct evidence,” multiple competitors in a vigorously competitive market might be found “dominant.” Thus, for example, the ability of a firm to use non-compete clauses or no-poach agreements for efficient reasons (such as protecting against competitor free-riding on investments in human capital or competitor theft of trade secrets) would be undermined, even if it were commonly employed in a market featuring several successful and aggressive rivals.

“Indirect evidence” based on market share also may establish a dominant position under the TCAA. Dominance would be presumed if a competitor possessed a market “share of forty percent or greater of a relevant market as a seller” or “thirty percent or greater of a relevant market as a buyer”. 

Those numbers are far below the market-share levels needed to find a “monopoly” under Section 2 of the Sherman Act. Moreover, given inevitable error associated with both market definitions and share allocations—which, in any event, may fluctuate substantially—potential arbitrariness would attend share-based dominance calculations. Most significantly, of course, market shares may say very little about actual market power. Where entry barriers are low and substitutes wait in the wings, a temporarily large market share may not bestow any ability on a “dominant” firm to exercise power over price or to exclude competitors.

In short, it would be trivially easy for non-monopolists possessing very little, if any, market power to be characterized as “dominant” under the TCAA, based on “direct evidence” or “indirect evidence.”

Once dominance is established, what constitutes an abuse of dominance? The TCAA states that an “abuse of a dominant position may include, but is not limited to, conduct that tends to foreclose or limit the ability or incentive of one or more actual or potential competitors to compete, such as leveraging a dominant position in one market to limit competition in a separate market, or refusing to deal with another person with the effect of unnecessarily excluding or handicapping actual or potential competitors.” In addition, “[e]vidence of pro-competitive effects shall not be a defense to abuse of dominance and shall not offset or cure competitive harm.” 

This language is highly problematic. Effective rivalrous competition by its very nature involves behavior by a firm or firms that may “limit the ability or incentive” of rival firms to compete. For example, a company’s introduction of a new cost-reducing manufacturing process, or of a patented product improvement that far surpasses its rivals’ offerings, is the essence of competition on the merits. Nevertheless, it may limit the ability of its rivals to compete, in violation of the TCAA. Moreover, so-called “monopoly leveraging” typically generates substantial efficiencies, and very seldom undermines competition (see here, for example), suggesting that (at best) leveraging theories would generate enormous false positives in prosecution. The TCAA’s explicit direction that procompetitive effects not be considered in abuse of dominance cases further detracts from principled enforcement; it denigrates competition, the very condition that American antitrust law has long sought to promote.

Put simply, under the TCAA, “dominant” firms engaging in normal procompetitive conduct could be held liable (and no doubt frequently would be held liable, given their inability to plead procompetitive justifications) for “abuses of dominance.” To top it off, firms convicted of abusing a dominant position would be liable for treble damages. As such, the TCAA would strongly disincentivize aggressive competitive behavior that raises consumer welfare. 

The TCAA’s negative ramifications would be far-reaching. By embracing a civil law “abuse of dominance” paradigm, the TCAA would run counter to a longstanding U.S. common law antitrust tradition that largely gives free rein to efficiency-seeking competition on the merits. It would thereby place a new and unprecedented strain on antitrust federalism. In a digital world where the effects of commercial conduct frequently are felt throughout the United States, the TCAA’s attack on efficient welfare-inducing business practices would have national (if not international) repercussions.

The TCAA would alter business planning calculations for the worse and could interfere directly in the setting of national antitrust policy through congressional legislation and federal antitrust enforcement initiatives. It would also signal to foreign jurisdictions that the United States’ long-expressed staunch support for reliance on the Consumer Welfare Standard as the touchstone of sound antitrust enforcement is no longer fully operative.

Judge Richard Posner is reported to have once characterized state antitrust enforcers as “barnacles on the ship of federal antitrust” (see here). The TCAA is more like a deadly torpedo aimed squarely at consumer welfare and the American common law antitrust tradition. Let us hope that the New York State Assembly takes heed and promptly rejects the TCAA.    

This is the first in a series of TOTM blog posts discussing the Commission’s recently published Google Android decision. It draws on research from a soon-to-be published ICLE white paper.

The European Commission’s recent Google Android decision will surely go down as one of the most important competition proceedings of the past decade. And yet, an in-depth reading of the 328-page decision should leave attentive readers with a bitter taste.

One of the Commission’s most significant findings is that the Android operating system and Apple’s iOS are not in the same relevant market, along with the related conclusion that Apple’s App Store and Google Play are also in separate markets.

This blog post points to a series of flaws that undermine the Commission’s reasoning on this point. As a result, the Commission’s claim that Google and Apple operate in separate markets is mostly unsupported.

1. Everyone but the European Commission thinks that iOS competes with Android

Surely the assertion that the two predominant smartphone ecosystems in Europe don’t compete with each other will come as a surprise to… anyone paying attention: 

Apple 10-K:

The Company believes the availability of third-party software applications and services for its products depends in part on the developers’ perception and analysis of the relative benefits of developing, maintaining and upgrading such software and services for the Company’s products compared to competitors’ platforms, such as Android for smartphones and tablets and Windows for personal computers.

Google 10-K:

We face competition from: Companies that design, manufacture, and market consumer electronics products, including businesses that have developed proprietary platforms.

This leads to a critical question: Why did the Commission choose to depart from the instinctive conclusion that Google and Apple compete vigorously against each other in the smartphone and mobile operating system market? 

As explained below, its justifications for doing so were deeply flawed.

2. It does not matter that OEMs cannot license iOS (or the App Store)

One of the main reasons why the Commission chose to exclude Apple from the relevant market is that OEMs cannot license Apple’s iOS or its App Store.

But is it really possible to infer that Google and Apple do not compete against each other because their products are not substitutes from OEMs’ point of view? 

The answer to this question is likely no.

Relevant markets, and market shares, are merely a proxy for market power (which is the appropriate baseline upon which to build a competition investigation). As Louis Kaplow puts it:

[T]he entire rationale for the market definition process is to enable an inference about market power.

If there is a competitive market for Android and Apple smartphones, then it is somewhat immaterial that Google is the only firm to successfully offer a licensable mobile operating system (as opposed to Apple and Blackberry’s “closed” alternatives).

By exercising its “power” against OEMs by, for instance, degrading the quality of Android, Google would, by the same token, weaken its competitive position against Apple. Google’s competition with Apple in the smartphone market thus constrains Google’s behavior and limits its market power in Android-specific aftermarkets (on this topic, see Borenstein et al., and Klein).

This is not to say that Apple’s iOS (and App Store) is, or is not, in the same relevant market as Google Android (and Google Play). But the fact that OEMs cannot license iOS or the App Store is mostly immaterial for market definition purposes.

 3. Google would find itself in a more “competitive” market if it decided to stop licensing the Android OS

The Commission’s reasoning also leads to illogical outcomes from a policy standpoint. 

Google could suddenly find itself in a more “competitive” market if it decided to stop licensing the Android OS and operated a closed platform (like Apple does). The direct purchasers of its products – consumers – would then be free to switch between Apple and Google’s products.

As a result, an act that has no obvious effect on actual market power — and that could have a distinctly negative effect on consumers — could nevertheless significantly alter the outcome of competition proceedings on the Commission’s theory. 

One potential consequence is that firms might decide to close their platforms (or refuse to open them in the first place) in order to avoid competition scrutiny (because maintaining a closed platform might effectively lead competition authorities to place them within a wider relevant market). This might ultimately reduce product differentiation among mobile platforms (due to the disappearance of open ecosystems) – the exact opposite of what the Commission sought to achieve with its decision.

This is, among other things, what Antonin Scalia objected to in his Eastman Kodak dissent: 

It is quite simply anomalous that a manufacturer functioning in a competitive equipment market should be exempt from the per se rule when it bundles equipment with parts and service, but not when it bundles parts with service [when the manufacturer has a high share of the “market” for its machines’ spare parts]. This vast difference in the treatment of what will ordinarily be economically similar phenomena is alone enough to call today’s decision into question.

4. Market shares are a poor proxy for market power, especially in narrowly defined markets

Finally, the problem with the Commission’s decision is not so much that it chose to exclude Apple from the relevant markets, but that it then cited the resulting market shares as evidence of Google’s alleged dominance:

(440) Google holds a dominant position in the worldwide market (excluding China) for the licensing of smart mobile OSs since 2011. This conclusion is based on: 

(1) the market shares of Google and competing developers of licensable smart mobile OSs […]

In doing so, the Commission ignored one of the critical findings of the law & economics literature on market definition and market power: Although defining a narrow relevant market may not itself be problematic, the market shares thus adduced provide little information about a firm’s actual market power. 

For instance, Richard Posner and William Landes have argued that:

If instead the market were defined narrowly, the firm’s market share would be larger but the effect on market power would be offset by the higher market elasticity of demand; when fewer substitutes are included in the market, substitution of products outside of the market is easier. […]

If all the submarket approach signifies is willingness in appropriate cases to call a narrowly defined market a relevant market for antitrust purposes, it is unobjectionable – so long as appropriately less weight is given to market shares computed in such a market.

Likewise, Louis Kaplow observes that:

In choosing between a narrower and a broader market (where, as mentioned, we are supposing that the truth lies somewhere in between), one would ask whether the inference from the larger market share in the narrower market overstates market power by more than the inference from the smaller market share in the broader market understates market power. If the lesser error lies with the former choice, then the narrower market is the relevant market; if the latter minimizes error, then the broader market is best.

The Commission failed to heed these important findings.
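
A back-of-the-envelope illustration of the Landes-Posner point, using their well-known Lerner-index formula with purely hypothetical elasticity values, shows how a large share in a narrowly defined market can imply no more pricing power than a modest share in a broader one.

```python
# Sketch of the Landes-Posner relationship between market share and the
# Lerner index (price-cost margin): L = S / (e_d + e_s * (1 - S)),
# where S is the firm's share, e_d the market elasticity of demand, and
# e_s the supply elasticity of fringe rivals. All numbers are hypothetical.

def lerner_index(share, demand_elasticity, fringe_supply_elasticity):
    return share / (demand_elasticity + fringe_supply_elasticity * (1 - share))

# Narrow market: high share, but demand is highly elastic because close
# substitutes sit just outside the market boundary.
narrow = lerner_index(share=0.80, demand_elasticity=4.0, fringe_supply_elasticity=1.0)

# Broad market: lower share, but demand is less elastic because most
# substitutes are already inside the market.
broad = lerner_index(share=0.40, demand_elasticity=1.5, fringe_supply_elasticity=1.0)

print(f"narrow-market Lerner index: {narrow:.2f}")  # ~0.19
print(f"broad-market Lerner index:  {broad:.2f}")   # ~0.19
```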

5. Conclusion

The upshot is that Apple should not have been automatically excluded from the relevant market. 

To be clear, the Commission did discuss this competition from Apple later in the decision. And it also asserted that its findings would hold even if Apple were included in the OS and App Store markets, because Android’s share of devices sold would have ranged from 45% to 79%, depending on the year (although this ignores other potential metrics such as the value of devices sold or Google’s share of advertising revenue).

However, by gerrymandering the market definition (which European case law likely permitted it to do), the Commission ensured that Google would face an uphill battle, starting from a very high market share and thus a strong presumption of dominance. 

Moreover, that it might reach the same result by adopting a more accurate market definition is no excuse for adopting a faulty one and resting its case (and undertaking its entire analysis) on it. In fact, the Commission’s choice of a faulty market definition underpins its entire analysis, and is far from a “harmless error.” 

I shall discuss the consequences of this error in an upcoming blog post. Stay tuned.

Today would have been Henry Manne’s 90th birthday. When he passed away in 2015, he left behind an immense and impressive legacy. In 1991, at the inaugural meeting of the American Law & Economics Association (ALEA), Manne was named a Life Member of ALEA and, along with Nobel Laureate Ronald Coase and federal appeals court judges Richard Posner and Guido Calabresi, one of the four Founders of Law and Economics. The organization I founded, the International Center for Law & Economics, is dedicated to his memory, along with that of his great friend and mentor, UCLA economist Armen Alchian.

Manne is best known for his work in corporate governance and securities law and regulation, of course. But sometimes forgotten is that his work on the market for corporate control was motivated by concerns about analytical flaws in merger enforcement. As former FTC commissioners Maureen Ohlhausen and Joshua Wright noted in a 2015 dissenting statement:

The notion that the threat of takeover would induce current managers to improve firm performance to the benefit of shareholders was first developed by Henry Manne. Manne’s pathbreaking work on the market for corporate control arose out of a concern that antitrust constraints on horizontal mergers would distort its functioning. See Henry G. Manne, Mergers and the Market for Corporate Control, 73 J. POL. ECON. 110 (1965).

But Manne’s focus on antitrust didn’t end in 1965. Moreover, throughout his life he was a staunch critic of misguided efforts to expand the power of government, especially when these efforts claimed to have their roots in economic reasoning — which, invariably, was hopelessly flawed. As his obituary notes:

In his teaching, his academic writing, his frequent op-eds and essays, and his work with organizations like the Cato Institute, the Liberty Fund, the Institute for Humane Studies, and the Mont Pèlerin Society, among others, Manne advocated tirelessly for a clearer understanding of the power of markets and competition and the importance of limited government and economically sensible regulation.

Thus it came to be, in 1974, that Manne was called to testify before the Senate Judiciary Committee, Subcommittee on Antitrust and Monopoly, on Michigan Senator Philip A. Hart’s proposed Industrial Reorganization Act. His testimony is a tour de force, and a prescient rejoinder to the faddish advocates of today’s “hipster antitrust”— many of whom hearken longingly back to the antitrust of the 1960s and its misguided “gurus.”

Henry Manne’s trenchant testimony critiquing the Industrial Reorganization Act and its (ostensible) underpinnings is reprinted in full in this newly released ICLE white paper (with introductory material by Geoffrey Manne):

Henry G. Manne: Testimony on the Proposed Industrial Reorganization Act of 1973 — What’s Hip (in Antitrust) Today Should Stay Passé

Sen. Hart proposed the Industrial Reorganization Act in order to address perceived problems arising from industrial concentration. The bill was rooted in the belief that industry concentration led inexorably to monopoly power; that monopoly power, however obtained, posed an inexorable threat to freedom and prosperity; and that the antitrust laws (i.e., the Sherman and Clayton Acts) were insufficient to address the purported problems.

That sentiment — rooted in the reflexive application of the largely discredited structure-conduct-performance (SCP) paradigm — had already become largely passé among economists in the 70s, but it has resurfaced today as the asserted justification for similar (although less onerous) antitrust reform legislation and the general approach to antitrust analysis commonly known as “hipster antitrust.”

The critiques leveled against the asserted economic underpinnings of efforts like the Industrial Reorganization Act are as relevant today as they were then. As Henry Manne notes in his testimony:

To be successful in this stated aim [“getting the government out of the market”] the following dreams would have to come true: The members of both the special commission and the court established by the bill would have to be satisfied merely to complete their assigned task and then abdicate their tremendous power and authority; they would have to know how to satisfactorily define and identify the limits of the industries to be restructured; the Government’s regulation would not sacrifice significant efficiencies or economies of scale; and the incentive for new firms to enter an industry would not be diminished by the threat of a punitive response to success.

The lessons of history, economic theory, and practical politics argue overwhelmingly against every one of these assumptions.

Both the subject matter of and impetus for the proposed bill (as well as Manne’s testimony explaining its economic and political failings) are eerily familiar. The preamble to the Industrial Reorganization Act asserts that

competition… preserves a democratic society, and provides an opportunity for a more equitable distribution of wealth while avoiding the undue concentration of economic, social, and political power; [and] the decline of competition in industries with oligopoly or monopoly power has contributed to unemployment, inflation, inefficiency, an underutilization of economic capacity, and the decline of exports….

The echoes in today’s efforts to rein in corporate power by adopting structural presumptions are unmistakable. Compare, for example, this language from Sen. Klobuchar’s Consolidation Prevention and Competition Promotion Act of 2017:

[C]oncentration that leads to market power and anticompetitive conduct makes it more difficult for people in the United States to start their own businesses, depresses wages, and increases economic inequality;

undue market concentration also contributes to the consolidation of political power, undermining the health of democracy in the United States; [and]

the anticompetitive effects of market power created by concentration include higher prices, lower quality, significantly less choice, reduced innovation, foreclosure of competitors, increased entry barriers, and monopsony power.

Remarkably, Sen. Hart introduced his bill as “an alternative to government regulation and control.” Somehow, it was the antithesis of “government control” to introduce legislation that, in Sen. Hart’s words,

involves changing the life styles of many of our largest corporations, even to the point of restructuring whole industries. It involves positive government action, not to control industry but to restore competition and freedom of enterprise in the economy

Like today’s advocates of increased government intervention to design the structure of the economy, Sen. Hart sought — without a trace of irony — to “cure” the problem of politicized, ineffective enforcement by doubling down on the power of the enforcers.

Henry Manne was having none of it. As he pointedly notes in his testimony, the worst problems of monopoly power are of the government’s own making. The real threat to democracy, freedom, and prosperity is the political power amassed in the bureaucratic apparatus that frequently confers monopoly, at least as much as the monopoly power it spawns:

[I]t takes two to make that bargain [political protection and subsidies in exchange for lobbying]. And as we look around at various industries we are constrained to ask who has not done this. And more to the point, who has not succeeded?

It is unhappily almost impossible to name a significant industry in the United States that has not gained some degree of protection from the rigors of competition from Federal, State or local governments.

* * *

But the solution to inefficiencies created by Government controls cannot lie in still more controls. The politically responsible task ahead for Congress is to dismantle our existing regulatory monster before it strangles us.

We have spawned a gigantic bureaucracy whose own political power threatens the democratic legitimacy of government.

We are rapidly moving toward the worst features of a centrally planned economy with none of the redeeming political, economic, or ethical features usually claimed for such systems.

The new white paper includes Manne’s testimony in full, including his exchange with Sen. Hart and committee staffers following his prepared remarks.

It is, sadly, nearly as germane today as it was then.

One final note: The subtitle for the paper is a reference to the song “What Is Hip?” by Tower of Power. Its lyrics are decidedly apt:

You done went and found you a guru,

In your effort to find you a new you,

And maybe even managed

To raise your conscious level.

While you’re striving to find the right road,

There’s one thing you should know:

What’s hip today

Might become passé.

— Tower of Power, What Is Hip? (Emilio Castillo, John David Garibaldi & Stephen M. Kupka, What Is Hip? (Bob-A-Lew Songs 1973), from the album TOWER OF POWER (Warner Bros. 1973))

And here’s the song, in all its glory:

 

Like taxation, government regulation imposes indirect deadweight efficiency losses on the economy as well as direct costs on affected businesses and consumers.  Unlike taxation, however, whose direct costs (payments made to government) are on public display, the heavy direct burden of regulation is far less visible to the public.  This creates a strong incentive for legislators to substitute regulatory mechanisms for taxation when possible (for example, regulation has been used instead of taxation as an indirect means of redistributing income, as documented by Richard Posner, among others).  It also encourages the growth of regulation, rather than taxation, to satisfy the demands of interest groups.  Making the direct costs of regulation more visible might at least partially rein in these malign governmental tendencies.  Is such a goal unattainable as a practical matter?  Perhaps not.

In a recent paper, Sean Speer of the R Street Institute suggests that Congress take a page from the Canadian Government and consider imposing “regulatory budgets” on federal agencies. As Speer explains, “[r]egulatory budgeting requires government departments and agencies to price their ‘regulatory expenditures,’ just as they do fiscal expenditures.” More specifically:

Regulatory budgeting is based on the premise that regulatory costs – the administrative costs incurred by the state to enforce a regulation and the compliance costs incurred by individuals and businesses to conform to a regulation – are conceptually similar to government expenditures through the budget process. . . .

The regulatory budget . . . operates analogously to the fiscal budget. Each year, the government establishes an upper limit on the economic costs of its regulatory activities. It then apportions that expenditure cap across the government to various departments and agencies, who are expected to live within their respective regulatory budgets. . . .

[T]he [regulatory budgeting] regime requires that departments and agencies can only exceed their budgetary limit by offsetting the costs of new regulations with “savings” realized by eliminating existing regulatory requirements. The expectation is that this comprehensive process provides incentives to review the existing stock of regulatory requirements regularly. It also rewards simplifying or removing outdated and ineffective regulations. . . .
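
To make the accounting concrete, the following minimal sketch (with hypothetical agency caps and cost figures) captures the offset mechanism the quoted passage describes: a new rule fits within an agency’s budget only if its estimated cost, net of savings from repealed rules, stays under the cap.

```python
# Minimal sketch of agency-level regulatory budgeting with offsets,
# as described in the quoted passage. The cap and cost estimates are
# hypothetical (think of them as millions of dollars in annual burdens).

class RegulatoryBudget:
    def __init__(self, cap):
        self.cap = cap        # annual cap on estimated regulatory costs
        self.spent = 0.0      # cumulative estimated cost of rules on the books

    def can_adopt(self, new_rule_cost, offsets=0.0):
        """A new rule fits only if its cost, net of repeal offsets, stays under the cap."""
        return self.spent + new_rule_cost - offsets <= self.cap

    def adopt(self, new_rule_cost, offsets=0.0):
        if not self.can_adopt(new_rule_cost, offsets):
            raise ValueError("rule exceeds the regulatory budget; find more offsets")
        self.spent += new_rule_cost - offsets

agency = RegulatoryBudget(cap=100.0)
agency.adopt(new_rule_cost=80.0)             # existing stock of rules
print(agency.can_adopt(30.0))                # False: would exceed the cap
print(agency.can_adopt(30.0, offsets=15.0))  # True: repealing old rules frees up room
agency.adopt(30.0, offsets=15.0)
print(agency.spent)                          # 95.0
```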

Speer points out that while regulatory cost calculations are complicated and fallible, so are the estimates and projections that are part and parcel of fiscal budgeting.  Thus, as in fiscal budgeting, the estimates produced by regulatory budgeting “do not need to be infallible for the system to work. They just need to be seen as defensible, unbiased and a reasonable basis for making trade-offs”.  Speer goes on to discuss the cost savings achieved by the Canadian province of British Columbia (a 43 percent reduction in regulatory requirements imposed on individuals and businesses achieved over the past 15 years), and by the Canadian federal government under former Prime Minister Stephen Harper (annual reductions of C$32 million — roughly $24.7 million in U.S. dollars — in administrative burdens on business and 750,000 hours in compliance costs), in implementing regulatory budgeting.

Speer then presents a brief summary of recent U.S. congressional proposals for the establishment of federal regulatory budgeting, and concludes his analysis in a positive vein:

The costing methodology proposed in the bills would capture “all costs” imposed on regulated entities (defined as companies, nonprofit organizations, and local and state governments), as well as the administrative costs incurred by the federal government. . .

Regardless which [regulatory budgeting] model the U.S. Congress ultimately chooses, it is right to focus on regulatory reform as part of a low-cost, pro-growth agenda. That the federal government enacted 84 new regulatory requirements in 2014 that each exceeded $100 million in estimated burdens on the economy, is strong evidence that the time for reform has come. 

In sum, regulatory budgeting is a creative institutional reform that has shown real promise in reducing the economic burdens imposed by government on businesses and individuals.  It merits careful attention by the next administration and the next Congress, as they seek practical ways to constrain the bureaucratic leviathan.

On January 26 the Heritage Foundation hosted a one-day conference on “Antitrust Policy for a New Administration.”  Featured speakers included three former heads of the U.S. Department of Justice’s Antitrust Division (DOJ) (D.C. Circuit Senior Judge Douglas Ginsburg, James Rill, and Thomas Barnett) and a former Chairman of the U.S. Federal Trade Commission (FTC) (keynote speaker Professor William Kovacic), among other leading experts on foreign and domestic antitrust.  The conference addressed developments at DOJ, the FTC, and overseas.  The entire program (which will be posted for viewing very shortly at Heritage.org) has generated substantial trade press coverage (see, for example, two articles published by Global Competition Review).  Four themes highlighted during the presentations are particularly worth noting.

First, the importance of the federal judiciary – and judicial selection – in the development and direction of U.S. antitrust policy.  In his opening address, Professor Bill Kovacic described the central role the federal judiciary plays in shaping American antitrust principles.  He explained how a few key judges with academic backgrounds (for example, Frank Easterbrook, Richard Posner, Stephen Breyer, and Antonin Scalia) had a profound effect in reorienting American antitrust rules toward the teachings of law and economics, and added that the Reagan Administration focused explicitly on appointing free market-oriented law professors for key appellate judgeships.  Since the new President will appoint a large proportion of the federal judiciary, the outcome of the 2016 election could profoundly influence the future direction of antitrust, according to Professor Kovacic.  (Professor Kovacic also made anecdotal comments about various candidates, noting the short but successful FTC experience of Ted Cruz; Donald Trump having once been an antitrust plaintiff (when the United States Football League sued the National Football League); Hillary Clinton’s misstatement that antitrust has not been applied to anticompetitive payoffs made by big drug companies to generic producers; and Bernie Sanders’ pronouncements suggesting a possible interest in requiring the breakup of large companies.)

Second, the loss of American global economic leadership on antitrust enforcement policy.  There was a consensus that jurisdictions around the world increasingly have opted for the somewhat more interventionist European civil law approach to antitrust, in preference to the American enforcement model.  There are various explanations for this, including the fact that civil law predominates in many (though not all) nations that have adopted antitrust regimes, and the natural attraction many governments have for administrative models of economic regulation that grant the state broad enforcement discretion and authority.  Whatever the explanation, there also seemed to be some sentiment that U.S. government agencies have not been particularly aggressive in seeking to counter this trend by making the case for the U.S. approach (which relies more on flexible common law reasoning to accommodate new facts and new economic learning).  (See here for my views on a desirable approach to antitrust enforcement, rooted in error cost considerations.)

Third, the need to consider reforming current cartel enforcement programs.  Cartel enforcement programs, which are a mainstay of antitrust, received some critical evaluation by the members of the DOJ and international panels.  Judge Ginsburg noted that the pattern of imposing ever-higher fines on companies, which independently have strong incentives to avoid cartel conduct, may be counterproductive, since it is typically "rogue" employees who flout company policies and collaborate in cartels.  The focus thus should be on strong sanctions against such employees.  Others also opined that overly high corporate cartel fines may not be ideal.  Relatedly, some argued that the failure to give "good behavior" credit to companies that have corporate compliance programs may be suboptimal and welfare-reducing, since companies may find that it is not cost-beneficial to invest substantially in such programs if they receive no perceived benefit.  Also, it was pointed out that imposing very onerous and expensive internal compliance mandates would be inappropriate, since companies may avoid them if they perceive the costs of compliance programs to outweigh the expected value of antitrust penalties.  In addition, the programs by which governments grant firms leniency for informing on a cartel in which they participate – instituted by DOJ in the 1990s and widely emulated by foreign enforcement agencies – came in for some critical evaluation.  One international panelist argued that DOJ should not rely solely on leniency to ferret out cartel activity, stressing that other jurisdictions are beginning to apply econometric methods to aid cartel detection.  In sum, while there appeared to be general agreement about the value and overall success of cartel prosecutions, there also was support for consideration of new means to deter and detect cartels.

Fourth, the need to work to enhance due process in agency investigations and enforcement actions.  Concerns about due process surfaced on both the FTC and international panels.  A former FTC general counsel complained about staff's lack of explanation of theories of violation in FTC consumer protection investigations, and limitations on access to senior-level decision-makers, in cases not raising fraud.  It was argued that such investigations may promote the micromanagement of non-deceptive business behavior in areas such as data protection.  Although consumer protection is not antitrust, commentators raised the possibility that foreign agencies would cite FTC consumer protection due process deficiencies in justifying their own antitrust due process inadequacies (since the FTC enforces both antitrust and consumer protection under one statutory scheme).  The international panel discussed the fact that due process problems are particularly bad in Asia but also exist to some extent in Europe.  Particular due process issues panelists found to be pervasive overseas included, for example, documentary request abuses, lack of adequate access to counsel, and inadequate information about the nature or purpose of investigations.  The international panelists agreed that the U.S. antitrust enforcement agencies, bar associations, and international organizations (such as the International Competition Network and the OECD) should continue to work to promote due process, but that there is no magic bullet and this will require a long-term commitment.  (There was no unanimity as to whether other U.S. governmental organs, such as the State Department and the U.S. Trade Representative's Office, should be called upon for assistance.)

In conclusion, the 2016 Heritage Foundation antitrust conference shed valuable light on major antitrust policy issues that the next President will have to confront.  The approach the next President takes in dealing with these issues will have major implications for a very significant branch of economic regulation, both here and abroad.

On September 30, in O’Bannon v. NCAA, the U.S. Court of Appeals for the 9th Circuit held that the National Collegiate Athletic Association’s (NCAA) rules that prohibited student athletes from being paid for the use of their names, images, and likenesses are subject to the antitrust laws and constitute an unlawful restraint of trade, under the antitrust rule of reason. This landmark holding represents the first federal appellate condemnation of NCAA limitations on compensating student athletes. (In two previous Truth on the Market posts I discussed this lawsuit and later explained that I agreed with the federal district court’s decision striking down these NCAA rules.) The gist of the 9th Circuit’s opinion is summarized by the Court’s staff:

The [9th Circuit] panel held that it was not precluded from reaching the merits of plaintiffs’ Sherman Act claim because: (1) the Supreme Court did not hold in NCAA v. Bd. of Regents of the Univ. of Okla., 468 U.S. 85 (1984), that the NCAA’s amateurism rules are valid as a matter of law; (2) the rules are subject to the Sherman Act because they regulate commercial activity; and (3) the plaintiffs established that they suffered injury in fact, and therefore had standing, by showing that, absent the NCAA’s rules, video game makers would likely pay them for the right to use their names, images, and likenesses in college sports video games.

The panel held that even though many of the NCAA’s rules were likely to be procompetitive, they were not exempt from antitrust scrutiny and must be analyzed under the Rule of Reason. Applying the Rule of Reason, the panel held that the NCAA’s rules had significant anticompetitive effects within the college education market, in that they fixed an aspect of the “price” that recruits pay to attend college. The record supported the district court’s finding that the rules served the procompetitive purposes of integrating academics with athletics and preserving the popularity of the NCAA’s product by promoting its current understanding of amateurism. The panel concluded that the district court identified one proper less restrictive alternative to the current NCAA rules – i.e., allowing NCAA members to give scholarships up to the full cost of attendance – but the district court’s other remedy, allowing students to be paid cash compensation of up to $5,000 per year, was erroneous. The panel vacated the district court’s judgment and permanent injunction insofar as they required the NCAA to allow its member schools to pay student-athletes up to $5,000 per year in deferred compensation.

Chief Judge Thomas concurred in part and dissented in part. He disagreed with the [two-judge panel] majority's conclusion that the district court clearly erred in ordering the NCAA to permit up to $5,000 in deferred compensation above student-athletes' full cost of attendance.

The key point of the 9th Circuit's decision, that competitively restrictive rules are not exempt from antitrust scrutiny merely because they promote the perception of "amateurism," is clearly correct, and in line with modern antitrust jurisprudence. The Supreme Court has taught that anticompetitive restrictions aimed at furthering the reputation of the learned professions (see Goldfarb v. Virginia State Bar (1975), striking down a minimum legal fee schedule for title searches), and their ability to advance social goals effectively (see FTC v. Superior Court Trial Lawyers Association (1990), condemning a joint effort to raise government-paid legal aid fees and thereby "enhance" the quality of legal aid representation), are fully subject to antitrust review. Even alleged desires to ensure that quality medical services are not sacrificed (see FTC v. Indiana Federation of Dentists (1986), rejecting a dental association's agreement to deny insurers' request for procedure-specific dental x-rays) and that safety is maintained in major construction projects (see National Society of Professional Engineers v. United States (1978), striking down an ethical canon barring competitive bids for engineering services) do not shield agreements from antitrust evaluation and potential condemnation. In light of those teachings, the NCAA's claim (based on a clear misreading of the Supreme Court's NCAA v. Board of Regents (1984) decision) that its highly restrictive "amateurism" rules should be exempt from antitrust review is patently absurd.

Moreover, as a matter of substance, the NCAA is precisely the sort of institution whose rules merit close evaluation by antitrust enforcers. The NCAA is a monopsony cartel, representing the institutions (America's colleges) that effectively are the sole buyers of the services of high school football and basketball players who hope to pursue professional sports careers. Further, the NCAA's rules regarding student athletes greatly limit competition, artificially suppress athletes' compensation, and are in severe tension with the "scholar-athlete" ideal that the NCAA claims to promote. In 2011, the late University of Chicago Professor Gary Becker, a Nobel Laureate in Economics, put it starkly:

[T]he NCAA sharply limits the number of athletic scholarships, and even more importantly, limits the size of the scholarships that schools can offer the best players. NCAA rules also severely restricts the gifts and housing players are allowed to receive from alumni and others, do not allow college players to receive pay for playing for professional teams during summers or even before they attended college, and limits what they can be paid for non-playing summer work. The rules are extremely complicated, and they constitute hundreds of pages that lay out what is permitted in recruiting prospective students, when students have to make binding commitments to attend schools, the need to renew athletic scholarships, the assistance that can be provided to players’ parents, and of course the size of scholarships.

It is impossible for an outsider to look at these rules without concluding that their main aim is to make the NCAA an effective cartel that severely constrains competition among schools for players. The NCAA defends these rules by claiming that their main purpose is to prevent exploitation of student-athletes, to provide a more equitable system of recruitment that enables many colleges to maintain football and basketball programs and actively search for athletes, and to insure that the athletes become students as well as athletes. Unfortunately for the NCAA, the facts are blatantly inconsistent with these defenses. . . .
A large fraction of the Division I players in basketball and football, the two big money sports, are recruited from poor families; many of them are African-Americans from inner cities and rural areas. Every restriction on the size of scholarships that can be given to athletes in these sports usually takes money away from poor athletes and their families, and in effect transfers these resources to richer students in the form of lower tuition and cheaper tickets for games. . . .

[T]he graduation rates for these minority students-athletes are depressingly low. For example, the average graduation rate of Division I African American basketball and football players appears to be less than 50%.

Some of the top players quit school to play in the NBA or NFL, but that is a tiny fraction of all athletes who dropout. The vast majority dropout either because they use up their sports eligibility before they completed the required number of classes, or they failed to continue to make the teams. Schools usually forget about athletes when they stop competing. An important further difference between athletes and non-athletes who drop out of school is that athletes would have been able to get much better financial support for themselves and their families but for the NCAA restrictions on compensation to athletes. They could have used these additional assets to help them finish school, or to get a better start if they dropped out.

Also in 2011, Judge Richard Posner of the 7th Circuit echoed Professor Becker's views regarding NCAA restrictions on student-athlete compensation and noted the NCAA's history of avoiding antitrust problems:

The National Collegiate Athletic Association behaves monopsonistically in forbidding its member colleges and universities to pay its athletes. Although cartels, including monopsonistic ones, are generally deemed to be illegal per se under American antitrust law, the NCAA’s monopsonistic behavior has thus far not been successfully challenged. The justification that the NCAA offers – that collegiate athletes are students and would be corrupted by being salaried – coupled with the fact that the members of the NCAA, and the NCAA itself, are formally not-for-profit institutions, have had sufficient appeal to enable the association to continue to impose and enforce its rule against paying student athletes, and a number of subsidiary rules designed to prevent the cheating by cartel members that plagues most cartels.

As Becker points out, were it not for the monopsonistic rule against paying student athletes, these athletes would be paid; the monopsony transfers wealth from them to their “employers,” the colleges. A further consequence is that college teams are smaller and, more important, of lower quality than they would be if the student athletes were paid.

In sum, the 9th Circuit O’Bannon Court merits praise for deciding clearly and unequivocally that antitrust applies to the NCAA’s student athlete rules, irrespective of whether one agrees with the specific holding in the case. The antitrust laws are a “consumer welfare prescription” that applies generally to activities that have an impact on interstate commerce, and short shrift should be given to any institution that claims it should be antitrust-exempt based on the alleged “virtue” or “public-spiritedness” of its actions. (This reasoning also supports the lifting of baseball’s antitrust exemption, which stems from a 1922 Supreme Court decision that is out of step with modern antitrust jurisprudence. But that is a matter for another day.)

Henry Manne was a great man, and a great father. He was, for me as for many others, one of the most important intellectual influences in my life. I will miss him dearly.

Following is his official obituary. RIP, dad.

Henry Girard Manne died on January 17, 2015 at the age of 86. A towering figure in legal education, Manne was one of the founders of the Law and Economics movement, the 20th century’s most important and influential legal academic discipline.

Manne is survived by his wife, Bobbie Manne; his children, Emily and Geoffrey Manne; two grandchildren, Annabelle and Lily Manne; and two nephews, Neal and Burton Manne. He was preceded in death by his parents, Geoffrey and Eva Manne, and his brother, Richard Manne.

Henry Manne was born on May 10, 1928, in New Orleans. The son of merchant parents, he was raised in Memphis, Tennessee. He attended Central High School in Memphis, and graduated with a BA in economics from Vanderbilt University in 1950. Manne received a JD from the University of Chicago in 1952, and a doctorate in law (SJD) from Yale University in 1966. He also held honorary degrees from Seattle University, Universidad Francisco Marroquín in Guatemala, and George Mason University.

Following law school Manne served in the Air Force JAG Corps, stationed at Chanute Air Force Base in Illinois and McGuire Air Force Base in New Jersey. He practiced law briefly in Chicago before beginning his teaching career at St. Louis University in 1956. In subsequent years he also taught at the University of Wisconsin, George Washington University, the University of Rochester, Stanford University, the University of Miami, Emory University, George Mason University, the University of Chicago, and Northwestern University.

Throughout his career Henry Manne's writings originated, developed, or anticipated an extraordinary range of ideas and themes that have animated the past forty years of law and economics scholarship. For his work, Manne was named a Life Member of the American Law and Economics Association and, along with Nobel Laureate Ronald Coase and federal appeals court judges Richard Posner and Guido Calabresi, one of the four Founders of Law and Economics.

In the 1950s and 60s Manne pioneered the application of economic principles to the study of corporations and corporate law, authoring seminal articles that transformed the field. His article, "Mergers and the Market for Corporate Control," published in 1965, is credited with opening the field of corporate law to economic analysis and with anticipating what has come to be known as the Efficient Market Hypothesis (for which economist Eugene Fama was awarded the Nobel Prize in 2013). Manne's 1966 book, Insider Trading and the Stock Market, was the first scholarly work to challenge the logic of insider trading laws, and remains the most influential book on the subject today.

In 1968 Manne moved to the University of Rochester with the aim of starting a new law school. Manne anticipated many of the current criticisms that have been aimed at legal education in recent years, and proposed a law school that would provide rigorous training in the economic analysis of law as well as specialized training in specific areas of law that would prepare graduates for practice immediately out of law school. Manne’s proposal for a new law school, however, drew the ire of incumbent law schools in upstate New York, which lobbied against accreditation of the new program.

While at Rochester, in 1971, Manne created the "Economics Institute for Law Professors," in which, for the first time, law professors were offered intensive instruction in microeconomics with the aim of incorporating economics into legal analysis and theory. The Economics Institute was later moved to the University of Miami when Manne founded the Law & Economics Center there in 1974. While at Miami, Manne also began the John M. Olin Fellows Program in Law and Economics, which provided generous scholarships for professional economists to earn a law degree. That program (and its subsequent iterations) has gone on to produce dozens of professors of law and economics, as well as leading lawyers and influential government officials.

The creation of the Law & Economics Center (which subsequently moved to Emory University and then to George Mason Law School, where it continues today), was one of the foundational events in the Law and Economics Movement. Of particular importance to the development of US jurisprudence, its offerings were expanded to include economics courses for federal judges. At its peak a third of the federal bench and four members of the Supreme Court had attended at least one of its programs, and every major law school in the country today counts at least one law and economics scholar among its faculty. Nearly every legal field has been influenced by its scholarship and teaching.

When Manne became Dean of George Mason Law School in Arlington, Virginia, in 1986, he finally had the opportunity to implement the ideas he had originally developed at Rochester. Manne's move to George Mason united him with economist James Buchanan, who was awarded the Nobel Prize for Economics in 1986 for his path-breaking work in the field of Public Choice economics, and turned George Mason University into a global leader in law and economics. His tenure at George Mason, where he served as dean until 1997 and as George Mason University Foundation Professor until 1999, transformed legal education by integrating a rigorous economic curriculum into the law school, and he remade George Mason Law School into one of the most important law schools in the country. The school's Henry G. Manne Moot Court Competition for Law & Economics and the Henry G. Manne Program in Law and Economics Studies are named for him.

Manne was celebrated for his independence of mind and his respect for sound reasoning and intellectual rigor rather than academic pedigree. Soon after he left Rochester to start the Law and Economics Center, he received a call from Yale faculty member Ralph Winter (who later became a celebrated judge on the United States Court of Appeals) offering Manne a faculty position. As he recounted in an interview several years later, Manne told Winter, "Ralph, you're two weeks and five years too late." When Winter asked Manne what he meant, Manne responded, "Well, two weeks ago, I agreed that I would start this new center on law and economics." When Winter asked, "And five years?" Manne responded, "And you're five years too late for me to give a damn."

The academic establishment’s slow and skeptical response to the ideas of law and economics eventually persuaded Manne that reform of legal education was unlikely to come from within the established order and that it would be necessary to challenge the established order from without. Upon assuming the helm at George Mason, Dean Manne immediately drew to the school faculty members laboring at less-celebrated law schools whom Manne had identified through his economics training seminars for law professors, including several alumni of his Olin Fellows programs. Today the law school is recognized as one of the world’s leading centers of law and economics.

Throughout his career, Manne was an outspoken champion of free markets and liberty. His intellectual heroes and intellectual peers were classical liberal economists like Friedrich Hayek, Ludwig von Mises, Armen Alchian, and Harold Demsetz, and these scholars deeply influenced his thinking. As economist Donald Boudreaux said of Dean Manne, "I think what Henry saw in Alchian – and what Henry's own admirers saw in Henry – was the reality that each unfailingly understood that competition in human affairs is an intrepid force…"

In his teaching, his academic writing, his frequent op-eds and essays, and his work with organizations like the Cato Institute, the Liberty Fund, the Institute for Humane Studies, and the Mont Pelerin Society, among others, Manne advocated tirelessly for a clearer understanding of the power of markets and competition and the importance of limited government and economically sensible regulation.

After leaving George Mason in 1999, Manne remained an active scholar and commentator on public affairs as a frequent contributor to the Wall Street Journal. He continued to provide novel insights on corporate law, securities law, and the reform of legal education. Following his retirement Manne became a Distinguished Visiting Professor at Ave Maria Law School in Naples, Florida. The Liberty Fund, of Indianapolis, Indiana, recently published The Collected Works of Henry G. Manne in three volumes.

For some, perhaps more than for all of his intellectual accomplishments Manne will be remembered as a generous bon vivant who reveled in the company of family and friends. He was an avid golfer (who never scheduled a conference far from a top-notch golf course), a curious traveler, a student of culture, a passionate eater (especially of ice cream and Peruvian rotisserie chicken from El Pollo Rico restaurant in Arlington, Virginia), and a gregarious debater (who rarely suffered fools gladly). As economist Peter Klein aptly remarked: “He was a charming companion and correspondent — clever, witty, erudite, and a great social and cultural critic, especially of the strange world of academia, where he plied his trade for five decades but always as a slight outsider.”

Scholar, intellectual leader, champion of individual liberty and free markets, and builder of a great law school—Manne’s influence on law and legal education in the Twentieth Century may be unrivaled. Today, the institutions he built and the intellectual movement he led continue to thrive and to draw sustenance from his intellect and imagination.

There will be a memorial service at George Mason University School of Law in Arlington, Virginia on Friday, February 13, at 4:00 pm. In lieu of flowers the family requests that donations be made in his honor to the Law & Economics Center at George Mason University School of Law, 3301 Fairfax Drive, Arlington, VA 22201 or online at http://www.masonlec.org.

You can listen here: http://www.fed-soc.org/publications/detail/is-the-patent-system-working-or-broken-a-discussion-with-judges-posner-and-michel-podcast

Is the Patent System Working or Broken?

A Discussion with Judges Posner and Michel

Today, people read almost daily reports about the "broken patent system" in newspaper articles, blogs, and social media websites.  Is this true?  On the one hand, the high-tech and biotech industries seem awash in patent litigation, and Congress, regulatory agencies, and courts are considering adopting a variety of reform measures.  On the other hand, patents are securing property rights in technological innovation once imagined only as science fiction — tablet computers, smart phones, genetic testing for cancer, personalized medical treatments for debilitating diseases, and many others — and these technological marvels are now a commonplace feature of our lives.

To discuss these two conflicting stories about whether the patent system promotes or hampers innovation, we will host two distinguished jurists: Paul Michel, former Chief Judge of the Court of Appeals for the Federal Circuit, and Judge Richard Posner of the Court of Appeals for the Seventh Circuit.  Both judges have unparalleled depth of knowledge about patent policy and the working details of the patent system.  This Teleforum brings them together for the first time to discuss their respective views on whether the patent system today is properly securing property rights in new innovation.

Featuring: 

Hon. Paul R. Michel, United States Court of Appeals, Federal Circuit (ret.)

Hon. Richard A. Posner, United States Court of Appeals, Seventh Circuit

Professor Adam Mossoff, George Mason University Law School (Moderator)

Next Wednesday, I’m moderating a teleforum discussion between Judge Michel and Judge Posner on the patent system.  This teleforum is open to the public, and so anyone can call in.  Here’s the information:

The Federalist Society’s Intellectual Property Practice Group and The George Mason University Law School Center for the Protection of Intellectual Property
Present a Teleforum Call 

Is the Patent System Working or Broken?

A Discussion with Judges Posner and Michel

Today, people read almost daily reports about the "broken patent system" in newspaper articles, blogs, and social media websites.  Is this true?  On the one hand, the high-tech and biotech industries seem awash in patent litigation, and Congress, regulatory agencies, and courts are considering adopting a variety of reform measures.  On the other hand, patents are securing property rights in technological innovation once imagined only as science fiction — tablet computers, smart phones, genetic testing for cancer, personalized medical treatments for debilitating diseases, and many others — and these technological marvels are now a commonplace feature of our lives.

To discuss these two conflicting stories about whether the patent system promotes or hampers innovation, we will host two distinguished jurists: Paul Michel, former Chief Judge of the Court of Appeals for the Federal Circuit, and Judge Richard Posner of the Court of Appeals for the Seventh Circuit.  Both judges have unparalleled depth of knowledge about patent policy and the working details of the patent system.  This Teleforum brings them together for the first time to discuss their respective views on whether the patent system today is properly securing property rights in new innovation.

Featuring: 

Hon. Paul R. Michel, United States Court of Appeals, Federal Circuit (ret.)

Hon. Richard A. Posner, United States Court of Appeals, Seventh Circuit

Professor Adam Mossoff, George Mason University Law School (Moderator)

Wednesday, December 19th, 2012

at 2:00 p.m. (ET)

 

About a week ago, I was lucky enough to moderate the digital equivalent of a "fireside chat" with Richard Epstein about the patent system.  The topic was "Patent Rights: A Spark or Hindrance for the Economy?," and Richard offered his usual brilliant analysis of the systemic virtues of securing patents as property rights.  You can listen to the podcast here.

The podcast is also available via iTunes, for readers of this blog who are members of the “cult of Apple.” 🙂

Here’s the description of the podcast:

Innovation and entrepreneurship are integral to America's economic strength, and the U.S. patent system has been critical to nurturing the innovation economy.  With its foundation in Article One, Section 8 of the Constitution, the U.S. patent system has been the strongest in the world.  In recent years, some critics, including Judge Richard Posner, have argued that the patent system has led to excessive patenting, too much litigation, and unwarranted costs for consumers.  Patent defenders have responded that with every spike in innovation comes a corresponding increase in the number of patent suits, and that efforts to weaken patent rights will inevitably lead to less innovation.  With the passage of the America Invents Act, the broadest overhaul of the patent system in 50 years, many people believed that the dispute over patent rights would recede.  However, with a string of high-profile patent infringement suits in the smartphone industry, and a new effort at the International Trade Commission to roll back patent rights in certain patents held by so-called "non-practicing entities" (NPEs), the debate over intellectual property has grown more intense.  Would reduced patent rights diminish U.S. competitiveness and depress innovation?  In a diversified economy, should NPEs have fewer patent rights than those that manufacture their inventions?  Will innovation continue apace even if patent protections are scaled back?