[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

Earlier this month, Professors Fiona Scott Morton, Steve Salop, and David Dinielli penned a letter expressing their “strong support” for the proposed American Innovation and Choice Online Act (AICOA). In the letter, the professors address criticisms of AICOA and urge its approval, despite possible imperfections.

“Perhaps this bill could be made better if we lived in a perfect world,” the professors write, “[b]ut we believe the perfect should not be the enemy of the good, especially when change is so urgently needed.”

The problem is that the professors and other supporters of AICOA have shown neither that “change is so urgently needed” nor that the proposed law is, in fact, “good.”

Is Change ‘Urgently Needed’?

With respect to the purported urgency that warrants passage of a concededly imperfect bill, the letter authors assert two points. First, they claim that AICOA’s targets—Google, Apple, Facebook, Amazon, and Microsoft (collectively, GAFAM)—“serve as the essential gatekeepers of economic, social, and political activity on the internet.” It is thus appropriate, they say, to amend the antitrust laws to do something they have never before done: saddle a handful of identified firms with special regulatory duties.

But is this oft-repeated claim about “gatekeeper” status true? The label conjures up the old Terminal Railroad case, where a group of firms controlled the only bridges over the Mississippi River at St. Louis. Freighters had no choice but to utilize their services. Do the GAFAM firms really play a similar role with respect to “economic, social, and political activity on the internet”? Hardly.

With respect to economic activity, Amazon may be a huge player, but it still accounts for only 39.5% of U.S. ecommerce sales—and far less of retail sales overall. Consumers have gobs of other ecommerce options, and so do third-party merchants, which may sell their wares using Shopify, eBay, Walmart, Etsy, numerous other ecommerce platforms, or their own websites.

For social activity on the internet, consumers need not rely on Facebook and Instagram. They can connect with others via Snapchat, Reddit, Pinterest, TikTok, Twitter, and scores of other sites. To be sure, all these services have different niches, but the letter authors’ claim that the GAFAM firms are “essential gatekeepers” of “social… activity on the internet” is spurious.

Nor are the firms singled out by AICOA essential gatekeepers of “political activity on the internet.” The proposed law touches neither Twitter, the primary hub of political activity on the internet, nor TikTok, which is increasingly used for political messaging.

The second argument the letter authors assert in support of their claim of urgency is that “[t]he decline of antitrust enforcement in the U.S. is well known, pervasive, and has left our jurisprudence unable to protect and maintain competitive markets.” In other words, contemporary antitrust standards are anemic and have led to a lack of market competition in the United States.

The evidence for this claim, which is increasingly parroted in the press and among the punditry, is weak. Proponents primarily point to studies showing:

  1. increasing industrial concentration;
  2. higher markups on goods and services since 1980;
  3. a declining share of surplus going to labor, which could indicate monopsony power in labor markets; and
  4. a reduction in startup activity, suggesting diminished innovation. 

Examined closely, however, those studies fail to establish a domestic market power crisis.

Industrial concentration has little to do with market power in actual markets. Indeed, research suggests that, while industries may be consolidating at the national level, competition at the market (local) level is increasing, as more efficient national firms open more competitive outlets in local markets. As Geoff Manne sums up this research:

Most recently, several working papers looking at the data on concentration in detail and attempting to identify the likely cause for the observed data, show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

With respect to the evidence on markups, the claim of a significant increase in the price-cost margin depends crucially on the measure of cost. The studies suggesting an increase in margins since 1980 use the “cost of goods sold” (COGS) metric, which excludes a firm’s management and marketing costs—both of which have become an increasingly significant portion of firms’ costs. Measuring costs using the “operating expenses” (OPEX) metric, which includes management and marketing costs, reveals that public-company markups increased only modestly since the 1980s and that the increase was within historical variation. (It is also likely that increased markups since 1980 reflect firms’ more extensive use of technology and their greater regulatory burdens, both of which raise fixed costs and require higher markups over marginal cost.)
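
To see why the choice of cost base matters so much, consider a minimal numerical sketch. All dollar figures below are invented for illustration; they are not drawn from the studies discussed above:

```python
# Hypothetical illustration: how the choice of cost base changes measured markups.
# All figures are invented.

def markup(revenue, cost):
    """Markup measured as the ratio of revenue to the chosen cost base."""
    return revenue / cost

# A stylized firm then and now: production costs (COGS) have fallen, while
# management and marketing spending (captured only by OPEX) has grown.
firms = {
    "1980":  {"revenue": 100, "cogs": 60, "mgmt_marketing": 10},
    "today": {"revenue": 100, "cogs": 45, "mgmt_marketing": 30},
}

for year, data in firms.items():
    opex = data["cogs"] + data["mgmt_marketing"]
    print(f"{year}: COGS markup = {markup(data['revenue'], data['cogs']):.2f}, "
          f"OPEX markup = {markup(data['revenue'], opex):.2f}")

# 1980:  COGS markup = 1.67, OPEX markup = 1.43
# today: COGS markup = 2.22, OPEX markup = 1.33
```

On these invented numbers, the COGS-based markup rises by a third while the OPEX-based markup actually drifts down—the same composition shift the critics of the markup studies identify.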

As for the declining labor share, that dynamic is occurring globally. Indeed, the decline in the labor share in the United States has been less severe than in Japan, Canada, Italy, France, Germany, China, Mexico, and Poland, suggesting that anemic U.S. antitrust enforcement is not to blame. (A reduction in the relative productivity of labor is a more likely culprit.)

Finally, the claim of reduced startup activity is unfounded. In its report on competition in digital markets, the U.S. House Judiciary Committee asserted that, since the advent of the major digital platforms:

  1. “[t]he number of new technology firms in the digital economy has declined”;
  2. “the entrepreneurship rate—the share of startups and young firms in the [high technology] industry as a whole—has also fallen significantly”; and
  3. “[u]nsurprisingly, there has also been a sharp reduction in early-stage funding for technology startups.” (pp. 46-47.)

Those claims, however, are based on cherry-picked evidence.

In support of the first two, the Judiciary Committee report cited a study based on data ending in 2011. As Benedict Evans has observed, “standard industry data shows that startup investment rounds have actually risen at least 4x since then.”

In support of the third claim, the report cited statistics from an article noting that the number and aggregate size of the very smallest venture capital deals—those under $1 million—fell between 2014 and 2018 (after growing substantially from 2008 to 2014). The Judiciary Committee report failed to note, however, the cited article’s observation that small venture deals ($1 million to $5 million) had not dropped and that larger venture deals (greater than $5 million) had grown substantially during the same time period. Nor did the report acknowledge that venture-capital funding has continued to increase since 2018.

Finally, there is also reason to think that AICOA’s passage would harm, not help, the startup environment:

AICOA doesn’t directly restrict startup acquisitions, but the activities it would restrict most certainly do dramatically affect the incentives that drive many startup acquisitions. If a platform is prohibited from engaging in cross-platform integration of acquired technologies, or if it can’t monetize its purchase by prioritizing its own technology, it may lose the motivation to make a purchase in the first place.

Despite the letter authors’ claims, neither a paucity of avenues for “economic, social, and political activity on the internet” nor the general state of market competition in the United States establishes an “urgent need” to re-write the antitrust laws to saddle a small group of firms with unprecedented legal obligations.

Is the Vagueness of AICOA’s Primary Legal Standard a Feature?

AICOA bars covered platforms from engaging in three broad classes of conduct (self-preferencing, discrimination among business users, and limiting business users’ ability to compete) where the behavior at issue would “materially harm competition.” It then forbids several specific business practices, but allows defendants to avoid liability by proving that their use of a given practice would not cause a “material harm to competition.”

Critics have argued that “material harm to competition”—a standard that is not used elsewhere in the antitrust laws—is too indeterminate to provide business planners and adjudicators with adequate guidance. The authors of the pro-AICOA letter, however, maintain that this “different language is a feature, not a bug.”

That is so, the letter authors say, because the language effectively signals to courts and policymakers that antitrust should prohibit more conduct. They explain:

To clarify to courts and policymakers that Congress wants something different (and stronger), new terminology is required. The bill’s language would open up a new space and move beyond the standards imposed by the Sherman Act, which has not effectively policed digital platforms.

Putting aside the weakness of the letter authors’ premise (i.e., that Sherman Act standards have proven ineffective), the legislative strategy they advocate—obliquely signal that you want “change” without saying what it should consist of—is irresponsible and risky.

The letter authors assert two reasons Congress should not worry about enacting a liability standard that has no settled meaning. One is that:

[t]he same judges who are called upon to render decisions under the existing, insufficient, antitrust regime, will also be called upon to render decisions under the new law. They will be the same people with the same worldview.

It is thus unlikely that “outcomes under the new law would veer drastically away from past understandings of core concepts….”

But this claim undermines the argument that a new standard is needed to get the courts to do “something different” and “move beyond the standards imposed by the Sherman Act.” If we don’t need to worry about an adverse outcome from a novel, ill-defined standard because courts are just going to continue applying the standard they’re familiar with, then what’s the point of changing the standard?

A second reason not to worry about the lack of clarity on AICOA’s key liability standard, the letter authors say, is that federal enforcers will define it:

The new law would mandate that the [Federal Trade Commission and the Antitrust Division of the U.S. Department of Justice], the two expert agencies in the area of competition, together create guidelines to help courts interpret the law. Any uncertainty about the meaning of words like ‘competition’ will be resolved in those guidelines and over time with the development of caselaw.

This is no doubt music to the ears of members of Congress, who love to get credit for “doing something” legislatively, while leaving the details to an agency so that they can avoid accountability if things turn out poorly. Indeed, the letter authors explicitly play upon legislators’ unwholesome desire for credit-sans-accountability. They emphasize that “[t]he agencies must [create and] update the guidelines periodically. Congress doesn’t have to do much of anything very specific other than approve budgets; it certainly has no obligation to enact any new laws, let alone amend them.”

AICOA does not, however, confer rulemaking authority on the agencies; it merely directs them to create and periodically update “agency enforcement guidelines” and “agency interpretations” of certain affirmative defenses. Those guidelines and interpretations would not bind courts, which would be free to interpret AICOA’s new standard differently. The letter authors presume that courts would defer to the agencies’ interpretation of the vague standard, and they probably would. But that raises other problems.

For one thing, it reduces certainty, which is likely to chill innovation. Giving the enforcement agencies de facto power to determine and redetermine what behaviors “would materially harm competition” means that the rules are never settled. Administrations differ markedly in their views about what the antitrust laws should forbid, so business planners could never be certain that a product feature or revenue model that is legal today will not be deemed to “materially harm competition” by a future administration with greater solicitude for small rivals and upstarts. Such uncertainty will hinder investment in novel products, services, and business models.

Consider, for example, Google’s investment in the Android mobile operating system. Google makes money from Android—which it licenses to device manufacturers for free—by ensuring that Google’s revenue-generating services (e.g., its search engine and browser) are strongly preferenced on Android products. One administration might believe that this is a procompetitive arrangement, as it creates a different revenue model for mobile operating systems (as opposed to Apple’s generation of revenue from hardware sales), resulting in both increased choice and lower prices for consumers. A subsequent administration might conclude that the arrangement materially harms competition by making it harder for rival search engines and web browsers to gain market share. It would make scant sense for a covered platform to make an investment like Google did with Android if its underlying business model could be upended by a new administration with de facto power to rewrite the law.

A second problem with having the enforcement agencies determine and redetermine what covered platforms may do is that it effectively transforms the agencies from law enforcers into sectoral regulators. Indeed, the letter authors agree that “the ability of expert agencies to incorporate additional protections in the guidelines” means that “the bill is not a pure antitrust law but also safeguards other benefits to consumers.” They tout that “the complementarity between consumer protection and competition can be addressed in the guidelines.”

Of course, to the extent that the enforcement guidelines address concerns besides competition, they will be less useful for interpreting AICOA’s “material harm to competition” standard; they might deem a practice suspect on non-competition grounds. Moreover, it is questionable whether creating a sectoral regulator for five widely diverse firms is a good idea. The history of sectoral regulation is littered with examples of agency capture, rent-seeking, and other public-choice concerns. At a minimum, Congress should carefully examine the potential downsides of sectoral regulation, install protections to mitigate those downsides, and explicitly establish the sectoral regulator.

Will AICOA Break Popular Products and Services?

Many popular offerings by the platforms covered by AICOA involve self-preferencing, discrimination among business users, or one of the other behaviors the bill presumptively bans. Pre-installation of iPhone apps and services like Siri, for example, involves self-preferencing or discrimination among business users of Apple’s iOS platform. But iPhone consumers value having a mobile device that offers extensive services right out of the box. Consumers love that Google’s search result for an establishment offers directions to the place, which involves the preferencing of Google Maps. And consumers positively adore Amazon Prime, which can provide free expedited delivery because Amazon conditions Prime designation on a third-party seller’s use of Amazon’s efficient, reliable “Fulfillment by Amazon” service—something Amazon could not do under AICOA.

The authors of the pro-AICOA letter insist that the law will not ban attractive product features like these. AICOA, they say:

provides a powerful defense that forecloses any thoughtful concern of this sort: conduct otherwise banned under the bill is permitted if it would ‘maintain or substantially enhance the core functionality of the covered platform.’

But the authors’ confidence that this affirmative defense will adequately protect popular offerings is misplaced. The defense is narrow and difficult to mount.

First, it immunizes only those behaviors that maintain or substantially enhance the “core” functionality of the covered platform. Courts would rightly interpret AICOA to give effect to that otherwise unnecessary word, which dictionaries define as “the central or most important part of something.” Accordingly, any self-preferencing, discrimination, or other presumptively illicit behavior that enhances a covered platform’s service but not its “central or most important” functions is not even a candidate for the defense.

Even if a covered platform could establish that a challenged practice would maintain or substantially enhance the platform’s core functionality, it would also have to prove that the conduct was “narrowly tailored” and “reasonably necessary” to achieve the desired end, and, for many behaviors, the “le[ast] discriminatory means” of doing so. That is a remarkably heavy burden, and it beggars belief to suppose that business planners considering novel offerings involving self-preferencing, discrimination, or some other presumptively illicit conduct would feel confident that they could make the required showing. It is likely, then, that AICOA would break existing products and services and discourage future innovation.

Of course, Congress could mitigate this concern by specifying that AICOA does not preclude certain things, such as pre-installed apps or consumer-friendly search results. But the legislation would then lose the support of the many interest groups that want the law to preclude the various popular offerings its text, as written, would forbid. Unlike consumers, who are widely dispersed and difficult to organize, the groups and competitors that would benefit from things like stripped-down smartphones, map-free search results, and Prime-less Amazon are effective lobbyists.

Should the US Follow Europe?

Having responded to criticisms of AICOA, the authors of the pro-AICOA letter go on offense. They assert that enactment of the bill is needed to ensure that the United States doesn’t lose ground to Europe, both in regulatory leadership and in innovation. Observing that the European Union’s Digital Markets Act (DMA) has just become law, the authors write that:

[w]ithout [AICOA], the role of protecting competition and innovation in the digital sector outside China will be left primarily to the European Union, abrogating U.S. leadership in this sector.

Moreover, if Europe implements its DMA and the United States does not adopt AICOA, the authors claim:

the center of gravity for innovation and entrepreneurship [could] shift from the U.S. to Europe, where the DMA would offer greater protections to start ups and app developers, and even makers and artisans, against exclusionary conduct by the gatekeeper platforms.

Implicit in the argument that AICOA is needed to maintain America’s regulatory leadership is the assumption that to lead in regulatory policy is to have the most restrictive rules. The most restrictive regulator will necessarily be the “leader” in the sense that it will be the one with the most control over regulated firms. But leading in the sense of optimizing outcomes and thereby serving as a model for other jurisdictions entails crafting the best policies—those that minimize the aggregate social losses from wrongly permitting bad behavior, wrongly condemning good behavior, and determining whether conduct is allowed or forbidden (i.e., those that “minimize the sum of error and decision costs”). Rarely is the most restrictive regulatory regime the one that optimizes outcomes, and as I have elsewhere explained, the rules set forth in the DMA hardly seem calibrated to do so.
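
The error-cost framework invoked in that parenthetical can be written out compactly. In notation of my own choosing (it appears in neither the letter nor the DMA), an optimal liability rule minimizes:

```latex
\min_{\text{rule}} \;
\underbrace{p_{\mathrm{I}} \, C_{\mathrm{I}}}_{\text{good conduct condemned}}
+ \underbrace{p_{\mathrm{II}} \, C_{\mathrm{II}}}_{\text{bad conduct permitted}}
+ \underbrace{D}_{\text{decision costs}}
```

where the first term is the expected social cost of condemning procompetitive conduct (Type I error), the second is the expected cost of permitting anticompetitive conduct (Type II error), and D is the cost of administering the rule. A maximally restrictive regime drives the second term toward zero, but only by inflating the first and third—which is why restrictiveness and optimality are different things.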

As for “innovation and entrepreneurship” in the technological arena, it would be a seismic shift indeed if the center of gravity were to migrate to Europe, which is currently home to zero of the top 20 global tech companies. (The United States hosts 12; China, eight.)

It seems implausible, though, that imposing a bunch of restrictions on large tech companies that have significant resources for innovation and are scrambling to enter each other’s markets will enhance, rather than retard, innovation. The self-preferencing bans in AICOA and DMA, for example, would prevent Apple from developing its own search engine to compete with Google, as it has apparently contemplated. Why would Apple develop its own search engine if it couldn’t preference it on iPhones and iPads? And why would Google have started its shopping service to compete with Amazon if it couldn’t preference Google Shopping in search results? And why would any platform continually improve to gain more users as it neared the thresholds for enhanced duties under DMA or AICOA? It seems more likely that the DMA/AICOA approach will hinder, rather than spur, innovation.

At the very least, wouldn’t it be prudent to wait and see whether DMA leads to a flourishing of innovation and entrepreneurship in Europe before jumping on the European bandwagon? After all, technological innovations that occur in Europe won’t be available only to Europeans. Just as Europeans benefit from innovation by U.S. firms, American consumers will be able to reap the benefits of any DMA-inspired innovation occurring in Europe. Moreover, if DMA indeed furthers innovation by making it easier for entrants to gain footing, even American technology firms could benefit from the law by launching their products in Europe. There’s no reason for the tech sector to move to Europe to take advantage of a small-business-protective European law.

In fact, the optimal outcome might be to have one jurisdiction in which major tech platforms are free to innovate, enter each other’s markets via self-preferencing, etc. (the United States, under current law) and another that is more protective of upstart businesses that use the platforms (Europe under DMA). The former jurisdiction would create favorable conditions for platform innovation and inter-platform competition; the latter might enhance innovation among businesses that rely on the platforms. Consumers in each jurisdiction, however, would benefit from innovation facilitated by the other.

It makes little sense, then, for the United States to rush to adopt European-style regulation. DMA is a radical experiment. Regulatory history suggests that the sort of restrictiveness it imposes retards, rather than furthers, innovation. But in the unlikely event that things turn out differently this time, little harm would result from waiting to see DMA’s benefits before implementing its restrictive approach. 

Does AICOA Threaten Platforms’ Ability to Moderate Content and Police Disinformation?

The authors of the pro-AICOA letter conclude by addressing the concern that AICOA “will inadvertently make content moderation difficult because some of the prohibitions could be read… to cover and therefore prohibit some varieties of content moderation” by covered platforms.

The letter authors say that a reading of AICOA to prohibit content moderation is “strained.” They maintain that the act’s requirement of “competitive harm” would prevent imposition of liability based on content moderation and that the act is “plainly not intended to cover” instances of “purported censorship.” They further contend that the risk of judicial misconstrual exists with all proposed laws and therefore should not be a sufficient reason to oppose AICOA.

Each of these points is weak. Section 3(a)(3) of AICOA makes it unlawful for a covered platform to “discriminate in the application or enforcement of the terms of service of the covered platform among similarly situated business users in a manner that would materially harm competition.” It is hardly “strained” to reason that this provision is violated when, say, Google’s YouTube selectively demonetizes a business user for content that Google deems harmful or misleading. Or when Apple removes Parler, but not every other violator of service terms, from its App Store. Such conduct could “materially harm competition” by impeding the de-platformed business’ ability to compete with its rivals.

And it is hard to say that AICOA is “plainly not intended” to forbid these acts when a key supporting senator touted the bill as a means of policing content moderation and observed during markup that it would “make some positive improvement on the problem of censorship” (i.e., content moderation) because “it would provide protections to content providers, to businesses that are discriminated against because of the content of what they produce.”

At a minimum, we should expect some state attorneys general to try to use the law to police content moderation they disfavor, and the mere prospect of such legal action could chill anti-disinformation efforts and other forms of content moderation.

Of course, there’s a simple way for Congress to eliminate the risk of what the letter authors deem judicial misconstrual: It could clarify that AICOA’s prohibitions do not cover good-faith efforts to moderate content or police disinformation. Such clarification, however, would kill the bill, as several Republican legislators are supporting the act because it restricts content moderation.

The risk of judicial misconstrual with AICOA, then, is not the sort that exists with “any law, new or old,” as the letter authors contend. “Normal” misconstrual risk exists when legislators try to be clear about their intentions but, because language has its limits, some vagueness or ambiguity persists. AICOA’s architects have deliberately obscured their intentions in order to cobble together enough supporters to get the bill across the finish line.

The one thing that all AICOA supporters can agree on is that they deserve credit for “doing something” about Big Tech. If the law is construed in a way they disfavor, they can always act shocked and blame rogue courts. That’s shoddy, cynical lawmaking.

Conclusion

So, I respectfully disagree with Professors Scott Morton, Salop, and Dinielli on AICOA. There is no urgent need to pass the bill right now, especially as we are on the cusp of seeing an AICOA-like regime put to the test. The bill’s central liability standard is overly vague, and its plain terms would break popular products and services and thwart future innovation. The United States should equate regulatory leadership with the best, not the most restrictive, policies. And Congress should thoroughly debate and clarify its intentions on content moderation before enacting legislation that could upend the status quo on that important matter.

For all these reasons, Congress should reject AICOA. And for the same reasons, a future in which AICOA is adopted is extremely unlikely to resemble the Utopian world that Professors Scott Morton, Salop, and Dinielli imagine.

On March 31, I and several other law and economics scholars filed an amicus brief in Epic Games v. Apple, which is on appeal to the U.S. Court of Appeals for the Ninth Circuit.  In this post, I summarize the central arguments of the brief, which was joined by Alden Abbott, Henry Butler, Alan Meese, Aurelien Portuese, and John Yun and prepared with the assistance of Don Falk of Schaerr Jaffe LLP.

First, some background for readers who haven’t followed the case.

Epic, maker of the popular Fortnite video game, brought antitrust challenges against two policies Apple enforces against developers of third-party apps that run on iOS, the mobile operating system for Apple’s popular iPhones and iPads.  One policy requires that all iOS apps be distributed through Apple’s own App Store.  The other requires that any purchases of digital goods made while using an iOS app utilize Apple’s In-App Purchase system (IAP).  Apple collects a share of the revenue from sales made through its App Store and using IAP, so these two policies provide a way for it to monetize its innovative app platform.

Epic maintains that Apple’s app policies violate the federal antitrust laws.  Following a trial, the district court disagreed, though it condemned another of Apple’s policies under California state law.  Epic has appealed the antitrust rulings against it. 

My fellow amici and I submitted our brief in support of Apple to draw the Ninth Circuit’s attention to a distinction that is crucial to ensuring that antitrust promotes long-term consumer welfare: the distinction between the mere extraction of surplus through the exercise of market power and the enhancement of market power via the weakening of competitive constraints.

The central claim of our brief is that Epic’s antitrust challenges to Apple’s app store policies should fail because Epic has not shown that the policies enhance Apple’s market power in any market.  Moreover, condemnation of the practices would likely induce Apple to use its legitimately obtained market power to extract surplus in a different way that would leave consumers worse off than they are under the status quo.   

Mere Surplus Extraction vs. Market Power Extension

As the Supreme Court has observed, “Congress designed the Sherman Act as a ‘consumer welfare prescription.’”  The Act endeavors to protect consumers from harm resulting from “market power,” which is the ability of a firm lacking competitive constraints to enhance its profits by reducing its output—either quantitatively or qualitatively—from the level that would persist if the firm faced vigorous competition.  A monopolist, for example, might cut back on the quantity it produces (to drive up market price) or it might skimp on quality (to enhance its per-unit profit margin).  A firm facing vigorous competition, by contrast, couldn’t raise market price simply by reducing its own production, and it would lose significant sales to rivals if it raised its own price or unilaterally cut back on product quality.  Market power thus stems from deficient competition.

As Dennis Carlton and Ken Heyer have observed, two different types of market power-related business behavior may injure consumers and are thus candidates for antitrust prohibition.  One is an exercise of market power: an action whereby a firm lacking competitive constraints increases its returns by constricting its output so as to raise price or otherwise earn higher profit margins.  When a firm engages in this sort of conduct, it extracts a greater proportion of the wealth, or “surplus,” generated by its transactions with its customers.

Every voluntary transaction between a buyer and seller creates surplus, which is the difference between the subjective value the consumer attaches to an item produced and the cost of producing and distributing it.  Price and other contract terms determine how that surplus is allocated between the buyer and the seller.  When a firm lacking competitive constraints exercises its market power by, say, raising price, it extracts for itself a greater proportion of the surplus generated by its sale.
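
In symbols (the notation is mine): if a consumer values an item at v, producing and distributing it costs c, and the sale occurs at price p, then

```latex
\underbrace{v - c}_{\text{total surplus}}
= \underbrace{(v - p)}_{\text{consumer surplus}}
+ \underbrace{(p - c)}_{\text{producer surplus}}
```

Raising p moves surplus from buyer to seller but, for any transaction that still occurs, neither creates new surplus nor destroys any.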

The other sort of market power-related business behavior involves an effort by a firm to enhance its market power by weakening competitive constraints.  For example, when a firm engages in unreasonably exclusionary conduct that drives its rivals from the market or increases their costs so as to render them less formidable competitors, its market power grows.

U.S. antitrust law treats these two types of market power-related conduct differently.  It forbids behavior that enhances market power and injures consumers, but it permits actions that merely exercise legitimately obtained market power without somehow enhancing it.  For example, while charging a monopoly price creates immediate consumer harm by extracting for the monopolist a greater share of the surplus created by the transaction, the Supreme Court observed in Trinko that “[t]he mere possession of monopoly power, and the concomitant charging of monopoly prices, is not . . . unlawful.”  (See also linkLine: “Simply possessing monopoly power and charging monopoly prices does not violate [Sherman Act] § 2….”)

Courts have similarly refused to condemn mere exercises of market power in cases involving surplus-extractive arrangements more complicated than simple monopoly pricing.  For example, in its Independent Ink decision, the U.S. Supreme Court expressly declined to adopt a rule that would have effectively banned “metering” tie-ins.

In a metering tie-in, a seller with market power over some unique product that is used with a competitively supplied complement that is consumed in varying amounts—say, a highly differentiated printer that uses standard ink—reduces the price of its unique product (the printer), requires buyers to also purchase from it their requirements of the complement (the ink), and then charges a supracompetitive price for the latter product.  This allows the seller to charge higher effective prices to high-volume users of its unique tying product (buyers who use lots of ink) and lower prices to lower-volume users.

Assuming buyers’ use of the unique product correlates with the value they ascribe to it, a metering tie-in allows the seller to price discriminate, charging higher prices to buyers who value its unique product more.  This allows the seller to extract more of the surplus generated by sales of its product, but it in no way extends the seller’s market power.
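
A minimal numerical sketch of the metering logic (every price and volume below is invented):

```python
# Toy metering tie-in: the unique product (printer) is priced near cost, and
# surplus is extracted through a supracompetitive price on the tied complement
# (ink). All numbers are invented for illustration.

PRINTER_PRICE = 100    # at or near cost
INK_PRICE = 30         # tied price per cartridge
INK_COMPETITIVE = 10   # what ink would cost in a competitive market

def yearly_outlay(cartridges):
    """Total first-year outlay and the portion extracted via the ink markup."""
    extracted = (INK_PRICE - INK_COMPETITIVE) * cartridges
    return PRINTER_PRICE + INK_PRICE * cartridges, extracted

for user, cartridges in [("light user", 5), ("heavy user", 50)]:
    total, extracted = yearly_outlay(cartridges)
    print(f"{user}: pays {total}; {extracted} of that is metered surplus")

# light user: pays 250; 100 of that is metered surplus
# heavy user: pays 1600; 1000 of that is metered surplus
```

The heavy user—who presumably values the printer more—pays a far higher effective price for it, yet nothing about the scheme gives the seller any new power in either the printer or the ink market.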

In refusing to adopt a rule that would have condemned most metering tie-ins, the Independent Ink Court observed that “it is generally recognized that [price discrimination] . . . occurs in fully competitive markets” and that tying arrangements involving requirements ties may be “fully consistent with a free, competitive market.” The Court thus reasoned that mere price discrimination and surplus extraction, even when accomplished through some sort of contractual arrangement like a tie-in, are not by themselves anticompetitive harms warranting antitrust’s condemnation.    

The Ninth Circuit has similarly recognized that conduct that exercises market power to extract surplus but does not somehow enhance that power does not create antitrust liability.  In Qualcomm, the court refused to condemn the chipmaker’s “no license, no chips” policy, which enabled it to enhance its profits by earning royalties on original equipment manufacturers’ sales of their high-priced products.

In reversing the district court’s judgment in favor of the FTC, the Ninth Circuit conceded that Qualcomm’s policies were novel and that they allowed it to enhance its profits by extracting greater surplus.  The court refused to condemn the policies, however, because they did not injure competition by weakening competitive constraints:

This is not to say that Qualcomm’s “no license, no chips” policy is not “unique in the industry” (it is), or that the policy is not designed to maximize Qualcomm’s profits (Qualcomm has admitted as much). But profit-seeking behavior alone is insufficient to establish antitrust liability. As the Supreme Court stated in Trinko, the opportunity to charge monopoly prices “is an important element of the free-market system” and “is what attracts ‘business acumen’ in the first place; it induces risk taking that produces innovation and economic growth.”

The Qualcomm court’s reference to Trinko highlights one reason courts should not condemn exercises of market power that merely extract surplus without enhancing market power: allowing such surplus extraction furthers dynamic efficiency—welfare gain that accrues over time from the development of new and improved products and services.

Dynamic efficiency results from innovation, which entails costs and risks.  Firms are more willing to incur those costs and risks if their potential payoff is higher, and an innovative firm’s ability to earn supracompetitive profits off its “better mousetrap” enhances its payoff. 

Allowing innovators to extract such profits also helps address the fact that most of the benefits of product innovation inure to people other than the innovator.  Private actors often engage in suboptimal levels of behaviors that produce such benefit spillovers, or “positive externalities,” because they bear all the costs of those behaviors but capture just a fraction of the benefit produced.  By enhancing the benefits innovators capture from their innovative efforts, allowing non-power-enhancing surplus extraction helps generate a closer-to-optimal level of innovative activity.

Not only do supracompetitive profits extracted through the exercise of legitimately obtained market power motivate innovation, they also enable it by helping to fund innovative efforts.  Whereas businesses that are forced by competition to charge prices near their incremental cost must secure external funding for significant research and development (R&D) efforts, firms collecting supracompetitive returns can finance R&D internally.  Indeed, of the top fifteen global spenders on R&D in 2018, eleven were either technology firms accused of possessing monopoly power (#1 Amazon, #2 Alphabet/Google, #5 Intel, #6 Microsoft, #7 Apple, and #14 Facebook) or pharmaceutical companies whose patent protections insulate their products from competition and enable supracompetitive pricing (#8 Roche, #9 Johnson & Johnson, #10 Merck, #12 Novartis, and #15 Pfizer).

In addition to fostering dynamic efficiency by motivating and enabling innovative efforts, a policy acquitting non-power-enhancing exercises of market power allows courts to avoid an intractable question: which instances of mere surplus extraction should be precluded?

Precluding all instances of surplus extraction by firms with market power would conflict with precedents like Trinko and linkLine (which say that legitimate monopolists may legally charge monopoly prices) and would be impracticable given the ubiquity of above-cost pricing in niche and brand-differentiated markets.

A rule precluding surplus extraction when accomplished by a practice more complicated than simple monopoly pricing—say, some practice that allows price discrimination against buyers who highly value a product—would be both arbitrary and backward.  The rule would be arbitrary because allowing supracompetitive profits from legitimately obtained market power motivates and enables innovation regardless of the means used to extract surplus. The rule would be backward because, while simple monopoly pricing always reduces overall market output (as output-reduction is the very means by which the producer causes price to rise), more complicated methods of extracting surplus, such as metering tie-ins, often enhance market output and overall social welfare.

A third possibility would be to preclude exercising market power to extract more surplus than is necessary to motivate and enable innovation.  That position, however, would require courts to determine how much surplus extraction is required to induce innovative efforts.  Courts are poorly positioned to perform such a task, and their inevitable mistakes could significantly chill entrepreneurial activity.

Consider, for example, a firm contemplating a $5 million investment that might return up to $50 million.  Suppose the managers of the firm weighed expected costs and benefits and decided the risky gamble was just worth taking.  If the gamble paid off but a court stepped in and capped the firm’s returns at $20 million—a seemingly generous quadrupling of the firm’s investment—future firms in the same position would not make similar investments.  After all, the firm here thought this gamble was just barely worth taking, given the high risk of failure, when available returns were $50 million.
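
The arithmetic behind that intuition (the success probability below is invented, chosen so the bet barely breaks even):

```python
# Why capping ex post returns chills marginal ex ante investments.
# The success probability is an assumption for illustration only.

investment = 5_000_000
p_success = 0.11   # assumed probability the gamble pays off

def expected_profit(payoff_if_success):
    return p_success * payoff_if_success - investment

print(expected_profit(50_000_000))  # ~ +500,000: just barely worth taking
print(expected_profit(20_000_000))  # ~ -2,800,000: never undertaken under the cap
```

With the full $50 million payoff available, the expected profit is barely positive and the investment happens; with returns capped at $20 million, it is sharply negative and the investment never occurs.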

In the end, then, the best policy is to draw the line as both the U.S. Supreme Court and the Ninth Circuit have done: Whereas enhancements of market power are forbidden, merely exercising legitimately obtained market power to extract surplus is permitted.

Apple’s Policies Do Not Enhance Its Market Power

Under the legal approach described above, the two Apple policies Epic has challenged do not give rise to antitrust liability.  While the policies may boost Apple’s profits by facilitating its extraction of surplus from app transactions on its mobile devices, they do not enhance Apple’s market power in any conceivable market.

As the creator and custodian of the iOS operating system, Apple has the ability to control which applications will run on its iPhones and iPads.  Developers cannot produce operable iOS apps unless Apple grants them access to the Application Programming Interfaces (APIs) required to enable the functionality of the operating system and hardware. In addition, Apple can require developers to obtain digital certificates that will enable their iOS apps to operate.  As the district court observed, “no certificate means the code will not run.”

Because Apple controls which apps will work on the operating system it created and maintains, Apple could collect the same proportion of surplus it currently extracts from iOS app sales and in-app purchases on iOS apps even without the policies Epic is challenging.  It could simply withhold access to the APIs or digital certificates needed to run iOS apps unless developers promised to pay it 30% of their revenues from app sales and in-app purchases of digital goods.

This means that the challenged policies do not give Apple any power it doesn’t already possess in the putative markets Epic identified: the markets for “iOS app distribution” and “iOS in-app payment processing.” 

The district court rejected those market definitions on the ground that Epic had not established cognizable aftermarkets for iOS-specific services.  It defined the relevant market instead as “mobile gaming transactions.”  But no matter.  The challenged policies would not enhance Apple’s market power in that broader market either.

In “mobile gaming transactions” involving non-iOS (e.g., Android) mobile apps, Apple’s policies give it no power at all.  Apple doesn’t distribute non-iOS apps or process in-app payments on such apps.  Moreover, even if Apple were to begin doing so—say, by distributing Android apps in its App Store or allowing producers of Android apps to include IAP as their in-app payment system—it is implausible that Apple’s policies would allow it to gain new market power.  There are giant, formidable competitors in non-iOS app distribution (e.g., Google’s Play Store) and in payment processing for non-iOS in-app purchases (e.g., Google Play Billing).  It is inconceivable that Apple’s policies would allow it to usurp so much scale from those rivals that Apple could gain market power over non-iOS mobile gaming transactions.

That leaves only the iOS segment of the mobile gaming transactions market.  And, as we have just seen, Apple’s policies give it no new power to extract surplus from those transactions; because it controls access to iOS, it could do so using other means.

Nor do the challenged policies enable Apple to maintain its market power in any conceivable market.  This is not a situation like Microsoft where a firm in a market adjacent to a monopolist’s could somehow pose a challenge to that monopolist, and the monopolist nips the potential competition in the bud by reducing the potential rival’s scale.  There is no evidence in the record to support the (implausible) notion that rival iOS app stores or in-app payment processing systems could ever evolve in a manner that would pose a challenge to Apple’s position in mobile devices, mobile operating systems, or any other market in which it conceivably has market power. 

Epic might retort that but for the challenged policies, rivals could challenge Apple’s market share in iOS app distribution and in-app purchase processing.  Rivals could not, however, challenge Apple’s market power in such markets, as that power stems from its control of iOS.  The challenged policies therefore do not enable Apple to shore up any existing market power.

Alternative Means of Extracting Surplus Would Likely Reduce Consumer Welfare

Because the policies Epic has challenged are not the source of Apple’s ability to extract surplus from iOS app transactions, judicial condemnation of the policies would likely induce Apple to extract surplus using different means.  Changing how it earns profits off iOS app usage, however, would likely leave consumers worse off than they are under the status quo.

Apple could simply charge third-party app developers a flat fee for access to the APIs needed to produce operable iOS apps but then allow them to distribute their apps and process in-app payments however they choose.  Such an approach would allow Apple to monetize its innovative app platform while permitting competition among providers of iOS app distribution and in-app payment processing services.  Relative to the status quo, though, such a model would likely reduce consumer welfare by:

  • Reducing the number of free and niche apps, as app developers could no longer avoid a fee to Apple by adopting a free (likely ad-supported) business model, and producers of niche apps may not generate enough revenue to justify Apple’s flat fee;
  • Raising business risks for app developers, who, if Apple cannot earn incremental revenue off sales and use of their apps, may face a greater likelihood that the functionality of those apps will be incorporated into future versions of iOS;
  • Reducing Apple’s incentive to improve iOS and its mobile devices, as eliminating Apple’s incremental revenue from app usage reduces its motivation to make costly enhancements that keep users on their iPhones and iPads;
  • Raising the price of iPhones and iPads and generating deadweight loss, as Apple could no longer charge higher effective prices to people who use apps more heavily and would thus likely hike up its device prices, driving marginal consumers from the market; and
  • Reducing user privacy and security, as jettisoning a closed app distribution model (App Store only) would impair Apple’s ability to screen iOS apps for features and bugs that create security and privacy risks.

An alternative approach—one that would avoid many of the downsides just stated by allowing Apple to continue earning incremental revenue off iOS app usage—would be for Apple to charge app developers a revenue-based fee for access to the APIs and other amenities needed to produce operable iOS apps.  That approach, however, would create other costs that would likely leave consumers worse off than they are under the status quo.

The policies Epic has challenged allow Apple to collect a share of revenues from iOS app transactions immediately at the point of sale.  Replacing those policies with a revenue-based  API license system would require Apple to incur additional costs of collecting revenues and ensuring that app developers are accurately reporting them.  In order to extract the same surplus it currently collects—and to which it is entitled given its legitimately obtained market power—Apple would have to raise its revenue-sharing percentage above its current commission rate to cover its added collection and auditing costs.

The fact that Apple has elected not to adopt this alternative means of collecting the revenues to which it is entitled suggests that the added costs of moving to the alternative approach (extra collection and auditing costs) would exceed any additional consumer benefit such a move would produce.  Because Apple can collect the same revenue percentage from app transactions two different ways, it has an incentive to select the approach that maximizes iOS app transaction revenues.  That is the approach that creates the greatest value for consumers and also for Apple. 

If Apple believed that the benefits to app users of competition in app distribution and in-app payment processing would exceed the extra costs of collection and auditing, it would have every incentive to switch to a revenue-based licensing regime and increase its revenue share enough to cover its added collection and auditing costs.  As such an approach would enhance the net value consumers receive when buying apps and making in-app purchases, it would raise overall app revenues, boosting Apple’s bottom line.  The fact that Apple has not gone in this direction, then, suggests that it does not believe consumers would receive greater benefit under the alternative system.  Apple might be wrong, of course.  But it has a strong motivation to make the consumer welfare-enhancing decision here, as doing so maximizes its own profits.

The policies Epic has challenged do not enhance or shore up Apple’s market power, a salutary prerequisite to antitrust liability.  Furthermore, condemning the policies would likely lead Apple to monetize its innovative app platform in a manner that would reduce consumer welfare relative to the status quo.  The Ninth Circuit should therefore affirm the district court’s rejection of Epic’s antitrust claims.

Bad Blood at the FTC

Thom Lambert —  9 June 2021

John Carreyrou’s marvelous book Bad Blood chronicles the rise and fall of Theranos, the one-time Silicon Valley darling that was revealed to be a house of cards.[1] Theranos’s Svengali-like founder, Elizabeth Holmes, convinced scores of savvy business people (mainly older men) that her company was developing a machine that could detect all manner of maladies from a small quantity of a patient’s blood. Turns out it was a fraud. 

I had a couple of recurring thoughts as I read Bad Blood. First, I kept thinking about how Holmes’s fraud might impair future medical innovation. Something like Theranos’s machine would eventually be developed, I figured, but Holmes’s fraud would likely set things back by making investors leery of blood-based, multi-disease diagnostics.

I also had a thought about the causes of Theranos’s spectacular failure. A key problem, it seemed, was that the company tried to do too many things at once: develop diagnostic technologies, design an elegant machine (Holmes was obsessed with Steve Jobs and insisted that Theranos’s machine resemble a sleek Apple device), market the product, obtain regulatory approval, scale the operation by getting Theranos machines in retail chains like Safeway and Walgreens, and secure third-party payment from insurers.

A thought that didn’t occur to me while reading Bad Blood was that a multi-disease blood diagnostic system would soon be developed but would be delayed, or possibly even precluded from getting to market, by an antitrust enforcement action based on things the developers did to avoid the very problems that doomed Theranos. 

Sadly, that’s where we are with the Federal Trade Commission’s misguided challenge to the merger of Illumina and Grail.

Founded in 1998, San Diego-based Illumina is a leading provider of products used in genetic sequencing and genomic analysis. Illumina produces “next generation sequencing” (NGS) platforms that are used for a wide array of applications (genetic tests, etc.) developed by itself and other companies.

In 2015, Illumina founded Grail for the purpose of developing a blood test that could detect cancer in asymptomatic individuals—the “holy grail” of cancer diagnosis. Given the superior efficacy and lower cost of treatments for early- versus late-stage cancers, success by Grail could save millions of lives and billions of dollars.

Illumina created Grail as a separate entity in which it initially held a controlling interest (having provided the bulk of Grail’s $100 million Series A funding). Legally separating Grail in this fashion, rather than running it as an Illumina division, offered a number of benefits. It limited Illumina’s liability for Grail’s activities, enabling Grail to take greater risks. It mitigated the Theranos problem of managers’ being distracted by too many tasks: Grail managers could concentrate exclusively on developing a viable cancer-screening test, while Illumina’s management continued focusing on that company’s core business. It made it easier for Grail to attract talented managers, who would rather come in as corporate officers than as division heads. (Indeed, Grail landed Jeff Huber, a high-profile Google executive, as its initial CEO.) Structuring Grail as a majority-owned subsidiary also allowed Illumina to attract outside capital, with the prospect of raising more money in the future by selling new Grail stock to investors.

In 2017, Grail did exactly that, issuing new shares to investors in exchange for $1 billion. While this capital infusion enabled the company to move forward with its promising technologies, the creation of new shares meant that Illumina no longer held a controlling interest in the firm. Its ownership interest dipped below 20 percent and now stands at about 14.5 percent of Grail’s voting shares.  

Setting up Grail so as to facilitate outside capital formation and attract top managers who could focus single-mindedly on product development has paid off. Grail has now developed a blood test that, when processed on Illumina’s NGS platform, can accurately detect a number of cancers in asymptomatic individuals. Grail predicts that this “liquid biopsy,” called Galleri, will eventually be able to detect up to 50 cancers before physical symptoms manifest. Grail is also developing other blood-based cancer tests, including one that confirms cancer diagnoses in patients suspected to have cancer and another designed to detect cancer recurrence in patients who have undergone treatment.

Grail now faces a host of new challenges. In addition to continuing to develop its tests, Grail needs to:  

  • Engage in widespread testing of its cancer-detection products on up to 50 different cancers;
  • Process and present the information from its extensive testing in formats that will be acceptable to regulators;
  • Navigate the pre-market regulatory approval process in different countries across the globe;
  • Secure commitments from third-party payors (governments and private insurers) to provide coverage for its tests;
  • Develop means of manufacturing its products at scale;
  • Create and implement measures to ensure compliance with FDA’s Quality System Regulation (QSR), which governs virtually all aspects of medical device production (design, testing, production, process controls, quality assurance, labeling, packaging, handling, storage, distribution, installation, servicing, and shipping); and
  • Market its tests to hospitals and health-care professionals.

These steps are all required to secure widespread use of Grail’s tests. And, importantly, such widespread use will actually improve the quality of the tests. Grail’s tests analyze the DNA in a patient’s blood to look for methylation patterns that are known to be associated with cancer. In essence, the tests work by comparing the methylation patterns in a test subject’s DNA against a database of genomic data collected from large clinical studies. With enough comparison data, the tests can indicate not only the presence of cancer but also where in the body the cancer signal is coming from. And because Grail’s tests use machine learning to hone their algorithms in response to new data collected from test usage, the greater the use of Grail’s tests, the more accurate, sensitive, and comprehensive they become.     
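
For the technically curious, here is a deliberately toy sketch of the general pattern described above—classifying a sample by comparing its methylation profile against reference profiles aggregated from clinical data. This is my own illustration of the generic compare-against-a-database idea, not Grail’s actual algorithm; the markers, numbers, and labels are all invented.

```python
# Toy nearest-profile classifier over invented "methylation" feature vectors.
# Illustrates the compare-against-a-database idea only; not Grail's method.
import math

# Hypothetical reference profiles: average signal per methylation marker,
# aggregated (in this toy) from clinical studies. More usage data means
# better-estimated profiles -- the feedback loop described above.
reference_profiles = {
    "no cancer signal": [0.10, 0.12, 0.08],
    "lung signal":      [0.70, 0.15, 0.10],
    "colon signal":     [0.12, 0.65, 0.60],
}

def classify(sample):
    """Return the label of the reference profile nearest the sample."""
    return min(reference_profiles,
               key=lambda label: math.dist(sample, reference_profiles[label]))

print(classify([0.68, 0.14, 0.12]))  # -> "lung signal"
```

In this toy version, each newly classified sample could be folded back into the reference profiles, which is the analogue of the point above: the more the tests are used, the better the database the comparison runs against.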

To assist with the various tasks needed to achieve speedy and widespread use of its tests, Grail decided to reunite with Illumina. In September 2020, the companies entered a merger agreement under which Illumina would acquire the 85.5 percent of Grail voting shares it does not already own for cash and stock worth $7.1 billion and additional contingent payments of $1.2 billion to Grail’s non-Illumina shareholders.

Recombining with Illumina will allow Grail—which has appropriately focused heretofore solely on product development—to accomplish the tasks now required to get its tests to market. Illumina has substantial laboratory capacity that Grail can access to complete the testing needed to refine its products and establish their effectiveness. As the leading global producer of NGS platforms, Illumina has unparalleled experience in navigating the regulatory process for NGS-related products, producing and marketing those products at scale, and maintaining compliance with complex regulations like FDA’s QSR. With nearly 3,000 international employees located in 26 countries, it has obtained regulatory authorizations for NGS-based tests in more than 50 jurisdictions around the world.  It also has long-standing relationships with third-party payors, health systems, and laboratory customers. Grail, by contrast, has never obtained FDA approval for any products, has never manufactured NGS-based tests at scale, has only a fledgling regulatory affairs team, and has far less extensive contacts with potential payors and customers. By remaining focused on its key objective (unlike Theranos), Grail has achieved product-development success. Recombining with Illumina will now enable it, expeditiously and efficiently, to deploy its products across the globe, generating user data that will help improve the products going forward.

In addition to these benefits, the combination of Illumina and Grail will eliminate a problem that occurs when producers of complementary products each operate in markets that are not fully competitive: double marginalization. When sellers of products that are used together each possess some market power due to a lack of competition, their uncoordinated pricing decisions may result in less surplus for each of them and for consumers of their products. Combining so that they can coordinate pricing will leave them and their customers better off.

Unlike a producer participating in a competitive market, a producer that faces little competition can enhance its profits by raising its price above its incremental cost.[2] But there are limits on its ability to do so. As the well-known monopoly pricing model shows, even a monopolist has a “profit-maximizing price” beyond which any incremental price increase would lose money.[3] Raising price above that level would hurt both consumers and the monopolist.

When consumers are deciding whether to purchase products that must be used together, they assess the final price of the overall bundle. This means that when two sellers of complementary products both have market power, there is an above-cost, profit-maximizing combined price for their products. If the complement sellers individually raise their prices so that the combined price exceeds that level, they will reduce their own aggregate welfare and that of their customers.

This unfortunate situation is likely to occur when market power-possessing complement producers are separate companies that cannot coordinate their pricing. In setting its individual price, each separate firm will attempt to capture as much surplus for itself as possible. This will cause the combined price to rise above the profit-maximizing level. If they could unite, the complement sellers would coordinate their prices so that the combined price was lower and the sellers’ aggregate profits higher.

Here, Grail and Illumina provide complementary products (cancer-detection tests and the NGS platforms on which they are processed), and each faces little competition. If they price separately, their aggregate prices are likely to exceed the profit-maximizing combined price for the cancer test and NGS platform access. If they combine into a single firm, that firm would maximize its profits by lowering prices so that the aggregate test/platform price is the profit-maximizing combined price.  This would obviously benefit consumers.
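
A quick numerical sketch can make this concrete. The following Python snippet compares separate pricing by two complement monopolists with pricing by a single merged firm, assuming linear demand for the bundle; all parameter values are hypothetical and chosen only for illustration:

```python
# A minimal sketch of double marginalization, assuming linear demand for
# the bundle: Q = a - b * (p1 + p2). All parameter values are hypothetical.

a, b = 100.0, 1.0    # demand intercept and slope (hypothetical)
c1, c2 = 10.0, 10.0  # marginal costs of the two complement producers

# Merged firm: choose the combined price P to maximize (P - c1 - c2) * (a - b * P).
# The first-order condition gives P* = (a/b + c1 + c2) / 2.
P_merged = (a / b + c1 + c2) / 2
Q_merged = a - b * P_merged
profit_merged = (P_merged - c1 - c2) * Q_merged

# Separate firms: each picks its own price, taking the other's as given.
# Solving the two best-response conditions simultaneously yields
# p1 = (a/b + 2*c1 - c2) / 3, and symmetrically for p2.
p1 = (a / b + 2 * c1 - c2) / 3
p2 = (a / b + 2 * c2 - c1) / 3
Q_sep = a - b * (p1 + p2)
profit_sep = (p1 - c1) * Q_sep + (p2 - c2) * Q_sep

print(f"Separate pricing: combined price {p1 + p2:.2f}, total profit {profit_sep:.2f}")
print(f"Merged pricing:   combined price {P_merged:.2f}, total profit {profit_merged:.2f}")
# Separate pricing: combined price 73.33, total profit 1422.22
# Merged pricing:   combined price 60.00, total profit 1600.00
```

Under these assumed parameters, the merged firm charges a lower combined price (60 versus roughly 73) and earns higher total profits (1,600 versus roughly 1,422). That is the double-marginalization point in miniature.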

In light of the social benefits the Grail/Illumina merger offers—speeding up and lowering the cost of getting Grail’s test approved and deployed at scale, enabling improvement of the test with more extensive user data, eliminating double marginalization—one might expect policymakers to cheer the companies’ recombination. The FTC, however, is trying to block it.  In late March, the commission brought an action claiming that the merger would violate Section 7 of the Clayton Act by substantially reducing competition in a line of commerce.

The FTC’s theory is that recombining Illumina and Grail will impair competition in the market for “multi-cancer early detection” (MCED) tests. The commission asserts that the combined company would have both the opportunity and the motivation to injure rival producers of MCED tests.

The opportunity to do so would stem from the fact that MCED tests must be processed on NGS platforms, which are produced exclusively by Illumina. Illumina could charge Grail’s rivals or their customers higher prices for access to its NGS platforms (or perhaps deny access altogether) and could withhold the technical assistance rivals would need to secure both regulatory approval of their tests and coverage by third-party payors.

But why would Illumina take this tack, given that it would be giving up profits on transactions with producers and users of other MCED tests? The commission asserts that the losses a combined Illumina/Grail would suffer in the NGS platform market would be more than offset by gains stemming from reduced competition in the MCED test market. Thus, the combined company would have a motive, as well as an opportunity, to cause anticompetitive harm.

There are multiple problems with the FTC’s theory. As an initial matter, the market the commission claims will be impaired doesn’t exist. There is no MCED test market for the simple reason that there are no commercializable MCED tests. If allowed to proceed, the Illumina/Grail merger may create such a market by facilitating the approval and deployment of the first MCED test. At present, however, there is no such market, and the chances of one ever emerging will be diminished if the FTC succeeds in blocking the recombination of Illumina and Grail.

Because there is no existing market for MCED tests, the FTC’s claim that a combined Illumina/Grail would have a motivation to injure MCED rivals—potential consumers of Illumina’s NGS platforms—is rank speculation. The commission has no idea what profits Illumina would earn from NGS platform sales related to MCED tests, what profits Grail would earn on its own MCED tests, or how the total profits of the combined company would be affected by impairing opportunities for rival MCED test producers.

In the only relevant market that does exist—the cancer-detection market—there can be no question about the competitive effect of an Illumina/Grail merger: It would enhance competition by speeding the creation of a far superior offering that promises to save lives and substantially reduce health-care costs. 

There is yet another problem with the FTC’s theory of anticompetitive harm. The commission’s concern that a recombined Illumina/Grail would foreclose Grail’s rivals from essential NGS platforms and needed technical assistance is obviated by Illumina’s commitments. Specifically, Illumina has irrevocably offered current and prospective oncology customers 12-year contract terms that would guarantee them the same access to Illumina’s sequencing products that they now enjoy, with no price increase. Indeed, the offered terms obligate Illumina not only to refrain from raising prices but also to lower them by at least 43% by 2025 and to provide regulatory and technical assistance requested by Grail’s potential rivals. Illumina’s continued compliance with its firm offer will be subject to regular audits by an independent auditor.

In the end, then, the FTC’s challenge to the Illumina/Grail merger is unjustified. The initial separation of Grail from Illumina encouraged the managerial focus and capital accumulation needed for successful test development. Recombining the two firms will now expedite and lower the costs of the regulatory approval and commercialization processes, permitting Grail’s tests to be widely used, which will enhance their quality. Bringing Grail’s tests and Illumina’s NGS platforms within a single company will also benefit consumers by eliminating double marginalization. Any foreclosure concerns are entirely speculative and are obviated by Illumina’s contractual commitments.

In light of all these considerations, one wonders why the FTC challenged this merger (and on a 4-0 vote) in the first place. Perhaps it was the populist forces from left and right that are pressuring the commission to generally be more aggressive in policing mergers. Some members of the commission may also worry, legitimately, that if they don’t act aggressively on a vertical merger, Congress will amend the antitrust laws in a deleterious fashion. But the commission has picked a poor target. This particular merger promises tremendous benefit and threatens little harm. The FTC should drop its challenge and encourage its European counterparts to do the same. 


[1] If you don’t have time for Carreyrou’s book (and you should make time if you can), HBO’s Theranos documentary is pretty solid.

[2] This ability is market power.  In a perfectly competitive market, any firm that charges an above-cost price will lose sales to rivals, who will vie for business by lowering their prices down to the level of their cost.

[3] Under the model, this is the price that emerges at the output level where the producer’s marginal revenue equals its marginal cost.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

On October 20, 2020, the U.S. Department of Justice (DOJ) and eleven states with Republican attorneys general sued Google for monopolizing and attempting to monopolize the markets for general internet search services, search advertising, and “general search text” advertising (i.e., ads that resemble search results).  Last week, California joined the lawsuit, making it a bipartisan affair.

DOJ and the states (collectively, “the government”) allege that Google has used contractual arrangements to expand and cement its dominance in the relevant markets.  In particular, the government complains that Google has agreed to share search ad revenues in exchange for making Google Search the default search engine on various “search access points.” 

Google has entered such agreements with Apple (for search on iPhones and iPads), manufacturers of Android devices and the mobile service carriers that support them, and producers of web browsers.  Google is also pursuing default status on new internet-enabled consumer products, such as voice assistants and “smart” TVs, appliances, and wearables.  In the government’s telling, this all amounts to Google’s sharing of monopoly profits with firms that can ensure its continued monopoly by imposing search defaults that users are unlikely to alter.

There are several obvious weaknesses with the government’s case.  One is that preset internet defaults are super easy to change and, in other contexts, are regularly altered.  For example, while 88% of desktop and laptop computers use the Windows operating system, which defaults to a Microsoft browser (Internet Explorer or Edge), Google’s Chrome browser commands a 69% market share on desktops and laptops, compared to around 13% for Internet Explorer and Edge combined.  Changing a default search engine is as easy as changing a browser default—three simple steps on an iPhone!—and it seems consumers will change defaults they don’t actually prefer.

A second obvious weakness, related to the first, is that the government has alleged no facts suggesting that Google’s search rivals—primarily Bing, Yahoo, and DuckDuckGo—would have enjoyed more success but for Google’s purportedly exclusionary agreements.  Even absent default status, people likely would have selected Google Search because it’s the better search engine.  It doesn’t seem the challenged arrangements caused Google’s search dominance.

Admittedly, the standard of causation in monopolization cases (at least those seeking only injunctive relief) is low.  The D.C. Circuit’s Microsoft decision described it as “edentulous” or, less pretentiously, toothless.  Nevertheless, the government is unlikely to prevail in its action against Google—and that’s a good thing.  Below, I highlight the central deficiency in the government’s Google case and point out problems with the government’s challenges to each of Google’s purportedly exclusionary arrangements.   

The Lawsuit’s Overarching Deficiency

We’ve all had the experience of typing a query only to have Google, within a few keystrokes, accurately predict what we were going to ask and provide us with exactly the answer we were looking for.  It’s both eerie and awesome, and it keeps us returning to Google time and again.

But it’s not magic.  Nor has Google hacked our brains.  Google is so good at predicting our questions and providing responsive search results because its top-notch algorithms process gazillions of searches and can “learn” from users’ engagement.  Scale is thus essential to Google’s quality. 

The government’s complaint concedes as much.  It acknowledges that “[g]reater scale improves the quality of a general search engine’s algorithms” (¶35) and that “[t]he additional data from scale allows improved automated learning for algorithms to deliver more relevant results, particularly on ‘fresh’ queries (queries seeking recent information), location-based queries (queries asking about something in the searcher’s vicinity), and ‘long-tail’ queries (queries used infrequently)” (¶36). The complaint also asserts that “[t]he most effective way to achieve scale is for the general search engine to be the preset default on mobile devices, computers, and other devices…” (¶38).

Oddly, though, the government chides Google for pursuing “[t]he most effective way” of securing the scale that concededly “improves the quality of a general search engine’s algorithms.”  Google’s efforts to ensure and enhance its own product quality are improper, the government says, because “they deny rivals scale to compete effectively” (¶8).  In the government’s view, Google is legally obligated to forgo opportunities to make its own product better so as to give its rivals a chance to improve their own offerings.

This is inconsistent with U.S. antitrust law.  Just as firms are not required to hold their prices high to create a price umbrella for their less efficient rivals, they need not refrain from efforts to improve the quality of their own offerings so as to give their rivals a foothold. 

Antitrust does forbid anticompetitive foreclosure of rivals—i.e., business-usurping arrangements that are not the result of efforts to compete on the merits by reducing cost or enhancing quality.  But firms are, and should be, free to make their products better, even if doing so makes things more difficult for their rivals.  Antitrust, after all, protects competition, not competitors.    

The central deficiency in the government’s case is that it concedes that scale is crucial to search engine quality, but it does not assert that there is a “minimum efficient scale”—i.e., a point at which scale economies are exhausted.  If a firm takes actions to enhance its own scale beyond minimum efficient scale, and if its efforts may hold its rivals below such scale, then it may have engaged in anticompetitive foreclosure.  But a firm that pursues scale that makes its products better is simply competing on the merits.

The government likely did not allege that there is a minimum efficient scale in general internet search services because returns to scale go on indefinitely, or at least for a very long time.  But the absence of such an allegation damns the government’s case against Google, for it implies that Google’s efforts to secure the distribution, and thus the greater use, of its services make those services better.

In this regard, the Microsoft case, which the government points to as a model for its action against Google (¶10), is inapposite.  In that case, the government alleged that Microsoft had entered license agreements that foreclosed Netscape, a potential rival, from the best avenues of browser distribution: original equipment manufacturers (OEMs) and internet access providers.  The government here similarly alleges that Google has foreclosed rival search engines from the best avenues of search distribution: default settings on mobile devices and web browsers.  But a key difference (in addition to the fact that search defaults are quite easy to change) is that Microsoft’s license restrictions foreclosed Netscape without enhancing the quality of Microsoft’s offerings.  Indeed, the court emphasized that the challenged Microsoft agreements were anticompetitive because they “reduced rival browsers’ usage share not by improving [Microsoft’s] own product but, rather, by preventing OEMs from taking actions that could increase rivals’ share of usage” (emphasis added).  Here, any foreclosure of Google’s search rivals is incidental to Google’s efforts to improve its product by enhancing its scale.

Now, the government might contend that the anticompetitive harms from raising rivals’ distribution costs exceed the procompetitive benefits of enhancing the quality of Google’s search services.  Courts, though, have generally been skeptical of claims that exclusion-causing product enhancements are anticompetitive because they do more harm than good.  There’s a sound reason for this: courts are ill-equipped to weigh the benefits of product enhancements against the costs of competition reductions resulting from product-enhancement efforts.  For that reason, they should—and likely will—stick with the rule that this sort of product-enhancing conduct is competition on the merits, even if it has the incidental effect of raising rivals’ costs.  And if they do so, the government will lose this case.     

Problems with the Government’s Specific Challenges

Agreements with Android OEMs and Wireless Carriers

The government alleges that Google has foreclosed its search rivals from distribution opportunities on the Android platform.  It has done so, the government says, by entering into exclusion-causing agreements with OEMs that produce Android products (Samsung, Motorola, etc.) and with carriers that provide wireless service for Android devices (AT&T, Verizon, etc.).

Android is an open source operating system that is owned by Google and licensed, for free, to producers of mobile internet devices.  Under the terms of the challenged agreements, Google’s counterparties promise not to produce Android “forks”—operating systems that are Android-based but significantly alter or “fragment” the basic platform—in order to get access to proprietary Google apps that Android users typically desire and to certain application programming interfaces (APIs) that enable various functionalities.  In addition to these “anti-forking agreements,” counterparties enter various “pre-installation agreements” obligating them to install a suite of Google apps that use Google Search as a default.  Installing that suite is a condition for obtaining the right to pre-install Google’s app store (Google Play) and other must-have apps.  Finally, OEMs and carriers enter “revenue sharing agreements” that require the use of Google Search as the sole preset default on a number of search access points in exchange for a percentage of search ad revenue derived from covered devices.  Taken together, the government says, these anti-forking, pre-installation, and revenue-sharing agreements preclude the emergence of Android rivals (from forks) and ensure the continued dominance of Google Search on Android devices.

Eliminating these agreements, though, would likely harm consumers by reducing competition in the market for mobile operating systems.  Within that market, there are two dominant players: Apple’s iOS and Google’s Android.  Apple earns money off iOS by selling hardware—iPhones and iPads that are pre-installed with iOS.  Google licenses Android to OEMs for free but then earns advertising revenue off users’ searches (which provide an avenue for search ads) and other activities (which generate user data for better targeted display ads).  Apple and Google thus compete on revenue models.  As Randy Picker has explained, Microsoft tried a third revenue model—licensing a Windows mobile operating system to OEMs for a fee—but it failed.  The continued competition between Apple and Google, though, allows for satisfaction of heterogeneous consumer preferences: Apple products are more expensive but more secure (due to Apple’s tight control over software and hardware); Android devices are cheaper (as the operating system is ad-supported) and offer more innovations (as OEMs have more flexibility), but tend to be less secure.  Such variety—a result of business model competition—is good for consumers.

If the government were to prevail and force Google to end the agreements described above, thereby reducing the advertising revenue Google derives from Android, Google would have to either copy Apple’s vertically integrated model so as to recoup its Android investments through hardware sales, charge OEMs for Android (a la Microsoft), or cut back on its investments in Android.  In each case, consumers would suffer.  The first option would take away an offering preferred by many consumers—indeed most globally, as Android dominates iOS on a worldwide basis.  The second option would replace Google’s business model with one that failed, suggesting that consumers value it less.  The third option would reduce product quality in the market for mobile operating systems. 

In the end, then, the government’s challenge to Google’s Android agreements is myopic and misguided.  Competition among business models, like competition along any dimension, inures to the benefit of consumers.  Precluding it as the government is demanding would be silly.       

Agreements with Browser Producers

Web browsers like Apple’s Safari and Mozilla’s Firefox are a primary distribution channel for search engines.  The government claims that Google has illicitly foreclosed rival search engines from this avenue of distribution by entering revenue-sharing agreements with the major non-Microsoft browsers (i.e., all but Microsoft’s Edge and Internet Explorer).  Under those agreements, Google shares up to 40% of ad revenues generated from a browser in exchange for being the preset default on both computer and mobile versions of the browser.

Surely there is no problem, though, with search engines paying royalties to web browsers.  That’s how independent browsers like Opera and Firefox make money!  Indeed, 95% of Firefox’s revenue comes from search royalties.  If browsers were precluded from sharing in search engines’ ad revenues, they would have to find an alternative source of financing.  Producers of independent browsers would likely charge license fees, which consumers would probably avoid.  That means the only available browsers would be those affiliated with an operating system (Microsoft’s Edge, Apple’s Safari) or a search engine (Google’s Chrome).  It seems doubtful that reducing the number of viable browsers would benefit consumers.  The law should therefore allow payment of search royalties to browsers.  And if such payments are permitted, a browser will naturally set its default search engine so as to maximize its payout.  

Google’s search rivals can easily compete for default status on a browser by offering a better deal to the browser producer.  In 2014, for example, search engine Yahoo managed to wrest default status on Mozilla’s Firefox away from Google.  The arrangement was to last five years, but in 2017, Mozilla terminated the agreement and returned Google to default status because so many Firefox users were changing the browser’s default search engine from Yahoo to Google.  This historical example undermines the government’s challenges to Google’s browser agreements by showing (1) that other search engines can attain default status by competing, and (2) that defaults aren’t as “sticky” as the government claims—at least, not when the default is set to a search engine other than the one most people prefer.

In short, there’s nothing anticompetitive about Google’s browser agreements, and enjoining such deals would likely injure consumers by reducing competition among browsers.

Agreements with Apple

That brings us to the allegations that have gotten the most attention in the popular press: those concerning Google’s arrangements with Apple.  The complaint alleges that Google pays Apple $8-12 billion a year—a whopping 15-20% of Apple’s net income—for granting Google default search status on iOS devices.  In the government’s telling, Google is agreeing to share a significant portion of its monopoly profits with Apple in exchange for Apple’s assistance in maintaining Google’s search monopoly.

An alternative view, of course, is that Google is just responding to Apple’s power: Apple has assembled a giant installed base of loyal customers and can demand huge payments to favor one search engine over another on its popular mobile devices.  In that telling, Google may be paying Apple to prevent it from making Bing or another search engine the default on Apple’s search access points.

If that’s the case, what Google is doing is both procompetitive and a boon to consumers.  Microsoft could easily outbid Google to have Bing set as the default search engine on Apple’s devices. Microsoft’s market capitalization exceeds that of Google parent Alphabet by about $420 billion ($1.62 trillion versus $1.2 trillion), which is roughly the value of Walmart.  Despite its ability to outbid Google for default status, Microsoft hasn’t done so, perhaps because it realizes that defaults aren’t that sticky when the default service isn’t the one most people prefer.  Microsoft knows that from its experience with Internet Explorer and Edge (which collectively command only around 13% of the desktop browser market even though they’re the defaults on Windows, which has an 88% market share on desktops and laptops), and from its experience with Bing (where “Google” is the number one search term).  Nevertheless, the possibility remains that Microsoft could outbid Google for default status, improve its quality to prevent users from changing the default (or perhaps pay users for sticking with Bing), and thereby take valuable scale from Google, impairing the quality of Google Search.  To prevent that from happening, Google shares with Apple a generous portion of its search ad revenues, which, given the intense competition for mobile device sales, Apple likely passes along to consumers in the form of lower phone and tablet prices.

If the government succeeds in enjoining Google’s payments to Apple for default status, other search engines will presumably be precluded from such arrangements as well.  After all, the “foreclosure” effect of paying for default search status on Apple products is the same regardless of which search engine does the paying, and U.S. antitrust law does not “punish” successful firms by forbidding them from engaging in competitive activities that are open to their rivals. 

Ironically, then, the government’s success in its challenge to Google’s Apple payments would benefit Google at the expense of consumers:  Google would almost certainly remain the default search engine on Apple products, as it is most preferred by consumers and no rival could pay to dislodge it; Google would not have to pay a penny to retain its default status; and Apple would lose revenues that it likely passes along to consumers in the form of lower prices.  The courts are unlikely to countenance this perverse result by ruling that Google’s arrangements with Apple violate the antitrust laws.

Arrangements with Producers of Internet-Enabled “Smart” Devices

The final part of the government’s case against Google starkly highlights a problem that is endemic to the entire lawsuit.  The government claims that Google, having locked up all the traditional avenues of search distribution with the arrangements described above, is now seeking to foreclose search distribution in the new avenues being created by internet-enabled consumer products like wearables (e.g., smart watches), voice assistants, smart TVs, etc.  The alleged monopolistic strategy is similar to those described above: Google will share some of its monopoly profits in exchange for search default status on these smart devices, thereby preventing rival search engines from attaining valuable scale.

It’s easy to see in this context, though, why Google’s arrangements are likely procompetitive.  Unlike web browsers, mobile phones, and tablets, internet-enabled smart devices are novel.  Innovators are just now discovering new ways to embed internet functionality into everyday devices. 

Putting oneself in the position of these innovators helps illuminate a key beneficial aspect of Google’s arrangements:  They create an incentive to develop new and attractive means of distributing search.  Innovators currently at work on internet-enabled devices are no doubt spurred on by the possibility of landing a lucrative distribution agreement with Google or another search engine.  Banning these sorts of arrangements—the consequence of governmental success in this lawsuit—would diminish the incentive to innovate.

But that can be said of every single one of the arrangements the government is challenging. Because of Google’s revenue-sharing with search distributors, each of them has an added incentive to make its distribution channels desirable to consumers.  Android OEMs and Apple will work harder to produce mobile devices that people will want to use for internet searches; browser producers will endeavor to improve their offerings.  By paying producers of search access points a portion of the search ad revenues generated on their platforms, Google motivates them to generate more searches, which they can best do by making their products as attractive as possible.

At the end of the day, then, the government’s action against Google seeks to condemn conduct that benefits consumers.  Because of the challenged arrangements, Google makes its own search services better, is able to license Android for free, ensures the continued existence of independent web browsers like Firefox and Opera, helps lower the price of iPhones and iPads, and spurs innovators to develop new “Internet of Things” devices that can harness the power of the web. 

The Biden administration would do well to recognize this lawsuit for what it is: a poorly conceived effort to appear to be “doing something” about a Big Tech company that has drawn the ire (for different reasons) of both progressives and conservatives.  DOJ and its state co-plaintiffs should seek dismissal of this action.  

In an amicus brief filed last Friday, a diverse group of antitrust scholars joined the Washington Legal Foundation in urging the U.S. Court of Appeals for the Second Circuit to vacate the Federal Trade Commission’s misguided 1-800 Contacts decision. Reasoning that 1-800’s settlements of trademark disputes were “inherently suspect,” the FTC condemned the settlements under a cursory “quick look” analysis. In so doing, it improperly expanded the category of inherently suspect behavior and ignored an obvious procompetitive justification for the challenged settlements.  If allowed to stand, the Commission’s decision will impair intellectual property protections that foster innovation.

A number of 1-800’s rivals purchased online ad placements that would appear when customers searched for “1-800 Contacts.” 1-800 sued those rivals for trademark infringement, and the lawsuits settled. As part of each settlement, 1-800 and its rival agreed not to bid on each other’s trademarked terms in search-based keyword advertising. (For example, EZ Contacts could not bid on a placement tied to a search for 1-800 Contacts, and vice-versa). Each party also agreed to employ “negative keywords” to ensure that its ads would not appear in response to a consumer’s online search for the other party’s trademarks. (For example, in bidding on keywords, 1-800 would have to specify that its ad must not appear in response to a search for EZ Contacts, and vice-versa). Notably, the settlement agreements didn’t restrict the parties’ advertisements through other media such as TV, radio, print, or other forms of online advertising. Nor did they restrict paid search advertising in response to any search terms other than the parties’ trademarks.

The FTC concluded that these settlement agreements violated the antitrust laws as unreasonable restraints of trade. Although the agreements were not unreasonable per se, as naked price-fixing is, the Commission didn’t engage in the normally applicable rule of reason analysis to determine whether the settlements passed muster. Instead, the Commission condemned the settlements under the truncated analysis that applies when, in the words of the Supreme Court, “an observer with even a rudimentary understanding of economics could conclude that the arrangements in question would have an anticompetitive effect on customers and markets.” The Commission decided that no more than a quick look was required because the settlements “restrict the ability of lower cost online sellers to show their ads to consumers.”

That was a mistake. First, the restraints in 1-800’s settlements are far less extensive than other restraints that the Supreme Court has said may not be condemned under a cursory quick look analysis. In California Dental, for example, the Supreme Court reversed a Ninth Circuit decision that employed the quick look analysis to condemn a de facto ban on all price and “comfort” advertising by members of a dental association. In light of the possibility that the ban could reduce misleading ads, enhance customer trust, and thereby stimulate demand, the Court held that the restraint must be assessed under the more probing rule of reason. A narrow limit on the placement of search ads is far less restrictive than the all-out ban for which the California Dental Court prescribed full-on rule of reason review.

1-800’s settlements are also less likely to be anticompetitive than are other settlements that the Supreme Court has said must be evaluated under the rule of reason. The Court’s Actavis decision rejected quick look and mandated full rule of reason analysis for reverse payment settlements of pharmaceutical patent litigation. In a reverse payment settlement, the patent holder pays an alleged infringer to stay out of the market for some length of time. 1-800’s settlements, by contrast, did not exclude its rivals from the market, place any restrictions on the content of their advertising, or restrict the placement of their ads except on webpages responding to searches for 1-800’s own trademarks. If the restraints in California Dental and Actavis required rule of reason analysis, then those in 1-800’s settlements surely must as well.

In addition to disregarding Supreme Court precedents that limit when mere quick look is appropriate, the FTC gave short shrift to a key procompetitive benefit of the restrictions in 1-800’s settlements. 1-800 spent millions of dollars convincing people that they could save money by ordering prescribed contact lenses from a third party rather than buying them from prescribing optometrists. It essentially built the online contact lens market in which its rivals now compete. In the process, it created a strong trademark, which undoubtedly boosts its own sales. (Trademarks point buyers to a particular seller and enhance consumer confidence in the seller’s offering, since consumers know that branded sellers will not want to tarnish their brands with shoddy products or service.)

When a rival buys ad space tied to a search for 1-800 Contacts, that rival is taking a free ride on 1-800’s investments in its own brand and in the online contact lens market itself. A rival that has advertised less extensively than 1-800—primarily because 1-800 has taken the lead in convincing consumers to buy their contact lenses online—will incur lower marketing costs than 1-800 and may therefore be able to underprice it.  1-800 may thus find that it loses sales to rivals who are not more efficient than it is but have lower costs because they have relied on 1-800’s own efforts.

If market pioneers like 1-800 cannot stop this sort of free-riding, they will have less incentive to make the investments that create new markets and develop strong trade names. The restrictions in the 1-800 settlements were simply an effort to prevent inefficient free-riding while otherwise preserving the parties’ freedom to advertise. They were a narrowly tailored solution to a problem that hurt 1-800 and reduced incentives for future investments in market-developing activities that inure to the benefit of consumers.

Rule of reason analysis would have allowed the FTC to assess the full market effects of 1-800’s settlements. The Commission’s truncated assessment, which was inconsistent with Supreme Court decisions on when a quick look will suffice, condemned conduct that was likely procompetitive. The Second Circuit should vacate the FTC’s order.

The full amicus brief, primarily drafted by WLF’s Corbin Barthold and joined by Richard Epstein, Keith Hylton, Geoff Manne, Hal Singer, and me, is here.

In my fifteen years as a law professor, I’ve become convinced that there’s a hole in the law school curriculum.  When it comes to regulation, we focus intently on the process of regulating and the interpretation of rules (see, e.g., typical administrative law and “leg/reg” courses), but we rarely teach students what, as a matter of substance, distinguishes a good regulation from a bad one.  That’s unfortunate, because lawyers often take the lead in crafting regulatory approaches. 

In the fall of 2017, I published a book seeking to fill this hole.  That book, How to Regulate: A Guide for Policymakers, is the inspiration for a symposium that will occur this Friday (Feb. 8) at the University of Missouri Law School.

The symposium, entitled Protecting the Public While Fostering Innovation and Entrepreneurship: First Principles for Optimal Regulation, will bring together policymakers and regulatory scholars who will go back to basics. Participants will consider two primary questions:

(1) How, as a substantive matter, should regulation be structured in particular areas? (Specifically, what regulatory approaches would be most likely to forbid the bad while chilling as little of the good as possible and while keeping administrative costs in check? In other words, what rules would minimize the sum of error and decision costs?), and

(2) What procedures would be most likely to generate such optimal rules?

The symposium webpage includes the schedule for the day (along with a button to Livestream the event), but here’s a quick overview.

I’ll set the stage by discussing the challenge policymakers face in trying to accomplish three goals simultaneously: ban bad instances of behavior, refrain from chilling good ones, and keep rules simple enough to be administrable.

We’ll then hear from a panel of experts about the principles that would best balance those competing concerns in their areas of expertise. Specifically:

  • Jerry Ellig (George Washington University; former chief economist of the FCC) will discuss telecommunications policy;
  • TOTM’s own Gus Hurwitz (Nebraska Law) will consider regulation of Internet platforms; and
  • Erika Lietzan (Mizzou Law) will examine the regulation of therapeutic drugs and medical devices.

Hopefully, we can identify some common threads among the substantive principles that should guide effective regulation in these disparate areas.

Before we turn to consider regulatory procedures, we will hear from our keynote speaker, Commissioner Hester Peirce of the SEC. As The Economist recently reported, Commissioner Peirce has been making waves with her speeches, many of which have gone back to basics and asked why the government is intervening and whether it’s doing so in an optimal fashion.

Following Commissioner Peirce’s address, we will hear from the following panelists about how regulatory procedures should be structured in order to generate substantively optimal rules:

  • Bridget Dooling (George Washington University; former official in the White House Office of Information and Regulatory Affairs);
  • Ken Davis (former Deputy Attorney General of Virginia and member of the Federalist Society’s Regulatory Transparency Project);
  • James Broughel (Senior Fellow at the Mercatus Center; expert on state-level regulatory review procedures); and
  • Justin Smith (former counsel to Missouri governor; led the effort to streamline the Missouri regulatory code).

As you can see, this Friday is going to be a great day at Mizzou Law. If you’re close enough to join us in person, please come. Otherwise, please join us via Livestream.

Writing in the New York Times, journalist E. Tammy Kim recently called for Seattle and other pricey, high-tech hubs to impose a special tax on Microsoft and other large employers of high-paid workers. Efficiency demands such a tax, she says, because those companies are imposing a negative externality: By driving up demand for housing, they are causing rents and home prices to rise, which adversely affects city residents.

Arguing that her proposal is “akin to a pollution tax,” Ms. Kim writes:

A half-century ago, it seemed inconceivable that factories, smelters or power plants should have to account for the toxins they released into the air.  But we have since accepted the idea that businesses should have to pay the public for the negative externalities they cause.

It is true that negative externalities—costs imposed on people who are “external” to the process creating those costs (as when a factory belches rancid smoke on its neighbors)—are often taxed. One justification for such a tax is fairness: It seems inequitable that one party would impose costs on another; justice may demand that the victimizer pay. The justification cited by the economist who first proposed such taxes, though, was something different. In his 1920 opus, The Economics of Welfare, British economist A.C. Pigou proposed taxing behavior involving negative externalities in order to achieve efficiency—an increase in overall social welfare.   

With respect to the proposed tax on Microsoft and other high-tech employers, the fairness argument seems a stretch, and the efficiency argument outright fails. Let’s consider each.

To achieve fairness by forcing a victimizer to pay for imposing costs on a victim, one must determine who is the victimizer. Ms. Kim’s view is that Microsoft and its high-paid employees are victimizing (imposing costs on) incumbent renters and lower-paid homebuyers. But is that so clear?

Microsoft’s desire to employ high-skilled workers, and those employees’ desire to live near their work, conflicts with incumbent renters’ desire for low rent and lower-paid homebuyers’ desire for cheaper home prices. If Microsoft got its way, incumbent renters and lower-paid homebuyers would be worse off.

But incumbent renters’ and lower-paid homebuyers’ insistence on low rents and home prices conflicts with the desires of Microsoft, the high-skilled workers it would like to hire, and local homeowners. If incumbent renters and lower-paid homebuyers got their way and prevented Microsoft from employing high-wage workers, Microsoft, its potential employees, and local homeowners would be worse off. Who is the victim here?

As Nobel laureate Ronald Coase famously observed, in most cases involving negative externalities, there is a reciprocal harm: Each party is a victim of the other party’s demands and a victimizer with respect to its own. When both parties are victimizing each other, it’s hard to “do justice” by taxing “the” victimizer.

A desire to achieve efficiency provides a sounder basis for many so-called Pigouvian taxes. With respect to Ms. Kim’s proposed tax, however, the efficiency justification fails. To see why that is so, first consider how it is that Pigouvian taxes may enhance social welfare.

When a business engages in some productive activity, it uses resources (labor, materials, etc.) to produce some sort of valuable output (e.g., a good or service). In determining what level of productive activity to engage in (e.g., how many hours to run the factory, etc.), it compares its cost of engaging in one more unit of activity to the added benefit (revenue) it will receive from doing so. If its so-called “marginal cost” from the additional activity is less than or equal to the “marginal benefit” it will receive, it will engage in the activity; otherwise, it won’t.  

When the business is bearing all the costs and benefits of its actions, this outcome is efficient. The costs of the inputs used in production are determined by the value they could generate in alternative uses. (For example, if a flidget producer could create $4 of value from an ounce of tin, a widget-maker would have to bid at least $4 to win that tin from the flidget-maker.) If a business finds that continued production generates additional revenue (reflective of consumers’ subjective valuation of the business’s additional product) in excess of its added cost (reflective of the value its inputs could create if deployed toward their next-best use), then producing more moves productive resources to their highest and best uses, enhancing social welfare. This outcome is “allocatively efficient,” meaning that productive resources have been allocated in a manner that wrings the greatest possible value from them.

Allocative efficiency may not result, though, if the producer is able to foist some of its costs onto others.  Suppose that it costs a producer $4.50 to make an additional widget that he could sell for $5.00. He’d make the widget. But what if producing the widget created pollution that imposed $1 of cost on the producer’s neighbors? In that case, it could be inefficient to produce the widget; the total marginal cost of doing so, $5.50, might well exceed the marginal benefit produced, which could be as low as $5.00. Negative externalities, then, may result in an allocative inefficiency—i.e., a use of resources that produces less total value than some alternative use.

Pigou’s idea was to use taxes to prevent such inefficiencies. If the government were to charge the producer a tax equal to the cost his activity imposed on others ($1 in the above example), then he would capture all the marginal benefit and bear all the marginal cost of his activity. He would thus be motivated to continue his activity only to the point at which its total marginal benefit equaled its total marginal cost. The point of a Pigouvian tax, then, is to achieve allocative efficiency—i.e., to channel productive resources toward their highest and best ends.
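
The logic of the widget example can be captured in a minimal Python sketch that uses only the hypothetical figures given above:

```python
# A minimal sketch of the widget example above: private cost of $4.50,
# sale price of $5.00, and $1.00 of pollution cost borne by neighbors.

marginal_benefit = 5.00  # revenue from one more widget
private_cost = 4.50      # the producer's own marginal cost
external_cost = 1.00     # cost imposed on neighbors (the externality)

def produces(cost_borne_by_producer):
    """The producer makes the widget whenever the marginal cost it bears
    is less than or equal to its marginal benefit (revenue)."""
    return cost_borne_by_producer <= marginal_benefit

# Without a tax, the producer ignores the external cost and produces, even
# though total marginal cost ($5.50) exceeds marginal benefit ($5.00).
print(produces(private_cost))  # True: the socially wasteful widget gets made

# With a Pigouvian tax equal to the external cost, the producer bears the
# full social cost and declines to make the loss-producing widget.
pigouvian_tax = external_cost
print(produces(private_cost + pigouvian_tax))  # False: the efficient outcome
```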

When it comes to the negative externality Ms. Kim has identified—an increase in housing prices occasioned by high-tech companies’ hiring of skilled workers—the efficiency case for a Pigouvian tax crumbles. That is because the external cost at issue here is a “pecuniary” externality, a special sort of externality that does not generate inefficiency.

A pecuniary externality is one where the adverse third-party effect consists of an increase in market prices. If that’s the case, the allocative inefficiency that may justify Pigouvian taxes does not exist. There’s no inefficiency from the mere fact that buyers pay more.  Their loss is perfectly offset by a gain to sellers, and—here’s the crucial part—the higher prices channel productive resources toward, not away from, their highest and best ends. High rent levels, for example, signal to real estate developers that more resources should be devoted to creating living spaces within the city. That’s allocatively efficient.

Now, it may well be the case that government policies thwart developers from responding to those salutary price signals. The cities that Ms. Kim says should impose a tax on high-tech employers—Seattle, San Francisco, Austin, New York, and Boulder—have some of the nation’s most restrictive real estate development rules. But that’s a government failure, not a market failure.

In the end, Ms. Kim’s pollution tax analogy fails. The efficiency case for a Pigouvian tax to remedy negative externalities does not apply when, as here, the externality at issue is pecuniary.

For more on pecuniary versus “technological” (non-pecuniary) externalities and appropriate responses thereto, check out Chapter 4 of my recent book, How to Regulate: A Guide for Policymakers.

The Federal Trade Commission will soon hold hearings on Competition and Consumer Protection in the 21st Century.  The topics to be considered include:

  1. The state of antitrust and consumer protection law and enforcement, and their development, since the [1995] Pitofsky hearings;
  2. Competition and consumer protection issues in communication, information and media technology networks;
  3. The identification and measurement of market power and entry barriers, and the evaluation of collusive, exclusionary, or predatory conduct or conduct that violates the consumer protection statutes enforced by the FTC, in markets featuring “platform” businesses;
  4. The intersection between privacy, big data, and competition;
  5. The Commission’s remedial authority to deter unfair and deceptive conduct in privacy and data security matters;
  6. Evaluating the competitive effects of corporate acquisitions and mergers;
  7. Evidence and analysis of monopsony power, including but not limited to, in labor markets;
  8. The role of intellectual property and competition policy in promoting innovation;
  9. The consumer welfare implications associated with the use of algorithmic decision tools, artificial intelligence, and predictive analytics;
  10. The interpretation and harmonization of state and federal statutes and regulations that prohibit unfair and deceptive acts and practices; and
  11. The agency’s investigation, enforcement and remedial processes.

The Commission has solicited comments on each of these main topics and a number of subtopics.  Initial comments are due today, but comments will also be accepted at two other times.  First, before each scheduled hearing on a topic, the Commission will accept comments on that particular matter.  In addition, the Commission will accept comments at the end of all the hearings.

Over the weekend, Mike Sykuta and I submitted a comment on topic 6, “evaluating the competitive effects of corporate acquisitions and mergers.”  We addressed one of the subtopics the FTC will consider: “the analysis of acquisitions and holding of a non-controlling ownership interest in competing companies.”

Here’s our comment, with a link to our working paper on the topic of common ownership by institutional investors:

To Whom It May Concern:

We are grateful for the opportunity to respond to the U.S. Federal Trade Commission’s request for comment on its upcoming hearings on Competition and Consumer Protection in the 21st Century. We are professors of law (Lambert) and economics (Sykuta) at the University of Missouri. We wish to comment on Topic 6, “evaluating the competitive effects of corporate acquisitions and mergers” and specifically on Subtopic 6(c), “the analysis of acquisitions and holding of a non-controlling ownership interest in competing companies.”

Recent empirical research purports to demonstrate that institutional investors’ “common ownership” of small stakes in competing firms causes those firms to compete less aggressively, injuring consumers. A number of prominent antitrust scholars have cited this research as grounds for limiting the degree to which institutional investors may hold stakes in multiple firms that compete in any concentrated market. In our recent working paper, The Case for Doing Nothing About Institutional Investors’ Common Ownership of Small Stakes in Competing Firms, which we submit along with these comments, we contend that the purported competitive problem is overblown and that the proposed solutions would reduce overall social welfare.

With respect to the purported problem, our paper shows that the theory of anticompetitive harm from institutional investors’ common ownership is implausible and that the empirical studies supporting the theory are methodologically unsound. The theory fails to account for the fact that intra-industry diversified institutional investors are also inter-industry diversified, and it rests upon unrealistic assumptions about managerial decision-making. The empirical studies purporting to demonstrate anticompetitive harm from common ownership are deficient because they inaccurately assess institutional investors’ economic interests and employ an endogenous measure that precludes causal inferences.

Even if institutional investors’ common ownership of competing firms did soften market competition somewhat, the proposed policy solutions would themselves create welfare losses that would overwhelm any social benefits they secured. The proposed policy solutions would create tremendous new decision costs for business planners and adjudicators and would raise error costs by eliminating welfare-enhancing investment options and/or exacerbating corporate agency costs.

In light of these problems with the purported problem and shortcomings of the proposed solutions, the optimal regulatory approach—at least, on the current empirical record—is to do nothing about institutional investors’ common ownership of small stakes in competing firms.

Thank you for considering these comments and our attached paper. We would be happy to answer any questions you may have.

Sincerely,

Thomas A. Lambert, Wall Family Chair in Corporate Law and Governance, University of Missouri Law School;
Michael E. Sykuta, Associate Professor, Division of Applied Social Sciences, University of Missouri; Director, Contracting and Organizations Research Institute (CORI)

Kudos to the Commission for holding this important set of hearings.

One of the hottest topics in antitrust these days is institutional investors’ common ownership of the stock of competing firms. Large investment companies like BlackRock, Vanguard, State Street, and Fidelity offer index and actively managed mutual funds that are invested in thousands of companies. In many concentrated industries, these institutional investors are “intra-industry diversified,” meaning that they hold stakes in all the significant competitors within the industry.

Recent empirical studies (e.g., here and here) purport to show that this intra-industry diversification has led to a softening of competition in concentrated markets. The theory is that firm managers seek to maximize the profits of their largest and most powerful shareholders, all of which hold stakes in all the major firms in the market and therefore prefer maximization of industry, not firm-specific, profits. (For example, an investor that owns stock in all the airlines servicing a route would not want those airlines to engage in aggressive price competition to win business from each other. Additional sales to one airline would come at the expense of another, and prices—and thus profit margins—would be lower.)

The empirical studies on common ownership, which have received a great deal of attention in the academic literature and popular press and have inspired antitrust scholars to propose a number of policy solutions, have employed a complicated measurement known as “MHHI delta” (MHHI∆). MHHI∆ is a component of the “modified Herfindahl–Hirschman Index” (MHHI), which, as the name suggests, is an adaptation of the Herfindahl–Hirschman Index (HHI).

HHI, which ranges from near zero to 10,000 and is calculated by summing the squares of the market shares of the firms competing in a market, assesses the degree to which a market is concentrated and thus susceptible to collusion or oligopolistic coordination. MHHI endeavors to account for both market concentration (HHI) and the reduced competition incentives occasioned by common ownership of the firms within a market. MHHI∆ is the part of MHHI that accounts for common ownership incentives, so MHHI = HHI + MHHI∆.  (Notably, neither MHHI nor MHHI∆ is bounded by the 10,000 upper limit applicable to HHI.  At the end of this post, I offer an example of a market in which MHHI and MHHI∆ both exceed 10,000.)
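
To make these definitions concrete with a hypothetical example: in a market with four firms holding shares of 40%, 30%, 20%, and 10%, HHI = 40² + 30² + 20² + 10² = 1,600 + 900 + 400 + 100 = 3,000. If common ownership of those firms created incentives to soften competition, MHHI would exceed that figure by the amount of MHHI∆.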

In the leading common ownership study, which looked at the airline industry, the authors first calculated the MHHI∆ on each domestic airline route from 2001 to 2014. They then examined, for each route, how changes in the MHHI∆ over time correlated with changes in airfares on that route. To control for route-specific factors that might influence both fares and the MHHI∆, the authors ran a number of regressions. They concluded that common ownership of air carriers resulted in a 3%–7% increase in fares.

As should be apparent, it is difficult to understand the common ownership issue—the extent to which there is a competitive problem and the propriety of proposed solutions—without understanding MHHI∆. Unfortunately, the formula for the metric is extraordinarily complex. Posner et al. express it as follows:

$$\mathrm{MHHI}\Delta \;=\; \sum_{j}\sum_{k \neq j} s_j\, s_k \cdot \frac{\sum_i \beta_{ij}\,\beta_{ik}}{\sum_i \beta_{ij}^{2}}$$

Where:

  • βij is the fraction of shares in firm j controlled by investor i,
  • the shares are both cash flow and control shares (so control rights are assumed to be proportionate to the investor’s share of firm profits), and
  • sj is the market share of firm j.

The complexity of this formula is, for less technically oriented antitrusters (like me!), a barrier to entry into the common ownership debate.  In the paragraphs that follow, I attempt to lower that entry barrier by describing the overall process for determining MHHI∆, cataloguing the specific steps required to calculate the measure, and offering a concrete example.

Overview of the Process for Calculating MHHI∆

Determining the MHHI∆ for a market involves three primary tasks. The first is to assess, for each coupling of competing firms in the market (e.g., Southwest Airlines and United Airlines), the degree to which the investors in one of the competitors would prefer that it not attempt to win business from the other by lowering prices, etc. This assessment must be completed twice for each coupling. With the Southwest and United coupling, for example, one must assess both the degree to which United’s investors would prefer that the company not win business from Southwest and the degree to which Southwest’s investors would prefer that the company not win business from United. There will be different answers to those two questions if, for example, United has a significant shareholder who owns no Southwest stock (and thus wants United to win business from Southwest), but Southwest does not have a correspondingly significant shareholder who owns no United stock (and would thus want Southwest to win business from United).

Assessing the incentive of one firm, Firm J (to correspond to the formula above), to pull its competitive punches against another, Firm K, requires calculating a fraction that compares the interest of the first firm’s owners in “coupling” profits (the combined profits of J and K) to their interest in “own-firm” profits (J profits only). The numerator of that fraction is based on data from the coupling—i.e., the firm whose incentive to soften competition one is assessing (J) and the firm with which it is competing (K). The fraction’s denominator is based on data for the single firm whose competition-reduction incentive one is assessing (J). Specifically:

  • The numerator assesses the degree to which the firms in the coupling are commonly owned, such that their owners would not benefit from price-reducing, head-to-head competition and would instead prefer that the firms compete less vigorously so as to maximize coupling profits. To determine the numerator, then, one must examine all the investors who are invested in both firms; for each, multiply their ownership percentages in the two firms; and then sum those products for all investors with common ownership. (If an investor were invested in only one firm in the coupling, its ownership percentage would be multiplied by zero and would thus drop out; after all, an investor in only one of the firms has no interest in maximization of coupling versus own-firm profits.)
  • The denominator assesses the degree to which the investor base (weighted by control) of the firm whose competition-reduction incentive is under consideration (J) would prefer that it maximize its own profits, not the profits of the coupling. Determining the denominator requires summing the squares of the ownership percentages of investors in that firm. Squaring means that very small investors essentially drop out and that the denominator grows substantially with large ownership percentages by particular investors. Large ownership percentages suggest the presence of shareholders that are more likely able to influence management, whether those shareholders also own shares in the second company or not.

Having assessed, for each firm in a coupling, the incentive to soften competition with the other, one must proceed to the second primary task: weighing the significance of those firms’ incentives not to compete with each other in light of the coupling’s shares of the market. (The idea here is that if two small firms reduced competition with one another, the effect on overall market competition would be less significant than if two large firms held their competitive fire.) To determine the significance to the market of the two coupling members’ incentives to reduce competition with each other, one must multiply each of the two fractions determined above (in Task 1) times the product of the market shares of the two firms in the coupling. This will generate two “cross-MHHI deltas,” one for each of the two firms in the coupling (e.g., one cross-MHHI∆ for Southwest/United and another for United/Southwest).

The third and final task is to aggregate the effect of common ownership-induced competition-softening throughout the market as a whole by summing the softened competition metrics (i.e., two cross-MHHI deltas for each coupling of competitors within the market). If decimals were used to account for the firms’ market shares (e.g., if a 25% market share was denoted 0.25), the sum should be multiplied by 10,000.

Following is a detailed list of instructions for assessing the MHHI∆ for a market (assuming proportionate control—i.e., that investors’ control rights correspond to their shares of firm profits).

A Nine-Step Guide to Calculating the MHHI∆ for a Market

  1. List the firms participating in the market and the market share of each.
  2. List each investor’s ownership percentage of each firm in the market.
  3. List the potential pairings of firms whose incentives to compete with each other must be assessed. There will be two such pairings for each coupling of competitors in the market (e.g., Southwest/United and United/Southwest) because one must assess the incentive of each firm in the coupling to compete with the other, and that incentive may differ for the two firms (e.g., United may have less incentive to compete with Southwest than Southwest with United). This implies that the number of possible pairings will always be n(n-1), where n is the number of firms in the market.
  4. For each investor, perform the following for each pairing of firms: Multiply the investor’s percentage ownership of the two firms in each pairing (e.g., Institutional Investor 1’s percentage ownership in United * Institutional Investor 1’s percentage ownership in Southwest for the United/Southwest pairing).
  5. For each pairing, sum the amounts from item four across all investors that are invested in both firms. (This will be the numerator in the fraction used in Step 7 to determine the pairing’s cross-MHHI∆.)
  6. For the first firm in each pairing (the one whose incentive to compete with the other is under consideration), sum the squares of the ownership percentages of that firm held by each investor. (This will be the denominator of the fraction used in Step 7 to determine the pairing’s cross-MHHI∆.)
  7. Figure the cross-MHHI∆ for each pairing of firms by doing the following: Multiply the market shares of the two firms, and then multiply the resulting product times a fraction consisting of the relevant numerator (from Step 5) divided by the relevant denominator (from Step 6).
  8. Add together the cross-MHHI∆s for each pairing of firms in the market.
  9. Multiply that amount times 10,000.
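For readers who prefer code to prose, here is a minimal Python sketch of the nine-step procedure, assuming proportionate control. The function name and data layout are my own illustrations; shares are entered as decimals, so the Step 9 scaling applies.

```python
# A minimal sketch of Steps 3 through 9, assuming proportionate control.
# Data layout (illustrative): shares maps each firm to its market share as a
# decimal; ownership[investor][firm] is the investor's fractional stake.

def mhhi_delta(shares: dict, ownership: dict) -> float:
    """Sum the cross-MHHI deltas across all ordered pairings of firms."""
    total = 0.0
    for j in shares:  # the firm whose incentive to compete is being assessed
        # Step 6: denominator -- sum of squared ownership stakes in firm j
        denom = sum(stakes.get(j, 0.0) ** 2 for stakes in ownership.values())
        for k in shares:  # Step 3: every ordered pairing (j, k), j != k
            if j == k:
                continue
            # Steps 4-5: numerator -- summed products of common stakes in j and k
            numer = sum(stakes.get(j, 0.0) * stakes.get(k, 0.0)
                        for stakes in ownership.values())
            # Step 7: the pairing's cross-MHHI delta
            total += shares[j] * shares[k] * numer / denom
    # Steps 8-9: the pairings have been summed; scale to HHI's 10,000-point basis
    return total * 10_000
```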

I will now illustrate this nine-step process by working through a concrete example.

An Example

Suppose four airlines—American, Delta, Southwest, and United—service a particular market. American and Delta each have 30% of the market; Southwest and United each have a market share of 20%.

Five funds are invested in the market, and each holds stock in all four airlines. Fund 1 owns 1% of each airline’s stock. Fund 2 owns 2% of American and 1% of each of the others. Fund 3 owns 2% of Delta and 1% of each of the others. Fund 4 owns 2% of Southwest and 1% of each of the others. And Fund 5 owns 2% of United and 1% of each of the others. None of the airlines has any other significant stockholder.

Step 1: List firms and market shares.

  1. American – 30% market share
  2. Delta – 30% market share
  3. Southwest – 20% market share
  4. United – 20% market share

Step 2: List investors’ ownership percentages.

  Investor   American   Delta   Southwest   United
  Fund 1     1%         1%      1%          1%
  Fund 2     2%         1%      1%          1%
  Fund 3     1%         2%      1%          1%
  Fund 4     1%         1%      2%          1%
  Fund 5     1%         1%      1%          2%

Step 3: Catalogue potential competitive pairings.

  1. American-Delta (AD)
  2. American-Southwest (AS)
  3. American-United (AU)
  4. Delta-American (DA)
  5. Delta-Southwest (DS)
  6. Delta-United (DU)
  7. Southwest-American (SA)
  8. Southwest-Delta (SD)
  9. Southwest-United (SU)
  10. United-American (UA)
  11. United-Delta (UD)
  12. United-Southwest (US)

Steps 4 and 5: Figure numerator for determining cross-MHHI∆s.

For the AD pairing, for example, the products of each fund’s stakes in the two airlines are: Fund 1: .01 × .01 = .0001; Fund 2: .02 × .01 = .0002; Fund 3: .01 × .02 = .0002; Fund 4: .01 × .01 = .0001; Fund 5: .01 × .01 = .0001. Summing across funds yields .0007. By symmetry, every pairing includes two funds contributing .0002 and three contributing .0001, so the numerator is .0007 for every pairing.

Step 6: Figure denominator for determining cross-MHHI∆s.

Each airline has one 2% investor and four 1% investors, so for every firm the sum of squared ownership percentages is .02² + .01² + .01² + .01² + .01² = .0004 + .0001 + .0001 + .0001 + .0001 = .0008.

Steps 7 and 8: Determine cross-MHHI∆s for each potential pairing, and then sum all.

  1. AD: .09(.0007/.0008) = .07875
  2. AS: .06(.0007/.0008) = .0525
  3. AU: .06(.0007/.0008) = .0525
  4. DA: .09(.0007/.0008) = .07875
  5. DS: .06(.0007/.0008) = .0525
  6. DU: .06(.0007/.0008) = .0525
  7. SA: .06(.0007/.0008) = .0525
  8. SD: .06(.0007/.0008) = .0525
  9. SU: .04(.0007/.0008) = .035
  10. UA: .06(.0007/.0008) = .0525
  11. UD: .06(.0007/.0008) = .0525
  12. US: .04(.0007/.0008) = .035
    SUM = .6475

Step 9: Multiply by 10,000.

MHHI∆ = 6475.

(NOTE: HHI in this market would total (30)(30) + (30)(30) + (20)(20) + (20)(20) = 2600. MHHI would total 9075.)
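As an arithmetic check, plugging the example’s market shares and ownership stakes into the mhhi_delta sketch from the nine-step guide above reproduces all three figures:

```python
shares = {"American": 0.30, "Delta": 0.30, "Southwest": 0.20, "United": 0.20}
ownership = {
    "Fund 1": {"American": 0.01, "Delta": 0.01, "Southwest": 0.01, "United": 0.01},
    "Fund 2": {"American": 0.02, "Delta": 0.01, "Southwest": 0.01, "United": 0.01},
    "Fund 3": {"American": 0.01, "Delta": 0.02, "Southwest": 0.01, "United": 0.01},
    "Fund 4": {"American": 0.01, "Delta": 0.01, "Southwest": 0.02, "United": 0.01},
    "Fund 5": {"American": 0.01, "Delta": 0.01, "Southwest": 0.01, "United": 0.02},
}

hhi = round(sum((100 * s) ** 2 for s in shares.values()))  # 2600
delta = round(mhhi_delta(shares, ownership))               # 6475
print(hhi, delta, hhi + delta)                             # 2600 6475 9075
```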

***

I mentioned earlier that neither MHHI nor MHHI∆ is subject to an upper limit of 10,000. For example, if there are four firms in a market, five institutional investors that each own 5% of the first three firms and 1% of the fourth, and no other investors holding significant stakes in any of the firms, MHHI∆ will be 15,500 and MHHI 18,000.  (Hat tip to Steve Salop, who helped create the MHHI metric, for reminding me to point out that MHHI and MHHI∆ are not limited to 10,000.)

Ours is not an age of nuance.  It’s an age of tribalism, of teams—“Yer either fer us or agin’ us!”  Perhaps I should have been less surprised, then, when I read the unfavorable review of my book How to Regulate in, of all places, the Federalist Society Review.

I had expected some positive feedback from reviewer J. Kennerly Davis, a contributor to the Federalist Society’s Regulatory Transparency Project.  The “About” section of the Project’s website states:

In the ultra-complex and interconnected digital age in which we live, government must issue and enforce regulations to protect public health and safety.  However, despite the best of intentions, government regulation can fail, stifle innovation, foreclose opportunity, and harm the most vulnerable among us.  It is for precisely these reasons that we must be diligent in reviewing how our policies either succeed or fail us, and think about how we might improve them.

I might not have expressed these sentiments in such pro-regulation terms.  For example, I don’t think government should regulate, even “to protect public health and safety,” absent (1) a market failure and (2) confidence that systematic governmental failures won’t cause the cure to be worse than the disease.  I agree, though, that regulation is sometimes appropriate, that government interventions often fail (in systematic ways), and that regulatory policies should regularly be reviewed with an eye toward reducing the combined costs of market and government failures.

Those are, in fact, the central themes of How to Regulate.  The book sets forth an overarching goal for regulation (minimize the sum of error and decision costs) and then catalogues, for six oft-cited bases for regulating, what regulatory tools are available to policymakers and how each may misfire.  For every possible intervention, the book considers the potential for failure from two sources—the knowledge problem identified by F.A. Hayek and public choice concerns (rent-seeking, regulatory capture, etc.).  It ends up arguing:

  • for property rights-based approaches to environmental protection (versus the command-and-control status quo);
  • for increased reliance on the private sector to produce public goods;
  • that recognizing property rights, rather than allocating usage, is the best way to address the tragedy of the commons;
  • that market-based mechanisms, not shareholder suits and mandatory structural rules like those imposed by Sarbanes-Oxley and Dodd-Frank, are the best way to constrain agency costs in the corporate context;
  • that insider trading restrictions should be left to corporations themselves;
  • that antitrust law should continue to evolve in the consumer welfare-focused direction Robert Bork recommended;
  • against the FCC’s recently abrogated net neutrality rules;
  • that occupational licensure is primarily about rent-seeking and should be avoided;
  • that incentives for voluntary disclosure will usually obviate the need for mandatory disclosure to correct information asymmetry;
  • that the claims of behavioral economics do not justify paternalistic policies to protect people from themselves; and
  • that “libertarian-paternalism” is largely a ruse that tends to morph into hard paternalism.

Given the congruence of my book’s prescriptions with the purported aims of the Regulatory Transparency Project—not to mention the laundry list of specific market-oriented policies the book advocates—I had expected a generally positive review from Mr. Davis (whom I sincerely thank for reading and reviewing the book; book reviews are a ton of work).

I didn’t get what I’d expected.  Instead, Mr. Davis denounced my book for perpetuating “progressive assumptions about state and society” (“wrongheaded” assumptions, the editor’s introduction notes).  He responded to my proposed methodology with a “meh,” noting that it “is not clearly better than the status quo.”  His one compliment, which I’ll gladly accept, was that my discussion of economic theory was “generally accessible.”

Following are a few thoughts on Mr. Davis’s critiques.

Are My Assumptions Progressive?

According to Mr. Davis, my book endorses three progressive concepts:

(i) the idea that market based arrangements among private parties routinely misallocate resources, (ii) the idea that government policymakers are capable of formulating executive directives that can correct private ordering market failures and optimize the allocation of resources, and (iii) the idea that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.

I agree with Mr. Davis that these are progressive ideas.  If my book embraced them, it might be fair to label it “progressive.”  But it doesn’t.  Not one of them.

  1. Market Failure

Nothing in my book suggests that “market based arrangements among private parties routinely misallocate resources.”  I do say that “markets sometimes fail to work well,” and I explain how, in narrow sets of circumstances, market failures may emerge.  Understanding exactly what may happen in those narrow sets of circumstances helps to identify the least restrictive option for addressing problems and would thus seem a prerequisite to effective policymaking for a conservative or libertarian.  My mere invocation of the term “market failure,” however, was enough for Mr. Davis to kick me off the team.

Mr. Davis ignored altogether the many points where I explain how private ordering fixes situations that could lead to poor market performance.  At the end of the information asymmetry chapter, for example, I write,

This chapter has described information asymmetry as a problem, and indeed it is one.  But it can also present an opportunity for profit.  Entrepreneurs have long sought to make money—and create social value—by developing ways to correct informational imbalances and thereby facilitate transactions that wouldn’t otherwise occur.

I then describe the advent of companies like Carfax, Airbnb, and Uber, all of which offer privately ordered solutions to instances of information asymmetry that might otherwise create lemons problems.  I conclude:

These businesses thrive precisely because of information asymmetry.  By offering privately ordered solutions to the problem, they allow previously under-utilized assets to generate heretofore unrealized value.  And they enrich the people who created and financed them.  It’s a marvelous thing.

That theme—that potential market failures invite privately ordered solutions that often obviate the need for any governmental fix—permeates the book.  In the public goods chapter, I spend a great deal of time explaining how privately ordered devices like assurance contracts facilitate the production of amenities that are non-rivalrous and non-excludable.  In discussing the tragedy of the commons, I highlight Elinor Ostrom’s work showing how “groups of individuals have displayed a remarkable ability to manage commons goods effectively without either privatizing them or relying on government intervention.”  In the chapter on externalities, I spend a full seven pages explaining why Coasean bargains are more likely than most people think to prevent inefficiencies from negative externalities.  In the chapter on agency costs, I explain why privately ordered solutions like the market for corporate control would, if not precluded by some ill-conceived regulations, constrain agency costs better than structural rules from the government.

Disregarding all this, Mr. Davis chides me for assuming that “markets routinely fail.”  And, for good measure, he explains that government interventions are often a bigger source of failure, a point I repeatedly acknowledge, as it is a—perhaps the—central theme of the book.

  2. Trust in Experts

In what may be the strangest (and certainly the most misleading) part of his review, Mr. Davis criticizes me for placing too much confidence in experts by giving short shrift to the Hayekian knowledge problem and the insights of public choice.

  a. The Knowledge Problem

According to Mr. Davis, the approach I advocate “is centered around fully functioning experts.”  He continues:

This progressive trust in experts is misplaced.  It is simply false to suppose that government policymakers are capable of formulating executive directives that effectively improve upon private arrangements and optimize the allocation of resources.  Friedrich Hayek and other classical liberals have persuasively argued, and everyday experience has repeatedly confirmed, that the information needed to allocate resources efficiently is voluminous and complex and widely dispersed.  So much so that government experts acting through top down directives can never hope to match the efficiency of resource allocation made through countless voluntary market transactions among private parties who actually possess the information needed to allocate the resources most efficiently.

Amen and hallelujah!  I couldn’t agree more!  Indeed, I said something similar when I came to the first regulatory tool my book examines (and criticizes), command-and-control pollution rules.  I wrote:

The difficulty here is an instance of a problem that afflicts regulation generally.  At the end of the day, regulating involves centralized economic planning:  A regulating “planner” mandates that productive resources be allocated away from some uses and toward others.  That requires the planner to know the relative value of different resource uses.  But such information, in the words of Nobel laureate F.A. Hayek, “is not given to anyone in its totality.”  The personal preferences of thousands or millions of individuals—preferences only they know—determine whether there should be more widgets and fewer gidgets, or vice-versa.  As Hayek observed, voluntary trading among resource owners in a free market generates prices that signal how resources should be allocated (i.e., toward the uses for which resource owners may command the highest prices).  But centralized economic planners—including regulators—don’t allocate resources on the basis of relative prices.  Regulators, in fact, generally assume that prices are wrong due to the market failure the regulators are seeking to address.  Thus, the so-called knowledge problem that afflicts regulation generally is particularly acute for command-and-control approaches that require regulators to make refined judgments on the basis of information about relative costs and benefits.

That was just the first of many times I invoked the knowledge problem to argue against top-down directives and in favor of market-oriented policies that would enable individuals to harness local knowledge to which regulators would not be privy.  The index to the book includes a “knowledge problem” entry with no fewer than nine sub-entries (e.g., “with licensure regimes,” “with Pigouvian taxes,” “with mandatory disclosure regimes”).  There are undoubtedly more mentions of the knowledge problem than those listed in the index, for the book assesses the degree to which the knowledge problem creates difficulties for every regulatory approach it considers.

Mr. Davis does mention one time where I “acknowledge[] the work of Hayek” and “recognize[] that context specific information is vitally important,” but he says I miss the point:

Having conceded these critical points [about the importance of context-specific information], Professor Lambert fails to follow them to the logical conclusion that private ordering arrangements are best for regulating resources efficiently.  Instead, he stops one step short, suggesting that policymakers defer to the regulator most familiar with the regulated party when they need context-specific information for their analysis.  Professor Lambert is mistaken.  The best information for resource allocation is not to be found in the regional office of the regulator.  It resides with the persons who have long been controlled and directed by the progressive regulatory system.  These are the ones to whom policymakers should defer.

I was initially puzzled by Mr. Davis’s description of how my approach would address the knowledge problem.  It’s inconsistent with the way I described the problem (the “regional office of the regulator” wouldn’t know people’s personal preferences, etc.), and I couldn’t remember ever suggesting that regulatory devolution—delegating decisions down toward local regulators—was the solution to the knowledge problem.

When I checked the citation in the sentences just quoted, I realized that Mr. Davis had misunderstood the point I was making in the passage he cited (my own fault, no doubt, not his).  The cited passage was at the very end of the book, where I was summarizing the book’s contributions.  I claimed to have set forth a plan for selecting regulatory approaches that would minimize the sum of error and decision costs.  I wanted to acknowledge, though, the irony of promulgating a generally applicable plan for regulating in a book that, time and again, decries top-down imposition of one-size-fits-all rules.  Thus, I wrote:

A central theme of this book is that Hayek’s knowledge problem—the fact that no central planner can possess and process all the information needed to allocate resources so as to unlock their greatest possible value—applies to regulation, which is ultimately a set of centralized decisions about resource allocation.  The very knowledge problem besetting regulators’ decisions about what others should do similarly afflicts pointy-headed academics’ efforts to set forth ex ante rules about what regulators should do.  Context-specific information to which only the “regulator on the spot” is privy may call for occasional departures from the regulatory plan proposed here.

As should be obvious, my point was not that the knowledge problem can generally be fixed by regulatory devolution.  Rather, I was acknowledging that the general regulatory approach I had set forth—i.e., the rules policymakers should follow in selecting among regulatory approaches—may occasionally misfire and should thus be implemented flexibly.

  b. Public Choice Concerns

A second problem with my purported trust in experts, Mr. Davis explains, stems from the insights of public choice:

Actual policymakers simply don’t live up to [Woodrow] Wilson’s ideal of the disinterested, objective, apolitical, expert technocrat.  To the contrary, a vast amount of research related to public choice theory has convincingly demonstrated that decisions of regulatory agencies are frequently shaped by politics, institutional self-interest and the influence of the entities the agencies regulate.

Again, huzzah!  Those words could have been lifted straight out of the three full pages of discussion I devoted to public choice concerns with the very first regulatory intervention the book considered.  A snippet from that discussion:

While one might initially expect regulators pursuing the public interest to resist efforts to manipulate regulation for private gain, that assumes that government officials are not themselves rational, self-interest maximizers.  As scholars associated with the “public choice” economic tradition have demonstrated, government officials do not shed their self-interested nature when they step into the public square.  They are often receptive to lobbying in favor of questionable rules, especially since they benefit from regulatory expansions, which tend to enhance their job status and often their incomes.  They also tend to become “captured” by powerful regulatees who may shower them with personal benefits and potentially employ them after their stints in government have ended.

That’s just a slice.  Elsewhere in those three pages, I explain (1) how the dynamic of concentrated benefits and diffuse costs allows inefficient protectionist policies to persist, (2) how firms that benefit from protectionist regulation are often assisted by “pro-social” groups that will make a public interest case for the rules (Bruce Yandle’s Bootleggers and Baptists syndrome), and (3) the “[t]wo types of losses [that] result from the sort of interest-group manipulation public choice predicts.”  And that’s just the book’s initial foray into public choice.  The entry for “public choice concerns” in the book’s index includes eight sub-entries.  As with the knowledge problem, I addressed the public choice issues that could arise from every major regulatory approach the book considered.

For Mr. Davis, though, that was not enough to keep me out of the camp of Wilsonian progressives.  He explains:

Professor Lambert devotes a good deal of attention to the problem of “agency capture” by regulated entities.  However, he fails to acknowledge that a symbiotic relationship between regulators and regulated is not a bug in the regulatory system, but an inherent feature of a system defined by extensive and continuing government involvement in the allocation of resources.

To be honest, I’m not sure what that last sentence means.  Apparently, I didn’t recite some talismanic incantation that would indicate that I really do believe public choice concerns are a big problem for regulation.  I did say this in one of the book’s many discussions of public choice:

A regulator that has both regular contact with its regulatees and significant discretionary authority over them is particularly susceptible to capture.  The regulator’s discretionary authority provides regulatees with a strong motive to win over the regulator, which has the power to hobble the regulatee’s potential rivals and protect its revenue stream.  The regular contact between the regulator and the regulatee provides the regulatee with better access to those in power than that available to parties with opposing interests.  Moreover, the regulatee’s preferred course of action is likely (1) to create concentrated benefits (to the regulatee) and diffuse costs (to consumers generally), and (2) to involve an expansion of the regulator’s authority.  The upshot is that those who bear the cost of the preferred policy are less likely to organize against it, and regulators, who benefit from turf expansion, are more likely to prefer it.  Rate-of-return regulation thus involves the precise combination that leads to regulatory expansion at consumer expense: broad and discretionary government power, close contact between regulators and regulatees, decisions that generally involve concentrated benefits and diffuse costs, and regular opportunities to expand regulators’ power and prestige.

In light of this combination of features, it should come as no surprise that the history of rate-of-return regulation is littered with instances of agency capture and regulatory expansion.

Even that was not enough to convince Mr. Davis that I reject the Wilsonian assumption of “disinterested, objective, apolitical, expert technocrat[s].”  I don’t know what more I could have said.

  3. Social Welfare

Mr. Davis is right when he says, “Professor Lambert’s ultimate goal for his book is to provide policymakers with a resource that will enable them to make regulatory decisions that produce greater social welfare.”  But nowhere in my book do I suggest, as he says I do, “that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.”  What I mean by “social welfare” is the aggregate welfare of all the individuals in a society.  And I’m careful to point out that only they know what makes them better off.  (At one point, for example, I write that “[g]overnment planners have no way of knowing how much pleasure regulatees derive from banned activities…or how much displeasure they experience when they must comply with an affirmative command…. [W]ith many paternalistic policies and proposals…government planners are really just guessing about welfare effects.”)

I agree with Mr. Davis that “[t]here is no single generally accepted methodology that anyone can use to determine objectively how and to what extent the welfare of society will be affected by a particular regulatory directive.”  For that reason, nowhere in the book do I suggest any sort of “metes and bounds” measurement of social welfare.  (I certainly do not endorse the use of GDP, which Mr. Davis rightly criticizes; that term appears nowhere in the book.)

Rather than prescribing any sort of precise measurement of social welfare, my book operates at the level of general principles:  We have reasons to believe that inefficiencies may arise when conditions are thus; there is a range of potential government responses to this situation—from doing nothing, to facilitating a privately ordered solution, to mandating various actions; based on our experience with these different interventions, the likely downsides of each (stemming from, for example, the knowledge problem and public choice concerns) are so-and-so; all things considered, the aggregate welfare of the individuals within this group will probably be greatest with policy x.

It is true that the thrust of the book is consequentialist, not deontological.  But it’s a book about policy, not ethics.  And its version of consequentialism is rule, not act, utilitarianism.  Is a consequentialist approach to policymaking enough to render one a progressive?  Should we excise John Stuart Mill’s On Liberty from the classical liberal canon?  I surely hope not.

Is My Proposed Approach an Improvement?

Mr. Davis’s second major criticism of my book—that what it proposes is “just the status quo”—has more bite.  By that, I mean two things.  First, it’s a more painful criticism to receive.  It’s easier for an author to hear “you’re saying something wrong” than “you’re not saying anything new.”

Second, there may be more merit to this criticism.  As Mr. Davis observes, I noted in the book’s introduction that “[a]t times during the drafting, I … wondered whether th[e] book was ‘original’ enough.”  I ultimately concluded that it was because it “br[ought] together insights of legal theorists and economists of various stripes…and systematize[d] their ideas into a unified, practical approach to regulating.”  Mr. Davis thinks I’ve overstated the book’s value, and he may be right.

The current regulatory landscape would suggest, though, that my book’s approach to selecting among potential regulatory policies isn’t “just the status quo.”  The approach I recommend would generate the specific policies catalogued at the outset of this response (in the bullet points).  The fact that those policies haven’t been implemented under the existing regulatory approach suggests that what I’m recommending must be something different than the status quo.

Mr. Davis observes—and I acknowledge—that my recommended approach resembles the review required of major executive agency regulations under Executive Order 12866, President Clinton’s revised version of President Reagan’s Executive Order 12291.  But that order is quite limited in its scope.  It doesn’t cover “minor” executive agency rules (those with expected costs of less than $100 million) or rules from independent agencies or from Congress or from courts or at the state or local level.  Moreover, I understand from talking to a former administrator of the Office of Information and Regulatory Affairs, which is charged with implementing the order, that it has actually generated little serious consideration of less restrictive alternatives, something my approach emphasizes.

What my book proposes is not some sort of governmental procedure; indeed, I emphasize in the conclusion that the book “has not addressed … how existing regulatory institutions should be reformed to encourage the sort of analysis th[e] book recommends.”  Instead, I propose a way to think through specific areas of regulation, one that is informed by a great deal of learning about both market and government failures.  The best audience for the book is probably law students who will someday find themselves influencing public policy as lawyers, legislators, regulators, or judges.  I am thus heartened that the book is being used as a text at several law schools.  My guess is that few law students receive significant exposure to Hayek, public choice, etc.

So, who knows?  Perhaps the book will make a difference at the margin.  Or perhaps it will amount to sound and fury, signifying nothing.  But I don’t think a classical liberal could fairly say that the analysis it counsels “is not clearly better than the status quo.”

A Truly Better Approach to Regulating

Mr. Davis ends his review with a stirring call to revamp the administrative state to bring it “in complete and consistent compliance with the fundamental law of our republic embodied in the Constitution, with its provisions interpreted to faithfully conform to their original public meaning.”  Among other things, he calls for restoring the separation of powers, which has been erased in agencies that combine legislative, executive, and judicial functions, and for eliminating unchecked government power, which results when the legislature delegates broad rulemaking and adjudicatory authority to politically unaccountable bureaucrats.

Once again, I concur.  There are major problems—constitutional and otherwise—with the current state of administrative law and procedure.  I’d be happy to tear down the existing administrative state and begin again on a constitutionally constrained tabula rasa.

But that’s not what my book was about.  I deliberately set out to write a book about the substance of regulation, not the process by which rules should be imposed.  I took that tack for two reasons.  First, there are numerous articles and books, by scholars far more expert than I, on the structure of the administrative state.  I could add little value on administrative process.

Second, the less-addressed substantive question—what, as a substantive matter, should a policy addressing x do?—would exist even if Mr. Davis’s constitutionally constrained regulatory process were implemented.  Suppose that we got rid of independent agencies, curtailed delegations of rulemaking authority to the executive branch, and returned to a system in which Congress wrote all rules, the executive branch enforced them, and the courts resolved any disputes.  Someone would still have to write the rule, and that someone (or group of people) should have some sense of the pros and cons of one approach over another.  That is what my book seeks to provide.

A hard core Hayekian—one who had immersed himself in Law, Legislation, and Liberty—might respond that no one should design regulation (purposive rules that Hayek would call thesis) and that efficient, “purpose-independent” laws (what Hayek called nomos) will just emerge as disputes arise.  But that is not Mr. Davis’s view.  He writes:

A system of governance or regulation based on the rule of law attains its policy objectives by proscribing actions that are inconsistent with those objectives.  For example, this type of regulation would prohibit a regulated party from discharging a pollutant in any amount greater than the limiting amount specified in the regulation.  Under this proscriptive approach to regulation, any and all actions not specifically prohibited are permitted.

Mr. Davis has thus contemplated a purposive rule, crafted by someone.  That someone should know the various policy options and the upsides and downsides of each.  How to Regulate could help.

Conclusion

I’m not sure why Mr. Davis viewed my book as no more than dressed-up progressivism.  Maybe he was triggered by the book’s cover art, which he says “is faithful to the progressive tradition,” resembling “the walls of public buildings from San Francisco to Stalingrad.”  Maybe it was a case of Sunstein Derangement Syndrome.  (Progressive legal scholar Cass Sunstein had nice things to say about the book, despite its criticisms of a number of his ideas.)  Or perhaps it was that I used the term “market failure.”  Many conservatives and libertarians fear, with good reason, that conceding the existence of market failures invites all sorts of government meddling.

At the end of the day, though, I believe we classical liberals should stop pretending that market outcomes are always perfect, that pure private ordering is always and everywhere the best policy.  We should certainly sing markets’ praises; they usually work so well that people don’t even notice them, and we should point that out.  We should continually remind people that government interventions also fail—and in systematic ways (e.g., the knowledge problem and public choice concerns).  We should insist that a market failure is never a sufficient condition for a governmental fix; one must always consider whether the cure will be worse than the disease.  In short, we should take and promote the view that government should operate “under a presumption of error.”

That view, economist Aaron Director famously observed, is the essence of laissez faire.  It’s implicit in the purpose statement of the Federalist Society’s Regulatory Transparency Project.  And it’s the central point of How to Regulate.

So let’s go easy on the friendly fire.

A recent tweet by Lina Khan, discussing yesterday’s American Express decision, exemplifies an unfortunate trend in contemporary antitrust discourse.  Khan wrote:

The economists cited by the Second Circuit (whose opinion SCOTUS affirms) for the analysis of ‘two-sided’ [markets] all had financial links to the credit card sector, as we point out in FN 4 [link to amicus brief].

Her implicit point—made more explicitly in the linked brief, which referred to the economists’ studies as “industry-funded”—was that economic analysis should be discounted if the author has ever received compensation from a firm that might benefit from the proffered analysis.

There are two problems with this reasoning.  First, it’s fallacious.  An ad hominem argument, one addressed “to the person” rather than to the substance of the person’s claims, is a fallacy of irrelevance, sometimes known as a genetic fallacy.  Biased people may make truthful claims, just as unbiased people may get things wrong.  An idea’s “genetics” are irrelevant.  One should assess the substance of the actual idea, not the identity of its proponent.

Second, the reasoning ignores that virtually everyone is biased in some way.  In the antitrust world, those claiming that we should discount the findings and theories of industry-connected experts urging antitrust modesty often stand to gain from having a “bigger” antitrust.

In the common ownership debate about which Mike Sykuta and I have recently been blogging, proponents of common ownership restrictions have routinely written off contrary studies by suggesting bias on the part of the studies’ authors.  All the while, they have ignored their own biases:  If their proposed policies are implemented, their expertise becomes exceedingly valuable to plaintiff lawyers and to industry participants seeking to traverse a new legal minefield.

At the end of our recent paper, The Case for Doing Nothing About Institutional Investors’ Common Ownership of Small Stakes in Competing Firms, Mike and I wrote, “Such regulatory modesty will prove disappointing to those with a personal interest in having highly complex antitrust doctrines that are aggressively enforced.”  I had initially included a snarky footnote, but Mike, who is far nicer than I, convinced me to remove it.

I’ll reproduce it here in the hopes of reducing the incidence of antitrust ad hominem.

Professor Elhauge has repeatedly discounted criticisms of the common ownership studies by suggesting that critics are biased.  See, e.g., Elhauge, supra note 26, at 1 (observing that “objections to my analysis have been raised in various articles, some funded by institutional investors with large horizontal shareholdings”); id. at 3 (“My analysis of executive compensation has been critiqued in a paper by economic consultants O’Brien and Waehrer that was funded by the Investment Company Institute, which represents institutional investors and was headed for the last three years by the CEO of Vanguard.”); Elhauge, supra note 124, at 3 (observing that airline and banking studies “have been critiqued in other articles, some funded by the sort of institutional investors that have large horizontal shareholdings”); id. at 17 (“The Investment Company Institute, an association of institutional investors that for the preceding three years was headed by the CEO of Vanguard, has funded a couple of papers to critique the empirical study showing an adverse link between horizontal shareholding and airline prices.”); id. (observing that co-authors of critique “both have significant experience in the airline industry because they consulted either for the airlines or the DOJ on airline mergers that were approved notwithstanding high levels of horizontal shareholding”); id. at 19 (“The Investment Company Institute has responded by funding a second critique of the airline study.”); id. at 23-24 (“Even to the extent that such studies are not directly funded by industry, when an industry has been viewed as benign for a long time, confirmation bias is a powerful force that will incline many to interpret any data to find no adverse effects.”).  He fails, however, to acknowledge his own bias.  As a professor of antitrust law at one of the nation’s most prestigious law schools, he has an interest in having antitrust be as big and complicated as possible: The more complex the doctrine, and the broader its reach, the more valuable a preeminent antitrust professor’s expertise becomes.  This is not to suggest that one should discount the assertions of Professor Elhauge or other proponents of restrictions on common ownership.  It is simply to observe that bias is unavoidable and that the best approach is therefore to evaluate claims according to their substance, not according to who is asserting them.

Even if institutional investors’ common ownership of small stakes in competing firms did cause some softening of market competition—a claim that is both suspect as a theoretical matter and empirically shaky—the policy solutions common ownership critics have proposed would do more harm than good.

Einer Elhauge has called for public and private lawsuits against institutional investors under Clayton Act Section 7, which is primarily used to police anticompetitive mergers but which literally forbids any stock acquisition that substantially lessens competition in a market. Eric Posner, Fiona Scott Morton, and Glen Weyl have called on the federal antitrust enforcement agencies (FTC and DOJ) to promulgate an enforcement policy that would discourage institutional investors from investing and voting shares in multiple firms within any oligopolistic industry.

As Mike Sykuta and I explain in our recent paper on common ownership, both approaches would create tremendous decision costs for business planners and adjudicators and would likely entail massive error costs as institutional investors eliminated welfare-enhancing product offerings and curtailed activities that reduce agency costs.

Decision Costs

The touchstone for liability under Elhauge’s Section 7 approach would be a pattern of common ownership that caused, or likely would cause, market prices to rise. Elhauge would identify suspect patterns of common ownership using MHHI∆, a measure that assesses incentives to reduce competition based on, among other things, the extent to which investors own stock in multiple firms within a market and the market shares of the commonly owned firms. (Mike described MHHI∆ here.) Specifically, Elhauge says, liability would result from “any horizontal stock acquisitions that have created, or would create, a ∆MHHI of over 200 in a market with an MHHI over 2500,” if “those horizontal stock acquisitions raised prices or are likely to do so.”

The administrative burden this approach would place on business planners would be tremendous. Because an institutional investor can’t directly control market prices, the only way it could avoid liability would be to ensure either that the markets in which it was invested did not have an MHHI greater than 2500 or that its acquisitions’ own contribution to MHHI∆ in those markets was less than 200. MHHI and MHHI∆, though, are largely determined by others’ investments and by commonly owned firms’ market shares, both of which change constantly. This implies that business planners could ensure against liability only by continually monitoring others’ activities and general market developments.
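To make that monitoring burden concrete, here is a hypothetical sketch of the screening exercise an investor would have to rerun as holdings and market shares shifted, under one reading of Elhauge’s trigger. The function and its example numbers are illustrative, not drawn from his proposal.

```python
# Hypothetical screen for the proposed Section 7 trigger, under one reading:
# liability risk arises when the post-acquisition market has MHHI > 2500 and
# the acquisition itself added more than 200 to MHHI-delta. (Adverse price
# effects, the other element of liability, must be shown separately.)

def acquisition_flagged(hhi: float, delta_before: float, delta_after: float) -> bool:
    mhhi_after = hhi + delta_after         # MHHI = HHI + MHHI-delta
    created = delta_after - delta_before   # the acquisition's own contribution
    return mhhi_after > 2500 and created > 200

# Illustrative numbers: HHI of 2600, with MHHI-delta rising from 400 to 650.
print(acquisition_flagged(2600, 400, 650))   # True -- the acquisition is flagged
```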

Adjudicators would also face high decision costs under Elhauge’s Section 7 approach. First, they would have to assess complicated econometric studies to determine whether adverse price effects were actually caused by patterns of common ownership. Then, if they decided common ownership had caused a rise in prices, they would have to answer a nearly intractable question: How should the economic harm from common ownership be allocated among the investors holding stakes in multiple firms in the industry? As Posner et al. have observed, “MHHI∆ is a collective responsibility of the holding pattern” in markets in which there are multiple intra-industry diversified investors. It would not work to assign liability only to those diversified investors who could substantially reduce MHHI∆ by divesting, for oftentimes the unilateral divestment of each institutional investor from the market would occasion only a small reduction in MHHI∆. An aggressive court might impose joint liability on all intra-industry diversified investors, but the investor(s) from whom plaintiffs collected would likely seek contribution from the other intra-industry diversified investors. Denying contribution seems intolerably inequitable, but how would a court apportion damages?

In light of these administrative difficulties, Posner et al. advocate a more determinate, rule-based approach. They would have the federal antitrust enforcement agencies compile annual lists of oligopolistic industries and then threaten enforcement action against any institutional investor holding more than one percent of the stock in such an industry if the investor (1) held stock in more than one firm within the industry, and (2) either voted its shares or engaged firm managers.
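The rule-based structure can be expressed as a simple safe-harbor test. The sketch below encodes one reading of the proposal; the function, its parameter names, and the interpretation of the one-percent threshold are my own illustrations.

```python
# Hypothetical encoding of the Posner et al. safe harbors: an investor in a
# designated oligopolistic industry faces enforcement risk only if it (a)
# holds more than 1% of the industry's stock, (b) holds stock in more than
# one firm in the industry, and (c) votes its shares or engages managers.

def within_safe_harbor(stakes: dict, votes_or_engages: bool) -> bool:
    """stakes maps each firm in the industry to the investor's holding,
    expressed as a fraction of the industry's total stock."""
    over_one_percent = sum(stakes.values()) > 0.01
    multiple_firms = sum(1 for s in stakes.values() if s > 0) > 1
    return not (over_one_percent and multiple_firms and votes_or_engages)

# A fully passive fund with small stakes in all four airlines stays in the harbor:
print(within_safe_harbor(
    {"American": 0.005, "Delta": 0.005, "Southwest": 0.005, "United": 0.005},
    votes_or_engages=False))   # True
```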

On first glance, this enforcement policy approach might appear to reduce decision costs: Business planners would have to do less investigation to avoid liability if they could rely on trustworthy, easily identifiable safe harbors; adjudicators’ decision costs would fall if the enforcement policy made it easier to identify illicit investment patterns. But the approach saddles antitrust enforcers with the herculean task of compiling, and annually updating, lists of oligopolistic industries. Given that the antitrust agencies frequently struggle with the far more modest task of defining markets in the small number of merger challenges they file each year, there is little reason to believe enforcers could perform their oligopoly-designating duties at a reasonable cost.

Error Costs

Even greater than the proposed policy solutions’ administrative costs are their likely error costs—i.e., the welfare losses that would stem from wrongly deterring welfare-enhancing arrangements. Such costs would result if, as is likely, institutional investors were to respond to the policy solutions by making one of the two changes proponents of the solutions appear to prefer: either refraining from intra-industry diversification or remaining fully passive in the industries in which they hold stock of multiple competitors.

If institutional investors were to seek to avoid liability by investing in only one firm per concentrated industry, retail investors would lose access to a number of attractive investment opportunities. Passive index funds, which offer retail investors instant diversification with extremely low fees (due to the lack of active management), would virtually disappear, as most major stock indices include multiple firms per industry.

Moreover, because critics of common ownership maintain that intra-industry diversification at the institutional investor level is sufficient to induce competition-softening in concentrated markets, each institutional investor would have to settle on one firm per concentrated industry for all its funds. That requirement would impede institutional investors’ ability to offer a variety of actively managed funds organized around distinct investment strategies—e.g., growth, value, income, etc. If, for example, Southwest Airlines were a growth stock and United Airlines a value stock, an institutional investor could not offer both a growth fund including Southwest and a value fund including United.

Finally, institutional investors could not offer funds designed to bet on an industry while limiting exposure to company-specific risks within that industry. Suppose, for example, that a financial crisis led to a precipitous drop in the stock prices of all commercial banks. A retail investor might reasonably conclude that the market had overreacted with respect to the industry as a whole, that the industry would likely rebound, but that some commercial banks would probably fail. Such an investor would wish to invest in the commercial banking sector but to hold a diversified portfolio within that sector. A legal regime that drove fund families to avoid intra-industry diversification would prevent them from offering the sort of fund this investor would prefer.

Of course, if institutional investors were to continue intra-industry diversification and seek to avoid liability by remaining passive in industries in which they were diversified, the funds described above could still be offered to investors. In that case, though, another set of significant error costs would arise: increased agency costs in the form of managerial misfeasance.

Unlike most individual shareholders, institutional investors often hold significant stakes in public companies and have the resources to become informed on corporate matters. They have a stronger motive and more opportunity to monitor firm managers and are thus particularly well positioned to keep managers on their toes. Institutional investors with long-term investment horizons—including all index funds, which cannot divest from their portfolio companies if firm performance suffers—have proven particularly beneficial to firm performance.

Indeed, a recent study by Jarrad Harford, Ambrus Kecskés, & Sattar Mansi found that investment by long-term institutional investors enhanced the quality of corporate managers, reduced measurable instances of managerial misbehavior, boosted innovation, decreased debt maturity (causing firms to become more exposed to financial market discipline), and increased shareholder returns. It strains credulity to suppose that this laundry list of benefits could similarly be achieved by long-term institutional investors that had no ability to influence managerial decision-making by voting their shares or engaging managers. Opting for passivity to avoid antitrust risk, then, would prevent institutional investors from achieving their agency cost-reducing potential.

In the end, proponents of additional antitrust intervention to police common ownership have not made their case. Their theory as to why current levels of intra-industry diversification would cause consumer harm is implausible, and the empirical evidence they say demonstrates such harm is both scant and methodologically suspect. The policy solutions they have proposed for dealing with the purported problem would radically rework an industry that has provided substantial benefits to investors, raising the costs of portfolio diversification and enhancing agency costs at public companies. Courts and antitrust enforcers should reject their calls for additional antitrust intervention to police common ownership.