
In an expected decision (but with a somewhat unexpected coalition), the U.S. Supreme Court voted 5-4 to vacate an order issued early last month by the 5th U.S. Circuit Court of Appeals, which stayed an earlier December 2021 order from the U.S. District Court for the Western District of Texas enjoining Texas’ attorney general from enforcing the state’s recently enacted social-media law, H.B. 20. The law would bar social-media platforms with more than 50 million active users from engaging in “censorship” based on political viewpoint.

The shadow-docket order serves to grant the preliminary injunction sought by NetChoice and the Computer & Communications Industry Association to block the law—which they argue is facially unconstitutional—from taking effect. The trade groups also are challenging a similar Florida law, which the 11th U.S. Circuit Court of Appeals last week ruled was “substantially likely” to violate the First Amendment. Both state laws will thus be stayed while challenges on the merits proceed. 

But the element of the Supreme Court’s order drawing the most initial interest is the “strange bedfellows” breakdown that produced it. Chief Justice John Roberts was joined by conservative Justices Brett Kavanaugh and Amy Coney Barrett and liberals Stephen Breyer and Sonia Sotomayor in moving to vacate the 5th Circuit’s stay. Meanwhile, Justice Samuel Alito wrote a dissent that was joined by fellow conservatives Clarence Thomas and Neil Gorsuch, and liberal Justice Elena Kagan also dissented without offering a written justification.

A glance at the recent history, however, reveals why it should not be all that surprising that the justices would not come down along predictable partisan lines. Indeed, when it comes to content moderation and the question of whether to designate platforms as “common carriers,” the one undeniably predictable outcome is that both liberals and conservatives have been remarkably inconsistent.

Both Sides Flip Flop on Common Carriage

Ever since Justice Thomas used his concurrence in 2021’s Biden v. Knight First Amendment Institute to lay out a blueprint for how states could regulate social-media companies as common carriers, states led by conservatives have been working to pass bills to restrict the ability of social media companies to “censor.” 

Forcing common carriage on the Internet was, not long ago, something conservatives opposed. It was progressives who called net neutrality the “21st Century First Amendment.” The actual First Amendment, however, protects the rights of both Internet service providers (ISPs) and social-media companies to decide the rules of the road on their own platforms.

Back in the heady days of 2014, when the Federal Communications Commission (FCC) was still planning its next moves on net neutrality after losing at the U.S. Court of Appeals for the D.C. Circuit the first time around, Geoffrey Manne and I at the International Center for Law & Economics teamed with Berin Szoka and Tom Struble of TechFreedom to write a piece for the First Amendment Law Review arguing that there was no exception that would render broadband ISPs “state actors” subject to the First Amendment. Further, we argued that the right to editorial discretion meant that net-neutrality regulations would be subject to (and likely fail) First Amendment scrutiny under Tornillo or Turner.

After the FCC moved to reclassify broadband as a Title II common carrier in 2015, then-Judge Kavanaugh of the D.C. Circuit dissented from the denial of en banc review, in part on First Amendment grounds. He argued that “the First Amendment bars the Government from restricting the editorial discretion of Internet service providers, absent a showing that an Internet service provider possesses market power in a relevant geographic market.” In fact, Kavanaugh went so far as to link the interests of ISPs and Big Tech (and even traditional media), stating:

If market power need not be shown, the Government could regulate the editorial decisions of Facebook and Google, of MSNBC and Fox, of NYTimes.com and WSJ.com, of YouTube and Twitter. Can the Government really force Facebook and Google and all of those other entities to operate as common carriers? Can the Government really impose forced-carriage or equal-access obligations on YouTube and Twitter? If the Government’s theory in this case were accepted, then the answers would be yes. After all, if the Government could force Internet service providers to carry unwanted content even absent a showing of market power, then it could do the same to all those other entities as well. There is no principled distinction between this case and those hypothetical cases.

This was not a controversial view among free-market, right-of-center types at the time.

An interesting shift started to occur during the presidency of Donald Trump, however, as tensions between social-media companies and many on the right came to a head. Instead of seeing these companies as private actors with strong First Amendment rights, some conservatives began looking either for ways to apply the First Amendment to them directly as “state actors” or to craft regulations that would essentially make social-media companies into common carriers with regard to speech.

But Kavanaugh’s opinion in USTelecom remains the best way forward to understand how the First Amendment applies online today, whether regarding net neutrality or social-media regulation. Given Justice Alito’s view, expressed in his dissent, that it “is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies,” it is a fair bet that laws like those passed by Texas and Florida will get a hearing before the Court in the not-too-distant future. If Justice Kavanaugh’s opinion has sway among the conservative bloc of the Supreme Court, or is able to peel off justices from the liberal bloc, the Texas law and others like it (as well as net-neutrality regulations) will be struck down as First Amendment violations.

Kavanaugh’s USTelecom Dissent

In then-Judge Kavanaugh’s dissent, he highlighted two reasons he believed the FCC’s reclassification of broadband as Title II was unlawful. The first was that the reclassification decision was a “major question” that required clear authority delegated by Congress. The second, more important point was that the FCC’s reclassification decision was subject to the Turner standard. Under that standard, since the FCC did not engage—at the very least—in a market-power analysis, the rules could not stand, as they amounted to mandated speech.

The interesting part of this opinion is that it tracks very closely to the analysis of common-carriage requirements for social-media companies. Kavanaugh’s opinion offered important insights into:

  1. the applicability of the First Amendment right to editorial discretion to common carriers;
  2. the “use it or lose it” nature of this right;
  3. whether Turner’s protections depended on scarcity; and 
  4. what would be required to satisfy Turner scrutiny.

Common Carriage and First Amendment Protection

Kavanaugh found unequivocally that common carriers, such as ISPs classified under Title II, were subject to First Amendment protection under the Turner decisions:

The Court’s ultimate conclusion on that threshold First Amendment point was not obvious beforehand. One could have imagined the Court saying that cable operators merely operate the transmission pipes and are not traditional editors. One could have imagined the Court comparing cable operators to electricity providers, trucking companies, and railroads – all entities subject to traditional economic regulation. But that was not the analytical path charted by the Turner Broadcasting Court. Instead, the Court analogized the cable operators to the publishers, pamphleteers, and bookstore owners traditionally protected by the First Amendment. As Turner Broadcasting concluded, the First Amendment’s basic principles “do not vary when a new and different medium for communication appears” – although there of course can be some differences in how the ultimate First Amendment analysis plays out depending on the nature of (and competition in) a particular communications market. Brown v. Entertainment Merchants Association, 564 U.S. 786, 790 (2011) (internal quotation mark omitted).

Here, of course, we deal with Internet service providers, not cable television operators. But Internet service providers and cable operators perform the same kinds of functions in their respective networks. Just like cable operators, Internet service providers deliver content to consumers. Internet service providers may not necessarily generate much content of their own, but they may decide what content they will transmit, just as cable operators decide what content they will transmit. Deciding whether and how to transmit ESPN and deciding whether and how to transmit ESPN.com are not meaningfully different for First Amendment purposes.

Indeed, some of the same entities that provide cable television service – colloquially known as cable companies – provide Internet access over the very same wires. If those entities receive First Amendment protection when they transmit television stations and networks, they likewise receive First Amendment protection when they transmit Internet content. It would be entirely illogical to conclude otherwise. In short, Internet service providers enjoy First Amendment protection of their rights to speak and exercise editorial discretion, just as cable operators do.

‘Use It or Lose It’ Right to Editorial Discretion

Kavanaugh questioned whether the First Amendment right to editorial discretion depends, to some degree, on how much the entity used the right. Ultimately, he rejected the idea forwarded by the FCC that, since ISPs don’t restrict access to any sites, they were essentially holding themselves out to be common carriers:

I find that argument mystifying. The FCC’s “use it or lose it” theory of First Amendment rights finds no support in the Constitution or precedent. The FCC’s theory is circular, in essence saying: “They have no First Amendment rights because they have not been regularly exercising any First Amendment rights and therefore they have no First Amendment rights.” It may be true that some, many, or even most Internet service providers have chosen not to exercise much editorial discretion, and instead have decided to allow most or all Internet content to be transmitted on an equal basis. But that “carry all comers” decision itself is an exercise of editorial discretion. Moreover, the fact that the Internet service providers have not been aggressively exercising their editorial discretion does not mean that they have no right to exercise their editorial discretion. That would be akin to arguing that people lose the right to vote if they sit out a few elections. Or citizens lose the right to protest if they have not protested before. Or a bookstore loses the right to display its favored books if it has not done so recently. That is not how constitutional rights work. The FCC’s “use it or lose it” theory is wholly foreign to the First Amendment.

Employing a similar logic, Kavanaugh also rejected the notion that net-neutrality rules were essentially voluntary, given that ISPs held themselves out as carrying all content.

Relatedly, the FCC claims that, under the net neutrality rule, an Internet service provider supposedly may opt out of the rule by choosing to carry only some Internet content. But even under the FCC’s description of the rule, an Internet service provider that chooses to carry most or all content still is not allowed to favor some content over other content when it comes to price, speed, and availability. That half-baked regulatory approach is just as foreign to the First Amendment. If a bookstore (or Amazon) decides to carry all books, may the Government then force the bookstore (or Amazon) to feature and promote all books in the same manner? If a newsstand carries all newspapers, may the Government force the newsstand to display all newspapers in the same way? May the Government force the newsstand to price them all equally? Of course not. There is no such theory of the First Amendment. Here, either Internet service providers have a right to exercise editorial discretion, or they do not. If they have a right to exercise editorial discretion, the choice of whether and how to exercise that editorial discretion is up to them, not up to the Government.

Think about what the FCC is saying: Under the rule, you supposedly can exercise your editorial discretion to refuse to carry some Internet content. But if you choose to carry most or all Internet content, you cannot exercise your editorial discretion to favor some content over other content. What First Amendment case or principle supports that theory? Crickets.

In a footnote, Kavanaugh continued to lambast the theory of “voluntary regulation” forwarded by the concurrence, stating:

The concurrence in the denial of rehearing en banc seems to suggest that the net neutrality rule is voluntary. According to the concurrence, Internet service providers may comply with the net neutrality rule if they want to comply, but can choose not to comply if they do not want to comply. To the concurring judges, net neutrality merely means “if you say it, do it.”…. If that description were really true, the net neutrality rule would be a simple prohibition against false advertising. But that does not appear to be an accurate description of the rule… It would be strange indeed if all of the controversy were over a “rule” that is in fact entirely voluntary and merely proscribes false advertising. In any event, I tend to doubt that Internet service providers can now simply say that they will choose not to comply with any aspects of the net neutrality rule and be done with it. But if that is what the concurrence means to say, that would of course avoid any First Amendment problem: To state the obvious, a supposed “rule” that actually imposes no mandates or prohibitions and need not be followed would not raise a First Amendment issue.

Scarcity and Capacity to Carry Content

The FCC had also argued that there was a difference between ISPs and the cable companies in Turner in that ISPs did not face decisions about scarcity in content carriage. But Kavanaugh rejected this theory as inconsistent with the First Amendment’s right not to be compelled to carry a message or speech.

That argument, too, makes little sense as a matter of basic First Amendment law. First Amendment protection does not go away simply because you have a large communications platform. A large bookstore has the same right to exercise editorial discretion as a small bookstore. Suppose Amazon has capacity to sell every book currently in publication and therefore does not face the scarcity of space that a bookstore does. Could the Government therefore force Amazon to sell, feature, and promote every book on an equal basis, and prohibit Amazon from promoting or recommending particular books or authors? Of course not. And there is no reason for a different result here. Put simply, the Internet’s technological architecture may mean that Internet service providers can provide unlimited content; it does not mean that they must.

Keep in mind, moreover, why that is so. The First Amendment affords editors and speakers the right not to speak and not to carry or favor unwanted speech of others, at least absent sufficient governmental justification for infringing on that right… That foundational principle packs at least as much punch when you have room on your platform to carry a lot of speakers as it does when you have room on your platform to carry only a few speakers.

Turner Scrutiny and Bottleneck Market Power

Finally, Kavanaugh applied Turner scrutiny and found that, at the very least, it requires a finding of “bottleneck market power” that would allow ISPs to harm consumers. 

At the time of the Turner Broadcasting decisions, cable operators exercised monopoly power in the local cable television markets. That monopoly power afforded cable operators the ability to unfairly disadvantage certain broadcast stations and networks. In the absence of a competitive market, a broadcast station had few places to turn when a cable operator declined to carry it. Without Government intervention, cable operators could have disfavored certain broadcasters and indeed forced some broadcasters out of the market altogether. That would diminish the content available to consumers. The Supreme Court concluded that the cable operators’ market-distorting monopoly power justified Government intervention. Because of the cable operators’ monopoly power, the Court ultimately upheld the must-carry statute…

The problem for the FCC in this case is that here, unlike in Turner Broadcasting, the FCC has not shown that Internet service providers possess market power in a relevant geographic market… 

Rather than addressing any problem of market power, the net neutrality rule instead compels private Internet service providers to supply an open platform for all would-be Internet speakers, and thereby diversify and increase the number of voices available on the Internet. The rule forcibly reduces the relative voices of some Internet service and content providers and enhances the relative voices of other Internet content providers.

But except in rare circumstances, the First Amendment does not allow the Government to regulate the content choices of private editors just so that the Government may enhance certain voices and alter the content available to the citizenry… Turner Broadcasting did not allow the Government to satisfy intermediate scrutiny merely by asserting an interest in diversifying or increasing the number of speakers available on cable systems. After all, if that interest sufficed to uphold must-carry regulation without a showing of market power, the Turner Broadcasting litigation would have unfolded much differently. The Supreme Court would have had little or no need to determine whether the cable operators had market power. But the Supreme Court emphasized and relied on the Government’s market power showing when the Court upheld the must-carry requirements… To be sure, the interests in diversifying and increasing content are important governmental interests in the abstract, according to the Supreme Court. But absent some market dysfunction, Government regulation of the content carriage decisions of communications service providers is not essential to furthering those interests, as is required to satisfy intermediate scrutiny.

In other words, without a finding of bottleneck market power, there would be no basis for satisfying the government interest prong of Turner.

Applying Kavanaugh’s Dissent to NetChoice v. Paxton

Interestingly, each of these main points arises in the debate over regulating social-media companies as common carriers. Texas’ H.B. 20 attempts to do exactly that, which is at the heart of the litigation in NetChoice v. Paxton.

Common Carriage and First Amendment Protection

To the first point, Texas attempts to claim in its briefs that social-media companies are common carriers subject to lesser First Amendment protection: “Assuming the platforms’ refusals to serve certain customers implicated First Amendment rights, Texas has properly denominated the platforms common carriers. Imposing common-carriage requirements on a business does not offend the First Amendment.”

But much like the cable operators before them in Turner, social-media companies are not simply carriers of persons or things like the classic examples of railroads, telegraphs, and telephones. As TechFreedom put it in its brief: “As its name suggests… ‘common carriage’ is about offering, to the public at large and on indiscriminate terms, to carry generic stuff from point A to point B. Social media websites fulfill none of these elements.”

In a sense, it’s even clearer that social-media companies are not common carriers than it was in the case of ISPs, because social-media platforms have always had terms of service that limit what can be said and that even allow the platforms to remove users for violations. All social-media platforms curate content for users in ways that ISPs normally do not.

‘Use It or Lose It’ Right to Editorial Discretion

Just as the FCC did in the Title II context, Texas also presses the idea that social-media companies gave up their right to editorial discretion by disclaiming the choice to exercise it, stating: “While the platforms compare their business policies to classic examples of First Amendment speech, such as a newspaper’s decision to include an article in its pages, the platforms have disclaimed any such status over many years and in countless cases. This Court should not accept the platforms’ good-for-this-case-only characterization of their businesses.” Pointing primarily to cases where social-media companies have invoked Section 230 immunity as a defense, Texas argues they have essentially lost the right to editorial discretion.

This, again, flies in the face of First Amendment jurisprudence, as Kavanaugh earlier explained. Moreover, the idea that social-media companies have disclaimed editorial discretion due to Section 230 is inconsistent with what that law actually does. Section 230 allows social-media companies to engage in as much or as little content moderation as they so choose by holding the third-party speakers accountable rather than the platform. Social-media companies do not relinquish their First Amendment rights to editorial discretion because they assert an applicable defense under the law. Moreover, social-media companies have long had rules delineating permissible speech, and they enforce those rules actively.

Interestingly, there has also been an analogue to the idea forwarded in USTelecom that the law’s First Amendment burdens are relatively limited. As noted above, then-Judge Kavanaugh rejected the idea forwarded by the concurrence that net-neutrality rules were essentially voluntary. In the case of H.B. 20, the bill’s original sponsor recently argued on Twitter that the Texas law essentially incorporates Section 230 by reference. If this is true, then the rules would be as pointless as the net-neutrality rules would have been, because social-media companies would be free under Section 230(c)(2) to remove “otherwise objectionable” material under the Texas law.

Scarcity and Capacity to Carry Content

In an earlier brief to the 5th Circuit, Texas attempted to differentiate social-media companies from the cable company in Turner by arguing there was no necessary conflict between speakers, stating: “[HB 20] does not, for example, pit one group of speakers against another.” But this is just a different way of saying that, since social-media companies don’t face scarcity in their technical capacity to carry speech, they can be required to carry all speech. This is inconsistent with the right Kavanaugh identified not to carry a message or speech, which is not subject to an exception that depends on the platform’s capacity to carry more speech.

Turner Scrutiny and Bottleneck Market Power

Finally, Judge Kavanaugh’s application of Turner to ISPs makes clear that a showing of bottleneck market power is necessary before common-carriage regulation may be applied to social-media companies. In fact, Kavanaugh used a comparison to social-media sites and broadcasters as a reductio ad absurdum for the idea that one could regulate ISPs without a showing of market power. As he put it there:

Consider the implications if the law were otherwise. If market power need not be shown, the Government could regulate the editorial decisions of Facebook and Google, of MSNBC and Fox, of NYTimes.com and WSJ.com, of YouTube and Twitter. Can the Government really force Facebook and Google and all of those other entities to operate as common carriers? Can the Government really impose forced-carriage or equal-access obligations on YouTube and Twitter? If the Government’s theory in this case were accepted, then the answers would be yes. After all, if the Government could force Internet service providers to carry unwanted content even absent a showing of market power, then it could do the same to all those other entities as well. There is no principled distinction between this case and those hypothetical cases.

Much like the FCC with its Open Internet Order, Texas did not make a finding of bottleneck market power in H.B. 20. Instead, Texas basically asked for the opportunity to get to discovery to develop the case that social-media platforms have market power, stating that “[b]ecause the District Court sharply limited discovery before issuing its preliminary injunction, the parties have not yet had the opportunity to develop many factual questions, including whether the platforms possess market power.” This simply won’t fly under Turner, which required a legislative finding of bottleneck market power that is nowhere to be found in H.B. 20.

Moreover, bottleneck market power means more than simply “market power” in an antitrust sense. As Judge Kavanaugh put it: “Turner Broadcasting seems to require even more from the Government. The Government apparently must also show that the market power would actually be used to disadvantage certain content providers, thereby diminishing the diversity and amount of content available.” Here, that would mean showing not only that social-media companies have market power, but also that they would use it to disadvantage users in ways that reduce both the diversity and the total amount of content available.

The economics of multi-sided markets is probably the best explanation for why platforms have moderation rules. They are used to maximize a platform’s value by keeping as many users engaged and on those platforms as possible. In other words, the effect of moderation rules is to increase the amount of user speech by limiting harassing content that could repel users. This is a much better explanation for these rules than “anti-conservative bias” or a desire to censor for censorship’s sake (though there may be room for debate on the margin when it comes to the moderation of misinformation and hate speech).

In fact, social-media companies, unlike the cable operators in Turner, do not have the type of “physical connection between the television set and the cable network” that would grant them “bottleneck, or gatekeeper, control over” speech in ways that would allow platforms to “silence the voice of competing speakers with a mere flick of the switch.” Cf. Turner, 512 U.S. at 656. Even if they tried, social-media companies simply couldn’t prevent Internet users from accessing content they wish to see online; they inevitably will find such content by going to a different site or app.

Conclusion: The Future of the First Amendment Online

While many on both sides of the partisan aisle appear to see a stark divide between the interests of—and First Amendment protections afforded to—ISPs and social-media companies, Kavanaugh’s opinion in USTelecom shows clearly they are in the same boat. The two rise or fall together. If the government can impose common-carriage requirements on social-media companies in the name of free speech, then they most assuredly can when it comes to ISPs. If the First Amendment protects the editorial discretion of one, then it does for both.

The question then moves to relative market power, and whether the dominant firms in either sector can truly be said to have “bottleneck” market power, which implies the physical control of infrastructure that social-media companies certainly lack.

While it will be interesting to see what the 5th Circuit (and likely, the Supreme Court) ultimately do when reviewing H.B. 20 and similar laws, if now-Justice Kavanaugh’s dissent is any hint, there will be a strong contingent on the Court for finding the First Amendment applies online by protecting the right of private actors (ISPs and social-media companies) to set the rules of the road on their property. As Kavanaugh put it in Manhattan Community Access Corp. v. Halleck: “[t]he Free Speech Clause of the First Amendment constrains governmental actors and protects private actors.” Competition is the best way to protect consumers’ interests, not prophylactic government regulation.

With the 11th Circuit upholding the stay against Florida’s social-media law and the Supreme Court granting the emergency application to vacate the stay of the injunction in NetChoice v. Paxton, the future of the First Amendment appears to be on strong ground. There is no basis to conclude that simply calling private actors “common carriers” reduces their right to editorial discretion under the First Amendment.

On March 31, I and several other law and economics scholars filed an amicus brief in Epic Games v. Apple, which is on appeal to the U.S. Court of Appeals for the Ninth Circuit.  In this post, I summarize the central arguments of the brief, which was joined by Alden Abbott, Henry Butler, Alan Meese, Aurelien Portuese, and John Yun and prepared with the assistance of Don Falk of Schaerr Jaffe LLP.

First, some background for readers who haven’t followed the case.

Epic, maker of the popular Fortnite video game, brought antitrust challenges against two policies Apple enforces against developers of third-party apps that run on iOS, the mobile operating system for Apple’s popular iPhones and iPads.  One policy requires that all iOS apps be distributed through Apple’s own App Store.  The other requires that any purchases of digital goods made while using an iOS app utilize Apple’s In App Purchase system (IAP).  Apple collects a share of the revenue from sales made through its App Store and using IAP, so these two policies provide a way for it to monetize its innovative app platform.   

Epic maintains that Apple’s app policies violate the federal antitrust laws.  Following a trial, the district court disagreed, though it condemned another of Apple’s policies under California state law.  Epic has appealed the antitrust rulings against it. 

My fellow amici and I submitted our brief in support of Apple to draw the Ninth Circuit’s attention to a distinction that is crucial to ensuring that antitrust promotes long-term consumer welfare: the distinction between the mere extraction of surplus through the exercise of market power and the enhancement of market power via the weakening of competitive constraints.

The central claim of our brief is that Epic’s antitrust challenges to Apple’s app store policies should fail because Epic has not shown that the policies enhance Apple’s market power in any market.  Moreover, condemnation of the practices would likely induce Apple to use its legitimately obtained market power to extract surplus in a different way that would leave consumers worse off than they are under the status quo.   

Mere Surplus Extraction vs. Market Power Extension

As the Supreme Court has observed, “Congress designed the Sherman Act as a ‘consumer welfare prescription.’”  The Act endeavors to protect consumers from harm resulting from “market power,” which is the ability of a firm lacking competitive constraints to enhance its profits by reducing its output—either quantitatively or qualitatively—from the level that would persist if the firm faced vigorous competition.  A monopolist, for example, might cut back on the quantity it produces (to drive up market price) or it might skimp on quality (to enhance its per-unit profit margin).  A firm facing vigorous competition, by contrast, couldn’t raise market price simply by reducing its own production, and it would lose significant sales to rivals if it raised its own price or unilaterally cut back on product quality.  Market power thus stems from deficient competition.

As Dennis Carlton and Ken Heyer have observed, two different types of market power-related business behavior may injure consumers and are thus candidates for antitrust prohibition.  One is an exercise of market power: an action whereby a firm lacking competitive constraints increases its returns by constricting its output so as to raise price or otherwise earn higher profit margins.  When a firm engages in this sort of conduct, it extracts a greater proportion of the wealth, or “surplus,” generated by its transactions with its customers.

Every voluntary transaction between a buyer and seller creates surplus, which is the difference between the subjective value the consumer attaches to an item produced and the cost of producing and distributing it.  Price and other contract terms determine how that surplus is allocated between the buyer and the seller.  When a firm lacking competitive constraints exercises its market power by, say, raising price, it extracts for itself a greater proportion of the surplus generated by its sale.
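A minimal numeric sketch makes the allocation point concrete (all figures here are hypothetical, supplied for illustration only): total surplus is fixed by the buyer's valuation and the seller's cost, while the price merely divides that surplus between the two parties.

```python
# Hypothetical figures: a consumer values an app at $10; producing and
# distributing it costs $2.
value, cost = 10.0, 2.0
total_surplus = value - cost  # $8, determined by value and cost alone

for price in (4.0, 7.0, 9.0):
    consumer_surplus = value - price   # what the buyer keeps
    producer_surplus = price - cost    # what the seller extracts
    # Price only allocates the surplus; it does not change the total.
    assert consumer_surplus + producer_surplus == total_surplus
```

Raising the price (as an unconstrained firm can) shifts surplus from the consumer-surplus column to the producer-surplus column without creating any new surplus.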

The other sort of market power-related business behavior involves an effort by a firm to enhance its market power by weakening competitive constraints.  For example, when a firm engages in unreasonably exclusionary conduct that drives its rivals from the market or increases their costs so as to render them less formidable competitors, its market power grows.

U.S. antitrust law treats these two types of market power-related conduct differently.  It forbids behavior that enhances market power and injures consumers, but it permits actions that merely exercise legitimately obtained market power without somehow enhancing it.  For example, while charging a monopoly price creates immediate consumer harm by extracting for the monopolist a greater share of the surplus created by the transaction, the Supreme Court observed in Trinko that “[t]he mere possession of monopoly power, and the concomitant charging of monopoly prices, is not . . . unlawful.”  (See also linkLine: “Simply possessing monopoly power and charging monopoly prices does not violate [Sherman Act] § 2….”)

Courts have similarly refused to condemn mere exercises of market power in cases involving surplus-extractive arrangements more complicated than simple monopoly pricing.  For example, in its Independent Ink decision, the U.S. Supreme Court expressly declined to adopt a rule that would have effectively banned “metering” tie-ins.

In a metering tie-in, a seller with market power over some unique product that is used with a competitively supplied complement that is consumed in varying amounts—say, a highly unique printer that uses standard ink—reduces the price of its unique product (the printer), requires buyers to also purchase from it their requirements of the complement (the ink), and then charges a supracompetitive price for the latter product.  This allows the seller to charge higher effective prices to high-volume users of its unique tying product (buyers who use lots of ink) and lower prices to lower-volume users. 

Assuming buyers’ use of the unique product correlates with the value they ascribe to it, a metering tie-in allows the seller to price discriminate, charging higher prices to buyers who value its unique product more.  This allows the seller to extract more of the surplus generated by sales of its product, but it in no way extends the seller’s market power.
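A toy calculation illustrates the metering logic. The prices and usage levels below are invented for illustration; the point is only that cutting the tying product's price while marking up the metered complement tilts the effective price schedule toward heavy users:

```python
# Hypothetical pricing (not from the case record):
#   Standalone: printer at $300, ink at the competitive price of $10/cartridge.
#   Metered tie: printer cut to $100, tied ink marked up to $30/cartridge.

def total_paid(printer_price: float, ink_price: float, cartridges: int) -> float:
    """Effective total price a buyer pays over the printer's life."""
    return printer_price + ink_price * cartridges

light_user, heavy_user = 5, 30  # cartridges consumed over the printer's life

# Under the tie, the heavy user pays more in total and the light user pays
# less than under uniform standalone pricing: the seller price-discriminates
# by intensity of use.
assert total_paid(100, 30, heavy_user) > total_paid(300, 10, heavy_user)
assert total_paid(100, 30, light_user) < total_paid(300, 10, light_user)
```

Because the light user's effective price falls, the tie can expand output even as it extracts more surplus from high-value buyers.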

In refusing to adopt a rule that would have condemned most metering tie-ins, the Independent Ink Court observed that “it is generally recognized that [price discrimination] . . . occurs in fully competitive markets” and that tying arrangements involving requirements ties may be “fully consistent with a free, competitive market.” The Court thus reasoned that mere price discrimination and surplus extraction, even when accomplished through some sort of contractual arrangement like a tie-in, are not by themselves anticompetitive harms warranting antitrust’s condemnation.    

The Ninth Circuit has similarly recognized that conduct that exercises market power to extract surplus but does not somehow enhance that power does not create antitrust liability.  In Qualcomm, the court refused to condemn the chipmaker’s “no license, no chips” policy, which enabled it to enhance its profits by earning royalties on original equipment manufacturers’ sales of their high-priced products.

In reversing the district court’s judgment in favor of the FTC, the Ninth Circuit conceded that Qualcomm’s policies were novel and that they allowed it to enhance its profits by extracting greater surplus.  The court refused to condemn the policies, however, because they did not injure competition by weakening competitive constraints:

This is not to say that Qualcomm’s “no license, no chips” policy is not “unique in the industry” (it is), or that the policy is not designed to maximize Qualcomm’s profits (Qualcomm has admitted as much). But profit-seeking behavior alone is insufficient to establish antitrust liability. As the Supreme Court stated in Trinko, the opportunity to charge monopoly prices “is an important element of the free-market system” and “is what attracts ‘business acumen’ in the first place; it induces risk taking that produces innovation and economic growth.”

The Qualcomm court’s reference to Trinko highlights one reason courts should not condemn exercises of market power that merely extract surplus without enhancing market power: allowing such surplus extraction furthers dynamic efficiency—welfare gain that accrues over time from the development of new and improved products and services.

Dynamic efficiency results from innovation, which entails costs and risks.  Firms are more willing to incur those costs and risks if their potential payoff is higher, and an innovative firm’s ability to earn supracompetitive profits off its “better mousetrap” enhances its payoff. 

Allowing innovators to extract such profits also helps address the fact most of the benefits of product innovation inure to people other than the innovator.  Private actors often engage in suboptimal levels of behaviors that produce such benefit spillovers, or “positive externalities,”  because they bear all the costs of those behaviors but capture just a fraction of the benefit produced.  By enhancing the benefits innovators capture from their innovative efforts, allowing non-power-enhancing surplus extraction helps generate a closer-to-optimal level of innovative activity.

Not only do supracompetitive profits extracted through the exercise of legitimately obtained market power motivate innovation, they also enable it by helping to fund innovative efforts.  Whereas businesses that are forced by competition to charge prices near their incremental cost must secure external funding for significant research and development (R&D) efforts, firms collecting supracompetitive returns can finance R&D internally.  Indeed, of the top fifteen global spenders on R&D in 2018, eleven were either technology firms accused of possessing monopoly power (#1 Amazon, #2 Alphabet/Google, #5 Intel, #6 Microsoft, #7 Apple, and #14 Facebook) or pharmaceutical companies whose patent protections insulate their products from competition and enable supracompetitive pricing (#8 Roche, #9 Johnson & Johnson, #10 Merck, #12 Novartis, and #15 Pfizer).

In addition to fostering dynamic efficiency by motivating and enabling innovative efforts, a policy acquitting non-power-enhancing exercises of market power allows courts to avoid an intractable question: which instances of mere surplus extraction should be precluded?

Precluding all instances of surplus extraction by firms with market power would conflict with precedents like Trinko and linkLine (which say that legitimate monopolists may legally charge monopoly prices) and would be impracticable given the ubiquity of above-cost pricing in niche and brand-differentiated markets.

A rule precluding surplus extraction when accomplished by a practice more complicated than simple monopoly pricing—say, some practice that allows price discrimination against buyers who highly value a product—would be both arbitrary and backward.  The rule would be arbitrary because allowing supracompetitive profits from legitimately obtained market power motivates and enables innovation regardless of the means used to extract surplus. The rule would be backward because, while simple monopoly pricing always reduces overall market output (as output-reduction is the very means by which the producer causes price to rise), more complicated methods of extracting surplus, such as metering tie-ins, often enhance market output and overall social welfare.

A third possibility would be to preclude exercising market power to extract more surplus than is necessary to motivate and enable innovation.  That position, however, would require courts to determine how much surplus extraction is required to induce innovative efforts.  Courts are poorly positioned to perform such a task, and their inevitable mistakes could significantly chill entrepreneurial activity.

Consider, for example, a firm contemplating a $5 million investment that might return up to $50 million.  Suppose the managers of the firm weighed expected costs and benefits and decided the risky gamble was just worth taking.  If the gamble paid off but a court stepped in and capped the firm’s returns at $20 million—a seemingly generous quadrupling of the firm’s investment—future firms in the same position would not make similar investments.  After all, the firm here thought this gamble was just barely worth taking, given the high risk of failure, when available returns were $50 million.
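The arithmetic behind this example can be sketched in expected-value terms. The success probability below is an assumption added for illustration (the original gives only the $5 million cost and $50 million payoff); it is chosen so that the uncapped gamble is just barely positive in expectation:

```python
# Hypothetical: the text's $5M investment with up to $50M payoff.
investment = 5_000_000
p_success = 0.12  # assumed probability of success (not in the source); chosen
                  # so the uncapped bet is only marginally worth taking

def expected_profit(payoff_if_success: float) -> float:
    """Expected net return of the risky investment."""
    return p_success * payoff_if_success - investment

# Uncapped returns: the gamble is (just barely) worth taking.
assert expected_profit(50_000_000) > 0
# Court-capped returns of $20M: the same gamble now has negative expected
# value, so a similarly situated firm would decline to invest.
assert expected_profit(20_000_000) < 0
```

Even a cap that quadruples the firm's outlay ex post can render the gamble irrational ex ante, which is why judicially determined "reasonable returns" risk chilling investment.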

In the end, then, the best policy is to draw the line as both the U.S. Supreme Court and the Ninth Circuit have done: Whereas enhancements of market power are forbidden, merely exercising legitimately obtained market power to extract surplus is permitted.

Apple’s Policies Do Not Enhance Its Market Power

Under the legal approach described above, the two Apple policies Epic has challenged do not give rise to antitrust liability.  While the policies may boost Apple’s profits by facilitating its extraction of surplus from app transactions on its mobile devices, they do not enhance Apple’s market power in any conceivable market.

As the creator and custodian of the iOS operating system, Apple has the ability to control which applications will run on its iPhones and iPads.  Developers cannot produce operable iOS apps unless Apple grants them access to the Application Programming Interfaces (APIs) required to enable the functionality of the operating system and hardware. In addition, Apple can require developers to obtain digital certificates that will enable their iOS apps to operate.  As the district court observed, “no certificate means the code will not run.”

Because Apple controls which apps will work on the operating system it created and maintains, Apple could collect the same proportion of surplus it currently extracts from iOS app sales and in-app purchases on iOS apps even without the policies Epic is challenging.  It could simply withhold access to the APIs or digital certificates needed to run iOS apps unless developers promised to pay it 30% of their revenues from app sales and in-app purchases of digital goods.

This means that the challenged policies do not give Apple any power it doesn’t already possess in the putative markets Epic identified: the markets for “iOS app distribution” and “iOS in-app payment processing.” 

The district court rejected those market definitions on the ground that Epic had not established cognizable aftermarkets for iOS-specific services.  It defined the relevant market instead as “mobile gaming transactions.”  But no matter.  The challenged policies would not enhance Apple’s market power in that broader market either.

In “mobile gaming transactions” involving non-iOS (e.g., Android) mobile apps, Apple’s policies give it no power at all.  Apple doesn’t distribute non-iOS apps or process in-app payments on such apps.  Moreover, even if Apple were to begin doing so—say, by distributing Android apps in its App Store or allowing producers of Android apps to include IAP as their in-app payment system—it is implausible that Apple’s policies would allow it to gain new market power.  There are giant, formidable competitors in non-iOS app distribution (e.g., Google’s Play Store) and in payment processing for non-iOS in-app purchases (e.g., Google Play Billing).  It is inconceivable that Apple’s policies would allow it to usurp so much scale from those rivals that Apple could gain market power over non-iOS mobile gaming transactions.

That leaves only the iOS segment of the mobile gaming transactions market.  And, as we have just seen, Apple’s policies give it no new power to extract surplus from those transactions; because it controls access to iOS, it could do so using other means.

Nor do the challenged policies enable Apple to maintain its market power in any conceivable market.  This is not a situation like Microsoft where a firm in a market adjacent to a monopolist’s could somehow pose a challenge to that monopolist, and the monopolist nips the potential competition in the bud by reducing the potential rival’s scale.  There is no evidence in the record to support the (implausible) notion that rival iOS app stores or in-app payment processing systems could ever evolve in a manner that would pose a challenge to Apple’s position in mobile devices, mobile operating systems, or any other market in which it conceivably has market power. 

Epic might retort that but for the challenged policies, rivals could challenge Apple’s market share in iOS app distribution and in-app purchase processing.  Rivals could not, however, challenge Apple’s market power in such markets, as that power stems from its control of iOS.  The challenged policies therefore do not enable Apple to shore up any existing market power.

Alternative Means of Extracting Surplus Would Likely Reduce Consumer Welfare

Because the policies Epic has challenged are not the source of Apple’s ability to extract surplus from iOS app transactions, judicial condemnation of the policies would likely induce Apple to extract surplus using different means.  Changing how it earns profits off iOS app usage, however, would likely leave consumers worse off than they are under the status quo.

Apple could simply charge third-party app developers a flat fee for access to the APIs needed to produce operable iOS apps but then allow them to distribute their apps and process in-app payments however they choose.  Such an approach would allow Apple to monetize its innovative app platform while permitting competition among providers of iOS app distribution and in-app payment processing services.  Relative to the status quo, though, such a model would likely reduce consumer welfare by:

  • Reducing the number of free and niche apps, as app developers could no longer avoid a fee to Apple by adopting a free (likely ad-supported) business model, and producers of niche apps may not generate enough revenue to justify Apple’s flat fee;
  • Raising business risks for app developers, who, if Apple cannot earn incremental revenue off sales and use of their apps, may face a greater likelihood that the functionality of those apps will be incorporated into future versions of iOS;
  • Reducing Apple’s incentive to improve iOS and its mobile devices, as eliminating Apple’s incremental revenue from app usage reduces its motivation to make costly enhancements that keep users on their iPhones and iPads;
  • Raising the price of iPhones and iPads and generating deadweight loss, as Apple could no longer charge higher effective prices to people who use apps more heavily and would thus likely hike up its device prices, driving marginal consumers from the market; and
  • Reducing user privacy and security, as jettisoning a closed app distribution model (App Store only) would impair Apple’s ability to screen iOS apps for features and bugs that create security and privacy risks.

An alternative approach—one that would avoid many of the downsides just stated by allowing Apple to continue earning incremental revenue off iOS app usage—would be for Apple to charge app developers a revenue-based fee for access to the APIs and other amenities needed to produce operable iOS apps.  That approach, however, would create other costs that would likely leave consumers worse off than they are under the status quo.

The policies Epic has challenged allow Apple to collect a share of revenues from iOS app transactions immediately at the point of sale.  Replacing those policies with a revenue-based API license system would require Apple to incur additional costs of collecting revenues and ensuring that app developers are accurately reporting them.  In order to extract the same surplus it currently collects—and to which it is entitled given its legitimately obtained market power—Apple would have to raise its revenue-sharing percentage above its current commission rate to cover its added collection and auditing costs.
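A rough sketch of that point, using invented figures (the 30% commission appears in the case record; the revenue and auditing-cost numbers are assumptions for illustration): to net the same take under a revenue-based API license, the stated rate must exceed the current commission by the auditing cost's share of revenue.

```python
# Status quo: a 30% commission collected automatically at the point of sale.
commission_rate = 0.30
app_revenue = 1_000_000  # a developer's annual iOS revenue (assumed)
audit_cost = 25_000      # Apple's added cost of collection/verification (assumed)

# Net surplus Apple extracts today via App Store/IAP commissions:
status_quo_take = commission_rate * app_revenue

# Rate a revenue-based API license would need to yield the same net take
# once Apple bears its own collection and auditing costs:
required_rate = (status_quo_take + audit_cost) / app_revenue

assert required_rate > commission_rate  # 32.5% > 30% under these assumptions
```

The gap between the two rates is pure deadweight from the developer's perspective: it funds monitoring rather than any service.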

The fact that Apple has elected not to adopt this alternative means of collecting the revenues to which it is entitled suggests that the added costs of moving to the alternative approach (extra collection and auditing costs) would exceed any additional consumer benefit such a move would produce.  Because Apple can collect the same revenue percentage from app transactions two different ways, it has an incentive to select the approach that maximizes iOS app transaction revenues.  That is the approach that creates the greatest value for consumers and also for Apple. 

If Apple believed that the benefits to app users of competition in app distribution and in-app payment processing would exceed the extra costs of collection and auditing, it would have every incentive to switch to a revenue-based licensing regime and increase its revenue share enough to cover its added collection and auditing costs.  As such an approach would enhance the net value consumers receive when buying apps and making in-app purchases, it would raise overall app revenues, boosting Apple’s bottom line.  The fact that Apple has not gone in this direction, then, suggests that it does not believe consumers would receive greater benefit under the alternative system.  Apple might be wrong, of course.  But it has a strong motivation to make the consumer welfare-enhancing decision here, as doing so maximizes its own profits.

The policies Epic has challenged do not enhance or shore up Apple’s market power, a salutary prerequisite to antitrust liability.  Furthermore, condemning the policies would likely lead Apple to monetize its innovative app platform in a manner that would reduce consumer welfare relative to the status quo.  The Ninth Circuit should therefore affirm the district court’s rejection of Epic’s antitrust claims.

There has been a rapid proliferation of proposals in recent years to closely regulate competition among large digital platforms. The European Union’s Digital Markets Act (DMA, which will become effective in 2023) imposes a variety of data-use, interoperability, and non-self-preferencing obligations on digital “gatekeeper” firms. A host of other regulatory schemes are being considered in Australia, France, Germany, and Japan, among other countries (for example, see here). The United Kingdom has established a Digital Markets Unit “to operationalise the future pro-competition regime for digital markets.” Recently introduced U.S. Senate and House bills—although touted as “antitrust reform” legislation—effectively amount to “regulation in disguise” of disfavored business activities by very large companies, including the major digital platforms (see here and here).

Sorely missing from these regulatory proposals is any sense of the fallibility of regulation. Indeed, proponents of new regulatory proposals seem to implicitly assume that government regulation of platforms will enhance welfare, ignoring real-life regulatory costs and regulatory failures (see here, for example). Without evidence, new regulatory initiatives are put forth as superior to long-established, consumer-based antitrust law enforcement.

The hope that new regulatory tools will somehow “solve” digital market competitive “problems” stems from the untested assumption that established consumer welfare-based antitrust enforcement is “not up to the task.” Untested assumptions, however, are an unsound guide to public policy decisions. Rather, in order to optimize welfare, all proposed government interventions in the economy, including regulation and antitrust, should be subject to decision-theoretic analysis that is designed to minimize the sum of error and decision costs (see here). What might such an analysis reveal?

Wonder no more. In a just-released Mercatus Center Working Paper, Professor Thom Lambert has conducted a decision-theoretic analysis that evaluates the relative merits of U.S. consumer welfare-based antitrust, ex ante regulation, and ongoing agency oversight in addressing the market power of large digital platforms. While explaining that antitrust and its alternatives have their respective costs and benefits, Lambert concludes that antitrust is the welfare-superior approach to dealing with platform competition issues. According to Lambert:

This paper provides a comparative institutional analysis of the leading approaches to addressing the market power of large digital platforms: (1) the traditional US antitrust approach; (2) imposition of ex ante conduct rules such as those in the EU’s Digital Markets Act and several bills recently advanced by the Judiciary Committee of the US House of Representatives; and (3) ongoing agency oversight, exemplified by the UK’s newly established “Digital Markets Unit.” After identifying the advantages and disadvantages of each approach, this paper examines how they might play out in the context of digital platforms. It first examines whether antitrust is too slow and indeterminate to tackle market power concerns arising from digital platforms. It next considers possible error costs resulting from the most prominent proposed conduct rules. It then shows how three features of the agency oversight model—its broad focus, political susceptibility, and perpetual control—render it particularly vulnerable to rent-seeking efforts and agency capture. The paper concludes that antitrust’s downsides (relative indeterminacy and slowness) are likely to be less significant than those of ex ante conduct rules (large error costs resulting from high informational requirements) and ongoing agency oversight (rent-seeking and agency capture).

Lambert’s analysis should be carefully consulted by American legislators and potential rule-makers (including at the Federal Trade Commission) before they institute digital platform regulation. One hopes that enlightened foreign competition officials will also take note of Professor Lambert’s well-reasoned study. 

Interrogations concerning the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable, despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on General Equilibrium Theory is widely acknowledged as one of the most important achievements of modern economics.

But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.

This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.

Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.

Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.

Bees

Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.

The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure; bees fly where they please and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both. This led James Meade to conclude:

[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.

A finding echoed by Francis Bator:

If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.

It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?

The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3km). This left economic agents with ample scope to prevent free-riding.

Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:

Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.

But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:

Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.

In short, not only did the bee/orchard externality model fail, but it failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. In short, the bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.

The Lighthouse

Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.

Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is mostly impossible to determine whether a ship has paid fees and to turn off the lighthouse if that is not the case). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:

Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.

He added that:

[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.

More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.

What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:

[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.

In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.

Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:

The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.

Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on the arrangement. It also entailed some market power: the ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). It is worth noting, though, that tying port fees and light dues might also have decreased double marginalization, to the benefit of shipowners.

Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:

Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?

However, as Coase explained, light dues represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:

[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.

Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and lack of competition (and the information it generates) tend to stem from the latter.

Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.

The Tragedy of the Commons

Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.

The most famous formulation of this problem is Garrett Hardin’s highly influential (over 47,000 citations) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:

The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.

In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and fears of global overpopulation.
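
This unpriced-externality logic can be captured in a toy simulation. All numbers and functional forms below are illustrative assumptions, not drawn from Hardin: each herdsman captures the full value of an extra animal but bears only a fraction of the crowding cost, so individually rational herds overshoot the social optimum.

```python
# Toy model of Hardin's commons (illustrative numbers): the value of each
# animal falls as the shared pasture gets more crowded, but each herdsman
# weighs only his own payoff.

A, B = 100.0, 1.0   # value per animal = A - B * (total herd size)
HERDERS = 10

def value_per_animal(total):
    return max(0.0, A - B * total)

def payoff(own, total):
    return own * value_per_animal(total)

# Best-response dynamics: each herdsman keeps adding animals while doing
# so raises HIS OWN payoff, taking the others' herds as given.
herds = [0] * HERDERS
changed = True
while changed:
    changed = False
    for i in range(HERDERS):
        total = sum(herds)
        if payoff(herds[i] + 1, total + 1) > payoff(herds[i], total):
            herds[i] += 1
            changed = True

equilibrium = sum(herds)
social_optimum = max(range(int(A / B) + 1),
                     key=lambda n: n * value_per_animal(n))

print("equilibrium herd:", equilibrium)    # 90: the commons is overgrazed
print("social optimum:  ", social_optimum)  # 50
print("total value at equilibrium:", payoff(equilibrium, equilibrium))          # 900.0
print("total value at optimum:    ", payoff(social_optimum, social_optimum))   # 2500.0
```

In this stylized setting, decentralized decisions dissipate most of the pasture’s value relative to the optimum, which is exactly the tragedy Hardin described, and exactly the outcome Ostrom showed real communities often avert through rules and norms.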

Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:

The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.

As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often found ways to mitigate these potential externalities markedly. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.

Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard-essential patent industry.

These bottom-up solutions are certainly not perfect. Many commons institutions fail; Elinor Ostrom, for example, documents several problematic fisheries, groundwater basins, and forests, although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:

Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:

Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.

In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: what works better in each case, government intervention, propertization, or emergent rules and norms?

More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:

The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.

Dvorak Keyboards

In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history then became a dominant narrative in the field of network economics, including works by Joseph Farrell & Garth Saloner, and Jean Tirole.

The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:

Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]

Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.
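
The lock-in mechanism David described is easy to capture in a toy sequential-adoption model. The qualities and network weight below are hypothetical, and the point is to illustrate the theory’s internal logic, not the historical record: when adopters value the installed base as well as intrinsic quality, a small early lead can outweigh a quality disadvantage and become self-reinforcing.

```python
# Toy sequential-adoption model of standard lock-in (illustrative only).
# Each new adopter values a standard's intrinsic quality plus a network
# benefit proportional to its installed base, and picks the higher total.

QUALITY = {"qwerty": 1.0, "dvorak": 1.2}   # assume the rival is "better"
NETWORK_WEIGHT = 0.05                      # value of each existing user

def run(early_qwerty_lead, adopters=1000):
    base = {"qwerty": early_qwerty_lead, "dvorak": 0}
    for _ in range(adopters):
        utility = {s: QUALITY[s] + NETWORK_WEIGHT * base[s] for s in base}
        base[max(utility, key=utility.get)] += 1
    return base

# With no head start, the higher-quality standard wins every adoption:
print(run(early_qwerty_lead=0))    # {'qwerty': 0, 'dvorak': 1000}
# A 5-user head start outweighs the quality gap, and the market locks in:
print(run(early_qwerty_lead=5))    # {'qwerty': 1005, 'dvorak': 0}
```

The model’s tidiness is precisely what Liebowitz and Margolis called into question: whether its assumptions (a genuinely superior rival, adoption driven mainly by installed base) actually described the typewriter market.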

Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard layout market. They almost entirely rejected any notion that QWERTY prevailed despite being the inferior standard:

Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.

In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.

Killzones, Zoom, and TikTok

If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.

For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:

If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.

Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence that it holds in real-world settings. Admittedly, the paper does present evidence of reduced venture capital investments after mergers involving large tech firms. But even on its own terms, this evidence does not support the authors’ behavioral assumption.

And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).

But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I have written previously:

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples. These include the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline. In all these cases, outcomes do not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.

In Conclusion

My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.

In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.

For a start, as Ronald Coase argued in perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.

Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.

Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.

All of this raises an issue that deserves far more attention than it currently receives in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, the European Union, and the United Kingdom. These bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.

This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate problems. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.

The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:

This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].