
With the passing of Justice Ruth Bader Ginsburg, many have already noted her impact on the law as an advocate for gender equality and women’s rights, her importance as a role model for women, and her civility. Indeed, a key piece of her legacy is that she was a jurist in the classic sense of the word: she believed in using coherent legal reasoning to reach a result. And that meant Justice Ginsburg’s decisions sometimes cut against partisan political expectations. 

This is clearly demonstrated in our little corner of the law: RBG frequently voted in the majority on antitrust cases in a manner that—to populist left-wing observers—would be surprising. Moreover, she authored an important opinion in a price discrimination case that likewise cuts against the expectations of populist antitrust critics and demonstrates her nuanced jurisprudence.

RBG’s record on the Court shows a respect for the evolving nature of antitrust law

In the absence of written opinions of her own, it is difficult to discern what was actually in Justice Ginsburg’s mind as she encountered antitrust issues. But her voting record represents at least a willingness to approach antitrust in an apolitical manner.

Over the last several decades, Justice Ginsburg joined the Supreme Court majority in many cases dealing with a wide variety of antitrust issues, including the duty to deal doctrine, vertical restraints, joint ventures, and mergers. In many of these cases, RBG aligned herself with judgments of the type that the antitrust populists criticize.

The following are major consumer welfare standard cases that helped shape the current state of antitrust law in which she joined the majority or issued a concurrence: 

  • Verizon Commc’ns Inc. v. Law Offices of Curtis Trinko, LLP, 540 U.S. 398 (2004) (unanimous opinion heightening the standard for finding a duty to deal)
  • Pacific Bell Tel. Co. v. linkLine Commc’ns, Inc., 555 U.S. 438 (2009) (Justice Ginsburg joined the concurrence finding there was no “price squeeze” but suggesting the predatory pricing claim should be remanded)
  • Weyerhaeuser Co. v. Ross-Simmons Hardwood Lumber Co., Inc., 549 U.S. 312 (2007) (unanimous opinion finding predatory buying claims are still subject to the dangerous probability of recoupment test from Brooke Group)
  • Apple Inc. v. Pepper, 139 S.Ct. 1514 (2019) (part of majority written by Justice Kavanaugh finding that iPhone owners were direct purchasers under Illinois Brick who may sue Apple for alleged monopolization)
  • State Oil Co. v. Khan, 522 U.S. 3 (1997) (unanimous opinion overturning per se treatment of vertical maximum price fixing under Albrecht and applying rule of reason standard)
  • Texaco Inc. v. Dagher, 547 U.S. 1 (2006) (unanimous opinion finding it is not per se illegal under §1 of the Sherman Act for a lawful, economically integrated joint venture to set the prices at which it sells its products)
  • Illinois Tool Works Inc. v. Independent Ink, Inc., 547 U.S. 28 (2006) (unanimous opinion finding that a patent does not necessarily confer market power upon the patentee and that, in all cases involving a tying arrangement, the plaintiff must prove that the defendant has market power in the tying product)
  • U.S. v. Baker Hughes, Inc., 908 F. 2d 981 (D.C. Cir. 1990) (unanimous opinion written by then-Judge Clarence Thomas while he and then-Judge Ginsburg both served on the D.C. Circuit Court of Appeals, rejecting the government’s argument that the defendant in a Section 7 merger challenge can rebut a prima facie case only by a clear showing that entry into the market by competitors would be quick and effective)

Even where she joined the dissent in antitrust cases, she did so within the ambit of the consumer welfare standard. Thus, while she was part of the dissent in cases like Leegin Creative Leather Products, Inc. v. PSKS, Inc., 551 U.S. 877 (2007), Bell Atlantic Corp. v. Twombly, 550 U.S. 544 (2007), and Ohio v. American Express Co., 138 S.Ct. 2274 (2018), she still left a legacy of supporting modern antitrust jurisprudence. In those cases, RBG simply had a different vision for how best to optimize consumer welfare.

Justice Ginsburg’s Volvo Opinion

The 2006 decision Volvo Trucks North America, Inc. v. Reeder-Simco GMC, Inc. was one of the few antitrust decisions authored by RBG and shows her appreciation for the consumer welfare standard. In particular, Justice Ginsburg affirmed the notion that antitrust law is designed to protect competition, not competitors—a lesson that, as of late, needs to be refreshed.

Volvo, a 7-2 decision, dealt with the Robinson-Patman Act’s prohibition on price discrimination. Reeder-Simco, a regional dealer that sold Volvo trucks, alleged that Volvo was violating the Robinson-Patman Act by selling trucks to it at different prices than to other Volvo dealers.

The Robinson-Patman Act is frequently cited by antitrust populists as a way to return antitrust law to its former glory. A main argument of Lina Khan’s Amazon’s Antitrust Paradox was that the Chicago School had distorted the law on vertical restraints generally, and price discrimination in particular. One source of this distortion in Khan’s opinion has been the Supreme Court’s mishandling of the Robinson-Patman Act.

Yet, in Volvo we see Justice Ginsburg wrestling with the Robinson-Patman Act in a way that gives effect to the law as written, which may run counter to some of the contemporary populist impulse to revise the Court’s interpretation of antitrust laws. Justice Ginsburg, citing Brown & Williamson, first noted that:

Mindful of the purposes of the Act and of the antitrust laws generally, we have explained that Robinson-Patman does not “ban all price differences charged to different purchasers of commodities of like grade and quality.”

Instead, the Robinson-Patman Act was aimed at a particular class of harms that Congress believed existed when large chain-stores were able to exert something like monopsony buying power. Moreover, Justice Ginsburg noted, the Act “proscribes ‘price discrimination only to the extent that it threatens to injure competition’[.]”

Under the Act, plaintiffs needed to produce evidence that Volvo systematically treated them as “disfavored” purchasers as against another set of “favored” purchasers. Instead, all the plaintiffs could produce was anecdotal and inconsistent evidence of Volvo disfavoring them. The plaintiffs—and, theoretically, other similarly situated Volvo dealers—were in a sense harmed by Volvo. Yet Justice Ginsburg was unwilling to rewrite the Act on Congress’s behalf to incorporate new harms discovered later (a fact that would not earn her accolades in populist circles these days).

Instead, Justice Ginsburg wrote that:

Interbrand competition, our opinions affirm, is the “primary concern of antitrust law.”… The Robinson-Patman Act signals no large departure from that main concern. Even if the Act’s text could be construed in the manner urged by [plaintiffs], we would resist interpretation geared more to the protection of existing competitors than to the stimulation of competition. In the case before us, there is no evidence that any favored purchaser possesses market power, the allegedly favored purchasers are dealers with little resemblance to large independent department stores or chain operations, and the supplier’s selective price discounting fosters competition among suppliers of different brands… By declining to extend Robinson-Patman’s governance to such cases, we continue to construe the Act “consistently with broader policies of the antitrust laws.” Brooke Group, 509 U.S., at 220… (cautioning against Robinson-Patman constructions that “extend beyond the prohibitions of the Act and, in doing so, help give rise to a price uniformity and rigidity in open conflict with the purposes of other antitrust legislation”).

Thus, interested in the soundness of her jurisprudence in the face of a well-developed body of antitrust law, Justice Ginsburg chose to continue to develop that body of law rather than engage in judicial policymaking in favor of a sympathetic plaintiff. 

It must surely be tempting for a justice on the Court to adopt less principled approaches to the law in any given case, which makes it all the more impressive that Justice Ginsburg consistently stuck to her principles. We can only hope her successor takes note of Justice Ginsburg’s example.

Apple’s legal team will be relieved that “you reap what you sow” is just a proverb. After a long-running antitrust battle against Qualcomm unsurprisingly ended in failure, Apple now faces antitrust accusations of its own (most notably from Epic Games). Somewhat paradoxically, this turn of events might cause Apple to see its previous defeat in a new light. Indeed, the well-established antitrust principles that scuppered Apple’s challenge against Qualcomm will now be the rock upon which it builds its legal defense.

But while Apple’s reversal of fortunes might seem anecdotal, it neatly illustrates a fundamental – and often overlooked – principle of antitrust policy: Antitrust law is about maximizing consumer welfare. Accordingly, the allocation of surplus between two companies is only incidentally relevant to antitrust proceedings, and it certainly is not a goal in and of itself. In other words, antitrust law is not about protecting David from Goliath.

Jockeying over the distribution of surplus

Or at least that is the theory. In practice, however, most antitrust cases are but small parts of much wider battles in which corporations use courts and regulators to jockey for market position and/or to tilt the distribution of surplus in their favor. The Microsoft competition suits brought by the DOJ and the European Commission (in the US and EU, respectively) partly originated from complaints, and lobbying, by Sun Microsystems, Novell, and Netscape. Likewise, the European Commission’s case against Google was prompted by accusations from Microsoft and Oracle, among others. The European Intel case was initiated following a complaint by AMD. The list goes on.

The last couple of years have witnessed a proliferation of antitrust suits that are emblematic of this type of power tussle. For instance, Apple has been notoriously industrious in using the court system to lower the royalties that it pays to Qualcomm for LTE chips. One of the focal points of Apple’s discontent was Qualcomm’s policy of basing royalties on the end-price of devices (Qualcomm charged iPhone manufacturers a 5% royalty rate on their handset sales – and Apple received further rebates):

“The whole idea of a percentage of the cost of the phone didn’t make sense to us,” [Apple COO Jeff Williams] said. “It struck at our very core of fairness. At the time we were making something really really different.”

This pricing dispute not only gave rise to high-profile court cases; it also led Apple to lobby Standards Development Organizations (“SDOs”) in a partly successful attempt to make them amend their patent policies so as to prevent this type of pricing.

However, in a highly ironic turn of events, Apple now finds itself on the receiving end of strikingly similar allegations. At issue is the 30% commission that Apple charges for in-app purchases on the iPhone and iPad. These “high” commissions led several companies to lodge complaints with competition authorities (Spotify and Facebook, in the EU) and file antitrust suits against Apple (Epic Games, in the US).

Of course, these complaints are couched in more sophisticated, and antitrust-relevant, reasoning. But that doesn’t alter the fact that these disputes are ultimately driven by firms trying to tilt the allocation of surplus in their favor (for a more detailed explanation, see Apple and Qualcomm).

Pushback from courts: The Qualcomm case

Against this backdrop, a string of recent cases sends a clear message to would-be plaintiffs: antitrust courts will not be drawn into rent allocation disputes that have no bearing on consumer welfare. 

The best example of this judicial trend is Qualcomm’s victory before the United States Court of Appeals for the Ninth Circuit. The case centered on the royalties that Qualcomm charged OEMs for its Standard Essential Patents (SEPs). Both the district court and the FTC found that Qualcomm had deployed a series of tactics (rebates, refusals to deal, etc.) that enabled it to circumvent its FRAND pledges.

However, the Court of Appeals was not convinced. It found neither consumer harm nor a recognizable antitrust infringement. Instead, it held that the dispute at hand was essentially a matter of contract law:

To the extent Qualcomm has breached any of its FRAND commitments, a conclusion we need not and do not reach, the remedy for such a breach lies in contract and patent law. 

This is not surprising. From the outset, numerous critics pointed out that the case lay well beyond the narrow confines of antitrust law. The scathing dissenting statement written by Commissioner Maureen Ohlhausen is revealing:

[I]n the Commission’s 2-1 decision to sue Qualcomm, I face an extraordinary situation: an enforcement action based on a flawed legal theory (including a standalone Section 5 count) that lacks economic and evidentiary support, that was brought on the eve of a new presidential administration, and that, by its mere issuance, will undermine U.S. intellectual property rights in Asia and worldwide. These extreme circumstances compel me to voice my objections. 

In reaching its conclusion, the Court notably rejected the notion that SEP royalties should be systematically based upon the “Smallest Saleable Patent Practicing Unit” (or SSPPU):

Even if we accept that the modem chip in a cellphone is the cellphone’s SSPPU, the district court’s analysis is still fundamentally flawed. No court has held that the SSPPU concept is a per se rule for “reasonable royalty” calculations; instead, the concept is used as a tool in jury cases to minimize potential jury confusion when the jury is weighing complex expert testimony about patent damages.

Similarly, it saw no objection to Qualcomm licensing its technology at the OEM level (rather than the component level):

Qualcomm’s rationale for “switching” to OEM-level licensing was not “to sacrifice short-term benefits in order to obtain higher profits in the long run from the exclusion of competition,” the second element of the Aspen Skiing exception. Aerotec Int’l, 836 F.3d at 1184 (internal quotation marks and citation omitted). Instead, Qualcomm responded to the change in patent-exhaustion law by choosing the path that was “far more lucrative,” both in the short term and the long term, regardless of any impacts on competition. 

Finally, the Court concluded that a firm breaching its FRAND pledges did not automatically amount to anticompetitive conduct: 

We decline to adopt a theory of antitrust liability that would presume anticompetitive conduct any time a company could not prove that the “fair value” of its SEP portfolios corresponds to the prices the market appears willing to pay for those SEPs in the form of licensing royalty rates.

Taken together, these findings paint a very clear picture. The Qualcomm Court repeatedly rejected the radical idea that US antitrust law should concern itself with the prices charged by monopolists — as opposed to practices that allow firms to illegally acquire or maintain a monopoly position. The words of Learned Hand and those of Antonin Scalia (respectively, below) loom large:

The successful competitor, having been urged to compete, must not be turned upon when he wins. 

And,

To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.

Other courts (both in the US and abroad) have reached similar conclusions

For instance, a district court in Texas dismissed a suit brought by Continental Automotive Systems (which supplies electronic systems to the automotive industry) against a group of SEP holders. 

Continental challenged the patent holders’ decision to license their technology at the vehicle rather than the component level (the allegation is very similar to the FTC’s complaint that Qualcomm licensed its SEPs at the OEM rather than the chipset level). However, following a forceful intervention by the DOJ, the Court ultimately held that the facts alleged by Continental were not indicative of antitrust injury. It thus dismissed the case.

Likewise, within weeks of the Qualcomm and Continental decisions, the UK Supreme Court also ruled in favor of SEP holders. In its Unwired Planet ruling, the Court concluded that discriminatory licenses did not automatically infringe competition law (even though they might breach a firm’s contractual obligations):

[I]t cannot be said that there is any general presumption that differential pricing for licensees is problematic in terms of the public or private interests at stake.

In reaching this conclusion, the UK Supreme Court emphasized that the determination of whether licenses were FRAND, or not, was first and foremost a matter of contract law. In the case at hand, the most important guide to making this determination was the internal rules of the relevant SDO (as opposed to competition case law):

Since price discrimination is the norm as a matter of licensing practice and may promote objectives which the ETSI regime is intended to promote (such as innovation and consumer welfare), it would have required far clearer language in the ETSI FRAND undertaking to indicate an intention to impose the more strict, “hard-edged” non-discrimination obligation for which Huawei contends. Further, in view of the prevalence of competition laws in the major economies around the world, it is to be expected that any anti-competitive effects from differential pricing would be most appropriately addressed by those laws

All of this ultimately led the Court to rule in favor of Unwired Planet, thus dismissing Huawei’s claims that it had infringed competition law by breaching its FRAND pledges. 

In short, courts and antitrust authorities on both sides of the Atlantic have repeatedly, and unambiguously, concluded that pricing disputes (albeit in the specific context of technological standards) are generally a matter of contract law. Antitrust/competition law intercedes only when unfair/excessive/discriminatory prices are both caused by anticompetitive behavior and result in anticompetitive injury.

Apple’s loss is… Apple’s gain

Readers might wonder how the above cases relate to Apple’s app store. But, on closer inspection, the parallels are numerous. As explained above, courts have repeatedly stressed that antitrust enforcement should not concern itself with the allocation of surplus between commercial partners. Yet that is precisely what Epic Games’ suit against Apple is all about.

Indeed, Epic’s central claim is not that it is somehow foreclosed from Apple’s App Store (for example, because Apple might have agreed to exclusively distribute the games of one of Epic’s rivals). Instead, all of its objections are down to the fact that it would like to access Apple’s store under more favorable terms:

Apple’s conduct denies developers the choice of how best to distribute their apps. Developers are barred from reaching over one billion iOS users unless they go through Apple’s App Store, and on Apple’s terms. […]

Thus, developers are dependent on Apple’s noblesse oblige, as Apple may deny access to the App Store, change the terms of access, or alter the tax it imposes on developers, all in its sole discretion and on the commercially devastating threat of the developer losing access to the entire iOS userbase. […]

By imposing its 30% tax, Apple necessarily forces developers to suffer lower profits, reduce the quantity or quality of their apps, raise prices to consumers, or some combination of the three.

And the parallels with the Qualcomm litigation do not stop there. Epic is effectively asking courts to make Apple monetize its platform at a different level than the one that it chose to maximize its profits (no more monetization at the app store level). Similarly, Epic Games omits any suggestion of profit sacrifice on the part of Apple — even though it is a critical element of most unilateral conduct theories of harm. Finally, Epic is challenging conduct that is both the industry norm and emerged in a highly competitive setting.

In short, all of Epic’s allegations are about monopoly prices, not monopoly maintenance or monopolization. Accordingly, just as the SEP cases discussed above were plainly beyond the outer bounds of antitrust enforcement (something that the DOJ repeatedly stressed with regard to the Qualcomm case), so too is the current wave of antitrust litigation against Apple. When all is said and done, Apple might thus be relieved that Qualcomm was victorious in their antitrust confrontation. Indeed, the legal principles that caused its demise against Qualcomm are precisely the ones that will, likely, enable it to prevail against Epic Games.

Germán Gutiérrez and Thomas Philippon have released a major rewrite of their paper comparing the U.S. and EU competitive environments. 

Although the NBER website provides an enticing title — “How European Markets Became Free: A Study of Institutional Drift” — the paper itself has a much more yawn-inducing title: “How EU Markets Became More Competitive Than US Markets: A Study of Institutional Drift.”

Having already critiqued the original paper at length (here and here), I wouldn’t normally take much interest in the do-over. However, in a recent episode of Tyler Cowen’s podcast, Jason Furman gave a shout out to Philippon’s work on increasing concentration. So, I thought it might be worth a review.

As with the original, the paper begins with a conclusion: The EU appears to be more competitive than the U.S. The authors then concoct a theory to explain their conclusion. The theory’s a bit janky, but it goes something like this:

  • Because of lobbying pressure and regulatory capture, an individual country will enforce competition policy at a suboptimal level.
  • Because of competing interests among different countries, a “supra-national” body will be more independent and better able to foster pro-competitive policies and to engage in more vigorous enforcement of competition policy.
  • The EU’s supra-national body and its Directorate-General for Competition is more independent than the U.S. Department of Justice and Federal Trade Commission.
  • Therefore, their model explains why the EU is more competitive than the U.S. Q.E.D.

If you’re looking for what this has to do with “institutional drift,” don’t bother. The term only shows up in the title.

The original paper provided evidence from 12 separate “markets” that, the authors say, demonstrated their conclusion about EU vs. U.S. competitiveness. These weren’t really “markets” in the competition policy sense; they were just broad industry categories, such as health, information, trade, and professional services (actually “other business sector services”).

As pointed out in one of my earlier critiques, in all but one of these industries, the 8-firm concentration ratios for the U.S. and the EU are below 40 percent, and the HHI measures reported in the original paper are at levels that most observers would presume to be competitive.

Sending their original markets to drift in the appendices, Gutiérrez and Philippon’s revised paper focuses its attention on two markets — telecommunications and airlines — to highlight their claims that EU markets are more competitive than the U.S. First, telecoms:

To be more concrete, consider the Telecom industry and the entry of the French Telecom company Free Mobile. Until 2011, the French mobile industry was an oligopoly with three large historical incumbents and weak competition. … Free obtained its 4G license in 2011 and entered the market with a plan of unlimited talk, messaging and data for €20. Within six months, the incumbents Orange, SFR and Bouygues had reacted by launching their own discount brands and by offering €20 contracts as well. … The relative price decline was 40%: France went from being 15% more expensive than the US [in 2011] to being 25% cheaper in about two years [in 2013].

While this is an interesting story about how entry can increase competition, the story of a single firm entering a market in a single country is hardly evidence that the EU as a whole is more competitive than the U.S.

What Gutiérrez and Philippon don’t report is that from 2013 to 2019, prices declined by 12% in the U.S. and only 8% in France. In the EU as a whole, prices decreased by only 5% over the years 2013-2019.

Gutiérrez and Philippon’s passenger airline story is even weaker. Because airline prices don’t fit their narrative, they argue that increasing airline profits are evidence that the U.S. is less competitive than the EU. 

Figure 5 of their paper (“Air Transportation Profits and Concentration, EU vs US”) plots profits and concentration for both regions. They claim that the “rise in US concentration and profits aligns closely with a controversial merger wave,” with a vertical line in the figure marking the Delta-Northwest merger.

Sure, profitability among U.S. firms increased. But, before the “merger wave,” profits were negative. Perhaps predatory pricing is pro-competitive after all.

Where Gutiérrez and Philippon really fumble is with airline pricing. Since the merger wave that pulled the U.S. airline industry out of insolvency, ticket prices (as measured by the Consumer Price Index) have decreased by 6%. In France, prices increased by 4%, and in the EU, prices increased by 30%.

The paper relies more heavily on eyeballing graphs than on statistical analysis, but something about Table 2 caught my attention — the R-squared statistics. First, they’re all over the place. But look at column (1): a perfect 1.00 R-squared. Could it be that Gutiérrez and Philippon’s statistical model has (almost) as many parameters as observations?

Notice that all the regressions with an R-squared of 0.9 or higher include country fixed effects. The two regressions with R-squareds of 0.95 and 0.96 also include country-industry fixed effects. It’s very possible that the regression results are driven entirely by idiosyncratic differences among countries and industries.
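
The following toy simulation illustrates the concern (the panel dimensions, noise levels, and data are invented for illustration; nothing here comes from the paper): saturate a regression with country-industry fixed effects and the R-squared climbs toward 1.00 even when the concentration measure, by construction, has no effect on prices at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_industries, n_years = 15, 12, 10

# Panel indices: one row per country-industry-year observation
country = np.repeat(np.arange(n_countries), n_industries * n_years)
industry = np.tile(np.repeat(np.arange(n_industries), n_years), n_countries)
pair = country * n_industries + industry  # country-industry cell

# Prices are driven entirely by idiosyncratic country-industry differences;
# "concentration" (log CR4) is pure noise, unrelated to prices by construction.
pair_effect = rng.normal(0, 1, n_countries * n_industries)
log_price = pair_effect[pair] + rng.normal(0, 0.05, pair.size)
log_cr4 = rng.normal(0, 1, pair.size)

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

ones = np.ones((pair.size, 1))
fe_dummies = (pair[:, None] == np.arange(n_countries * n_industries)).astype(float)

print("R2, concentration only:            ",
      round(r_squared(np.column_stack([ones, log_cr4]), log_price), 2))
print("R2, concentration + fixed effects: ",
      round(r_squared(np.column_stack([fe_dummies, log_cr4]), log_price), 2))
```

Here the fixed effects alone explain virtually all of the price variation; the near-perfect fit says nothing about whether concentration matters.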

Gutiérrez and Philippon provide no interpretation for their results in Table 2, but it seems to work like this, using column (1): A 10% increase in the 4-firm concentration ratio (which is different from a 10 percentage point increase) would be associated with a 1.8% increase in prices four years later. So, an increase in CR4 from 20% to 22% (or an increase from 60% to 66%) would be associated with a 1.8% increase in prices over four years, or about 0.4% a year. On the one hand, I just don’t buy it. On the other hand, the effect is so small that it seems economically insignificant.
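
For what it’s worth, the arithmetic behind that reading can be reproduced in a couple of lines (the 0.18 elasticity is simply the value implied by the interpretation above, not a figure taken from the paper’s tables):

```python
# Hypothetical log-log coefficient implied by the interpretation above:
# a 10% increase in CR4 is associated with a 1.8% price increase four years later.
elasticity = 0.18
cr4_increase = 0.10              # e.g., CR4 going from 20% to 22%, or 60% to 66%

price_effect_4yr = elasticity * cr4_increase
annualized = (1 + price_effect_4yr) ** (1 / 4) - 1

print(f"{price_effect_4yr:.1%} over four years, roughly {annualized:.2%} per year")
```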

I’m sure Gutiérrez and Philippon have put a lot of time into this paper and its revision. But there’s an old saying that the best thing about banging your head against the wall is that it feels so good when it stops. Perhaps, it’s time to stop with this paper and let it “drift” into obscurity.

In the latest congressional hearing, purportedly analyzing Google’s “stacking the deck” in the online advertising marketplace, much of the opening statement and questioning by Senator Mike Lee and later questioning by Senator Josh Hawley focused on an episode of alleged anti-conservative bias by Google in threatening to demonetize The Federalist, a conservative publisher, unless it exercised a greater degree of control over its comments section. The senators connected this to Google’s “dominance,” arguing that it is only because Google’s ad services are essential that Google can dictate terms to a conservative website. A similar impulse motivates Section 230 reform efforts as well: allegedly anti-conservative online platforms wield their dominance to censor conservative speech, either through deplatforming or demonetization.

Before even getting into the analysis of how to incorporate political bias into antitrust analysis, though, it should be noted that there likely is no viable antitrust remedy. Even aside from the Section 230 debate, online platforms like Google are First Amendment speakers who have editorial discretion over their sites and apps, much like newspapers. An antitrust remedy compelling these companies to carry speech they disagree with would almost certainly violate the First Amendment.

But even aside from the First Amendment aspect of this debate, there is no easy way to incorporate concerns about political bias into antitrust. Perhaps the best way to understand this argument in the antitrust sense is as a non-price effects analysis. 

Political bias could be seen by end consumers as an important aspect of product quality. Conservatives have made the case that not only Google, but also Facebook and Twitter, have discriminated against conservative voices. The argument would then follow that consumer welfare is harmed when these dominant platforms leverage their control of the social media marketplace into the marketplace of ideas by censoring voices with whom they disagree. 

While this has theoretical plausibility, there are real practical difficulties. As Geoffrey Manne and I have written previously, in the context of incorporating privacy into antitrust analysis:

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application. 

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist. 

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

Just as with privacy and other product qualities, the analysis becomes increasingly complex first when tradeoffs between price and quality are introduced, and then even more so when tradeoffs between what different consumer groups perceive as quality are added. In fact, it is more complex than privacy. All but the most exhibitionistic would prefer more to less privacy, all other things being equal. But with political media consumption, most would prefer to have more of what they want to read available, even if it comes at the expense of what others may want. There is no easy way to understand what consumer welfare means in a situation where one group’s preferences need to come at the expense of another’s in moderation decisions.

Consider the case of The Federalist again. The allegation is that Google is imposing its anti-conservative bias by “forcing” the website to clean up its comments section. The argument is that since The Federalist needs Google’s advertising money, it must play by Google’s rules. And since it did so, there is now one less avenue for conservative speech.

What this argument misses is the balance Google and other online services must strike as multi-sided platforms. The goal is to connect advertisers on one side of the platform to users on the other. If a site wants to take advantage of the ad network, it seems inevitable that intermediaries like Google will need to create rules about what can and can’t be shown, or they run the risk of losing advertisers who don’t want to be associated with certain speech or conduct. For instance, most companies don’t want to be associated with racist commentary. Thus, they will take great pains to make sure they don’t sponsor or place ads in venues associated with racism. Online platforms connecting advertisers to potential consumers must take that into consideration.

Users, like those who frequent The Federalist, have unpriced access to content across those sites and apps which are part of ad networks like Google’s. Other models, like paid subscriptions (which The Federalist also has available), are also possible. But it isn’t clear that conservative voices or conservative consumers have been harmed overall by the option of unpriced access on one side of the platform, with advertisers paying on the other side. If anything, it seems the opposite is the case since conservatives long complained about legacy media having a bias and lauded the Internet as an opportunity to gain a foothold in the marketplace of ideas.

Online platforms like Google must balance the interests of users from across the political spectrum. If their moderation practices are too politically biased in one direction or another, users could switch to another online platform with one click or swipe. Assuming online platforms wish to maximize revenue, they have a strong incentive to limit political bias in their moderation practices. The ease of switching to another platform that markets itself as more free-speech-friendly, like Parler, shows entrepreneurs can take advantage of market opportunities if Google and other online platforms go too far with political bias.

While one could perhaps argue that the major online platforms are colluding to keep out conservative voices, this is difficult to square with the different moderation practices each employs, as well as with data suggesting that conservative voices are consistently among the most shared on Facebook.

Antitrust is not a cure-all law. Conservatives who normally understand this need to reconsider whether antitrust is really well-suited for litigating concerns about anti-conservative bias online. 

Municipal broadband has been heavily promoted by its advocates as a potential source of competition against Internet service providers (“ISPs”) with market power. Jonathan Sallet argued in Broadband for America’s Future: A Vision for the 2020s, for instance, that municipal broadband has a huge role to play in boosting broadband competition, with attendant lower prices, faster speeds, and economic development. 

Municipal broadband, of course, can mean more than one thing: from “direct consumer” government-run systems, to “open access” arrangements where the government builds the back end but leaves it to private firms to bring the connections to consumers, to “middle mile” arrangements where the government network reaches only some parts of the community but allows private firms to connect and serve other consumers. The focus of this blog post is on the “direct consumer” model.

There have been many economic studies on municipal broadband, both theoretical and empirical. The literature largely finds that municipal broadband poses serious risks to taxpayers, often relies heavily on cross-subsidies from government-owned electric utilities, crowds out private ISP investment in areas it operates, and largely fails the cost-benefit analysis. While advocates have defended municipal broadband on the grounds of its speed, price, and resulting attractiveness to consumers and businesses, others have noted that many of those benefits come at the expense of other parts of the country from which businesses move. 

What this literature has not touched upon is a more fundamental problem: municipal broadband lacks the price signals necessary for economic calculation. The insights of the Austrian school of economics help explain why this model is incapable of providing efficient outcomes for society. Rather than creating a valuable source of competition, municipal broadband creates “islands of chaos” undisciplined by the market test of profit-and-loss. As a result, municipal broadband is a poor model for promoting competition and innovation in broadband markets.

The importance of profit-and-loss to economic calculation

One of the things often assumed away in economic analysis is the very thing the market process depends upon: the discovery of knowledge. Knowledge, in this context, is not the technical knowledge of how to build or maintain a broadband network, but the more fundamental knowledge which is discovered by those exercising entrepreneurial judgment in the marketplace. 

This type of knowledge is dependent on prices throughout the market. In the market process, prices coordinate exchange between market participants without each knowing the full plan of anyone else. For consumers, prices allow for incremental choices between different options. For producers, prices in capital markets similarly allow for choices between different ways of producing their goods for the next stage of production. Interest rates, themselves prices, help coordinate present consumption, investment, and saving. And the price signal of profit-and-loss allows producers to know whether they have cost-effectively served consumer needs.

The broadband marketplace can’t be considered in isolation from the greater marketplace in which it is situated. But it can be analyzed under the framework of prices and the knowledge they convey.

For broadband consumers, prices are important for determining the relative importance of Internet access compared to other felt needs. The quality of broadband connection demanded by consumers depends on the price. All other things being equal, consumers demand faster connections with less latency. But many consumers may prefer slower, higher-latency connections if they are cheaper. Even the relative importance of upload versus download speeds may be highly asymmetrical when determined by consumers.

While “High Performance Broadband for All” may be a great goal from a social planner’s perspective, individuals acting in the marketplace may prioritize other needs with their scarce resources. Even if consumers do need Internet access of some kind, the benefits of 100 Mbps download speeds over 25 Mbps, or of 100 Mbps upload speeds versus 3 Mbps, may not be worth the costs.

For broadband ISPs, prices for capital goods are important for building out the network. The relative prices of fiber, copper, wireless, and all the other factors of production in building out a network help them choose in light of anticipated profit. 

All the decisions of broadband ISPs are made through the lens of pursuing profit. If they are successful, it is because the revenues generated are greater than the costs of production, including the cost of money represented in interest rates. Just as importantly, loss shows the ISPs were unsuccessful in cost-effectively serving consumers. While broadband companies may be able to have losses over some period of time, they ultimately must turn a profit at some point, or there will be exit from the marketplace. Profit-and-loss both serve important functions.

Sallet misses the point when he states that the “full value of broadband lies not just in the number of jobs it directly creates or the profits it delivers to broadband providers but also in its importance as a mechanism that others use across the economy and society.” From an economic point of view, profits aren’t important because economists love it when broadband ISPs get rich. Profits are important as an incentive to build the networks we all benefit from, and as a signal for greater competition and innovation.

Municipal broadband as islands of chaos

Sallet believes the lack of high-speed broadband (as he defines it) is due to the monopoly power of broadband ISPs. He sees the entry of municipal broadband as pro-competitive. But the entry of a government-run broadband company actually creates “islands of chaos” within the market economy, reducing the ability of prices to coordinate disparate plans of action among participants. This, ultimately, makes society poorer.

The case against municipal broadband doesn’t rely on greater knowledge of how to build or maintain a network being in the hands of private engineers. It relies instead on the different institutional frameworks within which the manager of the government-run broadband network works as compared to the private broadband ISP. The type of knowledge gained in the market process comes from prices, including profit-and-loss. The manager of the municipal broadband network simply doesn’t have access to this knowledge and can’t calculate the best course of action as a result.

This is because the government-run municipal broadband network is not reliant upon revenues generated by free choices of consumers alone. Rather than needing to ultimately demonstrate positive revenue in order to remain a going concern, government-run providers can instead base their ongoing operation on access to below-market loans backed by government power, cross-subsidies when it is run by a government electric utility, and/or public money in the form of public borrowing (i.e. bonds) or taxes. 

Municipal broadband, in fact, does rely heavily on subsidies from the government. As a result, municipal broadband is not subject to the discipline of the market’s profit-and-loss test. This frees the enterprise to focus on other goals, including higher speeds—especially upload speeds—and lower prices than private ISPs often offer in the same market. This is why municipal broadband networks build symmetrical high-speed fiber networks at higher rates than the private sector.

But far from representing a superior source of “competition,” municipal broadband is actually an example of “predatory entry.” In areas where there is already private provision of broadband, municipal broadband can “out-compete” those providers thanks to subsidies from the rest of society. Eventually, this could lead to exit by the private ISPs, starting with the least cost-efficient and moving toward the most. In areas where there is limited provision of Internet access, the entry of municipal broadband could reduce incentives for private entry altogether. In either case, there is little reason to believe municipal broadband actually increases consumer welfare in the long run.

Moreover, there are serious concerns in relying upon municipal broadband for the buildout of ISP networks. While Sallet describes fiber as “future-proof,” there is little reason to think that it is. The profit motive induces broadband ISPs to constantly innovate and improve their networks. Contrary to what you would expect from an alleged monopoly industry, broadband companies are consistently among the highest investors in the American economy. Similar incentives would not apply to municipal broadband, which lacks the profit motive to innovate. 

Conclusion

There is a definite need to improve public policy to promote more competition in broadband markets. But municipal broadband is not the answer. The lack of profit-and-loss prevents the public manager of municipal broadband from having the price signal necessary to know it is serving the public cost-effectively. No amount of bureaucratic management can replace the institutional incentives of the marketplace.

Speaking about his new book in a ProMarket interview, David Dayen inadvertently captures what is perhaps the essential disconnect between antitrust reformers (populists, neo-Brandeisians, hipsters, whatever you may call them) and those of us who are more comfortable with the antitrust status quo (whatever you may call us). He says: “The antitrust doctrine that we’ve seen over the last 40 years simply does not match the lived experience of people.”

Narratives of Consumer Experience of Markets

This emphasis on “lived experience” runs through Dayen’s antitrust perspective. Citing to Hal Singer’s review of the book, the interview notes that “the heart of Dayen’s book is the personal accounts of ordinary Americans—airline passengers, hospital patients, farmers, and small business owners—attempting to achieve a slice of the American dream and facing insurmountable barriers in the form of unaccountable private monopolies.” As Singer notes in his review, “Dayen’s personalized storytelling, free of any stodgy regression analysis, is more likely to move policymakers” than are traditional economic arguments.

Dayen’s focus on individual narratives — of the consumer’s lived experience — is fundamentally different than the traditional antitrust economist’s perspective on competition and the market. It is worth exploring the differences between the two. The basic argument that I make below is that Dayen is right but also that he misunderstands the purpose of competition in a capitalist economy. A robustly competitive market is a brutal rat race that places each individual on an accelerating treadmill. There is no satiation or satisfaction for the individual consumer in these markets. But it is this very lack of satisfaction, this endless thirst for more, that makes competitive markets so powerful, and ultimately beneficial, for consumers. 

This is the fundamental challenge and paradox of capitalism. Satisfaction requires a perspective that most consumers often don’t feel, and that many consumers never will feel. It requires the ability to step off that treadmill occasionally and to look at how far society and individual welfare have come, even if, individually, one feels like one has not moved at all. It requires recognizing that the alternative to an uncomfortable flight to visit family isn’t a comfortable one, but an unaffordable one; that the alternative to low-cost, processed foods isn’t abundant higher-quality food but greater poverty for those who already can least afford food; that the alternative to a startup being beholden to Google’s and Amazon’s terms of service isn’t a market in which they have boundless access to these platforms’ infrastructures, but one in which each startup needs to entirely engineer its own infrastructure. In all of these cases, the fundamental tradeoff is between having something that is less perfect than an imagined ideal of it, and not having it at all.

What Dayen refers to as consumers’ “lived experience” is really their “perceived experience.” This is important to how markets work. Competition is driven by consumers’ perception that things could be better (and by entrepreneurs’ perception that they can make it so). This perception is what keeps us on the treadmill. Consumers don’t look to their past generations and say “wow, by nearly every measure my life can be better than theirs with less effort!” They focus on what they don’t have yet, on the seemingly better lives of their contemporaries.

This description of markets may sound grotesquely dehumanizing. To the extent that it really is, this is because we live in a world of scarcity. There will always be tradeoffs, and in a very real way no consumer will ever have everything that she needs, let alone everything that she wants.

On the flip side, this is what drives markets to make consumers better off. Consumers’ wants drive producers’ factories and innovators’ minds. There is no supply curve without a demand curve. And consumers are able to satisfy their own needs by becoming producers who work to satisfy the wants and needs of others. 

A Fair Question: Are Markets Worth It?

Dayen’s perspective on this description of markets, shared with his fellow reform-minded anti-antitrust crusaders, is that the typical consumers’ perceived experience of the market demonstrates that markets don’t work — that they have been captured by monopolists seeking to extract every ounce of revenue from each individual consumer. But this is not a story of monopolies. It is more plainly the story of markets. What Dayen identifies as a problem with the markets really is just the markets working as they are supposed to.

If this is just how markets work, it is fair to ask whether they are worth it. Importantly, those of us who answer “yes” need not be blind to or dismissive of concerns such as Dayen’s — to the concerns of the typical consumer. Economists have long recognized that capitalist markets are about allocative efficiency, not distributive efficiency — about making society as a whole as wealthy as possible but not about making sure that that wealth is fairly distributed. 

The antitrust reform movement is driven by advocates who long for a world in which everyone is poorer but feels more equal, as opposed to what they perceive as a world in which a few monopolists are extremely wealthy and everyone else feels poor. Their perception of this as the but-for world is not unreasonable, but it is also not accurate. The better world is the one with thriving, prosperous markets, in which consumers broadly feel that they share in this prosperity. It may be the case that such a world has some oligopolies and even monopolies — that is what economic efficiency sometimes looks like.

But those firms’ prosperity need not be adverse to consumers’ experience of the market. The challenging question is how we achieve this outcome. But that is a question of politics and macroeconomic policy, and of corporate social policy. It is a question of national identity, whether consumers’ perception of the economic treadmill can pivot from one of perceived futility to one of recognizing their lived contributions to society. It is one that antitrust law as it exists today contributes to answering, but not one that antitrust law on its own can ever answer.

On the other hand, were we to follow the populists’ lead and turn antitrust into a remedy for the perceived maladies of the market, we would put at risk the very engine that improves consumers’ actual lived experience. The alternative to an antitrust driven by economic analysis, one that errs on the side of not disrupting markets in response to perceived injuries, is an antitrust in which markets are beholden to the whims of politicians and enforcement officials. This is a world in which litigation is used by politicians to make it appear they are delivering on impossible promises, in which litigation is used to displace blame for politicians’ policy failures, and in which litigation is used to distract from socio-political events entirely unrelated to the market.

Concerns such as Dayen’s are timeless and not unreasonable. But reflexive action is not the answer to such concerns. Rather, the response always must be to ask “opposed to what?” What is the but-for world? Here, Dayen and his peers suffer both Type I and Type II errors. They misdiagnose antitrust and non-competitive markets as the cause of their perceived problems. And they are overly confident in their proposed solutions to those problems, not recognizing the real harms that their proposed politicization of antitrust and markets poses.

Much has already been said about the twin antitrust suits filed by Epic Games against Apple and Google. For those who are not familiar with the cases, the game developer – most famous for its hit title Fortnite and the “Unreal Engine” that underpins much of the game (and movie) industry – is complaining that Apple and Google are thwarting competition from rival app stores and in-app payment processors. 

Supporters have been quick to see in these suits a long-overdue challenge to the 30% commissions that Apple and Google charge. Some have even portrayed Epic as a modern-day Robin Hood, leading the fight against Big Tech to the benefit of small app developers and consumers alike. Epic itself has been keen to stoke this image, comparing its litigation to a fight for basic freedoms in the face of Big Brother.

However, upon closer inspection, cracks rapidly appear in this rosy picture. What is left is a company partaking in blatant rent-seeking that threatens to harm the sprawling ecosystems that have emerged around both Apple and Google’s app stores.

Two issues are particularly salient. First, Epic is trying to protect its own interests at the expense of the broader industry. If successful, its suit would merely lead to alternative revenue schemes that – although more beneficial to itself – would leave smaller developers to shoulder higher fees. Second, the fees that Epic portrays as extortionate were in fact key to the emergence of mobile gaming.

Epic’s utopia is not an equilibrium

Central to Epic’s claims is the idea that both Apple and Google: (i) thwart competition from rival app stores, and implement a series of measures that prevent developers from reaching gamers through alternative means (such as pre-installing apps, or sideloading them in the case of Apple’s platforms); and (ii) tie their proprietary payment processing services to their app stores. According to Epic, this ultimately enables both Apple and Google to extract “extortionate” commissions (30%) from app developers.

But Epic’s whole case is based on the unrealistic assumption that both Apple and Google will sit idly by while rival app stores and payment systems free-ride on the vast investments they have ploughed into their respective smartphone platforms. In other words, removing Apple and Google’s ability to charge commissions on in-app purchases does not prevent them from monetizing their platforms elsewhere.

Indeed, economic and strategic management theory tells us that so long as Apple and Google single-handedly control one of the necessary points of access to their respective ecosystems, they should be able to extract a sizable share of the revenue generated on their platforms. One can only speculate, but it is easy to imagine Apple and Google charging rival app stores for access to their respective platforms, or charging developers for access to critical APIs.

Epic itself seems to concede this point. In a recent Verge article, it argued that Apple was threatening to cut off its access to iOS and Mac developer tools, which Apple currently offers at little to no cost:

Apple will terminate Epic’s inclusion in the Apple Developer Program, a membership that’s necessary to distribute apps on iOS devices or use Apple developer tools, if the company does not “cure your breaches” to the agreement within two weeks, according to a letter from Apple that was shared by Epic. Epic won’t be able to notarize Mac apps either, a process that could make installing Epic’s software more difficult or block it altogether. Apple requires that all apps are notarized before they can be run on newer versions of macOS, even if they’re distributed outside the App Store.

There is little to prevent Apple from more heavily monetizing these tools – should Epic’s antitrust case successfully prevent it from charging commissions via its app store.

All of this raises the question: why is Epic bringing a suit that, if successful, would merely result in the emergence of alternative fee schedules (as opposed to a significant reduction of the overall fees paid by developers)?

One potential answer is that the current system is highly favorable to small apps that earn little to no revenue from purchases and that benefit most from the trust created by Apple and Google’s curation of their stores. It is, however, much less favorable to developers like Epic that no longer require any curation to garner the necessary trust from consumers and that earn a large share of their revenue from in-app purchases.

In more technical terms, the fact that all in-game payments are made through Apple and Google’s payment processing enables both platforms to more easily price-discriminate. Unlike fixed fees (but just like royalties), percentage commissions are necessarily state-contingent (i.e. the same commission will lead to vastly different revenue depending on an underlying app’s success). The most successful apps thus contribute far more to a platform’s fixed costs. For instance, it is estimated that mobile games account for 72% of all app store spend. Likewise, more than 80% of the apps on Apple’s store pay no commission at all.

This likely expands app store output by getting lower value developers on board. In that sense, it is akin to Ramsey pricing (where a firm/utility expands social welfare by allocating a higher share of fixed costs to the most inelastic consumers). Unfortunately, this would be much harder to accomplish if high value developers could easily bypass Apple or Google’s payment systems.
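
A stylized numerical comparison helps make that Ramsey-like intuition concrete. The revenue figures below are invented for illustration only; they are not app store data. Under a flat listing fee, every developer bears the same share of the platform’s fixed costs, whereas under a 30% commission the most successful app carries almost all of it while low-revenue apps pay little or nothing:

```python
# Toy comparison of fixed-cost recovery under a flat fee vs. a 30% commission.
# All revenue and cost figures are made up for illustration.
app_revenues = [0, 0, 0, 500, 2_000, 10_000, 100_000, 5_000_000]  # annual in-app revenue
platform_fixed_cost = 1_500_000

flat_fee = platform_fixed_cost / len(app_revenues)    # every app pays the same
commissions = [0.30 * r for r in app_revenues]        # payment scales with success

for revenue, commission in zip(app_revenues, commissions):
    print(f"app revenue {revenue:>9,} | flat fee {flat_fee:>9,.0f} | 30% cut {commission:>11,.0f}")

print("Total raised by commissions:", f"{sum(commissions):,.0f}")
```

Under the flat fee, every app would owe $187,500 regardless of success, and the zero-revenue apps would simply not join the store; the percentage commission keeps them on board while the hit title covers the bulk of the platform’s fixed costs.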

The bottom line is that Epic appears to be fighting to change Apple and Google’s app store business models in order to obtain fee schedules that are better aligned with its own interests. This is all the more important for Epic Games, given that mobile gaming is becoming increasingly popular relative to other gaming mediums (also here).

The emergence of new gaming platforms

Up to this point, I have mostly presented a zero-sum view of Epic’s lawsuit – i.e. developers and platforms fighting over the distribution of app store profits (though some smaller developers may lose out). But this ignores what is likely the chief virtue of Apple and Google’s “closed” distribution model: namely, that it has greatly expanded the market for mobile gaming (and other mobile software), and will likely continue to do so in the future.

Much has already been said about the significant security and trust benefits that Apple and Google’s curation of their app stores (including their control of in-app payments) provide to users. Benedict Evans and Ben Thompson have both written excellent pieces on this very topic. 

In a nutshell, the closed model allows previously unknown developers to expand rapidly because (i) users do not have to fear that their apps contain some form of malware, and (ii) payment frictions, most notably security-related ones, are greatly reduced. But while these are indeed tremendous benefits, another important upside seems to have gone relatively unnoticed.

The “closed” business model also gives Apple and Google (as well as other platforms) significant incentives to develop new distribution mediums (smart TVs spring to mind) and improve existing ones. In turn, this greatly expands the audience that software developers can reach. In short, developers get a smaller share of a much larger pie.

The economics of two-sided markets are enlightening in this respect. Apple and Google’s stores are what Armstrong and Wright (here and here) refer to as “competitive bottlenecks”. That is, they compete aggressively (amongst themselves, and with other gaming platforms) to attract exclusive users. They can then charge developers a premium to access those users (note, however, that in the case at hand the incidence of those platform fees is unclear).

This gives platforms significant incentives to continuously attract and retain new users. For instance, if Steve Jobs is to be believed, giving consumers better access to media such as eBooks, video, and games was one of the driving forces behind the launch of the iPad.

This model of innovation would be seriously undermined if developers and consumers could easily bypass platforms (as Epic Games is seeking to do).

In response, some commentators have countered that platforms may use their strong market positions to squeeze developers, thereby undermining software investments. But such a course of action may ultimately be self-defeating. For instance, writing about retail platforms imitating third-party sellers, Andrei Hagiu, Tat-How Teh and Julian Wright have argued that:

[T]he platform has an incentive to commit itself not to imitate highly innovative third-party products in order to preserve their incentives to innovate.

Seen in this light, Apple and Google’s 30% commissions may function as a soft commitment not to expropriate developers, thus leaving them with a sizable share of the revenue generated on each platform. This may explain why the 30% commission has become a standard in the games industry (and beyond).

Furthermore, from an evolutionary perspective, it is hard to argue that the 30% commission is somehow extortionate. If game developers were systematically expropriated, the gaming industry – and its mobile segment in particular – would not have grown so dramatically in recent years.

All of this likely explains why a recent survey found that 81% of app developers believed regulatory intervention would be misguided:

81% of developers and publishers believe that the relationship between them and platforms is best handled within the industry, rather than through government intervention. Competition and choice mean that developers will use platforms that they work with best.

The upshot is that the “closed” model employed by Apple and Google has served the gaming industry well. There is little compelling reason to overhaul that model today.

Final thoughts

When all is said and done, there is no escaping the fact that Epic Games is currently playing a high-stakes rent-seeking game. As Apple noted in its opposition to Epic’s motion for a temporary restraining order:

Epic did not, and has not, contested that it is in breach of the App Store Guidelines and the License Agreement. Epic’s plan was to violate the agreements intentionally in order to manufacture an emergency. The moment Fortnite was removed from the App Store, Epic launched an extensive PR smear campaign against Apple and a litigation plan was orchestrated to the minute; within hours, Epic had filed a 56-page complaint, and within a few days, filed nearly 200 pages with this Court in a pre-packaged “emergency” motion. And just yesterday, it even sought to leverage its request to this Court for a sales promotion, announcing a “#FreeFortniteCup” to take place on August 23, inviting players for one last “Battle Royale” across “all platforms” this Sunday, with prizes targeting Apple.

Epic is ultimately seeking to introduce its own app store on both Apple and Google’s platforms, or at least bypass their payment processing services (as Spotify is seeking to do in the EU).

Unfortunately, as this post has argued, condoning this type of free-riding could prove highly detrimental to the entire mobile software industry. Smaller companies would almost inevitably be left to foot a larger share of the bill, existing platforms would become less secure, and the development of new ones could be hindered. At the end of the day, 30% might actually be a small price to pay.

In an age of antitrust populism on both ends of the political spectrum, federal and state regulators face considerable pressure to deploy the antitrust laws against firms that have dominant market shares. Yet federal case law makes clear that merely winning the race for a market is an insufficient basis for antitrust liability. Rather, any plaintiff must show that the winner either secured or is maintaining its dominant position through practices that go beyond vigorous competition. Any other principle would inhibit the competitive process that the antitrust laws are designed to promote. Federal judges who enjoy life tenure are far more insulated from outside pressures and therefore more likely to demand evidence of anticompetitive practices as a predicate condition for any determination of antitrust liability.

This separation of powers between the executive branch, which prosecutes alleged infractions of the law, and the judicial branch, which polices the prosecutor, is the simple genius behind the divided system of government generally attributed to the eighteenth-century French thinker, Montesquieu. The practical wisdom of this fundamental principle of political design, which runs throughout the U.S. Constitution, can be observed in full force in the current antitrust landscape, in which the federal courts have acted as a bulwark against several contestable enforcement actions by antitrust regulators.

In three headline cases brought by the Department of Justice or the Federal Trade Commission since 2017, the prosecutorial bench has struck out in court. Under the exacting scrutiny of the judiciary, government litigators failed to present sufficient evidence that a dominant firm had engaged in practices that caused, or were likely to cause, significant anticompetitive effects. In each case, these enforcement actions, applauded by policymakers and commentators who tend to follow “big is bad” intuitions, foundered when assessed in light of judicial precedent, the factual record, and the economic principles embedded in modern antitrust law. An ongoing suit, filed by the FTC this year after more than 18 months since the closing of the targeted acquisition, exhibits similar factual and legal infirmities.

Strike 1: The AT&T/Time-Warner Transaction

In response to the announcement of AT&T’s $85.4 billion acquisition of Time Warner, the DOJ filed suit in 2017 to prevent the formation of a dominant provider in home-video distribution that would purportedly deny competitors access to “must-have” content. As I have observed previously, this theory of the case suffered from two fundamental difficulties. 

First, content is an abundant and renewable resource, so it is hard to see how AT&T+TW could meaningfully foreclose competitors’ access to this necessary input. Even in the hypothetical case of potentially “must-have” content, it was unclear whether it would be economically rational for the post-acquisition AT&T to regularly deny access to other distributors, given that doing so would imply an immediate and significant loss in licensing revenues without any clearly offsetting future gain in revenues from new subscribers.

Second, home-video distribution is a market lapsing rapidly into obsolescence as content monetization shifts from home-based viewing to a streaming environment in which consumers expect “anywhere, everywhere” access. The blockbuster acquisition was probably best understood as a necessary effort to adapt to this new environment (already populated by several major streaming platforms), rather than an otherwise puzzling strategy to spend billions to capture a market on the verge of commercial irrelevance. 

Strike 2: The Sabre/Farelogix Acquisition

In 2019, the DOJ filed suit to block the $360 million acquisition of Farelogix by Sabre, one of three leading airline booking platforms, on the ground that it would substantially lessen competition. The factual basis for this legal diagnosis was unclear. In 2018, Sabre earned approximately $3.9 billion in worldwide revenues, compared to $40 million for Farelogix. Given this drastic difference in market share, and the almost trivial share attributable to Farelogix, it is difficult to fathom how the DOJ could credibly assert that the acquisition “would extinguish a crucial constraint on Sabre’s market power.” 

To use a now much-discussed theory of antitrust liability, it might nonetheless be argued that Farelogix posed a “nascent” competitive threat to the Sabre platform. That is: while Farelogix is small today, it may become big enough tomorrow to pose a threat to Sabre’s market leadership. 

But that theory runs straight into a highly inconvenient fact. Farelogix was founded in 1998 and, during the ensuing two decades, had neither achieved broad adoption of its customized booking technology nor succeeded in offering airlines a viable pathway to bypass the three major intermediary platforms. The proposed acquisition therefore seems best understood as a mutually beneficial transaction in which a smaller (and not very nascent) firm elects to monetize its technology by embedding it in a leading platform that seeks to innovate by acquisition. Robust technology ecosystems do this all the time, efficiently exploiting the natural complementarities between a smaller firm’s “out of the box” innovation and the capital-intensive infrastructure of an incumbent. (Postscript: While the DOJ lost this case in federal court, Sabre elected in May 2020 not to close following similarly puzzling opposition by British competition regulators.)

Strike 3: FTC v. Qualcomm

The divergence of theories of anticompetitive risk from market realities is vividly illustrated by the landmark suit filed by the FTC in 2017 against Qualcomm. 

The litigation pursued nothing less than a wholesale reengineering of the IP licensing relationships between innovators and implementers that underlie the global smartphone market. Those relationships principally consist of device-level licenses between IP innovators such as Qualcomm and device manufacturers and distributors such as Apple. This structure efficiently collects remuneration from the downstream segment of the supply chain for upstream firms that invest in pushing forward the technology frontier. The FTC thought otherwise and pursued a remedy that would have required Qualcomm to offer licenses to its direct competitors in the chip market and to rewrite its existing licenses with device producers and other intermediate users on a component, rather than device, level. 

Remarkably, these drastic forms of intervention into private-ordering arrangements rested on nothing more than what former FTC Commissioner Maureen Ohlhausen once appropriately called a “possibility theorem.” The FTC deployed a mostly theoretical argument that Qualcomm had extracted an “unreasonably high” royalty that had potentially discouraged innovation, impeded entry into the chip market, and inflated retail prices for consumers. Yet these claims run contrary to all available empirical evidence, which indicates that the mobile wireless device market has exhibited since its inception declining quality-adjusted prices, increasing output, robust entry into the production market, and continuous innovation. The mismatch between the government’s theory of market failure and the actual record of market success over more than two decades challenges the policy wisdom of disrupting hundreds of existing contractual arrangements between IP licensors and licensees in a thriving market. 

The FTC nonetheless secured from the district court a sweeping order that would have had precisely this disruptive effect, including imposing a “duty to deal” that would have required Qualcomm to license directly its competitors in the chip market. The Ninth Circuit stayed the order and, on August 11, 2020, issued an unqualified reversal, stating that the lower court had erroneously conflated “hypercompetitive” (good) with anticompetitive (bad) conduct and observing that “[t]hroughout its analysis, the district court conflated the desire to maximize profits with an intent to ‘destroy competition itself.’” In unusually direct language, the appellate court also observed (as even the FTC had acknowledged on appeal) that the district court’s ruling was incompatible with the Supreme Court’s ruling in Aspen Skiing Co. v. Aspen Highlands Skiing Corp., which strictly limits the circumstances in which a duty to deal can be imposed. In some cases, it appears that additional levels of judicial review are necessary to protect antitrust law against not only administrative but judicial overreach.

Axon v. FTC

For the most explicit illustration of the interface between Montesquieu’s principle of divided government and the risk posed to antitrust law by cases of prosecutorial excess, we can turn to an unusual and ongoing litigation, Axon v. FTC.

The HSR Act and Post-Consummation Merger Challenges

The HSR Act provides regulators with the opportunity to preemptively challenge acquisitions and related transactions on antitrust grounds prior to those transactions being consummated. Since its enactment in 1976, this statutory innovation has laudably increased dealmakers’ ability to close transactions with a high level of certainty that regulators would not belatedly seek to “unscramble the egg.” While the HSR Act does not foreclose this contingency (a regulatory decision not to challenge a transaction only indicates current enforcement intentions), it is probably fair to say that M&A dealmakers generally assume that regulators would reverse course only in exceptional circumstances. In turn, the low prospect of after-the-fact regulatory intervention encourages the efficient use of M&A transactions for the purpose of shifting corporate assets to users that value those assets most highly.

The FTC’s Belated Attack on the Axon/Vievu Acquisition

Dealmakers may be revisiting that understanding in the wake of the FTC’s decision in January 2020 to challenge the acquisition of Vievu by Axon, each being a manufacturer of body-worn camera equipment and related data-management software for law enforcement agencies. The acquisition had closed in May 2018 but had not been reported through HSR since it fell well below the reportable deal threshold. Given a total transaction value of $7 million, the passage of more than 18 months since closing, and the insolvency or near-insolvency of the target company, it is far from obvious that the Axon acquisition posed a material competitive risk that merits unsettling expectations that regulators will typically not challenge a consummated transaction, especially in the case of what is a micro-sized nebula in the M&A universe. 

These concerns are heightened by the fact that the FTC suit relies on a debatably narrow definition of the relevant market (body-camera equipment and related “cloud-based” data management software for police departments in large metropolitan areas, rather than a market that encompassed more generally defined categories of body-worn camera equipment, law enforcement agencies, and data management services). Even within this circumscribed market, there are apparently several companies that offer related technologies and an even larger group that could plausibly enter in response to perceived profit opportunities. Despite this contestable legal position, Axon’s court filing states that the FTC offered to settle the suit on stiff terms: Axon must agree to divest itself of the Vievu assets and to license all of Axon’s pre-transaction intellectual property to the buyer of the Vievu assets. This effectively amounts to an opportunistic use of the antitrust merger laws to engage in post-transaction market reengineering, rather than merely blocking an acquisition to maintain the pre-transaction status quo.

Does the FTC Violate the Separation of Powers?

In a provocative strategy, Axon has gone on the offensive and filed suit in federal district court to challenge on constitutional grounds the long-standing internal administrative proceeding through which the FTC’s antitrust claims are initially adjudicated. Unlike the DOJ, the FTC’s first stop in the litigation process (absent settlement) is not a federal district court but an internal proceeding before an administrative law judge (“ALJ”), whose ruling can then be appealed to the Commission. Axon is effectively arguing that this administrative internalization of the judicial function violates the separation of powers principle as implemented in the U.S. Constitution. 

Writing on a clean slate, Axon’s claim is eminently reasonable. The fact that FTC-paid personnel sit on both sides of the internal adjudicative process as prosecutor (the FTC litigation team) and judge (the ALJ and the Commissioners) locates the executive and judicial functions in the hands of a single administrative entity. (To be clear, the Commission’s rulings are appealable to federal court, albeit at significant cost and delay.) In any event, a court presented with Axon’s claim—as of this writing, the Ninth Circuit (taking the case on appeal by Axon)—is not writing on a clean slate and is most likely reluctant to accept a claim that would trigger challenges to the legality of other similarly structured adjudicative processes at other agencies. Nonetheless, Axon’s argument does raise important concerns as to whether certain elements of the FTC’s adjudicative mechanism (as distinguished from the very existence of that mechanism) could be refined to mitigate the conflicts of interest that arise in its current form.

Conclusion

Antitrust vigilance certainly has its place, but it also has its limits. Given the aspirational language of the antitrust statutes and the largely unlimited structural remedies to which an antitrust litigation can lead, there is an inevitable risk of prosecutorial overreach that can betray the fundamental objective to protect consumer welfare. Applied to the antitrust context, the separation of powers principle mitigates this risk by subjecting enforcement actions to judicial examination, which is in turn disciplined by the constraints of appellate review and stare decisis. A rich body of federal case law implements this review function by anchoring antitrust in a decisionmaking framework that promotes the public’s interest in deterring business practices that endanger the competitive process behind a market-based economy. As illustrated by the recent string of failed antitrust suits, and the ongoing FTC litigation against Axon, that same decisionmaking framework can also protect the competitive process against regulatory practices that pose this same type of risk.

We’re delighted to welcome Jonathan M. Barnett as our newest blogger at Truth on the Market.

Jonathan Barnett is director of the USC Gould School of Law Media, Entertainment and Technology Law Program. Barnett specializes in intellectual property, contracts, antitrust, and corporate law. He has published in the Harvard Law Review, Yale Law Journal, Journal of Legal Studies, Review of Law & Economics, Journal of Corporation Law and other scholarly journals.

He joined USC Law in fall 2006 and was a visiting professor at New York University School of Law in fall 2010. Prior to academia, Barnett practiced corporate law as a senior associate at Cleary Gottlieb Steen & Hamilton in New York, specializing in private equity and mergers and acquisitions transactions. He was also a visiting assistant professor at Fordham University School of Law in New York. A magna cum laude graduate of the University of Pennsylvania, Barnett received an MPhil from Cambridge University and a JD from Yale Law School.

You can find his scholarship at SSRN.

Recently published emails from 2012 between Mark Zuckerberg and Facebook’s then-Chief Financial Officer David Ebersman, in which Zuckerberg lays out his rationale for buying Instagram, have prompted many to speculate that the deal might not have been cleared had antitrust agencies had access to Facebook’s internal documents at the time.

The issue is Zuckerberg’s description of Instagram as a nascent competitor and potential threat to Facebook:

These businesses are nascent but the networks established, the brands are already meaningful, and if they grow to a large scale they could be very disruptive to us. Given that we think our own valuation is fairly aggressive and that we’re vulnerable in mobile, I’m curious if we should consider going after one or two of them. 

Ebersman objected that a new rival would simply enter the market if Facebook bought Instagram. In response, Zuckerberg wrote:

There are network effects around social products and a finite number of different social mechanics to invent. Once someone wins at a specific mechanic, it’s difficult for others to supplant them without doing something different.

These email exchanges may not paint a particularly positive picture of Zuckerberg’s intent in doing the merger, and, had they been available at the time, they might well have led antitrust agencies to scrutinise the merger more carefully. But they do not tell us that the acquisition was ultimately harmful to consumers, nor do they tell us anything about the counterfactual in which the merger was blocked. While we know that Instagram became enormously popular in the years following the merger, it is not clear that it would have been just as successful without the deal, or that Facebook and its other products would be less popular today.

Moreover, this line of criticism fails to account for the fact that Facebook had the resources to quickly scale Instagram up to a level that provided immediate benefits to an enormous number of users, instead of waiting for the app to potentially grow to such scale organically.

The rationale

Writing for Pro Market, Randy Picker argued that these emails hint that the acquisition was essentially about taking out a nascent competitor:

Buying Instagram really was about controlling the window in which the Instagram social mechanic invention posed a risk to Facebook … Facebook well understood the competitive risk posed by Instagram and how purchasing it would control that risk.

This is a plausible interpretation of the internal emails, although there are others. For instance, Zuckerberg also seems to say that the purpose is to use Instagram to improve Facebook to make it good enough to fend off other entrants:

If we incorporate the social mechanics they were using, those new products won’t get much traction since we’ll already have their mechanics deployed at scale. 

If this, rather than simply killing a nascent competitor, was the rationale, the acquisition would be pro-competitive. It is good for consumers when a firm improves its product to beat its rivals by acquiring undervalued assets and deploying them at greater scale and with superior managerial efficiency, even if the acquirer hopes that in doing so it will prevent rivals from ever gaining significant market share.

Further, despite popular characterization, on its face the acquisition was not about trying to destroy a consumer option, but only to ensure that Facebook was competitively viable in providing that option. Another reasonable interpretation of the emails is that Facebook was wrestling with the age-old make-or-buy dilemma faced by every firm at some point or another. 

Was the merger anticompetitive?

But let us assume that eliminating competition from Instagram was indeed the merger’s sole rationale. Would that necessarily make it anticompetitive?  

Chief among the objections is that both Facebook and Instagram are networked goods. Their value to each user depends, to a significant extent, on the number (and quality) of other people using the same platform. Many scholars have argued that this can create self-reinforcing dynamics where the strong grow stronger – though such an outcome is certainly not a given, since other factors about the service matter too, and networks can suffer from diseconomies of scale as well, where new users reduce the quality of the network.

This network effects point is central to the reasoning of those who oppose the merger: Facebook purportedly acquired Instagram because Instagram’s network had grown large enough to be a threat. With Instagram out of the picture, Facebook could thus take on the remaining smaller rivals with the advantage of its own much larger installed base of users. 

However, this network tipping argument could cut both ways. It is plausible that the proper counterfactual was not duopoly competition between Facebook and Instagram, but rather either Facebook or Instagram offering both firms’ features on its own, only later. In other words, the merger can plausibly be framed as merely accelerating the cross-pollination of social mechanics between Facebook and Instagram, something that would likely prove beneficial to consumers.

This finds some support in Mark Zuckerberg’s reply to David Ebersman:

Buying them would give us the people and time to integrate their innovations into our core products.

The exchange between Zuckerberg and Ebersman also suggests another pro-competitive justification: bringing Instagram’s “social mechanics” to Facebook’s much larger network of users. We can only speculate about what ‘social mechanics’ Zuckerberg actually had in mind, but at the time Facebook’s photo sharing functionality was largely based around albums of unedited photos, whereas Instagram’s core product was a stream of filtered, cropped single images. 

Zuckerberg’s plan to gradually bring these features to Facebook’s users – as opposed to them having to familiarize themselves with an entirely different platform – would likely cut in favor of the deal being cleared by enforcers.

Another possibility is that it was Instagram’s network of creators – the people who had begun to use Instagram as a new medium, distinct from the generic photo albums Facebook had, and who would eventually grow to be known as ‘influencers’ – who were the valuable thing. Bringing them onto the Facebook platform would undoubtedly increase its value to regular users. For example, Kim Kardashian, one of Instagram’s most popular users, joined the service in February 2012, two months before the deal went through, and she was not the first such person to adopt Instagram in this way. We can see the importance of a service’s most creative users today, as Facebook is actually trying to pay TikTok creators to move to its TikTok clone Reels.

But if this was indeed the rationale, not only is this a sign of a company in the midst of fierce competition – rather than one on the cusp of acquiring a monopoly position – but, more fundamentally, it suggests that Facebook was always going to come out on top. Or at least it thought so.

The benefit of hindsight

Today’s commentators have the benefit of hindsight. This inherently biases contemporary takes on the Facebook/Instagram merger. For instance, it seems almost self-evident with hindsight that Facebook would succeed and that entry in the social media space would only occur at the fringes of existing platforms (the combined Facebook/Instagram platform) – think of the emergence of TikTok. However, at the time of the merger, such an outcome was anything but a foregone conclusion.

For instance, critics argue that Instagram no longer competes with Facebook because of the merger. However, it is equally plausible that Instagram only became so successful because of its combination with Facebook (notably thanks to the addition of Facebook’s advertising platform, and the rapid rollout of a stories feature in response to Snapchat’s rise). Indeed, Instagram grew from roughly 24 million users at the time of the acquisition to over 1 billion users in 2018. Likewise, it earned zero revenue at the time of the merger. This might explain why the acquisition was widely derided at the time.

This is critical from an antitrust perspective. Antitrust enforcers adjudicate merger proceedings in the face of extreme uncertainty. Every possible outcome, including the counterfactual, has some probability of being true, and enforcers and courts must make educated guesses, assigning probabilities to potential anticompetitive harms, merger efficiencies, and so on.
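
To illustrate the structure of this problem (and nothing more), here is a toy Python sketch. The probabilities and welfare numbers are entirely invented; they are not estimates relating to the actual Facebook/Instagram transaction, only a picture of the expected-value judgment enforcers implicitly make:

```python
# Toy expected-value framing of merger review under uncertainty.
# All probabilities and welfare figures below are invented for illustration;
# they are NOT estimates relating to the actual Facebook/Instagram deal.

scenarios = [
    # (description, probability, consumer-welfare effect of clearing the deal)
    ("target would have grown into a serious rival",   0.05, -100.0),
    ("target thrives only thanks to acquirer's scale", 0.45,  +40.0),
    ("target fades regardless of the merger",          0.50,    0.0),
]

expected_effect = sum(p * effect for _, p, effect in scenarios)

# Under these made-up numbers the expected effect of clearing is +13: a small
# chance of losing a future disruptor is outweighed by likelier, more modest
# gains. Change the inputs and the answer flips -- which is exactly the
# enforcer's problem.
print(f"Expected welfare effect of clearing: {expected_effect:+.1f}")
```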

Authorities at the time of the merger could not ignore these uncertainties. What was the likelihood that a company with a fraction of Facebook’s users (24 million to Facebook’s 1 billion), and worth $1 billion, could grow to threaten Facebook’s market position? At the time, the answer seemed to be “very unlikely”. Moreover, how could authorities know that Google+ (Facebook’s strongest competitor at the time) would fail? These outcomes were not just hard to ascertain, they were simply unknowable.

Of course, this is precisely what neo-Brandeisian antitrust scholars object to today: among the many seemingly innocuous big tech acquisitions that are permitted each year, there is bound to be at least one acquired firm that might have been a future disruptor. True as this may be, identifying that one successful company among all the others is the antitrust equivalent of finding a needle in a haystack. Instagram simply did not fit that description at the time of the merger. Such a stance also ignores the very real benefits that may arise from such arrangements.

Closing remarks

While it is tempting to reassess the Facebook/Instagram merger in light of new revelations, such an undertaking is not without pitfalls. Hindsight bias is perhaps the most obvious, but the difficulties run deeper.

If we think that the Facebook/Instagram merger has been and will continue to be good for consumers, it would be strange to think that we should nevertheless break them up because we discovered that Zuckerberg had intended to do things that would harm consumers. Conversely, if you think a breakup would be good for consumers today, would it change your mind if you discovered that Mark Zuckerberg had the intentions of an angel when he went ahead with the merger in 2012, or that he had angelic intent today?

Ultimately, merger review involves making predictions about the future. While it may be reasonable to take the intentions of the merging parties into consideration when making those predictions (although it’s not obvious that we should), these are not the only or best ways to determine what the future will hold. As Ebersman himself points out in the emails, history is filled with over-optimistic mergers that failed to deliver benefits to the merging parties. That this one succeeded beyond the wildest dreams of everyone involved – except maybe Mark Zuckerberg – does not tell us that competition agencies should have ruled on it differently.

This blog post summarizes the findings of a paper published in Volume 21 of the Federalist Society Review. The paper was co-authored by Dirk Auer, Geoffrey A. Manne, Julian Morris, & Kristian Stout. It uses the analytical framework of law and economics to discuss recent patent law reforms in the US, and their negative ramifications for inventors. The full paper can be found on the Federalist Society’s website, here.

Property rights are a pillar of the free market. As Harold Demsetz famously argued, they spur specialization, investment and competition throughout the economy. And the same holds true for intellectual property rights (IPRs). 

However, despite the many social benefits that have been attributed to intellectual property protection, the past decades have witnessed the birth and growth of a powerful intellectual movement seeking to reduce the legal protections that patent law offers inventors.

These critics argue that excessive patent protection is holding back western economies. For instance, they posit that the owners of standard-essential patents (“SEPs”) charge their commercial partners too much for the right to use their patents (this is referred to as patent holdup and royalty stacking). Furthermore, they argue that so-called patent trolls (“patent-assertion entities” or “PAEs”) deter innovation by small startups by employing “extortionate” litigation tactics.

Unfortunately, this movement has led to a deterioration of appropriate remedies in patent disputes.

The many benefits of patent protection

While patents likely play an important role in providing inventors with incentives to innovate, their role in enabling the commercialization of ideas is probably even more important.

By creating a system of clearly defined property rights, patents empower market players to coordinate their efforts in order to collectively produce innovations. In other words, patents greatly reduce the cost of concluding mutually-advantageous deals, whereby firms specialize in various aspects of the innovation process. Critically, these deals occur in the shadow of patent litigation and injunctive relief. The threat of these ensures that all parties have an incentive to take a seat at the negotiating table.

This is arguably nowhere more apparent than in the standardization space. Many of the most high-profile modern technologies are the fruit of large-scale collaboration coordinated through standards developing organizations (SDOs). These include technologies such as Wi-Fi, 3G, 4G, 5G, Blu-Ray, USB-C, and Thunderbolt 3. The coordination necessary to produce technologies of this sort is hard to imagine without some form of enforceable property right in the resulting inventions.

The shift away from injunctive relief

Of the many recent reforms to patent law, the most consequential has arguably been the significant limitation of patent holders’ ability to obtain permanent injunctions. This is particularly true in the case of standard-essential patents (SEPs).

However, intellectual property laws are meaningless without the ability to enforce them and remedy breaches. And injunctions are almost certainly the most powerful, and important, of these remedies.

The significance of injunctions is perhaps best understood by highlighting the weakness of damages awards when applied to intangible assets. Indeed, it is often difficult to establish the appropriate size of an award of damages when intangible property—such as invention and innovation in the case of patents—is the core property being protected. This is because these assets are almost always highly idiosyncratic. By blocking all infringing uses of an invention, injunctions thus prevent courts from having to act as price regulators. In doing so, they also ensure that innovators are adequately rewarded for their technological contributions.

Unfortunately, the Supreme Court’s 2006 ruling in eBay Inc. v. MercExchange, LLC significantly narrowed the circumstances under which patent holders could obtain permanent injunctions. This predictably led lower courts to grant fewer permanent injunctions in patent litigation suits. 

But while critics of injunctions had hoped that reducing their availability would spur innovation, empirical evidence suggests that this has not been the case so far. 

Other reforms

And injunctions are not the only area of patent law that has witnessed a gradual shift against the interests of patent holders. Much the same could be said about damages awards, revised fee-shifting standards, and the introduction of Inter Partes Review.

Critically, the intellectual movement to soften patent protection has also had ramifications outside of the judicial sphere. It is notably behind several legislative reforms, particularly the America Invents Act. Moreover, it has led numerous private parties – most notably standards developing organizations (SDOs) – to adopt stances that have advanced the interests of technology implementers at the expense of inventors.

For instance, one of the most noteworthy changes has been the IEEE’s sweeping 2015 revision of its IP policy. The new rules notably prevented SEP holders from seeking permanent injunctions against so-called “willing licensees”. They also mandated that royalties pertaining to SEPs be based on the value of the smallest saleable component that practices the patented technology. Both of these measures ultimately sought to tilt the bargaining range in license negotiations in favor of implementers.

Concluding remarks

The developments discussed in this article might seem like small details, but they are part of a wider trend whereby U.S. patent law is becoming increasingly inhospitable for inventors. This is particularly true when it comes to the enforcement of SEPs by means of injunction.

While the short-term effect of these various reforms has yet to be quantified, there is a real risk that, by decreasing the value of patents and increasing transaction costs, these changes may ultimately limit the diffusion of innovations and harm incentives to invent.

This likely explains why some legislators have recently put forward bills that seek to reinforce the U.S. patent system (here and here).

Despite these initiatives, the fact remains that there is today a strong undercurrent pushing for weaker or less certain patent protection. If left unchecked, this threatens to undermine the utility of patents in facilitating the efficient allocation of resources for innovation and its commercialization. Policymakers should thus pay careful attention to the changes this trend may bring about and move swiftly to recalibrate the patent system where needed in order to better protect the property rights of inventors and yield more innovation overall.

As Thomas Sowell has noted many times, political debates often involve the use of words which if taken literally mean something very different than the connotations which are conveyed. Examples abound in the debate about broadband buildout. 

There is a general consensus on the need to subsidize aspects of broadband buildout to rural areas in order to close the digital divide. But this real need allows for strategic obfuscation of key terms in this debate by parties hoping to achieve political or competitive gain. 

“Access” and “high-speed broadband”

For instance, nearly everyone would agree that Internet policy should “promote access to high-speed broadband.” But how some academics and activists define “access” and “high-speed broadband” is much different from what the average American would expect.

A commonsense definition of access is that consumers have the ability to buy broadband sufficient to meet their needs, considering the costs and benefits they face. In the context of the digital divide between rural and urban areas, the different options available to consumers in each area are a reflection of the very real costs and other challenges of providing service. In rural areas with low population density, it costs broadband providers considerably more per potential subscriber to build the infrastructure needed to provide service. At some point, depending on the technology, it is no longer profitable to build out to the next customer several miles down the road. The options and prices available to rural consumers reflect this unavoidable fact. Holding price constant, there is no doubt that many rural consumers would prefer higher speeds than are currently available to them. But this is not the real-world choice that presents itself.
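
To see why density matters so much, consider the following back-of-the-envelope Python sketch. The cost-per-mile figure and housing densities are purely hypothetical placeholders, not industry data; the point is only that the same mile of infrastructure is spread over far fewer potential subscribers in rural areas:

```python
# Back-of-the-envelope sketch (hypothetical figures): why per-subscriber
# buildout costs explode as population density falls.

COST_PER_MILE = 30_000  # hypothetical cost of building one mile of network, USD

def cost_per_home_passed(homes_per_mile: float) -> float:
    """Buildout cost per potential subscriber at a given housing density."""
    return COST_PER_MILE / homes_per_mile

for label, density in [("dense suburb", 100), ("small town", 20), ("rural road", 2)]:
    print(f"{label:>12}: ${cost_per_home_passed(density):,.0f} per home passed")
```

Under these made-up numbers, passing a home on a sparsely populated rural road costs fifty times more than passing one in a dense suburb, which is why the marginal customer several miles down the road may never be profitable to serve without a subsidy.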

But access in this debate instead means the availability of the same broadband options regardless of where people live. Rather than being seen as a reflection of underlying economic realities, the fact that rural Americans do not have the same options available to them that urban Americans do is seen as a problem which calls out for a political solution. Thus, billions of dollars are spent in an attempt to “close the digital divide” by subsidizing broadband providers to build infrastructure to  rural areas. 

“High-speed broadband” similarly has a meaning in this debate significantly different from what many consumers, especially those lacking “high speed” service, expect. For consumers, fast enough is what allows them to use the Internet in the ways they desire. What is fast enough does change over time as more and more uses for the Internet become common. This is why the FCC has changed the technical definition of broadband multiple times over the years as usage patterns and bandwidth requirements change. Currently, the FCC uses 25 Mbps down/3 Mbps up as the baseline for broadband.

However, for some, like Jonathan Sallet, this is thoroughly insufficient. In his Broadband for America’s Future: A Vision for the 2020s, he instead proposes “100 Mbps symmetrical service without usage limits.” The disconnect between this arbitrary number and consumer demand as measured in the marketplace, in light of real trade-offs between cost and performance, is not well explained in the study. The assumption is simply that faster is better, and that building faster networks is a mere engineering issue once sufficiently funded and executed with enough political will.

But there is little evidence that consumers “need” faster Internet than the market is currently providing. In fact, one Wall Street Journal study suggests “typical U.S. households don’t use most of their bandwidth while streaming and get marginal gains from upgrading speeds.” Moreover, there is even less evidence that most consumers or businesses need anything close to upload speeds of 100 Mbps. For even intensive uses like high-resolution live streaming, recommended upload speeds still fall far short of 100 Mbps. 

“Competition” and “Overbuilding”

Similarly, no one objects to the importance of “competition in the broadband marketplace.” But what is meant by this term is subject to vastly different interpretations.

The number of competitors is not the same as the amount of competition. Competition is a process by which market participants discover the best way to serve consumers at the lowest cost. Specific markets are often subject to competition not only from the firms that exist within those markets, but also from potential competitors who may enter the market any time potential profits reach a point high enough to justify the costs of entry. An important inference from this is that a temporary monopoly, in the sense of one firm holding a significant share of the market, is not in itself illegal under antitrust law, even if the monopolist charges monopoly prices. Potential entry is as real in its effects as actual competitors in forcing incumbents to continue to innovate and provide value to consumers.

However, many assume the best way to encourage competition in broadband buildout is to simply promote more competitors. A significant portion of Broadband for America’s Future emphasizes the importance of subsidizing new competition in order to increase buildout, increase quality, and bring down prices. In particular, Sallet emphasizes the benefits of municipal broadband, i.e. when local governments build and run their own networks. 

In fact, Sallet argues that incumbent broadband ISPs’ fears of “overbuilding” are really just fears of competition:

Language here is important. There is a tendency to call the construction of new, competitive networks in a locality with an existing network “overbuilding”—as if it were an unnecessary thing, a useless piece of engineering. But what some call “overbuilding” should be called by a more familiar term: “Competition.” “Overbuilding” is an engineering concept; “competition” is an economic concept that helps consumers because it shifts the focus from counting broadband networks to counting the dollars that consumers save when they have competitive choices. The difference is fundamental—overbuilding asks whether the dollars spent to build another network are necessary for the delivery of a communications service; economics asks whether spending those dollars will lead to competition that allows consumers to spend less and get more. 

Sallet makes two rhetorical moves here to make his argument. 

The first is redefining “overbuilding,” which refers to literally building a new network on top of (that is, “over”) previously built architecture, as a ploy by ISPs to avoid competition. But this is truly Orwellian. When a new entrant can build over an incumbent and take advantage of the first-mover’s investments to enter at a lower cost, a failure to compensate the first-mover is free riding. If the government compels such free riding, it reduces incentives for firms to make the initial investment to build the infrastructure.

The second is defining competition as the number of competitors, even if those competitors need to be subsidized by the government in order to enter the marketplace.  

But there is no way to determine the “right” number of competitors in a given market in advance. In the real world, markets don’t match blackboard descriptions of perfect competition. In fact, there are sometimes high fixed costs which limit the number of firms which will likely exist in a competitive market. In some markets, known as natural monopolies, high infrastructural costs and other barriers to entry relative to the size of the market lead to a situation where it is cheaper for a monopoly to provide a good or service than multiple firms in a market. But it is important to note that only firms operating under market pressures can assess the viability of competition. This is why there is a significant risk in government subsidizing entry. 

Competition drives sustained investment in the capital-intensive architecture of broadband networks, which suggests that ISPs are not natural monopolies. If they were, then having a monopoly provider regulated by the government to ensure the public interest, or government-run broadband companies, may make sense. In fact, Sallet denies ISPs are natural monopolies, stating that “the history of telecommunications regulation in the United States suggests that monopolies were a result of policy choices, not mandated by any iron law of economics” and “it would be odd for public policy to treat the creation of a monopoly as a success.” 

As economist George Ford notes in his study, The Impact of Government-Owned Broadband Networks on Private Investment and Consumer Welfare, unlike the threat of entry, which often causes incumbents to act competitively even in the absence of actual competitors, the threat of subsidized entry reduces incentives for private entities to invest in those markets altogether. This includes both the incentive to build the network and the incentive to update it. Subsidized entry may, in fact, tip the scales from competition that promotes consumer welfare to competition that harms it. If the market can only profitably sustain one or two competitors, adding another through municipal broadband or a subsidized new entrant may reduce the profitability of the incumbent(s) and eventually lead to exit. When this happens, only the government-run or subsidized network may survive, because the subsidized entrant is shielded from the market test of profit and loss.

The “Donut Hole” Problem

The term “donut hole” is a final example to consider of how words can be used to confuse rather than enlighten in this debate.

There is broad agreement that to generate the positive externalities from universal service, there needs to be subsidies for buildout to high-cost rural areas. However, this seeming agreement masks vastly different approaches. 

For instance, some critics of the current subsidy approach have identified a phenomenon where the city center has multiple competitive ISPs and government policy extends subsidies to ISPs to build out broadband coverage into rural areas, but the areas in between receive relatively paltry Internet service due to a lack of private or public investment. They describe this as a “donut hole” because the “unserved” rural areas receive subsidies while the “underserved” outlying parts immediately surrounding town centers receive nothing under current policy.

Conceptually, this is not a donut hole. It is actually more like a target or bullseye, where the city center is served by private investment and the rural areas receive subsidies to be served. 

Indeed, there is a different use of the term donut hole, which describes how public investment in city centers can create a donut hole of funding needed to support rural build-out. Most Internet providers rely on profits from providing lower-cost service to higher-population areas (like city centers) to cross-subsidize the higher cost of providing service in outlying and rural areas. But municipal providers generally only provide municipal service — they only provide lower-cost service. This hits the carriers that serve higher-cost areas with a double whammy. First, every customer that municipal providers take from private carriers cuts the revenue that those carriers rely on to provide service elsewhere. Second, and even more problematic, because the municipal providers have lower costs (because they tend not to serve the higher-cost outlying areas), they can offer lower prices for service. This “competition” exerts downward pressure on the private firms’ prices, further reducing revenue across their entire in-town customer base. 

This version of the “donut hole,” in which municipal entry erodes the city-center revenues that private firms rely on to support the costs of providing service to outlying areas, has two simultaneous effects. First, it directly reduces the funding available to serve more rural areas. And, second, it increases the private firm’s average cost of providing service across its network (because it is no longer recovering as much of its costs from the lower-cost city core), which increases the prices that need to be charged to rural users in order to justify offering service at all.
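
A stylized arithmetic sketch, with entirely hypothetical figures, may help illustrate the mechanism: as a subsidized entrant siphons off low-cost city-center customers, the revenue a private carrier must recover from each remaining rural customer rises sharply.

```python
# Stylized cross-subsidy arithmetic (all figures hypothetical): losing
# low-cost city-center customers to a subsidized entrant raises the revenue
# a private carrier must recover from each remaining rural customer.

FIXED_NETWORK_COST = 30_000_000  # annual cost of running the whole network, USD
RURAL_CUSTOMERS = 10_000         # high-cost customers in outlying areas
CITY_PRICE = 50 * 12             # annual revenue per city-center customer

def required_rural_revenue(city_customers: int) -> float:
    """Annual revenue per rural customer needed to cover total network cost,
    given how many city-center customers remain to share that cost."""
    city_contribution = city_customers * CITY_PRICE
    shortfall = FIXED_NETWORK_COST - city_contribution
    return shortfall / RURAL_CUSTOMERS

print(f"With 40,000 city customers: ${required_rural_revenue(40_000):,.0f} per rural customer per year")
print(f"After losing half of them:  ${required_rural_revenue(20_000):,.0f} per rural customer per year")
```

Under these made-up numbers, the annual revenue that must be extracted from each rural customer triples once half the in-town base is lost, which is the sense in which subsidized in-town entry can grow the “donut hole” rather than shrink it.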

Conclusion

Overcoming the problem of the rural digital divide starts with understanding why it exists. It is simply more expensive to build networks in areas with low population density. If universal service is the goal, subsidies, whether explicit subsidies from government or implicit cross-subsidies by broadband companies, are necessary to build out to these areas. But obfuscations about increasing “access to high-speed broadband” by promoting “competition” shouldn’t control the debate.

Instead, there needs to be a nuanced understanding of how government-subsidized entry into the broadband marketplace can discourage private investment and grow the size of the “donut hole,” thereby leading to demand for even greater subsidies. Policymakers should avoid exacerbating the digital divide by prioritizing subsidized competition over market processes.