
Apple’s legal team will be relieved that “you reap what you sow” is just a proverb. After a long-running antitrust battle against Qualcomm unsurprisingly ended in failure, Apple now faces antitrust accusations of its own (most notably from Epic Games). Somewhat paradoxically, this turn of events might cause Apple to see its previous defeat in a new light. Indeed, the well-established antitrust principles that scuppered Apple’s challenge against Qualcomm will now be the rock upon which it builds its legal defense.

But while Apple’s reversal of fortunes might seem like a mere anecdote, it neatly illustrates a fundamental – and often overlooked – principle of antitrust policy: antitrust law is about maximizing consumer welfare. Accordingly, the allocation of surplus between two companies is only incidentally relevant to antitrust proceedings, and it certainly is not a goal in and of itself. In other words, antitrust law is not about protecting David from Goliath.

Jockeying over the distribution of surplus

Or at least that is the theory. In practice, however, most antitrust cases are but small parts of much wider battles in which corporations use courts and regulators to jockey for market position and/or tilt the distribution of surplus in their favor. The Microsoft competition suits brought by the DOJ (in the US) and the European Commission (in the EU) partly originated from complaints, and lobbying, by Sun Microsystems, Novell, and Netscape. Likewise, the European Commission’s case against Google was prompted by accusations from Microsoft and Oracle, among others. The European Intel case was initiated following a complaint by AMD. The list goes on.

The last couple of years have witnessed a proliferation of antitrust suits that are emblematic of this type of power tussle. For instance, Apple has been notoriously industrious in using the court system to lower the royalties that it pays to Qualcomm for LTE chips. One of the focal points of Apple’s discontent was Qualcomm’s policy of basing royalties on the end-price of devices (Qualcomm charged iPhone manufacturers a 5% royalty rate on their handset sales – and Apple received further rebates):

“The whole idea of a percentage of the cost of the phone didn’t make sense to us,” [Apple COO Jeff Williams] said. “It struck at our very core of fairness. At the time we were making something really really different.”

This pricing dispute not only gave rise to high-profile court cases; it also led Apple to lobby Standards Development Organizations (“SDOs”) in a partly successful attempt to have them amend their patent policies so as to prevent this type of pricing.
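
To see why the royalty base itself became such a flashpoint, a back-of-the-envelope comparison helps. The sketch below is purely illustrative: the 5% rate is the one at issue in the dispute, but the handset and chip prices are hypothetical round numbers.

```python
# Illustrative arithmetic only: the 5% rate is from the dispute described
# above; the handset and chip prices are hypothetical round numbers.

ROYALTY_RATE = 0.05        # percentage royalty on handset sales
handset_price = 1_000.00   # hypothetical end-device price (USD)
chip_price = 20.00         # hypothetical modem-chip price (USD)

royalty_handset_base = ROYALTY_RATE * handset_price  # base = end device
royalty_chip_base = ROYALTY_RATE * chip_price        # base = component

# The rate a licensor would need on the chip base to earn the same
# per-device royalty as on the handset base:
equivalent_chip_rate = royalty_handset_base / chip_price

print(f"Handset base: ${royalty_handset_base:.2f} per device")   # $50.00
print(f"Chip base:    ${royalty_chip_base:.2f} per device")      # $1.00
print(f"Chip-base rate needed to match: {equivalent_chip_rate:.0%}")  # 250%
```

At an identical rate, the choice of base changes the per-device royalty fiftyfold in this example, which is why the “smallest saleable unit” question discussed below mattered so much to both sides.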

However, in a highly ironic turn of events, Apple now finds itself on the receiving end of strikingly similar allegations. At issue is the 30% commission that Apple charges for in-app purchases on the iPhone and iPad. These “high” commissions led several companies to lodge complaints with competition authorities (Spotify and Facebook, in the EU) and to file antitrust suits against Apple (Epic Games, in the US).

Of course, these complaints are couched in more sophisticated, and antitrust-relevant, reasoning. But that doesn’t alter the fact that these disputes are ultimately driven by firms trying to tilt the allocation of surplus in their favor (for a more detailed explanation, see Apple and Qualcomm).

Pushback from courts: The Qualcomm case

Against this backdrop, a string of recent cases sends a clear message to would-be plaintiffs: antitrust courts will not be drawn into rent allocation disputes that have no bearing on consumer welfare. 

The best example of this judicial trend is Qualcomm’s victory before the United States Court of Appeals for the Ninth Circuit. The case centered on the royalties that Qualcomm charged OEMs for its Standard Essential Patents (SEPs). Both the district court and the FTC found that Qualcomm had deployed a series of tactics (rebates, refusals to deal, etc.) that enabled it to circumvent its FRAND pledges.

However, the Court of Appeals was not convinced. It found neither consumer harm nor any cognizable antitrust infringement. Instead, it held that the dispute at hand was essentially a matter of contract law:

To the extent Qualcomm has breached any of its FRAND commitments, a conclusion we need not and do not reach, the remedy for such a breach lies in contract and patent law. 

This is not surprising. From the outset, numerous critics pointed out that the case lay well beyond the narrow confines of antitrust law. The scathing dissenting statement written by Commissioner Maureen Ohlhausen is revealing:

[I]n the Commission’s 2-1 decision to sue Qualcomm, I face an extraordinary situation: an enforcement action based on a flawed legal theory (including a standalone Section 5 count) that lacks economic and evidentiary support, that was brought on the eve of a new presidential administration, and that, by its mere issuance, will undermine U.S. intellectual property rights in Asia and worldwide. These extreme circumstances compel me to voice my objections. 

In reaching its conclusion, the Court notably rejected the notion that SEP royalties should be systematically based upon the “Smallest Saleable Patent Practicing Unit” (or SSPPU):

Even if we accept that the modem chip in a cellphone is the cellphone’s SSPPU, the district court’s analysis is still fundamentally flawed. No court has held that the SSPPU concept is a per se rule for “reasonable royalty” calculations; instead, the concept is used as a tool in jury cases to minimize potential jury confusion when the jury is weighing complex expert testimony about patent damages.

Similarly, it saw no objection to Qualcomm licensing its technology at the OEM level (rather than the component level):

Qualcomm’s rationale for “switching” to OEM-level licensing was not “to sacrifice short-term benefits in order to obtain higher profits in the long run from the exclusion of competition,” the second element of the Aspen Skiing exception. Aerotec Int’l, 836 F.3d at 1184 (internal quotation marks and citation omitted). Instead, Qualcomm responded to the change in patent-exhaustion law by choosing the path that was “far more lucrative,” both in the short term and the long term, regardless of any impacts on competition. 

Finally, the Court concluded that a firm’s breach of its FRAND pledges did not automatically amount to anticompetitive conduct:

We decline to adopt a theory of antitrust liability that would presume anticompetitive conduct any time a company could not prove that the “fair value” of its SEP portfolios corresponds to the prices the market appears willing to pay for those SEPs in the form of licensing royalty rates.

Taken together, these findings paint a very clear picture. The Qualcomm Court repeatedly rejected the radical idea that US antitrust law should concern itself with the prices charged by monopolists — as opposed to practices that allow firms to illegally acquire or maintain a monopoly position. The words of Learned Hand and those of Antonin Scalia (respectively, below) loom large:

The successful competitor, having been urged to compete, must not be turned upon when he wins. 

And,

To safeguard the incentive to innovate, the possession of monopoly power will not be found unlawful unless it is accompanied by an element of anticompetitive conduct.

Other courts (both in the US and abroad) have reached similar conclusions

For instance, a district court in Texas dismissed a suit brought by Continental Automotive Systems (which supplies electronic systems to the automotive industry) against a group of SEP holders. 

Continental challenged the patent holders’ decision to license their technology at the vehicle rather than the component level (an allegation very similar to the FTC’s complaint that Qualcomm licensed its SEPs at the OEM rather than the chipset level). However, following a forceful intervention by the DOJ, the court ultimately held that the facts alleged by Continental were not indicative of antitrust injury. It thus dismissed the case.

Likewise, within weeks of the Qualcomm and Continental decisions, the UK Supreme Court also ruled in favor of SEP holders. In its Unwired Planet ruling, the Court concluded that discriminatory licenses did not automatically infringe competition law (even though they might breach a firm’s contractual obligations):

[I]t cannot be said that there is any general presumption that differential pricing for licensees is problematic in terms of the public or private interests at stake.

In reaching this conclusion, the UK Supreme Court emphasized that the determination of whether licenses were FRAND, or not, was first and foremost a matter of contract law. In the case at hand, the most important guide to making this determination was the internal rules of the relevant SDO (as opposed to competition case law):

Since price discrimination is the norm as a matter of licensing practice and may promote objectives which the ETSI regime is intended to promote (such as innovation and consumer welfare), it would have required far clearer language in the ETSI FRAND undertaking to indicate an intention to impose the more strict, “hard-edged” non-discrimination obligation for which Huawei contends. Further, in view of the prevalence of competition laws in the major economies around the world, it is to be expected that any anti-competitive effects from differential pricing would be most appropriately addressed by those laws

All of this ultimately led the Court to rule in favor of Unwired Planet, thus dismissing Huawei’s claims that it had infringed competition law by breaching its FRAND pledges. 

In short, courts and antitrust authorities on both sides of the Atlantic have repeatedly, and unambiguously, concluded that pricing disputes (albeit in the specific context of technological standards) are generally a matter of contract law. Antitrust/competition law intercedes only when unfair/excessive/discriminatory prices are both caused by anticompetitive behavior and result in anticompetitive injury.

Apple’s loss is… Apple’s gain

Readers might wonder how the above cases relate to Apple’s App Store. But on closer inspection, the parallels are numerous. As explained above, courts have repeatedly stressed that antitrust enforcement should not concern itself with the allocation of surplus between commercial partners. Yet that is precisely what Epic Games’ suit against Apple is all about.

Indeed, Epic’s central claim is not that it is somehow foreclosed from Apple’s App Store (for example, because Apple might have agreed to exclusively distribute the games of one of Epic’s rivals). Instead, all of its objections come down to the fact that it would like to access Apple’s store on more favorable terms:

Apple’s conduct denies developers the choice of how best to distribute their apps. Developers are barred from reaching over one billion iOS users unless they go through Apple’s App Store, and on Apple’s terms. […]

Thus, developers are dependent on Apple’s noblesse oblige, as Apple may deny access to the App Store, change the terms of access, or alter the tax it imposes on developers, all in its sole discretion and on the commercially devastating threat of the developer losing access to the entire iOS userbase. […]

By imposing its 30% tax, Apple necessarily forces developers to suffer lower profits, reduce the quantity or quality of their apps, raise prices to consumers, or some combination of the three.

And the parallels with the Qualcomm litigation do not stop there. Epic is effectively asking courts to make Apple monetize its platform at a different level from the one it chose to maximize its profits (i.e., no more monetization at the App Store level). Similarly, Epic omits any suggestion of profit sacrifice on Apple’s part, even though profit sacrifice is a critical element of most unilateral-conduct theories of harm. Finally, Epic is challenging conduct that is the industry norm and that emerged in a highly competitive setting.

In short, all of Epic’s allegations are about monopoly prices, not monopoly maintenance or monopolization. Accordingly, just as the SEP cases discussed above were plainly beyond the outer bounds of antitrust enforcement (something that the DOJ repeatedly stressed with regard to the Qualcomm case), so too is the current wave of antitrust litigation against Apple. When all is said and done, Apple might thus be relieved that Qualcomm was victorious in their antitrust confrontation. Indeed, the legal principles that caused its demise against Qualcomm are precisely the ones that will, likely, enable it to prevail against Epic Games.

In the latest congressional hearing, purportedly analyzing Google’s “stacking the deck” in the online advertising marketplace, much of the opening statement and questioning by Senator Mike Lee, and later questioning by Senator Josh Hawley, focused on an episode of alleged anti-conservative bias by Google: its threat to demonetize The Federalist, a conservative publisher, unless it exercised a greater degree of control over its comments section. The senators connected this to Google’s “dominance,” arguing that it is only because Google’s ad services are essential that Google can dictate terms to a conservative website. A similar impulse motivates Section 230 reform efforts as well: allegedly anti-conservative online platforms wield their dominance to censor conservative speech, either through deplatforming or demonetization.

Before even getting into how political bias might be incorporated into antitrust analysis, though, it should be noted that there is likely no viable antitrust remedy. Even aside from the Section 230 debate, online platforms like Google are First Amendment speakers who have editorial discretion over their sites and apps, much like newspapers. An antitrust remedy compelling these companies to carry speech they disagree with would almost certainly violate the First Amendment.

But even aside from the First Amendment aspect of this debate, there is no easy way to incorporate concerns about political bias into antitrust. Perhaps the best way to understand this argument in the antitrust sense is as a non-price effects analysis. 

Political bias could be seen by end consumers as an important aspect of product quality. Conservatives have made the case that not only Google, but also Facebook and Twitter, have discriminated against conservative voices. The argument would then follow that consumer welfare is harmed when these dominant platforms leverage their control of the social media marketplace into the marketplace of ideas by censoring voices with whom they disagree. 

While this has theoretical plausibility, there are real practical difficulties. As Geoffrey Manne and I have written previously, in the context of incorporating privacy into antitrust analysis:

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application. 

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist. 

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

Just as with privacy and other product qualities, the analysis becomes increasingly complex first when tradeoffs between price and quality are introduced, and then even more so when tradeoffs between what different consumer groups perceive as quality are added. In fact, political bias is even more complex than privacy. All but the most exhibitionistic would prefer more privacy to less, all other things being equal. But with political media consumption, most would prefer to have more of what they want to read available, even if it comes at the expense of what others may want. There is no easy way to understand what consumer welfare means in a situation where one group’s preferences must come at the expense of another’s in moderation decisions.

Consider the case of The Federalist again. The allegation is that Google imposed its anti-conservative bias by “forcing” the website to clean up its comments section. The argument is that since The Federalist needs Google’s advertising money, it must play by Google’s rules. And since it did so, there is now one less avenue for conservative speech.

What this argument misses is the balance Google and other online services must strike as multi-sided platforms. The goal is to connect advertisers on one side of the platform to users on the other. If a site wants to take advantage of the ad network, it seems inevitable that intermediaries like Google will need to create rules about what can and can’t be shown, or they run the risk of losing advertisers who don’t want to be associated with certain speech or conduct. For instance, most companies don’t want to be associated with racist commentary, and they will take great pains to make sure they don’t sponsor or place ads in venues associated with racism. Online platforms connecting advertisers to potential consumers must take that into consideration.

Users, like those who frequent The Federalist, have unpriced access to content across those sites and apps which are part of ad networks like Google’s. Other models, like paid subscriptions (which The Federalist also has available), are also possible. But it isn’t clear that conservative voices or conservative consumers have been harmed overall by the option of unpriced access on one side of the platform, with advertisers paying on the other side. If anything, it seems the opposite is the case since conservatives long complained about legacy media having a bias and lauded the Internet as an opportunity to gain a foothold in the marketplace of ideas.

Online platforms like Google must balance the interests of users from across the political spectrum. If their moderation practices are too politically biased in one direction or another, users can switch to another online platform with one click or swipe. Assuming online platforms wish to maximize revenue, they have a strong incentive to limit political bias in their moderation practices. The ease of switching to a platform that markets itself as more free-speech-friendly, like Parler, shows that entrepreneurs can take advantage of market opportunities if Google and other online platforms go too far with political bias.

While one could perhaps argue that the major online platforms are colluding to keep out conservative voices, this is difficult to square with the different moderation practices each employs, as well as with data suggesting that conservative voices are consistently among the most shared on Facebook.

Antitrust is not a cure-all law. Conservatives who normally understand this need to reconsider whether antitrust is really well-suited for litigating concerns about anti-conservative bias online. 

This week the Senate will hold a hearing into potential anticompetitive conduct by Google in its display advertising business: the “stack” of products that it offers to advertisers seeking to place display ads on third-party websites. It is also widely reported that the Department of Justice is preparing a lawsuit against Google that will likely include allegations of anticompetitive behavior in this market, and that a number of state attorneys general are likely to join that lawsuit. Meanwhile, several papers have been published detailing these allegations.

This aspect of digital advertising can be incredibly complex and difficult to understand. Here we explain how display advertising fits in the broader digital advertising market, describe how display advertising works, consider the main allegations against Google, and explain why Google’s critics are misguided to focus on antitrust as a solution to alleged problems in the market (even if those allegations turn out to be correct).

Display advertising in context

Over the past decade, the price of advertising has fallen steadily while output has risen. Spending on digital advertising in the US grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period, the Producer Price Index for Internet advertising sales declined by nearly 40%. Rising spending in the face of falling prices indicates that the number of ads bought and sold increased by approximately 27% a year. Since 2000, advertising spending has been falling as a share of GDP, with online advertising growing as a share of that. The combination of increasing quantity, decreasing cost, and increasing total revenue is consistent with a growing and increasingly competitive market.
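
Those figures are mutually consistent, as a quick sketch of the implied compound annual growth rates shows (using only the numbers quoted above):

```python
# A quick check of the growth arithmetic above, using only the figures
# quoted in the text: $26bn (2010) to $130bn (2019) in US digital ad
# spend, and a roughly 40% decline in the Internet-advertising PPI.

spend_2010, spend_2019 = 26e9, 130e9
years = 2019 - 2010                 # nine years of growth
cumulative_price_change = -0.40

spend_cagr = (spend_2019 / spend_2010) ** (1 / years) - 1
price_cagr = (1 + cumulative_price_change) ** (1 / years) - 1
# Quantity growth is nominal spending growth deflated by price growth:
quantity_cagr = (1 + spend_cagr) / (1 + price_cagr) - 1

print(f"Spending growth: {spend_cagr:6.1%} per year")    # ~19.6%, the text's ~20%
print(f"Price change:    {price_cagr:6.1%} per year")    # ~-5.5%
print(f"Quantity growth: {quantity_cagr:6.1%} per year") # ~26.6%, the text's ~27%
```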

Display advertising on third-party websites is only a small subsection of the digital advertising market, comprising approximately 15-20% of digital advertising spending in the US. The rest of the digital advertising market is made up of ads on search results pages on sites like Google, Amazon and Kayak, on people’s Instagram and Facebook feeds, listings on sites like Zillow (for houses) or Craigslist, referral fees paid to price comparison websites for things like health insurance, audio and visual ads on services like Spotify and Hulu, and sponsored content from influencers and bloggers who will promote products to their fans. 

And digital advertising itself is only one of many channels through which companies can market their products. About 53% of total advertising spending in the United States goes on digital channels, with 30% going on TV advertising and the rest on things like radio ads, billboards and other more traditional forms of advertising. A few people still even read physical newspapers and the ads they contain, although physical newspapers’ bigger money makers have traditionally been classified ads, which have been replaced by less costly and more effective internet classifieds, such as those offered by Craigslist, or targeted ads on Google Maps or Facebook.

Indeed, it should be noted that advertising itself is only part of the larger marketing market of which non-advertising marketing communication—e.g., events, sales promotion, direct marketing, telemarketing, product placement—is as big a part as is advertising (each is roughly $500bn globally); it just hasn’t been as thoroughly disrupted by the Internet yet. But it is a mistake to assume that digital advertising is not a part of this broader market. And of that $1tr global market, Internet advertising in total occupies only about 18%—and thus display advertising only about 3%.

Ad placement is only one part of the cost of digital advertising. An advertiser trying to persuade people to buy its product must also do market research and analytics to find out who its target market is and what they want. Moreover, there are the costs of designing and managing a marketing campaign and additional costs to analyze and evaluate the effectiveness of the campaign. 

Nevertheless, one of the most straightforward ways to earn money from a website is to show ads to readers alongside the publisher’s content. To satisfy publishers’ demand for advertising revenues, many services have arisen to automate and simplify the placement of and payment for ad space on publishers’ websites. Google plays a large role in providing these services—what is referred to as “open display” advertising. And it is Google’s substantial role in this space that has sparked speculation and concern among antitrust watchdogs and enforcement authorities.

Before delving into the open display advertising market, a quick note about terms. In these discussions, “advertisers” are businesses that are trying to sell people stuff. Advertisers include large firms such as Best Buy and Disney and small businesses like the local plumber or financial adviser. “Publishers” are websites that carry those ads, and publish content that users want to read. Note that the term “publisher” refers to all websites regardless of the things they’re carrying: a blog about the best way to clean stains out of household appliances is a “publisher” just as much as the New York Times is. 

Under this broad definition, Facebook, Instagram, and YouTube are also considered publishers. In their role as publishers, they have a common goal: to provide content that attracts users to their pages who will act on the advertising displayed. “Users” are you and me—the people who want to read publishers’ content, and to whom advertisers want to show ads. Finally, “intermediaries” are the digital businesses, like Google, that sit in between the advertisers and the publishers, allowing them to do business with each other without ever meeting or speaking.

The display advertising market

If you’re an advertiser, display advertising works like this: your company—one that sells shoes, let’s say—wants to reach a certain kind of person and tell her about the company’s shoes. These shoes are comfortable, stylish, and inexpensive. You use a tool like Google Ads (or, if yours is a big company and you want a more expansive campaign over which you have more control, Google Marketing Platform) to design and upload an ad, and you tell Google about the people you want to reach—their age and location, say, and/or characterizations of their past browsing and searching habits (“interested in sports”).

Using that information, Google finds ad space on websites whose audiences match the people you want to target. This ad space is auctioned off to the highest bidder among the companies vying, along with your shoe company, to reach users with those characteristics. Thanks to tracking data, the ads don’t have to appear only on sports-related websites: as a user browses sports-related sites on the web, her browser picks up files (cookies) that tag her as someone potentially interested in sports apparel for targeting later.

So a user might look at a sports website and then later go to a recipe blog, and there receive the shoe ad on the basis of her earlier browsing. You, the shoe seller, hope that she will click through and buy (or at least consider buying) the shoes when she sees those ads. But one of the benefits of display advertising over search advertising is that—as with TV ads or billboard ads—just seeing the ad will make her aware of the product and potentially more likely to buy it later. Advertisers thus sometimes pay on the basis of clicks, sometimes on the basis of views, and sometimes on the basis of conversions (when a consumer takes an action of some sort, such as making a purchase or filling out a form).

That’s the advertiser’s perspective. From the publisher’s perspective—the owner of that recipe blog, let’s say—you want to auction ad space off to advertisers like that shoe company. In that case, you go to an ad server—Google’s product is called AdSense—give it a little bit of information about your site, and add some HTML code to your website. These ad servers gather information about your content (e.g., by looking at keywords you use) and your readers (e.g., by looking at what websites they’ve used in the past to make guesses about what they’ll be interested in) and place relevant ads next to and among your content. If a reader clicks, lucky you: you’ll get paid a few cents or dollars.

Apart from privacy concerns about the tracking of users, the really tricky and controversial part here concerns the way scarce advertising space is allocated. Most of the time, it’s done through auctions that happen in real time: each time a user loads a website, an auction is held in a fraction of a second to decide which advertiser gets to display an ad. The longer this process takes, the slower pages load and the more likely users are to get frustrated and go somewhere else.
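
To make those mechanics concrete, here is a deliberately stripped-down sketch of such an auction. Everything in it is hypothetical, and real exchanges are vastly more complex; note too that display exchanges have historically used second-price designs (shown here), though some have since moved to first-price formats.

```python
# A toy model of a real-time ad auction. Real exchanges involve floor
# prices, many intermediaries, and sub-100ms deadlines; all advertiser
# names and bid amounts below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float  # bid per impression, in USD

def run_second_price_auction(bids: list[Bid]) -> tuple[Bid, float]:
    """Highest bidder wins the impression but pays the runner-up's bid."""
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    return ranked[0], ranked[1].amount

# A user tagged "interested in sports" loads the recipe blog; bidders
# respond to her impression within milliseconds.
bids = [
    Bid("shoe_company", 0.042),
    Bid("sportswear_rival", 0.038),
    Bid("unrelated_brand", 0.011),
]
winner, price = run_second_price_auction(bids)
print(f"{winner.advertiser} wins and pays ${price:.3f} for the impression")
# -> shoe_company wins and pays $0.038 for the impression
```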

As well as the service hosting the auction, there are lots of little functions that different companies perform that make the auction and placement process smoother. Some fear that by offering a very popular product integrated end to end, Google’s “stack” of advertising products can bias auctions in favour of its own products. There’s also speculation that Google’s product is so tightly integrated and so effective at using data to match users and advertisers that it is not viable for smaller rivals to compete.

We’ll discuss this speculation and fear in more detail below. But it’s worth bearing in mind that this kind of real-time bidding for ad placement was not always the norm, and it is not the only way that websites display ads to their users even today. Big advertisers and websites often deal with each other directly. As with, say, TV advertising, large advertisers often have a good idea about the people they want to reach, and big publishers (like popular news websites) often have a good idea about who their readers are. For example, big brands often want to push a message to a large number of people across different customer types as part of a broader ad campaign.

In these kinds of direct sales, the space is sometimes bought outright, in advance, and reserved for those advertisers. In most cases, though, direct sales are run through limited, intermediated auction services that are not open to the general market. Put together, these kinds of direct ad buys account for close to 70% of total US display advertising spending. The remainder—the inventory left over after these kinds of sales have been done—is typically sold through the real-time, open display auctions described above.

Different adtech products compete on their ability to target customers effectively, to serve ads quickly (since any delay in the auction and ad placement process slows down page load times for users), and to do so inexpensively. All else equal (including the effectiveness of the ad placement), advertisers want to pay the lowest possible price to place an ad. Similarly, publishers want to receive the highest possible price to display an ad. As a result, both advertisers and publishers have a keen interest in reducing the intermediary’s “take” of the ad spending.

This is all a simplification of how the market works. There is not one single auction house for ad space—in practice, many advertisers and publishers end up having to use lots of different auctions to find the best price. As the market evolved to reach this state from the early days of direct ad buys, new functions that added efficiency to the market emerged. 

In the early years of ad display auctions, individual processes in the stack were performed by numerous competing companies. Through a process of “vertical integration” some companies, such as Google, brought these different processes under the same roof, with the expectation that integration would streamline the stack and make the selling and placement of ads more efficient and effective. The process of vertical integration in pursuit of efficiency has led to a more consolidated market in which Google is the largest player, offering simple, integrated ad buying products to advertisers and ad selling products to publishers. 

Google is by no means the only integrated adtech service provider, however: Facebook, Amazon, Verizon, AT&T/Xandr, theTradeDesk, LumenAd, Taboola and others also provide end-to-end adtech services. But, in the market for open auction placement on third-party websites, Google is the biggest.

The cases against Google

The UK’s Competition and Markets Authority (CMA) carried out a formal study of the digital advertising market between 2019 and 2020, issuing its final report in July of this year. Although it also encompassed Google’s search advertising business and Facebook’s display advertising business (both of which relate to ads on those companies’ “owned and operated” websites and apps), the CMA study involved the most detailed independent review of Google’s open display advertising business to date.

That study did not lead to any competition enforcement proceedings against Google—the CMA concluded, in other words, that Google had not broken UK competition law—but it did conclude that Google’s vertically integrated products create conflicts of interest that could lead it to behave in ways that do not benefit the advertisers and publishers that use it. One example was Google’s withholding of certain data from publishers that would make it easier for them to use other ad-selling products; another was its practice of setting price floors that allegedly led advertisers to pay more than they otherwise would.

Instead, the CMA recommended setting up a “Digital Markets Unit” (DMU) that could regulate digital markets in general, along with a code of conduct for Google and Facebook (and perhaps other large tech platforms) intended to govern their dealings with smaller customers.

The CMA’s analysis is flawed, however. For instance, it makes big assumptions about the dependency of advertisers on display advertising, largely assuming that they would not switch to other forms of advertising if prices rose, and it is light on economics. But factually it is the most comprehensively researched investigation into digital advertising yet published.

Piggybacking on the CMA’s research, and mounting perhaps the strongest attack on Google’s adtech offerings to date, was a paper released just prior to the CMA’s final report called “Roadmap for a Digital Advertising Monopolization Case Against Google”, by Yale economist Fiona Scott Morton and Omidyar Network lawyer David Dinielli. Dinielli will testify before the Senate committee.

While the Scott Morton and Dinielli paper is extremely broad, it also suffers from a number of problems. 

One, because it was released before the CMA’s final report, it is largely based on the interim report the CMA released months earlier, halfway through the market study, in December 2019. This means that several of its claims are out of date. For example, it makes much of the possibility, raised by the CMA in its interim report, that Google may take a larger cut of advertising spending than its competitors, and of claims made in another report that Google introduces “hidden” fees that increase the overall cut it takes from ad auctions.

But in the final report, after further investigation, the CMA concludes that this is not the case. There, the CMA describes its analysis of all Google Ad Manager open auctions related to UK web traffic between 8 and 14 March 2020 (involving billions of auctions). This, according to the CMA, allowed it to observe any possible “hidden” fees as well. The CMA concludes:

Our analysis found that, in transactions where both Google Ads and Ad Manager (AdX) are used, Google’s overall take rate is approximately 30% of advertisers’ spend. This is broadly in line with (or slightly lower than) our aggregate market-wide fee estimate outlined above. We also calculated the margin between the winning bid and the second highest bid in AdX for Google and non-Google DSPs, to test whether Google was systematically able to win with a lower margin over the second highest bid (which might have indicated that they were able to use their data advantage to extract additional hidden fees). We found that Google’s average winning margin was similar to that of non-Google DSPs. Overall, this evidence does not indicate that Google is currently extracting significant hidden fees. As noted below, however, it retains the ability and incentive to do so. (p. 275, emphasis added)
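
To unpack what the CMA measured here: the take rate is the share of each advertiser dollar kept by the intermediation chain, and the winning margin is the gap between the winning and second-highest bids. A toy sketch of both computations, with entirely invented records, might look like this:

```python
# An illustrative sketch of the two statistics the CMA computed. Its
# actual methodology and data are far richer; every record below is
# invented for illustration.

auctions = [
    # advertiser spend, amount reaching the publisher, winning bid,
    # second-highest bid, and which DSP placed the winning bid
    {"spend": 1.00, "payout": 0.70, "win": 0.95, "second": 0.90, "dsp": "google"},
    {"spend": 2.00, "payout": 1.40, "win": 1.80, "second": 1.72, "dsp": "google"},
    {"spend": 1.50, "payout": 1.05, "win": 1.40, "second": 1.33, "dsp": "other"},
]

# Take rate: share of advertiser spend kept by the adtech intermediaries.
take_rate = 1 - sum(a["payout"] for a in auctions) / sum(a["spend"] for a in auctions)

def avg_winning_margin(dsp: str) -> float:
    rows = [a for a in auctions if a["dsp"] == dsp]
    return sum(a["win"] - a["second"] for a in rows) / len(rows)

print(f"Overall take rate:         {take_rate:.0%}")  # 30% in this toy data
print(f"Google winning margin:     {avg_winning_margin('google'):.3f}")
print(f"Non-Google winning margin: {avg_winning_margin('other'):.3f}")
# Comparable margins across the two groups are the kind of evidence that
# led the CMA to find no sign of significant "hidden" fees.
```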

Scott Morton and Dinielli also misquote and/or misunderstand important sections of the CMA interim report as relating to display advertising when, in fact, they relate to search. For example, Scott Morton and Dinielli write that the “CMA concluded that Google has nearly insurmountable advantages in access to location data, due to the location information [uniquely available to it from other sources].” (p. 15). The CMA never makes any claim of “insurmountable advantage,” however. Rather, to support the claim, Scott Morton and Dinielli cite to a portion of the CMA interim report recounting a suggestion made by Microsoft regarding the “critical” value of location data in providing relevant advertising. 

But that portion of the report, as well as the suggestion made by Microsoft, is about search advertising. While location data may also be valuable for display advertising, it is not clear that the GPS-level data that is so valuable in providing mobile search ad listings (for a nearby cafe or restaurant, say) is particularly useful for display advertising, which may be just as well-targeted by less granular, city- or county-level location data, which is readily available from a number of sources. In any case, Scott Morton and Dinielli are simply wrong to use a suggestion offered by Microsoft relating to search advertising to demonstrate the veracity of an assertion about a conclusion drawn by the CMA regarding display advertising. 

Scott Morton and Dinielli also confusingly word their own judgements about Google’s conduct in ways that could be misinterpreted as conclusions by the CMA:

The CMA reports that Google has implemented an anticompetitive sales strategy on the publisher ad server end of the intermediation chain. Specifically, after purchasing DoubleClick, which became its publisher ad server, Google apparently lowered its prices to publishers by a factor of ten, at least according to one publisher’s account related to the CMA. (p. 20)

In fact, the CMA does not conclude that Google’s lowering of its prices was an “anticompetitive sales strategy”—it does not use these words at all—and what Scott Morton and Dinielli are referring to is a claim by a rival ad server business, Smart, that Google’s cutting of its prices after acquiring DoubleClick led to Google expanding its market share. Apart from the misleading wording, it is unclear why a competition authority should consider it “anticompetitive” when prices are falling and kept low, and when—as Smart reported to the CMA—its competitor’s response is to enhance its own offering.

The case that remains

Stripping away the elements of Scott Morton and Dinielli’s case that seem unsubstantiated by a more careful reading of the CMA reports, and with the benefit of the findings in the CMA’s final report, we are left with a case that argues that Google self-preferences to an unreasonable extent, giving itself a product that is as successful as it is in display advertising only because of Google’s unique ability to gain advantage from its other products that have little to do with display advertising. Because of this self-preferencing, they might argue, innovative new entrants cannot compete on an equal footing, so the market loses out on incremental competition because of the advantages Google gets from being the world’s biggest search company, owning YouTube, running Google Maps and Google Cloud, and so on. 

The most significant examples of this are Google’s use of data from other products—like location data from Maps or viewing history from YouTube—to target ads more effectively; its ability to enable advertisers placing search ads to easily place display ads through the same interface; its introduction of faster and more efficient auction processes that sidestep the existing tools developed by other third-party ad exchanges; and its design of its own tool (“open bidding”) for aggregating auction bids for advertising space to compete with (rather than incorporate) an alternative tool (“header bidding”) that is arguably faster, but costs more money to use.

These allegations require detailed consideration, and in a future paper we will attempt to assess them in detail. But in thinking about them now it may be useful to consider the remedies that could be imposed to address them, assuming they do diminish the ability of rivals to compete with Google: what possible interventions we could make in order to make the market work better for advertisers, publishers, and users. 

We can think of remedies as falling into two broad buckets: remedies that stop Google from doing things that improve the quality of its own offerings, thus making it harder for others to keep up; and remedies that require it to help rivals improve their products in ways otherwise accessible only to Google (e.g., by making Google’s products interoperable with third-party services) without inherently diminishing the quality of Google’s own products.

The first camp of these, what we might call “status quo minus,” includes rules banning Google from using data from its other products or offering single order forms for advertisers, or, in the extreme, a structural remedy that “breaks up” Google by either forcing it to sell off its display ad business altogether or to sell off elements of it. 

What is striking about these kinds of interventions is that all of them “work” by making Google worse for those that use it. Restrictions on Google’s ability to use data from other products, for example, will make its service more expensive and less effective for those who use it. Ads will be less well-targeted and therefore less effective. This will lead to lower bids from advertisers. Lower ad prices will be transmitted through the auction process to produce lower payments for publishers. Reduced publisher revenues will mean some content providers exit. Users will thus be confronted with less available content and ads that are less relevant to them and thus, presumably, more annoying. In other words: No one will be better off, and most likely everyone will be worse off.

The reason a “single order form” helps Google is that it is useful to advertisers, the same way it’s useful to be able to buy all your groceries at one store instead of lots of different ones. Similarly, vertical integration in the “ad stack” allows for a faster, cheaper, and simpler product for users on all sides of the market. A different kind of integration that has been criticized by others, where third-party intermediaries can bid more quickly if they host on Google Cloud, benefits publishers and users because it speeds up auction time, allowing websites to load faster. So does Google’s unified alternative to “header bidding,” giving a speed boost that is apparently valuable enough to publishers that they will pay for it.

So who would benefit from stopping Google from doing these things, or even forcing Google to sell its operations in this area? Not advertisers or publishers. Maybe Google’s rival ad intermediaries would; presumably, artificially hamstringing Google’s products would make it easier for them to compete with Google. But if so, it’s difficult to see how this would be an overall improvement. It is even harder to see how this would improve the competitive process—the very goal of antitrust. Rather, any increase in the competitiveness of rivals would result not from making their products better, but from making Google’s product worse. That is a weakening of competition, not its promotion. 

On the other hand, interventions that aim to make Google’s products more interoperable at least do not fall prey to this problem. Such “status quo plus” interventions would aim to take the benefits of Google’s products and innovations and allow more companies to use them to improve their own competing products. Not surprisingly, such interventions would be more in line with the conclusions the CMA came to than the divestitures and operating restrictions proposed by Scott Morton and Dinielli, as well as (reportedly) state attorneys general considering a case against Google.

But mandated interoperability raises a host of different concerns: extensive and uncertain rulemaking, ongoing regulatory oversight, and, likely, price controls, all of which would limit Google’s ability to experiment with and improve its products. The history of such mandated duties to deal or compulsory licenses is a troubled one, at best. But even if, for the sake of argument, we concluded that these kinds of remedies were desirable, they are difficult to impose via an antitrust lawsuit of the kind that the Department of Justice is expected to launch. Most importantly, if the conclusion of Google’s critics is that Google’s main offense is offering a product that is just too good to compete with without regulating it like a utility, with all the costs to innovation that that would entail, maybe we ought to think twice about whether an antitrust intervention is really worth it at all.

Much has already been said about the twin antitrust suits filed by Epic Games against Apple and Google. For those who are not familiar with the cases, the game developer – most famous for its hit title Fortnite and the “Unreal Engine” that underpins much of the game (and movie) industry – is complaining that Apple and Google are thwarting competition from rival app stores and in-app payment processors. 

Supporters have been quick to see in these suits a long-overdue challenge to the 30% commissions that Apple and Google charge. Some have even portrayed Epic as a modern-day Robin Hood, leading the fight against Big Tech to the benefit of small app developers and consumers alike. Epic itself has been keen to stoke this image, comparing its litigation to a fight for basic freedoms in the face of Big Brother.

However, upon closer inspection, cracks rapidly appear in this rosy picture. What is left is a company partaking in blatant rent-seeking that threatens to harm the sprawling ecosystems that have emerged around both Apple and Google’s app stores.

Two issues are particularly salient. First, Epic is trying to protect its own interests at the expense of the broader industry. If successful, its suit would merely lead to alternative revenue schemes that – although more beneficial to itself – would leave smaller developers to shoulder higher fees. Second, the fees that Epic portrays as extortionate were in fact key to the emergence of mobile gaming.

Epic’s utopia is not an equilibrium

Central to Epic’s claims is the idea that both Apple and Google: (i) thwart competition from rival app stores, and implement a series of measures that prevent developers from reaching gamers through alternative means (such as pre-installing apps, or sideloading them in the case of Apple’s platforms); and (ii) tie their proprietary payment processing services to their app stores. According to Epic, this ultimately enables both Apple and Google to extract “extortionate” commissions (30%) from app developers.

But Epic’s whole case is based on the unrealistic assumption that Apple and Google will sit idly by while rival app stores and payment systems free-ride on the vast investments they have ploughed into their respective smartphone platforms. In other words, removing Apple and Google’s ability to charge commissions on in-app purchases would not prevent them from monetizing their platforms elsewhere.

Indeed, economic and strategic management theory tells us that so long as Apple and Google single-handedly control one of the necessary points of access to their respective ecosystems, they should be able to extract a sizable share of the revenue generated on their platforms. One can only speculate, but it is easy to imagine Apple and Google charging rival app stores for access to their respective platforms, or charging developers for access to critical APIs.

Epic itself seems to concede this point. As reported in a recent Verge article, Apple threatened to cut off Epic’s access to iOS and Mac developer tools, which it currently offers at little to no cost:

Apple will terminate Epic’s inclusion in the Apple Developer Program, a membership that’s necessary to distribute apps on iOS devices or use Apple developer tools, if the company does not “cure your breaches” to the agreement within two weeks, according to a letter from Apple that was shared by Epic. Epic won’t be able to notarize Mac apps either, a process that could make installing Epic’s software more difficult or block it altogether. Apple requires that all apps are notarized before they can be run on newer versions of macOS, even if they’re distributed outside the App Store.

There is little to prevent Apple from more heavily monetizing these tools – should Epic’s antitrust case successfully prevent it from charging commissions via its app store.

All of this raises the question: why is Epic bringing a suit that, if successful, would merely result in the emergence of alternative fee schedules (as opposed to a significant reduction in the overall fees paid by developers)?

One potential answer is that the current system is highly favorable to small apps that earn little to no revenue from purchases and that benefit most from the trust created by Apple and Google’s curation of their stores. It is, however, much less favorable to developers like Epic that no longer require any curation to garner the necessary trust from consumers and that earn a large share of their revenue from in-app purchases.

In more technical terms, the fact that all in-game payments are made through Apple and Google’s payment processing enables both platforms to price-discriminate more easily. Unlike fixed fees (but just like royalties), percentage commissions are necessarily state-contingent (i.e., the same commission will lead to vastly different revenue depending on an underlying app’s success). The most successful apps thus contribute far more to a platform’s fixed costs. For instance, it is estimated that mobile games account for 72% of all app store spend. Likewise, more than 80% of the apps on Apple’s store pay no commission at all.

This likely expands app store output by getting lower-value developers on board. In that sense, it is akin to Ramsey pricing (where a firm or utility expands social welfare by allocating a higher share of fixed costs to the most inelastic consumers). Unfortunately, this would be much harder to accomplish if high-value developers could easily bypass Apple’s or Google’s payment systems.
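
A stylized numerical comparison makes the point. In the sketch below, the 30% rate is the commission discussed in this post, while every developer and revenue figure is invented:

```python
# A stylized comparison of the two fee models. The 30% rate is the one
# discussed in this post; all developer names and revenues are invented.

COMMISSION_RATE = 0.30
HYPOTHETICAL_FLAT_FEE = 5_000  # what a fixed annual fee might look like (USD)

developers = {                 # hypothetical annual in-app revenue (USD)
    "free_hobby_app": 0,
    "small_indie_game": 8_000,
    "mid_tier_game": 400_000,
    "hit_title": 50_000_000,
}

for name, revenue in developers.items():
    commission = COMMISSION_RATE * revenue
    print(f"{name:>16}: commission ${commission:>12,.0f}"
          f" vs flat fee ${HYPOTHETICAL_FLAT_FEE:,}")

# Under the commission, the hit title contributes $15m to the platform's
# fixed costs while the free app pays nothing and stays on board; a flat
# fee would price out the two smallest developers while barely touching
# the largest. The commission is state-contingent; the flat fee is not.
```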

The bottom line is that Epic appears to be fighting to change Apple and Google’s app store business models in order to obtain fee schedules that are better aligned with its own interests. This is all the more important for Epic Games, given that mobile gaming is becoming increasingly popular relative to other gaming mediums (also here).

The emergence of new gaming platforms

Up to this point, I have mostly presented a zero-sum view of Epic’s lawsuit—i.e., developers and platforms are fighting over the distribution of app store profits (though some smaller developers may lose out). But this ignores what is likely the chief virtue of Apple and Google’s “closed” distribution model: namely, that it has greatly expanded the market for mobile gaming (and other mobile software), and will likely continue to do so in the future.

Much has already been said about the significant security and trust benefits that Apple and Google’s curation of their app stores (including their control of in-app payments) provide to users. Benedict Evans and Ben Thompson have both written excellent pieces on this very topic. 

In a nutshell, the closed model allows previously unknown developers to expand rapidly because (i) users do not have to fear that their apps contain some form of malware, and (ii) the platforms greatly reduce payment frictions, most notably security-related ones. But while these are indeed tremendous benefits, another important upside seems to have gone relatively unnoticed.

The “closed” business model also gives Apple and Google (as well as other platforms) significant incentives to develop new distribution mediums (smart TVs spring to mind) and improve existing ones. In turn, this greatly expands the audience that software developers can reach. In short, developers get a smaller share of a much larger pie.

The economics of two-sided markets are enlightening in this respect. Apple and Google’s stores are what Armstrong and Wright (here and here) refer to as “competitive bottlenecks”. That is, they compete aggressively (amongst themselves, and with other gaming platforms) to attract exclusive users. They can then charge developers a premium to access those users (note, however, that in the case at hand the incidence of those platform fees is unclear).

This gives platforms significant incentives to continuously attract and retain new users. For instance, if Steve Jobs is to be believed, giving consumers better access to media such as eBooks, video, and games was one of the driving forces behind the launch of the iPad.

This model of innovation would be seriously undermined if developers and consumers could easily bypass platforms (as Epic Games is seeking to do).

In response, some commentators have countered that platforms may use their strong market positions to squeeze developers, thereby undermining software investments. But such a course of action may ultimately be self-defeating. For instance, writing about retail platforms imitating third-party sellers, Andrei Hagiu, Tat-How Teh, and Julian Wright have argued that:

[T]he platform has an incentive to commit itself not to imitate highly innovative third-party products in order to preserve their incentives to innovate.

Seen in this light, Apple and Google’s 30% commissions serve as a soft commitment not to expropriate developers, leaving them with a sizable share of the revenue generated on each platform. This may explain why the 30% commission has become a standard in the games industry (and beyond).

Furthermore, from an evolutionary perspective, it is hard to argue that the 30% commission is somehow extortionate. If game developers were systematically expropriated, the gaming industry—and its mobile segment in particular—would not have grown so drastically over the past several years.

All of this likely explains why a recent survey found that 81% of app developers believe regulatory intervention would be misguided:

81% of developers and publishers believe that the relationship between them and platforms is best handled within the industry, rather than through government intervention. Competition and choice mean that developers will use platforms that they work with best.

The upshot is that the “closed” model employed by Apple and Google has served the gaming industry well. There is little compelling reason to overhaul that model today.

Final thoughts

When all is said and done, there is no escaping the fact that Epic Games is currently playing a high-stakes rent-seeking game. As Apple noted in its opposition to Epic’s motion for a temporary restraining order:

Epic did not, and has not, contested that it is in breach of the App Store Guidelines and the License Agreement. Epic’s plan was to violate the agreements intentionally in order to manufacture an emergency. The moment Fortnite was removed from the App Store, Epic launched an extensive PR smear campaign against Apple and a litigation plan was orchestrated to the minute; within hours, Epic had filed a 56-page complaint, and within a few days, filed nearly 200 pages with this Court in a pre-packaged “emergency” motion. And just yesterday, it even sought to leverage its request to this Court for a sales promotion, announcing a “#FreeFortniteCup” to take place on August 23, inviting players for one last “Battle Royale” across “all platforms” this Sunday, with prizes targeting Apple.

Epic is ultimately seeking to introduce its own app store on both Apple and Google’s platforms, or at least bypass their payment processing services (as Spotify is seeking to do in the EU).

Unfortunately, as this post has argued, condoning this type of free-riding could prove highly detrimental to the entire mobile software industry. Smaller companies would almost inevitably be left to foot a larger share of the bill, existing platforms would become less secure, and the development of new ones could be hindered. At the end of the day, 30% might actually be a small price to pay.

The goal of US antitrust law is to ensure that competition continues to produce positive results for consumers and the economy in general. To that end, we published a letter co-signed by twenty-three of the U.S.’s leading economists, legal scholars, and practitioners, including one winner of the Nobel Prize in economics (full list of signatories here), urging the House Judiciary Committee on the State of Antitrust Law to reject calls for radical upheaval of antitrust law that would, among other things, undermine the independence and neutrality of US antitrust law.

A critical part of maintaining independence and neutrality in the administration of antitrust is ensuring that it is insulated from politics. Unfortunately, this view is under attack from all sides. The President sees widespread misconduct among US tech firms that he believes are controlled by the “radical left” and is, apparently, happy to use whatever tools are at hand to chasten them. 

Meanwhile, Senator Klobuchar has claimed, without any real evidence, that the mooted Uber/Grubhub merger is simply about monopolization of the market, and not, for example, related to the huge changes that businesses like this are facing because of the Covid shutdown.

Both of these statements challenge the principle that the rule of law depends on being politically neutral, including in antitrust. 

Our letter, contrary to the claims made by President Trump, Sen. Klobuchar and some of the claims made to the Committee, asserts that the evidence and economic theory are clear: existing antitrust law is doing a good job of promoting competition and consumer welfare in digital markets and the economy more broadly, and concludes that the Committee should focus on reforms that improve antitrust at the margin, not changes that throw out decades of practice and precedent.

The letter argues that:

  1. The American economy—including the digital sector—is competitive, innovative, and serves consumers well, contrary to how it is sometimes portrayed in the public debate. 
  2. Structural changes in the economy have resulted from increased competition, and increases in national concentration have generally happened because competition at the local level has intensified and local concentration has fallen.
  3. Lax antitrust enforcement has not allowed systematic increases in market power, and the evidence simply does not support the idea that antitrust enforcement has weakened in recent decades.
  4. Existing antitrust law is adequate for protecting competition in the modern economy, having been built up through years of careful case-by-case scrutiny. Calls to throw out decades of precedent to achieve an antitrust “Year Zero” would throw away a huge body of learning and deliberation.
  5. History teaches that discarding the modern approach to antitrust would harm consumers, returning us to a situation where per se rules prohibited the use of economic analysis and fact-based defenses of business practices.
  6. Common sense reforms should be pursued to improve antitrust enforcement, and the reforms proposed in the letter could help to improve competition and consumer outcomes in the United States without overturning the whole system.

The reforms suggested include measures to increase transparency of the DOJ and FTC, greater scope for antitrust challenges against state-sponsored monopolies, stronger penalties for criminal cartel conduct, and more agency resources being made available to protect workers from anti-competitive wage-fixing agreements between businesses. These are suggestions for the House Committee to consider and are not supported by all the letter’s signatories.

Some of the arguments in the letter are set out in greater detail in ICLE’s own submission to the Committee, which elaborates on the nature of competition in modern digital markets and in traditional markets that have been transformed by the adoption of digital technologies.

The full letter is here.

On Monday evening, around 6:00 PM Eastern Standard Time, news leaked that the United States District Court for the Southern District of New York had decided to allow the T-Mobile/Sprint merger to go through, giving the companies a victory over a group of state attorneys general trying to block the deal.

Thomas Philippon, a professor of finance at NYU, used this opportunity to conduct a quick-and-dirty event study on Twitter:

Short thread on T-Mobile/Sprint merger. There were 2 theories:

(A) It’s a 4-to-3 merger that will lower competition and increase markups.

(B) The new merged entity will be able to take on the industry leaders AT&T and Verizon.

(A) and (B) make clear predictions. (A) predicts the merger is good news for AT&T and Verizon’s shareholders. (B) predicts the merger is bad news for AT&T and Verizon’s shareholders. The news leaked at 6pm that the judge would approve the merger. Sprint went up 60% as expected. Let’s test the theories. 

Here is Verizon’s after trading price: Up 2.5%.

Here is ATT after hours: Up 2%.

Conclusion 1: Theory B is bogus, and the merger is a transfer of at least 2%*$280B (AT&T) + 2.5%*$240B (Verizon) = $11.6 billion from the pockets of consumers to the pockets of shareholders. 

Conclusion 2: I and others have argued for a long time that theory B was bogus; this was anticipated. But lobbying is very effective indeed… 

Conclusion 3: US consumers already pay two or three times more than those of other rich countries for their cell phone plans. The gap will only increase.

And just a reminder: these firms invest 0% of the excess profits. 

Philippon published his thread about 40 minutes prior to markets opening for regular trading on Tuesday morning. The Court’s official decision was published shortly before markets opened as well. By the time regular trading began at 9:30 AM, Verizon had completely reversed its overnight increase and opened down from the previous day’s close. While AT&T opened up slightly, it too had given back most of its initial gains. By 11:00 AM, AT&T was also in the red. When markets closed at 4:00 PM on Tuesday, Verizon was down more than 2.5 percent and AT&T was down just under 0.5 percent.

Does this mean that, in fact, theory A is the “bogus” one? Was the T-Mobile/Sprint merger decision actually a transfer of “$7.4 billion from the pockets of shareholders to the pockets of consumers,” as I suggested in my own tongue-in-cheek thread later that day? In this post, I will look at the factors that go into conducting a proper event study.  
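To make the back-of-the-envelope arithmetic behind both estimates explicit, here is a minimal sketch in Python (the market caps and price moves are simply the approximate figures quoted above, nothing more):

```python
# Implied "transfer" in a back-of-the-envelope event study:
# sum over competitors of (price move x market cap).

MARKET_CAP = {"AT&T": 280e9, "Verizon": 240e9}  # approximate, as quoted above

def implied_transfer(price_moves):
    """Total change in competitors' market value implied by their price moves."""
    return sum(MARKET_CAP[firm] * move for firm, move in price_moves.items())

# Philippon's after hours numbers: AT&T +2%, Verizon +2.5%
print(implied_transfer({"AT&T": 0.02, "Verizon": 0.025}) / 1e9)     # ~11.6 ($B)

# Tuesday's closing numbers: AT&T -0.5%, Verizon -2.5%
print(implied_transfer({"AT&T": -0.005, "Verizon": -0.025}) / 1e9)  # ~-7.4 ($B)
```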

What’s the appropriate window for a merger event study?

In a response to my thread, Philippon said, “I would argue that an event study is best done at the time of the event, not 16 hours after. Leak of merger approval 6 pm Monday. AT&T up 2 percent immediately. AT&T still up at open Tuesday. Then comes down at 10am.” I don’t disagree that “an event study is best done at the time of the event.” In this case, however, we need to consider two important details: When was the “event” exactly, and what were the conditions in the financial markets at that time?

This event did not begin and end with the leak on Monday night. The official announcement came Tuesday morning when the full text of the decision was published. This additional information answered a few questions for market participants: 

  • Were the initial news reports true?
  • Based on the text of the decision, what is the likelihood it gets reversed on appeal?
    • Wall Street: “Not all analysts are convinced this story is over just yet. In a note released immediately after the judge’s verdict, Nomura analyst Jeff Kvaal warned that ‘we expect the state AGs to appeal.’ RBC Capital analyst Jonathan Atkin noted that such an appeal, if filed, could delay closing of the merger by ‘an additional 4-5’ months — potentially delaying closure until September 2020.”
  • Did the Court impose any further remedies or conditions on the merger?

As stock traders digested all the information from the decision, Verizon and AT&T quickly went negative. There is much debate in the academic literature about the appropriate window for event studies on mergers. But the range in question is always one of days or weeks — not a couple of hours of after hours trading. A recent paper using the event study methodology analyzed roughly 5,000 mergers and found abnormal returns of about positive one percent for competitors in the relevant market following a merger announcement. Notably for our purposes, this small abnormal return builds in the first few days following a merger announcement and persists for up to 30 days, as shown in the chart below:

As with the other studies the paper cites in its literature review, this particular research design included a window of multiple weeks both before and after the event occurred. When analyzing the T-Mobile/Sprint merger decision, we should similarly expand the window beyond just a few hours of after hours trading.
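For readers unfamiliar with the mechanics, the standard market-model design estimates each stock’s normal relationship to the market over a pre-event period, then cumulates the “abnormal” portion of returns over the event window. Here is a minimal sketch (the OLS estimation and window handling are generic illustrations, not the cited paper’s exact design):

```python
import numpy as np

def cumulative_abnormal_return(stock, market, event_start, window):
    """CAR over [event_start, event_start + window), given daily returns.

    Alpha and beta are estimated by OLS on the pre-event period.
    """
    beta, alpha = np.polyfit(market[:event_start], stock[:event_start], 1)
    actual = np.asarray(stock[event_start:event_start + window])
    expected = alpha + beta * np.asarray(market[event_start:event_start + window])
    return float(np.sum(actual - expected))
```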

How liquid is the after hours market?

More important than the length of the window, however, is the relative liquidity of the market during that time. The after hours market is much thinner than the regular hours market and may not reflect all available information. For some rough numbers, let’s look at data from NASDAQ. For the last five after hours trading sessions, total volume was between 80 and 100 million shares. Let’s call it 90 million on average. By contrast, the total volume for the last five regular trading hours sessions was between 2 and 2.5 billion shares. Let’s call it 2.25 billion on average. So, the regular trading hours have roughly 25 times as much liquidity as the after hours market.

We could also look at relative liquidity for a single company as opposed to the total market. On Wednesday during regular hours (data is only available for the most recent day), 22.49 million shares of Verizon stock were traded. In after hours trading that same day, fewer than a million shares traded hands. You could change some assumptions and account for other differences in the after hours market and the regular market when analyzing the data above. But the conclusion remains the same: the regular market is at least an order of magnitude more liquid than the after hours market. This is incredibly important to keep in mind as we compare the after hours price changes (as reported by Philippon) to the price changes during regular trading hours.
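In code, the comparison boils down to two simple ratios (using the rough volume figures above):

```python
# Rough NASDAQ share volumes quoted above.
after_hours_avg = 90e6   # ~90M shares per after hours session (five-day average)
regular_avg = 2.25e9     # ~2.25B shares per regular session (five-day average)
print(regular_avg / after_hours_avg)  # 25.0 -- market-wide liquidity ratio

vz_regular = 22.49e6     # Verizon shares traded Wednesday, regular hours
vz_after_hours = 1e6     # under 1M shares after hours that same day
print(vz_regular / vz_after_hours)    # ~22.5 -- single-stock ratio
```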

What are Wall Street analysts saying about the decision?

To understand the fundamentals behind these stock moves, it’s useful to see what Wall Street analysts are saying about the merger decision. Prior to the ruling, analysts were already worried about Verizon’s ability to compete with the combined T-Mobile/Sprint entity in the short- and medium-term:

Last week analysts at LightShed Partners wrote that if Verizon wins most of the first available tranche of C-band spectrum, it could deploy 60 MHz in 2022 and see capacity and speed benefits starting in 2023.

“With that timeline, C-Band still does not answer the questions of what spectrum Verizon will be using for the next three years,” wrote LightShed’s Walter Piecyk and Joe Galone at the time.

Following the news of the decision, analysts were clear in delivering their own verdict on how the decision would affect Verizon:

“Verizon looks to us to be a net loser here,” wrote the MoffettNathanson team led by Craig Moffett.

…  

“Approval of the T-Mobile/Sprint deal takes not just one but two spectrum options off the table,” wrote Moffett. “Sprint is now not a seller of 2.5 GHz spectrum, and Dish is not a seller of AWS-4. More than ever, Verizon must now bet on C-band.”

LightShed also pegged Tuesday’s merger ruling as a negative for Verizon.

“It’s not great news for Verizon, given that it removes Sprint and Dish’s spectrum as an alternative, created a new competitor in Dish, and has empowered T-Mobile with the tools to deliver a superior network experience to consumers,” wrote LightShed.

In a note following news reports that the court would side with T-Mobile and Sprint, New Street analyst Jonathan Chaplin wrote, “T-Mobile will be far more disruptive once they have access to Sprint’s spectrum than they have been until now.”

However, analysts were more sanguine about AT&T’s prospects:

AT&T, though, has been busy deploying additional spectrum, both as part of its FirstNet build and to support 5G rollouts. This has seen AT&T increase its amount of deployed spectrum by almost 60%, according to Moffett, which takes “some of the pressure off to respond to New T-Mobile.”

Still, while AT&T may be in a better position on the spectrum front compared to Verizon, it faces the “same competitive dynamics,” Moffett wrote. “For AT&T, the deal is probably a net neutral.”

The quantitative evidence from the stock market seems to agree with the qualitative analysis from the Wall Street research firms. Let’s look at the five-day window of trading from Monday morning to Friday (today). Unsurprisingly, Sprint, T-Mobile, and Dish have reacted very favorably to the news.

Consistent with the Wall Street analysis, Verizon stock remains down 2.5 percent over a five-day window while AT&T has been flat over the same period.

How do you separate beta from alpha in an event study?

Philippon argued that after hours trading may be more efficient because it is dominated by hedge funds and includes less “noise trading.” In my opinion, the liquidity effect likely outweighs this factor. Also, it’s unclear why we should assume “smart money” is setting the price in the after hours market but not during regular trading when hedge funds are still active. Sophisticated professional traders often make easy profits by picking off panicked retail investors who only read the headlines. When you see a wild swing in the markets that moderates over time, the wild swing is probably the noise and the moderation is probably the signal.

And, as Karl Smith noted, since the after hours market is thin, price moves in individual stocks might reflect changes in the broader stock market (“beta”) more than changes due to new company-specific information (“alpha”). Here are the last five days for e-mini S&P 500 futures, which track the broader market and are traded after hours.

The market trended up on Monday night and was flat on Tuesday. This slightly positive macro environment means we would need to adjust the returns downward for AT&T and Verizon. Of course, this is counter to Philippon’s conjecture that the merger decision would increase their stock prices. But to be clear, these changes are so minuscule in percentage terms that this adjustment wouldn’t make much of a difference in this case.
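Concretely, separating alpha from beta just means netting out the broad market’s move scaled by the stock’s sensitivity to it. A toy sketch (the beta and return values are illustrative placeholders, not estimates):

```python
def company_specific_move(stock_return, market_return, beta):
    """Strip out the market-wide component ("beta") of a price move,
    leaving the company-specific component ("alpha")."""
    return stock_return - beta * market_return

# Purely illustrative: a +2.0% overnight move against a +0.3% market move,
# assuming a beta of 0.7, leaves roughly +1.8% of firm-specific news.
print(company_specific_move(0.020, 0.003, 0.7))  # ~0.0179
```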

Lastly, let’s see what we can learn from a similar historical episode in the stock market.

The parallel to the 2016 presidential election

The type of reversal we saw in AT&T and Verizon is not unprecedented. Some commenters said the pattern reminded them of the market reaction to Trump’s election in 2016.

Much like the T-Mobile/Sprint merger news, the “event” in 2016 was not a single moment in time. It began around 9 PM Tuesday night when Trump started to overperform in early state results. Over the course of the next three hours, S&P 500 futures contracts fell about 5 percent — an enormous drop in such a short period of time. If Philippon had tried to estimate the “Trump effect” in the same manner he did the T-Mobile/Sprint case, he would have concluded that a Trump presidency would reduce aggregate future profits by about 5 percent relative to a Clinton presidency.

But, as you can see in the chart above, if we widen the aperture of the event study to include the hours past midnight, the story flips. Markets started to bounce back even before Trump took the stage to make his victory speech. The themes of his speech were widely regarded as reassuring for markets, which further pared losses from earlier in the night. When regular trading hours resumed on Wednesday, the markets decided a Trump presidency would be very good for certain sectors of the economy, particularly finance, energy, biotech, and private prisons. By the end of the day, the stock market finished up about a percentage point from where it closed prior to the election — near all-time highs.

Maybe this is more noise than signal?

As a few others pointed out, these relatively small moves in AT&T and Verizon (less than 3 percent in either direction) may just be noise. That’s certainly possible given the magnitude of the changes. Contra Philippon, I think the methodology in question is too weak to rule out the pro-competitive theory of the case, i.e., that the new merged entity would be a stronger competitor to take on industry leaders AT&T and Verizon. We need much more robust and varied evidence before we can call anything “bogus.” Of course, that means this event study is not sufficient to prove the pro-competitive theory of the case, either.

Olivier Blanchard, a former chief economist of the IMF, shared Philippon’s thread on Twitter and added this comment above: “The beauty of the argument. Simple hypothesis, simple test, clear conclusion.”

If only things were so simple.

The DOJ and 20 state AGs sued Microsoft on May 18, 1998 for unlawful maintenance of its monopoly position in the PC market. The government accused the desktop giant of tying its operating system (Windows) and its web browser (Internet Explorer). Microsoft had indeed become dominant in the PC market by the late 1980s:

Source: Asymco

But after the introduction of smartphones in the mid-2000s, Microsoft’s market share of personal computing units (including PCs, smartphones, and tablets) collapsed:

Source: Benedict Evans

Steven Sinofsky pointed out why this was a classic case of disruptive innovation rather than sustaining innovation: “Google and Microsoft were competitors but only by virtue of being tech companies hiring engineers. After that, almost nothing about what was being made or sold was similar even if things could ultimately be viewed as substitutes. That is literally the definition of innovation.”

Browsers

Microsoft grew to dominance during the PC era by bundling its desktop operating system (Windows) with its productivity software (Office) and modularizing the hardware providers. By 1995, Bill Gates had realized that the internet was the next big thing, calling it “The Internet Tidal Wave” in a famous internal memo. Gates feared that the browser would function as “middleware” and disintermediate Microsoft from its relationship with the end-user. At the time, Netscape Navigator was gaining market share from the first browser to popularize the internet, Mosaic (so-named because it supported a multitude of protocols).

Later that same year, Microsoft released its own browser, Internet Explorer, which would be bundled with its Windows operating system. Internet Explorer soon grew to dominate the market:

Source: Browser Wars

Steven Sinofsky described how the browser threatened to undermine the Windows platform (emphasis added):

Microsoft saw browsers as a platform threat to Windows. Famously. Browsers though were an app — running everywhere, distributed everywhere. Microsoft chose to compete as though browsing was on par with Windows (i.e., substitutes).

That meant doing things like IBM did — finding holes in distribution where browsers could “sneak” in (e.g., OEM deals) and seeing how to make Microsoft browser work best and only with Windows. Sound familiar? It does to me.

Imagine (some of us did) a world instead where Microsoft would have built a browser that was an app distributed everywhere, running everywhere. That would have been a very different strategy. One that some imagined, but not when Windows was central.

Showing how much your own gravity as a big company can make even obvious steps strategically weak: Microsoft knew browsers had to be cross-platform so it built Internet Explorer for Mac and Unix. Neat. But wait, the main strategic differentiator for Internet Explorer was ActiveX which was clearly Windows only.

So even when trying to compete in a new market the strategy was not going to work technically and customers would immediately know. Either they would ignore the key part of Windows or the key part of x-platform. This is what a big company “master plan” looks like … Active Desktop.

Regulators claimed victory but the loss already happened. But for none of the reasons the writers of history say at least [in my humble opinion]. As a reminder, Microsoft stopped working on Internet Explorer 7 years before Chrome even existed — literally didn’t release a new version for 5+ years.

One of the most important pieces of context for this case is that other browsers were also free for personal use (even if they weren’t bundled with an operating system). At the time, Netscape was free for individuals. Mosaic was free for non-commercial use. Today, Chrome and Firefox are free for all users. Chrome makes money for Google by increasing the value of its ecosystem and serving as a complement for its other products (particularly search). Firefox is able to more than cover its costs by charging Google (and others) to be the default option in its browser. 

By bundling Internet Explorer with Windows for free, Microsoft was arguably charging the market rate. In highly competitive markets, economic theory tells us the price should approach marginal cost — which in software is roughly zero. As James Pethokoukis argued, there are many more reasons to be skeptical about the popular narrative surrounding the Microsoft case. The reasons for doubt range across features, products, and markets, including server operating systems, mobile devices, and search engines. Let’s examine a few of them.

Operating Systems

In a 2007 article for Wired titled “I Blew It on Microsoft,” Lawrence Lessig, a Harvard law professor, admits that his predictions about the future of competition in computer operating systems failed to account for the potential of open-source solutions:

We pro-regulators were making an assumption that history has shown to be completely false: That something as complex as an OS has to be built by a commercial entity. Only crazies imagined that volunteers outside the control of a corporation could successfully create a system over which no one had exclusive command. We knew those crazies. They worked on something called Linux.

According to Web Technology Surveys, as of April 2019, about 70 percent of servers use a Linux-based operating system while the remaining 30 percent use Windows.

Mobile

In 2007, Steve Ballmer believed that Microsoft would be the dominant company in smartphones, saying in an interview with USA Today (emphasis added):

There’s no chance that the iPhone is going to get any significant market share. No chance. It’s a $500 subsidized item. They may make a lot of money. But if you actually take a look at the 1.3 billion phones that get sold, I’d prefer to have our software in 60% or 70% or 80% of them, than I would to have 2% or 3%, which is what Apple might get.

But as Ballmer himself noted in 2013, Microsoft was too committed to the Windows platform to fully pivot its focus to mobile:

If there’s one thing I regret, there was a period in the early 2000s when we were so focused on what we had to do around Windows that we weren’t able to redeploy talent to the new device form factor called the phone.

This is another classic example of the innovator’s dilemma. Microsoft enjoyed high profit margins in its Windows business, which caused the company to underrate the significance of the shift from PCs to smartphones.

Search

To further drive home how dependent Microsoft was on its legacy products, this 2009 WSJ piece notes that the company had a search engine ad service in 2000 and shut it down to avoid cannibalizing its core business:

Nearly a decade ago, early in Mr. Ballmer’s tenure as CEO, Microsoft had its own inner Google and killed it. In 2000, before Google married Web search with advertising, Microsoft had a rudimentary system that did the same, called Keywords, running on the Web. Advertisers began signing up. But Microsoft executives, in part fearing the company would cannibalize other revenue streams, shut it down after two months.

Ben Thompson says we should wonder if the case against Microsoft was a complete waste of everyone’s time (and money): 

In short, to cite Microsoft as a reason for antitrust action against Google in particular is to get history completely wrong: Google would have emerged with or without antitrust action against Microsoft; if anything the real question is whether or not Google’s emergence shows that the Microsoft lawsuit was a waste of time and money.

The most obvious implications of the Microsoft case were negative: (1) PCs became bloated with “crapware”; (2) competition in the browser market failed to materialize for many years; (3) PCs were less safe because Microsoft couldn’t bundle security software; and (4) some PC users missed out on using first-party software from Microsoft because it couldn’t be bundled with Windows. When weighed against these large costs, the supposed benefits pale in comparison.

Conclusion

In all three cases I’ve discussed in this series — AT&T, IBM, and Microsoft — the real story was not that antitrust enforcers divined the perfect time to break up — or regulate — the dominant tech company. The real story was that slow and then sudden technological change outpaced the organizational inertia of incumbents, permanently displacing the former tech giants from their dominant position in the tech ecosystem. 

The next paradigm shift will be near-impossible to predict. Anyone who knew which technology would win — and when — could make far more money implementing that insight than by playing pundit in the media. Regardless of whether the future winner is Google, Facebook, Amazon, Apple, Microsoft, or some unknown startup, antitrust enforcers should remember that the proper goal of public policy in this domain is to maximize total innovation — from firms both large and small. Fetishizing innovation by small companies — and using law enforcement to harass big companies in the hope of an indirect benefit to competition — will make us all worse off in the long run.

The case against AT&T began in 1974. The government alleged that AT&T had monopolized the market for local and long-distance telephone service as well as telephone equipment. In 1982, the company entered into a consent decree to be broken up into eight pieces (the “Baby Bells” plus the parent company), which was completed in 1984. As a remedy, the government required the company to divest its local operating companies and guarantee equal access to all long-distance and information service providers (ISPs).

Source: Mohanram & Nanda

As the chart above shows, the divestiture broke up AT&T’s national monopoly into seven regional monopolies. In general, modern antitrust analysis focuses on the local product market (because that’s the relevant level for consumer decisions). In hindsight, how did breaking up a national monopoly into seven regional monopolies increase consumer choice? It’s also important to note that, prior to its structural breakup, AT&T was a government-granted monopoly regulated by the FCC. Any antitrust remedy should be analyzed in light of the company’s unique relationship with regulators.

Breaking up one national monopoly into seven regional monopolies is not an effective way to boost innovation. And there are economies of scale and network effects to be gained by owning a national network to serve a national market. In the case of AT&T, those economic incentives are why the Baby Bells forged themselves back together in the decades following the breakup.

Source: WSJ

As Clifford Winston and Robert Crandall noted:

Appearing to put Ma Bell back together again may embarrass the trustbusters, but it should not concern American consumers who, in two decades since the breakup, are overwhelmed with competitive options to provide whatever communications services they desire.

Moreover, according to Crandall & Winston (2003), the lower prices following the breakup of AT&T weren’t due to the structural remedy at all (emphasis added):

But on closer examination, the rise in competition and lower long-distance prices are attributable to just one aspect of the 1982 decree; specifically, a requirement that the Bell companies modify their switching facilities to provide equal access to all long-distance carriers. The Federal Communications Commission (FCC) could have promulgated such a requirement without the intervention of the antitrust authorities. For example, the Canadian regulatory commission imposed equal access on its vertically integrated carriers, including Bell Canada, in 1993. As a result, long-distance competition developed much more rapidly in Canada than it had in the United States (Crandall and Hazlett, 2001). The FCC, however, was trying to block MCI from competing in ordinary long-distance services when the AT&T case was filed by the Department of Justice in 1974. In contrast to Canadian and more recent European experience, a lengthy antitrust battle and a disruptive vertical dissolution were required in the U.S. market to offset the FCC’s anti-competitive policies. Thus, antitrust policy did not triumph in this case over restrictive practices by a monopolist to block competition, but instead it overcame anticompetitive policies by a federal regulatory agency.

A quick look at the data on telephone service in the US, EU, and Canada shows that the latter two were able to achieve similar reductions in price without breaking up their national providers.

Source: Crandall & Jackson (2011)

The paradigm shift from wireline to wireless

The technological revolution spurred by the transition from wireline telephone service to wireless telephone service shook up the telecommunications industry in the 1990s. The rapid change caught even some of the smartest players by surprise. In 1980, the management consulting firm McKinsey and Co. produced a report for AT&T predicting how large the cellular market might become by the year 2000. Their forecast said that 900,000 cell phones would be in use. The actual number was more than 109 million.

Along with the rise of broadband, the transition to wireless technology led to an explosion in investment. In contrast, the breakup of AT&T in 1984 had no discernible effect on the trend in industry investment.

The lesson for antitrust enforcers is clear: breaking up national monopolies into regional monopolies is no remedy. In certain cases, mandating equal access to critical networks may be warranted. Most of all, technology shocks will upend industries in ways that regulators — and dominant incumbents — fail to predict.

The Department of Justice began its antitrust case against IBM on January 17, 1969. The DOJ sued under the Sherman Antitrust Act, claiming IBM tried to monopolize the market for “general-purpose digital computers.” The case lasted almost thirteen years, ending on January 8, 1982 when Assistant Attorney General William Baxter declared the case to be “without merit” and dropped the charges. 

The case lasted so long, and expanded in scope so much, that by the time the trial began, “more than half of the practices the government raised as antitrust violations were related to products that did not exist in 1969.” Baltimore law professor Robert Lande said it was “the largest legal case of any kind ever filed.” Yale law professor Robert Bork called it “the antitrust division’s Vietnam.”

As the case dragged on, IBM was faced with increasingly perverse incentives. As NYU law professor Richard Epstein pointed out (emphasis added), 

Oddly enough, IBM was able to strengthen its antitrust-related legal position by reducing its market share, which it achieved through raising prices. When the suit was discontinued that share had fallen dramatically since 1969 from about 50 percent of the market to 37 percent in 1982. Only after the government suit ended did IBM lower its prices in order to increase market share.

Source: Levy & Welzer

In an interview with Vox, Tim Wu claimed that without the IBM case, Apple wouldn’t exist and we might still be using mainframe computers (emphasis added):

Vox: You said that Apple wouldn’t exist without the IBM case.

Wu: Yeah, I did say that. The case against IBM took 13 years and we didn’t get a verdict but in that time, there was the “policeman at the elbow” effect. IBM was once an all-powerful company. It’s not clear that we would have had an independent software industry, or that it would have developed that quickly, the idea of software as a product, [without this case]. That was one of the immediate benefits of that excavation.

And then the other big one is that it gave a lot of room for the personal computer to get started, and the software that surrounds the personal computer — two companies came in, Apple and Microsoft. They were sort of born in the wake of the IBM lawsuit. You know they were smart guys, but people did need the pressure off their backs.

Nobody is going to start in the shadow of Facebook and get anywhere. Snap’s been the best, but how are they doing? They’ve been halted. I think it’s a lot harder to imagine this revolutionary stuff that happened in the ’80s. If IBM had been completely unwatched by regulators, by enforcement, doing whatever they wanted, I think IBM would have held on and maybe we’d still be using mainframes, or something — a very different situation.

Steven Sinofsky, a former Microsoft executive and current Andreessen Horowitz board partner, had a different take on the matter, attributing IBM’s (belated) success in PCs to its utter failure in minicomputers (emphasis added):

IBM chose to prevent third parties from interoperating with mainframes sometimes at crazy levels (punch card formats). And then chose to defend until the end their business model of leasing … The minicomputer was a direct threat not because of technology but because of those attributes. I’ve heard people say IBM went into PCs because the antitrust loss caused them to look for growth or something. Ha. PCs were spun up because IBM was losing Minis. But everything about the PC was almost a fluke organizationally and strategically. The story of IBM regulation is told as though PCs exist because of the case.

The more likely story is that IBM got swamped by the paradigm shift from mainframes to PCs. IBM was dominant in mainframe computers which were sold to the government and large enterprises. Microsoft, Intel, and other leaders in the PC market sold to small businesses and consumers, which required an entirely different business model than IBM was structured to implement.

ABB – Always Be Bundling (Or Unbundling)

“There’s only two ways I know of to make money: bundling and unbundling.” – Jim Barksdale

In 1969, IBM unbundled its software and services from hardware sales. As many industry observers note, this action precipitated the rise of the independent software development industry. But would this have happened regardless of whether there was an ongoing antitrust case? Given that bundling and unbundling is ubiquitous in the history of the computer industry, the answer is likely yes.

As the following charts show, IBM first created an integrated solution in the mainframe market, controlling everything from raw materials and equipment to distribution and service. When PCs disrupted mainframes, the entire value chain was unbundled. Later, Microsoft bundled its operating system with applications software. 

Source: Clayton Christensen

The first smartphone to disrupt the PC market was the Apple iPhone — an integrated solution. And once the technology became “good enough” to meet the average consumer’s needs, Google modularized everything except the operating system (Android) and the app store (Google Play).

Source: SlashData
Source: Jake Nielson

Another key prong in Tim Wu’s argument that the government served as an effective “policeman at the elbow” in the IBM case is that the company adopted an open model when it entered the PC market and did not require an exclusive license from Microsoft to use its operating system. But exclusivity is only one term in a contract negotiation. In an interview with Playboy magazine in 1994, Bill Gates explained how he was able to secure favorable terms from IBM (emphasis added):

Our restricting IBM’s ability to compete with us in licensing MS-DOS to other computer makers was the key point of the negotiation. We wanted to make sure only we could license it. We did the deal with them at a fairly low price, hoping that would help popularize it. Then we could make our move because we insisted that all other business stay with us. We knew that good IBM products are usually cloned, so it didn’t take a rocket scientist to figure out that eventually we could license DOS to others. We knew that if we were ever going to make a lot of money on DOS it was going to come from the compatible guys, not from IBM. They paid us a fixed fee for DOS. We didn’t get a royalty, even though we did make some money on the deal. Other people paid a royalty. So it was always advantageous to us, the market grew and other hardware guys were able to sell units.

In this version of the story, IBM refrained from demanding an exclusive license from Microsoft not because it was fearful of antitrust enforcers but because Microsoft made significant concessions on price and capped its upside by agreeing to a fixed fee rather than a royalty. These economic and technical explanations for why IBM wasn’t able to leverage its dominant position in mainframes into the PC market are more consistent with the evidence than Wu’s “policeman at the elbow” theory.

In my next post, I will discuss the other major antitrust case that came to an end in 1982: AT&T.

Big Tech continues to be mired in “a very antitrust situation,” as President Trump put it in 2018. Antitrust advocates have zeroed in on Facebook, Google, Apple, and Amazon as their primary targets. These advocates justify their proposals by pointing to the trio of antitrust cases against IBM, AT&T, and Microsoft. Elizabeth Warren, in announcing her plan to break up the tech giants, highlighted the case against Microsoft:

The government’s antitrust case against Microsoft helped clear a path for Internet companies like Google and Facebook to emerge. The story demonstrates why promoting competition is so important: it allows new, groundbreaking companies to grow and thrive — which pushes everyone in the marketplace to offer better products and services.

Tim Wu, a law professor at Columbia University, summarized the overarching narrative recently (emphasis added):

If there is one thing I’d like the tech world to understand better, it is that the trilogy of antitrust suits against IBM, AT&T, and Microsoft played a major role in making the United States the world’s preeminent tech economy.

The IBM-AT&T-Microsoft trilogy of antitrust cases each helped prevent major monopolists from killing small firms and asserting control of the future (of the 80s, 90s, and 00s, respectively).

A list of products and firms that owe at least something to the IBM-AT&T-Microsoft trilogy.

(1) IBM: software as product, Apple, Microsoft, Intel, Seagate, Sun, Dell, Compaq

(2) AT&T: Modems, ISPs, AOL, the Internet and Web industries

(3) Microsoft: Google, Facebook, Amazon

Wu argues that by breaking up the current crop of dominant tech companies, we can sow the seeds for the next one. But this reasoning depends on an incorrect — albeit increasingly popular — reading of the history of the tech industry. Entrepreneurs take purposeful action to produce innovative products for an underserved segment of the market. They also respond to broader technological change by integrating or modularizing different products in their market. This bundling and unbundling is a never-ending process.

Whether the government distracts a dominant incumbent with a failed lawsuit (e.g., IBM), imposes an ineffective conduct remedy (e.g., Microsoft), or breaks up a government-granted national monopoly into regional monopolies (e.g., AT&T), the dynamic nature of competition between tech companies will far outweigh the effects of antitrust enforcers tilting at windmills.

In a series of posts for Truth on the Market, I will review the cases against IBM, AT&T, and Microsoft and discuss what we can learn from them. In this introductory article, I will explain the relevant concepts necessary for understanding the history of market competition in the tech industry.

Competition for the Market

In industries like tech that tend toward “winner takes most,” it’s important to distinguish between competition during the market maturation phase — when no clear winner has emerged and the technology has yet to be widely adopted — and competition after the technology has been diffused in the economy. Benedict Evans recently explained how this cycle works (emphasis added):

When a market is being created, people compete at doing the same thing better. Windows versus Mac. Office versus Lotus. MySpace versus Facebook. Eventually, someone wins, and no-one else can get in. The market opportunity has closed. Be, NeXT/Path were too late. Monopoly!

But then the winner is overtaken by something completely different that makes it irrelevant. PCs overtook mainframes. HTML/LAMP overtook Win32. iOS & Android overtook Windows. Google overtook Microsoft.

Tech antitrust too often wants to insert a competitor to the winning monopolist, when it’s too late. Meanwhile, the monopolist is made irrelevant by something that comes from totally outside the entire conversation and owes nothing to any antitrust interventions.

In antitrust parlance, this is known as competing for the market. By contrast, in more static industries where the playing field doesn’t shift so radically and the market doesn’t tip toward “winner takes most,” firms compete within the market. What Benedict Evans refers to as “something completely different” is often a disruptive product.

Disruptive Innovation

As Clay Christensen explains in the Innovator’s Dilemma, a disruptive product is one that is low-quality (but fast-improving), low-margin, and targeted at an underserved segment of the market. Initially, it is rational for the incumbent firms to ignore the disruptive technology and focus on improving their legacy technology to serve high-margin customers. But once the disruptive technology improves to the point it can serve the whole market, it’s too late for the incumbent to switch technologies and catch up. This process looks like overlapping S-curves:

Source: Max Mayblum

We see these S-curves in the technology industry all the time:

Source: Benedict Evans
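For the quantitatively inclined, the overlapping S-curve picture can be generated from a simple logistic model. A toy sketch (all parameters are purely illustrative):

```python
import math

def s_curve(t, midpoint, ceiling=1.0, steepness=1.0):
    """Logistic performance curve rising from ~0 toward `ceiling`."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

# The incumbent matures early against a lower performance ceiling; the
# disruptor starts later and worse, improves faster, and overtakes it.
for t in range(0, 21, 4):
    incumbent = s_curve(t, midpoint=4, ceiling=1.0)
    disruptor = s_curve(t, midpoint=12, ceiling=2.0, steepness=1.2)
    print(t, round(incumbent, 2), round(disruptor, 2))
```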

As Christensen explains in the Innovator’s Solution, consumer needs can be thought of as “jobs-to-be-done.” Early on, when a product is just good enough to get a job done, firms compete on product quality and pursue an integrated strategy — designing, manufacturing, and distributing the product in-house. As the underlying technology improves and the product overshoots the needs of the jobs-to-be-done, products become modular and the primary dimension of competition moves to cost and convenience. As this cycle repeats itself, companies are either bundling different modules together to create more integrated products or unbundling integrated products to create more modular products.

Moore’s Law

Source: Our World in Data

Moore’s Law is the gasoline that gets poured on the fire of technology cycles. Though this “law” is nothing more than the observation that “the number of transistors in a dense integrated circuit doubles about every two years,” the implications for dynamic competition are difficult to overstate. As Bill Gates explained in a 1994 interview with Playboy magazine, Moore’s Law means that computer power is essentially “free” from an engineering perspective:

When you have the microprocessor doubling in power every two years, in a sense you can think of computer power as almost free. So you ask, Why be in the business of making something that’s almost free? What is the scarce resource? What is it that limits being able to get value out of that infinite computing power? Software.

Exponentially smaller integrated circuits can be combined with new user interfaces and networks to create new computer classes, which themselves represent the opportunity for disruption.
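The compounding involved is easy to underestimate. A one-line sketch shows that doubling every two years amounts to roughly a thousand-fold gain every two decades:

```python
def moores_law_multiple(years, doubling_period=2.0):
    """Growth multiple implied by a fixed doubling period."""
    return 2 ** (years / doubling_period)

print(moores_law_multiple(20))  # 1024.0 -- about 1,000x in twenty years
print(moores_law_multiple(40))  # 1048576.0 -- about a million-fold in forty
```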

Bell’s Law of Computer Classes

Source: Brad Campbell

A corollary to Moore’s Law, Bell’s law of computer classes predicts that “roughly every decade a new, lower priced computer class forms based on a new programming platform, network, and interface resulting in new usage and the establishment of a new industry.” Originally formulated in 1972, the law has since played out in the birth of mainframes, minicomputers, workstations, personal computers, laptops, smartphones, and the Internet of Things.

Understanding these concepts — competition for the market, disruptive innovation, Moore’s Law, and Bell’s Law of Computer Classes — will be crucial for understanding the true effects (or lack thereof) of the antitrust cases against IBM, AT&T, and Microsoft. In my next post, I will look at the DOJ’s (ultimately unsuccessful) 13-year antitrust battle with IBM.

Qualcomm is currently in the midst of a high-profile antitrust case against the FTC. At the heart of these proceedings lies Qualcomm’s so-called “No License, No Chips” (NLNC) policy, whereby it purportedly refuses to sell chips to OEMs that have not concluded a license agreement covering its underlying intellectual property. According to the FTC and Qualcomm’s opponents, this ultimately thwarts competition in the chipset market.

Against this backdrop, Mark Lemley, Douglas Melamed, and Steven Salop penned a high-profile amicus brief supporting the FTC’s stance. 

We responded to their brief in a Truth on the Market blog post, and this led to a series of blog exchanges between the amici and ourselves. 

This post summarizes these exchanges.

1. Amicus brief supporting the FTC’s stance, and ICLE brief in support of Qualcomm’s position

The starting point of this blog exchange was an amicus brief written by Mark Lemley, Douglas Melamed, and Steven Salop (“the amici”), and signed by 40 law and economics scholars.

The amici made two key normative claims:

  • Qualcomm’s no license, no chips policy is unlawful under well-established antitrust principles: 
    “Qualcomm uses the NLNC policy to make it more expensive for OEMs to purchase competitors’ chipsets, and thereby disadvantages rivals and creates artificial barriers to entry and competition in the chipset markets.”
  • Qualcomm’s refusal to license chip-set rivals reinforces the no license, no chips policy and violates the antitrust laws:
    “Qualcomm’s refusal to license chipmakers is also unlawful, in part because it bolsters the NLNC policy. In addition, Qualcomm’s refusal to license chipmakers increases the costs of using rival chipsets, excludes rivals, and raises barriers to entry even if NLNC is not itself illegal.”

It is important to note that ICLE also filed an amicus brief in these proceedings. Contrary to the amici, ICLE’s scholars concluded that Qualcomm’s behavior did not raise any antitrust concerns and was ultimately a matter of contract law.

2. ICLE response to the Lemley, Melamed and Salop Amicus brief.

We responded to the amici in a first blog post.

The post argued that the amici failed to convincingly show that Qualcomm’s NLNC policy was exclusionary. In particular, we highlighted two important factors.

  • First, Qualcomm could not use its chipset position and NLNC policy to avert the threat of FRAND litigation and thereby extract supracompetitive royalties:
    “Qualcomm will be unable to charge a total price that is significantly above the price of rivals’ chips, plus the FRAND rate for its IP (and expected litigation costs).”
  • Second, Qualcomm’s behavior did not appear to fall within standard patterns of strategic behavior:
    “The amici attempt to overcome this weakness by implicitly framing their argument in terms of exclusivity, strategic entry deterrence, and tying […]. But none of these arguments totally overcomes the flaw in their reasoning.”

3. Amici’s counterargument 

The amici wrote a thoughtful response to our post. Their piece rested on two main arguments:

  • The amici underlined that their theory of anticompetitive harm did not imply any form of profit sacrifice on Qualcomm’s part (in the chip segment):
    Manne and Auer seem to think that the concern with the no license/no chips policy is that it enables inflated patent royalties to subsidize a profit sacrifice in chip sales, as if the issue were predatory pricing in chips.  But there is no such sacrifice.
  • The deleterious effects of Qualcomm’s behavior were merely a function of its NLNC policy and strong chipset position. In conjunction, these two factors deterred OEMs from pursuing FRAND litigation:
    Qualcomm is able to charge more than $2 for the license only because it uses the power of its chip monopoly to coerce the OEMs to give up the option of negotiating in light of the otherwise applicable constraints on the royalties it can charge.

4. ICLE rebuttal

We then responded to the amici with the following points:

  • We agreed that it would be a problem if Qualcomm could prevent OEMs from negotiating license agreements in the shadow of FRAND litigation:
    “The critical question is whether there is a realistic threat of litigation to constrain the royalties commanded by Qualcomm (we believe that Lemley et al. agree with us on this point).”
  • However, Qualcomm’s behavior did not preclude OEMs from pursuing this type of strategy:
    “We believe the following facts support our assertion:
    OEMs have pursued various litigation strategies in order to obtain lower rates on Qualcomm’s IP. […]
    For the most part, Qualcomm’s threats to cut off chip supplies were just that: threats. […]
    OEMs also wield powerful threats. […]
    Qualcomm’s chipsets might no longer be “must-buys” in the future.”

5. Amici’s surrebuttal

The amici sent us a final response (reproduced here in full):

In their original post, Manne and Auer argued that the antitrust argument against Qualcomm’s no license/no chips policy was based on bad economics and bad law.  They now seem to have abandoned that argument and claim instead – contrary to the extensive factual findings of the district court – that, while Qualcomm threatened to cut off chips, it was a paper tiger that OEMs could, and knew they could, ignore.  The implication is that the Ninth Circuit should affirm the district court on the no license/ no chips issue unless it sets aside the court’s fact findings.  That seems like agreement with the position of our amicus brief.

We will not in this post review the huge factual record.  We do note, however, that Manne and Auer cite in support of their factual argument only that 3 industry giants brought and then settled litigation against Qualcomm.  But all 3 brought antitrust litigation; their doing so hardly proves that contract litigation or what Manne and Auer call “holdout” were viable options for anyone, much less for smaller OEMs.  The fact that Qualcomm found it necessary to actually cut off only one OEM – and then it only took the OEM only 7 days to capitulate – certainly does not prove that Qualcomm’s threats lacked credibility.   Notably, Manne and Auer do not claim that any OEMs bought chips from competitors of Qualcomm (although Apple bought some chips from Intel for a short while). No license/no chips appears to have been a successful, coercive policy, not an easily ignored threat.                                                                                                                                              

6. Concluding remarks

First and foremost, we would like to thank the amici for thoughtfully engaging with us. This is what the law & economics tradition is all about: moving the ball forward by taking part in vigorous, multidisciplinary debates.

With that said, we do feel compelled to leave readers with two short remarks. 

First, contrary to what the amici claim, we believe that our position has remained the same throughout these debates. 

Second, and more importantly, we think that everyone agrees that the critical question is whether OEMs were prevented from negotiating licenses in the shadow of FRAND litigation. 

We leave it up to Truth on the Market readers to judge which side of this debate is correct.

[This guest post is authored by Mark A. Lemley, Professor of Law and Director of the Program in Law, Science & Technology at Stanford Law School; A. Douglas Melamed, Professor of the Practice of Law at Stanford Law School and former Senior Vice President and General Counsel of Intel from 2009 to 2014; and Steven Salop, Professor of Economics and Law at Georgetown Law School. It is part of an ongoing debate between the authors, on one side, and Geoffrey Manne and Dirk Auer, on the other, and has been integrated into our ongoing series on the FTC v. Qualcomm case, where all of the posts in this exchange are collected.]

In their original post, Manne and Auer argued that the antitrust argument against Qualcomm’s no license/no chips policy was based on bad economics and bad law. They now seem to have abandoned that argument and claim instead – contrary to the extensive factual findings of the district court – that, while Qualcomm threatened to cut off chips, it was a paper tiger that OEMs could, and knew they could, ignore. The implication is that the Ninth Circuit should affirm the district court on the no license/ no chips issue unless it sets aside the court’s fact findings. That seems like agreement with the position of our amicus brief.

We will not in this post review the huge factual record. We do note, however, that Manne and Auer cite in support of their factual argument only that 3 industry giants brought and then settled litigation against Qualcomm. But all 3 brought antitrust litigation; their doing so hardly proves that contract litigation or what Manne and Auer call “holdout” were viable options for anyone, much less for smaller OEMs. The fact that Qualcomm found it necessary to actually cut off only one OEM – and then it only took the OEM only 7 days to capitulate – certainly does not prove that Qualcomm’s threats lacked credibility. Notably, Manne and Auer do not claim that any OEMs bought chips from competitors of Qualcomm (although Apple bought some chips from Intel for a short while). No license/no chips appears to have been a successful, coercive policy, not an easily ignored threat.