Archives For antitrust

The goal of US antitrust law is to ensure that competition continues to produce positive results for consumers and the economy in general. To exactly that effect, we published a letter, co-signed by twenty-three of the U.S.’s leading economists, legal scholars, and practitioners, including one winner of the Nobel Prize in economics (full list of signatories here), urging the House Judiciary Committee, in its review of the state of antitrust law, to reject calls for radical upheaval of antitrust law that would, among other things, undermine the independence and neutrality of US antitrust law.

A critical part of maintaining independence and neutrality in the administration of antitrust is ensuring that it is insulated from politics. Unfortunately, this view is under attack from all sides. The President sees widespread misconduct among US tech firms that he believes are controlled by the “radical left” and is, apparently, happy to use whatever tools are at hand to chasten them. 

Meanwhile, Senator Klobuchar has claimed, without any real evidence, that the mooted Uber/Grubhub merger is simply about monopolisation of the market, and not, for example, related to the huge changes that businesses like this are facing because of the Covid shutdown.

Both statements challenge the principle that the rule of law, antitrust included, must be administered in a politically neutral way.

Contrary to the claims made by President Trump, Sen. Klobuchar, and some of those who have testified before the Committee, our letter asserts that the evidence and economic theory are clear: existing antitrust law is doing a good job of promoting competition and consumer welfare in digital markets and the economy more broadly. It concludes that the Committee should focus on reforms that improve antitrust at the margin, not changes that throw out decades of practice and precedent.

The letter argues that:

  1. The American economy—including the digital sector—is competitive, innovative, and serves consumers well, contrary to how it is sometimes portrayed in the public debate. 
  2. Structural changes in the economy have resulted from increased competition, and increases in national concentration have generally happened because competition at the local level has intensified and local concentration has fallen.
  3. Lax antitrust enforcement has not allowed systematic increases in market power, and the evidence simply does not support the idea that antitrust enforcement has weakened in recent decades.
  4. Existing antitrust law is adequate for protecting competition in the modern economy, and built up through years of careful case-by-case scrutiny. Calls to throw out decades of precedent to achieve an antitrust “Year Zero” would throw away a huge body of learning and deliberation.
  5. History teaches that discarding the modern approach to antitrust would harm consumers, returning us to an era in which per se rules prohibited the use of economic analysis and fact-based defences of business practices.
  6. Common sense reforms should be pursued to improve antitrust enforcement, and the reforms proposed in the letter could help to improve competition and consumer outcomes in the United States without overturning the whole system.

The reforms suggested include measures to increase transparency at the DOJ and FTC, greater scope for antitrust challenges against state-sponsored monopolies, stronger penalties for criminal cartel conduct, and more agency resources for protecting workers from anti-competitive wage-fixing agreements between businesses. These are suggestions for the House Committee to consider and are not supported by all the letter’s signatories.

Some of the arguments in the letter are set out in greater detail in the ICLE’s own submission to the Committee, which goes into detail about the nature of competition in modern digital markets and in traditional markets that have been changed because of the adoption of digital technologies. 

The full letter is here.

On Monday evening, around 6:00 PM Eastern Standard Time, news leaked that the United States District Court for the Southern District of New York had decided to allow the T-Mobile/Sprint merger to go through, giving the companies a victory over a group of state attorneys general trying to block the deal.

Thomas Philippon, a professor of finance at NYU, used this opportunity to conduct a quick-and-dirty event study on Twitter:

Short thread on T-Mobile/Sprint merger. There were 2 theories:

(A) It’s a 4-to-3 merger that will lower competition and increase markups.

(B) The new merged entity will be able to take on the industry leaders AT&T and Verizon.

(A) and (B) make clear predictions. (A) predicts the merger is good news for AT&T and Verizon’s shareholders. (B) predicts the merger is bad news for AT&T and Verizon’s shareholders. The news leaked at 6pm that the judge would approve the merger. Sprint went up 60% as expected. Let’s test the theories. 

Here is Verizon’s after trading price: Up 2.5%.

Here is ATT after hours: Up 2%.

Conclusion 1: Theory B is bogus, and the merger is a transfer of at least 2%*$280B (AT&T) + 2.5%*$240B (Verizon) = $11.6 billion from the pockets of consumers to the pockets of shareholders. 

Conclusion 2: I and others have argued for a long time that theory B was bogus; this was anticipated. But lobbying is very effective indeed… 

Conclusion 3: US consumers already pay two or three times more than those of other rich countries for their cell phone plans. The gap will only increase.

And just a reminder: these firms invest 0% of the excess profits. 

Philippon published his thread about 40 minutes prior to markets opening for regular trading on Tuesday morning. The Court’s official decision was published shortly before markets opened as well. By the time regular trading began at 9:30 AM, Verizon had completely reversed its overnight increase and opened down from the previous day’s close. While AT&T opened up slightly, it too had given back most of its initial gains. By 11:00 AM, AT&T was also in the red. When markets closed at 4:00 PM on Tuesday, Verizon was down more than 2.5 percent and AT&T was down just under 0.5 percent.

Does this mean that, in fact, theory A is the “bogus” one? Was the T-Mobile/Sprint merger decision actually a transfer of “$7.4 billion from the pockets of shareholders to the pockets of consumers,” as I suggested in my own tongue-in-cheek thread later that day? In this post, I will look at the factors that go into conducting a proper event study.  
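For what it’s worth, both back-of-envelope figures check out arithmetically. Here is a quick sketch using the approximate market caps assumed in the original thread (roughly $280B for AT&T and $240B for Verizon):

```python
# Back-of-envelope "transfer" arithmetic from the two competing threads.
# Market caps are the rough figures assumed in the original thread.
att_cap = 280e9   # AT&T market cap (approximate)
vz_cap = 240e9    # Verizon market cap (approximate)

# Philippon's overnight after-hours figures: AT&T +2%, Verizon +2.5%
overnight = 0.02 * att_cap + 0.025 * vz_cap

# Tuesday's regular-hours close: AT&T -0.5%, Verizon -2.5%
next_day = 0.005 * att_cap + 0.025 * vz_cap

print(f"Overnight figure: ${overnight / 1e9:.1f}B")  # $11.6B
print(f"Next-day figure:  ${next_day / 1e9:.1f}B")   # $7.4B
```

The arithmetic is symmetric, which is exactly the point: the same crude methodology "proves" opposite conclusions depending on which few hours of trading you choose.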

What’s the appropriate window for a merger event study?

In a response to my thread, Philippon said, “I would argue that an event study is best done at the time of the event, not 16 hours after. Leak of merger approval 6 pm Monday. AT&T up 2 percent immediately. AT&T still up at open Tuesday. Then comes down at 10am.” I don’t disagree that “an event study is best done at the time of the event.” In this case, however, we need to consider two important details: When was the “event” exactly, and what were the conditions in the financial markets at that time?

This event did not begin and end with the leak on Monday night. The official announcement came Tuesday morning when the full text of the decision was published. This additional information answered a few questions for market participants: 

  • Were the initial news reports true?
  • Based on the text of the decision, what is the likelihood it gets reversed on appeal?
    • Wall Street: “Not all analysts are convinced this story is over just yet. In a note released immediately after the judge’s verdict, Nomura analyst Jeff Kvaal warned that ‘we expect the state AGs to appeal.’ RBC Capital analyst Jonathan Atkin noted that such an appeal, if filed, could delay closing of the merger by ‘an additional 4-5’ months — potentially delaying closure until September 2020.”
  • Did the Court impose any further remedies or conditions on the merger?

As stock traders digested all the information from the decision, Verizon and AT&T quickly went negative. There is much debate in the academic literature about the appropriate window for event studies on mergers. But the range in question is always one of days or weeks — not a couple hours in after hours markets. A recent paper using the event study methodology analyzed roughly 5,000 mergers and found abnormal returns of about positive one percent for competitors in the relevant market following a merger announcement. Notably for our purposes, this small abnormal return builds in the first few days following a merger announcement and persists for up to 30 days, as shown in the chart below:

As with the other studies the paper cites in its literature review, this particular research design included a window of multiple weeks both before and after the event occurred. When analyzing the T-Mobile/Sprint merger decision, we should similarly expand the window beyond just a few hours of after hours trading.
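The mechanics of such a multi-day event study can be sketched simply. The daily returns below are hypothetical placeholders, and the market-adjusted model (subtracting beta times the market return each day) is a standard simplification, not the exact method of the paper cited above:

```python
# Cumulative abnormal return (CAR) over a multi-day event window.
# All returns below are hypothetical placeholders, not real market data.
def cumulative_abnormal_return(stock_returns, market_returns, beta=1.0):
    """CAR = sum over the window of (stock return - beta * market return)."""
    return sum(r_s - beta * r_m for r_s, r_m in zip(stock_returns, market_returns))

# Hypothetical daily returns for a competitor over a five-day window:
# a pop on the event day followed by a reversal, as in the AT&T/Verizon pattern.
stock = [0.020, -0.025, 0.003, -0.001, 0.001]
market = [0.004, 0.000, 0.002, -0.001, 0.003]

car = cumulative_abnormal_return(stock, market)
print(f"Five-day CAR: {car:+.2%}")  # -1.00%
```

Note how a 2 percent event-day pop can still produce a negative cumulative abnormal return once the window is wide enough to capture the reversal.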

How liquid is the after hours market?

More important than the length of the window, however, is the relative liquidity of the market during that time. The after hours market is much thinner than the regular hours market and may not reflect all available information. For some rough numbers, let’s look at data from NASDAQ. For the last five after hours trading sessions, total volume was between 80 and 100 million shares. Let’s call it 90 million on average. By contrast, the total volume for the last five regular trading hours sessions was between 2 and 2.5 billion shares. Let’s call it 2.25 billion on average. So, the regular trading hours have roughly 25 times as much liquidity as the after hours market.

We could also look at relative liquidity for a single company as opposed to the total market. On Wednesday during regular hours (data is only available for the most recent day), 22.49 million shares of Verizon stock were traded. In after hours trading that same day, fewer than a million shares traded hands. You could change some assumptions and account for other differences in the after market and the regular market when analyzing the data above. But the conclusion remains the same: the regular market is at least an order of magnitude more liquid than the after hours market. This is incredibly important to keep in mind as we compare the after hours price changes (as reported by Philippon) to the price changes during regular trading hours.
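Using the ballpark volume figures cited above (rough assumptions, not precise exchange data), the liquidity gap is easy to quantify:

```python
# Rough liquidity comparison between after-hours and regular-hours trading,
# using the ballpark NASDAQ volumes cited in the text (assumptions).
after_hours_volume = 90e6      # ~90M shares per after-hours session
regular_hours_volume = 2.25e9  # ~2.25B shares per regular session

market_ratio = regular_hours_volume / after_hours_volume
print(f"Market-wide liquidity ratio: ~{market_ratio:.0f}x")  # ~25x

# Same comparison for Verizon alone (Wednesday's figures from the text);
# 1M is used as an upper bound for "fewer than a million" after-hours shares.
vz_regular = 22.49e6
vz_after_hours = 1e6
print(f"Verizon liquidity ratio: at least {vz_regular / vz_after_hours:.0f}x")
```

Either way you slice it, the after hours prices Philippon relied on were set in a market with a small fraction of the depth of regular trading.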

What are Wall Street analysts saying about the decision?

To understand the fundamentals behind these stock moves, it’s useful to see what Wall Street analysts are saying about the merger decision. Prior to the ruling, analysts were already worried about Verizon’s ability to compete with the combined T-Mobile/Sprint entity in the short- and medium-term:

Last week analysts at LightShed Partners wrote that if Verizon wins most of the first available tranche of C-band spectrum, it could deploy 60 MHz in 2022 and see capacity and speed benefits starting in 2023.

“With that timeline, C-Band still does not answer the questions of what spectrum Verizon will be using for the next three years,” wrote LightShed’s Walter Piecyk and Joe Galone at the time.

Following the news of the decision, analysts were clear in delivering their own verdict on how the decision would affect Verizon:

“Verizon looks to us to be a net loser here,” wrote the MoffettNathanson team led by Craig Moffett.

…  

“Approval of the T-Mobile/Sprint deal takes not just one but two spectrum options off the table,” wrote Moffett. “Sprint is now not a seller of 2.5 GHz spectrum, and Dish is not a seller of AWS-4. More than ever, Verizon must now bet on C-band.”

LightShed also pegged Tuesday’s merger ruling as a negative for Verizon.

“It’s not great news for Verizon, given that it removes Sprint and Dish’s spectrum as an alternative, created a new competitor in Dish, and has empowered T-Mobile with the tools to deliver a superior network experience to consumers,” wrote LightShed.

In a note following news reports that the court would side with T-Mobile and Sprint, New Street analyst Jonathan Chaplin wrote, “T-Mobile will be far more disruptive once they have access to Sprint’s spectrum than they have been until now.”

However, analysts were more sanguine about AT&T’s prospects:

AT&T, though, has been busy deploying additional spectrum, both as part of its FirstNet build and to support 5G rollouts. This has seen AT&T increase its amount of deployed spectrum by almost 60%, according to Moffett, which takes “some of the pressure off to respond to New T-Mobile.”

Still, while AT&T may be in a better position on the spectrum front compared to Verizon, it faces the “same competitive dynamics,” Moffett wrote. “For AT&T, the deal is probably a net neutral.”

The quantitative evidence from the stock market seems to agree with the qualitative analysis from the Wall Street research firms. Let’s look at the five-day window of trading from Monday morning to Friday (today). Unsurprisingly, Sprint, T-Mobile, and Dish have reacted very favorably to the news:

Consistent with the Wall Street analysis, Verizon stock remains down 2.5 percent over a five-day window while AT&T has been flat over the same period:

How do you separate beta from alpha in an event study?

Philippon argued that after market trading may be more efficient because it is dominated by hedge funds and includes less “noise trading.” In my opinion, the liquidity effect likely outweighs this factor. Also, it’s unclear why we should assume “smart money” is setting the price in the after hours market but not during regular trading when hedge funds are still active. Sophisticated professional traders often make easy profits by picking off panicked retail investors who only read the headlines. When you see a wild swing in the markets that moderates over time, the wild swing is probably the noise and the moderation is probably the signal.

And, as Karl Smith noted, since the aftermarket is thin, price moves in individual stocks might reflect changes in the broader stock market (“beta”) more than changes due to new company-specific information (“alpha”). Here are the last five days for e-mini S&P 500 futures, which track the broader market and are traded after hours:

The market trended up on Monday night and was flat on Tuesday. This slightly positive macro environment means we would need to adjust the returns downward for AT&T and Verizon, which works against Philippon’s conjecture that the merger decision lifted their stock prices. To be clear, though, these changes are so minuscule in percentage terms that the adjustment wouldn’t make much of a difference in this case.
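Concretely, the beta adjustment implied by Karl Smith’s point looks like this; the overnight market return and both beta values below are hypothetical assumptions, chosen only to illustrate how small the correction is:

```python
# Market-adjusted ("beta-adjusted") version of the overnight moves.
# The overnight market return and both beta values are hypothetical
# assumptions, chosen only to show that the correction is tiny.
def abnormal_return(raw_return, market_return, beta):
    """Strip out the portion of a stock's move explained by the broad market."""
    return raw_return - beta * market_return

market_overnight = 0.003  # hypothetical +0.3% overnight move in S&P futures

for name, raw, beta in [("AT&T", 0.020, 0.7), ("Verizon", 0.025, 0.5)]:
    adjusted = abnormal_return(raw, market_overnight, beta)
    print(f"{name}: raw {raw:+.1%} -> adjusted {adjusted:+.2%}")
```

With moves this small, the beta adjustment shaves off only a few tenths of a percentage point, which is why the macro environment doesn’t change the conclusion here.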

Lastly, let’s see what we can learn from a similar historical episode in the stock market.

The parallel to the 2016 presidential election

The type of reversal we saw in AT&T and Verizon is not unprecedented. Some commenters said the pattern reminded them of the market reaction to Trump’s election in 2016:

Much like the T-Mobile/Sprint merger news, the “event” in 2016 was not a single moment in time. It began around 9 PM Tuesday night when Trump started to overperform in early state results. Over the course of the next three hours, S&P 500 futures contracts fell about 5 percent — an enormous drop in such a short period of time. If Philippon had tried to estimate the “Trump effect” in the same manner he did the T-Mobile/Sprint case, he would have concluded that a Trump presidency would reduce aggregate future profits by about 5 percent relative to a Clinton presidency.

But, as you can see in the chart above, if we widen the aperture of the event study to include the hours past midnight, the story flips. Markets started to bounce back even before Trump took the stage to make his victory speech. The themes of his speech were widely regarded as reassuring for markets, which further pared losses from earlier in the night. When regular trading hours resumed on Wednesday, the markets decided a Trump presidency would be very good for certain sectors of the economy, particularly finance, energy, biotech, and private prisons. By the end of the day, the stock market finished up about a percentage point from where it closed prior to the election — near all-time highs.

Maybe this is more noise than signal?

As a few others pointed out, these relatively small moves in AT&T and Verizon (less than 3 percent in either direction) may just be noise. That’s certainly possible given the magnitude of the changes. Contra Philippon, I think the methodology in question is too weak to rule out the pro-competitive theory of the case, i.e., that the new merged entity would be a stronger competitor to take on industry leaders AT&T and Verizon. We need much more robust and varied evidence before we can call anything “bogus.” Of course, that means this event study is not sufficient to prove the pro-competitive theory of the case, either.

Olivier Blanchard, a former chief economist of the IMF, shared Philippon’s thread on Twitter with this comment: “The beauty of the argument. Simple hypothesis, simple test, clear conclusion.”

If only things were so simple.

The DOJ and 20 state AGs sued Microsoft on May 18, 1998 for unlawful maintenance of its monopoly position in the market for PC operating systems. The government accused the desktop giant of tying its operating system (Windows) and its web browser (Internet Explorer). Microsoft had indeed become dominant in the PC market by the late 1980s:

Source: Asymco

But after the introduction of smartphones in the mid-2000s, Microsoft’s market share of personal computing units (including PCs, smartphones, and tablets) collapsed:

Source: Benedict Evans

Steven Sinofsky pointed out why this was a classic case of disruptive innovation rather than sustaining innovation: “Google and Microsoft were competitors but only by virtue of being tech companies hiring engineers. After that, almost nothing about what was being made or sold was similar even if things could ultimately be viewed as substitutes. That is literally the definition of innovation.”

Browsers

Microsoft grew to dominance during the PC era by bundling its desktop operating system (Windows) with its productivity software (Office) and modularizing the hardware providers. By 1995, Bill Gates had realized that the internet was the next big thing, calling it “The Internet Tidal Wave” in a famous internal memo. Gates feared that the browser would function as “middleware” and disintermediate Microsoft from its relationship with the end-user. At the time, Netscape Navigator was gaining market share from the first browser to popularize the internet, Mosaic (so-named because it supported a multitude of protocols).

Later that same year, Microsoft released its own browser, Internet Explorer, which would be bundled with its Windows operating system. Internet Explorer soon grew to dominate the market:

Source: Browser Wars

Steven Sinofsky described how the browser threatened to undermine the Windows platform (emphasis added):

Microsoft saw browsers as a platform threat to Windows. Famously. Browsers though were an app — running everywhere, distributed everywhere. Microsoft chose to compete as though browsing was on par with Windows (i.e., substitutes).

That meant doing things like IBM did — finding holes in distribution where browsers could “sneak” in (e.g., OEM deals) and seeing how to make Microsoft browser work best and only with Windows. Sound familiar? It does to me.

Imagine (some of us did) a world instead where Microsoft would have built a browser that was an app distributed everywhere, running everywhere. That would have been a very different strategy. One that some imagined, but not when Windows was central.

Showing how much your own gravity as a big company can make even obvious steps strategically weak: Microsoft knew browsers had to be cross-platform so it built Internet Explorer for Mac and Unix. Neat. But wait, the main strategic differentiator for Internet Explorer was ActiveX which was clearly Windows only.

So even when trying to compete in a new market the strategy was not going to work technically and customers would immediately know. Either they would ignore the key part of Windows or the key part of x-platform. This is what a big company “master plan” looks like … Active Desktop.

Regulators claimed victory but the loss already happened. But for none of the reasons the writers of history say at least [in my humble opinion]. As a reminder, Microsoft stopped working on Internet Explorer 7 years before Chrome even existed — literally didn’t release a new version for 5+ years.

One of the most important pieces of context for this case is that other browsers were also free for personal use (even if they weren’t bundled with an operating system). At the time, Netscape was free for individuals. Mosaic was free for non-commercial use. Today, Chrome and Firefox are free for all users. Chrome makes money for Google by increasing the value of its ecosystem and serving as a complement for its other products (particularly search). Firefox is able to more than cover its costs by charging Google (and others) to be the default option in its browser. 

By bundling Internet Explorer with Windows for free, Microsoft was arguably charging the market rate. In highly competitive markets, economic theory tells us the price should approach marginal cost — which in software is roughly zero. As James Pethokoukis argued, there are many more reasons to be skeptical about the popular narrative surrounding the Microsoft case. The reasons for doubt range across features, products, and markets, including server operating systems, mobile devices, and search engines. Let’s examine a few of them.

Operating Systems

In a 2007 article for Wired titled “I Blew It on Microsoft,” Lawrence Lessig, a Harvard law professor, admits that his predictions about the future of competition in computer operating systems failed to account for the potential of open-source solutions:

We pro-regulators were making an assumption that history has shown to be completely false: That something as complex as an OS has to be built by a commercial entity. Only crazies imagined that volunteers outside the control of a corporation could successfully create a system over which no one had exclusive command. We knew those crazies. They worked on something called Linux.

According to Web Technology Surveys, as of April 2019, about 70 percent of servers use a Linux-based operating system while the remaining 30 percent use Windows.

Mobile

In 2007, Steve Ballmer believed that Microsoft would be the dominant company in smartphones, saying in an interview with USA Today (emphasis added):

There’s no chance that the iPhone is going to get any significant market share. No chance. It’s a $500 subsidized item. They may make a lot of money. But if you actually take a look at the 1.3 billion phones that get sold, I’d prefer to have our software in 60% or 70% or 80% of them, than I would to have 2% or 3%, which is what Apple might get.

But as Ballmer himself noted in 2013, Microsoft was too committed to the Windows platform to fully pivot its focus to mobile:

If there’s one thing I regret, there was a period in the early 2000s when we were so focused on what we had to do around Windows that we weren’t able to redeploy talent to the new device form factor called the phone.

This is another classic example of the innovator’s dilemma. Microsoft enjoyed high profit margins in its Windows business, which caused the company to underrate the significance of the shift from PCs to smartphones.

Search

To further drive home how dependent Microsoft was on its legacy products, this 2009 WSJ piece notes that the company had a search engine ad service in 2000 and shut it down to avoid cannibalizing its core business:

Nearly a decade ago, early in Mr. Ballmer’s tenure as CEO, Microsoft had its own inner Google and killed it. In 2000, before Google married Web search with advertising, Microsoft had a rudimentary system that did the same, called Keywords, running on the Web. Advertisers began signing up. But Microsoft executives, in part fearing the company would cannibalize other revenue streams, shut it down after two months.

Ben Thompson says we should wonder if the case against Microsoft was a complete waste of everyone’s time (and money): 

In short, to cite Microsoft as a reason for antitrust action against Google in particular is to get history completely wrong: Google would have emerged with or without antitrust action against Microsoft; if anything the real question is whether or not Google’s emergence shows that the Microsoft lawsuit was a waste of time and money.

The most obvious implications of the Microsoft case were negative: (1) PCs became bloated with “crapware”; (2) competition in the browser market failed to materialize for many years; (3) PCs were less safe because Microsoft couldn’t bundle security software; and (4) some PC users missed out on using first-party software from Microsoft because it couldn’t be bundled with Windows. When weighed against these large costs, the supposed benefits pale in comparison.

Conclusion

In all three cases I’ve discussed in this series — AT&T, IBM, and Microsoft — the real story was not that antitrust enforcers divined the perfect time to break up — or regulate — the dominant tech company. The real story was that slow and then sudden technological change outpaced the organizational inertia of incumbents, permanently displacing the former tech giants from their dominant position in the tech ecosystem. 

The next paradigm shift will be near-impossible to predict. Anyone who actually knew which technology would win, and when, could make far more money implementing that knowledge than by playing pundit in the media. Regardless of whether the future winner will be Google, Facebook, Amazon, Apple, Microsoft, or some unknown startup, antitrust enforcers should remember that the proper goal of public policy in this domain is to maximize total innovation — from firms both large and small. Fetishizing innovation by small companies — and using law enforcement to harass big companies in the hope of an indirect benefit to competition — will make us all worse off in the long run.

The case against AT&T began in 1974. The government alleged that AT&T had monopolized the market for local and long-distance telephone service as well as telephone equipment. In 1982, the company entered into a consent decree to be broken up into eight pieces (the “Baby Bells” plus the parent company), which was completed in 1984. As a remedy, the government required the company to divest its local operating companies and guarantee equal access to all long-distance and information service providers (ISPs).

Source: Mohanram & Nanda

As the chart above shows, the divestiture broke up AT&T’s national monopoly into seven regional monopolies. In general, modern antitrust analysis focuses on the local product market (because that’s the relevant level for consumer decisions). In hindsight, how did breaking up a national monopoly into seven regional monopolies increase consumer choice? It’s also important to note that, prior to its structural breakup, AT&T was a government-granted monopoly regulated by the FCC. Any antitrust remedy should be analyzed in light of the company’s unique relationship with regulators.

Breaking up one national monopoly into seven regional monopolies is not an effective way to boost innovation. And there are economies of scale and network effects to be gained by owning a national network to serve a national market. In the case of AT&T, those economic incentives are why the Baby Bells forged themselves back together in the decades following the breakup.

Source: WSJ

As Clifford Winston and Robert Crandall noted:

Appearing to put Ma Bell back together again may embarrass the trustbusters, but it should not concern American consumers who, in two decades since the breakup, are overwhelmed with competitive options to provide whatever communications services they desire.

Moreover, according to Crandall & Winston (2003), the lower prices following the breakup of AT&T weren’t due to the structural remedy at all (emphasis added):

But on closer examination, the rise in competition and lower long-distance prices are attributable to just one aspect of the 1982 decree; specifically, a requirement that the Bell companies modify their switching facilities to provide equal access to all long-distance carriers. The Federal Communications Commission (FCC) could have promulgated such a requirement without the intervention of the antitrust authorities. For example, the Canadian regulatory commission imposed equal access on its vertically integrated carriers, including Bell Canada, in 1993. As a result, long-distance competition developed much more rapidly in Canada than it had in the United States (Crandall and Hazlett, 2001). The FCC, however, was trying to block MCI from competing in ordinary long-distance services when the AT&T case was filed by the Department of Justice in 1974. In contrast to Canadian and more recent European experience, a lengthy antitrust battle and a disruptive vertical dissolution were required in the U.S. market to offset the FCC’s anti-competitive policies. Thus, antitrust policy did not triumph in this case over restrictive practices by a monopolist to block competition, but instead it overcame anticompetitive policies by a federal regulatory agency.

A quick look at the data on telephone service in the US, EU, and Canada shows that the latter two were able to achieve similar reductions in price without breaking up their national providers.

Source: Crandall & Jackson (2011)

The paradigm shift from wireline to wireless

The technological revolution spurred by the transition from wireline telephone service to wireless telephone service shook up the telecommunications industry in the 1990s. The rapid change caught even some of the smartest players by surprise. In 1980, the management consulting firm McKinsey and Co. produced a report for AT&T predicting how large the cellular market might become by the year 2000. Their forecast said that 900,000 cell phones would be in use. The actual number was more than 109 million.

Along with the rise of broadband, the transition to wireless technology led to an explosion in investment. In contrast, the breakup of AT&T in 1984 had no discernible effect on the trend in industry investment:

The lesson for antitrust enforcers is clear: breaking up national monopolies into regional monopolies is no remedy. In certain cases, mandating equal access to critical networks may be warranted. Most of all, technology shocks will upend industries in ways that regulators — and dominant incumbents — fail to predict.

The Department of Justice began its antitrust case against IBM on January 17, 1969. The DOJ sued under the Sherman Antitrust Act, claiming IBM tried to monopolize the market for “general-purpose digital computers.” The case lasted almost thirteen years, ending on January 8, 1982 when Assistant Attorney General William Baxter declared the case to be “without merit” and dropped the charges. 

The case lasted so long, and expanded in scope so much, that by the time the trial began, “more than half of the practices the government raised as antitrust violations were related to products that did not exist in 1969.” Baltimore law professor Robert Lande said it was “the largest legal case of any kind ever filed.” Yale law professor Robert Bork called it “the antitrust division’s Vietnam.”

As the case dragged on, IBM was faced with increasingly perverse incentives. As NYU law professor Richard Epstein pointed out (emphasis added), 

Oddly enough, IBM was able to strengthen its antitrust-related legal position by reducing its market share, which it achieved through raising prices. When the suit was discontinued that share had fallen dramatically since 1969 from about 50 percent of the market to 37 percent in 1982. Only after the government suit ended did IBM lower its prices in order to increase market share.

Source: Levy & Welzer

In an interview with Vox, Tim Wu claimed that without the IBM case, Apple wouldn’t exist and we might still be using mainframe computers (emphasis added):

Vox: You said that Apple wouldn’t exist without the IBM case.

Wu: Yeah, I did say that. The case against IBM took 13 years and we didn’t get a verdict but in that time, there was the “policeman at the elbow” effect. IBM was once an all-powerful company. It’s not clear that we would have had an independent software industry, or that it would have developed that quickly, the idea of software as a product, [without this case]. That was one of the immediate benefits of that excavation.

And then the other big one is that it gave a lot of room for the personal computer to get started, and the software that surrounds the personal computer — two companies came in, Apple and Microsoft. They were sort of born in the wake of the IBM lawsuit. You know they were smart guys, but people did need the pressure off their backs.

Nobody is going to start in the shadow of Facebook and get anywhere. Snap’s been the best, but how are they doing? They’ve been halted. I think it’s a lot harder to imagine this revolutionary stuff that happened in the ’80s. If IBM had been completely unwatched by regulators, by enforcement, doing whatever they wanted, I think IBM would have held on and maybe we’d still be using mainframes, or something — a very different situation.

Steven Sinofsky, a former Microsoft executive and current Andreessen Horowitz board partner, had a different take on the matter, attributing IBM’s (belated) success in PCs to its utter failure in minicomputers (emphasis added):

IBM chose to prevent third parties from interoperating with mainframes sometimes at crazy levels (punch card formats). And then chose to defend until the end their business model of leasing … The minicomputer was a direct threat not because of technology but because of those attributes. I’ve heard people say IBM went into PCs because the antitrust loss caused them to look for growth or something. Ha. PCs were spun up because IBM was losing Minis. But everything about the PC was almost a fluke organizationally and strategically. The story of IBM regulation is told as though PCs exist because of the case.

The more likely story is that IBM got swamped by the paradigm shift from mainframes to PCs. IBM was dominant in mainframe computers which were sold to the government and large enterprises. Microsoft, Intel, and other leaders in the PC market sold to small businesses and consumers, which required an entirely different business model than IBM was structured to implement.

ABB – Always Be Bundling (Or Unbundling)

“There’s only two ways I know of to make money: bundling and unbundling.” – Jim Barksdale

In 1969, IBM unbundled its software and services from hardware sales. As many industry observers note, this action precipitated the rise of the independent software development industry. But would this have happened regardless of whether there was an ongoing antitrust case? Given that bundling and unbundling are ubiquitous in the history of the computer industry, the answer is likely yes.

As the following charts show, IBM first created an integrated solution in the mainframe market, controlling everything from raw materials and equipment to distribution and service. When PCs disrupted mainframes, the entire value chain was unbundled. Later, Microsoft bundled its operating system with applications software. 

Source: Clayton Christensen

The first smartphone to disrupt the PC market was the Apple iPhone — an integrated solution. And once the technology became “good enough” to meet the average consumer’s needs, Google modularized everything except the operating system (Android) and the app store (Google Play).

Source: SlashData
Source: Jake Nielson

Another key prong in Tim Wu’s argument that the government served as an effective “policeman at the elbow” in the IBM case is that the company adopted an open model when it entered the PC market and did not require an exclusive license from Microsoft to use its operating system. But exclusivity is only one term in a contract negotiation. In an interview with Playboy magazine in 1994, Bill Gates explained how he was able to secure favorable terms from IBM (emphasis added):

Our restricting IBM’s ability to compete with us in licensing MS-DOS to other computer makers was the key point of the negotiation. We wanted to make sure only we could license it. We did the deal with them at a fairly low price, hoping that would help popularize it. Then we could make our move because we insisted that all other business stay with us. We knew that good IBM products are usually cloned, so it didn’t take a rocket scientist to figure out that eventually we could license DOS to others. We knew that if we were ever going to make a lot of money on DOS it was going to come from the compatible guys, not from IBM. They paid us a fixed fee for DOS. We didn’t get a royalty, even though we did make some money on the deal. Other people paid a royalty. So it was always advantageous to us, the market grew and other hardware guys were able to sell units.

In this version of the story, IBM refrained from demanding an exclusive license from Microsoft not because it was fearful of antitrust enforcers but because Microsoft made significant concessions on price and capped its upside by agreeing to a fixed fee rather than a royalty. These economic and technical explanations for why IBM wasn’t able to leverage its dominant position in mainframes into the PC market are more consistent with the evidence than Wu’s “policeman at the elbow” theory.

In my next post, I will discuss the other major antitrust case that came to an end in 1982: AT&T.

Big Tech continues to be mired in “a very antitrust situation,” as President Trump put it in 2018. Antitrust advocates have zeroed in on Facebook, Google, Apple, and Amazon as their primary targets. These advocates justify their proposals by pointing to the trio of antitrust cases against IBM, AT&T, and Microsoft. Elizabeth Warren, in announcing her plan to break up the tech giants, highlighted the case against Microsoft:

The government’s antitrust case against Microsoft helped clear a path for Internet companies like Google and Facebook to emerge. The story demonstrates why promoting competition is so important: it allows new, groundbreaking companies to grow and thrive — which pushes everyone in the marketplace to offer better products and services.

Tim Wu, a law professor at Columbia University, summarized the overarching narrative recently (emphasis added):

If there is one thing I’d like the tech world to understand better, it is that the trilogy of antitrust suits against IBM, AT&T, and Microsoft played a major role in making the United States the world’s preeminent tech economy.

The IBM-AT&T-Microsoft trilogy of antitrust cases each helped prevent major monopolists from killing small firms and asserting control of the future (of the 80s, 90s, and 00s, respectively).

A list of products and firms that owe at least something to the IBM-AT&T-Microsoft trilogy.

(1) IBM: software as product, Apple, Microsoft, Intel, Seagate, Sun, Dell, Compaq

(2) AT&T: Modems, ISPs, AOL, the Internet and Web industries

(3) Microsoft: Google, Facebook, Amazon

Wu argues that by breaking up the current crop of dominant tech companies, we can sow the seeds for the next one. But this reasoning depends on an incorrect — albeit increasingly popular — reading of the history of the tech industry. Entrepreneurs take purposeful action to produce innovative products for an underserved segment of the market. They also respond to broader technological change by integrating or modularizing different products in their market. This bundling and unbundling is a never-ending process.

Whether the government distracts a dominant incumbent with a failed lawsuit (e.g., IBM), imposes an ineffective conduct remedy (e.g., Microsoft), or breaks up a government-granted national monopoly into regional monopolies (e.g., AT&T), the dynamic nature of competition between tech companies will far outweigh the effects of antitrust enforcers tilting at windmills.

In a series of posts for Truth on the Market, I will review the cases against IBM, AT&T, and Microsoft and discuss what we can learn from them. In this introductory article, I will explain the relevant concepts necessary for understanding the history of market competition in the tech industry.

Competition for the Market

In industries like tech that tend toward “winner takes most,” it’s important to distinguish between competition during the market maturation phase — when no clear winner has emerged and the technology has yet to be widely adopted — and competition after the technology has been diffused in the economy. Benedict Evans recently explained how this cycle works (emphasis added):

When a market is being created, people compete at doing the same thing better. Windows versus Mac. Office versus Lotus. MySpace versus Facebook. Eventually, someone wins, and no-one else can get in. The market opportunity has closed. Be, NeXT/Path were too late. Monopoly!

But then the winner is overtaken by something completely different that makes it irrelevant. PCs overtook mainframes. HTML/LAMP overtook Win32. iOS & Android overtook Windows. Google overtook Microsoft.

Tech antitrust too often wants to insert a competitor to the winning monopolist, when it’s too late. Meanwhile, the monopolist is made irrelevant by something that comes from totally outside the entire conversation and owes nothing to any antitrust interventions.

In antitrust parlance, this is known as competing for the market. By contrast, in more static industries where the playing field doesn’t shift so radically and the market doesn’t tip toward “winner takes most,” firms compete within the market. What Benedict Evans refers to as “something completely different” is often a disruptive product.

Disruptive Innovation

As Clay Christensen explains in the Innovator’s Dilemma, a disruptive product is one that is low-quality (but fast-improving), low-margin, and targeted at an underserved segment of the market. Initially, it is rational for the incumbent firms to ignore the disruptive technology and focus on improving their legacy technology to serve high-margin customers. But once the disruptive technology improves to the point it can serve the whole market, it’s too late for the incumbent to switch technologies and catch up. This process looks like overlapping s-curves:

Source: Max Mayblum
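The overlapping s-curve dynamic can be sketched with two logistic curves. The parameters below are made up purely for illustration (nothing here corresponds to real adoption data): a disruptor that starts later and lower, but improves at a faster rate, eventually overtakes the incumbent.

```python
# Toy model of overlapping S-curves: logistic improvement curves for an
# incumbent technology and a later, faster-improving disruptor.
# All parameters are illustrative, not empirical.
import math

def logistic(t, midpoint, rate, ceiling=100.0):
    """Performance/adoption level at time t for one technology."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

incumbent = lambda t: logistic(t, midpoint=5, rate=0.6)   # mature, improving slowly
disruptor = lambda t: logistic(t, midpoint=12, rate=1.0)  # starts later, improves faster

# The disruptor begins far below the incumbent...
assert disruptor(0) < incumbent(0)

# ...but there is a crossover point where it overtakes the legacy curve.
crossover = next(t for t in range(0, 30) if disruptor(t) > incumbent(t))
print(crossover)
```

The qualitative point survives any reasonable choice of parameters: as long as the disruptor's improvement rate is higher, a single crossing is guaranteed, which is exactly the incumbent's dilemma described above.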

We see these S-curves in the technology industry all the time:

Source: Benedict Evans

As Christensen explains in the Innovator’s Solution, consumer needs can be thought of as “jobs-to-be-done.” Early on, when a product is just good enough to get a job done, firms compete on product quality and pursue an integrated strategy — designing, manufacturing, and distributing the product in-house. As the underlying technology improves and the product overshoots the needs of the jobs-to-be-done, products become modular and the primary dimension of competition moves to cost and convenience. As this cycle repeats itself, companies are either bundling different modules together to create more integrated products or unbundling integrated products to create more modular products.

Moore’s Law

Source: Our World in Data

Moore’s Law is the gasoline that gets poured on the fire of technology cycles. Though this “law” is nothing more than the observation that “the number of transistors in a dense integrated circuit doubles about every two years,” the implications for dynamic competition are difficult to overstate. As Bill Gates explained in a 1994 interview with Playboy magazine, Moore’s Law means that computer power is essentially “free” from an engineering perspective:

When you have the microprocessor doubling in power every two years, in a sense you can think of computer power as almost free. So you ask, Why be in the business of making something that’s almost free? What is the scarce resource? What is it that limits being able to get value out of that infinite computing power? Software.

Exponentially smaller integrated circuits can be combined with new user interfaces and networks to create new computer classes, which themselves represent the opportunity for disruption.
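The arithmetic behind the observation is simple compounding. As a rough sketch (the starting count and years below are made-up round numbers, not actual chip data):

```python
# Moore's Law as compounding: transistor counts doubling roughly every
# two years. The figures here are illustrative round numbers only.

def transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count forward assuming a fixed doubling period."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# A hypothetical chip with 1 million transistors in 1990 projects to
# 2**10 = 1024x that count twenty years later.
projected = transistors(1_000_000, 1990, 2010)
print(round(projected))  # 1024000000
```

Ten doublings in twenty years yield a thousandfold increase, which is why Gates could treat computing power as "almost free" over any planning horizon longer than a few years.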

Bell’s Law of Computer Classes

Source: Brad Campbell

A corollary to Moore’s Law, Bell’s law of computer classes predicts that “roughly every decade a new, lower priced computer class forms based on a new programming platform, network, and interface resulting in new usage and the establishment of a new industry.” Originally formulated in 1972, the law has since played out in the birth of mainframes, minicomputers, workstations, personal computers, laptops, smartphones, and the Internet of Things.

Understanding these concepts — competition for the market, disruptive innovation, Moore’s Law, and Bell’s Law of Computer Classes — will be crucial for understanding the true effects (or lack thereof) of the antitrust cases against IBM, AT&T, and Microsoft. In my next post, I will look at the DOJ’s (ultimately unsuccessful) 13-year antitrust battle with IBM.

Qualcomm is currently in the midst of a high-profile antitrust case brought by the FTC. At the heart of these proceedings lies Qualcomm’s so-called “No License, No Chips” (NLNC) policy, whereby it purportedly refuses to sell chips to OEMs that have not concluded a license agreement covering its underlying intellectual property. According to the FTC and Qualcomm’s opponents, this ultimately thwarts competition in the chipset market.

Against this backdrop, Mark Lemley, Douglas Melamed, and Steven Salop penned a high-profile amicus brief supporting the FTC’s stance. 

We responded to their brief in a Truth on the Market blog post, and this led to a series of blog exchanges between the amici and ourselves. 

This post summarizes these exchanges.

1. Amicus brief supporting the FTC’s stance, and ICLE brief in support of Qualcomm’s position

The starting point of this blog exchange was an amicus brief written by Mark Lemley, Douglas Melamed, and Steven Salop (“the amici”), and signed by 40 law and economics scholars. 

The amici made two key normative claims:

  • Qualcomm’s no license, no chips policy is unlawful under well-established antitrust principles: 
    “Qualcomm uses the NLNC policy to make it more expensive for OEMs to purchase competitors’ chipsets, and thereby disadvantages rivals and creates artificial barriers to entry and competition in the chipset markets.”
  • Qualcomm’s refusal to license chip-set rivals reinforces the no license, no chips policy and violates the antitrust laws:
    “Qualcomm’s refusal to license chipmakers is also unlawful, in part because it bolsters the NLNC policy. In addition, Qualcomm’s refusal to license chipmakers increases the costs of using rival chipsets, excludes rivals, and raises barriers to entry even if NLNC is not itself illegal.”

It is important to note that ICLE also filed an amicus brief in these proceedings. Contrary to the amici, ICLE’s scholars concluded that Qualcomm’s behavior did not raise any antitrust concerns and was ultimately a matter of contract law.

2. ICLE response to the Lemley, Melamed, and Salop amicus brief

We responded to the amici in a first blog post.

The post argued that the amici failed to convincingly show that Qualcomm’s NLNC policy was exclusionary. We highlighted two factors in particular.

  • First, Qualcomm could not use its chipset position and NLNC policy to avert the threat of FRAND litigation and thereby extract supracompetitive royalties:
    “Qualcomm will be unable to charge a total price that is significantly above the price of rivals’ chips, plus the FRAND rate for its IP (and expected litigation costs).”
  • Second, Qualcomm’s behavior did not appear to fall within standard patterns of strategic behavior:
    “The amici attempt to overcome this weakness by implicitly framing their argument in terms of exclusivity, strategic entry deterrence, and tying […]. But none of these arguments totally overcomes the flaw in their reasoning.”

3. Amici’s counterargument 

The amici wrote a thoughtful response to our post. Their piece rested on two main arguments:

  • The amici underlined that their theory of anticompetitive harm did not imply any form of profit sacrifice on Qualcomm’s part (in the chip segment):
    Manne and Auer seem to think that the concern with the no license/no chips policy is that it enables inflated patent royalties to subsidize a profit sacrifice in chip sales, as if the issue were predatory pricing in chips.  But there is no such sacrifice.
  • The deleterious effects of Qualcomm’s behavior were merely a function of its NLNC policy and strong chipset position. In conjunction, these two factors deterred OEMs from pursuing FRAND litigation:
    Qualcomm is able to charge more than $2 for the license only because it uses the power of its chip monopoly to coerce the OEMs to give up the option of negotiating in light of the otherwise applicable constraints on the royalties it can charge.

4. ICLE rebuttal

We then responded to the amici with the following points:

  • We agreed that it would be a problem if Qualcomm could prevent OEMs from negotiating license agreements in the shadow of FRAND litigation:
    “The critical question is whether there is a realistic threat of litigation to constrain the royalties commanded by Qualcomm (we believe that Lemley et al. agree with us on this point).”
  • However, Qualcomm’s behavior did not preclude OEMs from pursuing this type of strategy:
    “We believe the following facts support our assertion:
    OEMs have pursued various litigation strategies in order to obtain lower rates on Qualcomm’s IP. […]
    For the most part, Qualcomm’s threats to cut off chip supplies were just that: threats. […]
    OEMs also wield powerful threats. […]
    Qualcomm’s chipsets might no longer be “must-buys” in the future.”

5. Amici’s surrebuttal

The amici sent us a final response (reproduced here in full):

In their original post, Manne and Auer argued that the antitrust argument against Qualcomm’s no license/no chips policy was based on bad economics and bad law.  They now seem to have abandoned that argument and claim instead – contrary to the extensive factual findings of the district court – that, while Qualcomm threatened to cut off chips, it was a paper tiger that OEMs could, and knew they could, ignore.  The implication is that the Ninth Circuit should affirm the district court on the no license/ no chips issue unless it sets aside the court’s fact findings.  That seems like agreement with the position of our amicus brief.

We will not in this post review the huge factual record. We do note, however, that Manne and Auer cite in support of their factual argument only that 3 industry giants brought and then settled litigation against Qualcomm. But all 3 brought antitrust litigation; their doing so hardly proves that contract litigation or what Manne and Auer call “holdout” were viable options for anyone, much less for smaller OEMs. The fact that Qualcomm found it necessary to actually cut off only one OEM – and that it took the OEM only 7 days to capitulate – certainly does not prove that Qualcomm’s threats lacked credibility. Notably, Manne and Auer do not claim that any OEMs bought chips from competitors of Qualcomm (although Apple bought some chips from Intel for a short while). No license/no chips appears to have been a successful, coercive policy, not an easily ignored threat.

6. Concluding remarks

First and foremost, we would like to thank the amici for thoughtfully engaging with us. This is what the law & economics tradition is all about: moving the ball forward by taking part in vigorous, multidisciplinary debates.

With that said, we do feel compelled to leave readers with two short remarks. 

First, contrary to what the amici claim, we believe that our position has remained the same throughout these debates. 

Second, and more importantly, we think that everyone agrees that the critical question is whether OEMs were prevented from negotiating licenses in the shadow of FRAND litigation. 

We leave it up to Truth on the Market readers to judge which side of this debate is correct.

[This guest post is authored by Mark A. Lemley, Professor of Law and the Director of the Program in Law, Science & Technology at Stanford Law School; A. Douglas Melamed, Professor of the Practice of Law at Stanford Law School and former Senior Vice President and General Counsel of Intel from 2009 to 2014; and Steven Salop, Professor of Economics and Law at Georgetown Law School. It is part of an ongoing debate between the authors, on one side, and Geoffrey Manne and Dirk Auer, on the other, and has been integrated into our ongoing series on the FTC v. Qualcomm case, where all of the posts in this exchange are collected.]

In their original post, Manne and Auer argued that the antitrust argument against Qualcomm’s no license/no chips policy was based on bad economics and bad law. They now seem to have abandoned that argument and claim instead – contrary to the extensive factual findings of the district court – that, while Qualcomm threatened to cut off chips, it was a paper tiger that OEMs could, and knew they could, ignore. The implication is that the Ninth Circuit should affirm the district court on the no license/ no chips issue unless it sets aside the court’s fact findings. That seems like agreement with the position of our amicus brief.

We will not in this post review the huge factual record. We do note, however, that Manne and Auer cite in support of their factual argument only that 3 industry giants brought and then settled litigation against Qualcomm. But all 3 brought antitrust litigation; their doing so hardly proves that contract litigation or what Manne and Auer call “holdout” were viable options for anyone, much less for smaller OEMs. The fact that Qualcomm found it necessary to actually cut off only one OEM – and that it took the OEM only 7 days to capitulate – certainly does not prove that Qualcomm’s threats lacked credibility. Notably, Manne and Auer do not claim that any OEMs bought chips from competitors of Qualcomm (although Apple bought some chips from Intel for a short while). No license/no chips appears to have been a successful, coercive policy, not an easily ignored threat.

Last week, we posted a piece on TOTM, criticizing the amicus brief written by Mark Lemley, Douglas Melamed and Steven Salop in the ongoing Qualcomm litigation. The authors prepared a thoughtful response to our piece, which we published today on TOTM. 

In this post, we highlight the points where we agree with the amici (or at least we think so), as well as those where we differ.

Negotiating in the shadow of FRAND litigation

Let us imagine a hypothetical world, where an OEM must source one chipset from Qualcomm (i.e., this segment of the market is non-contestable) and one chipset from either Qualcomm or its rivals (i.e., this segment is contestable). For both of these chipsets, the OEM must also reach a license agreement with Qualcomm.

We use the same numbers as the amici: 

  • The OEM has a reserve price of $20 for each chip/license combination. 
  • Rivals can produce chips at a cost of $11. 
  • The hypothetical FRAND benchmark is $2 per chip. 

With these numbers in mind, the critical question is whether there is a realistic threat of litigation to constrain the royalties commanded by Qualcomm (we believe that Lemley et al. agree with us on this point). The following table shows the prices that a hypothetical OEM would be willing to pay in both of these scenarios:

Scenario                      Non-contestable segment    Contestable segment
With litigation threat        $20                        $13
Without litigation threat     $20                        $20

(The contestable segment is where Qualcomm can increase its profits if the threat of litigation is removed.)

When the threat of litigation is present, Qualcomm obtains a total of $20 for the combination of non-contestable chips and IP. Qualcomm can use its chipset position to evade FRAND and charge the combined monopoly price of $20. At a chipset cost of $11, it would thus make $9 worth of profits. However, it earns only $13 for contestable chips ($2 in profits). This is because competition brings the price of chips down to $11 and Qualcomm does not have a chipset advantage to earn more than the FRAND rate for its IP.

When the threat of litigation is taken off the table, all chipsets effectively become non-contestable. Qualcomm still earns $20 for its previously non-contestable chips. But it can now raise its IP rate above the FRAND benchmark in the previously contestable segment (for example, by charging $10 for the IP). This squeezes its chipset competitors.
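The arithmetic in the two scenarios above can be sketched in a few lines. All figures are the amici's illustrative numbers, not real-world prices, and the equal-cost assumption for Qualcomm's own chips is ours:

```python
# Worked version of the hypothetical: reserve price of $20 per
# chip/license combo, rival chip cost of $11, FRAND royalty of $2.
# Illustrative numbers only.

RESERVE = 20    # OEM reserve price per chip + license
CHIP_COST = 11  # rivals' chip cost (assumed equal for Qualcomm here)
FRAND = 2       # benchmark FRAND royalty per chip

# Scenario 1: credible litigation threat. Competition caps the
# contestable segment at rival cost + FRAND royalty, while the
# non-contestable segment still fetches the reserve price.
threat_noncontestable = RESERVE         # $20 (profit: $9 over chip cost)
threat_contestable = CHIP_COST + FRAND  # $13 (profit: the $2 FRAND rate)

# Scenario 2: no litigation threat. Qualcomm can reallocate the $20
# package, e.g. a $10 chip price plus a $10 royalty, which prices out
# a rival whose chip cost is $11.
no_threat_price = RESERVE                        # $20 in both segments
implied_chip_price = no_threat_price - 10        # $10 under that split
rival_viable = CHIP_COST < implied_chip_price    # $11 < $10 -> False

print(threat_contestable, no_threat_price, rival_viable)
```

Removing the litigation threat thus raises the contestable-segment price from $13 to $20, which is the $33-versus-$40 gap discussed below.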

If our understanding of the amici’s response is correct, they argue that the combination of Qualcomm’s strong chipset position and its “No License, No Chips” policy (“NLNC”) effectively nullifies the threat of litigation:

Qualcomm is able to charge more than $2 for the license only because it uses the power of its chip monopoly to coerce the OEMs to give up the option of negotiating in light of the otherwise applicable constraints on the royalties it can charge. 

According to the amici, the market thus moves from a state of imperfect competition (where OEMs would pay $33 for two chips and QC’s license) to a world of monopoly (where they pay the full $40).

We beg to differ. 

Our points of disagreement

From an economic standpoint, the critical question is the extent to which Qualcomm’s chipset position and its NLNC policy deter OEMs from obtaining closer-to-FRAND rates.

While the case record is mixed and contains some ambiguities, we think it strongly suggests that Qualcomm’s chipset position and its NLNC policy do not preclude OEMs from using litigation to obtain rates that are close to the FRAND benchmark. There is thus no reason to believe that it can exclude its chipset rivals.

We believe the following facts support our assertion:

  • OEMs have pursued various litigation strategies in order to obtain lower rates on Qualcomm’s IP. As we mentioned in our previous post, this was notably the case for Apple, Samsung and LG. All three companies ultimately reached settlements with Qualcomm (and these settlements were concluded in the shadow of litigation proceedings — indeed, in Apple’s case, on the second day of trial). If anything, this suggests that court proceedings are an integral part of negotiations between Qualcomm and its OEMs.
  • For the most part, Qualcomm’s threats to cut off chip supplies were just that: threats. In any negotiation, parties will try to convince their counterpart that they have a strong outside option. Qualcomm may have done so by posturing that it would not sell chips to OEMs before they concluded a license agreement. 

    However, it seems that only once did Qualcomm apparently follow through with its threats to withhold chips (against Sony). And even then, the supply cutoff lasted only seven days.

    And while many OEMs did take Qualcomm to court in order to obtain more favorable license terms, this never resulted in Qualcomm cutting off their chipset supplies. Other OEMs thus had no reason to believe that litigation would entail disruptions to their chipset supplies.
  • OEMs also wield powerful threats. These include patent holdout, litigation, vertical integration, and purchasing chips from Qualcomm’s rivals. And of course they have aggressively pushed antitrust authorities around the world to bring this and other litigation — even quite possibly manipulating the record to bolster their cases. Here’s how one observer sums up Apple’s activity in this regard:

    “Although we really only managed to get a small glimpse of Qualcomm’s evidence demonstrating the extent of Apple’s coordinated strategy to manipulate the FRAND license rate, that glimpse was particularly enlightening. It demonstrated a decade-long coordinated effort within Apple to systematically engage in what can only fairly be described as manipulation (if not creation of evidence) and classic holdout.

    Qualcomm showed during opening arguments that, dating back to at least 2009, Apple had been laying the foundation for challenging its longstanding relationship with Qualcomm.” (Emphasis added)

    Moreover, the holdout and litigation paths have been strengthened by the eBay case, which significantly reduced the financial risks involved in pursuing a holdout and/or litigation strategy. Given all of this, it is far from obvious that it is Qualcomm who enjoys the stronger bargaining position here.
  • Qualcomm’s chipsets might no longer be “must-buys” in the future. Rivals have gained increasing traction over the past couple of years. And with 5G just around the corner, this momentum could conceivably accelerate. Whether or not one believes that this will ultimately be the case, the trend surely places additional constraints on Qualcomm’s conduct. Aggressive behavior today may spur disgruntled rivals to enter the chipset market or switch suppliers tomorrow.

To summarize, as we understand their response, the delta between supracompetitive and competitive prices is entirely a function of Qualcomm’s ability to charge supra-FRAND prices for its licenses. On this we agree. But, unlike Lemley et al., we do not agree that Qualcomm is in a position to evade its FRAND pledges by using its strong position in the chipset market and its NLNC policy.

Finally, it must be said again: To the extent that that is the problem — the charging of supra-FRAND prices for licenses — the issue is manifestly a contract issue, not an antitrust one. All of the complexity of the case would fall away, and the litigation would be straightforward. But the opponents of Qualcomm’s practices do not really want to ensure that Qualcomm lowers its royalties by this delta; if they did, they would be bringing/supporting FRAND litigation. What the amici and Qualcomm’s contracting partners appear to want is to use antitrust litigation to force Qualcomm to license its technology at even lower rates — to force Qualcomm into a different business model in order to reset the baseline from which FRAND prices are determined (i.e., at the chip level, rather than at the device level). That may be an intelligible business strategy from the perspective of Qualcomm’s competitors, but it certainly isn’t sensible antitrust policy.

[This guest post is authored by Mark A. Lemley, Professor of Law and the Director of the Program in Law, Science & Technology at Stanford Law School; A. Douglas Melamed, Professor of the Practice of Law at Stanford Law School and former Senior Vice President and General Counsel of Intel from 2009 to 2014; and Steven Salop, Professor of Economics and Law at Georgetown Law School. It is a response to the post, “Exclusionary Pricing Without the Exclusion: Unpacking Qualcomm’s No License, No Chips Policy,” by Geoffrey Manne and Dirk Auer, which is itself a response to Lemley, Melamed, and Salop’s amicus brief in FTC v. Qualcomm.]

Geoffrey Manne and Dirk Auer’s defense of Qualcomm’s no license/no chips policy is based on a fundamental misunderstanding of how that policy harms competition.  The harm is straightforward in light of facts proven at trial. In a nutshell, OEMs must buy some chips from Qualcomm or else exit the handset business, even if they would also like to buy additional chips from other suppliers. OEMs must also buy a license to Qualcomm’s standard essential patents, whether they use Qualcomm’s chips or other chips implementing the same industry standards. There is a monopoly price for the package of Qualcomm’s chips plus patent license. Assume that the monopoly price is $20. Assume further that, if Qualcomm’s patents were licensed in a standalone transaction, as they would be if they were owned by a firm that did not also make chips, the market price for the patent license would be $2. In that event, the monopoly price for the chip would be $18, and a chip competitor could undersell Qualcomm if Qualcomm charged the monopoly price of $18 and the competitor could profitably sell chips for a lower price. If the competitor’s cost of producing and selling chips was $11, for example, it could easily undersell Qualcomm and force Qualcomm to lower its chip prices below $18, thereby reducing the price for the package to a level below $20.

However, the no license/no chips policy enables Qualcomm to allocate the package price of $20 any way it wishes. Because the OEMs must buy some chips from Qualcomm, Qualcomm is able to coerce the OEMs to accept any such allocation by threatening not to sell them chips if they do not agree to a license at the specified terms. The prices could thus be $18 and $2; or, for example, they could be $10 for the chips and $10 for the license. If Qualcomm sets the license price at $10 and a chip price of $10, it would continue to realize the monopoly package price of $20. But in that case, a competitor could profitably undersell Qualcomm only if its chip cost were less than $10. A competitor with a cost of $11 would then not be able to successfully enter the market, and Qualcomm would not need to lower its chip prices. That is how the no license/no chips policy blocks entry of chip competitors and maintains Qualcomm’s chip monopoly.
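The entry arithmetic of this example can be checked with a short sketch. All dollar figures are the hypothetical numbers used above, not actual Qualcomm prices, and `rival_can_enter` is an illustrative helper of our own, not anything from the record:

```python
# Hypothetical figures from the example above (not real prices).
PACKAGE_PRICE = 20.0  # monopoly price of Qualcomm chips + patent license
RIVAL_COST = 11.0     # rival chipmaker's per-chip cost

def rival_can_enter(license_price: float) -> bool:
    """Whatever Qualcomm charges for the license, the chip share of the
    $20 package is what a rival must undercut. Entry is profitable only
    if the rival's cost falls below that chip share."""
    qualcomm_chip_price = PACKAGE_PRICE - license_price
    return RIVAL_COST < qualcomm_chip_price

# Standalone $2 license: Qualcomm's chip price is $18, so an $11-cost
# rival can undersell it.
print(rival_can_enter(license_price=2.0))   # True
# Reallocated prices ($10 license, $10 chip): the same rival is blocked.
print(rival_can_enter(license_price=10.0))  # False
```

Note that the package price never changes in this sketch; only its allocation between license and chip does, which is the mechanism the paragraph above describes.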

Manne and Auer’s defense of the no license/no chips policy is deeply flawed. In the first place, Manne and Auer mischaracterize the problem as one in which “Qualcomm undercuts [chipset rivals] on chip prices and recoups its losses by charging supracompetitive royalty rates on its IP.” On the basis of this description of the issue, they argue that, if Qualcomm cannot charge more than $2 for the license, it cannot use license revenues to offset the chip price reduction. And if Qualcomm can charge more than $2 for the license, it does not need a chip monopoly in order to make supracompetitive licensing profits. This argument is wrong both factually and conceptually.  

As a factual matter, there are constraints on Qualcomm’s ability to charge more than $2 for the license if the license is sold by itself. If sold by itself, the license would be negotiated in the shadow of infringement litigation and the royalty would be constrained by the value of the technology claimed by the patent, the risk that the patent would be found to be invalid or not infringed, the “reasonable royalty” contemplated by the patent laws, and the contractual commitment to license on FRAND terms. But Qualcomm is able to circumvent those constraints by coercing OEMs to pay a higher price or else lose access to essential Qualcomm chips. In other words, Qualcomm’s ability to charge more than $2 for the license is not exogenous. Qualcomm is able to charge more than $2 for the license only because it uses the power of its chip monopoly to coerce the OEMs to give up the option of negotiating in light of the otherwise applicable constraints on the royalties it can charge. It is a simple story of bundling with simultaneous recoupment.  

As a conceptual matter, Manne and Auer seem to think that the concern with the no license/no chips policy is that it enables inflated patent royalties to subsidize a profit sacrifice in chip sales, as if the issue were predatory pricing in chips.  But there is no such sacrifice. Money is fungible, and Manne and Auer have it backwards. The problem is that the no license/no chips policy enables Qualcomm to make purely nominal changes by allocating some of its monopoly chip price to the license price. Qualcomm offsets that nominal license price increase when the OEM buys chips from it by lowering the chip price by that amount in order to maintain the package price at the monopoly price.  There is no profit sacrifice for Qualcomm because the lower chip price simply offsets the higher license price. Qualcomm offers no such offset when the OEM buys chips from other suppliers. To the contrary, by using its chip monopoly to increase the license price, it increases the cost to OEMs of using competitors’ chips and is thus able to perpetuate its chip monopoly and maintain its monopoly chip prices and profits. Absent this policy, OEMs would buy more chips from third parties; Qualcomm’s prices and profits would fall; and consumers would benefit.

At the end of the day, Manne and Auer rely on the old “single monopoly profit” or “double counting” idea that a monopolist cannot both charge a monopoly price and extract additional consideration as well. But, again, they have it backwards. Manne and Auer describe the issue as whether Qualcomm can leverage its patent position in the technology markets to increase its market power in chips. But that is not the issue. Qualcomm is not trying to increase profits by leveraging monopoly power from one market into a different market in order to gain additional monopoly profits in the second market. Instead, it is using its existing monopoly power in chips to maintain that monopoly power in the first place. Assuming Qualcomm has a chip monopoly, it is true that it earns the same revenue from OEMs regardless of how it allocates the all-in price of $20 to its chips versus its patents. But by allocating more of the all-in price to the patents (i.e., in our example, $10 instead of $2), Qualcomm is able to maintain its monopoly by preventing rival chipmakers from undercutting the $20 monopoly price of the package. That is how competition and consumers are harmed.

Qualcomm is currently in the midst of a high-profile antitrust case against the FTC. At the heart of these proceedings lies Qualcomm’s so-called “No License, No Chips” (NLNC) policy, whereby it purportedly refuses to sell chips to OEMs that have not concluded a license agreement covering its underlying intellectual property. According to the FTC and Qualcomm’s opponents, this ultimately thwarts competition in the chipset market.

But Qualcomm’s critics fail to convincingly explain how NLNC harms competition — a failing that is particularly evident in the short hypothetical put forward in the amicus brief penned by Mark Lemley, Douglas Melamed, and Steven Salop. This blog post responds to their brief.

The amici’s hypothetical

In order to highlight the most salient features of the case against Qualcomm, the brief’s authors offer the following stylized example:

A hypothetical example can illustrate how Qualcomm’s strategy increases the royalties it is able to charge OEMs. Suppose that the reasonable royalty Qualcomm could charge OEMs if it licensed the patents separately from its chipsets is $2, and that the monopoly price of Qualcomm’s chips is $18 for an all-in monopoly cost to OEMs of $20. Suppose that a new chipmaker entrant is able to manufacture chipsets of comparable quality at a cost of $11 each. In that case, the rival chipmaker entrant could sell its chips to OEMs for slightly more than $11. An OEM’s all-in cost of buying from the new entrant would be slightly above $13 (i.e., the Qualcomm reasonable license royalty of $2 plus the entrant chipmaker’s price of slightly more than $11). This entry into the chipset market would induce price competition for chips. Qualcomm would still be entitled to its patent royalties of $2, but it would no longer be able to charge the monopoly all-in price of $20. The competition would force Qualcomm to reduce its chipset prices from $18 down to something closer to $11 and its all-in price from $20 down to something closer to $13.

Qualcomm’s NLNC policy prevents this competition. To illustrate, suppose instead that Qualcomm implements the NLNC policy, raising its patent royalty to $10 and cutting the chip price to $10. The all-in cost to an OEM that buys Qualcomm chips will be maintained at the monopoly level of $20. But the OEM’s cost of using the rival entrant’s chipsets now will increase to a level above $21 (i.e., the slightly higher than $11 price for the entrant’s chipset plus the $10 royalty that the OEM pays to Qualcomm). Because the cost of using the entrant’s chipsets will exceed Qualcomm’s all-in monopoly price, Qualcomm will face no competitive pressure to reduce its chipset or all-in prices.
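The quoted passage runs the same numbers from the OEM’s side. That comparison can be reproduced as follows (again, these are purely the brief’s illustrative figures, and the helper name is ours):

```python
# OEM's all-in cost under the brief's illustrative numbers.
ENTRANT_CHIP_PRICE = 11.0  # entrant prices at (slightly above) its cost

def oem_all_in_cost(chip_price: float, qualcomm_royalty: float) -> float:
    """Under NLNC the OEM pays Qualcomm's royalty on every handset,
    whichever firm supplied the chip."""
    return chip_price + qualcomm_royalty

# $2 standalone royalty: buying from the entrant costs ~$13, well below
# Qualcomm's $20 all-in price.
print(oem_all_in_cost(ENTRANT_CHIP_PRICE, 2.0))   # 13.0
# $10 NLNC royalty: the entrant's package now costs ~$21, above $20.
print(oem_all_in_cost(ENTRANT_CHIP_PRICE, 10.0))  # 21.0
```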

A close inspection reveals that this hypothetical is deeply flawed

There appear to be five steps in the amici’s reasoning:

  1. Chips and IP are complementary goods that are bought in fixed proportions. So buyers have a single reserve price for both; 
  2. Because of its FRAND pledges, Qualcomm is unable to directly charge a monopoly price for its IP;
  3. But, according to the amici, Qualcomm can obtain these monopoly profits by keeping competitors out of the chipset market [this would give Qualcomm a chipset monopoly and, theoretically at least, enable it to charge the combined (IP + chips) monopoly price for its chips alone, thus effectively evading its FRAND pledges]; 
  4. To keep rivals out of the chipset market, Qualcomm undercuts them on chip prices and recoups its losses by charging supracompetitive royalty rates on its IP.
  5. This is allegedly made possible by the “No License, No Chips” policy, which forces firms to obtain a license from Qualcomm, even when they purchase chips from rivals.

While points 1 and 3 of the amici’s reasoning are uncontroversial, points 2 and 4 are mutually exclusive. This flaw ultimately undermines their entire argument, notably point 5. 

The contradiction between points 2 and 4 is evident. The amici argue (using hypothetical but representative numbers) that its FRAND pledges should prevent Qualcomm from charging more than $2 in royalties per chip (“the reasonable royalty Qualcomm could charge OEMs if it licensed the patents separately from its chipsets is $2”), and that Qualcomm deters entry in the chip market by charging $10 in royalties per chip sold (“raising its patent royalty to $10 and cutting the chip price to $10”).

But these statements cannot both be true: either Qualcomm can charge more than $2 in royalties per chip, or it cannot.

There is, however, one important exception (discussed below): parties can mutually agree to depart from FRAND pricing. But let us momentarily ignore this limitation, and discuss two baseline scenarios: One where Qualcomm can evade its FRAND pledges and one where it cannot. Comparing these two settings reveals that Qualcomm cannot magically increase its profits by shifting revenue from chips to IP.

For a start, if Qualcomm cannot raise the price of its IP beyond the hypothetical FRAND benchmark ($2, in the amici’s hypo), then it cannot use its standard essential technology to compensate for foregone revenue in the chipset market. Any supracompetitive profits that it earns must thus result from its competitive position in the chipset market.

Conversely, if it can raise its IP revenue above the $2 benchmark, then it does not require a strong chipset position to earn supracompetitive profits. 

It is worth unpacking this second point. If Qualcomm can indeed evade its FRAND pledges and charge royalties of $10 per chip, then it need not exclude chipset rivals to obtain supracompetitive profits. 

Take the amici’s hypothetical numbers and assume further that Qualcomm has the same cost as its chipset rivals (i.e., $11), and that there are 100 potential buyers with a uniform reserve price of $20 (the reserve price assumed by the amici).

As the amici point out, Qualcomm can earn the full monopoly profits by charging $10 for IP and $10 for chips. Qualcomm would thus pocket a total of $900 in profits ((10+10-11)*100). What the amici brief fails to acknowledge is that Qualcomm could also earn the exact same profits by staying out of the chipset market. Qualcomm could let its rivals charge $11 per chip (their cost), and demand $9 for its IP. It would thus earn the same $900 of profits (9*100). 

In this hypothetical, the only reason for Qualcomm to enter the chip market is if it is a more efficient chipset producer than its chipset rivals, or if it can out-compete them with better chipsets. For instance, if Qualcomm’s costs are only $10 per chip, Qualcomm could earn a total of $1000 in profits by driving out these rivals ((10+10-10)*100). Or, if it can produce better chips, though at higher cost and price (say, $12 per chip), it could earn the same $1000 in profits ((10+12-12)*100). Both situations would benefit purchasers, of course. Conversely, at a higher production cost of $12 per chip, but without any quality improvement, Qualcomm would earn only $800 in profits ((10+10-12)*100) and would thus do better to exit the chipset market.
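The profit comparisons in the last few paragraphs can be reproduced directly. The 100 buyers, the $20 reserve price, and the cost figures are all assumptions of the hypothetical, and the function names are ours:

```python
N_BUYERS = 100  # assumed number of buyers, each with a $20 reserve price

def qualcomm_profit(ip_price, chip_price, chip_cost):
    """Total profit if Qualcomm sells both the license and the chip
    to every buyer."""
    return (ip_price + chip_price - chip_cost) * N_BUYERS

def licensing_only_profit(ip_price):
    """Total profit if Qualcomm licenses its IP but stays out of the
    chip market (rivals sell chips at their $11 cost)."""
    return ip_price * N_BUYERS

print(qualcomm_profit(10, 10, 11))  # 900 -- same cost as rivals
print(licensing_only_profit(9))     # 900 -- identical, without chips
print(qualcomm_profit(10, 10, 10))  # 1000 -- more efficient producer
print(qualcomm_profit(10, 10, 12))  # 800 -- less efficient: exit chips
```

The first two lines make the key point: with equal chip costs, entering the chipset market adds nothing to what licensing alone would earn.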

Let us recap:

  • If Qualcomm can easily evade its FRAND pledges, then it need not enter the chipset market to earn supracompetitive profits; 
  • If it cannot evade these FRAND obligations, then it will be hard-pressed to leverage its IP bottleneck so as to dominate chipsets. 

The upshot is that Qualcomm would need to benefit from exceptional circumstances in order to improperly leverage its FRAND-encumbered IP and impose anticompetitive harm by excluding its rivals in the chipset market.

The NLNC policy

According to the amici, that exceptional circumstance is the NLNC policy. In their own words:

The competitive harm is a result of the royalty being higher than it would be absent the NLNC policy.

This is best understood by adding an important caveat to our previous hypothetical: The $2 FRAND benchmark of the amici’s hypothetical is only a fallback option that can be obtained via litigation. Parties are thus free to agree upon a higher rate, for instance $10. This could, notably, be the case if Qualcomm offset the IP increase by reducing its chipset price, such that OEMs who purchase both chipsets and IP from Qualcomm were indifferent between contracts with either of the two royalty rates.

At first sight, this caveat may appear to significantly improve the FTC’s case against Qualcomm — it raises the specter of Qualcomm charging predatory prices on its chips and then recouping its losses on IP. But further examination suggests that this is an unlikely scenario.

Though firms may nominally be paying $10 for Qualcomm’s IP and $10 for its chips, there is no escaping the fact that buyers have an outside option in both the IP and chip segments (respectively, litigation to obtain FRAND rates, and buying chips from rivals). As a result, Qualcomm will be unable to charge a total price that is significantly above the price of rivals’ chips, plus the FRAND rate for its IP (and expected litigation costs).

This is where the amici’s hypothetical is most flawed. 

It is one thing to argue that Qualcomm can charge $10 per chipset and $10 per license to firms that purchase all of their chips and IP from it (or, as the amici point out, charge a single price of $20 for the bundle). It is another matter entirely to argue — as the amici do — that Qualcomm can charge $10 for its IP to firms that receive little or no offset in the chip market because they purchase few or no chips from Qualcomm, and who have the option of suing Qualcomm, thus obtaining a license at $2 per chip (if that is, indeed, the maximum FRAND rate). Firms would have to be foolish to ignore this possibility and to acquiesce to contracts at substantially higher rates. 

Indeed, two of the largest and most powerful OEMs — Apple and Samsung — have entered into such contracts with Qualcomm. Given their ability (and, indeed, willingness) to sue for FRAND violations and to produce their own chips or assist other manufacturers in doing so, it is difficult to conclude that they have assented to supracompetitive terms. (The fact that they would prefer even lower rates, and have supported this and other antitrust suits against Qualcomm, doesn’t change this conclusion; it just means they see antitrust as a tool to reduce their costs. And the fact that Apple settled its own FRAND and antitrust suit against Qualcomm — paying Qualcomm $4.5 billion and entering into a global licensing agreement with it — after just one day of trial further supports this conclusion.)

Double counting

The amici attempt to overcome this weakness by implicitly framing their argument in terms of exclusivity, strategic entry deterrence, and tying:

An OEM cannot respond to Qualcomm’s NLNC policy by purchasing chipsets only from a rival chipset manufacturer and obtaining a license at the reasonable royalty level (i.e., $2 in the example). As the district court found, OEMs needed to procure at least some 3G CDMA and 4G LTE chipsets from Qualcomm.

* * *

The surcharge burdens rivals, leads to anticompetitive effects in the chipset markets, deters entry, and impedes follow-on innovation. 

* * *

As an economic matter, Qualcomm’s NLNC policy is analogous to the use of a tying arrangement to maintain monopoly power in the market for the tying product (here, chipsets).

But none of these arguments totally overcomes the flaw in their reasoning. Indeed, as Aldous Huxley once pointed out, “several excuses are always less convincing than one”.

For a start, the amici argue that Qualcomm uses its strong chipset position to force buyers into accepting its supracompetitive IP rates, even in those instances where they purchase chipsets from rivals. 

In making this point, the amici fall prey to the “double counting fallacy” that Robert Bork famously warned about in The Antitrust Paradox: Monopolists cannot simultaneously charge a monopoly price AND purchase exclusivity (or other contractual restrictions) from their buyers/suppliers.

The amici fail to recognize the important sacrifices that Qualcomm would have to make in order for the above strategy to be viable. In simple terms, Qualcomm would have to offset every dollar it charges above the FRAND benchmark in the IP segment with an equivalent price reduction in the chipset segment.

This has important ramifications for the FTC’s case.

Qualcomm would have to charge lower — not higher — IP fees to OEMs who purchased a large share of their chips from third party chipmakers. Otherwise, there would be no carrot to offset its greater-than-FRAND license fees, and these OEMs would have significant incentives to sue (especially in a post-eBay world where the threat of injunctions is reduced if they happen to lose). 

And yet, this is the exact opposite of what the FTC alleged:

Qualcomm sometimes expressly charged higher royalties on phones that used rivals’ chips. And even when it did not, its provision of incentive funds to offset its license fees when OEMs bought its chips effectively resulted in a discriminatory surcharge. (emphasis added)

The infeasibility of alternative explanations

One theoretical workaround would be for Qualcomm to purchase exclusivity from its OEMs, in an attempt to foreclose chipset rivals. 

Once again, Bork’s double counting argument suggests that this would be particularly onerous. By accepting exclusivity-type requirements, OEMs would not only be reducing potential competition in the chipset market, they would also be contributing to an outcome where Qualcomm could evade its FRAND pledges in the IP segment of the market. This is particularly true for pivotal OEMs (such as Apple and Samsung), who may single-handedly affect the market’s long-term trajectory. 

The amici completely overlook this possibility, while the FTC argues that this may explain the rebates that Qualcomm gave to Apple. 

But even if the rebates Qualcomm gave Apple amounted to de facto exclusivity, there are still important objections. Authorities would notably need to prove that Qualcomm could recoup its initial losses (i.e. that the rebate maximized Qualcomm’s long-term profits). If this was not the case, then the rebates may simply be due to either efficiency considerations or Apple’s significant bargaining power (Apple is routinely cited as a potential source of patent holdout; see, e.g., here and here). 

Another alternative would be for Qualcomm to evict its chipset rivals through strategic entry deterrence or limit pricing (see here and here, respectively). But while the economic literature suggests that incumbents may indeed forgo short-term profits in order to deter rivals from entering the market, these theories generally rest on assumptions of imperfect information and/or strategic commitments. Neither of these factors was alleged in the case at hand.

In particular, there is no sense that Qualcomm’s purported decision to shift royalties from chips to IP somehow harms its short-term profits, or that it is merely a strategic device used to deter the entry of rivals. As the amici themselves seem to acknowledge, the pricing structure maximizes Qualcomm’s short term revenue (even ignoring potential efficiency considerations). 

Note that this is not just a matter of economic policy. The case law relating to unilateral conduct infringements — be it Brooke Group, Alcoa, or Aspen Skiing — almost invariably requires some form of profit sacrifice on the part of the monopolist. (For a legal analysis of this issue in the Qualcomm case, see ICLE’s Amicus brief, and yesterday’s blog post on the topic).

The amici are thus left with the argument that Qualcomm could structure its prices differently, so as to maximize the profits of its rivals. Why it would choose to do so, or should indeed be forced to, is a whole other matter.

Finally, the amici refer to the strategic tying literature (here), typically associated with the Microsoft case and the so-called “platform threat”. But this analogy is highly problematic. 

Unlike Microsoft and its Internet Explorer browser, Qualcomm’s IP is de facto — and necessarily — tied to the chips that practice its technology. This is not a bug, it is a feature of the patent system. Qualcomm is entitled to royalties, whether it manufactures chips itself or leaves that task to rival manufacturers. In other words, there is no counterfactual world where OEMs could obtain Qualcomm-based chips without entering into some form of license agreement (whether directly or indirectly) with Qualcomm. The fact that OEMs must acquire a license that covers Qualcomm’s IP — even when they purchase chips from rivals — is part and parcel of the IP system.

In any case, there is little reason to believe that Qualcomm’s decision to license its IP at the OEM level is somehow exclusionary. The gist of the strategic tying literature is that incumbents may use their market power in a primary market to thwart entry in the market for a complementary good (and ultimately prevent rivals from using their newfound position in the complementary market in order to overthrow the incumbent in the primary market; Carlton & Waldman, 2002). But this is not the case here.

Qualcomm does not appear to be using what little power it might have in the IP segment in order to dominate its rivals in the chip market. As has already been explained above, doing so would imply some profit sacrifice in the IP segment in order to encourage OEMs to accept its IP/chipset bundle, rather than rivals’ offerings. This is the exact opposite of what the FTC and amici allege in the case at hand. The facts thus cut against a conjecture of strategic tying.

Conclusion

So where does this leave the amici and their brief? 

Absent further evidence, their conclusion that Qualcomm injured competition is untenable. There is no evidence that Qualcomm’s pricing structure — enacted through the NLNC policy — significantly harmed competition to the detriment of consumers. 

When all is said and done, the amici’s brief ultimately amounts to an assertion that Qualcomm should be made to license its intellectual property at a rate that — in their estimation — is closer to the FRAND benchmark. That judgment is a matter of contract law, not antitrust.

On November 22, the FTC filed its answering brief in the FTC v. Qualcomm litigation. As we’ve noted before, it has always seemed a little odd that the current FTC is so vigorously pursuing this case, given some of the precedents it might set and the Commission majority’s apparent views on such issues. But this may also help explain why the FTC has now opted to eschew the district court’s decision and pursue a novel, but ultimately baseless, legal theory in its brief.

The FTC’s decision to abandon the district court’s reasoning constitutes an important admission: contrary to the district court’s finding, there is no legal basis to find an antitrust duty to deal in this case. As Qualcomm stated in its reply brief (p. 12), “the FTC disclaims huge portions of the decision.” In its effort to try to salvage its case, however, the FTC reveals just how bad its arguments have been from the start, and why the case should be tossed out on its ear.

What the FTC now argues

The FTC’s new theory is that SEP holders that fail to honor their FRAND licensing commitments should be held liable under “traditional Section 2 standards,” even though they do not have an antitrust duty to deal with rivals who are members of the same standard-setting organizations (SSOs) under the “heightened” standard laid out by the Supreme Court in Aspen and Trinko:  

To be clear, the FTC does not contend that any breach of a FRAND commitment is a Sherman Act violation. But Section 2 liability is appropriate when, as here, a monopolist SEP holder commits to license its rivals on FRAND terms, and then implements a blanket policy of refusing to license those rivals on any terms, with the effect of substantially contributing to the acquisition or maintenance of monopoly power in the relevant market…. 

The FTC does not argue that Qualcomm had a duty to deal with its rivals under the Aspen/Trinko standard. But that heightened standard does not apply here, because—unlike the defendants in Aspen, Trinko, and the other duty-to-deal precedents on which it relies—Qualcomm entered into a voluntary contractual commitment to deal with its rivals as part of the SSO process, which is itself a derogation from normal market competition. And although the district court applied a different approach, this Court “may affirm on any ground finding support in the record.” Cigna Prop. & Cas. Ins. Co. v. Polaris Pictures Corp., 159 F.3d 412, 418-19 (9th Cir. 1998) (internal quotation marks omitted) (emphasis added) (pp.69-70).

In other words, according to the FTC, because Qualcomm engaged in the SSO process—which is itself “a derogation from normal market competition”—its evasion of the constraints of that process (i.e., the obligation to deal with all comers on FRAND terms) is “anticompetitive under traditional Section 2 standards.”

The most significant problem with this new standard is not that it deviates from the basis upon which the district court found Qualcomm liable; it’s that it is entirely made up and has no basis in law.

Absent an antitrust duty to deal, patent law grants patentees the right to exclude rivals from using patented technology

Part of the bundle of rights connected with the property right in patents is the right to exclude, and along with it, the right of a patent holder to decide whether, and on what terms, to sell licenses to rivals. The law curbs that right only in select circumstances. Under antitrust law, such a duty to deal, in the words of the Supreme Court in Trinko, “is at or near the outer boundary of §2 liability.” The district court’s ruling, however, is based on the presumption of harm arising from a SEP holder’s refusal to license, rather than an actual finding of anticompetitive effect under §2. The duty to deal it finds imposes upon patent holders an antitrust obligation to license their patents to competitors. (While, of course, participation in an SSO may contractually obligate an SEP-holder to license its patents to competitors, that is an entirely different issue than whether it operates under a mandatory requirement to do so as a matter of public policy).  

The right of patentees to exclude is well-established, and injunctions enforcing that right are regularly issued by courts. Although the rate of permanent injunctions has decreased since the Supreme Court’s eBay decision, research has found that federal district courts still grant them over 70% of the time after a patent holder prevails on the merits. And for patent litigation involving competitors, the same research finds that injunctions are granted 85% of the time.  In principle, even SEP holders can receive injunctions when infringers do not act in good faith in FRAND negotiations. See Microsoft Corp. v. Motorola, Inc., 795 F.3d 1024, 1049 n.19 (9th Cir. 2015):

We agree with the Federal Circuit that a RAND commitment does not always preclude an injunctive action to enforce the SEP. For example, if an infringer refused to accept an offer on RAND terms, seeking injunctive relief could be consistent with the RAND agreement, even where the commitment limits recourse to litigation. See Apple Inc., 757 F.3d at 1331–32

Aside from the FTC, federal agencies largely agree with this approach to the protection of intellectual property. For instance, the Department of Justice, the US Patent and Trademark Office, and the National Institute for Standards and Technology recently released their 2019 Joint Policy Statement on Remedies for Standards-Essential Patents Subject to Voluntary F/RAND Commitments, which clarifies that:

All remedies available under national law, including injunctive relief and adequate damages, should be available for infringement of standards-essential patents subject to a F/RAND commitment, if the facts of a given case warrant them. Consistent with the prevailing law and depending on the facts and forum, the remedies that may apply in a given patent case include injunctive relief, reasonable royalties, lost profits, enhanced damages for willful infringement, and exclusion orders issued by the U.S. International Trade Commission. These remedies are equally available in patent litigation involving standards-essential patents. While the existence of F/RAND or similar commitments, and conduct of the parties, are relevant and may inform the determination of appropriate remedies, the general framework for deciding these issues remains the same as in other patent cases. (emphasis added).

By broadening the antitrust duty to deal well beyond the bounds set by the Supreme Court, the district court opinion (and the FTC’s preferred approach, as well) eviscerates the right to exclude inherent in patent rights. In the words of retired Federal Circuit Judge Paul Michel in an amicus brief in the case: 

finding antitrust liability premised on the exercise of valid patent rights will fundamentally abrogate the patent system and its critical means for promoting and protecting important innovation.

And as we’ve noted elsewhere, this approach would seriously threaten consumer welfare:

Of course Qualcomm conditions the purchase of its chips on the licensing of its intellectual property; how could it be any other way? The alternative would require Qualcomm to actually facilitate the violation of its property rights by forcing it to sell its chips to device makers even if they refuse its patent license terms. In that world, what device maker would ever agree to pay more than a pittance for a patent license? The likely outcome is that Qualcomm charges more for its chips to compensate (or simply stops making them). Great, the FTC says; then competitors can fill the gap and — voila: the market is more competitive, prices will actually fall, and consumers will reap the benefits.

Except it doesn’t work that way. As many economists, including both the current [now former] and a prominent former chief economist of the FTC, have demonstrated, forcing royalty rates lower in such situations is at least as likely to harm competition as to benefit it. There is no sound theoretical or empirical basis for concluding that using antitrust to move royalty rates closer to some theoretical ideal will actually increase consumer welfare. All it does for certain is undermine patent holders’ property rights, virtually ensuring there will be less innovation.

The FTC realizes the district court doesn’t have the evidence to support its duty to deal analysis

Antitrust law does not abrogate the right of a patent holder to exclude and to choose when and how to deal with rivals, unless there is a proper finding of a duty to deal. In order to find a duty to deal, there must be a harm to competition, not just a competitor, which, under the Supreme Court’s Aspen and Trinko cases can be inferred in the duty-to-deal context only where the challenged conduct leads to a “profit sacrifice.” But the record does not support such a finding. As we wrote in our amicus brief:

[T]he Supreme Court has identified only a single scenario from which it may plausibly be inferred that defendant’s refusal to deal with rivals harms consumers: The existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for defendant. 

A monopolist’s willingness to forego (short-term) profits plausibly permits an inference that conduct is not procompetitive, because harm to a rival caused by an increase in efficiency should lead to higher—not lower—profits for defendant. And “[i]f a firm has been ‘attempting to exclude rivals on some basis other than efficiency,’ it’s fair to characterize its behavior as predatory.” Aspen Skiing, 472 U.S. at 605 (quoting Robert Bork, The Antitrust Paradox 138 (1978)).

In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.” Slip op. at 137. 

But it is not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. See Trinko, 540 U.S. at 409 (“a willingness to forsake short-term profits”); Aspen Skiing, 472 U.S. at 610–11 (“it was willing to sacrifice short-run benefits”)…

The record here uniformly indicates Qualcomm expected to maximize its royalties by dealing with OEMs rather than rival chip makers; it neither anticipated nor endured short-term loss. As the district court itself concluded, Qualcomm’s licensing practices avoided patent exhaustion and earned it “humongously more lucrative” royalties. Slip op. at 1243–254. That Qualcomm anticipated greater profits from its conduct precludes an inference of anticompetitive harm.

Moreover, Qualcomm didn’t refuse to allow rivals to use its patents; it simply didn’t sell them explicit licenses to do so. As discussed in several places by the district court:

According to Andrew Hong (Legal Counsel at Samsung Intellectual Property Center), during license negotiations, Qualcomm made it clear to Samsung that “Qualcomm’s standard business practice was not to provide licenses to chip manufacturers.” Hong Depo. 161:16-19. Instead, Qualcomm had an “unwritten policy of not going after chip manufacturers.” Id. at 161:24-25… (p.123)

* * *

Alex Rogers (QTL President) testified at trial that as part of the 2018 Settlement Agreement between Samsung and Qualcomm, Qualcomm did not license Samsung, but instead promised only that Qualcomm would offer Samsung a FRAND license before suing Samsung: “Qualcomm gave Samsung an assurance that should Qualcomm ever seek to assert its cellular SEPs against that component business, against those components, we would first make Samsung an offer on fair, reasonable, and non-discriminatory terms.” Tr. at 1989:5-10. (p.124)

This is an important distinction. Qualcomm allows rivals to use its patented technology by not asserting its patent rights against them—which is to say: instead of licensing its technology for a fee, Qualcomm allows rivals to use its technology to develop their own chips royalty-free (and recoups its investment by licensing the technology to OEMs that choose to implement the technology in their devices). 

The irony of this analysis, of course, is that the district court effectively suggests that Qualcomm must charge rivals a positive, explicit price in exchange for a license in order to facilitate competition, while allowing rivals to use its patented technology for free (or at the “cost” of some small reduction in legal certainty, perhaps) is anticompetitive.

In any event, the district court’s own factual finding that Qualcomm’s licensing scheme was “humongously” profitable shows there was no profit sacrifice of the kind a duty-to-deal finding requires. The general presumption that patent holders may exclude rivals yields to an antitrust duty to deal only where the patent holder sacrifices profits, and Qualcomm did not sacrifice profits by adopting the challenged licensing scheme.

It is perhaps unsurprising, then, that the FTC chose not to defend the district court’s duty-to-deal analysis, even though the court held in the FTC’s favor. But while the FTC was right not to countenance the district court’s flawed arguments, the alternative argument in the FTC’s reply brief is even worse.

The FTC’s novel theory of harm is unsupported and weak

As noted, the FTC’s alternative theory is that Qualcomm violated Section 2 simply by failing to live up to its contractual SSO obligations. On the FTC’s view, because Qualcomm joined an SSO, it may no longer legally refuse to deal, and there is no need to engage in an Aspen/Trinko analysis in order to find liability. Instead, according to the FTC’s brief, liability arises because the evasion of an exogenous pricing constraint (such as an SSO’s FRAND obligation) itself constitutes an antitrust harm:

Of course, a breach of contract, “standing alone,” does not “give rise to antitrust liability.” City of Vernon v. S. Cal. Edison Co., 955 F.2d 1361, 1368 (9th Cir. 1992); cf. Br. 52 n.6. Instead, a monopolist’s conduct that breaches such a contractual commitment is anticompetitive only when it satisfies traditional Section 2 standards—that is, only when it “tends to impair the opportunities of rivals and either does not further competition on the merits or does so in an unnecessarily restrictive way.” Cascade Health, 515 F.3d at 894. The district court’s factual findings demonstrate that Qualcomm’s breach of its SSO commitments satisfies both elements of that traditional test. (emphasis added)

To begin, the operative language the FTC quotes from Cascade Health is attributed in Cascade Health itself to Aspen Skiing. In other words, even Cascade Health recognizes that Aspen Skiing represents the Supreme Court’s interpretation of that language in the duty-to-deal context. And in that case, contrary to the FTC’s argument in its brief, the Court read the standard to require a showing that a defendant “was not motivated by efficiency concerns and that it was willing to sacrifice short-run benefits and consumer goodwill in exchange for a perceived long-run impact on its… rival.” (Aspen Skiing at 610-11) (emphasis added).

That language cannot simultaneously justify an entirely different legal standard from the one laid out in Aspen Skiing. Far from dispensing with that case’s duty-to-deal requirements, Cascade Health reinforces them.

Second, to support its argument the FTC points to Broadcom v. Qualcomm, 501 F.3d 297 (3d Cir. 2007), as an example of a court upholding an antitrust claim based on a defendant’s violation of FRAND terms.

In Broadcom, the Third Circuit, relying on the FTC’s enforcement action against Rambus (before the D.C. Circuit overturned it), found an actionable antitrust claim where Qualcomm deceived other members of an SSO by promising to

include its proprietary technology in the… standard by falsely agreeing to abide by the [FRAND policies], but then breached those agreements by licensing its technology on non-FRAND terms. The intentional acquisition of monopoly power through deception… violates antitrust law. (emphasis added)

Even assuming Broadcom were good law post-Rambus, the case is inapposite. In Broadcom the court found that Qualcomm could be held to violate antitrust law by deceiving the SSO (by falsely promising to abide by FRAND terms) in order to induce it to accept Qualcomm’s patent in the standard. The court’s concern was that, by falsely inducing the SSO to adopt its technology, Qualcomm deceptively acquired monopoly power and limited access to competing technology:

When a patented technology is incorporated in a standard, adoption of the standard eliminates alternatives to the patented technology…. Firms may become locked in to a standard requiring the use of a competitor’s patented technology. 

Key to the court’s finding was that the alleged deception induced the SSO to adopt the technology in its standard:

We hold that (1) in a consensus-oriented private standard-setting environment, (2) a patent holder’s intentionally false promise to license essential proprietary technology on FRAND terms, (3) coupled with an SDO’s reliance on that promise when including the technology in a standard, and (4) the patent holder’s subsequent breach of that promise, is actionable conduct. (emphasis added)

Here, the claim is different. There is no allegation that Qualcomm engaged in deceptive conduct that affected the incorporation of its technology into the relevant standard. Indeed, there is no allegation that Qualcomm’s monopoly power arises from its challenged practices; the claim is only that Qualcomm abused its lawful monopoly power to extract supracompetitive prices. Even if, under Broadcom, an SEP holder may be held liable for falsely promising to deal with rivals in order to acquire monopoly power through the inclusion of its technology in a standard, it does not follow that the holder can be held liable for evading a commitment to deal with rivals unrelated to its inclusion in a standard, nor that such a refusal to deal should be evaluated under any standard other than the one laid out in Aspen Skiing.

Moreover, the FTC nowhere mentions the D.C. Circuit’s subsequent Rambus decision, which overturned the FTC and called Broadcom’s holding into question, nor does it discuss the Supreme Court’s NYNEX decision in any depth. Yet these cases stand clearly for the opposite proposition: a court cannot infer competitive harm from a company’s evasion of a FRAND pricing constraint. As we wrote in our amicus brief:

In Rambus Inc. v. FTC, 522 F.3d 456 (D.C. Cir. 2008), the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.” Id. at 466 (citation omitted). NYNEX and Rambus reinforce the Court’s repeated holding that an inference is permissible only where it points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” do not permit a court to undermine “[t]he freedom to switch suppliers [which] lies close to the heart of the competitive process that the antitrust laws seek to encourage. . . . Thus, this Court has refused to apply per se reasoning in cases involving that kind of activity.” NYNEX, 525 U.S. at 137 (citations omitted).

Essentially, the FTC’s brief alleges that Qualcomm’s conduct amounts to an evasion of the constraint imposed by its FRAND commitments, and that, absent such commitments, the SSO process itself would be presumptively anticompetitive. Indeed, on the FTC’s telling, it is only the FRAND obligation that saves the SSO agreement from being inherently anticompetitive:

In fact, when a firm has made FRAND commitments to an SSO, requiring the firm to comply with its commitments mitigates the risk that the collaborative standard-setting process will harm competition. Product standards—implicit “agreement[s] not to manufacture, distribute, or purchase certain types of products”—“have a serious potential for anticompetitive harm.” Allied Tube, 486 U.S. at 500 (citation and footnote omitted). Accordingly, private SSOs “have traditionally been objects of antitrust scrutiny,” and the antitrust laws tolerate private standard-setting “only on the understanding that it will be conducted in a nonpartisan manner offering procompetitive benefits,” and in the presence of “meaningful safeguards” that prevent the standard-setting process from falling prey to “members with economic interests in stifling product competition.” Id. at 500- 01, 506-07; see Broadcom, 501 F.3d at 310, 314-15 (collecting cases). 

FRAND commitments are among the “meaningful safeguards” that SSOs have adopted to mitigate this serious risk to competition…. 

Courts have therefore recognized that conduct that breaches or otherwise “side-steps” these safeguards is appropriately subject to conventional Sherman Act scrutiny, not the heightened Aspen/Trinko standard… (p.83-84)

In defense of the proposition that courts apply “traditional antitrust standards to breaches of voluntary commitments made to mitigate antitrust concerns,” the FTC’s brief cites not only Broadcom, but also two other cases:

While this Court has long afforded firms latitude to “deal or refuse to deal with whomever [they] please[] without fear of violating the antitrust laws,” FountWip, Inc. v. Reddi-Wip, Inc., 568 F.2d 1296, 1300 (9th Cir. 1978) (citing Colgate, 250 U.S. at 307), it, too, has applied traditional antitrust standards to breaches of voluntary commitments made to mitigate antitrust concerns. In Mount Hood Stages, Inc. v. Greyhound Corp., 555 F.2d 687 (9th Cir. 1977), this Court upheld a judgment holding that Greyhound violated Section 2 by refusing to interchange bus traffic with a competing bus line after voluntarily committing to do so in order to secure antitrust approval from the Interstate Commerce Commission for proposed acquisitions. Id. at 697; see also, e.g., Biovail Corp. Int’l v. Hoechst Aktiengesellschaft, 49 F. Supp. 2d 750, 759 (D.N.J. 1999) (breach of commitment to deal in violation of FTC merger consent decree exclusionary under Section 2). (p.85-86)

The cases the FTC cites in support of this proposition all deal with companies sidestepping obligations in order to acquire monopoly power through deception. The two cases quoted above involve companies making promises to government agencies to win merger approval and then failing to follow through; and, as noted, Broadcom involved the acquisition of monopoly power by making false promises to an SSO to induce the adoption of proprietary technology in a standard. While such conduct in the acquisition of monopoly power may be actionable under Broadcom (though this is highly dubious post-Rambus), none of these cases supports the FTC’s claim that an SEP holder violates antitrust law whenever it evades an SSO obligation to license its technology to rivals.

Conclusion

Put simply, the district court’s opinion in FTC v. Qualcomm runs headlong into the Supreme Court’s Aspen decision and founders there. That is why the FTC is trying to avoid analyzing the case under Aspen and subsequent duty-to-deal jurisprudence (including Trinko, the Ninth Circuit’s MetroNet decision, and the Tenth Circuit’s Novell decision): it knows that if the appellate court applies those standards, the district court’s duty-to-deal analysis will fail. But the FTC’s basis for applying a different standard is unsupportable. And even if its logic for applying a different standard were valid, the FTC’s proffered alternative theory is groundless in light of Rambus and NYNEX. The Ninth Circuit should vacate the district court’s finding of liability.