The FTC’s recent YouTube settlement and $170 million fine, related to charges that YouTube violated the Children’s Online Privacy Protection Act (COPPA), have put the issue of targeted advertising back in the news. With an upcoming FTC workshop and COPPA Rule Review looming, it’s worth looking at this case in more detail and reconsidering COPPA’s 2013 amendment to the definition of personal information.
According to the complaint issued by the FTC and the New York Attorney General, YouTube violated COPPA by collecting personal information of children on its platform without obtaining parental consent. While the headlines scream that this is an egregious violation of privacy and parental rights, a closer look suggests that there is actually very little about the case that normal people would find to be all that troubling. Instead, it appears to be another in the current spate of elitist technopanics.
COPPA defines personal information to include persistent identifiers, like cookies, used for targeted advertising. These cookies allow site operators to have some idea of what kinds of websites a user may have visited previously. Having knowledge of users’ browsing history allows companies to advertise more effectively than is possible with contextual advertisements, which guess at users’ interests based upon the type of content being viewed at the time. The age-old problem for advertisers is that “half the money spent on advertising is wasted; the trouble is they don’t know which half.” While this isn’t completely solved by the use of targeted advertising based on web browsing and search history, the fact that such advertising is more lucrative compared to contextual advertisements suggests that it works better for companies.
COPPA, since the 2013 update, states that persistent identifiers are personal information by themselves, even if not linked to any other information that could be used to actually identify children (i.e., anyone under 13 years old).
As a consequence of this rule, YouTube doesn’t allow children under 13 to create an account. Instead, YouTube created a separate mobile application called YouTube Kids with curated content targeted at younger users. That application serves only contextual advertisements that do not rely on cookies or other persistent identifiers, but the content available on YouTube Kids also remains available on YouTube.
YouTube’s error, in the eyes of the FTC, was that the site left it to channel owners on YouTube’s general audience site to determine whether to monetize their content through targeted advertising or to opt out and use only contextual advertisements. Turns out, many of those channels — including channels identified by the FTC as “directed to children” — made the more lucrative choice by choosing to have targeted advertisements on their channels.
Whether YouTube’s practices violate the letter of COPPA or not, a more fundamental question remains unanswered: What is the harm, exactly?
COPPA takes for granted that it is harmful for kids to receive targeted advertisements, even where, as here, the targeting is based not on any knowledge about the users as individuals, but upon the browsing and search history of the device they happen to be on. But children under 13 are extremely unlikely to have purchased the devices they use, to pay for the Internet access those devices require, or to have any disposable income or means of paying for goods and services online. Which makes one wonder: To whom are the advertisements served to children actually targeted? The answer is obvious to everyone but the FTC and those who support the COPPA Rule: the children’s parents.
Television programs aimed at children have long been supported by contextual advertisements for cereal and toys. Tony the Tiger and Lucky the Leprechaun were staples of Saturday morning cartoons when I was growing up, along with all kinds of Hot Wheels commercials. As I soon discovered as a kid, I had the ability to ask my parents to buy these things, but ultimately no ability to buy them on my own. In other words: Parental oversight is essentially built in to any type of advertisement children see, in the sense that few children can realistically make their own purchases or even view those advertisements without their parents giving them a device and internet access to do so.
When broken down like this, it is much harder to see the harm. It’s one thing to create regulatory schemes to prevent stalkers, creepers, and perverts from using online information to interact with children. It’s quite another to greatly reduce the ability of children’s content to generate revenue by use of relatively anonymous persistent identifiers like cookies — and thus, almost certainly, to greatly reduce the amount of content actually made for and offered to children.
On the one hand, COPPA thus disregards the possibility that controls that take advantage of parental oversight may be the most cost-effective form of protection in such circumstances. As Geoffrey Manne noted regarding the FTC’s analogous complaint against Amazon under the FTC Act, which ignored the possibility that Amazon’s in-app purchasing scheme was tailored to take advantage of parental oversight in order to avoid imposing excessive and needless costs:
[For the FTC], the imagined mechanism of “affirmatively seeking a customer’s authorized consent to a charge” is all benefit and no cost. Whatever design decisions may have informed the way Amazon decided to seek consent are either irrelevant, or else the user-experience benefits they confer are negligible….
Amazon is not abdicating its obligation to act fairly under the FTC Act and to ensure that users are protected from unauthorized charges. It’s just doing so in ways that also take account of the costs such protections may impose — particularly, in this case, on the majority of Amazon customers who didn’t and wouldn’t suffer such unauthorized charges….
At the same time, enforcement of COPPA against targeted advertising on kids’ content will have perverse and self-defeating consequences. As Berin Szoka notes:
This settlement will cut advertising revenue for creators of child-directed content by more than half. This will give content creators a perverse incentive to mislabel their content. COPPA was supposed to empower parents, but the FTC’s new approach actually makes life harder for parents and cripples functionality even when they want it. In short, artists, content creators, and parents will all lose, and it is not at all clear that this will do anything to meaningfully protect children.
This war against targeted advertising aimed at children has a cost. While many cheer the fine levied against YouTube (or think it wasn’t high enough) and the promised changes to its platform (though the dissenting Commissioners didn’t think those went far enough, either), the actual result will be less content — and especially less free content — available to children.
Far from being a win for parents and children, the shift in oversight responsibility from parents to the FTC will likely lead to less-effective oversight, more difficult user interfaces, less children’s programming, and higher costs for everyone — all without obviously mitigating any harm in the first place.
Last week the International Center for Law & Economics (ICLE) and twelve noted law and economics scholars filed an amicus brief in the Ninth Circuit in FTC v. Qualcomm, in support of appellant (Qualcomm) and urging reversal of the district court’s decision. The brief was authored by Geoffrey A. Manne, President & founder of ICLE, and Ben Sperry, Associate Director, Legal Research of ICLE. Jarod M. Bona and Aaron R. Gott of Bona Law PC collaborated in drafting the brief and they and their team provided invaluable pro bono legal assistance, for which we are enormously grateful. Signatories on the brief are listed at the end of this post.
We’ve written about the case several times on Truth on the Market, as have a number of guest bloggers, in our ongoing blog series on the case here.
The ICLE amicus brief focuses on the ways that the district court exceeded the “error cost” guardrails erected by the Supreme Court to minimize the risk and cost of mistaken antitrust decisions, particularly those that wrongly condemn procompetitive behavior. As the brief notes at the outset:
The district court’s decision is disconnected from the underlying economics of the case. It improperly applied antitrust doctrine to the facts, and the result subverts the economic rationale guiding monopolization jurisprudence. The decision—if it stands—will undercut the competitive values antitrust law was designed to protect.
In essence, the Court’s monopolization case law implements the error cost framework by (among other things) obliging courts to operate under certain decision rules that limit the use of inferences about the consequences of a defendant’s conduct except when the circumstances create what game theorists call a “separating equilibrium.” A separating equilibrium is a
solution to a game in which players of different types adopt different strategies and thereby allow an uninformed player to draw inferences about an informed player’s type from that player’s actions.
The key problem in antitrust is that while the consequence of complained-of conduct for competition (i.e., consumers) is often ambiguous, its deleterious effect on competitors is typically quite evident—whether it is actually anticompetitive or not. The question is whether (and when) it is appropriate to infer anticompetitive effect from discernible harm to competitors.
Except in the narrowly circumscribed (by Trinko) instance of a unilateral refusal to deal, anticompetitive harm under the rule of reason must be proven. It may not be inferred from harm to competitors, because such an inference is too likely to be mistaken—and “mistaken inferences are especially costly, because they chill the very conduct the antitrust laws are designed to protect.” (Brooke Group, quoting yet another key Supreme Court antitrust error cost case, Matsushita (1986).)
Yet, as the brief discusses, in finding Qualcomm liable the district court did not demand or find proof of harm to competition. Instead, the court’s opinion relies on impermissible inferences from ambiguous evidence to find that Qualcomm had (and violated) an antitrust duty to deal with rival chip makers and that its conduct resulted in anticompetitive foreclosure of competition.
We urge you to read the brief (it’s pretty short—maybe the length of three blog posts) to get the whole argument. Below we draw attention to a few points we make in the brief that are especially significant.
The district court bases its approach entirely on Microsoft — which it misinterprets in clear contravention of Supreme Court case law
The district court doesn’t stay within the strictures of the Supreme Court’s monopolization case law. In fact, although it obligingly recites some of the error cost language from Trinko, it quickly moves away from Supreme Court precedent and bases its approach entirely on its reading of the D.C. Circuit’s Microsoft (2001) decision.
Unfortunately, the district court’s reading of Microsoft is mistaken and impermissible under Supreme Court precedent. Indeed, both the Supreme Court and the D.C. Circuit make clear that a finding of illegal monopolization may not rest on an inference of anticompetitive harm.
The district court cites Microsoft for the proposition that
Where a government agency seeks injunctive relief, the Court need only conclude that Qualcomm’s conduct made a “significant contribution” to Qualcomm’s maintenance of monopoly power. The plaintiff is not required to “present direct proof that a defendant’s continued monopoly power is precisely attributable to its anticompetitive conduct.”
It’s true Microsoft held that, in government actions seeking injunctions, “courts [may] infer ‘causation’ from the fact that a defendant has engaged in anticompetitive conduct that ‘reasonably appears capable of making a significant contribution to maintaining monopoly power.’” (Emphasis added).
But Microsoft never suggested that anticompetitiveness itself may be inferred.
“Causation” and “anticompetitive effect” are not the same thing. Indeed, Microsoft addresses “anticompetitive conduct” and “causation” in separate sections of its decision. And whereas Microsoft allows that courts may infer “causation” in certain government actions, it makes no such allowance with respect to “anticompetitive effect.” In fact, it explicitly rules it out:
[T]he plaintiff… must demonstrate that the monopolist’s conduct indeed has the requisite anticompetitive effect…; no less in a case brought by the Government, it must demonstrate that the monopolist’s conduct harmed competition, not just a competitor.
The D.C. Circuit subsequently reinforced this clear conclusion of its holding in Microsoft in Rambus:
Deceptive conduct—like any other kind—must have an anticompetitive effect in order to form the basis of a monopolization claim…. In Microsoft… [t]he focus of our antitrust scrutiny was properly placed on the resulting harms to competition.
Finding causation entails connecting evidentiary dots, while finding anticompetitive effect requires an economic assessment. Without such analysis it’s impossible to distinguish procompetitive from anticompetitive conduct, and basing liability on such an inference effectively writes “anticompetitive” out of the law.
Thus, the district court is correct when it holds that it “need not conclude that Qualcomm’s conduct is the sole reason for its rivals’ exits or impaired status.” But it is simply wrong to hold—in the same sentence—that it can thus “conclude that Qualcomm’s practices harmed competition and consumers.” The former claim is consistent with Microsoft; the latter is emphatically not.
Under Trinko and Aspen Skiing the district court’s finding of an antitrust duty to deal is impermissible
Because finding that a company operates under a duty to deal essentially permits a court to infer anticompetitive harm without proof, such a finding “comes dangerously close to being a form of ‘no-fault’ monopolization,” as Herbert Hovenkamp has written. It is also thus seriously disfavored by the Court’s error cost jurisprudence.
In Trinko the Supreme Court interprets its holding in Aspen Skiing to identify essentially a single scenario from which it may plausibly be inferred that a monopolist’s refusal to deal with rivals harms consumers: the existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for the monopolist.
In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.”
But it’s not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. In a word, what the Court requires is that the defendant exhibit behavior that, but-for the expectation of future, anticompetitive returns, is irrational.
It should be noted, as John Lopatka (here) and Alan Meese (here) (both of whom joined the amicus brief) have written, that even the Supreme Court’s approach is likely insufficient to permit a court to distinguish between procompetitive and anticompetitive conduct.
But what is certain is that the district court’s approach in no way permits such an inference.
“Evasion of a competitive constraint” is not an antitrust-relevant refusal to deal
In order to infer anticompetitive effect, it’s not enough that a firm may have a “duty” to deal, as that term is colloquially used, based on some obligation other than an antitrust duty, because it can in no way be inferred from the evasion of that obligation that conduct is anticompetitive.
The district court bases its determination that Qualcomm’s conduct is anticompetitive on the fact that it enables the company to avoid patent exhaustion, FRAND commitments, and thus price competition in the chip market. But this conclusion is directly precluded by the Supreme Court’s holding in NYNEX.
Indeed, in Rambus, the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.”
[T]he objection to the “evasion” of any constraint approach is… that it opens the door to enforcement actions applied to business conduct that is not likely to harm competition and might be welfare increasing.
Thus NYNEX and Rambus (and linkLine) reinforce the Court’s repeated holding that an inference of harm to competition is permissible only where conduct points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” do not suffice.
The district court’s elaborate theory of harm rests fundamentally on the claim that Qualcomm injures rivals—and the record is devoid of evidence demonstrating actual harm to competition. Instead, the court infers it from what it labels “unreasonably high” royalty rates, enabled by Qualcomm’s evasion of competition from rivals. In turn, the court finds that that evasion of competition can be the source of liability if what Qualcomm evaded was an antitrust duty to deal. And, in impermissibly circular fashion, the court finds that Qualcomm indeed evaded an antitrust duty to deal—because its conduct allowed it to sustain “unreasonably high” prices.
The Court’s antitrust error cost jurisprudence—from Brooke Group to NYNEX to Trinko & linkLine—stands for the proposition that no such circular inferences are permitted.
The district court’s foreclosure analysis also improperly relies on inferences in lieu of economic evidence
Because the district court doesn’t perform a competitive effects analysis, it fails to demonstrate the requisite “substantial” foreclosure of competition required to sustain a claim of anticompetitive exclusion. Instead the court once again infers anticompetitive harm from harm to competitors.
The district court makes no effort to establish the quantity of competition foreclosed as required by the Supreme Court. Nor does the court demonstrate that the alleged foreclosure harms competition, as opposed to just rivals. Foreclosure per se is not impermissible and may be perfectly consistent with procompetitive conduct.
Again citing Microsoft, the district court asserts that a quantitative finding is not required. Yet, as the court’s citation to Microsoft should have made clear, in its stead a court must find actual anticompetitive effect; it may not simply assert it. As Microsoft held:
It is clear that in all cases the plaintiff must… prove the degree of foreclosure. This is a prudential requirement; exclusivity provisions in contracts may serve many useful purposes.
The court essentially infers substantiality from the fact that Qualcomm entered into exclusive deals with Apple (actually, volume discounts), from which the court concludes that Qualcomm foreclosed rivals’ access to a key customer. But its inference that this led to substantial foreclosure is based on internal business statements—so-called “hot docs”—characterizing the importance of Apple as a customer. Yet, as Geoffrey Manne and Marc Williamson explain, such documentary evidence is unreliable as a guide to economic significance or legal effect:
Business people will often characterize information from a business perspective, and these characterizations may seem to have economic implications. However, business actors are subject to numerous forces that influence the rhetoric they use and the conclusions they draw….
There are perfectly good reasons to expect to see “bad” documents in business settings when there is no antitrust violation lurking behind them.
Assuming such language has the requisite economic or legal significance is unsupportable—especially when, as here, the requisite standard demands a particular quantitative significance.
Moreover, the court’s “surcharge” theory of exclusionary harm rests on assumptions regarding the mechanism by which the alleged surcharge excludes rivals and harms consumers. But the court incorrectly asserts that only one mechanism operates—and it makes no effort to quantify it.
The court cites “basic economics” via Mankiw’s Principles of Microeconomics text for its conclusion:
The surcharge affects demand for rivals’ chips because as a matter of basic economics, regardless of whether a surcharge is imposed on OEMs or directly on Qualcomm’s rivals, “the price paid by buyers rises, and the price received by sellers falls.” Thus, the surcharge “places a wedge between the price that buyers pay and the price that sellers receive,” and demand for such transactions decreases. Rivals see lower sales volumes and lower margins, and consumers see less advanced features as competition decreases.
But even assuming the court is correct that Qualcomm’s conduct entails such a surcharge, basic economics does not hold that decreased demand for rivals’ chips is the only possible outcome.
In actuality, an increase in the cost of an input for OEMs can have three possible effects:
OEMs can pass all or some of the cost increase on to consumers in the form of higher phone prices. Assuming some elasticity of demand, this would mean fewer phone sales and thus less demand by OEMs for chips, as the court asserts. But the extent of that effect would depend on consumers’ demand elasticity and the magnitude of the cost increase as a percentage of the phone price. If demand is highly inelastic at this price (i.e., relatively insensitive to the relevant price change), it may have a tiny effect on the number of phones sold and thus the number of chips purchased—approaching zero as price insensitivity increases.
OEMs can absorb the cost increase and realize lower profits but continue to sell the same number of phones and purchase the same number of chips. This would not directly affect demand for chips or their prices.
OEMs can respond to a price increase by purchasing fewer chips from rivals and more chips from Qualcomm. While this would affect rivals’ chip sales, it would not necessarily affect consumer prices, the total number of phones sold, or OEMs’ margins—that result would depend on whether Qualcomm’s chips cost more or less than its rivals’. If the latter, it would even increase OEMs’ margins and/or lower consumer prices and increase output.
Alternatively, of course, the effect could be some combination of these.
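The elasticity point in the first scenario can be made concrete with a toy calculation. All numbers below are hypothetical and the constant-elasticity demand curve is an assumption for illustration, not anything in the court’s record:

```python
# Illustrative only: a toy demand model showing that the effect of a per-unit
# "surcharge" on chip demand depends on how elastic consumer demand is and on
# how much of the cost increase OEMs pass through to phone prices.

def phones_sold(price, baseline_price=500.0, baseline_qty=1000.0, elasticity=-0.3):
    """Constant-elasticity demand: quantity sold as a function of phone price."""
    return baseline_qty * (price / baseline_price) ** elasticity

baseline_qty = phones_sold(500.0)  # 1000 phones at the hypothetical $500 price
surcharge = 10.0                   # hypothetical per-device cost increase

# Three pass-through scenarios: OEMs pass on all, half, or none of the surcharge.
for pass_through in (1.0, 0.5, 0.0):
    new_price = 500.0 + pass_through * surcharge
    qty = phones_sold(new_price)
    drop_pct = 100.0 * (baseline_qty - qty) / baseline_qty
    print(f"pass-through {pass_through:.0%}: phone (and chip) sales fall {drop_pct:.2f}%")
```

With relatively inelastic demand (elasticity of -0.3) and a surcharge of 2% of the phone’s price, even full pass-through reduces unit sales by well under one percent, and absorption by the OEM reduces them not at all. The size of the effect the court assumed is, in other words, an empirical question the court never answered.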
Whether any of these outcomes would substantially exclude rivals is inherently uncertain to begin with. But demonstrating a reduction in rivals’ chip sales is a necessary but not sufficient condition for proving anticompetitive foreclosure. The FTC didn’t even demonstrate that rivals were substantially harmed, let alone that there was any effect on consumers—nor did the district court make such findings.
Doing so would entail consideration of whether decreased demand for rivals’ chips flows from reduced consumer demand or OEMs’ switching to Qualcomm for supply, how consumer demand elasticity affects rivals’ chip sales, and whether Qualcomm’s chips were actually less or more expensive than rivals’. Yet the court determined none of these.
Contrary to established Supreme Court precedent, the district court’s decision relies on mere inferences to establish anticompetitive effect. The decision, if it stands, would render a wide range of potentially procompetitive conduct presumptively illegal and thus harm consumer welfare. It should be reversed by the Ninth Circuit.
Joining ICLE on the brief are:
Donald J. Boudreaux, Professor of Economics, George Mason University
Kenneth G. Elzinga, Robert C. Taylor Professor of Economics, University of Virginia
Janice Hauge, Professor of Economics, University of North Texas
Justin (Gus) Hurwitz, Associate Professor of Law, University of Nebraska College of Law; Director of Law & Economics Programs, ICLE
Thomas A. Lambert, Wall Chair in Corporate Law and Governance, University of Missouri Law School
John E. Lopatka, A. Robert Noll Distinguished Professor of Law, Penn State University Law School
Daniel Lyons, Professor of Law, Boston College Law School
Geoffrey A. Manne, President and Founder, International Center for Law & Economics; Distinguished Fellow, Northwestern University Center on Law, Business & Economics
Alan J. Meese, Ball Professor of Law, William & Mary Law School
Paul H. Rubin, Samuel Candler Dobbs Professor of Economics Emeritus, Emory University
Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics, Chapman University School of Business; Nobel Laureate in Economics, 2002
Michael Sykuta, Associate Professor of Economics, University of Missouri
[TOTM: The following is the eighth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here. The blog post is based on a forthcoming paper regarding patent holdup, co-authored by Dirk Auer and Julian Morris.]
In his latest book, Tyler Cowen calls big business an “American anti-hero”. Cowen argues that the growing animosity towards successful technology firms is to a large extent unwarranted. After all, these companies have generated tremendous prosperity and jobs.
Though it is less known to the public than its Silicon Valley counterparts, Qualcomm perfectly fits the anti-hero mold. Despite being a key contributor to the communications standards that enabled the proliferation of smartphones around the globe – an estimated 5 billion people currently own a device – Qualcomm has been on the receiving end of considerable regulatory scrutiny on both sides of the Atlantic (including two cases in the EU; see here and here).
In the US, Judge Lucy Koh recently ruled that a combination of anticompetitive practices had enabled Qualcomm to charge “unreasonably high royalty rates” for its CDMA and LTE cellular communications technology. Chief among these practices was Qualcomm’s so-called “no license, no chips” policy, whereby the firm refuses to sell baseband processors to implementers that have not taken out a license for its communications technology. Other grievances included Qualcomm’s purported refusal to license its patents to rival chipmakers, and allegations that it attempted to extract exclusivity obligations from large handset manufacturers, such as Apple. According to Judge Koh, these practices resulted in “unreasonably high” royalty rates that failed to comply with Qualcomm’s FRAND obligations.
Judge Koh’s ruling offers an unfortunate example of the numerous pitfalls that decisionmakers face when they second-guess the distributional outcomes achieved through market forces. This is particularly true in the complex standardization space.
The elephant in the room
The first striking feature of Judge Koh’s ruling is what it omits. Throughout the document, which runs to more than two hundred pages, there is not a single reference to the concepts of holdup or holdout (crucial terms of art for a ruling that grapples with the prices charged by an SEP holder).
At first sight, this might seem like a semantic quibble. But words are important. Patent holdup (along with the “unreasonable” royalties to which it arguably gives rise) is possible only when a number of cumulative conditions are met. Most importantly, the foundational literature on economic opportunism (here and here) shows that holdup (and holdout) mostly occur when parties have made asset-specific sunk investments. This focus on asset-specific investments is echoed by even the staunchest critics of the standardization status quo (here).
Though such investments may well have been present in the case at hand, there is no evidence that they played any part in the court’s decision. This is not without consequences. If parties did not make sunk relationship-specific investments, then the antitrust case against Qualcomm should have turned upon the alleged exclusion of competitors, not the level of Qualcomm’s royalties. The DOJ said as much in its statement of interest concerning Qualcomm’s motion for partial stay of injunction pending appeal. Conversely, if these investments existed, then patent holdout (whereby implementers refuse to license key pieces of intellectual property) was just as much of a risk as patent holdup (here and here). And yet the court completely overlooked this possibility.
The misguided push for component level pricing
The court also erred by objecting to Qualcomm’s practice of basing license fees on the value of handsets, rather than that of modem chips. In simplified terms, implementers paid Qualcomm a percentage of their devices’ resale price. The court found that this was against Federal Circuit law. Instead, it argued that royalties should be based on the value of the smallest salable patent-practicing component (in this case, baseband chips). This conclusion is dubious both as a matter of law and of policy.
From a legal standpoint, the question of the appropriate royalty base seems far less clear-cut than Judge Koh’s ruling might suggest. For instance, Gregory Sidak observes that in TCL v. Ericsson Judge Selna used a device’s net selling price as a basis upon which to calculate FRAND royalties. Likewise, in CSIRO v. Cisco, the Court also declined to use the “smallest saleable practicing component” as a royalty base. And finally, as Jonathan Barnett observes, the Federal Circuit’s LaserDynamics case law cited by Judge Koh relates to the calculation of damages in patent infringement suits. There is no legal reason to believe that its findings should hold any sway outside of that narrow context. It is one thing for courts to decide upon the methodology that they will use to calculate damages in infringement cases – even if it is a contested one. It is a whole other matter to shoehorn private parties into adopting this narrow methodology in their private dealings.
More importantly, from a policy standpoint, there are important advantages to basing royalty rates on the price of an end-product, rather than that of an intermediate component. This type of pricing notably enables parties to better allocate the risk that is inherent in launching a new product. In simplified terms: implementers want to avoid paying large (fixed) license fees for failed devices; and patent holders want to share in the benefits of successful devices that rely on their inventions. The solution, as Alain Bousquet and his co-authors explain, is to agree on royalty payments that are contingent on success in the market:
Because the demand for a new product is uncertain and/or the potential cost reduction of a new technology is not perfectly known, both seller and buyer may be better off if the payment for the right to use an innovation includes a state-contingent royalty (rather than consisting of just a fixed fee). The inventor wants to benefit from a growing demand for a new product, and the licensee wishes to avoid high payments in case of disappointing sales.
While this explains why parties might opt for royalty-based payments over fixed fees, it does not entirely elucidate the practice of basing royalties on the price of an end device. One explanation is that a technology’s value will often stem from its combination with other goods or technologies. Basing royalties on the value of an end-device enables patent holders to more effectively capture the social benefits that flow from these complementarities.
Imagine the price of the smallest saleable component is identical across all industries, despite it being incorporated into highly heterogeneous devices. For instance, the same modem chip could be incorporated into smartphones (of various price ranges), tablets, vehicles, and other connected devices. The Bousquet line of reasoning (above) suggests that it is efficient for the patent holder to earn higher royalties (from the IP that underpins the modem chips) in those segments where market demand is strongest (i.e. where there are stronger complementarities between the modem chip and the end device).
One way to make royalties more contingent on market success is to use the price of the modem (which is presumably identical across all segments) as a royalty base and negotiate a separate royalty rate for each end device (charging a higher rate for devices that will presumably benefit from stronger consumer demand). But this has important drawbacks. For a start, identifying those segments (or devices) that are most likely to be successful is informationally cumbersome for the inventor. Moreover, this practice could land the patent holder in hot water. Antitrust authorities might naïvely conclude that these varying royalty rates violate the “non-discriminatory” part of FRAND.
A much simpler solution is to apply a single royalty rate (or at least attempt to do so) but use the price of the end device as a royalty base. This ensures that the patent holder’s rewards are not just contingent on the number of devices sold, but also on their value. Royalties will thus more closely track the end-device’s success in the marketplace.
In short, basing royalties on the value of an end-device is an informationally light way for the inventor to capture some of the unforeseen value that might stem from the inclusion of its technology in an end device. Mandating that royalty rates be based on the value of the smallest saleable component ignores this complex reality.
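The contrast between the two royalty bases can be sketched with a toy calculation. All of the numbers below (chip price, royalty rates, device prices, unit volumes) are hypothetical, chosen only to illustrate the argument: when the same modem chip ends up in very different end products, a chip-price base yields an identical per-unit royalty everywhere, while an end-device base lets the reward track each segment’s market value.

```python
# Toy illustration (all numbers hypothetical): with an identical modem chip
# across segments, a royalty based on the chip price yields the same per-unit
# payment everywhere, while a royalty based on the end-device price scales
# the patent holder's reward with each segment's market value.

devices = {
    "budget phone":   {"price": 150,    "units": 1_000_000},
    "flagship phone": {"price": 900,    "units": 500_000},
    "connected car":  {"price": 40_000, "units": 10_000},
}
CHIP_PRICE = 20      # same modem chip in every device
CHIP_RATE = 0.25     # 25% of the chip price
DEVICE_RATE = 0.005  # 0.5% of the end-device price

royalties = {
    name: {
        "chip_base": CHIP_RATE * CHIP_PRICE * d["units"],
        "device_base": DEVICE_RATE * d["price"] * d["units"],
    }
    for name, d in devices.items()
}

for name, r in royalties.items():
    print(f"{name}: chip-base ${r['chip_base']:,.0f}, "
          f"device-base ${r['device_base']:,.0f}")
```

Under the chip-price base, high-volume budget phones generate the largest payments regardless of how much value the connection creates; under the device-price base, high-value segments (here, the connected car) contribute proportionally more, which is the complementarity-capturing effect described above.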
Prices are almost impossible to reconstruct
Judge Koh was similarly imperceptive when assessing Qualcomm’s contribution to the value of key standards, such as LTE and CDMA.
For a start, she reasoned that Qualcomm’s royalties were large compared to the number of patents it had contributed to these technologies:
Moreover, Qualcomm’s own documents also show that Qualcomm is not the top standards contributor, which confirms Qualcomm’s own statements that QCT’s monopoly chip market share rather than the value of QTL’s patents sustain QTL’s unreasonably high royalty rates.
Given the tremendous heterogeneity that usually exists between the different technologies that make up a standard, simply counting each firm’s contributions is a crude and misleading way to gauge the value of their patent portfolios. Accordingly, Qualcomm argued that it had made pioneering contributions to technologies such as CDMA and 4G/5G. Though the value of Qualcomm’s technologies is ultimately an empirical question, the court’s crude patent counting was unlikely to provide a satisfying answer.
Just as problematically, the court also concluded that Qualcomm’s royalties were unreasonably high because “modem chips do not drive handset value.” In its own words:
Qualcomm’s intellectual property is for communication, and Qualcomm does not own intellectual property on color TFT LCD panel, mega-pixel DSC module, user storage memory, decoration, and mechanical parts. The costs of these non-communication-related components have become more expensive and now contribute 60-70% of the phone value. The phone is not just for communication, but also for computing, movie-playing, video-taking, and data storage.
As Luke Froeb and his co-authors have also observed, the court’s reasoning on this point is particularly unfortunate. Though it is clearly true that superior LCD panels, cameras, and storage increase a handset’s value – regardless of the modem chip that is associated with them – it is equally obvious that improvements to these components are far more valuable to consumers when they are also associated with high-performance communications technology.
For example, though there is undoubtedly standalone value in being able to take improved pictures on a smartphone, this value is multiplied by the ability to instantly share these pictures with friends, and automatically back them up on the cloud. Likewise, improving a smartphone’s LCD panel is more valuable if the device is also equipped with a cutting edge modem (both are necessary for consumers to enjoy high-definition media online).
In more technical terms, the court fails to acknowledge that, in the presence of perfect complements, each good makes an incremental contribution of 100% to the value of the whole. A smartphone’s components would be far less valuable to consumers if they were not associated with a high-performance modem, and vice versa. The fallacy to which the court falls prey is perfectly encapsulated by a quote it cites from Apple’s COO:
Apple invests heavily in the handset’s physical design and enclosures to add value, and those physical handset features clearly have nothing to do with Qualcomm’s cellular patents, it is unfair for Qualcomm to receive royalty revenue on that added value.
The question the court should be asking, however, is whether Apple would have gone to the same lengths to improve its devices were it not for Qualcomm’s complementary communications technology. By ignoring this question, Judge Koh all but guaranteed that her assessment of Qualcomm’s royalty rates would be wide of the mark.
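The perfect-complements point can be made concrete with a stylized sketch. The Leontief functional form and the dollar value below are hypothetical, used only to show the arithmetic: when consumer value requires both the “non-communication” features and the modem, removing either destroys the whole, so each component’s incremental contribution equals 100% of total value.

```python
# Stylized perfect-complements illustration (hypothetical values): handset
# value requires both component bundles, so the value of the whole vanishes
# if either one is absent.

def handset_value(camera_quality: float, modem_quality: float) -> float:
    # Leontief (perfect-complements) form: the weakest link sets the value.
    return 600 * min(camera_quality, modem_quality)

full = handset_value(1.0, 1.0)            # value with both components
without_modem = handset_value(1.0, 0.0)   # great camera, no connectivity
without_camera = handset_value(0.0, 1.0)  # great modem, no camera

# Each component's incremental contribution is the entire value of the whole:
print(full - without_modem)    # camera bundle's incremental contribution
print(full - without_camera)   # modem's incremental contribution
```

This is why attributing “60-70% of the phone value” to non-communication components, as the court did, is misleading: with strong complementarities, incremental contributions do not sum to 100%, and carving value into exclusive shares understates every component’s contribution.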
In short, the FTC v. Qualcomm case shows that courts will often struggle when they try to act as makeshift price regulators. It thus lends further credence to Gregory Werden and Luke Froeb’s conclusion that:
Nothing is more alien to antitrust than enquiring into the reasonableness of prices.
This is especially true in complex industries, such as the standardization space. The colossal number of parameters that affect the price for a technology are almost impossible to reproduce in a top-down fashion, as the court attempted to do in the Qualcomm case. As a result, courts will routinely draw poor inferences from factors such as the royalty base agreed upon by parties, the number of patents contributed by a firm, and the complex manner in which an individual technology may contribute to the value of an end-product. Antitrust authorities and courts would thus do well to recall the wise words of Friedrich Hayek:
If we can agree that the economic problem of society is mainly one of rapid adaptation to changes in the particular circumstances of time and place, it would seem to follow that the ultimate decisions must be left to the people who are familiar with these circumstances, who know directly of the relevant changes and of the resources immediately available to meet them. We cannot expect that this problem will be solved by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders. We must solve it by some form of decentralization.
[TOTM: The following is the seventh in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here.]
This post is authored by Gerard Llobet, Professor of Economics at CEMFI, and Jorge Padilla, Senior Managing Director at Compass Lexecon. Both have advised SEP holders, and to a lesser extent licensees, in royalty negotiations and antitrust disputes.
Over the last few years competition authorities in the US and elsewhere have repeatedly warned about the risk of patent hold-up in the licensing of Standard Essential Patents (SEPs). Concerns about such risks were front and center in the recent FTC case against Qualcomm, where the Court ultimately concluded that Qualcomm had used a series of anticompetitive practices to extract unreasonable royalties from implementers. This post evaluates the evidence for such a risk, as well as the countervailing risk of patent hold-out.
In general, hold-up may arise when firms negotiate trading terms after they have made costly, relation-specific investments. Since the costs of these investments are sunk when trading terms are negotiated, they are not factored into the agreed terms. As a result, depending on the relative bargaining power of the firms, the investments made by the weaker party may be undercompensated (Williamson, 1979).
In the context of SEPs, patent hold-up would arise if SEP owners were able to take advantage of the essentiality of their patents to charge excessive royalties to manufacturers of products reading on those patents, once those manufacturers had made irreversible investments in the standard (see Lemley and Shapiro (2007)). Similarly, in the recent FTC v. Qualcomm ruling, trial judge Lucy Koh concluded that firms may also use commercial strategies (in this case, Qualcomm’s “no license, no chips” policy, refusing to deal with certain parties and demanding exclusivity from others) to extract royalties that depart from the FRAND benchmark.
After years of heated debate, however, there is no consensus about whether patent hold-up actually exists. Some argue that there is no evidence of hold-up in practice. If patent hold-up were a significant problem, manufacturers would anticipate that their investments would be expropriated and would thus decide not to invest in the first place. But end-product manufacturers have invested considerable amounts in standardized technologies (Galetovic et al, 2015). Others claim that while investment is indeed observed, actual investment levels are “necessarily” below those that would be observed in the absence of hold-up. They allege that, since that counterfactual scenario is not observable, it is not surprising that more than fifteen years after the patent hold-up hypothesis was first proposed, empirical evidence of its existence is lacking.
Meanwhile, innovators are concerned about a risk in the opposite direction, the risk of patent hold-out. As Epstein and Noroozi (2018) explain,
By “patent holdout” we mean the converse problem, i.e., that an implementer refuses to negotiate in good faith with an innovator for a license to valid patent(s) that the implementer infringes, and instead forces the innovator to either undertake significant litigation costs and time delays to extract a licensing payment through court order, or else to simply drop the matter because the licensing game is no longer worth the candle.
Patent hold-out, also known as “efficient infringement,” is especially relevant in the standardization context for two reasons. First, SEP owners are oftentimes required to license their patents under Fair, Reasonable and Non-Discriminatory (FRAND) conditions. Particularly when, as occurs in some jurisdictions, innovators are not allowed to request an injunction, they have little or no leverage in trying to require licensees to accept a licensing deal. Second, SEP owners typically possess many complementary patents and, therefore, seek to license their portfolio of SEPs at once, since that minimizes transaction costs. Yet some manufacturers de facto refuse to negotiate in this way and choose instead to challenge the validity of the SEP portfolio patent-by-patent and/or jurisdiction-by-jurisdiction. This strategy involves large litigation costs and is therefore inefficient. SEP holders claim that this practice is anticompetitive and that it also leads to royalties that are too low.
While the concerns of SEP holders seem to have attracted the attention of the leadership of the US DOJ (see, for example, here), some authors have dismissed them as theoretically groundless, empirically immaterial and irrelevant from an antitrust perspective (see here).
Evidence of patent hold-out from litigation
In ongoing work (Llobet and Padilla, forthcoming), we analyze the effects of the sequential litigation strategy adopted by some manufacturers and compare its consequences with the simultaneous litigation of the whole portfolio. We show that sequential litigation results in lower royalty payments than simultaneous litigation and may result in under-compensation of innovation and the dissipation of social surplus when litigation costs are high.
The model relies on two basic and realistic assumptions. First, in sequential lawsuits, the result of a trial affects the probability that each party wins the following one. That is, if the manufacturer wins the first trial, it has a higher probability of winning the second, as a first victory may uncover information about the validity of other patents that relate to the same type of innovation, which will be less likely to be upheld in court. Second, the impact of a validity challenge on royalty payments is asymmetric: they are reduced to zero if the patent is found to be invalid but are not increased if it is found valid (and infringed).
Our results indicate that these features of the legal system can be strategically used by the manufacturer. The intuition is as follows. Suppose that the innovator sets a royalty rate for each patent for which, in the simultaneous trial case, the manufacturer would be indifferent between settling and litigating. Under sequential litigation, however, the manufacturer might be willing to challenge a patent because of the gain in a future trial. This is due to the asymmetric effects that winning or losing the second trial has on the royalty rate that this firm will have to pay. In particular, if the manufacturer wins the first trial, so that the first patent is invalidated, its probability of winning the second one increases, which means that the innovator is likely to settle for a lower royalty rate for the second patent or see both patents invalidated in court. In the opposite case, if the innovator wins the first trial, so that the second is also likely to be unfavorable to the manufacturer, the latter always has the option to pay up the original royalty rate and avoid the second trial. In other words, the possibility for the manufacturer to negotiate the royalty rate downwards after a victory, without the risk of it being increased in case of a defeat, fosters sequential litigation and results in lower royalties than the simultaneous litigation of all patents would produce.
This mechanism, while applicable to any portfolio that includes patents whose validity is related, becomes more significant in the context of SEPs for two reasons. The first is the difficulty innovators face in adjusting their royalties upwards after a first successful trial, as doing so might be considered a breach of their FRAND commitments. The second is that, following recent competition law litigation in the EU and other jurisdictions, SEP owners are restricted in their ability to seek (preliminary) injunctions even in the case of willful infringement. Our analysis demonstrates that the threat of injunction mitigates, though it is unlikely to eliminate completely, the incentive to litigate sequentially and, therefore, excessively (i.e. even when such litigation reduces social welfare).
We also find a second motivation for excessive litigation: business stealing. Manufacturers litigate excessively in order to avoid payment and thus achieve a valuable cost advantage over their competitors. They prefer to litigate even when litigation costs are so large that society would be better off without litigation, because litigation reduces their royalty burden both in absolute terms and relative to that of their rivals (while the burden does not go up if the patents are found valid). This business-stealing incentive will result in the under-compensation of innovators, as above, but importantly it may also result in the anticompetitive foreclosure of more efficient competitors.
Consider, for example, a scenario in which a large firm with the ability to fund protracted litigation efforts competes in a downstream market with a competitive fringe, comprising small firms for which litigation is not an option. In this scenario, the large manufacturer may choose to litigate to force the innovator to settle on a low royalty. The large manufacturer exploits the asymmetry with its defenseless small rivals to reduce its IP costs. In some jurisdictions it may also exploit yet another asymmetry in the legal system to achieve an even larger cost advantage. If both the large manufacturer and the innovator choose to litigate and the former wins, the patent is invalidated, and the large manufacturer avoids paying royalties altogether. Whether this confers a comparative advantage on the large manufacturer depends on whether the invalidation results in the immediate termination of all other existing licenses or not.
Our work thus shows that patent hold-out concerns are both theoretically cogent and have non-trivial antitrust implications. Whether such concerns merit intervention is an empirical matter. While reviewing that evidence is outside the scope of our work, our own litigation experience suggests that patent hold-out should be taken seriously.
[TOTM: The following is the sixth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here.
This post is authored by Jonathan M. Barnett, Torrey H. Webb Professor of Law at the University of Southern California Gould School of Law.]
There is little doubt that the decision in May 2019 by the Northern District of California in FTC v. Qualcomm is of historical importance. Unless reversed or modified on appeal, the decision would require that the lead innovator behind 3G and 4G smartphone technology renegotiate hundreds of existing licenses with device producers and offer new licenses to any interested chipmakers.
The court’s sweeping order caps off a global campaign by implementers to re-engineer the property-rights infrastructure of the wireless markets. Those efforts have deployed the instruments of antitrust and patent law to override existing licensing arrangements and thereby reduce the input costs borne by device producers in the downstream market. This has occurred both directly, through arguments made by those firms in antitrust and patent litigation or through the filing of amicus briefs, and indirectly, by advocating that regulators bring antitrust actions against IP licensors.
Whether FTC v. Qualcomm was correctly decided largely depends on whether downstream firms’ interest in minimizing the costs of obtaining technology inputs from upstream R&D specialists aligns with the public interest in preserving dynamically efficient innovation markets. As I discuss below, there are three reasons to believe those interests are not aligned in this case. If so, the court’s order would simply engineer a wealth transfer from firms that have led innovation in wireless markets to producers that have borne few of the costs and risks involved in doing so. Members of the former group each exhibit R&D intensities (R&D expenditures as a percentage of sales) in the high teens to low twenties; members of the latter, approximately five percent. Of greater concern, the court’s upending of long-established licensing arrangements endangers business models that monetize R&D by licensing technology to a large pool of device producers (see Qualcomm), rather than earning returns through self-contained hardware and software ecosystems (see Apple). There is no apparent antitrust rationale for picking and choosing among these business models in innovation markets.
Reason #1: FRAND is a Two-Sided Deal
To fully appreciate the recent litigations involving the FTC and Apple on the one hand, and Qualcomm on the other hand, it is necessary to return to the origins of modern wireless markets.
Starting in the late 1980s, various firms were engaged in the launch of the GSM wireless network in Western Europe. At that time, each European telecom market typically consisted of a national monopoly carrier and a favored group of local equipment suppliers. The GSM project, which envisioned a trans-national wireless communications market, challenged this model. In particular, the national carrier and equipment monopolies were threatened by the fact that the GSM standard relied in part on patented technology held by an outside innovator—namely, Motorola. As I describe in a forthcoming publication, the “FRAND” (fair, reasonable and nondiscriminatory) principles that today govern the licensing of standard-essential patents in wireless markets emerged from a negotiation between, on the one hand, carriers and producers who sought a royalty cap and, on the other hand, a technology innovator that sought to preserve its licensing freedom going forward.
This negotiation history is important. Any informed discussion of the meaning of FRAND must recognize that this principle was adopted as something akin to a “good faith” contractual term designed to promote two objectives:
Protect downstream adopters from holdup tactics by upstream innovators; and
enable upstream innovators to enjoy an appreciable portion of the value generated by sales in the consumer market.
Any interpretation of FRAND that does not meet these conditions will induce upstream firms to reduce R&D investment, limit participation in standard-setting activities, or vertically integrate forward to capture directly a return on R&D dollars.
Reason #2: No Evidence of Actual Harm
In the December 2018 appellate court proceedings in which the Department of Justice unsuccessfully challenged the AT&T/Time-Warner merger, Judge David Sentelle of the D.C. Circuit said to the government’s legal counsel:
If you’re going to rely on an economic model, you have to rely on it with quantification. The bare theorem . . . doesn’t prove anything in a particular case.
The government could not credibly reply to that query in the AT&T case and, if appropriately challenged, could not do so in this case.
Far from being a market that calls out for federal antitrust intervention, the smartphone market offers what appears to be an almost textbook case of dynamic efficiency. For over a decade, implementers, along with sympathetic regulators and commentators, have argued that the market suffers (or, in a variation, will imminently suffer) from inflated prices, reduced output and delayed innovation as a result of “patent hold-up” and “royalty stacking” by opportunistic patent owners. In the several decades that have passed since the launch of the GSM network, none of these predictions has materialized. To the contrary: the market has exhibited expanding output, declining prices (adjusted for increased functionality), constant innovation, and regular entry into the production market. Multiple empirical studies (e.g. this, this and this) have found that device producers bear on average an aggregate royalty burden in the low-to-mid single digits.
This hardly seems like a market in which producers and consumers are being “victimized” by what the Northern District of California calls “unreasonably high” licensing fees (compared to an unspecified, and inherently unspecifiable, dynamically efficient benchmark). Rather, it seems more likely that device producers—many of whom provided the testimony which the court referenced in concluding that royalty rates were “unreasonably high”—would simply prefer to pay an even lower fee to R&D input suppliers (with no assurance that any of the cost-savings would flow to consumers).
Reason #3: The “License as Tax” Fallacy
The rhetorical centerpiece of the FTC’s brief relied on an analogy between the patent license fees earned by Qualcomm in the downstream device market and the tax that everyone pays to the IRS. The court’s opinion wholeheartedly adopted this narrative, determining that Qualcomm imposes a tax (or, as Judge Koh terms it, a “surcharge”) on the smartphone market by demanding a fee from OEMs for use of its patent portfolio whether or not the OEM purchases chipsets from Qualcomm or another firm. The tax analogy is fundamentally incomplete, both in general and in this case in particular.
It is true that much of the economic literature applies monopoly taxation models to assess the deadweight losses attributed to patents. While this analogy facilitates analytical tractability, a “zero-sum” approach to patent licensing overlooks the value-creating “multiplier” effect that licensing generates in real-world markets. Specifically, broad-based downstream licensing by upstream patent owners—something to which SEP owners commit under FRAND principles—ensures that device makers can obtain the necessary technology inputs and, in doing so, facilitates entry by producers that do not have robust R&D capacities. All of that ultimately generates gains for consumers.
This “positive-sum” multiplier effect appears to be at work in the smartphone market. Far from acting as a tax, Qualcomm’s licensing policies appear to have promoted entry into the smartphone market, which has experienced fairly robust turnover in market leadership. While Apple and Samsung may currently dominate the U.S. market, they face intense competition globally from Chinese firms such as Huawei, Xiaomi and Oppo. That competitive threat is real. As of 2007, Nokia and Blackberry were the overwhelming market leaders and appeared to be indomitable. Yet neither can be found in the market today. That intense “gale of competition”, sustained by the fact that any downstream producer can access the required technology inputs upon payment of licensing fees to upstream innovators, challenges the view that Qualcomm’s licensing practices have somehow restrained market growth.
Concluding Thoughts: Antitrust Flashback
When competitive harms are so unclear (and competitive gains so evident), modern antitrust law sensibly prescribes forbearance. A famous “bad case” from antitrust history shows why.
In 1953, the Department of Justice won an antitrust suit against United Shoe Machinery Corporation, which had led innovation in shoe manufacturing equipment and subsequently dominated that market. United Shoe’s purportedly anti-competitive practices included a lease-only policy that incorporated training and repair services at no incremental charge. The court found this to be a coercive tie that preserved United Shoe’s dominant position, despite the absence of any evidence of competitive harm. Scholars have subsequently shown (e.g. this and this; see also this) that the court did not adequately consider (at least) two efficiency explanations: (1) lease-only policies were widespread in the market because this facilitated access by smaller capital-constrained manufacturers, and (2) tying support services to equipment enabled United Shoe to avoid free-riding on its training services by other equipment suppliers. In retrospect, courts relied on a mere possibility theorem ultimately to order the break-up of a technological pioneer, with potentially adverse consequences for manufacturers that relied on its R&D efforts.
The court’s decision in FTC v. Qualcomm is a flashback to cases like United Shoe in which courts found liability and imposed dramatic remedies with little economic inquiry into competitive harm. It has become fashionable to assert that current antitrust law is too cautious in finding liability. Yet there is a sound reason why, outside price-fixing, courts generally insist that theories of antitrust liability include compelling evidence of competitive harm. Antitrust remedies are strong medicine and should be administered with caution. If courts and regulators do not zealously scrutinize the factual support for antitrust claims, then they are vulnerable to capture by private entities whose business objectives may depart from the public interest in competitive markets. While no antitrust fact-pattern is free from doubt, over two decades of market performance strongly favor the view that long-standing licensing arrangements in the smartphone market have resulted in substantial net welfare gains for consumers. If so, the prudent course of action is simply to leave the market alone.
In an amicus brief filed last Friday, a diverse group of antitrust scholars joined the Washington Legal Foundation in urging the U.S. Court of Appeals for the Second Circuit to vacate the Federal Trade Commission’s misguided 1-800 Contacts decision. Reasoning that 1-800’s settlements of trademark disputes were “inherently suspect,” the FTC condemned the settlements under a cursory “quick look” analysis. In so doing, it improperly expanded the category of inherently suspect behavior and ignored an obvious procompetitive justification for the challenged settlements. If allowed to stand, the Commission’s decision will impair intellectual property protections that foster innovation.
A number of 1-800’s rivals purchased online ad placements that would appear when customers searched for “1-800 Contacts.” 1-800 sued those rivals for trademark infringement, and the lawsuits settled. As part of each settlement, 1-800 and its rival agreed not to bid on each other’s trademarked terms in search-based keyword advertising. (For example, EZ Contacts could not bid on a placement tied to a search for 1-800 Contacts, and vice-versa). Each party also agreed to employ “negative keywords” to ensure that its ads would not appear in response to a consumer’s online search for the other party’s trademarks. (For example, in bidding on keywords, 1-800 would have to specify that its ad must not appear in response to a search for EZ Contacts, and vice-versa). Notably, the settlement agreements didn’t restrict the parties’ advertisements through other media such as TV, radio, print, or other forms of online advertising. Nor did they restrict paid search advertising in response to any search terms other than the parties’ trademarks.
The FTC concluded that these settlement agreements violated the antitrust laws as unreasonable restraints of trade. Although the agreements were not unreasonable per se, as naked price-fixing is, the Commission didn’t engage in the normally applicable rule of reason analysis to determine whether the settlements passed muster. Instead, the Commission condemned the settlements under the truncated analysis that applies when, in the words of the Supreme Court, “an observer with even a rudimentary understanding of economics could conclude that the arrangements in question would have an anticompetitive effect on customers and markets.” The Commission decided that no more than a quick look was required because the settlements “restrict the ability of lower cost online sellers to show their ads to consumers.”
That was a mistake. First, the restraints in 1-800’s settlements are far less extensive than other restraints that the Supreme Court has said may not be condemned under a cursory quick look analysis. In California Dental, for example, the Supreme Court reversed a Ninth Circuit decision that employed the quick look analysis to condemn a de facto ban on all price and “comfort” advertising by members of a dental association. In light of the possibility that the ban could reduce misleading ads, enhance customer trust, and thereby stimulate demand, the Court held that the restraint must be assessed under the more probing rule of reason. A narrow limit on the placement of search ads is far less restrictive than the all-out ban for which the California Dental Court prescribed full-on rule of reason review.
1-800’s settlements are also less likely to be anticompetitive than are other settlements that the Supreme Court has said must be evaluated under the rule of reason. The Court’s Actavis decision rejected quick look and mandated full rule of reason analysis for reverse payment settlements of pharmaceutical patent litigation. In a reverse payment settlement, the patent holder pays an alleged infringer to stay out of the market for some length of time. 1-800’s settlements, by contrast, did not exclude its rivals from the market, place any restrictions on the content of their advertising, or restrict the placement of their ads except on webpages responding to searches for 1-800’s own trademarks. If the restraints in California Dental and Actavis required rule of reason analysis, then those in 1-800’s settlements surely must as well.
In addition to disregarding Supreme Court precedents that limit when mere quick look is appropriate, the FTC gave short shrift to a key procompetitive benefit of the restrictions in 1-800’s settlements. 1-800 spent millions of dollars convincing people that they could save money by ordering prescribed contact lenses from a third party rather than buying them from prescribing optometrists. It essentially built the online contact lens market in which its rivals now compete. In the process, it created a strong trademark, which undoubtedly boosts its own sales. (Trademarks point buyers to a particular seller and enhance consumer confidence in the seller’s offering, since consumers know that branded sellers will not want to tarnish their brands with shoddy products or service.)
When a rival buys ad space tied to a search for 1-800 Contacts, that rival is taking a free ride on 1-800’s investments in its own brand and in the online contact lens market itself. A rival that has advertised less extensively than 1-800—primarily because 1-800 has taken the lead in convincing consumers to buy their contact lenses online—will incur lower marketing costs than 1-800 and may therefore be able to underprice it. 1-800 may thus find that it loses sales to rivals who are not more efficient than it is but have lower costs because they have relied on 1-800’s own efforts.
If market pioneers like 1-800 cannot stop this sort of free-riding, they will have less incentive to make the investments that create new markets and develop strong trade names. The restrictions in the 1-800 settlements were simply an effort to prevent inefficient free-riding while otherwise preserving the parties’ freedom to advertise. They were a narrowly tailored solution to a problem that hurt 1-800 and reduced incentives for future investments in market-developing activities that inure to the benefit of consumers.
Rule of reason analysis would have allowed the FTC to assess the full market effects of 1-800’s settlements. The Commission’s truncated assessment, which was inconsistent with Supreme Court decisions on when a quick look will suffice, condemned conduct that was likely procompetitive. The Second Circuit should vacate the FTC’s order.
Last week the Senate Judiciary Committee held a hearing, Intellectual Property and the Price of Prescription Drugs: Balancing Innovation and Competition, that explored whether changes to the pharmaceutical patent process could help lower drug prices. The committee’s goal was to evaluate various legislative proposals that might facilitate the entry of cheaper generic drugs, while also recognizing that strong patent rights for branded drugs are essential to incentivize drug innovation. As Committee Chairman Lindsey Graham explained:
One thing you don’t want to do is kill the goose who laid the golden egg, which is pharmaceutical development. But you also don’t want to have a system that extends unnecessarily beyond the ability to get your money back and make a profit, a patent system that drives up costs for the average consumer.
Several proposals that were discussed at the hearing have the potential to encourage competition in the pharmaceutical industry and help rein in drug prices. Below, I discuss these proposals, plus a few additional reforms. I also point out some of the language in the current draft proposals that goes a bit too far and threatens the ability of drug makers to remain innovative.
1. Prevent brand drug makers from blocking generic companies’ access to drug samples. Some brand drug makers have attempted to delay generic entry by restricting generics’ access to the drug samples necessary to conduct FDA-required bioequivalence studies. Some brand drug manufacturers have limited the ability of pharmacies or wholesalers to sell samples to generic companies or abused the REMS (Risk Evaluation Mitigation Strategy) program to refuse samples to generics under the auspices of REMS safety requirements. The Creating and Restoring Equal Access To Equivalent Samples (CREATES) Act of 2019 would allow potential generic competitors to bring an action in federal court for both injunctive relief and damages when brand companies block access to drug samples. It also gives the FDA discretion to approve alternative REMS safety protocols for generic competitors that have been denied samples under the brand companies’ REMS protocol. Although the vast majority of brand drug companies do not engage in the delay tactics addressed by CREATES, the Act would prevent the handful that do from thwarting generic competition. Increased generic competition should, in turn, reduce drug prices.
2. Restrict abuses of FDA Citizen Petitions. The citizen petition process was created as a way for individuals and community groups to flag legitimate concerns about drugs awaiting FDA approval. However, critics claim that the process has been misused by some brand drug makers who file petitions about specific generic drugs in the hopes of delaying their approval and market entry. Although FDA has indicated that citizen petitions rarely delay the approval of generic drugs, there have been a few drug makers, such as Shire ViroPharma, that have clearly abused the process and put unnecessary strain on FDA resources. The Stop The Overuse of Petitions and Get Affordable Medicines to Enter Soon (STOP GAMES) Act is intended to prevent such abuses. The Act reinforces the FDA’s and FTC’s ability to crack down on petitions meant to lengthen the approval process of a generic competitor, which should deter abuses of the system that can occasionally delay generic entry. However, lawmakers should make sure that adopted legislation doesn’t limit the ability of stakeholders (including drug makers that often know more about the safety of drugs than ordinary citizens) to raise serious concerns with the FDA.
3. Curtail Anticompetitive Pay-for-Delay Settlements. The Hatch-Waxman Act incentivizes generic companies to challenge brand drug patents by granting the first successful generic challenger a period of marketing exclusivity. As with most litigation, many of these patent challenges end in settlement rather than trial. The FTC and some courts have concluded that these settlements can be anticompetitive when the brand company agrees to pay the generic challenger in exchange for the generic company’s agreement to forestall the launch of its lower-priced drug. Settlements that result in a cash payment are a red flag for anticompetitive behavior, so pay-for-delay settlements have evolved to involve other forms of consideration instead. As a result, the Preserve Access to Affordable Generics and Biosimilars Act aims to make an exchange of anything of value presumptively anticompetitive if the terms include a delay in research, development, manufacturing, or marketing of a generic drug. Deterring obvious pay-for-delay settlements will prevent delays to generic entry, making cheaper drugs available as quickly as possible to patients.
However, the Act’s rigid presumption that any exchange of value is anticompetitive may also prevent legitimate settlements that ultimately benefit consumers. Brand drug makers should be allowed to compensate generic challengers to eliminate litigation risk and escape litigation expenses, and many settlements result in the generic drug coming to market before the expiration of the brand patent, and possibly earlier than if there had been prolonged litigation between the generic and brand company. A rigid presumption of anticompetitive behavior will deter these settlements, thereby increasing expenses for all parties that choose to litigate and possibly dissuading generics from bringing patent challenges in the first place. Indeed, the U.S. Supreme Court has declined to define these settlements as per se anticompetitive, and the FTC’s most recent agreement involving such settlements exempts several forms of exchanges of value. Any adopted legislation should follow the FTC’s lead and recognize that some exchanges of value are pro-consumer and pro-competitive.
4. Restore the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. I have previously discussed how an unbalanced inter partes review (IPR) process for challenging patents threatens to stifle drug innovation. Moreover, current law allows generic challengers to file duplicative claims in both federal court and through the IPR process. And because IPR proceedings do not have a standing requirement, the process has been exploited by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet. The added expense to drug makers of defending both duplicative claims and claims against challengers that are exploiting the system increases litigation costs, which may be passed on to consumers in the form of higher prices.
The Hatch-Waxman Integrity Act (HWIA) is designed to return the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. It requires generic challengers to choose between either Hatch-Waxman litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) or an IPR proceeding (which is faster and provides certain pro-challenger provisions). The HWIA would also eliminate the ability of hedge funds and similar entities to file IPR claims while shorting the stock. By reducing duplicative litigation and the exploitation of the IPR process, the HWIA will reduce costs and strengthen innovation incentives for drug makers. This will ensure that patent owners achieve clarity on the validity of their patents, which will spur new drug innovation and make sure that consumers continue to have access to life-improving drugs.
5. Curb illegal product hopping and patent thickets. Two drug maker tactics currently garnering a lot of attention are so-called “product hopping” and “patent thickets.” At its worst, product hopping involves a brand drug maker making minor changes to a drug nearing the end of its patent term so that it gets a new patent on the slightly tweaked drug, and then withdrawing the original drug from the market so that patients shift to the newly patented drug and pharmacists can’t substitute a generic version of the original drug. Similarly, at their worst, patent thickets involve brand drug makers obtaining a web of patents on a single drug to extend the life of their exclusivity and make it too costly for other drug makers to challenge all of the patents associated with a drug. The proposed Affordable Prescriptions for Patients Act of 2019 is meant to stop these abuses of the patent system, which would facilitate generic entry and help to lower drug prices.
However, the Act goes too far by also capturing many legitimate activities in its definitions. For example, the bill defines as anticompetitive product hopping the sale of any improved version of a drug during a window that extends to a year after the launch of the first generic competitor. Yet to acquire a patent and FDA approval, the improved version must already be sufficiently different from, and innovative relative to, the original drug; the Act would nonetheless prevent the drug maker from selling such a product without satisfying a demanding three-pronged test before the FTC or a district court. Similarly, the Act defines as an anticompetitive patent thicket any new patents filed on a drug in the same general family as the original patent, and this presumption can be rebutted only by providing extensive evidence and satisfying demanding standards before the FTC or a district court. As a result, the Act deters innovation activity that is at all related to an initial patent and, in doing so, ignores the fact that most important drug innovation is incremental innovation based on previous inventions. The proposal should therefore be redrafted to capture truly anticompetitive product hopping and patent thicket activity while exempting behavior that is critical for drug innovation.
Reforms that close loopholes in the current patent process should facilitate competition in the pharmaceutical industry and help to lower drug prices. However, lawmakers need to be sure that they don’t restrict patent rights to the extent that they deter innovation; a significant body of research predicts that patients’ health outcomes would suffer as a result.
[TOTM: The following is the fifth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.
This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]
[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]
In a recent article, Joe Kattan and Tim Muris (K&M) criticize our article on the predictive power of bargaining models in antitrust, in which we used two recent applications to explore implications for uses of bargaining models in courts and antitrust agencies moving forward. Like other theoretical models used to predict competitive effects, complex bargaining models require courts and agencies rigorously to test their predictions against data from the real-world markets and institutions to which they are being applied. Where the “real-world evidence,” as Judge Leon described such data in AT&T/Time Warner, is inconsistent with the predictions of a complex bargaining model, the tribunal should reject the model rather than reality.
K&M, who represent Intel Corporation in connection with the FTC v. Qualcomm case now pending in the Northern District of California, focus exclusively upon, and take particular issue with, one aspect of our prior article: We argued that, as in AT&T/Time Warner, the market realities at issue in FTC v. Qualcomm are inconsistent with the use of Dr. Carl Shapiro’s bargaining model to predict competitive effects in the relevant market. K&M—no doubt confident in their superior knowledge of the underlying facts due to their representation in the matter—criticize our analysis for our purported failure to get our hands sufficiently dirty with the facts. They criticize our broader analysis of bargaining models and their application for our failure to discuss specific pieces of evidence presented at trial, and offer up several quotations from Qualcomm’s customers as support for Shapiro’s economic analysis. K&M concede that, as we argue, the antitrust laws should not condemn a business practice in the absence of robust economic evidence of actual or likely harm to competition; yet, they do not see any conflict between that concession and their position that the FTC need not, through its expert, quantify the royalty surcharge imposed by Qualcomm because the “exact size of the overcharge was not relevant to the issue of Qualcomm’s liability.” [Kattan and Muris miss the point that within the context of economic modeling, the failure to identify the magnitude of an effect with any certainty when data are available, including whether the effect is statistically different than zero, calls into question the model’s robustness more generally.]
Though our prior article was a broad one, not limited to FTC v. Qualcomm or intended to cover record evidence in detail, we welcome K&M’s critique and are happy to accept their invitation to engage further on the facts of that particular case. We agree that accounting for market realities is very important when complex economic models are at play. Unfortunately, K&M’s position that the evidence “supports Shapiro’s testimony overwhelmingly” ignores the sound empirical evidence employed by Dr. Aviv Nevo during trial and has not aged well in light of the internal Apple documents made public in Qualcomm’s Opening Statement following the companies’ decision to settle the case, which Apple had initiated in January 2017.
Qualcomm’s Opening Statement in the Apple litigation revealed a number of new facts that are problematic, to say the least, for K&M’s position and, even more troublesome for Shapiro’s model and the FTC’s case. Of course, as counsel to an interested party in the FTC case, it is entirely possible that K&M were aware of the internal Apple documents cited in Qualcomm’s Opening Statement (or similar documents) and simply disagree about their significance. On the other hand, it is quite clear the Department of Justice Antitrust Division found them to be significantly damaging; it took the rare step of filing a Statement of Interest of the United States with the district court citing the documents and imploring the court to call for additional briefing and hold a hearing on issues related to a remedy in the event that it finds Qualcomm liable on any of the FTC’s claims. The internal Apple documents cited in Qualcomm’s Opening Statement leave no doubt as to several critical market realities that call into question the FTC’s theory of harm and Shapiro’s attempts to substantiate it.
(For more on the implications of these documents, see Geoffrey Manne’s post in this series, here).
First, the documents laying out Apple’s litigation strategy clearly establish that it has a high regard for Qualcomm’s technology and patent portfolio and that Apple strategized for several years about how to reduce its net royalties and to hurt Qualcomm financially.
Second, the documents undermine Apple’s public complaints about Qualcomm and call into question the validity of the underlying theory of harm in the FTC’s case. In particular, the documents plainly debunk Apple’s claims that Qualcomm’s patents weakened over time as a result of a decline in the quality of the technology and that Qualcomm devised an anticompetitive strategy in order to extract value from a weakening portfolio. The documents illustrate that in fact, Apple adopted a deliberate strategy of trying to manipulate the value of Qualcomm’s portfolio. The company planned to “creat[e] evidence” by leveraging its purchasing power to methodically license less expensive patents in hope of making Qualcomm’s royalties appear artificially inflated. In other words, if Apple’s made-for-litigation position were correct, then it would be only because of Apple’s attempt to manipulate and devalue Qualcomm’s patent portfolio, not because there had been any real change in its value.
Third, the documents directly refute some of the arguments K&M put forth in their critique of our prior article, in which we invoked Dr. Nevo’s empirical analysis of royalty rates over time as important evidence of historical facts that contradict Dr. Shapiro’s model. For example, K&M attempt to discredit Nevo’s analysis by claiming he did not control for changes in the strength of Qualcomm’s patent portfolio which, they claim, had weakened over time. According to internal Apple documents, however, “Qualcomm holds a stronger position in . . . , and particularly with respect to cellular and Wi-Fi SEPs” than do Huawei, Nokia, Ericsson, IDCC, and Apple. Another document states that “Qualcomm is widely considered the owner of the strongest patent portfolio for essential and relevant patents for wireless standards.” Indeed, Apple’s documents show that Apple sought artificially to “devalue SEPs” in the industry by “build[ing] favorable, arms-length ‘comp’ licenses” in an attempt to reduce what FRAND means. The ultimate goal of this pursuit was stated frankly by Apple: To “reduce Apple’s net royalty to Qualcomm” despite conceding that Qualcomm’s chips “engineering wise . . . have been the best.”
As new facts relevant to the FTC’s case and contrary to its theory of harm come to light, it is important to re-emphasize the fundamental point of our prior article: Model predictions that are inconsistent with actual market evidence should give fact finders serious pause before accepting the results as reliable. This advice is particularly salient in a case like FTC v. Qualcomm, where intellectual property and innovation are critical components of the industry and its competitiveness, because condemning behavior that is not truly anticompetitive may have serious, unintended consequences. (See Douglas H. Ginsburg & Joshua D. Wright, Dynamic Analysis and the Limits of Antitrust Institutions, 78 Antitrust L.J. 1 (2012); Geoffrey A. Manne & Joshua D. Wright, Innovation and the Limits of Antitrust, 6 J. Competition L. & Econ. 153 (2010)).
The serious consequences of a false positive, that is, the erroneous condemnation of a procompetitive or competitively neutral business practice, are undoubtedly what caused the Antitrust Division to file its Statement of Interest in the FTC’s case against Qualcomm. That Statement correctly highlights the Apple documents as support for the Government’s concern that “an overly broad remedy in this case could reduce competition and innovation in markets for 5G technology and downstream applications that rely on that technology.”
In this reply, we examine closely the market realities that conflict with and hence undermine both Dr. Shapiro’s bargaining model and the FTC’s theory of harm in its case against Qualcomm. We believe the “large body of evidence” offered by K&M supporting Shapiro’s theoretical analysis is insufficient to sustain his conclusions under standard antitrust analysis, including the requirement that a plaintiff alleging monopolization or attempted monopolization provide evidence of actual or likely anticompetitive effects. We will also discuss the implications of the newly public internal Apple documents for the FTC’s case, which remains pending at the time of this writing, and for future government investigations involving allegedly anticompetitive licensing of intellectual property.
I. Kattan and Muris Rely Upon Inconsequential Testimony and Mischaracterize Dr. Nevo’s Empirical Analysis
K&M march through a series of statements from Qualcomm’s customers asserting that the threat of Qualcomm discontinuing the supply of modem chips forced them to agree to unreasonable licensing demands. This testimony, however, is reminiscent of Dr. Shapiro’s testimony in AT&T/Time Warner concerning the threat of a long-term blackout of CNN and other Turner channels: Qualcomm has never cut off any customer’s supply of chips. The assertion that companies negotiating with Qualcomm either had to “agree to the license or basically go out of business” ignores the reality that even if Qualcomm discontinued supplying chips to a customer, the customer could obtain chips from one of four rival sources. This was not a theoretical possibility. Indeed, Apple has been sourcing chips from Intel since 2016 and made the decision to switch to Intel specifically in order, in its own words, to exert “commercial pressure against Qualcomm.”
Further, as Dr. Nevo pointed out at trial, SEP license agreements are typically long term (e.g., 10 or 15 year agreements) and are negotiated far less frequently than chip prices, which are typically negotiated annually. In other words, Qualcomm’s royalty rate is set prior to and independent of chip sale negotiations.
K&M raise a number of theoretical objections to Nevo’s empirical analysis. For example, K&M accuse Nevo of “cherry picking” the licenses he included in his empirical analysis to show that royalty rates remained constant over time, stating that he “excluded from consideration any license that had non-standard terms.” They mischaracterize Nevo’s testimony on this point. Nevo excluded from his analysis agreements that, according to the FTC’s own theory of harm, would be unaffected (e.g., agreements that were signed subject to government supervision or agreements that have substantially different risk splitting provisions). In any event, Nevo testified that modifying his analysis to account for Shapiro’s criticism regarding the excluded agreements would have no material effect on his conclusions. To our knowledge, Nevo’s testimony is the only record evidence providing any empirical analysis of the effects of Qualcomm’s licensing agreements.
As previously mentioned, K&M also claim that Dr. Nevo’s analysis failed to account for the alleged weakening of Qualcomm’s patent portfolio over time. Apple’s internal documents, however, are fatal to that claim. K&M also pinpoint the failure to control for differences among customers and for changes in the composition of handsets over time as critical errors in Nevo’s analysis. Their assertion that Nevo should have controlled for differences among customers is puzzling. They do not elaborate upon that criticism, but they seem to believe different customers are entitled to different FRAND rates for the same license. But Qualcomm’s standard practice—due to the enormous size of its patent portfolio—is and has always been to charge all licensees the same rate for the entire portfolio.
As to changes in the composition of handsets over time, no doubt a smartphone today has many more features than a first-generation handset that only made and received calls; those new features, however, would be meaningless without Qualcomm’s SEPs, which are implemented by mobile chips that enable cellular communication. One must wonder why Qualcomm should have reduced the royalty rate on licenses for patents that are just as fundamental to the functioning of mobile phones today as they were to the functioning of a first-generation handset. K&M ignore the fundamental importance of Qualcomm’s SEPs in claiming that royalty rates should have declined along with the declining quality-adjusted prices of mobile phones. They also, conveniently, ignore the evidence that the industry has been characterized by increasing output and quality—increases which can certainly be attributed at least in part to Qualcomm’s chips being “engineering wise . . . the best.”
II. Apple’s Internal Documents Eviscerate the FTC’s Theory of Harm
The FTC’s theory of harm is premised upon Qualcomm’s allegedly charging a supra-FRAND rate for its SEPs (the “royalty surcharge”), which squeezes the margins of OEMs and consequently prevents rival chipset suppliers from obtaining a sufficient return when negotiating with those OEMs. (See Luke Froeb, et al’s criticism of the FTC’s theory of harm on these and related grounds, here). To predict the effects of Qualcomm’s allegedly anticompetitive conduct, Dr. Shapiro compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs. Shapiro testified that he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences,” for competition and for consumers, though his bargaining model did not quantify the effects of Qualcomm’s practice.
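The comparison the post attributes to Dr. Shapiro can be sketched with a toy calculation. All figures below are hypothetical assumptions chosen purely for exposition; they are not values from the trial record or from Shapiro's actual model.

```python
# Toy sketch (hypothetical numbers) of the gains-from-trade comparison
# described above: an OEM buying a Qualcomm chip and paying a FRAND
# royalty, versus buying a rival chip while paying Qualcomm an alleged
# "royalty surcharge" on top of the FRAND rate.

handset_value = 300.0       # assumed value the OEM derives from one handset
qualcomm_chip_price = 30.0  # hypothetical Qualcomm chip price
rival_chip_price = 25.0     # hypothetical rival chip price (cheaper chip)
frand_royalty = 10.0        # hypothetical FRAND royalty for Qualcomm's SEPs
surcharge = 5.0             # alleged supra-FRAND increment

# Gains from trade when the OEM buys Qualcomm's chip at the FRAND rate
gft_qualcomm = handset_value - qualcomm_chip_price - frand_royalty

# Gains from trade when the OEM buys a rival chip but pays Qualcomm
# the surcharged royalty on the handset
gft_rival = handset_value - rival_chip_price - (frand_royalty + surcharge)

# The surcharge shrinks the rival's effective advantage in the bargain
effective_rival_advantage = gft_rival - gft_qualcomm
print(gft_qualcomm, gft_rival, effective_rival_advantage)
```

On these made-up numbers, a $5 surcharge exactly offsets the rival's $5 chip-price advantage, leaving the OEM indifferent between suppliers: that is the squeeze mechanism the FTC's theory posits. The authors' objection, as stated above, is that the model never quantified the actual surcharge, so whether this offset is large, small, or zero in the real market was left unestablished.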
The premise of the FTC theory requires a belief about FRAND as a meaningful, objective competitive benchmark that Qualcomm was able to evade as a result of its market power in chipsets. But Apple manipulated negotiations as a tactic to reshape FRAND itself. The closer look at the facts invited by K&M does nothing to improve one’s view of the FTC’s claims. The Apple documents exposed at trial make it clear that Apple deliberately manipulated negotiations with other suppliers in order to make it appear to courts and antitrust agencies that something other than the quality of Qualcomm’s technology was driving royalty rates. For example, Apple’s own documents show it sought artificially to “devalue SEPs” by “build[ing] favorable, arms-length ‘comp’ licenses” in an attempt to reshape what FRAND means in this industry. Simply put, Apple’s strategy was to negotiate cheap supposedly “comparable” licenses with other chipset suppliers as part of a plan to reduce its net royalties to Qualcomm.
As part of the same strategy, Apple spent years arguing to regulators and courts that Qualcomm’s patents were no better than those of its competitors. But their internal documents tell this very different story:
“Nokia’s patent portfolio is significantly weaker than Qualcomm’s.”
“[InterDigital] makes minimal contributions to [the 4G/LTE] standard”
“Compared to [Huawei, Nokia, Ericsson, IDCC, and Apple], Qualcomm holds a stronger position in . . . , and particularly with respect to cellular and Wi-Fi SEPs.”
“Compared to other licensors, Qualcomm has more significant holdings in key areas such as media processing, non-cellular communications and hardware. Likewise, using patent citation analysis as a measure of thorough prosecution within the US PTO, Qualcomm patents (SEPs and non-SEPs both) on average score higher compared to the other, largely non-US based licensors.”
One internal document that is particularly troubling states that Apple’s plan was to “create leverage by building pressure” in order to (i) hurt Qualcomm financially and (ii) put Qualcomm’s licensing model at risk. What better way to harm Qualcomm financially and put its licensing model at risk than to complain to regulators that the business model is anticompetitive and tie the company up in multiple costly litigations? That businesses make strategic plans to harm one another is no surprise. But it underscores the importance of antitrust institutions – with their procedural and evidentiary requirements – to separate meritorious claims from fabricated ones. They failed to do so here.
III. Lessons Learned
So what should we make of evidence suggesting one of the FTC’s key informants during its investigation of Qualcomm didn’t believe the arguments it was selling? The exposure of Apple’s internal documents is a sobering reminder that the FTC is not immune from the risk of being hoodwinked by rent-seeking antitrust plaintiffs. That a firm might try to persuade antitrust agencies to investigate and sue its rivals is nothing new (see, e.g., William J. Baumol & Janusz A. Ordover, Use of Antitrust to Subvert Competition, 28 J.L. & Econ. 247 (1985)), but it is a particularly high-stakes game in modern technology markets.
Lesson number one: Requiring proof of actual anticompetitive effects, rather than reliance upon a model that is not robust to market realities, is an important safeguard to ensure that Section 2 protects competition and not merely an individual competitor. Yet in AT&T/Time Warner and FTC v. Qualcomm the agencies staked their cases on bargaining models that fell short of proving anticompetitive effects. An agency convinced by one firm or firms to pursue an action against a rival for conduct that does not actually harm competition could have a significant and lasting anticompetitive effect on the market. Modern antitrust analysis requires plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed. That safeguard is particularly important when an agency is pursuing an enforcement action against a company in a market where the risks of regulatory capture and false positives are high. With calls to move away from the consumer welfare standard—which would exacerbate both the risks and consequences of false positives—it is imperative to embrace rather than reject the requirement of proof in monopolization cases. (See Elyse Dorsey, Jan Rybnicek & Joshua D. Wright, Hipster Antitrust Meets Public Choice Economics: The Consumer Welfare Standard, Rule of Law, and Rent-Seeking, CPI Antitrust Chron. (Apr. 2018); see also Joshua D. Wright et al., Requiem For a Paradox: The Dubious Rise and Inevitable Fall of Hipster Antitrust, 51 Ariz. St. L.J. 293 (2019).) The DOJ’s Statement of Interest is a reminder of this basic tenet.
Lesson number two: Antitrust should have a limited role in adjudicating disputes arising between sophisticated parties in bilateral negotiations of patent licenses. Overzealous claims of harm from patent holdup and anticompetitive licensing can deter the lawful exercise of patent rights and good faith modifications of existing contracts, and more generally interfere with the outcome of arm's-length negotiations. (See Bruce H. Kobayashi & Joshua D. Wright, The Limits of Antitrust and Patent Holdup: A Reply To Cary et al., 78 Antitrust L.J. 701 (2012).) It is also a difficult task for an antitrust regulator or court to identify and distinguish anticompetitive patent licenses from neutral or welfare-increasing behavior. An antitrust agency's willingness to cast the shadow of antitrust remedies over one side of the bargaining table inevitably places the agency in the position of encouraging further rent-seeking by licensees seeking similar intervention on their behalf.
Finally, an antitrust agency that intervenes in patent holdup and licensing disputes on behalf of one party to a patent licensing agreement risks transforming itself into a price regulator. Apple's fundamental complaint in its own litigation, and the core of the similar FTC allegation against Qualcomm, is that royalty rates are too high. The risks to competition and consumers of antitrust courts and agencies playing the role of central planner for the innovation economy are well known, and they are at their peak when the antitrust enterprise is used to set prices, mandate a particular organizational structure for the firm, or intervene in garden-variety contract and patent disputes in high-tech markets.
The current Commission did not vote out the Complaint now being litigated in the Northern District of California; that case was initiated by an entirely different set of Commissioners. It is difficult to imagine the new Commissioners having no reaction to the Apple documents, and in particular to the perception they create that Apple was successful in manipulating the agency in its strategy to bolster its negotiating position against Qualcomm. A thorough reevaluation of the evidence here might well lead the current Commission to reconsider the merits of the agency's position in the litigation and whether continuing is in the public interest. The Apple documents, should they enter the record, may significantly affect the Ninth Circuit's or Supreme Court's understanding of the FTC's theory of harm.
[TOTM: The following is the fourth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]
The courtroom trial in the Federal Trade Commission’s (FTC’s) antitrust case against Qualcomm ended in January with a promise from the judge in the case, Judge Lucy Koh, to issue a ruling as quickly as possible — caveated by her acknowledgement that the case is complicated and the evidence voluminous. Well, things have only gotten more complicated since the end of the trial. Not only did Apple and Qualcomm reach a settlement in the antitrust case against Qualcomm that Apple filed just three days after the FTC brought its suit, but the abbreviated trial in that case saw the presentation by Qualcomm of some damning evidence that, if accurate, seriously calls into (further) question the merits of the FTC’s case.
Apple v. Qualcomm settles — and the DOJ takes notice
The Apple v. Qualcomm case, which was based on substantially the same arguments brought by the FTC in its case, ended abruptly last month after only a day and a half of trial — just enough time for the parties to make their opening statements — when Apple and Qualcomm reached an out-of-court settlement. The settlement includes a six-year global patent licensing deal, a multi-year chip supplier agreement, an end to all of the patent disputes around the world between the two companies, and a $4.5 billion settlement payment from Apple to Qualcomm.
That alone complicates the economic environment into which Judge Koh will issue her ruling. But the Apple v. Qualcomm trial also appears to have induced the Department of Justice Antitrust Division (DOJ) to weigh in on the FTC's case with a Statement of Interest requesting that Judge Koh use caution in fashioning a remedy should she side with the FTC, followed by a somewhat snarky Reply from the FTC arguing that the DOJ's filing was untimely (and, reading the not-so-hidden subtext, unwelcome).
But buried in the DOJ’s Statement is an important indication of why it filed its Statement when it did, just about a week after the end of the Apple v. Qualcomm case, and a pointer to a much larger issue that calls the FTC’s case against Qualcomm even further into question (I previously wrote about the lack of theoretical and evidentiary merit in the FTC’s case here).
Footnote 6 of the DOJ’s Statement reads:
Internal Apple documents that recently became public describe how, in an effort to “[r]educe Apple’s net royalty to Qualcomm,” Apple planned to “[h]urt Qualcomm financially” and “[p]ut Qualcomm’s licensing model at risk,” including by filing lawsuits raising claims similar to the FTC’s claims in this case …. One commentator has observed that these documents “potentially reveal that Apple was engaging in a bad faith argument both in front of antitrust enforcers as well as the legal courts about the actual value and nature of Qualcomm’s patented innovation.” (Emphasis added).
Indeed, the slides presented by Qualcomm during that single day of trial in Apple v. Qualcomm are significant, not only for what they say about Apple’s conduct, but, more importantly, for what they say about the evidentiary basis for the FTC’s claims against the company.
The evidence presented by Qualcomm in its opening statement suggests some troubling conduct by Apple
Others have pointed to Qualcomm’s opening slides and the Apple internal documents they present to note Apple’s apparent bad conduct. As one commentator sums it up:
Although we really only managed to get a small glimpse of Qualcomm’s evidence demonstrating the extent of Apple’s coordinated strategy to manipulate the FRAND license rate, that glimpse was particularly enlightening. It demonstrated a decade-long coordinated effort within Apple to systematically engage in what can only fairly be described as manipulation (if not creation of evidence) and classic holdout.
Qualcomm showed during opening arguments that, dating back to at least 2009, Apple had been laying the foundation for challenging its longstanding relationship with Qualcomm. (Emphasis added).
The internal Apple documents presented by Qualcomm to corroborate this claim appear quite damning. Of course, absent explanation and cross-examination, it’s impossible to know for certain what the documents mean. But on their face they suggest Apple knowingly undertook a deliberate scheme (and knowingly took upon itself significant legal risk in doing so) to devalue comparable patent portfolios to Qualcomm’s:
The apparent purpose of this scheme was to devalue comparable patent licensing agreements where Apple had the power to do so (through litigation or the threat of litigation) in order to then use those agreements to argue that Qualcomm’s royalty rates were above the allowable, FRAND level, and to undermine the royalties Qualcomm would be awarded in courts adjudicating its FRAND disputes with the company. As one commentator put it:
Apple embarked upon a coordinated scheme to challenge weaker patents in order to beat down licensing prices. Once the challenges to those weaker patents were successful, and the licensing rates paid to those with weaker patent portfolios were minimized, Apple would use the lower prices paid for weaker patent portfolios as proof that Qualcomm was charging a super-competitive licensing price; a licensing price that violated Qualcomm’s FRAND obligations. (Emphasis added).
That alone is a startling revelation, if accurate, and one that would seem to undermine claims that patent holdout isn’t a real problem. It also would undermine Apple’s claims that it is a “willing licensee,” engaging with SEP licensors in good faith. (Indeed, this has been called into question before, and one Federal Circuit judge has noted in dissent that “[t]he record in this case shows evidence that Apple may have been a hold out.”). If the implications drawn from the Apple documents shown in Qualcomm’s opening statement are accurate, there is good reason to doubt that Apple has been acting in good faith.
Even more troubling is what it means for the strength of the FTC’s case
But the evidence offered in Qualcomm's opening argument points to another, more troubling implication as well. We know that Apple has been coordinating with the FTC and was likely an important impetus for the FTC's decision to bring an action in the first place. It seems reasonable to assume that Apple used these "manipulated" agreements to help make its case.
But what is most troubling is the extent to which it appears to have worked.
Qualcomm’s practices, including no license, no chips, skewed negotiations towards the outcomes that favor Qualcomm and lead to higher royalties. Qualcomm is committed to license its standard essential patents on fair, reasonable, and non-discriminatory terms. But even before doing market comparison, we know that the license rates charged by Qualcomm are too high and above FRAND because Qualcomm uses its chip power to require a license.
* * *
Mr. Michael Lasinski [the FTC’s patent valuation expert] compared the royalty rates received by Qualcomm to … the range of FRAND rates that ordinarily would form the boundaries of a negotiation … Mr. Lasinski’s expert opinion … is that Qualcomm’s royalty rates are far above any indicators of fair and reasonable rates. (Emphasis added).
The key question is what constitutes the “range of FRAND rates that ordinarily would form the boundaries of a negotiation”?
Because they were discussed under seal, we don't know the precise agreements that the FTC's expert, Mr. Lasinski, used for his analysis. But we do know something about them: His analysis entailed a study of only eight licensing agreements; in six of them, the licensee was either Apple or Samsung; and in all of them the licensor was either InterDigital, Nokia, or Ericsson. We also know that Mr. Lasinski's valuation study did not include any Qualcomm licenses, and that the eight agreements he looked at were all executed after the district court's 2013 decision in Microsoft v. Motorola.
A curiously small number of agreements
Right off the bat there is a curiosity in the FTC’s valuation analysis. Even though there are hundreds of SEP license agreements involving the relevant standards, the FTC’s analysis relied on only eight, three-quarters of which involved licensing by only two companies: Apple and Samsung.
Indeed, even since 2013 (a date to which we will return) there have been scads of licenses (see, e.g., here, here, and here). Not only Apple and Samsung make CDMA and LTE devices; there are — quite literally — hundreds of other manufacturers out there, all of them licensing essentially the same technology — including global giants like LG, Huawei, HTC, Oppo, Lenovo, and Xiaomi. Why were none of their licenses included in the analysis?
At the same time, while InterDigital, Nokia, and Ericsson are among the largest holders of CDMA and LTE SEPs, several dozen companies have declared such patents, including Motorola (Alphabet), NEC, Huawei, Samsung, ZTE, NTT DOCOMO, etc. Again — why were none of their licenses included in the analysis?
All else equal, more data yields better results. This is particularly true where the data are complex license agreements which are often embedded in larger, even-more-complex commercial agreements and which incorporate widely varying patent portfolios, patent implementers, and terms.
Yet the FTC relied on just eight agreements in its comparability study, covering a tiny fraction of the industry’s licensors and licensees, and, notably, including primarily licenses taken by the two companies (Samsung and Apple) that have most aggressively litigated their way to lower royalty rates.
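The statistical worry here can be made concrete with a small simulation. This is an illustrative sketch with entirely hypothetical numbers (the actual agreements are under seal): if a small sample of eight licenses is drawn disproportionately from the low end of the industry's rate distribution, any "benchmark" built from it will sit well below the true population average.

```python
import random
import statistics

random.seed(0)

# Hypothetical population of royalty rates (% of device price) across
# hundreds of industry license agreements -- NOT the actual sealed data.
population = [random.uniform(1.0, 5.0) for _ in range(300)]

# Agreements struck by the most aggressive litigators tend to sit in the
# low tail of the distribution.
low_tail = sorted(population)[:40]

# A broad, representative sample versus a small, selected sample of eight.
representative = random.sample(population, 100)
selected_eight = random.sample(low_tail, 8)

print(f"population mean rate:      {statistics.mean(population):.2f}%")
print(f"representative sample:     {statistics.mean(representative):.2f}%")
print(f"eight selected agreements: {statistics.mean(selected_eight):.2f}%")
```

Measured against a benchmark built from the eight selected agreements, a perfectly ordinary rate from the broader population would look "above FRAND" — which is the selection problem in a nutshell.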
A curiously crabbed selection of licensors
And it is not just that the selected licensees represent a weirdly small and biased sample; it is also not necessarily even a particularly comparable sample.
One thing we can be fairly confident of, given what we know of the agreements used, is that at least one of the license agreements involved Nokia licensing to Apple, and another involved InterDigital licensing to Apple. But these companies’ patent portfolios are not exactly comparable to Qualcomm’s. About Nokia’s patents, Apple said:
And about InterDigital’s:
Meanwhile, Apple’s view of Qualcomm’s patent portfolio (despite its public comments to the contrary) was that it was considerably better than the others’:
The FTC’s choice of such a limited range of comparable license agreements is curious for another reason, as well: It includes no Qualcomm agreements. Qualcomm is certainly one of the biggest players in the cellular licensing space, and no doubt more than a few license agreements involve Qualcomm. While it might not make sense to include Qualcomm licenses that the FTC claims incorporate anticompetitive terms, that doesn’t describe the huge range of Qualcomm licenses with which the FTC has no quarrel. Among other things, Qualcomm licenses from before it began selling chips would not have been affected by its alleged “no license, no chips” scheme, nor would licenses granted to companies that didn’t also purchase Qualcomm chips. Furthermore, its licenses for technology reading on the WCDMA standard are not claimed to be anticompetitive by the FTC.
And yet none of these licenses were deemed “comparable” by the FTC’s expert, even though, on many dimensions — most notably, with respect to the underlying patent portfolio being valued — they would have been the most comparable (i.e., identical).
A curiously circumscribed timeframe
That the FTC's expert used a 2013 cut-off date is also questionable. According to Lasinski, he chose agreements after 2013 because that was the year the U.S. District Court for the Western District of Washington decided Microsoft v. Motorola. Among other things, the court in Microsoft v. Motorola held that the proper value of a SEP is its "intrinsic" patent value, including its value to the standard, but not including the additional value it derives from being incorporated into a widely used standard.
According to the FTC’s expert,
prior to [Microsoft v. Motorola], people were trying to value … the standard and the license based on the value of the standard, not the value of the patents ….
Asked by Qualcomm’s counsel if his concern was that the “royalty rates derived in license agreements for cellular SEPs [before Microsoft v. Motorola] could very well have been above FRAND,” Mr. Lasinski concurred.
The problem with this approach is that it’s little better than arbitrary. The Motorola decision was an important one, to be sure, but the notion that sophisticated parties in a multi-billion dollar industry were systematically agreeing to improper terms until a single court in Washington suggested otherwise is absurd. To be sure, such agreements are negotiated in “the shadow of the law,” and judicial decisions like the one in Washington (later upheld by the Ninth Circuit) can affect the parties’ bargaining positions.
But even if it were true that the court’s decision had some effect on licensing rates, the decision would still have been only one of myriad factors determining parties’ relative bargaining power and their assessment of the proper valuation of SEPs. There is no basis to support the assertion that the Motorola decision marked a sea-change between “improper” and “proper” patent valuations. And, even if it did, it was certainly not alone in doing so, and the FTC’s expert offers no justification for determining that agreements reached before, say, the European Commission’s decision against Qualcomm in 2018 were “proper,” or that the Korea FTC’s decision against Qualcomm in 2009 didn’t have the same sort of corrective effect as the Motorola court’s decision in 2013.
At the same time, a review of a wider range of agreements suggested that Qualcomm’s licensing royalties weren’t inflated
Meanwhile, one of Qualcomm’s experts in the FTC case, former DOJ Chief Economist Aviv Nevo, looked at whether the FTC’s theory of anticompetitive harm was borne out by the data by looking at Qualcomm’s royalty rates across time periods and standards, and using a much larger set of agreements. Although his remit was different than Mr. Lasinski’s, and although he analyzed only Qualcomm licenses, his analysis still sheds light on Mr. Lasinski’s conclusions:
[S]pecifically what I looked at was the predictions from the theory to see if they’re actually borne in the data….
[O]ne of the clear predictions from the theory is that during periods of alleged market power, the theory predicts that we should see higher royalty rates.
So that’s a very clear prediction that you can take to data. You can look at the alleged market power period, you can look at the royalty rates and the agreements that were signed during that period and compare to other periods to see whether we actually see a difference in the rates.
Dr. Nevo’s analysis, which looked at royalty rates in Qualcomm’s SEP license agreements for CDMA, WCDMA, and LTE ranging from 1990 to 2017, found no differences in rates between periods when Qualcomm was alleged to have market power and when it was not alleged to have market power (or could not have market power, on the FTC’s theory, because it did not sell corresponding chips).
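At bottom, Dr. Nevo's test is a difference-in-means exercise: group agreements by whether they were signed during an alleged market-power period and compare average rates. A minimal sketch, using hypothetical rates (the real agreements span 1990 to 2017 and are sealed, so these numbers are ours, purely for illustration):

```python
import statistics

# Hypothetical (year, royalty_rate_pct, alleged_market_power) records --
# illustrative only, not the actual license data from the case.
agreements = [
    (1995, 3.2, False), (1999, 3.3, False), (2004, 3.2, True),
    (2008, 3.3, True), (2012, 3.2, True), (2016, 3.3, False),
]

power = [rate for _, rate, p in agreements if p]
no_power = [rate for _, rate, p in agreements if not p]

diff = statistics.mean(power) - statistics.mean(no_power)
print(f"mean rate with alleged market power:    {statistics.mean(power):.2f}%")
print(f"mean rate without alleged market power: {statistics.mean(no_power):.2f}%")
print(f"difference: {diff:+.2f} percentage points")
# A difference near zero is inconsistent with the theory that the alleged
# conduct elevated royalties above the but-for level.
```

If the conduct theory were right, the first group's mean should be meaningfully higher; Dr. Nevo found no such gap in the actual data.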
The reason this is relevant is that Mr. Lasinski’s assessment implies that Qualcomm’s higher royalty rates weren’t attributable to its superior patent portfolio, leaving either anticompetitive conduct or non-anticompetitive, superior bargaining ability as the explanation. No one thinks Qualcomm has cornered the market on exceptional negotiators, so really the only proffered explanation for the results of Mr. Lasinski’s analysis is anticompetitive conduct. But this assumes that his analysis is actually reliable. Prof. Nevo’s analysis offers some reason to think that it is not.
All of the agreements studied by Mr. Lasinski were drawn from the period when Qualcomm is alleged to have employed anticompetitive conduct to elevate its royalty rates above FRAND. But when the actual royalties charged by Qualcomm during its alleged exercise of market power are compared to those charged when and where it did not have market power, the evidence shows it received identical rates. Mr. Lasinski's results, then, would imply that Qualcomm's royalties were "too high" not only while it was allegedly acting anticompetitively, but also when it was not. That simple fact suggests on its face that Mr. Lasinski's analysis may have been flawed, and that it systematically under-valued Qualcomm's patents.
Connecting the dots and calling into question the strength of the FTC’s case
In its closing argument, the FTC pulled together the implications of its allegations of anticompetitive conduct by pointing to Mr. Lasinski’s testimony:
Now, looking at the effect of all of this conduct, Qualcomm’s own documents show that it earned many times the licensing revenue of other major licensors, like Ericsson.
* * *
Mr. Lasinski analyzed whether this enormous difference in royalties could be explained by the relative quality and size of Qualcomm’s portfolio, but that massive disparity was not explained.
Qualcomm’s royalties are disproportionate to those of other SEP licensors and many times higher than any plausible calculation of a FRAND rate.
* * *
The overwhelming direct evidence, some of which is cited here, shows that Qualcomm’s conduct led licensees to pay higher royalties than they would have in fair negotiations.
It is possible, of course, that Lasinski's methodology was flawed; indeed, at trial Qualcomm argued exactly this in challenging his testimony. But it is also possible that, whether his methodology was flawed or not, his underlying data was flawed.
It is impossible to draw this conclusion definitively from the publicly available evidence, but the subsequent revelation that Apple may well have manipulated at least a significant share of the eight agreements that constituted Mr. Lasinski's data certainly increases its plausibility: We now know, following Qualcomm's opening statement in Apple v. Qualcomm, that the limited set of comparable agreements studied by the FTC's expert happens to be dominated by agreements that Apple may have manipulated to reflect lower-than-FRAND rates.
What is most concerning is that the FTC may have built its case on such questionable evidence, either by intentionally cherry-picking the evidence upon which it relied, or inadvertently because it rested on such a needlessly limited range of data, some of which may have been tainted.
Intentionally or not, the FTC appears to have performed its valuation analysis using a needlessly circumscribed range of comparable agreements and justified its decision to do so using questionable assumptions. This seriously calls into question the strength of the FTC’s case.
[N]ew combinations are, as a rule, embodied, as it were, in new firms which generally do not arise out of the old ones but start producing beside them; … in general it is not the owner of stagecoaches who builds railways. – Joseph Schumpeter, January 1934
Elizabeth Warren wants to break up the tech giants — Facebook, Google, Amazon, and Apple — claiming they have too much power and represent a danger to our democracy. As part of our response to her proposal, we shared a couple of headlines from 2007 claiming that MySpace had an unassailable monopoly in the social media market.
Tommaso Valletti, the chief economist of the Directorate-General for Competition (DG COMP) of the European Commission, said, in what we assume was a reference to our posts, “they go on and on with that single example to claim that [Facebook] and [Google] are not a problem 15 years later … That’s not what I would call an empirical regularity.”
We appreciate the invitation to show that prematurely dubbing companies “unassailable monopolies” is indeed an empirical regularity.
It’s Tough to Make Predictions, Especially About the Future of Competition in Tech
No one is immune to this phenomenon. Antitrust regulators often take a static view of competition, failing to anticipate dynamic technological forces that will upend market structure and competition.
Scientists and academics make a different kind of error. They are driven by the need to satisfy their curiosity rather than shareholders. Upon inventing a new technology or discovering a new scientific truth, academics often fail to see the commercial implications of their findings.
Maybe the titans of industry don’t make these kinds of mistakes because they have skin in the game? The profit and loss statement is certainly a merciless master. But it does not give CEOs the power of premonition. Corporate executives hailed as visionaries in one era often become blinded by their success, failing to see impending threats to their company’s core value propositions.
Furthermore, it’s often hard as outside observers to tell after the fact whether business leaders just didn’t see a tidal wave of disruption coming or, worse, they did see it coming and were unable to steer their bureaucratic, slow-moving ships to safety. Either way, the outcome is the same.
Here’s the pattern we observe over and over: extreme success in one context makes it difficult to predict how and when the next paradigm shift will occur in the market. Incumbents become less innovative as they get lulled into stagnation by high profit margins in established lines of business. (This is essentially the thesis of Clay Christensen’s The Innovator’s Dilemma).
Even if the anti-tech populists are powerless to make predictions, history does offer us some guidance about the future. We have seen time and again that apparently unassailable monopolists are quite effectively assailed by technological forces beyond their control.
Nov 2007: “Nokia: One Billion Customers—Can Anyone Catch the Cell Phone King?” (Forbes)
Sep 2013: “Microsoft CEO Ballmer Bids Emotional Farewell to Wall Street” (Reuters)
If there’s one thing I regret, there was a period in the early 2000s when we were so focused on what we had to do around Windows that we weren’t able to redeploy talent to the new device form factor called the phone.
Mar 1998: “How Yahoo! Won the Search Wars” (Fortune)
Once upon a time, Yahoo! was an Internet search site with mediocre technology. Now it has a market cap of $2.8 billion. Some people say it’s the next America Online.
Dec 2000: “AOL’s Instant Messaging Monopoly?” (Wired)
AOL’s dominance of instant messaging technology, the kind of real-time e-mail that also lets users know when others are online, has emerged as a major concern of regulators scrutinizing the company’s planned merger with Time Warner Inc. (twx). Competitors to Instant Messenger, such as Microsoft Corp. (msft) and Yahoo! Inc. (yhoo), have been pressing the Federal Communications Commission to force AOL to make its services compatible with competitors’.
There have been isolated examples, as in the case of obligations of the merged AOL / Time Warner to make AOL Instant Messenger interoperable with competing messaging services. These obligations on AOL are widely viewed as having been a dismal failure.
Feb 2007: “Will Myspace Ever Lose Its Monopoly?” (Guardian)
Seventy percent of Yahoo 360 users, for example, also use other social networking sites — MySpace in particular. Ditto for Facebook, Windows Live Spaces and Friendster … This presents an obvious, long-term business challenge to the competitors. If they cannot build up a large base of unique users, they will always be on MySpace’s periphery.
Jun 2011: “Myspace Sold for $35m in Spectacular Fall from $12bn Heyday” (Guardian)
Dec 2003: “The subscription model of buying music is bankrupt. I think you could make available the Second Coming in a subscription model, and it might not be successful.” – Steve Jobs (Rolling Stone)
Predicting the future of competition in the tech industry is such a fraught endeavor that even articles about how hard it is to make predictions include incorrect predictions. The authors just cannot help themselves. A March 2012 BBC article “The Future of Technology… Who Knows?” derided the naysayers who predicted doom for Apple’s retail store strategy. Its kicker?
And that is why when you read that the Blackberry is doomed, or that Microsoft will never make an impression on mobile phones, or that Apple will soon dominate the connected TV market, you need to take it all with a pinch of salt.
But Blackberry was doomed and Microsoft never made an impression on mobile phones. (Half credit for Apple TV, which currently has a 15% market share).
Nobel Prize-winning economist Paul Krugman wrote a piece for Red Herring magazine (seriously) in June 1998 with the title “Why most economists’ predictions are wrong.” Headline-be-damned, near the end of the article he made the following prediction:
The growth of the Internet will slow drastically, as the flaw in “Metcalfe’s law”—which states that the number of potential connections in a network is proportional to the square of the number of participants—becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.
Robert Metcalfe himself predicted in a 1995 column that the Internet would “go spectacularly supernova and in 1996 catastrophically collapse.” After pledging to “eat his words” if the prediction did not come true, “in front of an audience, he put that particular column into a blender, poured in some water, and proceeded to eat the resulting frappe with a spoon.”
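Metcalfe's law, as Krugman quotes it, values a network in proportion to its potential pairwise connections, n(n-1)/2, which grows with the square of the number of participants. A quick sketch of why the claim is so aggressive (and why Krugman's "most people have nothing to say to each other" is the natural counterargument):

```python
def potential_connections(n: int) -> int:
    """Number of distinct pairs among n participants: n(n-1)/2."""
    return n * (n - 1) // 2

# Doubling participants roughly quadruples potential connections.
for n in [10, 100, 1_000, 10_000]:
    print(f"{n:>6} participants -> {potential_connections(n):>12,} potential links")
```

The quadratic count is only an upper bound on value, of course; the dispute is over how many of those potential links carry any value at all.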
A Change Is Gonna Come
Benedict Evans, a venture capitalist at Andreessen Horowitz, has the best summary of why competition in tech is especially difficult to predict:
IBM, Microsoft and Nokia were not beaten by companies doing what they did, but better. They were beaten by companies that moved the playing field and made their core competitive assets irrelevant. The same will apply to Facebook (and Google, Amazon and Apple).
Elsewhere, Evans tried to reassure his audience that we will not be stuck with the current crop of tech giants forever:
With each cycle in tech, companies find ways to build a moat and make a monopoly. Then people look at the moat and think it’s invulnerable. They’re generally right. IBM still dominates mainframes and Microsoft still dominates PC operating systems and productivity software. But… It’s not that someone works out how to cross the moat. It’s that the castle becomes irrelevant. IBM didn’t lose mainframes and Microsoft didn’t lose PC operating systems. Instead, those stopped being ways to dominate tech. PCs made IBM just another big tech company. Mobile and the web made Microsoft just another big tech company. This will happen to Google or Amazon as well. Unless you think tech progress is over and there’ll be no more cycles … It is deeply counter-intuitive to say ‘something we cannot predict is certain to happen’. But this is nonetheless what’s happened to overturn pretty much every tech monopoly so far.
If this time is different — or if there are more false negatives than false positives in the monopoly prediction game — then the advocates for breaking up Big Tech should try to make that argument instead of falling back on “big is bad” rhetoric. As for us, we’ll bet that we have not yet reached the end of history — tech progress is far from over.
[TOTM: The following is the third in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.
This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]
[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]
The Department of Justice Antitrust Division (DOJ) and Federal Trade Commission (FTC) have spent a significant amount of time in federal court litigating major cases premised upon an anticompetitive foreclosure theory of harm. Bargaining models, a tool used commonly in foreclosure cases, have been essential to the government’s theory of harm in these cases. In vertical merger or conduct cases, the core theory of harm is usually a variant of the claim that the transaction (or conduct) strengthens the firm’s incentives to engage in anticompetitive strategies that depend on negotiations with input suppliers. Bargaining models are a key element of the agency’s attempt to establish those claims and to predict whether and how firm incentives will affect negotiations with input suppliers, and, ultimately, the impact on equilibrium prices and output. Application of bargaining models played a key role in evaluating the anticompetitive foreclosure theories in the DOJ’s litigation to block the proposed merger of AT&T and Time Warner. A similar model is at the center of the FTC’s antitrust claims against Qualcomm and its patent licensing business model.
Modern antitrust analysis does not condemn business practices as anticompetitive without solid economic evidence of an actual or likely harm to competition. This cautious approach was developed in the courts for two reasons. The first is that the difficulty of distinguishing between procompetitive and anticompetitive explanations for the same conduct suggests there is a high risk of error. The second is that those errors are more likely to be false positives than false negatives because empirical evidence and judicial learning have established that unilateral conduct is usually either procompetitive or competitively neutral. In other words, while the risk of anticompetitive foreclosure is real, courts have sensibly responded by requiring plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.
An economic model can help establish the likelihood and/or magnitude of competitive harm when the model carefully captures the key institutional features of the competition it attempts to explain. Naturally, this tends to mean that the economic theories and models proffered by dueling economic experts to predict competitive effects take center stage in antitrust disputes. The persuasiveness of an economic model turns on the robustness of its assumptions about the underlying market. Model predictions that are inconsistent with actual market evidence give one serious pause before accepting the results as reliable.
For example, many industries are characterized by bargaining between providers and distributors. The Nash bargaining framework can be used to predict the outcomes of bilateral negotiations based upon each party’s bargaining leverage. The model assumes that both parties are better off if an agreement is reached, but that as the utility of one party’s outside option increases relative to the bargain, it will capture an increasing share of the surplus. Courts have had to reconcile these seemingly complicated economic models with prior case law and, in some cases, with direct evidence that is apparently inconsistent with the results of the model.
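The mechanics of the Nash framework described above can be sketched in a few lines. The function name and the numbers below are our own illustrative assumptions, not inputs from any case record:

```python
def nash_bargaining(total_surplus, outside_a, outside_b, power_a=0.5):
    """Generalized Nash bargaining split (illustrative sketch).

    Each party receives its outside option plus a share (its
    bargaining power) of the gains from trade. If no surplus
    remains over the outside options, no deal is struck."""
    gains = total_surplus - outside_a - outside_b
    if gains <= 0:
        return outside_a, outside_b  # disagreement point
    return (outside_a + power_a * gains,
            outside_b + (1 - power_a) * gains)

# Improving party A's outside option shifts surplus toward A --
# the leverage effect at the center of these cases.
base = nash_bargaining(100.0, outside_a=20.0, outside_b=30.0)
better_option = nash_bargaining(100.0, outside_a=40.0, outside_b=30.0)
```

Note that only the outside options and bargaining power move the split; the model itself says nothing about whether a stronger outside option was obtained procompetitively or anticompetitively, which is why the inputs become the battleground at trial.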
Indeed, Professor Carl Shapiro recently used bargaining models to analyze harm to competition in two prominent cases alleging anticompetitive foreclosure—one initiated by the DOJ and one by the FTC—in which he served as the government’s expert economist. In United States v. AT&T Inc., Dr. Shapiro testified that the proposed transaction between AT&T and Time Warner would give the vertically integrated company leverage to extract higher prices for content from AT&T’s rival, Dish Network. Soon after, Dr. Shapiro presented a similar bargaining model in FTC v. Qualcomm Inc. He testified that Qualcomm leveraged its monopoly power over chipsets to extract higher royalty rates from smartphone OEMs, such as Apple, wishing to license its standard essential patents (SEPs). In each case, Dr. Shapiro’s models were criticized heavily by the defendants’ expert economists for ignoring market realities that play an important role in determining whether the challenged conduct was likely to harm competition.
Judge Leon’s opinion in AT&T/Time Warner—recently upheld on appeal—concluded that Dr. Shapiro’s application of the bargaining model was significantly flawed, based upon unreliable inputs, and undermined by evidence about actual market performance presented by defendant’s expert, Dr. Dennis Carlton. Dr. Shapiro’s theory of harm posited that the combined company would increase its bargaining leverage and extract greater affiliate fees for Turner content from AT&T’s distributor rivals. The increase in bargaining leverage was made possible by the threat of a post-merger blackout of Turner content for AT&T’s rivals. This theory rested on the assumption that the combined firm would have reduced financial exposure from a long-term blackout of Turner content and would therefore have more leverage to threaten a blackout in content negotiations. The purpose of his bargaining model was to quantify how much AT&T could extract from competitors subjected to a long-term blackout of Turner content.
Judge Leon highlighted a number of reasons for rejecting the DOJ’s argument. First, Dr. Shapiro’s model failed to account for existing long-term affiliate contracts, post-litigation offers of arbitration agreements, and the increasing competitiveness of the video programming and distribution industry. Second, Dr. Carlton had demonstrated persuasively that previous vertical integration in the video programming and distribution industry did not have a significant effect on content prices. Finally, Dr. Shapiro’s model primarily relied upon three inputs: (1) the total number of subscribers the unaffiliated distributor would lose in the event of a long-term blackout of Turner content, (2) the percentage of the distributor’s lost subscribers who would switch to AT&T as a result of the blackout, and (3) the profit margin AT&T would derive from the subscribers it gained from the blackout. Many of Dr. Shapiro’s inputs necessarily relied on critical assumptions and/or third-party sources. Judge Leon considered and discredited each input in turn.
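The three-input structure Judge Leon described reduces to simple arithmetic. The figures below are purely hypothetical placeholders, not numbers from the trial record:

```python
def blackout_gain(rival_lost_subs, share_switching_to_att, att_margin_per_sub):
    """Sketch of the model's arithmetic: AT&T's gain from a rival's
    long-term Turner blackout is the rival's lost subscribers, times
    the fraction who switch to AT&T, times AT&T's profit per gained
    subscriber. Each of the three inputs was contested at trial."""
    return rival_lost_subs * share_switching_to_att * att_margin_per_sub

# Hypothetical inputs for illustration only.
gain = blackout_gain(rival_lost_subs=1_000_000,
                     share_switching_to_att=0.4,
                     att_margin_per_sub=30.0)
```

Because the output is simply the product of three contested estimates, an error in any one input flows straight through to the bottom line, which helps explain why Judge Leon scrutinized each input in turn.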
The parties in Qualcomm are, as of the time of this posting, still awaiting a ruling. Dr. Shapiro’s model in that case attempts to predict the effect of Qualcomm’s alleged “no license, no chips” policy. He compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs. In other words, the FTC’s theory of harm is based upon the premise that Qualcomm is charging a supra-FRAND rate for its SEPs (the “royalty surcharge”) that squeezes the margins of OEMs. That margin squeeze, the FTC alleges, prevents rival chipset suppliers from obtaining a sufficient return when negotiating with OEMs. The FTC predicts the end result is a reduction in competition and an increase in the price of devices to consumers.
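In stylized form, the comparison underlying the FTC’s theory can be sketched as follows. All numbers and the function are our illustration, not figures from the case:

```python
def oem_gains_from_trade(device_value, chip_price, royalty):
    """OEM gains from trade on one device: value minus input costs."""
    return device_value - chip_price - royalty

# All numbers are hypothetical placeholders for illustration.
frand_royalty = 10.0
alleged_surcharge = 5.0  # supra-FRAND premium alleged on rival-chip sales

gains_qualcomm_chip = oem_gains_from_trade(
    300.0, chip_price=30.0, royalty=frand_royalty)
gains_rival_chip = oem_gains_from_trade(
    300.0, chip_price=28.0, royalty=frand_royalty + alleged_surcharge)

# On the FTC's theory, the surcharge erodes the rival path's margin:
# a rival must cut its chip price by the full surcharge just to
# leave the OEM indifferent between the two paths.
```

The empirical question, of course, is whether any such surcharge exists and how large it is, which is precisely where the dueling experts parted ways.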
Qualcomm, like Judge Leon in AT&T, questioned the robustness of Dr. Shapiro’s model and its predictions in light of conflicting market realities. For example, Dr. Shapiro argued that the
leverage that Qualcomm brought to bear on the chips shifted the licensing negotiations substantially in Qualcomm’s favor and led to a significantly higher royalty than Qualcomm would otherwise have been able to achieve.
Yet, on cross-examination, Dr. Shapiro declined to move from theory to empirics when asked if he had quantified the effects of Qualcomm’s practice on any other chip makers. Instead, Dr. Shapiro responded that he had not, but he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences.” Under Dr. Shapiro’s theory, one would predict that royalty rates were higher after Qualcomm obtained market power.
As with Dr. Carlton’s testimony inviting Judge Leon to square the DOJ’s theory with conflicting historical facts in the industry, Qualcomm’s economic expert, Dr. Aviv Nevo, provided an analysis of Qualcomm’s royalty agreements from 1990-2017, confirming that there was no economically meaningful difference between the royalty rates during the time frame when Qualcomm was alleged to have market power and the royalty rates outside of that time frame. He also presented evidence that ex ante royalty rates did not increase upon implementation of the CDMA standard or the LTE standard. Moreover, Dr. Nevo testified that the industry itself was characterized by declining prices and increasing output and quality.
Dr. Shapiro’s model in Qualcomm appears to suffer from many of the same flaws that ultimately discredited his model in AT&T/Time Warner: It is based upon assumptions that are contrary to real-world evidence and it does not robustly or persuasively identify anticompetitive effects. Some observers, including our Scalia Law School colleague and former FTC Chairman, Tim Muris, would apparently find it sufficient merely to allege a theoretical “ability to manipulate the marketplace.” But antitrust cases require actual evidence of harm. We think Professor Muris instead captured the appropriate standard in his important article rejecting attempts by the FTC to shortcut its requirement of proof in monopolization cases:
This article does reject, however, the FTC’s attempt to make it easier for the government to prevail in Section 2 litigation. Although the case law is hardly a model of clarity, one point that is settled is that injury to competitors by itself is not a sufficient basis to assume injury to competition …. Inferences of competitive injury are, of course, the heart of per se condemnation under the rule of reason. Although long a staple of Section 1, such truncation has never been a part of Section 2. In an economy as dynamic as ours, now is hardly the time to short-circuit Section 2 cases. The long, and often sorry, history of monopolization in the courts reveals far too many mistakes even without truncation.
Timothy J. Muris, The FTC and the Law of Monopolization, 67 Antitrust L.J. 693 (2000).
We agree. Proof of actual anticompetitive effects, rather than speculation derived from models that are not robust to market realities, is an important safeguard to ensure that Section 2 protects competition and not merely individual competitors.
The future of bargaining models in antitrust remains to be seen. Judge Leon certainly did not question the proposition that they could play an important role in other cases. Judge Leon closely dissected the testimony and models presented by both experts in AT&T/Time Warner. His opinion serves as an important reminder. As complex economic evidence like bargaining models becomes more common in antitrust litigation, judges must carefully engage with the experts on both sides to determine whether there is direct evidence on the likely competitive effects of the challenged conduct. Where “real-world evidence,” as Judge Leon called it, contradicts the predictions of a bargaining model, judges should reject the model rather than the reality. Bargaining models have many potentially important antitrust applications, including horizontal mergers involving a bargaining component (such as hospital mergers), vertical mergers, and licensing disputes. The analysis of those models by the Ninth and D.C. Circuits will have important implications for how they will be deployed by the agencies and parties moving forward.
[TOTM: The following is the second in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.
This post is authored by Luke Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship at the Owen Graduate School of Management at Vanderbilt University; former chief economist at the Antitrust Division of the US Department of Justice and the Federal Trade Commission), Michael Doane (Competition Economics, LLC) & Mikhael Shor (Associate Professor of Economics, University of Connecticut).]
It is not uncommon—in fact it is expected—that parties to a negotiation would have different opinions about the reasonableness of any deal. Every buyer asks for a price as low as possible, and sellers naturally request prices at which buyers (feign to) balk. A recent movement among some lawyers and economists has been to label such disagreements in the context of standard-essential patents not as a natural part of bargaining, but as dispositive proof of “hold-up,” or the innovator’s purported abuse of newly gained market power to extort implementers. We have four primary issues with this hold-up fad.
First, such claims of “hold-up” are trotted out whenever an innovator’s royalty request offends the commentator’s sensibilities, and usually with reference to a theoretical hold-up possibility rather than any matter-specific evidence that hold-up is actually present. Second, as we have argued elsewhere, such arguments usually ignore the fact that implementers of innovations often possess significant countervailing power to “hold-out” as well. This is especially true as implementers have successfully pushed to curtail injunctive relief in standard-essential patent cases. Third, as Greg Werden and Froeb have recently argued, it is not clear why patent holdup—even where it might exist—need implicate antitrust law rather than be adequately handled as a contractual dispute. Lastly, it is certainly not the case that every disagreement over the value of an innovation is an exercise in hold-up, as even economists and lawyers have not reached anything resembling a consensus on the correct interpretation of a “fair” royalty.
At the heart of this case (and many recent cases) is (1) an indictment of Qualcomm’s desire to charge royalties to the maker of consumer devices based on the value of its technology and (2) a lack (to the best of our knowledge from public documents) of well-vetted theoretical models that can provide the underpinning for the theory of the case. We discuss these in turn.
The smallest component “principle”
In arguing that “Qualcomm’s royalties are disproportionately high relative to the value contributed by its patented inventions,” (Complaint, ¶ 77) a key issue is whether Qualcomm can calculate royalties as a percentage of the price of a device, rather than a small percentage of the price of a chip. (Complaint, ¶¶ 61-76).
So what is wrong with basing a royalty on the price of the final product? A fixed portion of the price is not a perfect proxy for the value of embedded intellectual property, but it is a reasonable first approximation, much like retailers use fixed markups for products rather than optimizing the price of each SKU when the cost of individual determinations negates any benefit of doing so. The FTC’s main issue appears to be that the price of a smartphone reflects “many features in addition to the cellular connectivity and associated voice and text capabilities provided by early feature phones.” (Complaint, ¶ 26). This completely misses the point. What would the value of an iPhone be if it contained all of those “many features” but without the phone’s communication abilities? We have some idea, as Apple has for years marketed its iPod Touch for a quarter of the price of its iPhone line. Yet, “[f]or most users, the choice between an iPhone 5s and an iPod touch will be a no-brainer: Being always connected is one of the key reasons anyone owns a smartphone.”
What the FTC and proponents of the smallest component principle miss is that some of the value of all components of a smartphone is derived directly from the phone’s communication ability. Smartphones didn’t initially replace small portable cameras because they were better at photography (in fact, smartphone cameras were and often continue to be much worse than devoted cameras). The value of a smartphone camera is that it combines picture taking with immediate sharing over text or through social media. Thus, contrary to the FTC’s claim that most of the value of a smartphone comes from non-communication features, many features on a smartphone derive much of their value from the communication powers of the phone.
In the alternative, what the FTC wants is for the royalty not to reflect the value of the intellectual property but instead to be a small portion of the cost of some chipset—akin to an author of a paperback negotiating royalties based on the cost of plain white paper. As a matter of economics, a single chipset royalty cannot allow an innovator to capture the value of its innovation. This, in turn, implies that innovators underinvest in future technologies. As we have previously written:
For example, imagine that the same component (incorporating the same essential patent) is used to help stabilize flight of both commercial airplanes and toy airplanes. Clearly, these industries are likely to have different values for the patent. By negotiating over a single royalty rate based on the component price, the innovator would either fail to realize the added value of its patent to commercial airlines, or (in the case that the component is targeted primarily at commercial airlines) would not realize the incremental market potential from the patent’s use in toy airplanes. In either case, the innovator will not be negotiating over the entirety of the value it creates, leading to too little innovation.
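The airplane example can be put in numbers. Every figure below is invented purely for illustration:

```python
# Hypothetical values-in-use for the same patented component.
value_commercial = 1_000_000.0   # per-unit value in commercial aircraft
value_toy = 5.0                  # per-unit value in toy airplanes
units_commercial, units_toy = 100, 1_000_000

# A single component-price royalty must be one number for both uses;
# set above the toy market's value-in-use, it forfeits that market,
# so the royalty is capped at what the toy market will bear.
component_royalty = 5.0
revenue_component_based = component_royalty * (units_commercial + units_toy)

# A value-based royalty (here, a share of value-in-use) can differ by use.
share = 0.1
revenue_value_based = share * (value_commercial * units_commercial
                               + value_toy * units_toy)
```

Under these assumed numbers the component-based cap leaves most of the commercial-market value uncaptured, which is the underinvestment mechanism the paragraph above describes.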
The role of economics
Modern antitrust practice is to use economic models to explain how one gets from the evidence presented in a case to an anticompetitive conclusion. As Froeb, et al. have discussed, by laying out a mapping from the evidence to the effects, the legal argument is made clear, and gains credibility because it becomes falsifiable. The FTC complaint hypothesizes that “Qualcomm has excluded competitors and harmed competition through a set of interrelated policies and practices.” (Complaint, ¶ 3). Although Qualcomm explains how each of these policies and practices, by themselves, have clear business justifications, the FTC claims that combining them leads to an anticompetitive outcome.
Without providing a formal mapping from the evidence to an effect, it becomes much more difficult for a court to determine whether the theory of harm is correct or how to weigh the evidence that feeds the conclusion. Without a model telling it “what matters, why it matters, and how much it matters,” it is much more difficult for a tribunal to evaluate the “interrelated policies and practices.” In previous work, we have modeled the bilateral bargaining between patentees and licensees and have shown that when bilateral patent contracts are subject to review by an antitrust court, bargaining in the shadow of such a court can reduce the incentive to invest and thereby reduce welfare.
Concluding policy thoughts
What the FTC makes sound nefarious seems like a simple policy: requiring companies to seek licenses to Qualcomm’s intellectual property independent of any hardware that those companies purchase, and basing the royalty of that intellectual property on (an admittedly crude measure of) the value the IP contributes to that product. High prices alone do not constitute harm to competition. The FTC must clearly explain why its complaint is not simply about the “fairness” of the outcome or its desire that Qualcomm employ different bargaining paradigms, but rather how Qualcomm’s behavior harms the process of competition.
In the late 1950s, Nobel Laureate Robert Solow attributed about seven-eighths of the growth in U.S. GDP to technical progress. As Solow later commented: “Adding a couple of tenths of a percentage point to the growth rate is an achievement that eventually dwarfs in welfare significance any of the standard goals of economic policy.” While he did not have antitrust in mind, the import of his comment is clear: whatever static gains antitrust litigation may achieve, they are likely dwarfed by the dynamic gains represented by innovation.
Patent law is designed to maintain a careful balance between the costs of short-term static losses and the benefits of long-term gains that result from new technology. The FTC should present a sound theoretical or empirical basis for believing that the proposed relief sufficiently rewards inventors and allows them to capture a reasonable share of the whole value their innovations bring to consumers, lest such antitrust intervention deter investments in innovation.