The FTC’s recent settlement with YouTube, including a $170 million fine over charges that YouTube violated the Children’s Online Privacy Protection Act (COPPA), has brought the issue of targeted advertising back into the news. With an FTC workshop and COPPA Rule Review looming, it’s worth looking at this case in more detail and reconsidering the 2013 amendment to COPPA’s definition of personal information.
According to the complaint issued by the FTC and the New York Attorney General, YouTube violated COPPA by collecting personal information of children on its platform without obtaining parental consent. While the headlines scream that this is an egregious violation of privacy and parental rights, a closer look suggests that there is actually very little about the case that normal people would find to be all that troubling. Instead, it appears to be another in the current spate of elitist technopanics.
COPPA defines personal information to include persistent identifiers, like cookies, used for targeted advertising. These cookies allow site operators to have some idea of what kinds of websites a user may have visited previously. Having knowledge of users’ browsing history allows companies to advertise more effectively than is possible with contextual advertisements, which guess at users’ interests based upon the type of content being viewed at the time. The age-old problem for advertisers is that “half the money spent on advertising is wasted; the trouble is they don’t know which half.” While this isn’t completely solved by the use of targeted advertising based on web browsing and search history, the fact that such advertising is more lucrative than contextual advertising suggests that it works better for companies.
COPPA, since the 2013 update, states that persistent identifiers are personal information by themselves, even if not linked to any other information that could be used to actually identify children (i.e., anyone under 13 years old).
As a consequence of this rule, YouTube doesn’t allow children under 13 to create an account. Instead, YouTube created a separate mobile application called YouTube Kids with curated content targeted at younger users. That application serves only contextual advertisements that do not rely on cookies or other persistent identifiers, but the content available on YouTube Kids also remains available on YouTube.
YouTube’s error, in the eyes of the FTC, was leaving it to channel owners on its general-audience site to determine whether to monetize their content through targeted advertising or to opt out and use only contextual advertisements. It turns out that many of those channels — including channels the FTC identified as “directed to children” — made the more lucrative choice and served targeted advertisements on their channels.
Whether YouTube’s practices violate the letter of COPPA or not, a more fundamental question remains unanswered: What is the harm, exactly?
COPPA takes for granted that it is harmful for kids to receive targeted advertisements, even where, as here, the targeting is based not on any knowledge about the users as individuals, but upon the browsing and search history of the device they happen to be on. But children under 13 are extremely unlikely to have purchased the devices they use, to pay for the Internet access those devices require, or to have any disposable income or means of paying for goods and services online. Which makes one wonder: To whom are the advertisements served to children actually targeted? The answer is obvious to everyone but the FTC and those who support the COPPA Rule: the children’s parents.
Television programs aimed at children have long been supported by contextual advertisements for cereal and toys. Tony the Tiger and Lucky the Leprechaun were staples of Saturday morning cartoons when I was growing up, along with all kinds of Hot Wheels commercials. As I soon discovered as a kid, I could ask my parents to buy these things, but I had no ability to buy them on my own. In other words: Parental oversight is essentially built into any type of advertisement children see, in the sense that few children can realistically make their own purchases or even view those advertisements without their parents giving them a device and internet access to do so.
When broken down like this, it is much harder to see the harm. It’s one thing to create regulatory schemes to prevent stalkers, creepers, and perverts from using online information to interact with children. It’s quite another to greatly reduce the ability of children’s content to generate revenue by use of relatively anonymous persistent identifiers like cookies — and thus, almost certainly, to greatly reduce the amount of content actually made for and offered to children.
On the one hand, COPPA thus disregards the possibility that controls that take advantage of parental oversight may be the most cost-effective form of protection in such circumstances. As Geoffrey Manne noted regarding the FTC’s analogous complaint against Amazon under the FTC Act, a complaint that likewise ignored the possibility that Amazon’s in-app purchasing scheme was tailored to take advantage of parental oversight in order to avoid imposing excessive and needless costs:
[For the FTC], the imagined mechanism of “affirmatively seeking a customer’s authorized consent to a charge” is all benefit and no cost. Whatever design decisions may have informed the way Amazon decided to seek consent are either irrelevant, or else the user-experience benefits they confer are negligible….
Amazon is not abdicating its obligation to act fairly under the FTC Act and to ensure that users are protected from unauthorized charges. It’s just doing so in ways that also take account of the costs such protections may impose — particularly, in this case, on the majority of Amazon customers who didn’t and wouldn’t suffer such unauthorized charges….
On the other hand, enforcement of COPPA against targeted advertising on kids’ content will have perverse and self-defeating consequences. As Berin Szoka notes:
This settlement will cut advertising revenue for creators of child-directed content by more than half. This will give content creators a perverse incentive to mislabel their content. COPPA was supposed to empower parents, but the FTC’s new approach actually makes life harder for parents and cripples functionality even when they want it. In short, artists, content creators, and parents will all lose, and it is not at all clear that this will do anything to meaningfully protect children.
This war against targeted advertising aimed at children has a cost. While many cheer the fine levied against YouTube (or think it wasn’t high enough) and the promised changes to its platform (though the dissenting Commissioners didn’t think those went far enough, either), the actual result will be less content — and especially less free content — available to children.
Far from being a win for parents and children, the shift in oversight responsibility from parents to the FTC will likely lead to less-effective oversight, more difficult user interfaces, less children’s programming, and higher costs for everyone — all without obviously mitigating any harm in the first place.
Last week the International Center for Law & Economics (ICLE) and twelve noted law and economics scholars filed an amicus brief in the Ninth Circuit in FTC v. Qualcomm, in support of appellant (Qualcomm) and urging reversal of the district court’s decision. The brief was authored by Geoffrey A. Manne, President & founder of ICLE, and Ben Sperry, Associate Director, Legal Research of ICLE. Jarod M. Bona and Aaron R. Gott of Bona Law PC collaborated in drafting the brief and they and their team provided invaluable pro bono legal assistance, for which we are enormously grateful. Signatories on the brief are listed at the end of this post.
We’ve written about the case several times on Truth on the Market, as have a number of guest bloggers, in our ongoing blog series on the case here.
The ICLE amicus brief focuses on the ways that the district court exceeded the “error cost” guardrails erected by the Supreme Court to minimize the risk and cost of mistaken antitrust decisions, particularly those that wrongly condemn procompetitive behavior. As the brief notes at the outset:
The district court’s decision is disconnected from the underlying economics of the case. It improperly applied antitrust doctrine to the facts, and the result subverts the economic rationale guiding monopolization jurisprudence. The decision—if it stands—will undercut the competitive values antitrust law was designed to protect.
In essence, the Court’s monopolization case law implements the error cost framework by (among other things) obliging courts to operate under certain decision rules that limit the use of inferences about the consequences of a defendant’s conduct except when the circumstances create what game theorists call a “separating equilibrium.” A separating equilibrium is a
solution to a game in which players of different types adopt different strategies and thereby allow an uninformed player to draw inferences about an informed player’s type from that player’s actions.
The key problem in antitrust is that while the consequence of complained-of conduct for competition (i.e., consumers) is often ambiguous, its deleterious effect on competitors is typically quite evident—whether it is actually anticompetitive or not. The question is whether (and when) it is appropriate to infer anticompetitive effect from discernible harm to competitors.
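For concreteness, the quoted definition can be stated formally (our notation, offered as a sketch rather than language from the brief): let the defendant’s type be competitive (C) or anticompetitive (A), and let each type choose an action. A separating equilibrium requires

```latex
% Separating equilibrium (sketch): each type weakly prefers its own action,
% and the actions differ, so the observed action reveals the type.
\pi_C(a_C) \ge \pi_C(a_A), \qquad
\pi_A(a_A) \ge \pi_A(a_C), \qquad
a_C \neq a_A .
```

Only when these conditions hold does observing the action of the anticompetitive type permit an uninformed court to infer that type. When both types would choose the same action (a pooling equilibrium), the conduct carries no information about its competitive character, and inference from conduct to effect is unreliable.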
Except in the narrowly circumscribed (by Trinko) instance of a unilateral refusal to deal, anticompetitive harm under the rule of reason must be proven. It may not be inferred from harm to competitors, because such an inference is too likely to be mistaken—and “mistaken inferences are especially costly, because they chill the very conduct the antitrust laws are designed to protect.” (Brooke Group, quoting yet another key Supreme Court antitrust error cost case, Matsushita (1986)).
Yet, as the brief discusses, in finding Qualcomm liable the district court did not demand or find proof of harm to competition. Instead, the court’s opinion relies on impermissible inferences from ambiguous evidence to find that Qualcomm had (and violated) an antitrust duty to deal with rival chip makers and that its conduct resulted in anticompetitive foreclosure of competition.
We urge you to read the brief (it’s pretty short—maybe the length of three blog posts) to get the whole argument. Below we draw attention to a few points we make in the brief that are especially significant.
The district court bases its approach entirely on Microsoft — which it misinterprets in clear contravention of Supreme Court case law
The district court doesn’t stay within the strictures of the Supreme Court’s monopolization case law. In fact, although it obligingly recites some of the error cost language from Trinko, it quickly moves away from Supreme Court precedent and bases its approach entirely on its reading of the D.C. Circuit’s Microsoft (2001) decision.
Unfortunately, the district court’s reading of Microsoft is mistaken and impermissible under Supreme Court precedent. Indeed, both the Supreme Court and the D.C. Circuit make clear that a finding of illegal monopolization may not rest on an inference of anticompetitive harm.
The district court cites Microsoft for the proposition that
Where a government agency seeks injunctive relief, the Court need only conclude that Qualcomm’s conduct made a “significant contribution” to Qualcomm’s maintenance of monopoly power. The plaintiff is not required to “present direct proof that a defendant’s continued monopoly power is precisely attributable to its anticompetitive conduct.”
It’s true Microsoft held that, in government actions seeking injunctions, “courts [may] infer ‘causation’ from the fact that a defendant has engaged in anticompetitive conduct that ‘reasonably appears capable of making a significant contribution to maintaining monopoly power.’” (Emphasis added).
But Microsoft never suggested that anticompetitiveness itself may be inferred.
“Causation” and “anticompetitive effect” are not the same thing. Indeed, Microsoft addresses “anticompetitive conduct” and “causation” in separate sections of its decision. And whereas Microsoft allows that courts may infer “causation” in certain government actions, it makes no such allowance with respect to “anticompetitive effect.” In fact, it explicitly rules it out:
[T]he plaintiff… must demonstrate that the monopolist’s conduct indeed has the requisite anticompetitive effect…; no less in a case brought by the Government, it must demonstrate that the monopolist’s conduct harmed competition, not just a competitor.
The D.C. Circuit subsequently reinforced this clear conclusion of its holding in Microsoft in Rambus:
Deceptive conduct—like any other kind—must have an anticompetitive effect in order to form the basis of a monopolization claim…. In Microsoft… [t]he focus of our antitrust scrutiny was properly placed on the resulting harms to competition.
Finding causation entails connecting evidentiary dots, while finding anticompetitive effect requires an economic assessment. Without such analysis it’s impossible to distinguish procompetitive from anticompetitive conduct, and basing liability on such an inference effectively writes “anticompetitive” out of the law.
Thus, the district court is correct when it holds that it “need not conclude that Qualcomm’s conduct is the sole reason for its rivals’ exits or impaired status.” But it is simply wrong to hold—in the same sentence—that it can thus “conclude that Qualcomm’s practices harmed competition and consumers.” The former claim is consistent with Microsoft; the latter is emphatically not.
Under Trinko and Aspen Skiing the district court’s finding of an antitrust duty to deal is impermissible
Because finding that a company operates under a duty to deal essentially permits a court to infer anticompetitive harm without proof, such a finding “comes dangerously close to being a form of ‘no-fault’ monopolization,” as Herbert Hovenkamp has written. It is thus seriously disfavored by the Court’s error cost jurisprudence.
In Trinko the Supreme Court interprets its holding in Aspen Skiing to identify essentially a single scenario from which it may plausibly be inferred that a monopolist’s refusal to deal with rivals harms consumers: the existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for the monopolist.
In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.”
But it’s not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. In short, what the Court requires is that the defendant exhibit behavior that, but for the expectation of future, anticompetitive returns, is irrational.
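Read this way, the Aspen Skiing exception functions as a profit-sacrifice screen, and the screen is simple enough to state in a few lines of code. The following is a minimal sketch with purely hypothetical numbers (nothing here is drawn from the record):

```python
# Profit-sacrifice screen implied by Trinko's reading of Aspen Skiing:
# a refusal to deal supports an inference of anticompetitive exclusion
# only if dealing on the terms currently on offer would be MORE profitable
# in the short run than refusing -- i.e., the refusal is irrational but
# for the expectation of future, anticompetitive returns.

def refusal_supports_inference(profit_if_deal: float,
                               profit_if_refuse: float) -> bool:
    """True if refusing sacrifices short-run profit relative to dealing."""
    return profit_if_deal > profit_if_refuse

# Walking away from a deal worth 100 to take an alternative worth 60
# sacrifices 40 of short-run profit; exclusionary intent may be inferred.
print(refusal_supports_inference(profit_if_deal=100, profit_if_refuse=60))  # True

# Walking away from a deal worth 60 for an alternative worth 100 is simply
# choosing the more profitable option; no inference is warranted, however
# much a rival may be harmed.
print(refusal_supports_inference(profit_if_deal=60, profit_if_refuse=100))  # False
```

The point of the screen is that only a short-run sacrifice separates the anticompetitive monopolist from the merely efficient one; absent a sacrifice, both types would behave identically and nothing can be inferred from the refusal.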
It should be noted, as John Lopatka (here) and Alan Meese (here) (both of whom joined the amicus brief) have written, that even the Supreme Court’s approach is likely insufficient to permit a court to distinguish between procompetitive and anticompetitive conduct.
But what is certain is that the district court’s approach in no way permits such an inference.
“Evasion of a competitive constraint” is not an antitrust-relevant refusal to deal
In order to infer anticompetitive effect, it’s not enough that a firm may have a “duty” to deal, as that term is colloquially used, based on some obligation other than an antitrust duty; the evasion of such an obligation supports no inference that the conduct is anticompetitive.
The district court bases its determination that Qualcomm’s conduct is anticompetitive on the fact that it enables the company to avoid patent exhaustion, FRAND commitments, and thus price competition in the chip market. But this conclusion is directly precluded by the Supreme Court’s holding in NYNEX.
Indeed, in Rambus, the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.”
[T]he objection to the “evasion” of any constraint approach is… that it opens the door to enforcement actions applied to business conduct that is not likely to harm competition and might be welfare increasing.
Thus NYNEX and Rambus (and linkLine) reinforce the Court’s repeated holding that an inference of harm to competition is permissible only where conduct points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” do not suffice.
The district court’s elaborate theory of harm rests fundamentally on the claim that Qualcomm injures rivals—yet the record is devoid of evidence demonstrating actual harm to competition. Instead, the court infers it from what it labels “unreasonably high” royalty rates, enabled by Qualcomm’s evasion of competition from rivals. In turn, the court finds that that evasion of competition can be the source of liability if what Qualcomm evaded was an antitrust duty to deal. And, in impermissibly circular fashion, the court finds that Qualcomm indeed evaded an antitrust duty to deal—because its conduct allowed it to sustain “unreasonably high” prices.
The Court’s antitrust error cost jurisprudence—from Brooke Group to NYNEX to Trinko & linkLine—stands for the proposition that no such circular inferences are permitted.
The district court’s foreclosure analysis also improperly relies on inferences in lieu of economic evidence
Because the district court doesn’t perform a competitive effects analysis, it fails to demonstrate the requisite “substantial” foreclosure of competition required to sustain a claim of anticompetitive exclusion. Instead the court once again infers anticompetitive harm from harm to competitors.
The district court makes no effort to establish the quantity of competition foreclosed as required by the Supreme Court. Nor does the court demonstrate that the alleged foreclosure harms competition, as opposed to just rivals. Foreclosure per se is not impermissible and may be perfectly consistent with procompetitive conduct.
Again citing Microsoft, the district court asserts that a quantitative finding is not required. Yet, as the court’s citation to Microsoft should have made clear, in its stead a court must find actual anticompetitive effect; it may not simply assert it. As Microsoft held:
It is clear that in all cases the plaintiff must… prove the degree of foreclosure. This is a prudential requirement; exclusivity provisions in contracts may serve many useful purposes.
The court essentially infers substantiality from the fact that Qualcomm entered into exclusive deals with Apple (actually, volume discounts), from which the court concludes that Qualcomm foreclosed rivals’ access to a key customer. But its inference that this led to substantial foreclosure is based on internal business statements—so-called “hot docs”—characterizing the importance of Apple as a customer. Yet, as Geoffrey Manne and Marc Williamson explain, such documentary evidence is unreliable as a guide to economic significance or legal effect:
Business people will often characterize information from a business perspective, and these characterizations may seem to have economic implications. However, business actors are subject to numerous forces that influence the rhetoric they use and the conclusions they draw….
There are perfectly good reasons to expect to see “bad” documents in business settings when there is no antitrust violation lurking behind them.
Assuming such language has the requisite economic or legal significance is unsupportable—especially when, as here, the requisite standard demands a particular quantitative significance.
Moreover, the court’s “surcharge” theory of exclusionary harm rests on assumptions regarding the mechanism by which the alleged surcharge excludes rivals and harms consumers. But the court incorrectly asserts that only one mechanism operates—and it makes no effort to quantify it.
The court cites “basic economics” via Mankiw’s Principles of Microeconomics text for its conclusion:
The surcharge affects demand for rivals’ chips because as a matter of basic economics, regardless of whether a surcharge is imposed on OEMs or directly on Qualcomm’s rivals, “the price paid by buyers rises, and the price received by sellers falls.” Thus, the surcharge “places a wedge between the price that buyers pay and the price that sellers receive,” and demand for such transactions decreases. Rivals see lower sales volumes and lower margins, and consumers see less advanced features as competition decreases.
But even assuming the court is correct that Qualcomm’s conduct entails such a surcharge, basic economics does not hold that decreased demand for rivals’ chips is the only possible outcome.
In actuality, an increase in the cost of an input for OEMs can have three possible effects:
1. OEMs can pass all or some of the cost increase on to consumers in the form of higher phone prices. Assuming some elasticity of demand, this would mean fewer phone sales and thus less demand by OEMs for chips, as the court asserts. But the extent of that effect would depend on consumers’ demand elasticity and the magnitude of the cost increase as a percentage of the phone price. If demand is highly inelastic at this price (i.e., relatively insensitive to the relevant price change), it may have a tiny effect on the number of phones sold and thus the number of chips purchased—approaching zero as price insensitivity increases.
2. OEMs can absorb the cost increase and realize lower profits but continue to sell the same number of phones and purchase the same number of chips. This would not directly affect demand for chips or their prices.
3. OEMs can respond to a price increase by purchasing fewer chips from rivals and more chips from Qualcomm. While this would affect rivals’ chip sales, it would not necessarily affect consumer prices, the total number of phones sold, or OEMs’ margins—that result would depend on whether Qualcomm’s chips cost more or less than its rivals’. If the latter, it would even increase OEMs’ margins and/or lower consumer prices and increase output.
Alternatively, of course, the effect could be some combination of these; a rough numerical sketch of the first scenario follows below.
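To make the first scenario concrete, here is a back-of-the-envelope sketch in Python (purely hypothetical numbers; nothing is drawn from the record) showing how pass-through and demand elasticity jointly determine the effect of a per-phone surcharge on phone, and hence chip, sales:

```python
# Rough pass-through arithmetic for scenario 1 above.
# All parameter values are hypothetical and purely illustrative.

def change_in_phone_sales(surcharge, phone_price, pass_through, elasticity):
    """Approximate percent change in phones sold when OEMs pass some share
    of a per-phone surcharge on to consumers.

    surcharge:    per-phone royalty surcharge (dollars)
    phone_price:  retail phone price (dollars)
    pass_through: share of the surcharge passed on to consumers (0 to 1)
    elasticity:   own-price elasticity of consumer demand (negative)
    """
    pct_price_change = pass_through * surcharge / phone_price
    return elasticity * pct_price_change * 100  # percent change in quantity

# $5 surcharge on a $700 phone, fully passed through, elastic demand (-2):
print(change_in_phone_sales(5, 700, 1.0, -2.0))  # about -1.43% fewer phones

# Same surcharge, half passed through, inelastic demand (-0.5):
print(change_in_phone_sales(5, 700, 0.5, -0.5))  # about -0.18%, near zero
```

Under some parameter values the predicted effect is material; under others it approaches zero. That range is precisely why the magnitude of the effect must be demonstrated rather than assumed.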
Whether any of these outcomes would substantially exclude rivals is inherently uncertain to begin with. But demonstrating a reduction in rivals’ chip sales is a necessary but not sufficient condition for proving anticompetitive foreclosure. The FTC didn’t even demonstrate that rivals were substantially harmed, let alone that there was any effect on consumers—nor did the district court make such findings.
Doing so would entail consideration of whether decreased demand for rivals’ chips flows from reduced consumer demand or OEMs’ switching to Qualcomm for supply, how consumer demand elasticity affects rivals’ chip sales, and whether Qualcomm’s chips were actually less or more expensive than rivals’. Yet the court determined none of these.
Contrary to established Supreme Court precedent, the district court’s decision relies on mere inferences to establish anticompetitive effect. The decision, if it stands, would render a wide range of potentially procompetitive conduct presumptively illegal and thus harm consumer welfare. It should be reversed by the Ninth Circuit.
Joining ICLE on the brief are:
Donald J. Boudreaux, Professor of Economics, George Mason University
Kenneth G. Elzinga, Robert C. Taylor Professor of Economics, University of Virginia
Janice Hauge, Professor of Economics, University of North Texas
Justin (Gus) Hurwitz, Associate Professor of Law, University of Nebraska College of Law; Director of Law & Economics Programs, ICLE
Thomas A. Lambert, Wall Chair in Corporate Law and Governance, University of Missouri Law School
John E. Lopatka, A. Robert Noll Distinguished Professor of Law, Penn State University Law School
Daniel Lyons, Professor of Law, Boston College Law School
Geoffrey A. Manne, President and Founder, International Center for Law & Economics; Distinguished Fellow, Northwestern University Center on Law, Business & Economics
Alan J. Meese, Ball Professor of Law, William & Mary Law School
Paul H. Rubin, Samuel Candler Dobbs Professor of Economics Emeritus, Emory University
Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics, Chapman University School of Business; Nobel Laureate in Economics, 2002
Michael Sykuta, Associate Professor of Economics, University of Missouri
Last week the Senate Judiciary Committee held a hearing, Intellectual Property and the Price of Prescription Drugs: Balancing Innovation and Competition, that explored whether changes to the pharmaceutical patent process could help lower drug prices. The committee’s goal was to evaluate various legislative proposals that might facilitate the entry of cheaper generic drugs, while also recognizing that strong patent rights for branded drugs are essential to incentivize drug innovation. As Committee Chairman Lindsey Graham explained:
One thing you don’t want to do is kill the goose who laid the golden egg, which is pharmaceutical development. But you also don’t want to have a system that extends unnecessarily beyond the ability to get your money back and make a profit, a patent system that drives up costs for the average consumer.
Several proposals discussed at the hearing have the potential to encourage competition in the pharmaceutical industry and help rein in drug prices. Below, I discuss these proposals, plus a few additional reforms. I also point out some of the language in the current draft proposals that goes a bit too far and threatens the ability of drug makers to remain innovative.
1. Prevent brand drug makers from blocking generic companies’ access to drug samples. Some brand drug makers have attempted to delay generic entry by restricting generics’ access to the drug samples necessary to conduct FDA-required bioequivalence studies. Some have limited the ability of pharmacies or wholesalers to sell samples to generic companies, or have abused the REMS (Risk Evaluation and Mitigation Strategy) program to refuse samples to generics under the guise of REMS safety requirements. The Creating and Restoring Equal Access To Equivalent Samples (CREATES) Act of 2019 would allow potential generic competitors to bring an action in federal court for both injunctive relief and damages when brand companies block access to drug samples. It also gives the FDA discretion to approve alternative REMS safety protocols for generic competitors that have been denied samples under the brand companies’ REMS protocol. Although the vast majority of brand drug companies do not engage in the delay tactics addressed by CREATES, the Act would prevent the handful that do from thwarting generic competition. Increased generic competition should, in turn, reduce drug prices.
2. Restrict abuses of FDA Citizen Petitions. The citizen petition process was created as a way for individuals and community groups to flag legitimate concerns about drugs awaiting FDA approval. However, critics claim that the process has been misused by some brand drug makers who file petitions about specific generic drugs in the hopes of delaying their approval and market entry. Although the FDA has indicated that citizen petitions rarely delay the approval of generic drugs, a few drug makers, such as Shire ViroPharma, have clearly abused the process and put unnecessary strain on FDA resources. The Stop The Overuse of Petitions and Get Affordable Medicines to Enter Soon (STOP GAMES) Act is intended to prevent such abuses. The Act reinforces the FDA’s and FTC’s ability to crack down on petitions meant to lengthen the approval process of a generic competitor, which should deter abuses of the system that can occasionally delay generic entry. However, lawmakers should make sure that adopted legislation doesn’t limit the ability of stakeholders (including drug makers, which often know more about the safety of drugs than ordinary citizens) to raise serious concerns with the FDA.
3. Curtail Anticompetitive Pay-for-Delay Settlements. The Hatch-Waxman Act incentivizes generic companies to challenge brand drug patents by granting the first successful generic challenger a period of marketing exclusivity. Like all litigation, many of these patent challenges result in settlements instead of trials. The FTC and some courts have concluded that these settlements can be anticompetitive when the brand company agrees to pay the generic challenger in exchange for the generic company’s agreement to forestall the launch of its lower-priced drug. Settlements that result in a cash payment are a red flag for anticompetitive behavior, so pay-for-delay settlements have evolved to involve other forms of consideration instead. As a result, the Preserve Access to Affordable Generics and Biosimilars Act aims to make an exchange of anything of value presumptively anticompetitive if the terms include a delay in research, development, manufacturing, or marketing of a generic drug. Deterring obvious pay-for-delay settlements will prevent delays to generic entry, making cheaper drugs available as quickly as possible to patients.
However, the Act’s rigid presumption that an exchange of anything of value is anticompetitive may also prevent legitimate settlements that ultimately benefit consumers. Brand drug makers should be allowed to compensate generic challengers to eliminate litigation risk and escape litigation expenses, and many settlements result in the generic drug coming to market before the expiration of the brand patent, and possibly earlier than if there were prolonged litigation between the generic and brand company. A rigid presumption of anticompetitive behavior will deter these settlements, thereby increasing expenses for all parties that choose to litigate and possibly dissuading generics from bringing patent challenges in the first place. Indeed, the U.S. Supreme Court has declined to define these settlements as per se anticompetitive, and the FTC’s most recent agreement involving such settlements exempts several forms of exchanges of value. Any adopted legislation should follow the FTC’s lead and recognize that some exchanges of value are pro-consumer and procompetitive.
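The economics underlying both the concern and the caveat can be made concrete with a stylized example (hypothetical numbers reflecting the standard textbook logic, not an analysis of any actual settlement):

```python
# Stylized pay-for-delay arithmetic. Hypothetical numbers only.
# M: brand's annual profit with no generic entry (monopoly)
# B, G: brand's and generic's annual profits after generic entry (duopoly)
# p: probability the brand's patent survives the generic's challenge
# (Litigation costs, which can justify modest payments, are omitted.)

M, B, G, p = 100.0, 30.0, 20.0, 0.5

brand_if_litigate = p * M + (1 - p) * B    # expected brand profit: 65.0
generic_if_litigate = (1 - p) * G          # expected generic profit: 10.0

# Most the brand would pay for delay until patent expiry, and least the
# generic would accept to stay out of the market:
brand_max_payment = M - brand_if_litigate      # 35.0
generic_min_payment = generic_if_litigate      # 10.0

# Because monopoly profit exceeds combined duopoly profits (M > B + G),
# a payment range always exists in which BOTH firms prefer delay --
# funded, in effect, by consumers who keep paying the monopoly price.
print(brand_max_payment > generic_min_payment)  # True
```

The same arithmetic shows why a blanket presumption is overbroad: a payment no larger than the brand’s avoided litigation costs leaves the generic no better off than its expected litigation outcome and need not signal a purchased delay.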
4. Restore the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. I have previously discussed how an unbalanced inter partes review (IPR) process for challenging patents threatens to stifle drug innovation. Moreover, current law allows generic challengers to file duplicative claims in both federal court and through the IPR process. And because IPR proceedings do not have a standing requirement, the process has been exploited by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet. The added expense to drug makers of defending both duplicative claims and claims against challengers that are exploiting the system increases litigation costs, which may be passed on to consumers in the form of higher prices.
The Hatch-Waxman Integrity Act (HWIA) is designed to return the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. It requires generic challengers to choose between either Hatch-Waxman litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) or an IPR proceeding (which is faster and provides certain pro-challenger provisions). The HWIA would also eliminate the ability of hedge funds and similar entities to file IPR claims while shorting the stock. By reducing duplicative litigation and the exploitation of the IPR process, the HWIA will reduce costs and strengthen innovation incentives for drug makers. This will ensure that patent owners achieve clarity on the validity of their patents, which will spur new drug innovation and make sure that consumers continue to have access to life-improving drugs.
5. Curb illegal product hopping and patent thickets. Two drug maker tactics currently garnering a lot of attention are so-called “product hopping” and “patent thickets.” At its worst, product hopping involves brand drug makers making minor changes to a drug nearing the end of its patent so that they get a new patent on the slightly tweaked drug, and then withdrawing the original drug from the market so that patients shift to the newly patented drug and pharmacists can’t substitute a generic version of the original drug. Similarly, at their worst, patent thickets involve brand drug makers obtaining a web of patents on a single drug to extend the life of their exclusivity and make it too costly for other drug makers to challenge all of the patents associated with a drug. The proposed Affordable Prescriptions for Patients Act of 2019 is meant to stop these abuses of the patent system, which would facilitate generic entry and help to lower drug prices.
However, the Act goes too far by also capturing many legitimate activities in its definitions. For example, the bill defines as anticompetitive product hopping the selling of any improved version of a drug during a window that extends to a year after the launch of the first generic competitor. Presently, to acquire a patent and FDA approval, the improved version of the drug must be sufficiently different from, and innovative over, the original drug, yet the Act would prevent the drug maker from selling such a product without satisfying a demanding three-pronged test before the FTC or a district court. Similarly, the Act defines as an anticompetitive patent thicket any new patents filed on a drug in the same general family as the original patent, and this presumption can only be rebutted by providing extensive evidence and satisfying demanding standards before the FTC or a district court. As a result, the Act deters innovation activity that is at all related to an initial patent and, in doing so, ignores the fact that most important drug innovation is incremental innovation based on previous inventions. Thus, the proposal should be redrafted to capture truly anticompetitive product hopping and patent thicket activity, while exempting behavior that is critical for drug innovation.
Reforms that close loopholes in the current patent process should facilitate competition in the pharmaceutical industry and help to lower drug prices. However, lawmakers need to be sure that they don’t restrict patent rights to the extent that they deter innovation because a significant body of research predicts that patients’ health outcomes will suffer as a result.
[TOTM: The following is the fifth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.
This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]
[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]
In a recent article Joe Kattan and Tim Muris (K&M) criticize our article on the predictive power of bargaining models in antitrust, in which we used two recent applications to explore the implications for the use of bargaining models in courts and antitrust agencies going forward. Like other theoretical models used to predict competitive effects, complex bargaining models require courts and agencies to test their predictions rigorously against data from the real-world markets and institutions to which they are being applied. Where the “real-world evidence,” as Judge Leon described such data in AT&T/Time Warner, is inconsistent with the predictions of a complex bargaining model, the tribunal should reject the model rather than reality.
K&M, who represent Intel Corporation in connection with the FTC v. Qualcomm case now pending in the Northern District of California, focus exclusively upon, and take particular issue with, one aspect of our prior article: We argued that, as in AT&T/Time Warner, the market realities at issue in FTC v. Qualcomm are inconsistent with the use of Dr. Carl Shapiro’s bargaining model to predict competitive effects in the relevant market. K&M—no doubt confident in their superior knowledge of the underlying facts due to their representation in the matter—criticize our analysis for our purported failure to get our hands sufficiently dirty with the facts. They criticize our broader analysis of bargaining models and their application for its failure to discuss specific pieces of evidence presented at trial, and they offer up several quotations from Qualcomm’s customers as support for Shapiro’s economic analysis. K&M concede that, as we argue, the antitrust laws should not condemn a business practice in the absence of robust economic evidence of actual or likely harm to competition; yet they see no conflict between that concession and their position that the FTC need not, through its expert, quantify the royalty surcharge imposed by Qualcomm because the “exact size of the overcharge was not relevant to the issue of Qualcomm’s liability.” [Kattan and Muris miss the point that, within the context of economic modeling, the failure to identify the magnitude of an effect with any certainty when data are available, including whether the effect is statistically different from zero, calls into question the model’s robustness more generally.]
Though our prior article was a broad one, not limited to FTC v. Qualcomm or intended to cover record evidence in detail, we welcome K&M’s critique and are happy to accept their invitation to engage further on the facts of that particular case. We agree that accounting for market realities is very important when complex economic models are at play. Unfortunately, K&M’s position that the evidence “supports Shapiro’s testimony overwhelmingly” ignores the sound empirical evidence employed by Dr. Aviv Nevo during trial and has not aged well in light of the internal Apple documents made public in Qualcomm’s Opening Statement following the companies’ decision to settle the case, which Apple had initiated in January 2017.
Qualcomm’s Opening Statement in the Apple litigation revealed a number of new facts that are problematic, to say the least, for K&M’s position and even more troublesome for Shapiro’s model and the FTC’s case. Of course, as counsel to an interested party in the FTC case, it is entirely possible that K&M were aware of the internal Apple documents cited in Qualcomm’s Opening Statement (or similar documents) and simply disagree about their significance. On the other hand, it is quite clear the Department of Justice Antitrust Division found them to be significantly damaging; it took the rare step of filing a Statement of Interest of the United States with the district court citing the documents and imploring the court to call for additional briefing and hold a hearing on issues related to a remedy in the event that it finds Qualcomm liable on any of the FTC’s claims. The internal Apple documents cited in Qualcomm’s Opening Statement leave no doubt as to several critical market realities that call into question the FTC’s theory of harm and Shapiro’s attempts to substantiate it.
(For more on the implications of these documents, see Geoffrey Manne’s post in this series, here).
First, the documents laying out Apple’s litigation strategy clearly establish that Apple has a high regard for Qualcomm’s technology and patent portfolio, and that it strategized for several years about how to reduce its net royalties and hurt Qualcomm financially.
Second, the documents undermine Apple’s public complaints about Qualcomm and call into question the validity of the underlying theory of harm in the FTC’s case. In particular, the documents plainly debunk Apple’s claims that Qualcomm’s patents weakened over time as a result of a decline in the quality of the technology and that Qualcomm devised an anticompetitive strategy in order to extract value from a weakening portfolio. The documents illustrate that in fact, Apple adopted a deliberate strategy of trying to manipulate the value of Qualcomm’s portfolio. The company planned to “creat[e] evidence” by leveraging its purchasing power to methodically license less expensive patents in hope of making Qualcomm’s royalties appear artificially inflated. In other words, if Apple’s made-for-litigation position were correct, then it would be only because of Apple’s attempt to manipulate and devalue Qualcomm’s patent portfolio, not because there had been any real change in its value.
Third, the documents directly refute some of the arguments K&M put forth in their critique of our prior article, in which we invoked Dr. Nevo’s empirical analysis of royalty rates over time as important evidence of historical facts that contradict Dr. Shapiro’s model. For example, K&M attempt to discredit Nevo’s analysis by claiming he did not control for changes in the strength of Qualcomm’s patent portfolio which, they claim, had weakened over time. According to internal Apple documents, however, “Qualcomm holds a stronger position in . . . , and particularly with respect to cellular and Wi-Fi SEPs” than do Huawei, Nokia, Ericsson, IDCC, and Apple. Another document states that “Qualcomm is widely considered the owner of the strongest patent portfolio for essential and relevant patents for wireless standards.” Indeed, Apple’s documents show that Apple sought artificially to “devalue SEPs” in the industry by “build[ing] favorable, arms-length ‘comp’ licenses” in an attempt to reduce what FRAND means. The ultimate goal of this pursuit was stated frankly by Apple: To “reduce Apple’s net royalty to Qualcomm” despite conceding that Qualcomm’s chips “engineering wise . . . have been the best.”
As new facts relevant to the FTC’s case and contrary to its theory of harm come to light, it is important to re-emphasize the fundamental point of our prior article: Model predictions that are inconsistent with actual market evidence should give fact finders serious pause before accepting the results as reliable. This advice is particularly salient in a case like FTC v. Qualcomm, where intellectual property and innovation are critical components of the industry and its competitiveness, because condemning behavior that is not truly anticompetitive may have serious, unintended consequences. (See Douglas H. Ginsburg & Joshua D. Wright, Dynamic Analysis and the Limits of Antitrust Institutions, 78 Antitrust L.J. 1 (2012); Geoffrey A. Manne & Joshua D. Wright, Innovation and the Limits of Antitrust, 6 J. Competition L. & Econ. 153 (2010)).
The serious consequences of a false positive, that is, the erroneous condemnation of a procompetitive or competitively neutral business practice, are undoubtedly what caused the Antitrust Division to file its Statement of Interest in the FTC’s case against Qualcomm. That Statement correctly highlights the Apple documents as support for the Government’s concern that “an overly broad remedy in this case could reduce competition and innovation in markets for 5G technology and downstream applications that rely on that technology.”
In this reply, we examine closely the market realities that conflict with and hence undermine both Dr. Shapiro’s bargaining model and the FTC’s theory of harm in its case against Qualcomm. We believe the “large body of evidence” offered by K&M supporting Shapiro’s theoretical analysis is insufficient to sustain his conclusions under standard antitrust analysis, including the requirement that a plaintiff alleging monopolization or attempted monopolization provide evidence of actual or likely anticompetitive effects. We will also discuss the implications of the newly public internal Apple documents for the FTC’s case, which remains pending at the time of this writing, and for future government investigations involving allegedly anticompetitive licensing of intellectual property.
I. Kattan and Muris Rely Upon Inconsequential Testimony and Mischaracterize Dr. Nevo’s Empirical Analysis
K&M march through a series of statements from Qualcomm’s customers asserting that the threat of Qualcomm discontinuing the supply of modem chips forced them to agree to unreasonable licensing demands. This testimony, however, is reminiscent of Dr. Shapiro’s testimony in AT&T/Time Warner concerning the threat of a long-term blackout of CNN and other Turner channels: Qualcomm has never cut off any customer’s supply of chips. The assertion that companies negotiating with Qualcomm either had to “agree to the license or basically go out of business” ignores the reality that even if Qualcomm discontinued supplying chips to a customer, the customer could obtain chips from one of four rival sources. This was not a theoretical possibility. Indeed, Apple has been sourcing chips from Intel since 2016 and made the decision to switch to Intel specifically in order, in its own words, to exert “commercial pressure against Qualcomm.”
Further, as Dr. Nevo pointed out at trial, SEP license agreements are typically long term (e.g., 10 or 15 year agreements) and are negotiated far less frequently than chip prices, which are typically negotiated annually. In other words, Qualcomm’s royalty rate is set prior to and independent of chip sale negotiations.
K&M raise a number of theoretical objections to Nevo’s empirical analysis. For example, K&M accuse Nevo of “cherry picking” the licenses he included in his empirical analysis to show that royalty rates remained constant over time, stating that he “excluded from consideration any license that had non-standard terms.” They mischaracterize Nevo’s testimony on this point. Nevo excluded from his analysis agreements that, according to the FTC’s own theory of harm, would be unaffected (e.g., agreements that were signed subject to government supervision or agreements that have substantially different risk splitting provisions). In any event, Nevo testified that modifying his analysis to account for Shapiro’s criticism regarding the excluded agreements would have no material effect on his conclusions. To our knowledge, Nevo’s testimony is the only record evidence providing any empirical analysis of the effects of Qualcomm’s licensing agreements.
As previously mentioned, K&M also claim that Dr. Nevo’s analysis failed to account for the alleged weakening of Qualcomm’s patent portfolio over time. Apple’s internal documents, however, are fatal to that claim. K&M also pinpoint the failure to control for differences among customers and changes in the composition of handsets over time as critical errors in Nevo’s analysis. Their assertion that Nevo should have controlled for differences among customers is puzzling. They do not elaborate upon that criticism, but they seem to believe different customers are entitled to different FRAND rates for the same license. But Qualcomm’s standard practice—due to the enormous size of its patent portfolio—is and has always been to charge all licensees the same rate for the entire portfolio.
As to changes in the composition of handsets over time, no doubt a smartphone today has many more features than a first-generation handset that only made and received calls; those new features, however, would be meaningless without Qualcomm’s SEPs, which are implemented by mobile chips that enable cellular communication. One must wonder why Qualcomm should have reduced the royalty rate on licenses for patents that are just as fundamental to the functioning of mobile phones today as they were to the functioning of a first-generation handset. K&M ignore the fundamental importance of Qualcomm’s SEPs in claiming that royalty rates should have declined along with the declining quality-adjusted prices of mobile phones. They also, conveniently, ignore the evidence that the industry has been characterized by increasing output and quality—increases that can certainly be attributed at least in part to Qualcomm’s chips being “engineering wise . . . the best.”
II. Apple’s Internal Documents Eviscerate the FTC’s Theory of Harm
The FTC’s theory of harm is premised upon Qualcomm’s allegedly charging a supra-FRAND rate for its SEPs (the “royalty surcharge”), which squeezes the margins of OEMs and consequently prevents rival chipset suppliers from obtaining a sufficient return when negotiating with those OEMs. (See Luke Froeb, et al.’s criticism of the FTC’s theory of harm on these and related grounds, here). To predict the effects of Qualcomm’s allegedly anticompetitive conduct, Dr. Shapiro compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs. Shapiro testified that he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences” for competition and for consumers, though his bargaining model did not quantify the effects of Qualcomm’s practice.
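As we understand it, the intuition of the surcharge theory reduces to a simple bargaining calculation. Here is a minimal sketch of that logic (our illustration, with hypothetical numbers; it is not drawn from Shapiro’s testimony):

```python
# Minimal sketch of the "royalty surcharge" intuition, using a symmetric
# Nash bargaining split. Hypothetical numbers only.

def rival_chip_margin(oem_value, rival_cost, royalty_surcharge):
    """Rival chipmaker's margin when it splits gains from trade 50/50 with
    an OEM whose willingness to pay for the rival's chip is reduced by any
    surcharge owed to Qualcomm on phones using that chip."""
    surplus = oem_value - royalty_surcharge - rival_cost
    return max(surplus, 0.0) / 2

# No surcharge: a chip the OEM values at 40 and the rival makes at 20
# leaves a surplus of 20, split 10/10.
print(rival_chip_margin(oem_value=40, rival_cost=20, royalty_surcharge=0))   # 10.0

# A surcharge of 10 shrinks the divisible surplus; the rival's margin
# falls to 5 -- the mechanism by which the theory says rivals are squeezed.
print(rival_chip_margin(oem_value=40, rival_cost=20, royalty_surcharge=10))  # 5.0
```

The model’s predictions, of course, depend on the surcharge actually existing and being quantified, which is precisely what the empirical evidence discussed here calls into question.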
The premise of the FTC theory requires a belief about FRAND as a meaningful, objective competitive benchmark that Qualcomm was able to evade as a result of its market power in chipsets. But Apple manipulated negotiations as a tactic to reshape FRAND itself. The closer look at the facts invited by K&M does nothing to improve one’s view of the FTC’s claims. The Apple documents exposed at trial make it clear that Apple deliberately manipulated negotiations with other suppliers in order to make it appear to courts and antitrust agencies that something other than the quality of Qualcomm’s technology was driving royalty rates. For example, Apple’s own documents show it sought artificially to “devalue SEPs” by “build[ing] favorable, arms-length ‘comp’ licenses” in an attempt to reshape what FRAND means in this industry. Simply put, Apple’s strategy was to negotiate cheap supposedly “comparable” licenses with other chipset suppliers as part of a plan to reduce its net royalties to Qualcomm.
As part of the same strategy, Apple spent years arguing to regulators and courts that Qualcomm’s patents were no better than those of its competitors. But its internal documents tell a very different story:
“Nokia’s patent portfolio is significantly weaker than Qualcomm’s.”
“[InterDigital] makes minimal contributions to [the 4G/LTE] standard”
“Compared to [Huawei, Nokia, Ericsson, IDCC, and Apple], Qualcomm holds a stronger position in , and particularly with respect to cellular and Wi-Fi SEPs.”
“Compared to other licensors, Qualcomm has more significant holdings in key areas such as media processing, non-cellular communications and hardware. Likewise, using patent citation analysis as a measure of thorough prosecution within the US PTO, Qualcomm patents (SEPs and non-SEPs both) on average score higher compared to the other, largely non-US based licensors.”
One internal document that is particularly troubling states that Apple’s plan was to “create leverage by building pressure” in order to (i) hurt Qualcomm financially and (ii) put Qualcomm’s licensing model at risk. What better way to harm Qualcomm financially and put its licensing model at risk than to complain to regulators that the business model is anticompetitive and tie the company up in multiple costly litigations? That businesses make strategic plans to harm one another is no surprise. But it underscores the importance of antitrust institutions – with their procedural and evidentiary requirements – to separate meritorious claims from fabricated ones. They failed to do so here.
III. Lessons Learned
So what should we make of evidence suggesting one of the FTC’s key informants during its investigation of Qualcomm didn’t believe the arguments it was selling? The exposure of Apple’s internal documents is a sobering reminder that the FTC is not immune from the risk of being hoodwinked by rent-seeking antitrust plaintiffs. That a firm might try to persuade antitrust agencies to investigate and sue its rivals is nothing new (see, e.g., William J. Baumol & Janusz A. Ordover, Use of Antitrust to Subvert Competition, 28 J.L. & Econ. 247 (1985)), but it is a particularly high-stakes game in modern technology markets.
Lesson number one: Requiring proof of actual anticompetitive effects rather than relying upon a model that is not robust to market realities is an important safeguard to ensure that Section 2 protects competition and not merely an individual competitor. Yet the agencies staked their cases in AT&T/Time Warner and FTC v. Qualcomm on bargaining models that fell short of proving anticompetitive effects. An agency convinced by one firm or firms to pursue an action against a rival for conduct that does not actually harm competition could have a significant and lasting anticompetitive effect on the market. Modern antitrust analysis requires plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed. That safeguard is particularly important when an agency is pursuing an enforcement action against a company in a market where the risks of regulatory capture and false positives are high. With calls to move away from the consumer welfare standard—which would exacerbate both the risks and consequences of false positives—it is imperative to embrace rather than reject the requirement of proof in monopolization cases. (See Elyse Dorsey, Jan Rybnicek & Joshua D. Wright, Hipster Antitrust Meets Public Choice Economics: The Consumer Welfare Standard, Rule of Law, and Rent-Seeking, CPI Antitrust Chron. (Apr. 2018); see also Joshua D. Wright et al., Requiem For a Paradox: The Dubious Rise and Inevitable Fall of Hipster Antitrust, 51 Ariz. St. L.J. 293 (2019).) The DOJ’s Statement of Interest is a reminder of this basic tenet.
Lesson number two: Antitrust should have a limited role in adjudicating disputes arising between sophisticated parties in bilateral negotiations of patent licenses. Overzealous claims of harm from patent holdup and anticompetitive licensing can deter the lawful exercise of patent rights, good faith modifications of existing contracts, and more generally interfere with the outcome of arm’s-length negotiations (See Bruce H. Kobayashi & Joshua D. Wright, The Limits of Antitrust and Patent Holdup: A Reply To Cary et al., 78 Antitrust L.J. 701 (2012)). It is also a difficult task for an antitrust regulator or court to identify and distinguish anticompetitive patent licenses from neutral or welfare-increasing behavior. An antitrust agency’s willingness to cast the shadow of antitrust remedies over one side of the bargaining table inevitably places the agency in the position of encouraging further rent-seeking by licensees seeking similar intervention on their behalf.
Finally, an antitrust agency that intervenes in patent holdup and licensing disputes on behalf of one party to a patent licensing agreement risks transforming itself into a price regulator. Apple’s fundamental complaint in its own litigation, and the core of the similar FTC allegation against Qualcomm, is that royalty rates are too high. The risks to competition and consumers of antitrust courts and agencies playing the role of central planner for the innovation economy are well known, and they are at their peak when the antitrust enterprise is used to set prices, mandate a particular organizational structure for the firm, or intervene in garden variety contract and patent disputes in high-tech markets.
The current Commission did not vote out the Complaint now being litigated in the Northern District of California. That case was initiated by an entirely different set of Commissioners. It is difficult to imagine the new Commissioners having no reaction to the Apple documents, and in particular to the perception they create that Apple was successful in manipulating the agency in its strategy to bolster its negotiating position against Qualcomm. A thorough reevaluation of the evidence here might well lead the current Commission to reconsider the merits of the agency’s position in the litigation and whether continuing is in the public interest. The Apple documents, should they enter the record, may significantly affect the Ninth Circuit’s or Supreme Court’s understanding of the FTC’s theory of harm.
[TOTM: The following is the fourth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]
The courtroom trial in the Federal Trade Commission’s (FTC’s) antitrust case against Qualcomm ended in January with a promise from the judge in the case, Judge Lucy Koh, to issue a ruling as quickly as possible — caveated by her acknowledgement that the case is complicated and the evidence voluminous. Well, things have only gotten more complicated since the end of the trial. Not only did Apple and Qualcomm reach a settlement in the antitrust case against Qualcomm that Apple filed just three days after the FTC brought its suit, but the abbreviated trial in that case saw the presentation by Qualcomm of some damning evidence that, if accurate, seriously calls into (further) question the merits of the FTC’s case.
Apple v. Qualcomm settles — and the DOJ takes notice
The Apple v. Qualcomm case, which was based on substantially the same arguments brought by the FTC in its case, ended abruptly last month after only a day and a half of trial — just enough time for the parties to make their opening statements — when Apple and Qualcomm reached an out-of-court settlement. The settlement includes a six-year global patent licensing deal, a multi-year chip supplier agreement, an end to all of the patent disputes around the world between the two companies, and a $4.5 billion settlement payment from Apple to Qualcomm.
That alone complicates the economic environment into which Judge Koh will issue her ruling. But the Apple v. Qualcomm trial also appears to have induced the Department of Justice Antitrust Division (DOJ) to weigh in on the FTC’s case with a Statement of Interest requesting that Judge Koh use caution in fashioning a remedy in the case should she side with the FTC, followed by a somewhat snarky Reply from the FTC arguing that the DOJ’s filing was untimely (and, reading the not-so-hidden subtext, unwelcome).
But buried in the DOJ’s Statement is an important indication of why it filed its Statement when it did, just about a week after the end of the Apple v. Qualcomm case, and a pointer to a much larger issue that calls the FTC’s case against Qualcomm even further into question (I previously wrote about the lack of theoretical and evidentiary merit in the FTC’s case here).
Footnote 6 of the DOJ’s Statement reads:
Internal Apple documents that recently became public describe how, in an effort to “[r]educe Apple’s net royalty to Qualcomm,” Apple planned to “[h]urt Qualcomm financially” and “[p]ut Qualcomm’s licensing model at risk,” including by filing lawsuits raising claims similar to the FTC’s claims in this case …. One commentator has observed that these documents “potentially reveal that Apple was engaging in a bad faith argument both in front of antitrust enforcers as well as the legal courts about the actual value and nature of Qualcomm’s patented innovation.” (Emphasis added).
Indeed, the slides presented by Qualcomm during that single day of trial in Apple v. Qualcomm are significant, not only for what they say about Apple’s conduct, but, more importantly, for what they say about the evidentiary basis for the FTC’s claims against the company.
The evidence presented by Qualcomm in its opening statement suggests some troubling conduct by Apple
Others have pointed to Qualcomm’s opening slides and the Apple internal documents they present to note Apple’s apparent bad conduct. As one commentator sums it up:
Although we really only managed to get a small glimpse of Qualcomm’s evidence demonstrating the extent of Apple’s coordinated strategy to manipulate the FRAND license rate, that glimpse was particularly enlightening. It demonstrated a decade-long coordinated effort within Apple to systematically engage in what can only fairly be described as manipulation (if not creation of evidence) and classic holdout.
Qualcomm showed during opening arguments that, dating back to at least 2009, Apple had been laying the foundation for challenging its longstanding relationship with Qualcomm. (Emphasis added).
The internal Apple documents presented by Qualcomm to corroborate this claim appear quite damning. Of course, absent explanation and cross-examination, it’s impossible to know for certain what the documents mean. But on their face they suggest Apple knowingly undertook a deliberate scheme (and knowingly took upon itself significant legal risk in doing so) to devalue patent portfolios comparable to Qualcomm’s:
The apparent purpose of this scheme was to devalue comparable patent licensing agreements where Apple had the power to do so (through litigation or the threat of litigation) in order to then use those agreements to argue that Qualcomm’s royalty rates were above the allowable, FRAND level, and to undermine the royalties Qualcomm would be awarded in courts adjudicating its FRAND disputes with the company. As one commentator put it:
Apple embarked upon a coordinated scheme to challenge weaker patents in order to beat down licensing prices. Once the challenges to those weaker patents were successful, and the licensing rates paid to those with weaker patent portfolios were minimized, Apple would use the lower prices paid for weaker patent portfolios as proof that Qualcomm was charging a super-competitive licensing price; a licensing price that violated Qualcomm’s FRAND obligations. (Emphasis added).
That alone is a startling revelation, if accurate, and one that would seem to undermine claims that patent holdout isn’t a real problem. It also would undermine Apple’s claims that it is a “willing licensee,” engaging with SEP licensors in good faith. (Indeed, this has been called into question before, and one Federal Circuit judge has noted in dissent that “[t]he record in this case shows evidence that Apple may have been a hold out.”). If the implications drawn from the Apple documents shown in Qualcomm’s opening statement are accurate, there is good reason to doubt that Apple has been acting in good faith.
Even more troubling is what it means for the strength of the FTC’s case
But the evidence offered in Qualcomm’s opening argument points to another, more troubling implication as well. We know that Apple has been coordinating with the FTC and was likely an important impetus for the FTC’s decision to bring an action in the first place. It seems reasonable to assume that Apple used these “manipulated” agreements to help make its case.
But what is most troubling is the extent to which it appears to have worked.
Qualcomm’s practices, including no license, no chips, skewed negotiations towards the outcomes that favor Qualcomm and lead to higher royalties. Qualcomm is committed to license its standard essential patents on fair, reasonable, and non-discriminatory terms. But even before doing market comparison, we know that the license rates charged by Qualcomm are too high and above FRAND because Qualcomm uses its chip power to require a license.
* * *
Mr. Michael Lasinski [the FTC’s patent valuation expert] compared the royalty rates received by Qualcomm to … the range of FRAND rates that ordinarily would form the boundaries of a negotiation … Mr. Lasinski’s expert opinion … is that Qualcomm’s royalty rates are far above any indicators of fair and reasonable rates. (Emphasis added).
The key question is what constitutes the “range of FRAND rates that ordinarily would form the boundaries of a negotiation”?
Because they were discussed under seal, we don’t know the precise agreements that the FTC’s expert, Mr. Lasinski, used for his analysis. But we do know something about them: His analysis entailed a study of only eight licensing agreements; in six of them, the licensee was either Apple or Samsung; and in all of them the licensor was either InterDigital, Nokia, or Ericsson. We also know that Mr. Lasinski’s valuation study did not include any Qualcomm licenses, and that the eight agreements he looked at were all executed after the district court’s decision in Microsoft v. Motorola in 2013.
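For context, it helps to see how small this analytical machine really is. A comparables study of this kind boils down to summarizing the royalty rates in a chosen set of benchmark licenses. The following is a minimal sketch of that logic; every licensor, licensee, and rate in it is invented for illustration, since the actual agreements remain under seal:

```python
# Toy comparables-based FRAND benchmark: summarize the royalty rates
# in a chosen set of benchmark licenses. Every agreement and rate
# below is invented for illustration; the sealed record is unknown.
comparables = [
    {"licensor": "LicensorA", "licensee": "OEM-1", "rate": 0.7},
    {"licensor": "LicensorB", "licensee": "OEM-1", "rate": 0.8},
    {"licensor": "LicensorA", "licensee": "OEM-2", "rate": 0.9},
]

rates = sorted(a["rate"] for a in comparables)
benchmark_range = (rates[0], rates[-1])  # the "boundaries of a negotiation"
print(benchmark_range)  # rates above this range get labeled supra-FRAND
```

The mechanics make the vulnerability plain: the output is purely a function of which agreements go into the set. If the set is dominated by licenses that the licensees litigated down, the benchmark range is biased downward, and any rate above it will be labeled supra-FRAND.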
A curiously small number of agreements
Right off the bat there is a curiosity in the FTC’s valuation analysis. Even though there are hundreds of SEP license agreements involving the relevant standards, the FTC’s analysis relied on only eight, three-quarters of which involved licenses taken by just two companies: Apple and Samsung.
Indeed, even since 2013 (a date to which we will return) there have been scads of licenses (see, e.g., here, here, and here). Apple and Samsung are not the only makers of CDMA and LTE devices; there are — quite literally — hundreds of other manufacturers out there, all of them licensing essentially the same technology — including global giants like LG, Huawei, HTC, Oppo, Lenovo, and Xiaomi. Why were none of their licenses included in the analysis?
At the same time, while InterDigital, Nokia, and Ericsson are among the largest holders of CDMA and LTE SEPs, several dozen companies have declared such patents, including Motorola (Alphabet), NEC, Huawei, Samsung, ZTE, and NTT DOCOMO. Again — why were none of their licenses included in the analysis?
All else equal, more data yields better results. This is particularly true where the data are complex license agreements which are often embedded in larger, even-more-complex commercial agreements and which incorporate widely varying patent portfolios, patent implementers, and terms.
Yet the FTC relied on just eight agreements in its comparability study, covering a tiny fraction of the industry’s licensors and licensees, and, notably, including primarily licenses taken by the two companies (Samsung and Apple) that have most aggressively litigated their way to lower royalty rates.
A curiously crabbed selection of licensors
And it is not just that the selected licensees represent a weirdly small and biased sample; it is also not necessarily even a particularly comparable sample.
One thing we can be fairly confident of, given what we know of the agreements used, is that at least one of the license agreements involved Nokia licensing to Apple, and another involved InterDigital licensing to Apple. But these companies’ patent portfolios are not exactly comparable to Qualcomm’s. About Nokia’s patents, Apple said:
And about InterDigital’s:
Meanwhile, Apple’s view of Qualcomm’s patent portfolio (despite its public comments to the contrary) was that it was considerably better than the others’:
The FTC’s choice of such a limited range of comparable license agreements is curious for another reason, as well: It includes no Qualcomm agreements. Qualcomm is certainly one of the biggest players in the cellular licensing space, and no doubt more than a few license agreements involve Qualcomm. While it might not make sense to include Qualcomm licenses that the FTC claims incorporate anticompetitive terms, that doesn’t describe the huge range of Qualcomm licenses with which the FTC has no quarrel. Among other things, Qualcomm licenses from before it began selling chips would not have been affected by its alleged “no license, no chips” scheme, nor would licenses granted to companies that didn’t also purchase Qualcomm chips. Furthermore, its licenses for technology reading on the WCDMA standard are not claimed to be anticompetitive by the FTC.
And yet none of these licenses were deemed “comparable” by the FTC’s expert, even though, on many dimensions — most notably, with respect to the underlying patent portfolio being valued — they would have been the most comparable (i.e., identical).
A curiously circumscribed timeframe
That the FTC’s expert should use the 2013 cut-off date is also questionable. According to Mr. Lasinski, he chose to use agreements after 2013 because it was in 2013 that the U.S. District Court for the Western District of Washington decided the Microsoft v. Motorola case. Among other things, the court in Microsoft v. Motorola held that the proper value of a SEP is its “intrinsic” patent value, including its value to the standard, but not including the additional value it derives from being incorporated into a widely used standard.
According to the FTC’s expert,
prior to [Microsoft v. Motorola], people were trying to value … the standard and the license based on the value of the standard, not the value of the patents ….
Asked by Qualcomm’s counsel if his concern was that the “royalty rates derived in license agreements for cellular SEPs [before Microsoft v. Motorola] could very well have been above FRAND,” Mr. Lasinski concurred.
The problem with this approach is that it’s little better than arbitrary. The Motorola decision was an important one, to be sure, but the notion that sophisticated parties in a multi-billion dollar industry were systematically agreeing to improper terms until a single court in Washington suggested otherwise is absurd. Of course, such agreements are negotiated in “the shadow of the law,” and judicial decisions like the one in Washington (later upheld by the Ninth Circuit) can affect the parties’ bargaining positions.
But even if it were true that the court’s decision had some effect on licensing rates, the decision would still have been only one of myriad factors determining parties’ relative bargaining power and their assessment of the proper valuation of SEPs. There is no basis to support the assertion that the Motorola decision marked a sea-change between “improper” and “proper” patent valuations. And, even if it did, it was certainly not alone in doing so, and the FTC’s expert offers no justification for determining that agreements reached before, say, the European Commission’s decision against Qualcomm in 2018 were “proper,” or that the Korea FTC’s decision against Qualcomm in 2009 didn’t have the same sort of corrective effect as the Motorola court’s decision in 2013.
At the same time, a review of a wider range of agreements suggested that Qualcomm’s licensing royalties weren’t inflated
Meanwhile, one of Qualcomm’s experts in the FTC case, former DOJ Chief Economist Aviv Nevo, looked at whether the FTC’s theory of anticompetitive harm was borne out by the data, examining Qualcomm’s royalty rates across time periods and standards and using a much larger set of agreements. Although his remit was different from Mr. Lasinski’s, and although he analyzed only Qualcomm licenses, his analysis still sheds light on Mr. Lasinski’s conclusions:
[S]pecifically what I looked at was the predictions from the theory to see if they’re actually borne in the data….
[O]ne of the clear predictions from the theory is that during periods of alleged market power, the theory predicts that we should see higher royalty rates.
So that’s a very clear prediction that you can take to data. You can look at the alleged market power period, you can look at the royalty rates and the agreements that were signed during that period and compare to other periods to see whether we actually see a difference in the rates.
Dr. Nevo’s analysis, which looked at royalty rates in Qualcomm’s SEP license agreements for CDMA, WCDMA, and LTE ranging from 1990 to 2017, found no differences in rates between periods when Qualcomm was alleged to have market power and when it was not alleged to have market power (or could not have market power, on the FTC’s theory, because it did not sell corresponding chips).
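The structure of that test is simple to state: partition the agreements by time period and compare the rates. Below is a minimal sketch of such a comparison. The agreements, years, and rates are invented, and the alleged market-power window is an assumption chosen for illustration, not taken from the record:

```python
# Hypothetical sketch of a period comparison like the one described:
# compare average royalty rates inside and outside the period of
# alleged market power. All data below are invented for illustration.
agreements = [
    {"year": 1995, "rate": 0.050}, {"year": 2004, "rate": 0.051},
    {"year": 2012, "rate": 0.050}, {"year": 2016, "rate": 0.049},
]
POWER_YEARS = range(2011, 2017)  # assumed window, not from the record

inside = [a["rate"] for a in agreements if a["year"] in POWER_YEARS]
outside = [a["rate"] for a in agreements if a["year"] not in POWER_YEARS]

def avg(xs):
    return sum(xs) / len(xs)

# If the theory were right, avg(inside) should exceed avg(outside);
# finding no meaningful difference cuts against the alleged mechanism.
print(avg(inside), avg(outside))
```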
The reason this is relevant is that Mr. Lasinski’s assessment implies that Qualcomm’s higher royalty rates weren’t attributable to its superior patent portfolio, leaving either anticompetitive conduct or non-anticompetitive, superior bargaining ability as the explanation. No one thinks Qualcomm has cornered the market on exceptional negotiators, so really the only proffered explanation for the results of Mr. Lasinski’s analysis is anticompetitive conduct. But this assumes that his analysis is actually reliable. Prof. Nevo’s analysis offers some reason to think that it is not.
All of the agreements studied by Mr. Lasinski were drawn from the period when Qualcomm is alleged to have employed anticompetitive conduct to elevate its royalty rates above FRAND. But when the actual royalties charged by Qualcomm during its alleged exercise of market power are compared to those charged when and where it did not have market power, the evidence shows it received identical rates. Mr. Lasinski’s results, then, would imply that Qualcomm’s royalties were “too high” not only while it was allegedly acting anticompetitively, but also when it was not. That simple fact suggests on its face that Mr. Lasinski’s analysis may have been flawed, and that it systematically under-valued Qualcomm’s patents.
Connecting the dots and calling into question the strength of the FTC’s case
In its closing argument, the FTC pulled together the implications of its allegations of anticompetitive conduct by pointing to Mr. Lasinski’s testimony:
Now, looking at the effect of all of this conduct, Qualcomm’s own documents show that it earned many times the licensing revenue of other major licensors, like Ericsson.
* * *
Mr. Lasinski analyzed whether this enormous difference in royalties could be explained by the relative quality and size of Qualcomm’s portfolio, but that massive disparity was not explained.
Qualcomm’s royalties are disproportionate to those of other SEP licensors and many times higher than any plausible calculation of a FRAND rate.
* * *
The overwhelming direct evidence, some of which is cited here, shows that Qualcomm’s conduct led licensees to pay higher royalties than they would have in fair negotiations.
It is possible, of course, that Mr. Lasinski’s methodology was flawed; indeed, at trial Qualcomm argued exactly this in challenging his testimony. But it is also possible that, whether his methodology was flawed or not, his underlying data were flawed.
It is impossible from the publicly available evidence to draw this conclusion definitively, but the subsequent revelation that Apple may well have manipulated at least a significant share of the eight agreements that constituted Mr. Lasinski’s data certainly increases its plausibility: We now know, following Qualcomm’s opening statement in Apple v. Qualcomm, that the stilted set of comparable agreements studied by the FTC’s expert also happens to be tailor-made to be dominated by agreements that Apple may have manipulated to reflect lower-than-FRAND rates.
What is most concerning is that the FTC may have built up its case on such questionable evidence, either by intentionally cherry picking the evidence upon which it relied, or inadvertently because it rested on such a needlessly limited range of data, some of which may have been tainted.
Intentionally or not, the FTC appears to have performed its valuation analysis using a needlessly circumscribed range of comparable agreements and justified its decision to do so using questionable assumptions. This seriously calls into question the strength of the FTC’s case.
[TOTM: The following is the third in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.
This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]
[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]
The Department of Justice Antitrust Division (DOJ) and Federal Trade Commission (FTC) have spent a significant amount of time in federal court litigating major cases premised upon an anticompetitive foreclosure theory of harm. Bargaining models, a tool used commonly in foreclosure cases, have been essential to the government’s theory of harm in these cases. In vertical merger or conduct cases, the core theory of harm is usually a variant of the claim that the transaction (or conduct) strengthens the firm’s incentives to engage in anticompetitive strategies that depend on negotiations with input suppliers. Bargaining models are a key element of the agency’s attempt to establish those claims and to predict whether and how firm incentives will affect negotiations with input suppliers, and, ultimately, the impact on equilibrium prices and output. Application of bargaining models played a key role in evaluating the anticompetitive foreclosure theories in the DOJ’s litigation to block the proposed merger of AT&T and Time Warner. A similar model is at the center of the FTC’s antitrust claims against Qualcomm and its patent licensing business model.
Modern antitrust analysis does not condemn business practices as anticompetitive without solid economic evidence of an actual or likely harm to competition. This cautious approach was developed in the courts for two reasons. The first is that the difficulty of distinguishing between procompetitive and anticompetitive explanations for the same conduct suggests there is a high risk of error. The second is that those errors are more likely to be false positives than false negatives because empirical evidence and judicial learning have established that unilateral conduct is usually either procompetitive or competitively neutral. In other words, while the risk of anticompetitive foreclosure is real, courts have sensibly responded by requiring plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.
An economic model can help establish the likelihood and/or magnitude of competitive harm when the model carefully captures the key institutional features of the competition it attempts to explain. Naturally, this tends to mean that the economic theories and models proffered by dueling economic experts to predict competitive effects take center stage in antitrust disputes. The persuasiveness of an economic model turns on the robustness of its assumptions about the underlying market. Model predictions that are inconsistent with actual market evidence give one serious pause before accepting the results as reliable.
For example, many industries are characterized by bargaining between providers and distributors. The Nash bargaining framework can be used to predict the outcomes of bilateral negotiations based upon each party’s bargaining leverage. The model assumes that both parties are better off if an agreement is reached, but that as the utility of one party’s outside option increases relative to the bargain, it will capture an increasing share of the surplus. Courts have had to reconcile these seemingly complicated economic models with prior case law and, in some cases, with direct evidence that is apparently inconsistent with the results of the model.
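To make the framework concrete, here is a minimal sketch of the symmetric Nash bargaining solution. The function and the numbers are purely illustrative and are not drawn from either expert’s model:

```python
# Symmetric Nash bargaining: each party receives its outside option
# plus an equal share of the surplus created by reaching agreement.
def nash_split(joint_value, outside_a, outside_b):
    surplus = joint_value - outside_a - outside_b
    if surplus <= 0:
        return None  # no deal: each party prefers its outside option
    return (outside_a + surplus / 2, outside_b + surplus / 2)

print(nash_split(100, 20, 20))  # (50.0, 50.0)
# Improving A's outside option shifts the split toward A, holding the
# joint value fixed; this is the model's core comparative static.
print(nash_split(100, 40, 20))  # (60.0, 40.0)
```

That comparative static (a better outside option yields a larger share of the same joint value) is the engine of the foreclosure theories discussed below.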
Indeed, Professor Carl Shapiro recently used bargaining models to analyze harm to competition in two prominent cases alleging anticompetitive foreclosure—one initiated by the DOJ and one by the FTC—in which he served as the government’s expert economist. In United States v. AT&T Inc., Dr. Shapiro testified that the proposed transaction between AT&T and Time Warner would give the vertically integrated company leverage to extract higher prices for content from AT&T’s rival, Dish Network. Soon after, Dr. Shapiro presented a similar bargaining model in FTC v. Qualcomm Inc. He testified that Qualcomm leveraged its monopoly power over chipsets to extract higher royalty rates from smartphone OEMs, such as Apple, wishing to license its standard essential patents (SEPs). In each case, Dr. Shapiro’s models were criticized heavily by the defendants’ expert economists for ignoring market realities that play an important role in determining whether the challenged conduct was likely to harm competition.
Judge Leon’s opinion in AT&T/Time Warner—recently upheld on appeal—concluded that Dr. Shapiro’s application of the bargaining model was significantly flawed, based upon unreliable inputs, and undermined by evidence about actual market performance presented by the defendant’s expert, Dr. Dennis Carlton. Dr. Shapiro’s theory of harm posited that the combined company would increase its bargaining leverage and extract greater affiliate fees for Turner content from AT&T’s distributor rivals. The increase in bargaining leverage was made possible by the threat of a post-merger blackout of Turner content for AT&T’s rivals. This theory rested on the assumption that the combined firm would have reduced financial exposure from a long-term blackout of Turner content and would therefore have more leverage to threaten a blackout in content negotiations. The purpose of his bargaining model was to quantify how much AT&T could extract from competitors subjected to a long-term blackout of Turner content.
Judge Leon highlighted a number of reasons for rejecting the DOJ’s argument. First, Dr. Shapiro’s model failed to account for existing long-term affiliate contracts, post-litigation offers of arbitration agreements, and the increasing competitiveness of the video programming and distribution industry. Second, Dr. Carlton had demonstrated persuasively that previous vertical integration in the video programming and distribution industry did not have a significant effect on content prices. Finally, Dr. Shapiro’s model primarily relied upon three inputs: (1) the total number of subscribers the unaffiliated distributor would lose in the event of a long-term blackout of Turner content, (2) the percentage of the distributor’s lost subscribers who would switch to AT&T as a result of the blackout, and (3) the profit margin AT&T would derive from the subscribers it gained from the blackout. Many of Dr. Shapiro’s inputs necessarily relied on critical assumptions and/or third-party sources. Judge Leon considered and discredited each input in turn.
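In skeleton form, those three inputs combine by simple multiplication: the value to the merged firm of a rival’s long-term blackout is the profit earned from the rival’s defecting subscribers. The sketch below uses invented numbers and is a simplification for illustration, not a reproduction of Dr. Shapiro’s actual model:

```python
# Simplified sketch of the leverage arithmetic built on the three
# inputs above. All numbers are invented for illustration.
def blackout_gain(lost_subscribers, share_switching_to_att, annual_margin):
    # Profit AT&T would capture from a rival's long-term Turner blackout.
    return lost_subscribers * share_switching_to_att * annual_margin

gain = blackout_gain(
    lost_subscribers=1_000_000,   # input (1): rival's subscriber losses
    share_switching_to_att=0.40,  # input (2): share defecting to AT&T
    annual_margin=600.0,          # input (3): AT&T margin per subscriber
)
print(f"${gain:,.0f} per year")  # $240,000,000 per year
```

Because the output scales linearly in each input, discrediting any one input undermines the entire estimate, which is why the input-by-input scrutiny described above was so consequential.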
The parties in Qualcomm are, as of the time of this posting, still awaiting a ruling. Dr. Shapiro’s model in that case attempts to predict the effect of Qualcomm’s alleged “no license, no chips” policy. He compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs. In other words, the FTC’s theory of harm is based upon the premise that Qualcomm is charging a supra-FRAND rate for its SEPs (the “royalty surcharge”) that squeezes the margins of OEMs. That margin squeeze, the FTC alleges, prevents rival chipset suppliers from obtaining a sufficient return when negotiating with OEMs. The FTC predicts the end result is a reduction in competition and an increase in the price of devices to consumers.
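Reduced to arithmetic, the alleged mechanism looks something like the sketch below. All prices and royalties are invented; the point is only to show how a surcharge attached to rivals’ chips would squeeze the margin available to them:

```python
# Hypothetical sketch of the FTC's "royalty surcharge" theory. An
# OEM's all-in cost is the chip price plus the royalty paid to
# Qualcomm either way. All numbers are invented for illustration.
frand_royalty = 10.0
surcharge = 5.0  # the alleged supra-FRAND increment

all_in_qualcomm = 30.0 + frand_royalty           # Qualcomm chip
all_in_rival = 28.0 + frand_royalty + surcharge  # rival chip + surcharge

# The surcharge swamps the rival's $2 price advantage, which on the
# FTC's theory denies rivals a sufficient return on their chips.
print(all_in_qualcomm, all_in_rival)  # 40.0 43.0
```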
Qualcomm, like Judge Leon in AT&T, questioned the robustness of Dr. Shapiro’s model and its predictions in light of conflicting market realities. For example, Dr. Shapiro argued that the
leverage that Qualcomm brought to bear on the chips shifted the licensing negotiations substantially in Qualcomm’s favor and led to a significantly higher royalty than Qualcomm would otherwise have been able to achieve.
Yet, on cross-examination, Dr. Shapiro declined to move from theory to empirics when asked if he had quantified the effects of Qualcomm’s practice on any other chip makers. Instead, Dr. Shapiro responded that he had not, but he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences.” Under Dr. Shapiro’s theory, one would predict that royalty rates were higher after Qualcomm obtained market power.
As with Dr. Carlton’s testimony inviting Judge Leon to square the DOJ’s theory with conflicting historical facts in the industry, Qualcomm’s economic expert, Dr. Aviv Nevo, provided an analysis of Qualcomm’s royalty agreements from 1990-2017, confirming that there was no economically meaningful difference between the royalty rates during the time frame when Qualcomm was alleged to have market power and the royalty rates outside of that time frame. He also presented evidence that ex ante royalty rates did not increase upon implementation of the CDMA standard or the LTE standard. Moreover, Dr. Nevo testified that the industry itself was characterized by declining prices and increasing output and quality.
Dr. Shapiro’s model in Qualcomm appears to suffer from many of the same flaws that ultimately discredited his model in AT&T/Time Warner: It is based upon assumptions that are contrary to real-world evidence and it does not robustly or persuasively identify anticompetitive effects. Some observers, including our Scalia Law School colleague and former FTC Chairman, Tim Muris, would apparently find it sufficient merely to allege a theoretical “ability to manipulate the marketplace.” But antitrust cases require actual evidence of harm. We think Professor Muris instead captured the appropriate standard in his important article rejecting attempts by the FTC to shortcut its requirement of proof in monopolization cases:
This article does reject, however, the FTC’s attempt to make it easier for the government to prevail in Section 2 litigation. Although the case law is hardly a model of clarity, one point that is settled is that injury to competitors by itself is not a sufficient basis to assume injury to competition …. Inferences of competitive injury are, of course, the heart of per se condemnation under the rule of reason. Although long a staple of Section 1, such truncation has never been a part of Section 2. In an economy as dynamic as ours, now is hardly the time to short-circuit Section 2 cases. The long, and often sorry, history of monopolization in the courts reveals far too many mistakes even without truncation.
Timothy J. Muris, The FTC and the Law of Monopolization, 67 Antitrust L.J. 693 (2000).
We agree. Proof of actual anticompetitive effects rather than speculation derived from models that are not robust to market realities is an important safeguard to ensure that Section 2 protects competition and not merely individual competitors.
The future of bargaining models in antitrust remains to be seen. Judge Leon certainly did not question the proposition that they could play an important role in other cases, and he closely dissected the testimony and models presented by both experts in AT&T/Time Warner. His opinion serves as an important reminder: As complex economic evidence like bargaining models becomes more common in antitrust litigation, judges must carefully engage with the experts on both sides to determine whether there is direct evidence on the likely competitive effects of the challenged conduct. Where “real-world evidence,” as Judge Leon called it, contradicts the predictions of a bargaining model, judges should reject the model rather than the reality. Bargaining models have many potentially important antitrust applications, including horizontal mergers involving a bargaining component (such as hospital mergers), vertical mergers, and licensing disputes. The analysis of those models by the Ninth and D.C. Circuits will have important implications for how they will be deployed by the agencies and parties moving forward.
[TOTM: The following is the first in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]
Just days before leaving office, the outgoing Obama FTC left what should have been an unwelcome parting gift for the incoming Commission: an antitrust suit against Qualcomm. This week the FTC — under a new Chairman and with an entirely new set of Commissioners — finished unwrapping its present, and rested its case in the trial begun earlier this month in FTC v. Qualcomm.
This complex case is about an overreaching federal agency seeking to set prices and dictate the business model of one of the world’s most innovative technology companies. As soon-to-be Acting FTC Chairwoman Maureen Ohlhausen noted in her dissent from the FTC’s decision to bring the case, it is “an enforcement action based on a flawed legal theory… that lacks economic and evidentiary support…, and that, by its mere issuance, will undermine U.S. intellectual property rights… worldwide.”
Implicit in the FTC’s case is the assumption that Qualcomm charges smartphone makers “too much” for its wireless communications patents — patents that are essential to many smartphones. But, as former FTC and DOJ chief economist, Luke Froeb, puts it, “[n]othing is more alien to antitrust than enquiring into the reasonableness of prices.” Even if Qualcomm’s royalty rates could somehow be deemed “too high” (according to whom?), excessive pricing on its own is not an antitrust violation under U.S. law.
Knowing this, the FTC “dances around that essential element” (in Ohlhausen’s words) and offers instead a convoluted argument that Qualcomm’s business model is anticompetitive. Qualcomm both sells wireless communications chipsets used in mobile phones, as well as licenses the technology on which those chips rely. According to the complaint, by licensing its patents only to end-users (mobile device makers) instead of to chip makers further up the supply chain, Qualcomm is able to threaten to withhold the supply of its chipsets to its licensees and thereby extract onerous terms in its patent license agreements.
There are numerous problems with the FTC’s case. Most fundamental among them is the “no duh” problem: Of course Qualcomm conditions the purchase of its chips on the licensing of its intellectual property; how could it be any other way? The alternative would require Qualcomm to actually facilitate the violation of its property rights by forcing it to sell its chips to device makers even if they refuse its patent license terms. In that world, what device maker would ever agree to pay more than a pittance for a patent license? The likely outcome is that Qualcomm charges more for its chips to compensate (or simply stops making them). Great, the FTC says; then competitors can fill the gap and — voila: the market is more competitive, prices will actually fall, and consumers will reap the benefits.
Except it doesn’t work that way. As many economists, including both the current and a prominent former chief economist of the FTC, have demonstrated, forcing royalty rates lower in such situations is at least as likely to harm competition as to benefit it. There is no sound theoretical or empirical basis for concluding that using antitrust to move royalty rates closer to some theoretical ideal will actually increase consumer welfare. All it does for certain is undermine patent holders’ property rights, virtually ensuring there will be less innovation.
In fact, given this inescapable reality, it is unclear why the current Commission is continuing to pursue the case at all. The bottom line is that, if it wins the case, the current FTC will have done more to undermine intellectual property rights than any other administration’s Commission has been able to accomplish.
It is not difficult to identify the frailties of the case that would readily support the agency backing away from pursuing it further. To begin with, the claim that device makers cannot refuse Qualcomm’s terms because the company effectively controls the market’s supply of mobile broadband modem chips is fanciful. While it’s true that Qualcomm is the largest supplier of these chipsets, it’s an absurdity to claim that device makers have no alternatives. In fact, Qualcomm has faced stiff competition from some of the world’s most successful companies since well before the FTC brought its case. Samsung — the largest maker of Android phones — developed its own chip to replace Qualcomm’s in 2015, for example. More recently, Intel has provided Apple with all of the chips for its 2018 iPhones, and Apple is rumored to be developing its own 5G cellular chips in-house. In any case, the fact that most device makers have preferred to use Qualcomm’s chips in the past says nothing about the ability of other firms to take business from it.
The possibility (and actuality) of entry from competitors like Intel ensures that sophisticated purchasers like Apple have bargaining leverage. Yet, ironically, the FTC points to Apple’s claim that Qualcomm “forced” it to use Intel modems in its latest iPhones as evidence of Qualcomm’s dominance. Think about that: Qualcomm “forced” a company worth many times its own value to use a competitor’s chips in its new iPhones — and that shows Qualcomm has a stranglehold on the market?
The FTC implies that Qualcomm’s refusal to license its patents to competing chip makers means that competitors cannot reliably supply the market. Yet Qualcomm has never asserted its patents against a competing chip maker, every one of which uses Qualcomm’s technology without paying any royalties to do so. The FTC nevertheless paints the decision to license only to device makers as the aberrant choice of an exploitative, dominant firm. The reality, however, is that device-level licensing is the norm practiced by every company in the industry — and has been since the 1980s.
Not only that, but Qualcomm has not altered its licensing terms or practices since it was decidedly an upstart challenger in the market — indeed, since before it even started producing chips, and thus before it even had the supposed means to leverage its chip sales to extract anticompetitive licensing terms. It would be a remarkable coincidence if precisely the same licensing structure and the exact same royalty rate served the company’s interests both as a struggling startup and as an alleged rapacious monopolist. Yet that is the implication of the FTC’s theory.
When Qualcomm introduced CDMA technology to the mobile phone industry in 1989, it was a promising but unproven new technology in an industry dominated by different standards. Qualcomm happily encouraged chip makers to promote the standard by enabling them to produce compliant components without paying any royalties; and it willingly licensed its patents to device makers based on a percentage of sales of the handsets that incorporated CDMA chips. Qualcomm thus shared both the financial benefits and the financial risk associated with the development and sales of devices implementing its new technology.
Qualcomm’s favorable (to handset makers) licensing terms may have helped CDMA become one of the industry standards for 2G and 3G devices. But it’s an unsupportable assertion to say that those identical terms are suddenly the source of anticompetitive power, particularly as 2G and 3G are rapidly disappearing from the market and as competing patent holders gain prominence with each successive cellular technology standard.
To be sure, successful handset makers like Apple that sell their devices at a significant premium would prefer to share less of their revenue with Qualcomm. But their success was built in large part on Qualcomm’s technology. They may regret the terms of the deal that propelled CDMA technology to prominence, but Apple’s regret is not the basis of a sound antitrust case.
And although it’s unsurprising that manufacturers of premium handsets would like to use antitrust law to extract better terms from their negotiations with standard-essential patent holders, it is astonishing that the current FTC is carrying on the Obama FTC’s willingness to do it for them.
None of this means that Qualcomm is free to charge an unlimited price: standard-essential patents must be licensed on “FRAND” terms, meaning they must be fair, reasonable, and nondiscriminatory. It is difficult to assess what constitutes FRAND, but the most restrictive method is to estimate what negotiated terms would look like before a patent was incorporated into a standard. “[R]oyalties that are or would be negotiated ex ante with full information are a market bench-mark reflecting legitimate return to innovation,” writes Carl Shapiro, the FTC’s own economic expert in the case.
And that is precisely what happened here: We don’t have to guess what the pre-standard terms of trade would look like; we know them, because they are the same terms that Qualcomm offers now.
We don’t know exactly what the consequence would be for consumers, device makers, and competitors if Qualcomm were forced to accede to the FTC’s benighted vision of how the market should operate. But we do know that the market we actually have is thriving, with new entry at every level, enormous investment in R&D, and continuous technological advance. These aren’t generally the characteristics of a typical monopoly market. While the FTC’s effort to “fix” the market may help Apple and Samsung reap a larger share of the benefits, it will undoubtedly end up only hurting consumers.
On Monday, the U.S. Federal Trade Commission and Qualcomm reportedly requested a 30-day delay to a preliminary ruling in their ongoing dispute over the terms of Qualcomm’s licensing agreements, indicating that they may seek a settlement. The dispute raises important issues regarding the scope of so-called FRAND (“fair, reasonable, and non-discriminatory”) commitments in the context of standards setting bodies and whether these obligations extend to component-level licensing in the absence of an express agreement to do so.
At issue is the FTC’s allegation that Qualcomm has been engaging in “exclusionary conduct” that harms its competitors. Underpinning this allegation is the FTC’s claim that Qualcomm’s voluntary contracts with two American standards bodies imply that Qualcomm is obliged to license on the same terms to rival chip makers. In this post, we examine the allegation and the claim upon which it rests.
The recently requested delay relates to a motion for partial summary judgment filed by the FTC on August 30, 2018, about which more below. But the dispute itself stretches back to January 17, 2017, when the FTC filed for a permanent injunction against Qualcomm Inc. for engaging in unfair methods of competition in violation of Section 5(a) of the FTC Act. The FTC’s major claims against Qualcomm were as follows:
It has been engaging in “exclusionary conduct” that taxes its competitors’ baseband processor sales, reduces competitors’ ability and incentives to innovate, and raises the prices to be paid by end consumers for cellphones and tablets.
Qualcomm is causing considerable harm to competition and consumers through its “no license, no chips” policy; its refusal to license to its chipset-maker rivals; and its exclusive deals with Apple.
The above practices allow Qualcomm to abuse its dominant position in the supply of CDMA and premium LTE modem chips.
Given that Qualcomm has made a commitment to standard setting bodies to license these patents on FRAND terms, such behaviour qualifies as a breach of FRAND.
The complaint was filed on the eve of the new presidential administration, when only three of the five commissioners were in place. Moreover, the Commissioners were not unanimous. Commissioner Ohlhausen delivered a dissenting statement in which she argued:
[T]here is no robust economic evidence of exclusion and anticompetitive effects, either as to the complaint’s core “taxation” theory or to associated allegations like exclusive dealing. Instead the Commission speaks about a possibility that less than supports a vague standalone action under a Section 5 FTC claim.
Qualcomm filed a motion to dismiss on April 3, 2017. This was denied by the U.S. District Court for the Northern District of California. The court found that the FTC had adequately alleged that Qualcomm’s conduct violates § 1 and § 2 of the Sherman Act and that it had entered into exclusive dealing arrangements with Apple. Thus, the court held, the FTC had adequately stated a claim under § 5 of the FTCA.
It is important to note that the core of the FTC’s argument regarding Qualcomm’s abuse of its dominant position rests on its adoption of the “no license, no chips” policy, which the FTC characterizes as a breach of Qualcomm’s FRAND obligations. However, the FTC falls short of proving that the royalties Qualcomm charges OEMs actually exceed FRAND rates, amounting to a breach and qualifying as what the FTC defines as a “tax” under its price-squeeze theory.
(The Court did not go into whether there was a violation of § 5 of the FTC Act independent of a Sherman Act violation. Had it done so, this would have added more clarity to Section 5 claims, which are increasingly being invoked in antitrust cases even though the provision’s scope remains quite amorphous.)
On August 30, the FTC filed a partial summary judgment motion relating to claims on the applicability of California contract law. This would leave the antitrust issues to be decided at the subsequent hearing, which is set for January next year.
In a well-reasoned submission, the FTC asserts that Qualcomm is bound by voluntary agreements that it signed with two U.S. based standards development organisations (SDOs):
The Telecommunications Industry Association (TIA) and
The Alliance for Telecommunications Industry Solutions (ATIS).
These agreements extend to Qualcomm’s standard essential patents (SEPs) on CDMA, UMTS and LTE wireless technologies. Under these contracts, Qualcomm is obligated to license its SEPs to all applicants implementing these standards on FRAND terms.
The FTC asserts that this obligation should be interpreted to extend to Qualcomm’s rival modem chip manufacturers and sellers. It requests that the Court therefore grant summary judgment, since there are no disputed facts concerning the obligation. It submits that this should “streamline the trial by obviating the need for extrinsic evidence regarding the meaning of Qualcomm’s commitments” on the requirement to license to competitors. (Qualcomm made similar commitments to ETSI, a third SDO.) A review of the heavily redacted filing by the FTC and the subsequent response by Qualcomm indicates that questions of fact and law remain as regards Qualcomm’s licensing commitments and their scope. Thus, contrary to the FTC’s assertions, extrinsic evidence is still needed to resolve some of the questions raised by the parties.
Indeed, the evidence produced by both parties points towards the need for resolution of ambiguities in the contractual agreements that Qualcomm has signed with ATIS and TIA. The scope and purpose of these licensing obligations lie at the core of the motion.
The IP licensing policies of the two SDOs provide for licensing of relevant patents to all applicants who implement these standards on FRAND terms. The key issues, however, are whether components such as modem chips can be said to implement standards and whether component-level licensing falls within this ambit. The resolution of these key issues remains unclear.
Qualcomm explains that its commitments to ATIS and TIA do not require licenses to be made available for modem chips, because modem chips do not implement or practice cellular standards and because the standards do not define the operation of modem chips.
In contrast, the FTC’s complaint raises the question of whether FRAND commitments extend to licensing at all levels. Different components needed for a device come together to facilitate the adoption and implementation of a standard. However, it does not logically follow that each individual component of the device separately practices or implements that standard even though it contributes to the implementation. While a single component may fully implement a standard, this need not always be the case.
These distinctions are significant from the point of interpreting the scope of the FRAND promise, which is commonly understood to extend to licensing of technologies incorporated in a standard to potential users of the standard. Understanding the meaning of a “user” becomes critical here and Qualcomm’s submission draws attention to this.
An important factor in the determination of a “user” of a particular standard is the extent to which the standard is practiced or implemented therein. Some standards development organisations (SDOs) have addressed this in their policies by clarifying that FRAND obligations extend to those “wholly compliant” or “fully conforming” with the specific standards. Clause 6.1 of the ETSI IPR Policy clarifies that a patent holder’s obligation to make licenses available is limited to “methods” and “equipments.” It defines an “equipment” as “a system or device fully conforming to a standard,” and “methods” as “any method or operation fully conforming to a standard.”
It is noteworthy that the American National Standards Institute’s (ANSI) Executive Standards Council Appeals Panel has said in a decision that there is no agreement on the definition of the phrase “wholly compliant implementation.”
Device-level licensing is the prevailing industry-wide practice, followed by companies like Ericsson, InterDigital, Nokia, and others. In November 2017, the European Commission issued guidelines on the licensing of SEPs and took a balanced approach on this issue by declining to prescribe component-level licensing.
The former director general of ETSI, Karl Rosenbrock, adopts a contrary view, explaining ETSI’s policy, “allows every company that requests a license to obtain one, regardless of where the prospective licensee is in the chain of production and regardless of whether the prospective licensee is active upstream or downstream.”
Dr. Bertram Huber, a legal expert who personally participated in the drafting of the IPR policy of ETSI, wrote a response to Rosenbrock in which he explains that ETSI’s IPR policy limits licensing obligations to systems “fully conforming” to the standard:

[O]nce a commitment is given to license on FRAND terms, it does not necessarily extend to chipsets and other electronic components of standards-compliant end-devices.

He highlights how, in adopting its IPR Policy, ETSI intended to safeguard access to the cellular standards without changing the prevailing industry practice of manufacturers of complete end-devices concluding licenses to the standard essential patents practiced in those end-devices.
Both ATIS and TIA are organizational partners, along with ETSI and four other SDOs, in a collaboration called the 3rd Generation Partnership Project, which works on the development of cellular technologies. TIA and ATIS are both accredited by ANSI. These SDOs are therefore likely to influence one another through the policies each one adopts. In the absence of definitive guidance on the interpretation of the IPR policies and contractual terms within the institutional mechanisms of ATIS and TIA, clarity is needed, at the very least, on the ambit of these policies with respect to component-level licensing.
The non-discrimination obligation, which, per the FTC, requires Qualcomm to license its competitors who manufacture and sell chips, is limited by the scope of the IPR policies and contractual agreements that bind Qualcomm and depends upon the specific SDO’s policy. As discussed, the policies of ATIS and TIA are unclear on this.
In conclusion, the FTC’s filing does not obviate the need to hear extrinsic evidence on what Qualcomm’s commitments to ETSI mean. Given the ambiguities in the policies and agreements of ATIS and TIA on whether they include component-level licensing, and on whether modem chips in their entirety can be said to practice the standard, it would be incorrect to say that there is no genuine dispute of fact (and law) in this instance.
I posted this originally on my own blog, but decided to cross-post here since Thom and I have been blogging on this topic.
“The U.S. stock market is having another solid year. You wouldn’t know it by looking at the shares of companies that manage money.”
That’s the lead from Charles Stein on Bloomberg’s Markets page today. Stein goes on to offer three possible explanations: 1) a weary bull market, 2) a move toward more active stock-picking by individual investors, and 3) increasing pressure on fees.
So what has any of that to do with the common ownership issue? A few things.
First, it shows that large institutional investors must not be very good at harvesting the benefits of the non-competitive behavior they encourage among the firms they invest in (if you believe they actually do that in the first place). In other words, if you believe common ownership is a problem because CEOs are enriching institutional investors by softening competition, you must admit they’re doing a pretty lousy job of capturing that value.
Second, and more importantly (as well as more relevantly), the pressure on fees has led money managers to emphasize low-cost passive index funds. Indeed, among the firms doing well according to the article is BlackRock, whose index-tracking iShares exchange-traded fund business “won $20 billion.” In an aggressive move, Fidelity has introduced a total of four zero-fee index funds as a way to draw fee-conscious investors. These index-tracking funds are exactly the type of inter-industry diversified funds that negate any incentive for competition softening in any one industry.
Finally, this also illustrates the cost to the investing public of the limits on common ownership proposed by the likes of Einer Elhauge, Eric Posner, and Glen Weyl. Were these types of proposals in place, investment managers could not offer diversified index funds that include more than one firm’s stock from any industry with even a moderate level of market concentration. Given that competitive forces are pushing investment companies to increase their offerings of such low-cost index funds, any regulatory proposal that precludes those possibilities is sure to harm the investing public.
Just one more piece of real evidence that common ownership is not only not a problem, but that the proposed “fixes” are.
The Eleventh Circuit’s LabMD opinion came out last week and has been something of a Rorschach test for those of us who study consumer protection law.
Neil Chilson found the result to be a disturbing sign of slippage in Congress’s command that the FTC refrain from basing enforcement on “public policy.” Berin Szóka, on the other hand, saw the ruling as a long-awaited rebuke against the FTC’s expansive notion of its “unfairness” authority. Daniel Solove and Woodrow Hartzog, meanwhile, described the decision as “quite narrow and… far from crippling,” in part because “[t]he opinion says very little about the FTC’s general power to enforce Section 5 unfairness.” Even among the ICLE crew, our understandings of the opinion reflect our priors, from it being best understood as expressing due process concerns about injury-based enforcement of Section 5, on the one hand, to being about the meaning of Section 5(n)’s causation requirement, on the other.
You can expect to hear lots more about these and other LabMD-related issues from us soon, but for now we want to write about the only thing more exciting than dueling histories of the FTC’s 1980 Unfairness Statement: administrative law.
While most of those watching the LabMD case come from some nexus of FTC watchers, data security specialists, and privacy lawyers, the reality is that the case itself is mostly about administrative law (the law that governs how federal agencies are given and use their power). And the court’s opinion is best understood from a primarily administrative law perspective.
From that perspective, the case should lead to some significant introspection at the Commission. While the FTC may find ways to comply with the letter of the opinion without substantially altering its approach to data security cases, it will likely face difficulty defending that approach before the courts. True compliance with this decision will require the FTC to define what makes certain data security practices unfair in a more coherent and far more readily ascertainable fashion.
The devil is in the (well-specified) details
The actual holding in the case comes in Part III of the 11th Circuit’s opinion, where the court finds for LabMD on the ground that, owing to a fatal lack of specificity in the FTC’s proposed order, “the Commission’s cease and desist order is itself unenforceable.” This is the punchline of the opinion, to which we will return. But it is worth spending some time on the path that the court takes to get there.
It should be stressed at the outset that Part II of the opinion — in which the Court walks through the conceptual and statutory framework that supports an “unfairness” claim — is surprisingly unimportant to the court’s ultimate holding. This was the meat of the case for FTC watchers and privacy and data security lawyers, and it is a fascinating exposition. Doubtless it will be the focus of most analysis of the opinion.
But, for purposes of the court’s disposition of the case, it’s of (perhaps-frustratingly) scant importance. In short, the court assumes, arguendo, that the FTC has sufficient basis to make out an unfairness claim against LabMD before moving on to Part III of the opinion analyzing the FTC’s order given that assumption.
It’s not clear why the court took this approach — and it is dangerous to assume any particular explanation (although it is and will continue to be the subject of much debate). There are several reasonable explanations for the approach, ranging from the court thinking it obvious that the FTC’s unfairness analysis was correct, to it side-stepping the thorny question of how to define injury under Section 5, to the court avoiding writing a decision that could call into question the fundamental constitutionality of a significant portion of the FTC’s legal portfolio. Regardless — and regardless of its relative lack of importance to the ultimate holding — the analysis offered in Part II bears, and will receive, significant attention.
The FTC has two basic forms of consumer protection authority: It can take action against 1) unfair acts or practices and 2) deceptive acts or practices. The FTC’s case against LabMD was framed in terms of unfairness. Unsurprisingly, “unfairness” is a broad, ambiguous concept — one that can easily grow into an amorphous blob of ill-defined enforcement authority.
As discussed by the court (as well as by us, ad nauseam), in the 1970s the FTC made very aggressive use of its unfairness authority to regulate the advertising industry, effectively usurping Congress’s authority to legislate in that area. This over-aggressive enforcement didn’t sit well with Congress, of course, and led it to shut down the FTC for a period of time until the agency adopted a more constrained understanding of the meaning of its unfairness authority. This understanding was communicated to Congress in the FTC’s 1980 Unfairness Statement. That statement was subsequently codified by Congress, in slightly modified form, as Section 5(n) of the FTC Act.
Section 5(n) states that
The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.
The meaning of Section 5(n) has been the subject of intense debate for years (for example, here, here and here). In particular, it is unclear whether Section 5(n) defines a test for what constitutes unfair conduct (that which “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition”) or whether it instead imposes a necessary, but not necessarily sufficient, condition on the extent of the FTC’s authority to bring cases. The meaning of “cause” under 5(n) is also unclear because, unlike causation in traditional legal contexts, Section 5(n) also targets conduct that is “likely to cause” harm.
Section 5(n) concludes with an important, but also somewhat inscrutable, discussion of the role of “public policy” in the Commission’s unfairness enforcement, indicating that the Commission is free to consider “established public policies” as evidence of unfair conduct, but may not use such considerations “as a primary basis” for its unfairness enforcement.
Just say no to public policy
Section 5 empowers and directs the FTC to police unfair business practices, and there is little reason to think that bad data security practices cannot sometimes fall under its purview. But the FTC’s efforts with respect to data security (and, for that matter, privacy) over the past nearly two decades have focused extensively on developing what it considers to be a comprehensive jurisprudence to address data security concerns. This creates a distinct impression that the FTC has been using its unfairness authority to develop a new area of public policy — to legislate data security standards, in other words — as opposed to policing data security practices that are unfair under established principles of unfairness.
This is a subtle distinction — and there is frankly little guidance for understanding when the agency is acting on the basis of public policy versus when it is proscribing conduct that falls within the meaning of unfairness.
But it is an important distinction. If it is the case — or, more precisely, if the courts think that it is the case — that the FTC is acting on the basis of public policy, then the FTC’s data security efforts are clearly problematic under Section 5(n)’s prohibition on the use of public policy as the primary basis for unfairness actions.
And this is where the Commission gets itself into trouble. The Commission’s efforts to develop its data security enforcement program look an awful lot like something driven by public policy, and not so much like mere enforcement of existing policy as captured by, in the LabMD court’s words (echoing the FTC’s pre-Section 5(n) unfairness factors), “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.”
The distinction between effecting public policy and enforcing legal norms is… not very clear. Nonetheless, exploring and respecting that distinction is an important task for courts and agencies.
Unfortunately, this case does not clearly explain how to make that distinction. The opinion is more than a bit muddled and difficult to interpret. Nonetheless, reading the court’s dicta in Part II is instructive. It is clearly the case that some bad security practices, in some contexts, can be unfair practices. So the proper task for the FTC is to discover how to police “unfairness” within data security cases rather than setting out to become a first-order data security enforcement agency.
How does public policy become well-established law?
Part II of the Eleventh Circuit’s opinion — even if dicta — is important for future interpretations of Section 5 cases. The court goes to great lengths to demonstrate, based on the FTC’s enforcement history and related Congressional rebukes, that the Commission may not rely upon vague “public policy” standards for bringing “unfairness” actions.
But this raises a critical question about the nature of the FTC’s unfairness authority. The Commission was created largely to police conduct that could not readily be proscribed by statute or simple rules. In some cases this means conduct that is hard to label or describe in text with any degree of precision — “I know it when I see it” kinds of acts and practices. In other cases, it may refer to novel or otherwise unpredictable conduct that could not be foreseen by legislators or regulators. In either case, the very purpose of the FTC is to be able to protect consumers from conduct that is not necessarily proscribed elsewhere.
This means that the Commission must have some ability to take action against “unfair” conduct that has not previously been enshrined as “unfair” in “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.” But that ability is not unbounded, of course.
The court explained that the Commission could expound upon what acts fall within the meaning of “unfair” in one of two ways: It could use its rulemaking authority to issue Congressionally reviewable rules, or it could proceed on a case-by-case basis.
In either case, the court’s discussion of how the Commission is to determine what is “unfair” within the constraints of Section 5(n) is frustratingly vague. The earlier parts of the opinion tell us that unfairness is to be adjudged based upon “well-established legal standards,” but here the court tells us that the scope of unfairness can be altered — that is, that those well-established legal standards can be changed — through adjudication. It is difficult to square these two propositions. Regardless, this is the guidance the court has given us.
This is Admin Law 101
And yet perhaps there is some resolution to this conundrum in administrative law. For administrative law scholars, the 11th Circuit’s discussion of the permissibility of agencies developing binding legal norms using either rulemaking or adjudication procedures is straight out of Chenery II.
Chenery II is a bedrock case of American administrative law, standing broadly for the proposition (as echoed by the 11th Circuit) that agencies can generally develop legal rules through either rulemaking or adjudication, that there may be good reasons to use either in any given case, and that (assuming Congress has empowered the agency to use both) it is primarily up to the agency to determine which approach is preferable in any given case.
But, while Chenery II certainly allows agencies to proceed on a case-by-case basis, that permission is not a broad license to eschew the development of determinate legal standards. And the reason is fairly obvious: if an agency develops rules that are difficult to know ex ante, they can hardly provide guidance for private parties as they order their affairs.
Chenery II places an important caveat on the use of case-by-case adjudication. Much like the judges in the LabMD opinion, the Chenery II court was concerned with specificity and clarity, and tells us that agencies may not rely on vague bases for their rules or enforcement actions and expect courts to “chisel” out the details. Rather:
If the administrative action is to be tested by the basis upon which it purports to rest, that basis must be set forth with such clarity as to be understandable. It will not do for a court to be compelled to guess at the theory underlying the agency’s action; nor can a court be expected to chisel that which must be precise from what the agency has left vague and indecisive. In other words, ‘We must know what a decision means before the duty becomes ours to say whether it is right or wrong.’ (emphasis added)
The parallels between the 11th Circuit’s opinion in LabMD and the Supreme Court’s opinion in Chenery II 70 years earlier are uncanny. It is also not very surprising that the 11th Circuit opinion would reflect the principles discussed in Chenery II, nor that it would do so without reference to Chenery II: these are, after all, bedrock principles of administrative law.
The principles set out in Chenery II, of course, do not answer the data security question of whether the FTC properly exercised its authority in this (or any) case under Section 5. But they do provide an intelligible basis for the court’s sidestepping of that question, asking instead whether the FTC sufficiently defined what it was doing in the first place.
The FTC’s data security mission has been, in essence, a voyage of public policy exploration. Its method of case-by-case adjudication, based on ill-defined consent decrees, non-binding guidance documents, and broadly worded complaints, creates exactly the vagueness that the Court in Chenery II rejected, and that the 11th Circuit held results in unenforceable remedies.
Even viewed in the best light, the Commission’s public materials are woefully deficient as sources of useful (and legally binding) guidance. In its complaints the FTC does typically mention some of the facts that led it to investigate, and presents some rudimentary details of how those facts relate to its Section 5 authority. Yet the FTC issues complaints based merely on its “reason to believe” that an unfair act has taken place. This is a far lower standard than the one it would face in district court, and it undoubtedly leads the Commission to construe facts liberally in its own favor.
Moreover, targets of complaints settle for myriad reasons, and no outside authority need review the sufficiency of a complaint as part of a settlement. And the consent orders themselves are largely devoid of legal and even factual specificity. As a result, the FTC’s authority to initiate an enforcement action is effectively based on an ill-defined series of hunches — hardly a sufficient basis for defining a clear legal standard.
So, while the court’s opinion in this case was narrowly focused on the FTC’s proposed order, the underlying legal analysis that supports its holding should be troubling to the Commission.
The specificity the 11th Circuit demands in the remedial order must exist no less in the theories of harm the Commission alleges against targets. And those theories cannot be based on mere public policy preferences. Courts that follow the Eleventh Circuit’s approach — which indeed Section 5(n) reasonably seems to require — will look more deeply into the Commission’s allegations of “unreasonable” data security in order to determine if it is actually attempting to pursue harms by proving something like negligence, or is instead simply ascribing “unfairness” to certain conduct that the Commission deems harmful.
The FTC may find ways to comply with the letter of this particular opinion without substantially altering its overall approach — but that seems unlikely. True compliance with this decision will require the FTC to respect real limits on its authority and to develop ascertainable data security requirements out of much more than mere consent decrees and kitchen-sink complaints.
This week the FCC will vote on Chairman Ajit Pai’s Restoring Internet Freedom Order. Once implemented, the Order will rescind the 2015 Open Internet Order and return antitrust and consumer protection enforcement to primacy in Internet access regulation in the U.S.
In anticipation of that, earlier this week the FCC and FTC entered into a Memorandum of Understanding delineating how the agencies will work together to police ISPs. Under the MOU, the FCC will review informal complaints regarding ISPs’ disclosures about their blocking, throttling, paid prioritization, and congestion management practices. Where an ISP fails to make the proper disclosures, the FCC will take enforcement action. The FTC, for its part, will investigate and, where warranted, take enforcement action against ISPs for unfair, deceptive, or otherwise unlawful acts.
Critics of Chairman Pai’s plan contend (among other things) that the reversion to antitrust-agency oversight of competition and consumer protection in telecom markets (and the Internet access market particularly) would be an aberration — that the US will become the only place in the world to move backward away from net neutrality rules and toward antitrust law.
But this characterization has it exactly wrong. In fact, much of the world has been moving toward an antitrust-based approach to telecom regulation. The aberration was the telecom-specific, common-carrier regulation of the 2015 Open Internet Order.
The longstanding, global transition from telecom regulation to antitrust enforcement
The decade-old discussion around net neutrality has morphed, perhaps inevitably, to join the larger conversation about competition in the telecom sector and the proper role of antitrust law in addressing telecom-related competition issues. Today, with the latest net neutrality rules in the US on the chopping block, the discussion has grown more fervent (and even sometimes inordinately violent).
On the one hand, opponents of the 2015 rules express strong dissatisfaction with traditional, utility-style telecom regulation of innovative services, and view the 2015 rules as a meritless usurpation of antitrust principles in guiding the regulation of the Internet access market. On the other hand, proponents of the 2015 rules voice skepticism that antitrust can actually provide a way to control competitive harms in the tech and telecom sectors, and see the heavy hand of Title II, common-carrier regulation as a necessary corrective.
While the evidence seems clear that an early-20th-century approach to telecom regulation is indeed inappropriate for the modern Internet (see our lengthy discussions on this point, e.g., here and here, as well as Thom Lambert’s recent post), it is perhaps less clear whether antitrust, with its constantly evolving, common-law foundation, is up to the task.
To answer that question, it is important to understand that for decades, the arc of telecom regulation globally has been sweeping in the direction of ex post competition enforcement, and away from ex ante, sector-specific regulation.
Howard Shelanski, who served as President Obama’s OIRA Administrator from 2013-17, Director of the Bureau of Economics at the FTC from 2012-2013, and Chief Economist at the FCC from 1999-2000, noted in 2002, for instance, that
[i]n many countries, the first transition has been from a government monopoly to a privatizing entity controlled by an independent regulator. The next transformation on the horizon is away from the independent regulator and towards regulation through general competition law.
Globally, nowhere perhaps has this transition been more clearly stated than in the EU’s telecom regulatory framework, which asserts:
The aim is to reduce ex ante sector-specific regulation progressively as competition in markets develops and, ultimately, for electronic communications [i.e., telecommunications] to be governed by competition law only. (Emphasis added.)
To facilitate the transition and quash regulatory inconsistencies among member states, the EC identified certain markets in which national regulators were to decide, consistent with EC guidelines on market analysis, whether ex ante obligations were necessary in their respective countries because an operator held “significant market power.” In 2003 the EC identified 18 such markets. After observing technological and market changes over the next four years, the EC reduced that number to seven in 2007 and, in 2014, further reduced it to four markets, all wholesale, that could potentially require ex ante regulation.
It is important to highlight that this framework is not uniquely achievable in Europe because of some special trait in its markets, regulatory structure, or antitrust framework. Determining the right balance of regulatory rules and competition law, whether enforced by a telecom regulator, antitrust regulator, or multi-purpose authority (i.e., with authority over both competition and telecom) means choosing from a menu of options that should be periodically assessed to move toward better performance and practice. There is nothing jurisdiction-specific about this; it is simply a matter of good governance.
And since the early 2000s, scholars have highlighted that the US is in an intriguing position to transition to a merged regulator because, for example, it has both a “highly liberalized telecommunications sector and a well-established body of antitrust law.” For Shelanski, among others, the US has been ready to make the transition since 2007.
Far from being an aberrant move away from sound telecom regulation, the FCC’s Restoring Internet Freedom Order is actually a step in the direction of sensible, antitrust-based telecom regulation — one that many parts of the world have long since undertaken.
How antitrust oversight of telecom markets has been implemented around the globe
In implementing the EU’s shift toward antitrust oversight of the telecom sector since 2003, agencies have adopted a number of different organizational reforms.
Some telecom regulators assumed new duties over competition — e.g., Ofcom in the UK. Some non-European countries, e.g., Mexico, have also followed this model.
Other EU member states have eliminated their telecom regulators altogether. In a useful case study, Roslyn Layton and Joe Kane outline Denmark’s approach, which included disbanding its telecom regulator and passing regulation of the sector to various executive agencies.
Meanwhile, the Netherlands and Spain each elected to merge its telecom regulator into its competition authority. New Zealand has similarly adopted this framework.
A few brief case studies will illuminate these and other reforms:
In 2013, the Netherlands merged its telecom, consumer protection, and competition regulators to form the Netherlands Authority for Consumers and Markets (ACM). The ACM’s structure streamlines decision-making on pending industry mergers and acquisitions at the managerial level, eliminating the challenges arising from overlapping agency reviews and cross-agency coordination. The reform also unified key regulatory methodologies, such as creating a consistent calculation method for the weighted average cost of capital (WACC).
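For readers unfamiliar with the term, a brief aside may help. The ACM report does not spell out its exact parameterization, so what follows is only the standard textbook definition, not the ACM’s specific method:

$$\mathrm{WACC} \;=\; \frac{E}{V}\, r_E \;+\; \frac{D}{V}\, r_D \,(1 - t), \qquad V = E + D$$

where E and D are the market values of a firm’s equity and debt, r_E and r_D are the respective costs of equity and debt, and t is the corporate tax rate. The gain from consolidation lies in all regulated sectors sharing one set of parameter estimates, rather than in the formula itself.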
The Netherlands also claims that the ACM’s ex post approach is better able to adapt to “technological developments, dynamic markets, and market trends”:
The combination of strength and flexibility allows for a problem-based approach where the authority first engages in a dialogue with a particular market player in order to discuss market behaviour and ensure the well-functioning of the market.
The Netherlands also cited a significant reduction in the risk of regulatory capture as staff no longer remain in positions for long tenures but rather rotate on a project-by-project basis from a regulatory to a competition department or vice versa. Moving staff from team to team has also added value in terms of knowledge transfer among the staff. Finally, while combining the cultures of each regulator was less difficult than expected, the government reported that the largest cause of consternation in the process was agreeing on a single IT system for the ACM.
In 2013, Spain created the National Authority for Markets and Competition (CNMC), merging the National Competition Authority with several sectoral regulators, including the telecom regulator, to “guarantee cohesion between competition rulings and sectoral regulation.” In a report to the OECD, Spain stated that moving to the new model was necessary because of increasing competition and technological convergence in the sector (i.e., the ability of different technologies to offer substitute services, like fixed and wireless Internet access). It added that integrating its telecom regulator with its competition regulator ensures
a predictable business environment and legal certainty [i.e., removing “any threat of arbitrariness”] for the firms. These two conditions are indispensable for network industries — where huge investments are required — but also for the rest of the business community if investment and innovation are to be promoted.
As in the Netherlands, additional benefits include significantly lowering the risk of regulatory capture by “preventing the alignment of the authority’s performance with sectoral interests.”
In 2011, the Danish government unexpectedly dismantled the National IT and Telecom Agency and split its duties among four regulators. While the move came as a surprise, it did not engender national debate — vitriolic or otherwise — nor did it receive much attention in the press.
Since the dismantlement, scholars have observed less politicization of telecom regulation. And even though the competition authority didn’t take over telecom regulatory duties, the Ministry of Business and Growth implemented a light-touch regime, which, as Layton and Kane note, has helped turn Denmark into one of the “top digital nations” according to the International Telecommunication Union’s Measuring the Information Society Report.
The New Zealand Commerce Commission (NZCC) is responsible for antitrust enforcement, economic regulation, consumer protection, and certain sectoral regulations, including telecommunications. By combining functions into a single regulator New Zealand asserts that it can more cost-effectively administer government operations. Combining regulatory functions also created spillover benefits as, for example, competition analysis is a prerequisite for sectoral regulation, and merger analysis in regulated sectors (like telecom) can leverage staff with detailed and valuable knowledge. Similar to the other countries, New Zealand also noted that the possibility of regulatory capture “by the industries they regulate is reduced in an agency that regulates multiple sectors or also has competition and consumer law functions.”
Advantages identified by other organizations
The GSMA, a mobile industry association, notes in its 2016 report, Resetting Competition Policy Frameworks for the Digital Ecosystem, that merging the sector regulator into the competition regulator also mitigates regulatory creep by eliminating the prodding required to induce a sector regulator to roll back regulation as technological evolution requires it, as well as by curbing the sector regulator’s temptation to expand its authority. After all, regulators exist to regulate.
At the same time, it’s worth noting that eliminating the telecom regulator has not gone off without a hitch in every case (most notably, in Spain). It’s important to understand, however, that the difficulties that have arisen in specific contexts aren’t endemic to the nature of competition versus telecom regulation. Nothing about these cases suggests that economic-based telecom regulations are inherently essential, or that replacing sector-specific oversight with antitrust oversight can’t work.
Contrasting approaches to net neutrality in the EU and New Zealand
Unfortunately, adopting a proper framework and implementing sweeping organizational reform is no guarantee of consistent decisionmaking in its implementation. Thus, in 2015, the European Parliament and Council of the EU went against two decades of telecommunications best practices by implementing ex ante net neutrality regulations without hard evidence of widespread harm and absent any competition analysis to justify its decision. The EU placed net neutrality under the universal service and user’s rights prong of the regulatory framework, and the resulting rules lack coherence and economic rigor.
BEREC’s net neutrality guidelines, meant to clarify the EU regulations, offered an ambiguous, multi-factored standard to evaluate ISP practices like free data programs. And, as mentioned in a previous TOTM post, whether or not they allow the practice, regulators (e.g., Norway’s Nkom and the UK’s Ofcom) have lamented the lack of regulatory certainty surrounding free data programs.
Notably, while BEREC has not provided clear guidance, a 2017 report commissioned by the EU’s Directorate-General for Competition weighing competitive benefits and harms of zero rating concluded “there appears to be little reason to believe that zero-rating gives rise to competition concerns.”
The report also provides an ex post framework for analyzing such deals in the context of a two-sided market by assessing a deal’s impact on competition between ISPs and between content and application providers.
The EU example demonstrates that where a telecom regulator perceives a novel problem, competition law, grounded in economic principles, brings a clear framework to bear.
In New Zealand, if a net neutrality issue were to arise, the ISP’s behavior would be examined under the context of existing antitrust law, including a determination of whether the ISP is exercising market power, and by the Telecommunications Commissioner, who monitors competition and the development of telecom markets for the NZCC.
The TCF Code is a mandatory code of practice establishing requirements concerning the information ISPs are required to disclose to consumers about their services. For example, ISPs must disclose any arrangements that prioritize certain traffic. Regarding traffic management, complaints of unfair contract terms — when not resolved by a process administered by an independent industry group — may be referred to the NZCC for an investigation in accordance with the Fair Trading Act. Under the Commerce Act, the NZCC can prohibit anticompetitive mergers, or practices that substantially lessen competition or that constitute price fixing or abuse of market power.
In addition, the NZCC has been active in patrolling vertical agreements between ISPs and content providers — precisely the types of agreements bemoaned by Title II net neutrality proponents.
In February 2017, the NZCC blocked Vodafone New Zealand’s proposed merger with Sky Network (combining Sky’s content and pay TV business with Vodafone’s broadband and mobile services) because the Commission concluded that the deal would substantially lessen competition in relevant broadband and mobile services markets. The NZCC was
unable to exclude the real chance that the merged entity would use its market power over premium live sports rights to effectively foreclose a substantial share of telecommunications customers from rival telecommunications services providers (TSPs), resulting in a substantial lessening of competition in broadband and mobile services markets.
Such foreclosure would result, the NZCC argued, from exclusive content and integrated bundles with features such as “zero rated Sky Sport viewing over mobile.” In addition, Vodafone would have the ability to prevent rivals from creating bundles using Sky Sport.
The substance of the Vodafone/Sky decision notwithstanding, the NZCC’s intervention is further evidence that antitrust isn’t a mere smokescreen for regulators to do nothing, and that regulators don’t need to design novel tools (such as the Internet conduct rule in the 2015 OIO) to regulate something neither they nor anyone else knows very much about: “not just the sprawling Internet of today, but also the unknowable Internet of tomorrow.” Instead, with ex post competition enforcement, regulators can allow dynamic innovation and competition to develop, and are perfectly capable of intervening — when and if identifiable harm emerges.
Unfortunately for Title II proponents — who have spent a decade at the FCC lobbying for net neutrality rules despite a lack of actionable evidence — the FCC is not acting without precedent by enabling the FTC’s antitrust and consumer protection enforcement to police conduct in Internet access markets. For two decades, the object of telecommunications regulation globally has been to transition away from sector-specific ex ante regulation to ex post competition review and enforcement. It’s high time the U.S. got on board.
The populists are on the march, and as the 2018 campaign season gets rolling we’re witnessing more examples of political opportunism bolstered by economic illiteracy aimed at increasingly unpopular big tech firms.
The latest example comes in the form of a new investigation of Google opened by Missouri’s Attorney General, Josh Hawley. Mr. Hawley — a Republican who, not coincidentally, is running for Senate in 2018 — alleges various consumer protection violations and unfair competition practices.
But while Hawley’s investigation may jump-start his campaign and help a few vocal Google rivals intent on mobilizing the machinery of the state against the company, it is unlikely to enhance consumer welfare — in Missouri or anywhere else.
According to the press release issued by the AG’s office:
[T]he investigation will seek to determine if Google has violated the Missouri Merchandising Practices Act—Missouri’s principal consumer-protection statute—and Missouri’s antitrust laws.
The business practices in question are Google’s collection, use, and disclosure of information about Google users and their online activities; Google’s alleged misappropriation of online content from the websites of its competitors; and Google’s alleged manipulation of search results to preference websites owned by Google and to demote websites that compete with Google.
Mr. Hawley’s justification for his investigation is a flourish of populist rhetoric:
We should not just accept the word of these corporate giants that they have our best interests at heart. We need to make sure that they are actually following the law, we need to make sure that consumers are protected, and we need to hold them accountable.
But Hawley’s “strong” concern is based on tired retreads of the same faulty arguments that Google’s competitors (Yelp chief among them) have been plying for the better part of a decade. In fact, all of his apparent grievances against Google were exhaustively scrutinized by the FTC and ultimately rejected or settled in separate federal investigations in 2012 and 2013.
The antitrust issues
To begin with, AG Hawley references the EU antitrust investigation as evidence that
this is not the first-time Google’s business practices have come into question. In June, the European Union issued Google a record $2.7 billion antitrust fine.
True enough — and yet, misleadingly incomplete. Missing from Hawley’s recitation of Google’s antitrust rap sheet are the following investigations, which were closed without any finding of liability related to Google Search, Android, Google’s advertising practices, etc.:
United States FTC, 2013. The FTC found no basis to pursue a case after a two-year investigation: “Challenging Google’s product design decisions in this case would require the Commission — or a court — to second-guess a firm’s product design decisions where plausible procompetitive justifications have been offered, and where those justifications are supported by ample evidence.” The investigation did result in a consent order regarding patent licensing unrelated in any way to search and a voluntary commitment by Google not to engage in certain search-advertising-related conduct.
South Korea FTC, 2013. The KFTC cleared Google after a two-year investigation. It opened a new investigation in 2016, but, as I have discussed, “[i]f anything, the economic conditions supporting [the KFTC’s 2013] conclusion have only gotten stronger since.”
Canada Competition Bureau, 2016. The CCB closed a three-year investigation into Google’s search practices without taking any action.
Similar investigations have been closed without findings of liability (or simply lie fallow) in a handful of other countries (e.g., Taiwan and Brazil) and even several states (e.g., Ohio and Texas). In fact, of all the jurisdictions that have investigated Google, only the EU and Russia have actually assessed liability.
As Beth Wilkinson, outside counsel to the FTC during the Google antitrust investigation, noted upon closing the case:
Undoubtedly, Google took aggressive actions to gain advantage over rival search providers. However, the FTC’s mission is to protect competition, and not individual competitors. The evidence did not demonstrate that Google’s actions in this area stifled competition in violation of U.S. law.
Similarly, the Canada Competition Bureau explained in closing its investigation:
The Bureau sought evidence of the harm allegedly caused to market participants in Canada as a result of any alleged preferential treatment of Google’s services. The Bureau did not find adequate evidence to support the conclusion that this conduct has had an exclusionary effect on rivals, or that it has resulted in a substantial lessening or prevention of competition in a market.
Unfortunately, rather than follow the lead of these agencies, Missouri’s investigation appears to have more in common with Russia’s effort to prop up a favored competitor (Yandex) at the expense of consumer welfare.
The Yelp Claim
Take Mr. Hawley’s focus on “Google’s alleged misappropriation of online content from the websites of its competitors,” for example, which cleaves closely to what should become known henceforth as “The Yelp Claim.”
While the sordid history of Yelp’s regulatory crusade against Google is too long to canvas in its entirety here, the primary elements are these:
Once upon a time (in 2005), Google licensed Yelp’s content for inclusion in its local search results. In 2007 Yelp ended the deal. By 2010, Google was displaying, without a license from Yelp (it asserted fair use), small snippets of Yelp’s reviews that, if clicked on, led to Yelp’s site. Even though Yelp received more user traffic from those links as a result, Yelp complained, and Google removed Yelp snippets from its local results.
In its 2013 agreement with the FTC, Google guaranteed that Yelp could opt-out of having even snippets displayed in local search results by committing Google to:
make available a web-based notice form that provides website owners with the option to opt out from display on Google’s Covered Webpages of content from their website that has been crawled by Google. When a website owner exercises this option, Google will cease displaying crawled content from the domain name designated by the website owner….
The commitments also ensured that websites (like Yelp) that opt out would nevertheless remain in Google’s general index.
Ironically, Yelp now claims in a recent study that Google should show not only snippets of Yelp reviews, but even more of Yelp’s content. (For those interested, my colleagues and I have a paper explaining why the study’s claims are spurious).
The key bit here, of course, is that Google stopped pulling content from Yelp’s pages to use in its local search results, and that it implemented a simple mechanism for any other site wishing to opt out of the practice to do so.
It’s difficult to imagine why Missouri’s citizens might require more than this to redress alleged anticompetitive harms arising from the practice.
Perhaps AG Hawley thinks consumers would be better served by an opt-in mechanism? Of course, this is absurd, particularly if any of Missouri’s citizens — and their businesses — have websites. Most websites want at least some of their content to appear on Google’s search results pages as prominently as possible — see this and this, for example — and making this information more accessible to users is why Google exists.
To be sure, some websites may take issue with how much of their content Google features and where it places that content. But the easy opt out enables them to prevent Google from showing their content in a manner they disapprove of. Yelp is an outlier in this regard because it views Google as a direct competitor, especially to the extent it enables users to read some of Yelp’s reviews without visiting Yelp’s pages.
For Yelp and a few similarly situated companies the opt out suffices. But for almost everyone else the opt out is presumably rarely exercised, and any more-burdensome requirement would just impose unnecessary costs, harming instead of helping their websites.
The privacy issues
The Missouri investigation also applies to “Google’s collection, use, and disclosure of information about Google users and their online activities.” More pointedly, Hawley claims that “Google may be collecting more information from users than the company was telling consumers….”
Presumably this would come as news to the FTC, which, with a much larger staff and far greater expertise, currently has Google under a 20-year consent order (with some 15 years left to go) governing its privacy disclosures and information-sharing practices, thus ensuring that the agency engages in continual — and well-informed — oversight of precisely these issues.
The FTC’s consent order with Google (the result of an investigation into conduct involving Google’s short-lived Buzz social network, allegedly in violation of Google’s privacy policies), requires the company to:
“[N]ot misrepresent in any manner, expressly or by implication… the extent to which respondent maintains and protects the privacy and confidentiality of any [user] information…”;
“Obtain express affirmative consent from” users “prior to any new or additional sharing… of the Google user’s identified information with any third party” if doing so would in any way deviate from previously disclosed practices;
“[E]stablish and implement, and thereafter maintain, a comprehensive privacy program that is reasonably designed to (1) address privacy risks related to the development and management of new and existing products and services for consumers, and (2) protect the privacy and confidentiality of [users’] information”; and
Along with a laundry list of other reporting requirements, “[submit] biennial assessments and reports from a qualified, objective, independent third-party professional…, approved by the [FTC] Associate Director for Enforcement, Bureau of Consumer Protection… in his or her sole discretion.”
What, beyond the incredibly broad scope of the FTC’s consent order, could the Missouri AG’s office possibly hope to obtain from an investigation?
Google is already expressly required to provide privacy reports to the FTC every two years. It must provide several of the items Hawley demands in his civil investigative demand (CID) to the FTC; others are required to be made available to the FTC upon demand. What materials could the Missouri AG collect beyond those the FTC already receives, or has the authority to demand, under its consent order?
And what manpower and expertise could Hawley apply to those materials that would even begin to equal, let alone exceed, those of the FTC?
And the consent order is no paper tiger: in 2012, Google paid a $22.5 million civil penalty to settle FTC allegations that it had violated the order by misrepresenting its placement of tracking cookies on Apple’s Safari browser. That penalty is of undeniable import, not only for its amount (at the time it was the largest in FTC history) and for stemming from alleged problems completely unrelated to the issue underlying the initial action, but also because it was so easy to obtain. Having put Google under a 20-year consent order, the FTC need only prove (or threaten to prove) contempt of the consent order, rather than the specific elements of a new violation of the FTC Act, to bring the company to heel. The former is far easier to prove, and comes with the ability to impose (significant) damages.
So what’s really going on in Jefferson City?
While states are, of course, free to enforce their own consumer protection laws to protect their citizens, there is little to be gained — other than cold hard cash, perhaps — from pursuing cases that, at best, duplicate enforcement efforts already undertaken by the federal government (to say nothing of innumerable other jurisdictions).
To take just one relevant example, in 2013 — almost a year to the day following the court’s approval of the settlement in the FTC’s case alleging Google’s violation of the Buzz consent order — 37 states plus DC (not including Missouri) settled their own, follow-on litigation against Google on the same facts. Significantly, the terms of the settlement did not impose upon Google any obligation not already a part of the Buzz consent order or the subsequent FTC settlement — but it did require Google to fork over an additional $17 million.
Not only is there little to be gained from yet another ill-conceived antitrust campaign, there is much to be lost. Such massive investigations require substantial resources to conduct, and the opportunity cost of doing so may mean real consumer issues go unaddressed. The Consumer Protection Section of the Missouri AG’s office says it receives some 100,000 consumer complaints a year. How many of those will have to be put on the back burner to accommodate an investigation like this one?
Even when not politically motivated, state enforcement of consumer protection acts (CPAs) is not an unalloyed good. In fact, empirical studies of state consumer protection actions like the one contemplated by Mr. Hawley have shown that such actions tend toward overreach — good for lawyers, perhaps, but expensive for taxpayers and often detrimental to consumers. According to a recent study by economists James Cooper and Joanna Shepherd:
[I]n recent decades, this thoughtful balance [between protecting consumers and preventing the proliferation of lawsuits that harm both consumers and businesses] has yielded to damaging legislative and judicial overcorrections at the state level with a common theoretical mistake: the assumption that more CPA litigation automatically yields more consumer protection…. [C]ourts and legislatures gradually have abolished many of the procedural and remedial protections designed to cabin state CPAs to their original purpose: providing consumers with redress for actual harm in instances where tort and contract law may provide insufficient remedies. The result has been an explosion in consumer protection litigation, which serves no social function and for which consumers pay indirectly through higher prices and reduced innovation.
AG Hawley’s investigation seems almost tailored to duplicate the FTC’s extensive efforts — and to score political points. Or perhaps Mr. Hawley is just perturbed that Missouri missed out on its share of the $17 million multistate settlement in 2013.
Which raises the spectre of a further problem with the Missouri case: “rent extraction.”
It’s no coincidence that Mr. Hawley’s investigation follows closely on the heels of Yelp’s recent letter to the FTC and every state AG (as well as four members of Congress and the EU’s chief competition enforcer, for good measure) alleging that Google had re-started scraping Yelp’s content, thus violating the terms of its voluntary commitments to the FTC.
It’s also no coincidence that Yelp “notified” Google of the problem only by lodging a complaint with every regulator who might listen rather than by actually notifying Google. But an action like the one Missouri is undertaking — not resolution of the issue — is almost certainly exactly what Yelp intended, and AG Hawley is playing right into Yelp’s hands.
Google, for its part, strongly disputes Yelp’s allegation, and, indeed, has — even according to Yelp — complied fully with Yelp’s request to keep its content off Google Local and other “vertical” search pages since 18 months before Google entered into its commitments with the FTC. Google claims that the recent scraping was inadvertent, and that it would happily have rectified the problem if only Yelp had actually bothered to inform Google.
Indeed, Yelp’s allegations don’t really pass the smell test: That Google would suddenly change its practices now, in violation of its commitments to the FTC and at a time of extraordinarily heightened scrutiny by the media, politicians of all stripes, competitors like Yelp, the FTC, the EU, and a host of other antitrust or consumer protection authorities, strains belief.
But, again, identifying and resolving an actual commercial dispute was likely never the goal. As a recent, fawning New York Times article on “Yelp’s Six-Year Grudge Against Google” highlights (focusing in particular on Luther Lowe, now Yelp’s VP of Public Policy and the author of the letter):
Yelp elevated Mr. Lowe to the new position of director of government affairs, a job that more or less entails flying around the world trying to sic antitrust regulators on Google. Over the next few years, Yelp hired its first lobbyist and started a political action committee. Recently, it has started filing complaints in Brazil.
Missouri, in other words, may just be carrying Yelp’s water.
The one clear lesson of the decades-long Microsoft antitrust saga is that companies that struggle to compete in the market can profitably tax their rivals by instigating antitrust actions against them. As Milton Friedman admonished, decrying “the business community’s suicidal impulse” to invite regulation:
As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington [or is it Jefferson City?] and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.
Taking a tough line on Silicon Valley firms in the midst of today’s anti-tech-company populist resurgence may help with the electioneering in Mr. Hawley’s upcoming bid for a US Senate seat and serve Yelp, but it doesn’t offer any clear, actual benefits to Missourians. As I’ve wondered before: “Exactly when will regulators be a little more skeptical of competitors trying to game the antitrust laws for their own advantage?”