Archives For law and economics

FTC v. Qualcomm

Last week the International Center for Law & Economics (ICLE) and twelve noted law and economics scholars filed an amicus brief in the Ninth Circuit in FTC v. Qualcomm, in support of appellant (Qualcomm) and urging reversal of the district court’s decision. The brief was authored by Geoffrey A. Manne, President & founder of ICLE, and Ben Sperry, Associate Director, Legal Research of ICLE. Jarod M. Bona and Aaron R. Gott of Bona Law PC collaborated in drafting the brief and they and their team provided invaluable pro bono legal assistance, for which we are enormously grateful. Signatories on the brief are listed at the end of this post.

We’ve written about the case several times on Truth on the Market, as have a number of guest bloggers, in our ongoing blog series here.

The ICLE amicus brief focuses on the ways that the district court exceeded the “error cost” guardrails erected by the Supreme Court to minimize the risk and cost of mistaken antitrust decisions, particularly those that wrongly condemn procompetitive behavior. As the brief notes at the outset:

The district court’s decision is disconnected from the underlying economics of the case. It improperly applied antitrust doctrine to the facts, and the result subverts the economic rationale guiding monopolization jurisprudence. The decision—if it stands—will undercut the competitive values antitrust law was designed to protect.  

The antitrust error cost framework was most famously elaborated by Frank Easterbrook in his seminal article, The Limits of Antitrust (1984). It has since been squarely adopted by the Supreme Court—most significantly in Brooke Group (1993), Trinko (2004), and linkLine (2009).

In essence, the Court’s monopolization case law implements the error cost framework by (among other things) obliging courts to operate under certain decision rules that limit the use of inferences about the consequences of a defendant’s conduct except when the circumstances create what game theorists call a “separating equilibrium.” A separating equilibrium is a 

solution to a game in which players of different types adopt different strategies and thereby allow an uninformed player to draw inferences about an informed player’s type from that player’s actions.

Baird, Gertner & Picker, Game Theory and the Law
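To see the intuition, consider a toy example (the payoffs below are invented for illustration; they come from neither the brief nor the cited text). A court observing only the defendant’s conduct can infer its “type” only if procompetitive and anticompetitive types would choose different actions:

```python
# Toy illustration with hypothetical payoffs: conduct is informative about
# a defendant's "type" only if different types would choose different actions.

payoffs = {
    # type: {action: payoff to that type from taking that action}
    "procompetitive":  {"deal": 10, "refuse": 4},
    "anticompetitive": {"deal": 3,  "refuse": 8},
}

def best_action(defendant_type):
    """Each type simply picks its own payoff-maximizing action."""
    options = payoffs[defendant_type]
    return max(options, key=options.get)

choices = {t: best_action(t) for t in payoffs}

if len(set(choices.values())) == len(choices):
    # Different types act differently, so observing the action reveals the type.
    print("Separating equilibrium:", choices)
else:
    # Both types take the same action, so the conduct tells the court nothing.
    print("Pooling equilibrium:", choices)
```

When both types would take the same action, the game “pools” and the inference is unreliable; that is the brief’s point about conduct that harms competitors whether or not it harms competition.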

The key problem in antitrust is that while the consequence of complained-of conduct for competition (i.e., consumers) is often ambiguous, its deleterious effect on competitors is typically quite evident—whether it is actually anticompetitive or not. The question is whether (and when) it is appropriate to infer anticompetitive effect from discernible harm to competitors. 

Except in the narrowly circumscribed (by Trinko) instance of a unilateral refusal to deal, anticompetitive harm under the rule of reason must be proven. It may not be inferred from harm to competitors, because such an inference is too likely to be mistaken—and “mistaken inferences are especially costly, because they chill the very conduct the antitrust laws are designed to protect.” (Brooke Group, quoting yet another key Supreme Court antitrust error cost case, Matsushita (1986)).

Yet, as the brief discusses, in finding Qualcomm liable the district court did not demand or find proof of harm to competition. Instead, the court’s opinion relies on impermissible inferences from ambiguous evidence to find that Qualcomm had (and violated) an antitrust duty to deal with rival chip makers and that its conduct resulted in anticompetitive foreclosure of competition. 

We urge you to read the brief (it’s pretty short—maybe the length of three blog posts) to get the whole argument. Below we draw attention to a few points we make in the brief that are especially significant.

The district court bases its approach entirely on Microsoft — which it misinterprets in clear contravention of Supreme Court case law

The district court doesn’t stay within the strictures of the Supreme Court’s monopolization case law. In fact, although it obligingly recites some of the error cost language from Trinko, it quickly moves away from Supreme Court precedent and bases its approach entirely on its reading of the D.C. Circuit’s Microsoft (2001) decision. 

Unfortunately, the district court’s reading of Microsoft is mistaken and impermissible under Supreme Court precedent. Indeed, both the Supreme Court and the D.C. Circuit make clear that a finding of illegal monopolization may not rest on an inference of anticompetitive harm.

The district court cites Microsoft for the proposition that

Where a government agency seeks injunctive relief, the Court need only conclude that Qualcomm’s conduct made a “significant contribution” to Qualcomm’s maintenance of monopoly power. The plaintiff is not required to “present direct proof that a defendant’s continued monopoly power is precisely attributable to its anticompetitive conduct.”

It’s true Microsoft held that, in government actions seeking injunctions, “courts [may] infer ‘causation’ from the fact that a defendant has engaged in anticompetitive conduct that ‘reasonably appears capable of making a significant contribution to maintaining monopoly power.’” (Emphasis added). 

But Microsoft never suggested that anticompetitiveness itself may be inferred.

“Causation” and “anticompetitive effect” are not the same thing. Indeed, Microsoft addresses “anticompetitive conduct” and “causation” in separate sections of its decision. And whereas Microsoft allows that courts may infer “causation” in certain government actions, it makes no such allowance with respect to “anticompetitive effect.” In fact, it explicitly rules it out:

[T]he plaintiff… must demonstrate that the monopolist’s conduct indeed has the requisite anticompetitive effect…; no less in a case brought by the Government, it must demonstrate that the monopolist’s conduct harmed competition, not just a competitor.

The D.C. Circuit subsequently reinforced this clear conclusion of its Microsoft holding in Rambus:

Deceptive conduct—like any other kind—must have an anticompetitive effect in order to form the basis of a monopolization claim…. In Microsoft… [t]he focus of our antitrust scrutiny was properly placed on the resulting harms to competition.

Finding causation entails connecting evidentiary dots, while finding anticompetitive effect requires an economic assessment. Without such analysis it’s impossible to distinguish procompetitive from anticompetitive conduct, and basing liability on such an inference effectively writes “anticompetitive” out of the law.

Thus, the district court is correct when it holds that it “need not conclude that Qualcomm’s conduct is the sole reason for its rivals’ exits or impaired status.” But it is simply wrong to hold—in the same sentence—that it can thus “conclude that Qualcomm’s practices harmed competition and consumers.” The former claim is consistent with Microsoft; the latter is emphatically not.

Under Trinko and Aspen Skiing the district court’s finding of an antitrust duty to deal is impermissible 

Because finding that a company operates under a duty to deal essentially permits a court to infer anticompetitive harm without proof, such a finding “comes dangerously close to being a form of ‘no-fault’ monopolization,” as Herbert Hovenkamp has written. It is thus also seriously disfavored by the Court’s error cost jurisprudence.

In Trinko the Supreme Court interprets its holding in Aspen Skiing to identify essentially a single scenario from which it may plausibly be inferred that a monopolist’s refusal to deal with rivals harms consumers: the existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for the monopolist.

In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.”

But it’s not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. In short, what the Court requires is that the defendant exhibit behavior that, but for the expectation of future, anticompetitive returns, is irrational.
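A minimal sketch may help fix the comparison (the numbers are hypothetical; this is our gloss on the test described above, not a quantitative standard drawn from the cases):

```python
# Hypothetical numbers only: a sketch of the short-run profit-sacrifice
# comparison described above, not a liability or damages model.

def refusal_supports_inference(profit_if_dealing_now, profit_if_not_dealing):
    """An inference of anticompetitive exclusion is available only if the
    refusal sacrifices short-run profit, i.e., dealing today would be more
    profitable for the monopolist than not dealing at all."""
    return profit_if_dealing_now > profit_if_not_dealing

# That a *past* arrangement was profitable is not the test; what matters is
# the comparison between the options currently on offer.
print(refusal_supports_inference(profit_if_dealing_now=80,
                                 profit_if_not_dealing=95))   # False: no inference
print(refusal_supports_inference(profit_if_dealing_now=120,
                                 profit_if_not_dealing=95))   # True: refusal is short-run irrational
```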

It should be noted, as John Lopatka (here) and Alan Meese (here) (both of whom joined the amicus brief) have written, that even the Supreme Court’s approach is likely insufficient to permit a court to distinguish between procompetitive and anticompetitive conduct. 

But what is certain is that the district court’s approach in no way permits such an inference.

“Evasion of a competitive constraint” is not an antitrust-relevant refusal to deal

A firm may have a “duty” to deal, as that term is colloquially used, based on some obligation other than an antitrust duty. But that is not enough to support an inference of anticompetitive effect, because nothing about the evasion of such an obligation tells us whether the conduct harms competition.

The district court bases its determination that Qualcomm’s conduct is anticompetitive on the fact that it enables the company to avoid patent exhaustion, FRAND commitments, and thus price competition in the chip market. But this conclusion is directly precluded by the Supreme Court’s holding in NYNEX.

Indeed, in Rambus, the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.”

As Josh Wright has noted:

[T]he objection to the “evasion” of any constraint approach is… that it opens the door to enforcement actions applied to business conduct that is not likely to harm competition and might be welfare increasing.

Thus NYNEX and Rambus (and linkLine) reinforce the Court’s repeated holding that an inference of harm to competition is permissible only where conduct points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” do not suffice.

The district court’s elaborate theory of harm rests fundamentally on the claim that Qualcomm injures rivals—and the record is devoid of evidence demonstrating actual harm to competition. Instead, the court infers it from what it labels “unreasonably high” royalty rates, enabled by Qualcomm’s evasion of competition from rivals. In turn, the court finds that that evasion of competition can be the source of liability if what Qualcomm evaded was an antitrust duty to deal. And, in impermissibly circular fashion, the court finds that Qualcomm indeed evaded an antitrust duty to deal—because its conduct allowed it to sustain “unreasonably high” prices. 

The Court’s antitrust error cost jurisprudence—from Brooke Group to NYNEX to Trinko & linkLine—stands for the proposition that no such circular inferences are permitted.

The district court’s foreclosure analysis also improperly relies on inferences in lieu of economic evidence

Because the district court doesn’t perform a competitive effects analysis, it fails to demonstrate the requisite “substantial” foreclosure of competition required to sustain a claim of anticompetitive exclusion. Instead the court once again infers anticompetitive harm from harm to competitors. 

The district court makes no effort to establish the quantity of competition foreclosed, as required by the Supreme Court. Nor does the court demonstrate that the alleged foreclosure harms competition, as opposed to just rivals. Foreclosure in itself is not impermissible and may be perfectly consistent with procompetitive conduct.

Again citing Microsoft, the district court asserts that a quantitative finding is not required. Yet, as the court’s citation to Microsoft should have made clear, in its stead a court must find actual anticompetitive effect; it may not simply assert it. As Microsoft held: 

It is clear that in all cases the plaintiff must… prove the degree of foreclosure. This is a prudential requirement; exclusivity provisions in contracts may serve many useful purposes. 

The court essentially infers substantiality from the fact that Qualcomm entered into exclusive deals with Apple (actually, volume discounts), from which the court concludes that Qualcomm foreclosed rivals’ access to a key customer. But its inference that this led to substantial foreclosure is based on internal business statements—so-called “hot docs”—characterizing the importance of Apple as a customer. Yet, as Geoffrey Manne and Marc Williamson explain, such documentary evidence is unreliable as a guide to economic significance or legal effect: 

Business people will often characterize information from a business perspective, and these characterizations may seem to have economic implications. However, business actors are subject to numerous forces that influence the rhetoric they use and the conclusions they draw….

There are perfectly good reasons to expect to see “bad” documents in business settings when there is no antitrust violation lurking behind them.

Assuming such language has the requisite economic or legal significance is unsupportable—especially when, as here, the requisite standard demands a particular quantitative significance.

Moreover, the court’s “surcharge” theory of exclusionary harm rests on assumptions regarding the mechanism by which the alleged surcharge excludes rivals and harms consumers. But the court incorrectly asserts that only one mechanism operates—and it makes no effort to quantify it. 

The court cites “basic economics” via Mankiw’s Principles of Microeconomics text for its conclusion:

The surcharge affects demand for rivals’ chips because as a matter of basic economics, regardless of whether a surcharge is imposed on OEMs or directly on Qualcomm’s rivals, “the price paid by buyers rises, and the price received by sellers falls.” Thus, the surcharge “places a wedge between the price that buyers pay and the price that sellers receive,” and demand for such transactions decreases. Rivals see lower sales volumes and lower margins, and consumers see less advanced features as competition decreases.

But even assuming the court is correct that Qualcomm’s conduct entails such a surcharge, basic economics does not hold that decreased demand for rivals’ chips is the only possible outcome. 

In actuality, an increase in the cost of an input for OEMs can have three possible effects:

  1. OEMs can pass all or some of the cost increase on to consumers in the form of higher phone prices. Assuming some elasticity of demand, this would mean fewer phone sales and thus less demand by OEMs for chips, as the court asserts. But the extent of that effect would depend on consumers’ demand elasticity and the magnitude of the cost increase as a percentage of the phone price. If demand is highly inelastic at this price (i.e., relatively insensitive to the relevant price change), it may have a tiny effect on the number of phones sold and thus the number of chips purchased—approaching zero as price insensitivity increases.
  2. OEMs can absorb the cost increase and realize lower profits but continue to sell the same number of phones and purchase the same number of chips. This would not directly affect demand for chips or their prices.
  3. OEMs can respond to a price increase by purchasing fewer chips from rivals and more chips from Qualcomm. While this would affect rivals’ chip sales, it would not necessarily affect consumer prices, the total number of phones sold, or OEMs’ margins—that result would depend on whether Qualcomm’s chips cost more or less than its rivals’. If Qualcomm’s chips were cheaper, the switch could even increase OEMs’ margins and/or lower consumer prices and increase output.

Alternatively, of course, the effect could be some combination of these.
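A back-of-the-envelope sketch of the first of these effects (every number below is assumed for illustration; none comes from the record) shows why the magnitude turns on pass-through and on consumers’ price elasticity:

```python
# Back-of-the-envelope sketch; all numbers are assumptions, not record
# evidence. It shows why the first effect's size depends on pass-through
# and on consumers' price elasticity of demand for phones.

def phone_demand_change(surcharge, phone_price, pass_through, elasticity):
    """Approximate fractional change in phones sold (and thus chips bought)
    from a per-phone surcharge partly passed on to consumers."""
    pct_price_increase = pass_through * surcharge / phone_price
    return elasticity * pct_price_increase  # elasticity is negative

# A $10 surcharge on a $700 phone, fully passed through, fairly elastic demand:
print(phone_demand_change(10, 700, pass_through=1.0, elasticity=-1.5))
# about -0.021, i.e. roughly a 2% drop in units sold

# Same surcharge, half absorbed by the OEM, highly inelastic demand:
print(phone_demand_change(10, 700, pass_through=0.5, elasticity=-0.2))
# about -0.0014, an effect approaching zero
```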

Whether any of these outcomes would substantially exclude rivals is inherently uncertain to begin with. But demonstrating a reduction in rivals’ chip sales is a necessary but not sufficient condition for proving anticompetitive foreclosure. The FTC didn’t even demonstrate that rivals were substantially harmed, let alone that there was any effect on consumers—nor did the district court make such findings. 

Doing so would entail consideration of whether decreased demand for rivals’ chips flows from reduced consumer demand or OEMs’ switching to Qualcomm for supply, how consumer demand elasticity affects rivals’ chip sales, and whether Qualcomm’s chips were actually less or more expensive than rivals’. Yet the court determined none of these. 

Conclusion

Contrary to established Supreme Court precedent, the district court’s decision relies on mere inferences to establish anticompetitive effect. The decision, if it stands, would render a wide range of potentially procompetitive conduct presumptively illegal and thus harm consumer welfare. It should be reversed by the Ninth Circuit.

Joining ICLE on the brief are:

  • Donald J. Boudreaux, Professor of Economics, George Mason University
  • Kenneth G. Elzinga, Robert C. Taylor Professor of Economics, University of Virginia
  • Janice Hauge, Professor of Economics, University of North Texas
  • Justin (Gus) Hurwitz, Associate Professor of Law, University of Nebraska College of Law; Director of Law & Economics Programs, ICLE
  • Thomas A. Lambert, Wall Chair in Corporate Law and Governance, University of Missouri Law School
  • John E. Lopatka, A. Robert Noll Distinguished Professor of Law, Penn State University Law School
  • Daniel Lyons, Professor of Law, Boston College Law School
  • Geoffrey A. Manne, President and Founder, International Center for Law & Economics; Distinguished Fellow, Northwestern University Center on Law, Business & Economics
  • Alan J. Meese, Ball Professor of Law, William & Mary Law School
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics Emeritus, Emory University
  • Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics, Chapman University School of Business; Nobel Laureate in Economics, 2002
  • Michael Sykuta, Associate Professor of Economics, University of Missouri


After spending a few years away from ICLE, directly engaged in the day-to-day grind of indigent criminal defense as a public defender, I now have a new appreciation for the ways economic tools can explain behavior in areas I had not previously studied. For instance, I think the law and economics tradition, specifically the insights of Ludwig von Mises and Friedrich von Hayek on the importance of price signals, can explain one of the major problems for public defenders and their clients: without price signals, there is no rational way to determine the best way to spend one’s time.

I believe the most common complaints about how public defenders represent their clients are better understood not primarily as a lack of funding, a lack of effort or care, or even simply a lack of time for overburdened lawyers, but as an allocation problem. In the absence of price signals, there is no rational way to determine the best way to spend one’s time as a public defender. (Note: Many jurisdictions use the model of indigent defense described here, in which lawyers are paid a salary to work for the public defender’s office. However, others use models like contracting with lawyers for particular cases, appointing lawyers for a flat fee, relying on non-profit agencies, or some type of hybrid. These models all have their own advantages and disadvantages, but this blog post is only about the issue of price signals for lawyers who work within a public defender’s office.)

As Mises and Hayek taught us, price signals carry a great deal of information; indeed, they make economic calculation possible. Their critique of socialism was built around this idea: that the person in charge of making economic choices without prices and the profit-and-loss mechanism is “groping in the dark.”

This isn’t to say that people haven’t tried to find ways to figure out the best way to spend their time in the absence of the profit-and-loss mechanism. In such environments, bureaucratic rules often replace price signals in directing human action. For instance, lawyers have rules of professional conduct. These rules, along with concerns about reputation and other institutional checks, may guide lawyers on how best to spend their time as a general matter. But even these things are no match for price signals in determining the most efficient way to allocate the scarcest resource of all: time.

Imagine two lawyers: one works for a public defender’s office and receives a salary that does not depend on caseload or billable hours; the other is a private defense lawyer who charges his client for the work he puts in.

In either case, the lawyer who is handed a file for a case scheduled for trial months in advance has a choice to make: do I start working on this now, or do I put it on the back burner because of cases with much closer deadlines? A cursory review of the file shows there may be a possible suppression issue that will require further investigation. A successful suppression motion would likely lead to a resolution of the case that would not result in a conviction, but it would take considerable time – time which could be spent working on numerous client files with closer trial dates. For the sake of this hypothetical, assume there is a strong legal basis for filing the suppression motion (i.e., it is not frivolous).

The private defense lawyer has a mechanism beyond what is available to public defenders to determine how to handle this case: price signals. He can bring the suppression issue to his client’s attention, explain the likelihood of success, and then offer to file and argue the suppression motion for an agreed-upon price. The client would then have the ability to determine with counsel whether this is worthwhile.

The public defender, on the other hand, does not have price signals to determine where to put this suppression motion among his other workload. He could spend the time necessary to develop the facts and research the law for the suppression motion, but unless there is a quickly approaching deadline for the motion to be filed, there will be many other cases in the queue with closer deadlines begging for his attention. Clients, who have no rationing principle based in personal monetary costs, would obviously prefer their public defender file any and all motions which have any chance whatsoever to help them, regardless of merit.

What this hypothetical shows is that public defenders do not face the same incentive structure as private lawyers when it comes to the allocation of time. But neither do criminal defendants. Indigent defendants who qualify for public defender representation often complain about their “public pretender” for “not doing anything for them.” But the simple truth is that the public defender is making choices about how to spend his time more or less by his own determination of where he can be most useful. Deadlines often drive the review of cases, along with which clients send the most letters or make the most calls. The actual evaluation of which cases have the most merit can fall through the cracks. Oftentimes this means cases are worked in chronological order, and insufficient time and effort is spent on particular cases that would have merited more investment because of quickly approaching deadlines in other cases. Sometimes it means that the most annoying clients get the most time spent on their behalf, irrespective of the merits of their case. At best, public defenders act like battlefield medics, performing triage by spending their time where they believe they can help the most.

Unlike private criminal defense lawyers, public defenders typically can’t reject cases because their caseload has grown too big, or charge a higher price in order to take on a particularly difficult and time-consuming case. The public defender is therefore left to guess at the best use of his time using the heuristics described above and to do the very best he can under the circumstances. Unfortunately, those heuristics simply can’t replace price signals in determining the best use of one’s time.

As criminal justice reform becomes a policy issue for both left and right, law and economics analysis should have a place in the conversation. Any reforms of indigent defense that are part of this broader effort should take into consideration the calculation problem inherent to the public defender’s office. Other institutional arrangements that do not suffer from this particular problem, like a well-designed voucher system, may be preferable.

It might surprise some readers to learn that we think the Court’s decision today in Apple v. Pepper reaches — superficially — the correct result. But, we hasten to add, the Court’s reasoning (and, for that matter, the dissent’s) is completely wrongheaded. It would be an understatement to say that the Court reached the right result for the wrong reason; in fact, the Court’s analysis wasn’t even in the same universe as the correct reasoning.

Below we lay out our assessment, in a post drawn from an article forthcoming in the Nebraska Law Review.

Did the Court forget that, just last year, it decided Amex, the most significant U.S. antitrust case in ages?

What is most remarkable about the decision (and the dissent) is that neither mentions Ohio v. Amex, nor even the two-sided market context in which the transactions at issue take place.

If the decision in Apple v. Pepper had hewed to the precedent established by Ohio v. Amex, it would have started with the observation that the relevant market analysis for the provision of app services is an integrated one, in which the overall effect of Apple’s conduct on both app users and app developers must be evaluated. A crucial implication of the Amex decision is that participants on both sides of a transactional platform are part of the same relevant market, and the terms of their relationship to the platform are inextricably intertwined.

Under this conception of the market, it’s difficult to maintain that either side does not have standing to sue the platform for the terms of its overall pricing structure, whether the specific terms at issue apply directly to that side or not. Both end users and app developers are “direct” purchasers from Apple — of different products, but in a single, inextricably interrelated market. Both groups should have standing.

More controversially, the logic of Amex also dictates that both groups should be able to establish antitrust injury — harm to competition — by showing harm to either group, as long as the plaintiff establishes the requisite interrelatedness of the two sides of the market.

We believe that the Court was correct to decide in Amex that effects falling on the “other” side of a tightly integrated, two-sided market from challenged conduct must be addressed by the plaintiff in making its prima facie case. But that outcome entails a market definition that places both sides of such a market in the same relevant market for antitrust analysis.
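A stylized sketch of that netting exercise (with assumed numbers, drawn from neither case) may make the point concrete:

```python
# Purely illustrative numbers (from neither case): under Amex, a price
# increase on one side of a transaction platform does not by itself show
# harm; what matters is the net, two-sided price (along with output).

merchant_fee_increase = 0.30        # per transaction, charged to merchants
cardholder_rewards_increase = 0.25  # per transaction, returned to cardholders

net_two_sided_price_change = merchant_fee_increase - cardholder_rewards_increase
print(round(net_two_sided_price_change, 2))  # 0.05: only this net figure, together
                                             # with output effects, speaks to harm
```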

As a result, the Court’s holding in Amex should also have required a finding in Apple v. Pepper that an app user on one side of the platform who transacts with an app developer on the other side of the market, in a transaction made possible and directly intermediated by Apple’s App Store, should similarly be deemed in the same market for standing purposes.

Relative to a strict construction of the traditional baseline, the former (requiring plaintiffs to address effects on both sides of the market) imposes an additional burden on two-sided market plaintiffs, while the latter (broader standing) lessens that burden. Whether the net effect is more or fewer successful cases in two-sided markets is unclear, of course. But from the perspective of aligning evidentiary and substantive doctrine with economic reality, such an approach would be a clear improvement.

Critics accuse the Court of making antitrust cases unwinnable against two-sided market platforms thanks to Amex’s requirement that a prima facie showing of anticompetitive effect requires assessment of the effects on both sides of a two-sided market and proof of a net anticompetitive outcome. The critics should have been chastened by a proper decision in Apple v. Pepper. As it is, the holding (although not the reasoning) still may serve to undermine their fears.

But critics should have recognized that a necessary corollary of Amex’s “expanded” market definition is that, relative to previous standing doctrine, a greater number of prospective parties should have standing to sue.

More important, the Court in Apple v. Pepper should have recognized this. Although nominally limited to the indirect purchaser doctrine, the case presented the Court with an opportunity to grapple with this logical implication of its Amex decision. It failed to do so.

On the merits, it looks like Apple should win. But, for much the same reason, the Respondents in Apple v. Pepper should have standing

This does not, of course, mean that either party should win on the merits. Indeed, on the merits of the case, the Petitioner in Apple v. Pepper appears to have the stronger argument, particularly in light of Amex, which (assuming the App Store is construed as some species of a two-sided “transaction” market) directs that Respondents bear the burden of considering harms and efficiencies across both sides of the market.

At least on the basis of the limited facts as presented in the case thus far, Respondents have not remotely met their burden of proving anticompetitive effects in the relevant market.

The actual question presented in Apple v. Pepper concerns standing, not whether the plaintiffs have made out a viable case on the merits. Thus it may seem premature to consider aspects of the latter in addressing the former. But the structure of the market considered by the court should be consistent throughout its analysis.

Adjustments to standing in the context of two-sided markets must be made in concert with the nature of the substantive rule of reason analysis that will be performed in a case. The two doctrines are connected not only by the just demands for consistency, but by the error-cost framework of the overall analysis, which runs throughout the stages of an antitrust case.

Here, the two-sided markets approach in Amex properly understands that conduct by a platform has relevant effects on both sides of its interrelated two-sided market. But that stems from the actual economics of the platform; it is not merely a function of a judicial construct. It thus holds true at all stages of the analysis.

The implication for standing is that users on both sides of a two-sided platform may suffer similarly direct (or indirect) injury as a result of the platform’s conduct, regardless of the side to which that conduct is nominally addressed.

The consequence, then, of Amex’s understanding of the market is that more potential plaintiffs — specifically, plaintiffs on both sides of a two-sided market — may claim to suffer antitrust injury.

Why the myopic focus of the holding (and dissent) on Illinois Brick is improper: It’s about the market definition, stupid!

Moreover, because of the Amex understanding, the problem of analyzing the pass-through of damages at issue in Illinois Brick (with which the Court entirely occupies itself in Apple v. Pepper) is either mitigated or inevitable.

In other words, either the users on the different sides of a two-sided market suffer direct injury without pass-through under a proper definition of the relevant market, or else their interrelatedness is so strong that, complicated as it may be, the needs of substantive accuracy trump the administrative costs in sorting out the incidence of the costs, and courts cannot avoid them.

Illinois Brick’s indirect purchaser doctrine was designed for an environment in which the relationship between producers and consumers is mediated by a distributor in a direct, linear supply chain; it was not designed for platforms. Although the question presented in Apple v. Pepper is explicitly about whether the Illinois Brick “indirect purchaser” doctrine applies to the Apple App Store, that determination is contingent on the underlying product market definition (whether the product market is in fact well-specified by the parties and the court or not).

Particularly where intermediaries exist precisely to address transaction costs between “producers” and “consumers,” the platform services they provide may be central to the underlying claim in a way that the traditional direct/indirect filters — and their implied relevant markets — miss.

Further, the Illinois Brick doctrine was itself based not on the substantive necessity of cutting off liability evaluations at a particular level of distribution, but on administrability concerns. In particular, the Court was concerned with preventing duplicative recovery when there were many potential groups of plaintiffs, as well as preventing injustices that would occur if unknown groups of plaintiffs inadvertently failed to have their rights adequately adjudicated in absentia. It was also concerned with avoiding needlessly complicated damages calculations.

But, almost by definition, the tightly coupled nature of the two sides of a two-sided platform should mitigate the concerns about duplicative recovery and unknown parties. Moreover, much of the presumed complexity in damages calculations in a platform setting arises from the nature of the platform itself. Assessing and apportioning damages may be complicated, but such is the nature of complex commercial relationships — the same would be true, for example, of damages calculations between vertically integrated companies that transact simultaneously at multiple levels, or between cross-licensing patent holders/implementers. In fact, if anything, the judicial efficiency concerns in Illinois Brick point toward the increased importance of properly assessing the nature of the product or service of the platform in order to ensure that it accurately encompasses the entire relevant transaction.

Put differently, under a proper, more-accurate market definition, the “direct” and “indirect” labels don’t necessarily reflect either business or antitrust realities.

Where the Court in Apple v. Pepper really misses the boat is in its overly formalistic claim that the business model (and thus the product) underlying the complained-of conduct doesn’t matter:

[W]e fail to see why the form of the upstream arrangement between the manufacturer or supplier and the retailer should determine whether a monopolistic retailer can be sued by a downstream consumer who has purchased a good or service directly from the retailer and has paid a higher-than-competitive price because of the retailer’s unlawful monopolistic conduct.

But Amex held virtually the opposite:

Because “[l]egal presumptions that rest on formalistic distinctions rather than actual market realities are generally disfavored in antitrust law,” courts usually cannot properly apply the rule of reason without an accurate definition of the relevant market.

* * *

Price increases on one side of the platform likewise do not suggest anticompetitive effects without some evidence that they have increased the overall cost of the platform’s services. Thus, courts must include both sides of the platform—merchants and cardholders—when defining the credit-card market.

In the face of novel business conduct, novel business models, and novel economic circumstances, the degree of substantive certainty may be eroded, as may the reasonableness of the expectation that typical evidentiary burdens accurately reflect competitive harm. Modern technology — and particularly the platform business model endemic to many modern technology firms — presents a need for courts to adjust their doctrines in the face of such novel issues, even if doing so adds additional complexity to the analysis.

The unlearned market-definition lesson of the Eighth Circuit’s Campos v. Ticketmaster dissent

The Eighth Circuit’s Campos v. Ticketmaster case demonstrates the way market definition shapes the application of the indirect purchaser doctrine. Indeed, the dissent in that case looms large in the Ninth Circuit’s decision in Apple v. Pepper. [Full disclosure: One of us (Geoff) worked on the dissent in Campos v. Ticketmaster as a clerk to Eighth Circuit Judge Morris S. Arnold.]

In Ticketmaster, the plaintiffs alleged that Ticketmaster abused its monopoly in ticket distribution services to force supracompetitive charges on concert venues — a practice that led to anticompetitive prices for concert tickets. Although not prosecuted as a two-sided market, the business model is strikingly similar to the App Store model, with Ticketmaster charging fees to venues and then facilitating ticket purchases between venues and concert-goers.

As the dissent noted, however:

The monopoly product at issue in this case is ticket distribution services, not tickets.

Ticketmaster supplies the product directly to concert-goers; it does not supply it first to venue operators who in turn supply it to concert-goers. It is immaterial that Ticketmaster would not be supplying the service but for its antecedent agreement with the venues.

But it is quite relevant that the antecedent agreement was not one in which the venues bought some product from Ticketmaster in order to resell it to concert-goers.

More important, and more telling, is the fact that the entirety of the monopoly overcharge, if any, is borne by concert-goers.

In contrast to the situations described in Illinois Brick and the literature that the court cites, the venues do not pay the alleged monopoly overcharge — in fact, they receive a portion of that overcharge from Ticketmaster. (Emphasis added).

Thus, if there was a monopoly overcharge, it was borne entirely by concert-goers. As a result, apportionment — the complexity of which gives rise to the standard in Illinois Brick — was not a significant issue. And the antecedent transaction that allegedly put concert-goers in an indirect relationship with Ticketmaster is one in which Ticketmaster and concert venues divvied up the alleged monopoly spoils, not one in which the venues absorbed their share of the monopoly overcharge.

The analogy to Apple v. Pepper is nearly perfect. Apple sits between developers on one side and consumers on the other, charges a fee to developers for app distribution services, and facilitates app sales between developers and users. It is possible to try to twist the market definition exercise to construe the separate contracts between developers and Apple on one hand, and the developers and consumers on the other, as some sort of complicated version of the classical manufacturing and distribution chains. But, more likely, it is advisable to actually inquire into the relevant factual differences that underpin Apple’s business model and adapt how courts consider market definition for two-sided platforms.

Indeed, Hanover Shoe and Illinois Brick were born out of a particular business reality in which businesses structured themselves in what are now classical production and distribution chains. The Supreme Court adopted the indirect purchaser rule as a prudential limitation on antitrust law in order to optimize the judicial oversight of such cases. It seems strangely nostalgic to reflexively try to fit new business methods into old legal analyses, when prudence and reality dictate otherwise.

The dissent in Ticketmaster was ahead of its time insofar as it recognized that the majority’s formal description of the ticket market was an artifact of viewing what was actually something much more like a ticket-services platform operated by Ticketmaster through the poor lens of the categories established decades earlier.

The Ticketmaster dissent’s observations demonstrate that market definition and antitrust standing are interrelated. It makes no sense to adhere to a restrictive reading of the latter if it connotes an economically improper understanding of the former. Ticketmaster provided an intermediary service — perhaps not quite a two-sided market, but something close — that stands outside a traditional manufacturing supply chain. Had it been offered by the venues themselves and bundled into the price of concert tickets there would be no question of injury and of standing (nor would market definition matter much, as both tickets and distribution services would be offered as a joint product by the same parties, in fixed proportions).

What antitrust standing doctrine should look like after Amex

There are some clear implications for antitrust doctrine that (should) follow from the preceding discussion.

At the pleading stage, a plaintiff can choose to allege that a defendant operates either as a two-sided market or in a more traditional, linear chain. If the plaintiff alleges a two-sided market, then, to demonstrate standing, it need only show that injury occurred to some subset of platform users with which the plaintiff is inextricably interrelated. The plaintiff would not need to demonstrate injury to him or herself, nor allege net harm, nor show directness.

In response, a defendant can contest standing by challenging the interrelatedness of the plaintiff and the group of platform users with whom the plaintiff claims interrelatedness. If the defendant does not challenge the allegation that it operates a two-sided market, it could not challenge standing by showing indirectness, by showing that the plaintiff had not alleged personal injury, or by showing that the plaintiff had not alleged a net harm.

Once past a determination of standing, however, a plaintiff who pleads a two-sided market would not be able to later withdraw this allegation in order to lessen the attendant legal burdens.

If the court accepts that the defendant is operating a two-sided market, both parties would be required to frame their allegations and defenses in accordance with the nature of the two-sided market and thus the holding in Amex. This is critical because, whereas alleging a two-sided market may make it easier for plaintiffs to demonstrate standing, Amex’s requirement that net harm be demonstrated across interrelated sets of users makes it more difficult for plaintiffs to present a viable prima facie case. Further, defendants would not be barred from presenting efficiencies defenses based on benefits that interrelated users enjoy.

Conclusion: The Court in Apple v. Pepper should have acknowledged the implications of its holding in Amex

After Amex, claims against two-sided platforms might require more evidence to establish anticompetitive harm, but that business model also means that such firms open themselves up to a larger pool of potential plaintiffs. The legal principles still apply, but the relative importance of those principles to judicial outcomes shifts (or should shift) in line with the unique economic position of potential plaintiffs and defendants in a platform environment.

Whether a priori the net result is more or fewer cases and more or fewer victories for plaintiffs is not the issue; what matters is matching the legal and economic theory to the relevant facts in play. Moreover, decrying Amex as the end of antitrust was premature: the actual effect on injured parties can’t be known until other changes (like standing for a greater number of plaintiffs) are factored into the analysis. The Court’s holding in Apple v. Pepper sidesteps this issue entirely, and thus fails to properly move antitrust doctrine forward in line with its holding in Amex.

Of course, it’s entirely possible that platforms and courts might be inundated with expensive and difficult to manage lawsuits. There may be reasons of administrability for limiting standing (as Illinois Brick perhaps prematurely did for fear of the costs of courts’ managing suits). But then that should have been the focus of the Court’s decision.

Allowing standing in Apple v. Pepper permits exactly the kind of legal experimentation needed to enable the evolution of antitrust doctrine along with new business realities. But in some ways the Court reached the worst possible outcome. It announced a rule that permits more plaintiffs to establish standing, but it did not direct lower courts to assess standing within the proper analytical frame. Instead, it just expands standing in a manner unmoored from the economic — and, indeed, judicial — context. That’s not a recipe for the successful evolution of antitrust doctrine.

Zoom, one of Silicon Valley’s lesser-known unicorns, has just gone public. At the time of writing, its shares are trading at about $65.70, placing the company’s value at $16.84 billion. There are good reasons for this success. According to its Form S-1, Zoom’s revenue rose from about $60 million in 2017 to a projected $330 million in 2019, and the company has already surpassed break-even. This growth was notably fueled by a thriving community of users who collectively spend approximately 5 billion minutes per month in Zoom meetings.

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects. For instance, the value of Skype to one user depends – at least to some extent – on the number of other people that might be willing to use the network. In these settings, it is often said that positive feedback loops may cause the market to tip in favor of a single firm that is then left with an unassailable market position. Although Zoom still faces significant competitive challenges, it has nonetheless established a strong position in a market previously dominated by powerful incumbents who could theoretically count on network effects to stymie its growth.
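A toy adoption model (all parameters invented; this is not an empirical claim about the video communications market) illustrates why that is not paradoxical: a small quality edge cannot overcome an installed base, but a large enough one can tip the market even from a tiny foothold:

```python
# Toy model with invented parameters: users value intrinsic quality plus a
# network term proportional to the installed share of other users.

def final_entrant_share(q_entrant, q_incumbent=1.0, alpha=0.5,
                        share=0.02, steps=50):
    """Crude best-response dynamic: each period, users drift toward the
    network offering the higher (quality + alpha * installed share)."""
    for _ in range(steps):
        u_entrant = q_entrant + alpha * share
        u_incumbent = q_incumbent + alpha * (1 - share)
        share = min(1.0, share + 0.1) if u_entrant > u_incumbent else max(0.0, share - 0.1)
    return share

# A modest quality edge is swamped by the incumbent's network: excess inertia.
print(final_entrant_share(q_entrant=1.2))   # 0.0

# A big enough quality advantage tips the market despite the installed base.
print(final_entrant_share(q_entrant=1.6))   # 1.0
```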

What is more, Zoom chose to compete head-on with these incumbents. It did not create a new market or a highly differentiated product. Zoom’s Form S-1 is quite revealing. The company cites the quality of its product as its most important competitive strength. Similarly, when listing the main benefits of its platform, Zoom emphasizes that its software is “easy to use”, “easy to deploy and manage”, “reliable”, etc. In its own words, Zoom has thus gained a foothold by offering an existing service that works better than that of its competitors.

And yet, this is precisely the type of story that a literal reading of the network effects literature would suggest is impossible, or at least highly unlikely. For instance, the foundational papers on network effects often cite the example of the DVORAK keyboard (David, 1985; and Farrell & Saloner, 1985). These early scholars argued that, despite it being the superior standard, the DVORAK layout failed to gain traction because of the network effects protecting the QWERTY standard. In other words, consumers failed to adopt the superior DVORAK layout because they were unable to coordinate on their preferred option. It must be noted, however, that the conventional telling of this story was forcefully criticized by Liebowitz & Margolis in their classic 1990 article, The Fable of the Keys.

Despite Liebowitz & Margolis’ critique, the network effects story remains dominant in many quarters. In that respect, the emergence of Zoom is something of a cautionary tale. As influential as it may be, the network effects literature has tended to overlook a number of factors that may mitigate, or even eliminate, the likelihood of problematic outcomes. Zoom is yet another illustration that policymakers should be careful when they draw normative inferences from positive economics.

A Coasian perspective

It is now widely accepted that multi-homing and the absence of switching costs can significantly curtail the potentially undesirable outcomes that are sometimes associated with network effects. But other possibilities are often overlooked. For instance, almost none of the foundational network effects papers pays any attention to the application of the Coase theorem (though it has been well-recognized in the two-sided markets literature).

Take a purported market failure that is commonly associated with network effects: an installed base of users prevents the market from switching towards a new standard, even if it is superior (this is broadly referred to as “excess inertia,” while the opposite scenario is referred to as “excess momentum”). DVORAK’s failure is often cited as an example.

Astute readers will quickly recognize that this externality problem is not fundamentally different from those discussed in Ronald Coase’s masterpiece, “The Problem of Social Cost,” or Steven Cheung’s “The Fable of the Bees” (to which Liebowitz & Margolis paid homage in their article’s title). In the case at hand, there are at least two sets of externalities at play. First, early adopters of the new technology impose a negative externality on the old network’s installed base (by reducing its network effects), and a positive externality on other early adopters (by growing the new network). Conversely, installed base users impose a negative externality on early adopters and a positive externality on other remaining users.

Describing these situations (with a haughty confidence reminiscent of Paul Samuelson and Arthur Cecil Pigou), Joseph Farrell and Garth Saloner conclude that:

In general, he or she [i.e. the user exerting these externalities] does not appropriately take this into account.

Similarly, Michael Katz and Carl Shapiro assert that:

In terms of the Coase theorem, it is very difficult to design a contract where, say, the (potential) future users of HDTV agree to subsidize today’s buyers of television sets to stop buying NTSC sets and start buying HDTV sets, thereby stimulating the supply of HDTV programming.

And yet it is far from clear that consumers and firms can never come up with solutions that mitigate these problems. As Daniel Spulber has suggested, referral programs offer a case in point. These programs usually allow early adopters to receive rewards in exchange for bringing new users to a network. One salient feature of these programs is that they do not simply charge a lower price to early adopters; instead, in order to obtain a referral fee, there must be some agreement between the early adopter and the user who is referred to the platform. This leaves ample room for the reallocation of rewards. Users might, for instance, choose to split the referral fee. Alternatively, the early adopter might invest time to familiarize the switching user with the new platform, hoping to earn money when the user jumps ship. Both of these arrangements may reduce switching costs and mitigate externalities.
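A stylized example (all numbers assumed) of the Coasean bargain a shareable referral fee makes possible:

```python
# Stylized, assumed numbers: how a shareable referral fee can internalize
# the switching externality discussed above.

referral_fee   = 20   # paid by the platform to the referring early adopter
switching_cost = 12   # borne by the user being referred
new_user_gain  = 5    # the referred user's own benefit from the better product

# With no transfer, the referred user will not switch on her own:
print(new_user_gain - switching_cost > 0)                 # False

# If the early adopter passes along part of the fee (any amount between 7
# and 20 works here), both parties gain and the switch happens:
transfer = 8
referred_user_net = new_user_gain - switching_cost + transfer   # 1
early_adopter_net = referral_fee - transfer                     # 12
print(referred_user_net > 0 and early_adopter_net > 0)    # True
```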

Daniel Spulber also argues that users may coordinate spontaneously. For instance, social groups often decide upon the medium they will use to communicate. Families might choose to stay on the same mobile phone network. And larger groups (such as an incoming class of students) may agree upon a social network to share necessary information, etc. In these contexts, there is at least some room to pressure peers into adopting a new platform.

Finally, firms and other forms of governance may also play a significant role. For instance, employees are routinely required to use a series of networked goods. Common examples include office suites, email clients, social media platforms (such as Slack), or video communications applications (Zoom, Skype, Google Hangouts, etc.). In doing so, firms presumably act as islands of top-down decision-making and impose those products that maximize the collective preferences of employers and employees. Similarly, a single firm choosing to join a network (notably by adopting a standard) may generate enough momentum for a network to gain critical mass. Apple’s decisions to adopt USB-C connectors on its laptops and to ditch headphone jacks on its iPhones both spring to mind. Likewise, it has been suggested that distributed ledger technology and initial coin offerings may facilitate the creation of new networks. The intuition is that so-called “utility tokens” may incentivize early adopters to join a platform, despite initially weak network effects, because they expect these tokens to increase in value as the network expands.

A combination of these arrangements might explain how Zoom managed to grow so rapidly, despite the presence of powerful incumbents. In its own words:

Our rapid adoption is driven by a virtuous cycle of positive user experiences. Individuals typically begin using our platform when a colleague or associate invites them to a Zoom meeting. When attendees experience our platform and realize the benefits, they often become paying customers to unlock additional functionality.

All of this is not to say that network effects will always be internalized through private arrangements, but rather that it is equally wrong to assume that transaction costs systematically prevent efficient coordination among users.

Misguided regulatory responses

Over the past couple of months, several antitrust authorities around the globe have released reports concerning competition in digital markets (UK, EU, Australia), or held hearings on this topic (US). A recurring theme throughout their published reports is that network effects almost inevitably weaken competition in digital markets.

For instance, the report commissioned by the European Commission mentions that:

Because of very strong network externalities (especially in multi-sided platforms), incumbency advantage is important and strict scrutiny is appropriate. We believe that any practice aimed at protecting the investment of a dominant platform should be minimal and well targeted.

The Australian Competition & Consumer Commission concludes that:

There are considerable barriers to entry and expansion for search platforms and social media platforms that reinforce and entrench Google and Facebook’s market power. These include barriers arising from same-side and cross-side network effects, branding, consumer inertia and switching costs, economies of scale and sunk costs.

Finally, a panel of experts in the United Kingdom found that:

Today, network effects and returns to scale of data appear to be even more entrenched and the market seems to have stabilised quickly compared to the much larger degree of churn in the early days of the World Wide Web.

To address these issues, these reports suggest far-reaching policy changes. These include shifting the burden of proof in competition cases from authorities to defendants, establishing specialized units to oversee digital markets, and imposing special obligations upon digital platforms.

The story of Zoom’s emergence and the important insights that can be derived from the Coase theorem both suggest that these fears may be somewhat overblown.

Rivals do indeed find ways to overthrow entrenched incumbents with some regularity, even when these incumbents are shielded by network effects. Of course, critics may retort that this is not enough, that competition may sometimes arrive too late (excess inertia, i.e., “a socially excessive reluctance to switch to a superior new standard”) or too fast (excess momentum, i.e., “the inefficient adoption of a new technology”), and that the problem is not just one of network effects, but also one of economies of scale, information asymmetry, etc. But this comes dangerously close to the Nirvana fallacy. To begin, it assumes that regulators are able to reliably navigate markets toward these optimal outcomes — which is questionable, at best. Moreover, the regulatory cost of imposing perfect competition in every digital market (even if it were possible) may well outweigh the benefits that this achieves. Mandating far-reaching policy changes in order to address sporadic and heterogeneous problems is thus unlikely to be the best solution.

Instead, the optimal policy notably depends on whether, in a given case, users and firms can coordinate their decisions without intervention in order to avoid problematic outcomes. A case-by-case approach thus seems by far the best solution.

And competition authorities need look no further than their own decisional practice. The European Commission’s decision in the Facebook/WhatsApp merger offers a good example (this was before Margrethe Vestager’s appointment at DG Competition). In its decision, the Commission concluded that the fast-moving nature of the social network industry, widespread multi-homing, and the fact that neither Facebook nor WhatsApp controlled any essential infrastructure prevented network effects from acting as a barrier to entry. Regardless of its ultimate position, this seems like a vastly superior approach to competition issues in digital markets. The Commission adopted similar reasoning in the Microsoft/Skype merger. Unfortunately, the Commission seems to have departed from this measured attitude in more recent decisions. In the Google Search case, for example, the Commission assumes that the mere existence of network effects necessarily increases barriers to entry:

The existence of positive feedback effects on both sides of the two-sided platform formed by general search services and online search advertising creates an additional barrier to entry.

A better way forward

Although the positive economics of network effects are generally correct and most definitely useful, some of the normative implications that have been derived from them are deeply flawed. Too often, policymakers and commentators conclude that these potential externalities inevitably lead to stagnant markets where competition is unable to flourish. But this does not have to be the case. The emergence of Zoom shows that superior products may prosper despite the presence of strong incumbents and network effects.

Basing antitrust policies on sweeping presumptions about digital competition – such as the idea that network effects are rampant or the suggestion that online platforms necessarily imply “extreme returns to scale” – is thus likely to do more harm than good. Instead, antitrust authorities should take a leaf out of Ronald Coase’s book and avoid blackboard economics in favor of a more granular approach.

[TOTM: The following is the third in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.

This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]

[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]

The Department of Justice Antitrust Division (DOJ) and Federal Trade Commission (FTC) have spent a significant amount of time in federal court litigating major cases premised upon an anticompetitive foreclosure theory of harm. Bargaining models, a tool used commonly in foreclosure cases, have been essential to the government’s theory of harm in these cases. In vertical merger or conduct cases, the core theory of harm is usually a variant of the claim that the transaction (or conduct) strengthens the firm’s incentives to engage in anticompetitive strategies that depend on negotiations with input suppliers. Bargaining models are a key element of the agency’s attempt to establish those claims and to predict whether and how firm incentives will affect negotiations with input suppliers, and, ultimately, the impact on equilibrium prices and output. Application of bargaining models played a key role in evaluating the anticompetitive foreclosure theories in the DOJ’s litigation to block the proposed merger of AT&T and Time Warner. A similar model is at the center of the FTC’s antitrust claims against Qualcomm and its patent licensing business model.

Modern antitrust analysis does not condemn business practices as anticompetitive without solid economic evidence of an actual or likely harm to competition. This cautious approach was developed in the courts for two reasons. The first is that the difficulty of distinguishing between procompetitive and anticompetitive explanations for the same conduct suggests there is a high risk of error. The second is that those errors are more likely to be false positives than false negatives because empirical evidence and judicial learning have established that unilateral conduct is usually either procompetitive or competitively neutral. In other words, while the risk of anticompetitive foreclosure is real, courts have sensibly responded by requiring plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.

An economic model can help establish the likelihood and/or magnitude of competitive harm when the model carefully captures the key institutional features of the competition it attempts to explain. Naturally, this tends to mean that the economic theories and models proffered by dueling economic experts to predict competitive effects take center stage in antitrust disputes. The persuasiveness of an economic model turns on the robustness of its assumptions about the underlying market. Model predictions that are inconsistent with actual market evidence give one serious pause before accepting the results as reliable.

For example, many industries are characterized by bargaining between providers and distributors. The Nash bargaining framework can be used to predict the outcomes of bilateral negotiations based upon each party’s bargaining leverage. The model assumes that both parties are better off if an agreement is reached, but that as the utility of one party’s outside option increases relative to the bargain, it will capture an increasing share of the surplus. Courts have had to reconcile these seemingly complicated economic models with prior case law and, in some cases, with direct evidence that is apparently inconsistent with the results of the model.
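To make that mechanism concrete, here is a stylized sketch of a generic Nash bargaining split. It is textbook shorthand, not the model either expert used in these cases, and the figures are invented purely for illustration:

```python
# Generic Nash bargaining split: each party gets its outside option plus a share of the
# surplus created by reaching a deal. All numbers here are invented for illustration.
def nash_split(total_value, outside_a, outside_b, bargaining_power_a=0.5):
    surplus = total_value - outside_a - outside_b   # gains from reaching agreement
    payoff_a = outside_a + bargaining_power_a * surplus
    payoff_b = outside_b + (1 - bargaining_power_a) * surplus
    return payoff_a, payoff_b

# A deal worth 100 with symmetric bargaining power: improving A's outside option from
# 20 to 40 shifts the split in A's favor even though nothing else about the deal changes.
print(nash_split(100, outside_a=20, outside_b=30))  # (45.0, 55.0)
print(nash_split(100, outside_a=40, outside_b=30))  # (55.0, 45.0)
```

The only point of the sketch is the comparative static described above: the better a party’s outside option, the larger the share of the surplus it captures.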

Indeed, Professor Carl Shapiro recently used bargaining models to analyze harm to competition in two prominent cases alleging anticompetitive foreclosure—one initiated by the DOJ and one by the FTC—in which he served as the government’s expert economist. In United States v. AT&T Inc., Dr. Shapiro testified that the proposed transaction between AT&T and Time Warner would give the vertically integrated company leverage to extract higher prices for content from AT&T’s rival, Dish Network. Soon after, Dr. Shapiro presented a similar bargaining model in FTC v. Qualcomm Inc. He testified that Qualcomm leveraged its monopoly power over chipsets to extract higher royalty rates from smartphone OEMs, such as Apple, wishing to license its standard essential patents (SEPs). In each case, Dr. Shapiro’s models were criticized heavily by the defendants’ expert economists for ignoring market realities that play an important role in determining whether the challenged conduct was likely to harm competition.

Judge Leon’s opinion in AT&T/Time Warner—recently upheld on appeal—concluded that Dr. Shapiro’s application of the bargaining model was significantly flawed, based upon unreliable inputs, and undermined by evidence about actual market performance presented by defendant’s expert, Dr. Dennis Carlton. Dr. Shapiro’s theory of harm posited that the combined company would increase its bargaining leverage and extract greater affiliate fees for Turner content from AT&T’s distributor rivals. The increase in bargaining leverage was made possible by the threat of a post-merger blackout of Turner content for AT&T’s rivals. This theory rested on the assumption that the combined firm would have reduced financial exposure from a long-term blackout of Turner content and would therefore have more leverage to threaten a blackout in content negotiations. The purpose of his bargaining model was to quantify how much AT&T could extract from competitors subjected to a long-term blackout of Turner content.

Judge Leon highlighted a number of reasons for rejecting the DOJ’s argument. First, Dr. Shapiro’s model failed to account for existing long-term affiliate contracts, post-litigation offers of arbitration agreements, and the increasing competitiveness of the video programming and distribution industry. Second, Dr. Carlton had demonstrated persuasively that previous vertical integration in the video programming and distribution industry did not have a significant effect on content prices. Finally, Dr. Shapiro’s model primarily relied upon three inputs: (1) the total number of subscribers the unaffiliated distributor would lose in the event of a long-term blackout of Turner content, (2) the percentage of the distributor’s lost subscribers who would switch to AT&T as a result of the blackout, and (3) the profit margin AT&T would derive from the subscribers it gained from the blackout. Many of Dr. Shapiro’s inputs necessarily relied on critical assumptions and/or third-party sources. Judge Leon considered and discredited each input in turn. 

The parties in Qualcomm are, as of the time of this posting, still awaiting a ruling. Dr. Shapiro’s model in that case attempts to predict the effect of Qualcomm’s alleged “no license, no chips” policy. He compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs. In other words, the FTC’s theory of harm is based upon the premise that Qualcomm is charging a supra-FRAND rate for its SEPs (the “royalty surcharge”) that squeezes the margins of OEMs. That margin squeeze, the FTC alleges, prevents rival chipset suppliers from obtaining a sufficient return when negotiating with OEMs. The FTC predicts the end result is a reduction in competition and an increase in the price of devices to consumers.

Qualcomm, like Judge Leon in AT&T, questioned the robustness of Dr. Shapiro’s model and its predictions in light of conflicting market realities. For example, Dr. Shapiro argued that the

leverage that Qualcomm brought to bear on the chips shifted the licensing negotiations substantially in Qualcomm’s favor and led to a significantly higher royalty than Qualcomm would otherwise have been able to achieve.

Yet, on cross-examination, Dr. Shapiro declined to move from theory to empirics when asked if he had quantified the effects of Qualcomm’s practice on any other chip makers. Instead, Dr. Shapiro responded that he had not, but he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences.” Under Dr. Shapiro’s theory, one would predict that royalty rates were higher after Qualcomm obtained market power.

As with Dr. Carlton’s testimony inviting Judge Leon to square the DOJ’s theory with conflicting historical facts in the industry, Qualcomm’s economic expert, Dr. Aviv Nevo, provided an analysis of Qualcomm’s royalty agreements from 1990-2017, confirming that there was no economically meaningful difference between the royalty rates during the time frame when Qualcomm was alleged to have market power and the royalty rates outside of that time frame. He also presented evidence that ex ante royalty rates did not increase upon implementation of the CDMA standard or the LTE standard. Moreover, Dr. Nevo testified that the industry itself was characterized by declining prices and increasing output and quality.

Dr. Shapiro’s model in Qualcomm appears to suffer from many of the same flaws that ultimately discredited his model in AT&T/Time Warner: It is based upon assumptions that are contrary to real-world evidence and it does not robustly or persuasively identify anticompetitive effects. Some observers, including our Scalia Law School colleague and former FTC Chairman, Tim Muris, would apparently find it sufficient merely to allege a theoretical “ability to manipulate the marketplace.” But antitrust cases require actual evidence of harm. We think Professor Muris instead captured the appropriate standard in his important article rejecting attempts by the FTC to shortcut its requirement of proof in monopolization cases:

This article does reject, however, the FTC’s attempt to make it easier for the government to prevail in Section 2 litigation. Although the case law is hardly a model of clarity, one point that is settled is that injury to competitors by itself is not a sufficient basis to assume injury to competition …. Inferences of competitive injury are, of course, the heart of per se condemnation under the rule of reason. Although long a staple of Section 1, such truncation has never been a part of Section 2. In an economy as dynamic as ours, now is hardly the time to short-circuit Section 2 cases. The long, and often sorry, history of monopolization in the courts reveals far too many mistakes even without truncation.

Timothy J. Muris, The FTC and the Law of Monopolization, 67 Antitrust L. J. 693 (2000)

We agree. Proof of actual anticompetitive effects rather than speculation derived from models that are not robust to market realities is an important safeguard to ensure that Section 2 protects competition and not merely individual competitors.

The future of bargaining models in antitrust remains to be seen. Judge Leon certainly did not question the proposition that they could play an important role in other cases, and he closely dissected the testimony and models presented by both experts in AT&T/Time Warner. His opinion serves as an important reminder. As complex economic evidence like bargaining models becomes more common in antitrust litigation, judges must carefully engage with the experts on both sides to determine whether there is direct evidence on the likely competitive effects of the challenged conduct. Where “real-world evidence,” as Judge Leon called it, contradicts the predictions of a bargaining model, judges should reject the model rather than the reality. Bargaining models have many potentially important antitrust applications, including horizontal mergers involving a bargaining component (such as hospital mergers), vertical mergers, and licensing disputes. The analysis of those models by the Ninth and D.C. Circuits will have important implications for how they will be deployed by the agencies and parties moving forward.

Near the end of her new proposal to break up Facebook, Google, Amazon, and Apple, Senator Warren asks, “So what would the Internet look like after all these reforms?”

It’s a good question, because, as she herself notes, “Twenty-five years ago, Facebook, Google, and Amazon didn’t exist. Now they are among the most valuable and well-known companies in the world.”

To Warren, our most dynamic and innovative companies constitute a problem that needs solving.

She described the details of that solution in a blog post:

First, [my administration would restore competition to the tech sector] by passing legislation that requires large tech platforms to be designated as “Platform Utilities” and broken apart from any participant on that platform.

* * *

For smaller companies…, their platform utilities would be required to meet the same standard of fair, reasonable, and nondiscriminatory dealing with users, but would not be required to structurally separate….

* * *
Second, my administration would appoint regulators committed to reversing illegal and anti-competitive tech mergers….
I will appoint regulators who are committed to… unwind[ing] anti-competitive mergers, including:

– Amazon: Whole Foods; Zappos;
– Facebook: WhatsApp; Instagram;
– Google: Waze; Nest; DoubleClick

Elizabeth Warren’s brave new world

Let’s consider for a moment what this brave new world will look like — not the nirvana imagined by regulators and legislators who believe that decimating a company’s business model will deter only the “bad” aspects of the model while preserving the “good,” as if by magic, but the inevitable reality of antitrust populism.  

Utilities? Are you kidding? For an overview of what the future of tech would look like under Warren’s “Platform Utility” policy, take a look at your water, electricity, and sewage service. Have you noticed any improvement (or reduction in cost) in those services over the past 10 or 15 years? How about the roads? Amtrak? Platform businesses operating under a similar regulatory regime would also similarly stagnate. Enforcing platform “neutrality” necessarily requires meddling in the most minute of business decisions, inevitably creating unintended and costly consequences along the way.

Network companies, like all businesses, differentiate themselves by offering unique bundles of services to customers. By definition, this means vertically integrating with some product markets and not others. Why are digital assistants like Siri bundled into mobile operating systems? Why aren’t the vast majority of third-party apps also bundled into the OS? If you want utilities regulators instead of Google or Apple engineers and designers making these decisions on the margin, then Warren’s “Platform Utility” policy is the way to go.

Grocery Stores. To take one specific case cited by Warren, how much innovation was there in the grocery store industry before Amazon bought Whole Foods? Since the acquisition, large grocery retailers, like Walmart and Kroger, have increased their investment in online services to better compete with the e-commerce champion. Many industry analysts expect grocery stores to use computer vision technology and artificial intelligence to improve the efficiency of check-out in the near future.

Smartphones. Imagine how forced neutrality would play out in the context of iPhones. If Apple can’t sell its own apps, it also can’t pre-install its own apps. A brand new iPhone with no apps — and even more importantly, no App Store — would be, well, just a phone, out of the box. How would users even access a site or app store from which to download independent apps? Would Apple be allowed to pre-install someone else’s apps? That’s discriminatory, too. Maybe it will be forced to offer a menu of all available apps in all categories (like the famously useless browser ballot screen demanded by the European Commission in its Microsoft antitrust case)? It’s hard to see how that benefits consumers — or even app developers.

Source: Free Software Magazine

Internet Search. Or take search. Calls for “search neutrality” have been bandied about for years. But most proponents of search neutrality fail to recognize that all Google’s search results entail bias in favor of its own offerings. As Geoff Manne and Josh Wright noted in 2011 at the height of the search neutrality debate:

[S]earch engines offer up results in the form not only of typical text results, but also maps, travel information, product pages, books, social media and more. To the extent that alleged bias turns on a search engine favoring its own maps, for example, over another firm’s, the allegation fails to appreciate that text results and maps are variants of the same thing, and efforts to restrain a search engine from offering its own maps is no different than preventing it from offering its own search results.

Never mind that forced non-discrimination likely means Google offering only the antiquated “ten blue links” search results page it started with in 1998 instead of the far more useful “rich” results it offers today; logically, it would also mean Google somehow offering the set of links produced by any and all other search engines’ algorithms in lieu of its own. If you think Google will continue to invest in and maintain the wealth of services it offers today on the strength of the profits derived from those search results, well, Elizabeth Warren is probably already your favorite politician.

Source: Web Design Museum  

And regulatory oversight of algorithmic content won’t just result in an impoverished digital experience; it will inevitably lead to an authoritarian one, as well:

Any agency granted a mandate to undertake such algorithmic oversight, and override or reconfigure the product of online services, thereby controls the content consumers may access…. This sort of control is deeply problematic… [because it saddles users] with a pervasive set of speech controls promulgated by the government. The history of such state censorship is one which has demonstrated strong harms to both social welfare and rule of law, and should not be emulated.

Digital Assistants. Consider also the veritable cage match among the tech giants to offer “digital assistants” and “smart home” devices with ever-more features at ever-lower prices. Today the allegedly non-existent competition among these companies is played out most visibly in this multi-featured market, comprising advanced devices tightly integrated with artificial intelligence, voice recognition, advanced algorithms, and a host of services. Under Warren’s nondiscrimination principle this market disappears. Each device can offer only a connectivity platform (if such a service is even permitted to be bundled with a physical device…) — and nothing more.

But such a world entails not only the end of an entire, promising avenue of consumer-benefiting innovation, it also entails the end of a promising avenue of consumer-benefiting competition. It beggars belief that anyone thinks consumers would benefit by forcing technology companies into their own silos, ensuring that the most powerful sources of competition for each other are confined to their own fiefdoms by order of law.

Breaking business models

Beyond the product-feature dimension, Sen. Warren’s proposal would be devastating for innovative business models. Why is Amazon Prime Video bundled with free shipping? Because the marginal cost of distribution for video is close to zero and bundling it with Amazon Prime increases the value proposition for customers. Why is almost every Google service free to users? Because Google’s business model is supported by ads, not monthly subscription fees. Each of the tech giants has carefully constructed an ecosystem in which every component reinforces the others. Sen. Warren’s plan would not only break up the companies, it would prohibit their business models — the ones that both created and continue to sustain these products. Such an outcome would manifestly harm consumers.

Both of Warren’s policy “solutions” are misguided and will lead to higher prices and less innovation. Her cause for alarm is built on a multitude of mistaken assumptions, but let’s address just a few (Warren in bold):

  • “Nearly half of all e-commerce goes through Amazon.” Yes, but it has only 5% of total retail in the United States. As my colleague Kristian Stout says, “the Internet is not a market; it’s a distribution channel.”
  • “Amazon has used its immense market power to force smaller competitors like Diapers.com to sell at a discounted rate.” The real story, as the founders of Diapers.com freely admitted, is that they sold diapers as what they hoped would be a loss leader, intending to build out sales of other products once they had a base of loyal customers:

And so we started with selling the loss leader product to basically build a relationship with mom. And once they had the passion for the brand and they were shopping with us on a weekly or a monthly basis that they’d start to fall in love with that brand. We were losing money on every box of diapers that we sold. We weren’t able to buy direct from the manufacturers.

Like all entrepreneurs, Diapers.com’s founders took a calculated risk that didn’t pay off as hoped. Amazon subsequently acquired the company (after it had declined a similar buyout offer from Walmart). (Antitrust laws protect consumers, not inefficient competitors). And no, this was not a case of predatory pricing. After many years of trying to make the business profitable as a subsidiary, Amazon shut it down in 2017.

  • “In the 1990s, Microsoft — the tech giant of its time — was trying to parlay its dominance in computer operating systems into dominance in the new area of web browsing. The federal government sued Microsoft for violating anti-monopoly laws and eventually reached a settlement. The government’s antitrust case against Microsoft helped clear a path for Internet companies like Google and Facebook to emerge.” The government’s settlement with Microsoft is not the reason Google and Facebook were able to emerge. Neither company entered the browser market at launch. Instead, they leapfrogged the browser entirely and created new platforms for the web (only later did Google create Chrome).

    Furthermore, if the Microsoft case is responsible for “clearing a path” for Google, is it not also responsible for clearing a path for Google’s alleged depredations? If the answer is that antitrust enforcement should be consistently more aggressive in order to rein in Google, too, when it gets out of line, then how can we be sure that that same more-aggressive enforcement standard wouldn’t have curtailed the extent of the Microsoft ecosystem in which it was profitable for Google to become Google? Warren implicitly assumes that only the enforcement decision in Microsoft was relevant to Google’s rise. But Microsoft doesn’t exist in a vacuum. If Microsoft cleared a path for Google, so did every decision not to intervene, which, all combined, created the legal, business, and economic environment in which Google operates.

Warren characterizes Big Tech as a weight on the American economy. In fact, nothing could be further from the truth. These superstar companies are the drivers of productivity growth, all ranking at or near the top for most spending on research and development. And while data may not be the new oil, extracting value from it may require similar levels of capital expenditure. Last year, Big Tech spent as much or more on capex as the world’s largest oil companies:

Source: WSJ

Warren also faults Big Tech for a decline in startups, saying,

The number of tech startups has slumped, there are fewer high-growth young firms typical of the tech industry, and first financing rounds for tech startups have declined 22% since 2012.

But this trend predates the existence of the companies she criticizes, as this chart from Quartz shows:

The exact causes of the decline in business dynamism are still uncertain, but recent research points to a much more mundane explanation: demographics. Labor force growth has been declining, which has led to an increase in average firm age, nudging fewer workers to start their own businesses.

Furthermore, it’s not at all clear whether this is actually a decline in business dynamism, or merely a change in business model. We would expect to see the same pattern, for example, if would-be startup founders were designing their software for acquisition and further development within larger, better-funded enterprises.

Will Rinehart recently looked at the literature to determine whether there is indeed a “kill zone” for startups around Big Tech incumbents. One paper finds that “an increase in fixed costs explains most of the decline in the aggregate entrepreneurship rate.” Another shows an inverse correlation across 50 countries between GDP and entrepreneurship rates. Robert Lucas predicted these trends back in 1978, pointing out that productivity increases would lead to wage increases, pushing marginal entrepreneurs out of startups and into big companies.

It’s notable that many in the venture capital community would rather not have Sen. Warren’s “help.”

Arguably, it is also simply getting harder to innovate. As economists Nick Bloom, Chad Jones, John Van Reenen and Michael Webb argue,

just to sustain constant growth in GDP per person, the U.S. must double the amount of research effort searching for new ideas every 13 years to offset the increased difficulty of finding new ideas.

If this assessment is correct, it may well be that coming up with productive and profitable innovations is simply becoming more expensive, and thus, at the margin, each dollar of venture capital can fund less of it. Ironically, this also implies that larger firms, which can better afford the additional resources required to sustain exponential growth, are a crucial part of the solution, not the problem.

Warren believes that Big Tech is the cause of our social ills. But Americans have more trust in Amazon, Facebook, and Google than in the political institutions that would break them up. It would be wise for her to reflect on why that might be the case. By punishing our most valuable companies for past successes, Warren would chill competition and decrease returns to innovation.

Finally, in what can only be described as tragic irony, the most prominent political figure who shares Warren’s feelings on Big Tech is President Trump. Confirming the horseshoe theory of politics, far-left populism and far-right populism seem less distinguishable by the day. As our colleague Gus Hurwitz put it, with this proposal Warren is explicitly endorsing the unitary executive theory and implicitly endorsing Trump’s authority to direct his DOJ to “investigate specific cases and reach specific outcomes.” Which cases will he want to have investigated and what outcomes will he be seeking? More good questions that Senator Warren should be asking. The notion that competition, consumer welfare, and growth are likely to increase in such an environment is farcical.

[TOTM: The following is the second in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.

This post is authored by Luke Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship at the Owen Graduate School of Management at Vanderbilt University; former chief economist at the Antitrust Division of the US Department of Justice and the Federal Trade Commission), Michael Doane (Competition Economics, LLC) & Mikhael Shor (Associate Professor of Economics, University of Connecticut).]

[Froeb, Doane & Shor: This post does not attempt to answer the question of what the court should decide in FTC v. Qualcomm because we do not have access to the information that would allow us to make such a determination. Rather, we focus on economic issues confronting the court by drawing heavily from our writings in this area: Gregory Werden & Luke Froeb, Why Patent Hold-Up Does Not Violate Antitrust Law; Luke Froeb & Mikhael Shor, Innovators, Implementors and Two-sided Hold-up; Bernard Ganglmair, Luke Froeb & Gregory Werden, Patent Hold Up and Antitrust: How a Well-Intentioned Rule Could Retard Innovation.]

Not everything is “hold-up”

It is not uncommon—in fact it is expected—that parties to a negotiation would have different opinions about the reasonableness of any deal. Every buyer asks for a price as low as possible, and sellers naturally request prices at which buyers (feign to) balk. A recent movement among some lawyers and economists has been to label such disagreements in the context of standard-essential patents not as a natural part of bargaining, but as dispositive proof of “hold-up,” or the innovator’s purported abuse of newly gained market power to extort implementers. We have four primary issues with this hold-up fad.

First, such claims of “hold-up” are trotted out whenever an innovator’s royalty request offends the commentator’s sensibilities, and usually with reference to a theoretical hold-up possibility rather than any matter-specific evidence that hold-up is actually present. Second, as we have argued elsewhere, such arguments usually ignore the fact that implementers of innovations often possess significant countervailing power to “hold out” as well. This is especially true as implementers have successfully pushed to curtail injunctive relief in standard-essential patent cases. Third, as Greg Werden and Froeb have recently argued, it is not clear why patent holdup—even where it might exist—need implicate antitrust law rather than be adequately handled as a contractual dispute. Lastly, it is certainly not the case that every disagreement over the value of an innovation is an exercise in hold-up, as even economists and lawyers have not reached anything resembling a consensus on the correct interpretation of a “fair” royalty.

At the heart of this case (and many recent cases) are (1) an indictment of Qualcomm’s desire to charge royalties to the makers of consumer devices based on the value of its technology and (2) a lack (to the best of our knowledge from public documents) of well-vetted theoretical models that can provide the underpinning for the theory of the case. We discuss these in turn.

The smallest component “principle”

In arguing that “Qualcomm’s royalties are disproportionately high relative to the value contributed by its patented inventions,” (Complaint, ¶ 77) a key issue is whether Qualcomm can calculate royalties as a percentage of the price of a device, rather than a small percentage of the price of a chip. (Complaint, ¶¶ 61-76).

So what is wrong with basing a royalty on the price of the final product? A fixed portion of the price is not a perfect proxy for the value of embedded intellectual property, but it is a reasonable first approximation, much like retailers use fixed markups for products rather than optimizing the price of each SKU when the cost of individual determinations negates any benefits to doing so. The FTC’s main issue appears to be that the price of a smartphone reflects “many features in addition to the cellular connectivity and associated voice and text capabilities provided by early feature phones.” (Complaint, ¶ 26). This completely misses the point. What would the value of an iPhone be if it contained all of those “many features” but without the phone’s communication abilities? We have some idea, as Apple has for years marketed its iPod Touch for a quarter of the price of its iPhone line. Yet, “[f]or most users, the choice between an iPhone 5s and an iPod touch will be a no-brainer: Being always connected is one of the key reasons anyone owns a smartphone.”

What the FTC and proponents of the smallest component principle miss is that some of the value of all components of a smartphone is derived directly from the phone’s communication ability. Smartphones didn’t initially replace small portable cameras because they were better at photography (in fact, smartphone cameras were and often continue to be much worse than dedicated cameras). The value of a smartphone camera is that it combines picture taking with immediate sharing over text or through social media. Thus, contrary to the FTC’s claim that most of the value of a smartphone comes from features that are not communication, many features on a smartphone derive much of their value from the communication powers of the phone.

In the alternative, what the FTC wants is for the royalty not to reflect the value of the intellectual property but instead to be a small portion of the cost of some chipset—akin to an author of a paperback negotiating royalties based on the cost of plain white paper. As a matter of economics, a single chipset royalty cannot allow an innovator to capture the value of its innovation. This, in turn, implies that innovators underinvest in future technologies. As we have previously written:

For example, imagine that the same component (incorporating the same essential patent) is used to help stabilize flight of both commercial airplanes and toy airplanes. Clearly, these industries are likely to have different values for the patent. By negotiating over a single royalty rate based on the component price, the innovator would either fail to realize the added value of its patent to commercial airlines, or (in the case that the component is targeted primary to the commercial airlines) would not realize the incremental market potential from the patent’s use in toy airplanes. In either case, the innovator will not be negotiating over the entirety of the value it creates, leading to too little innovation.
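To put rough numbers on that intuition, here is a small illustrative sketch. Every figure below is invented for illustration and has no connection to the actual licensing terms at issue in the case:

```python
# Invented figures: one patented component, sold for $10, is used in two end products
# whose makers derive very different value from the patented feature.
value_of_patent = {"commercial airplane": 5000.0, "toy airplane": 1.0}  # value per unit sold

# Option 1: a single component-based royalty low enough that the toy maker still buys,
# i.e., no higher than the low-value user's willingness to pay.
uniform_royalty = min(value_of_patent.values())             # $1.00 per unit
captured_uniform = uniform_royalty * len(value_of_patent)   # $2.00 across one sale in each market

# Option 2: price for the high-value use only and forgo the toy market entirely.
captured_commercial_only = value_of_patent["commercial airplane"]  # $5,000, toy use lost

value_created = sum(value_of_patent.values())  # $5,001 if both uses are served
print(captured_uniform, captured_commercial_only, value_created)
```

Either way, the innovator ends up negotiating over only a fraction of the value its patent creates, which is the underinvestment concern the quoted passage describes.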

The role of economics

Modern antitrust practice is to use economic models to explain how one gets from the evidence presented in a case to an anticompetitive conclusion. As Froeb et al. have discussed, by laying out a mapping from the evidence to the effects, the legal argument is made clear, and gains credibility because it becomes falsifiable. The FTC complaint hypothesizes that “Qualcomm has excluded competitors and harmed competition through a set of interrelated policies and practices.” (Complaint, ¶ 3). Although Qualcomm explains how each of these policies and practices, by itself, has clear business justifications, the FTC claims that combining them leads to an anticompetitive outcome.

Without providing a formal mapping from the evidence to an effect, it becomes much more difficult for a court to determine whether the theory of harm is correct or how to weigh the evidence that feeds the conclusion. Without a model telling it “what matters, why it matters, and how much it matters,” it is much more difficult for a tribunal to evaluate the “interrelated policies and practices.” In previous work, we have modeled the bilateral bargaining between patentees and licensees and have shown that when bilateral patent contracts are subject to review by an antitrust court, bargaining in the shadow of such a court can reduce the incentive to invest and thereby reduce welfare.

Concluding policy thoughts

What the FTC makes sound nefarious seems like a simple policy: requiring companies to seek licenses to Qualcomm’s intellectual property independent of any hardware that those companies purchase, and basing the royalty for that intellectual property on (an admittedly crude measure of) the value the IP contributes to that product. High prices alone do not constitute harm to competition. The FTC must clearly explain why its complaint is not simply about the “fairness” of the outcome or its desire that Qualcomm employ different bargaining paradigms, but rather how Qualcomm’s behavior harms the process of competition.

In the late 1950s, Nobel Laureate Robert Solow attributed about seven-eighths of the growth in U.S. GDP to technical progress. As Solow later commented: “Adding a couple of tenths of a percentage point to the growth rate is an achievement that eventually dwarfs in welfare significance any of the standard goals of economic policy.” While he did not have antitrust in mind, the import of his comment is clear: whatever static gains antitrust litigation may achieve, they are likely dwarfed by the dynamic gains represented by innovation.

Patent law is designed to maintain a careful balance between the costs of short-term static losses and the benefits of long-term gains that result from new technology. The FTC should present a sound theoretical or empirical basis for believing that the proposed relief sufficiently rewards inventors and allows them to capture a reasonable share of the whole value their innovations bring to consumers, lest such antitrust intervention deter investments in innovation.

Writing in the New York Times, journalist E. Tammy Kim recently called for Seattle and other pricey, high-tech hubs to impose a special tax on Microsoft and other large employers of high-paid workers. Efficiency demands such a tax, she says, because those companies are imposing a negative externality: By driving up demand for housing, they are causing rents and home prices to rise, which adversely affects city residents.

Arguing that her proposal is “akin to a pollution tax,” Ms. Kim writes:

A half-century ago, it seemed inconceivable that factories, smelters or power plants should have to account for the toxins they released into the air.  But we have since accepted the idea that businesses should have to pay the public for the negative externalities they cause.

It is true that negative externalities—costs imposed on people who are “external” to the process creating those costs (as when a factory belches rancid smoke on its neighbors)—are often taxed. One justification for such a tax is fairness: It seems inequitable that one party would impose costs on another; justice may demand that the victimizer pay. The justification cited by the economist who first proposed such taxes, though, was something different. In his 1920 opus, The Economics of Welfare, British economist A.C. Pigou proposed taxing behavior involving negative externalities in order to achieve efficiency—an increase in overall social welfare.   

With respect to the proposed tax on Microsoft and other high-tech employers, the fairness argument seems a stretch, and the efficiency argument outright fails. Let’s consider each.

To achieve fairness by forcing a victimizer to pay for imposing costs on a victim, one must determine who is the victimizer. Ms. Kim’s view is that Microsoft and its high-paid employees are victimizing (imposing costs on) incumbent renters and lower-paid homebuyers. But is that so clear?

Microsoft’s desire to employ high-skilled workers, and those employees’ desire to live near their work, conflicts with incumbent renters’ desire for low rent and lower-paid homebuyers’ desire for cheaper home prices. If Microsoft got its way, incumbent renters and lower-paid homebuyers would be worse off.

But incumbent renters’ and lower-paid homebuyers’ insistence on low rents and home prices conflicts with the desires of Microsoft, the high-skilled workers it would like to hire, and local homeowners. If incumbent renters and lower-paid homebuyers got their way and prevented Microsoft from employing high-wage workers, Microsoft, its potential employees, and local homeowners would be worse off. Who is the victim here?

As Nobel laureate Ronald Coase famously observed, in most cases involving negative externalities, there is a reciprocal harm: Each party is a victim of the other party’s demands and a victimizer with respect to its own. When both parties are victimizing each other, it’s hard to “do justice” by taxing “the” victimizer.

A desire to achieve efficiency provides a sounder basis for many so-called Pigouvian taxes. With respect to Ms. Kim’s proposed tax, however, the efficiency justification fails. To see why that is so, first consider how it is that Pigouvian taxes may enhance social welfare.

When a business engages in some productive activity, it uses resources (labor, materials, etc.) to produce some sort of valuable output (e.g., a good or service). In determining what level of productive activity to engage in (e.g., how many hours to run the factory, etc.), it compares its cost of engaging in one more unit of activity to the added benefit (revenue) it will receive from doing so. If its so-called “marginal cost” from the additional activity is less than or equal to the “marginal benefit” it will receive, it will engage in the activity; otherwise, it won’t.  

When the business is bearing all the costs and benefits of its actions, this outcome is efficient. The cost of the inputs used in production is determined by the value they could generate in alternative uses. (For example, if a flidget producer could create $4 of value from an ounce of tin, a widget-maker would have to bid at least $4 to win that tin from the flidget-maker.) If a business finds that continued production generates additional revenue (reflective of consumers’ subjective valuation of the business’s additional product) in excess of its added cost (reflective of the value its inputs could create if deployed toward their next-best use), then producing more moves productive resources to their highest and best uses, enhancing social welfare. This outcome is “allocatively efficient,” meaning that productive resources have been allocated in a manner that wrings the greatest possible value from them.

Allocative efficiency may not result, though, if the producer is able to foist some of its costs onto others.  Suppose that it costs a producer $4.50 to make an additional widget that he could sell for $5.00. He’d make the widget. But what if producing the widget created pollution that imposed $1 of cost on the producer’s neighbors? In that case, it could be inefficient to produce the widget; the total marginal cost of doing so, $5.50, might well exceed the marginal benefit produced, which could be as low as $5.00. Negative externalities, then, may result in an allocative inefficiency—i.e., a use of resources that produces less total value than some alternative use.

Pigou’s idea was to use taxes to prevent such inefficiencies. If the government were to charge the producer a tax equal to the cost his activity imposed on others ($1 in the above example), then he would capture all the marginal benefit and bear all the marginal cost of his activity. He would thus be motivated to continue his activity only to the point at which its total marginal benefit equaled its total marginal cost. The point of a Pigouvian tax, then, is to achieve allocative efficiency—i.e., to channel productive resources toward their highest and best ends.
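The arithmetic of the widget example can be compressed into a few lines. This is only an illustrative sketch of the logic described above, using the same $4.50, $5.00, and $1.00 figures:

```python
# Illustrative only: the widget example above, with and without a Pigouvian tax.
price = 5.00            # marginal benefit to the producer (revenue per widget)
private_cost = 4.50     # marginal cost the producer actually bears
external_cost = 1.00    # pollution cost foisted onto the neighbors

def produces(tax: float) -> bool:
    """The producer makes the widget only if his own marginal benefit covers his own marginal cost."""
    return price >= private_cost + tax

# Without a tax, the producer ignores the $1.00 external cost and makes the widget,
# even though society as a whole may lose up to $0.50 on it.
assert produces(tax=0.0)
assert price < private_cost + external_cost  # total marginal cost exceeds the $5.00 benefit

# A tax equal to the external cost makes him internalize it, so he stops producing.
assert not produces(tax=external_cost)
```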

When it comes to the negative externality Ms. Kim has identified—an increase in housing prices occasioned by high-tech companies’ hiring of skilled workers—the efficiency case for a Pigouvian tax crumbles. That is because the external cost at issue here is a “pecuniary” externality, a special sort of externality that does not generate inefficiency.

A pecuniary externality is one where the adverse third-party effect consists of an increase in market prices. If that’s the case, the allocative inefficiency that may justify Pigouvian taxes does not exist. There’s no inefficiency from the mere fact that buyers pay more.  Their loss is perfectly offset by a gain to sellers, and—here’s the crucial part—the higher prices channel productive resources toward, not away from, their highest and best ends. High rent levels, for example, signal to real estate developers that more resources should be devoted to creating living spaces within the city. That’s allocatively efficient.

Now, it may well be the case that government policies thwart developers from responding to those salutary price signals. The cities that Ms. Kim says should impose a tax on high-tech employers—Seattle, San Francisco, Austin, New York, and Boulder—have some of the nation’s most restrictive real estate development rules. But that’s a government failure, not a market failure.

In the end, Ms. Kim’s pollution tax analogy fails. The efficiency case for a Pigouvian tax to remedy negative externalities does not apply when, as here, the externality at issue is pecuniary.

For more on pecuniary versus “technological” (non-pecuniary) externalities and appropriate responses thereto, check out Chapter 4 of my recent book, How to Regulate: A Guide for Policymakers.

One of the hottest topics in antitrust these days is institutional investors’ common ownership of the stock of competing firms. Large investment companies like BlackRock, Vanguard, State Street, and Fidelity offer index and actively managed mutual funds that are invested in thousands of companies. In many concentrated industries, these institutional investors are “intra-industry diversified,” meaning that they hold stakes in all the significant competitors within the industry.

Recent empirical studies (e.g., here and here) purport to show that this intra-industry diversification has led to a softening of competition in concentrated markets. The theory is that firm managers seek to maximize the profits of their largest and most powerful shareholders, all of which hold stakes in all the major firms in the market and therefore prefer maximization of industry, not firm-specific, profits. (For example, an investor that owns stock in all the airlines servicing a route would not want those airlines to engage in aggressive price competition to win business from each other. Additional sales to one airline would come at the expense of another, and prices—and thus profit margins—would be lower.)

The empirical studies on common ownership, which have received a great deal of attention in the academic literature and popular press and have inspired antitrust scholars to propose a number of policy solutions, have employed a complicated measurement known as “MHHI delta” (MHHI∆). MHHI∆ is a component of the “modified Herfindahl–Hirschman Index” (MHHI), which, as the name suggests, is an adaptation of the Herfindahl–Hirschman Index (HHI).

HHI, which ranges from near zero to 10,000 and is calculated by summing the squares of the market shares of the firms competing in a market, assesses the degree to which a market is concentrated and thus susceptible to collusion or oligopolistic coordination. MHHI endeavors to account for both market concentration (HHI) and the reduced competition incentives occasioned by common ownership of the firms within a market. MHHI∆ is the part of MHHI that accounts for common ownership incentives, so MHHI = HHI + MHHI∆.  (Notably, neither MHHI nor MHHI∆ is bounded by the 10,000 upper limit applicable to HHI.  At the end of this post, I offer an example of a market in which MHHI and MHHI∆ both exceed 10,000.)

In the leading common ownership study, which looked at the airline industry, the authors first calculated the MHHI∆ on each domestic airline route from 2001 to 2014. They then examined, for each route, how changes in the MHHI∆ over time correlated with changes in airfares on that route. To control for route-specific factors that might influence both fares and the MHHI∆, the authors ran a number of regressions. They concluded that common ownership of air carriers resulted in a 3%–7% increase in fares.

As should be apparent, it is difficult to understand the common ownership issue—the extent to which there is a competitive problem and the propriety of proposed solutions—without understanding MHHI∆. Unfortunately, the formula for the metric is extraordinarily complex. Posner et al. express it as follows:

MHHI∆ = Σj Σk≠j sj sk (Σi βij βik) / (Σi βij²)

Where:

  • βij is the fraction of shares in firm j controlled by investor i,
  • the shares are both cash flow and control shares (so control rights are assumed to be proportionate to the investor’s share of firm profits),
  • sj is the market share of firm j, and
  • the outer sums run over all ordered pairs of distinct firms j and k, while the inner sums run over all investors i.

The complexity of this formula is, for less technically oriented antitrusters (like me!), a barrier to entry into the common ownership debate.  In the paragraphs that follow, I attempt to lower that entry barrier by describing the overall process for determining MHHI∆, cataloguing the specific steps required to calculate the measure, and offering a concrete example.

Overview of the Process for Calculating MHHI∆

Determining the MHHI∆ for a market involves three primary tasks. The first is to assess, for each coupling of competing firms in the market (e.g., Southwest Airlines and United Airlines), the degree to which the investors in one of the competitors would prefer that it not attempt to win business from the other by lowering prices, etc. This assessment must be completed twice for each coupling. With the Southwest and United coupling, for example, one must assess both the degree to which United’s investors would prefer that the company not win business from Southwest and the degree to which Southwest’s investors would prefer that the company not win business from United. There will be different answers to those two questions if, for example, United has a significant shareholder who owns no Southwest stock (and thus wants United to win business from Southwest), but Southwest does not have a correspondingly significant shareholder who owns no United stock (and would thus want Southwest to win business from United).

Assessing the incentive of one firm, Firm J (to correspond to the formula above), to pull its competitive punches against another, Firm K, requires calculating a fraction that compares the interest of the first firm’s owners in “coupling” profits (the combined profits of J and K) to their interest in “own-firm” profits (J profits only). The numerator of that fraction is based on data from the coupling—i.e., the firm whose incentive to soften competition one is assessing (J) and the firm with which it is competing (K). The fraction’s denominator is based on data for the single firm whose competition-reduction incentive one is assessing (J). Specifically:

  • The numerator assesses the degree to which the firms in the coupling are commonly owned, such that their owners would not benefit from price-reducing, head-to-head competition and would instead prefer that the firms compete less vigorously so as to maximize coupling profits. To determine the numerator, then, one must examine all the investors who are invested in both firms; for each, multiply their ownership percentages in the two firms; and then sum those products for all investors with common ownership. (If an investor were invested in only one firm in the coupling, its ownership percentage would be multiplied by zero and would thus drop out; after all, an investor in only one of the firms has no interest in maximization of coupling versus own-firm profits.)
  • The denominator assesses the degree to which the investor base (weighted by control) of the firm whose competition-reduction incentive is under consideration (J) would prefer that it maximize its own profits, not the profits of the coupling. Determining the denominator requires summing the squares of the ownership percentages of investors in that firm. Squaring means that very small investors essentially drop out and that the denominator grows substantially with large ownership percentages by particular investors. Large ownership percentages suggest the presence of shareholders that are more likely able to influence management, whether those shareholders also own shares in the second company or not.

Having assessed, for each firm in a coupling, the incentive to soften competition with the other, one must proceed to the second primary task: weighing the significance of those firms’ incentives not to compete with each other in light of the coupling’s shares of the market. (The idea here is that if two small firms reduced competition with one another, the effect on overall market competition would be less significant than if two large firms held their competitive fire.) To determine the significance to the market of the two coupling members’ incentives to reduce competition with each other, one must multiply each of the two fractions determined above (in Task 1) times the product of the market shares of the two firms in the coupling. This will generate two “cross-MHHI deltas,” one for each of the two firms in the coupling (e.g., one cross-MHHI∆ for Southwest/United and another for United/Southwest).

The third and final task is to aggregate the effect of common ownership-induced competition-softening throughout the market as a whole by summing the softened competition metrics (i.e., two cross-MHHI deltas for each coupling of competitors within the market). If decimals were used to account for the firms’ market shares (e.g., if a 25% market share was denoted 0.25), the sum should be multiplied by 10,000.

Following is a detailed list of instructions for assessing the MHHI∆ for a market (assuming proportionate control—i.e., that investors’ control rights correspond to their shares of firm profits).

A Nine-Step Guide to Calculating the MHHI∆ for a Market

  1. List the firms participating in the market and the market share of each.
  2. List each investor’s ownership percentage of each firm in the market.
  3. List the potential pairings of firms whose incentives to compete with each other must be assessed. There will be two such pairings for each coupling of competitors in the market (e.g., Southwest/United and United/Southwest) because one must assess the incentive of each firm in the coupling to compete with the other, and that incentive may differ for the two firms (e.g., United may have less incentive to compete with Southwest than Southwest with United). This implies that the number of possible pairings will always be n(n-1), where n is the number of firms in the market.
  4. For each investor, perform the following for each pairing of firms: Multiply the investor’s percentage ownership of the two firms in each pairing (e.g., Institutional Investor 1’s percentage ownership in United * Institutional Investor 1’s percentage ownership in Southwest for the United/Southwest pairing).
  5. For each pairing, sum the amounts from Step 4 across all investors that are invested in both firms. (This will be the numerator in the fraction used in Step 7 to determine the pairing’s cross-MHHI∆.)
  6. For the first firm in each pairing (the one whose incentive to compete with the other is under consideration), sum the squares of the ownership percentages of that firm held by each investor. (This will be the denominator of the fraction used in Step 7 to determine the pairing’s cross-MHHI∆.)
  7. Figure the cross-MHHI∆ for each pairing of firms by doing the following: Multiply the market shares of the two firms, and then multiply the resulting product by a fraction consisting of the relevant numerator (from Step 5) divided by the relevant denominator (from Step 6).
  8. Add together the cross-MHHI∆s for each pairing of firms in the market.
  9. Multiply that amount by 10,000.
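To make the nine steps concrete, here is a minimal Python sketch of the same calculation (the function name, data structures, and inputs are hypothetical conveniences of my own, not anything drawn from the empirical literature; proportionate control is assumed, and market shares and ownership stakes are expressed as decimals):

```python
# Minimal sketch of the nine-step MHHI-delta calculation (proportionate
# control assumed). All names and data structures are illustrative only.

def mhhi_delta(shares, ownership):
    """shares: dict mapping firm -> market share (e.g., 0.30 for 30%).
    ownership: dict mapping investor -> {firm: ownership stake}."""
    firms = list(shares)
    total = 0.0
    for j in firms:                      # firm whose incentive is at issue
        for k in firms:                  # the other firm in the pairing
            if j == k:
                continue
            # Steps 4-5: numerator -- common-ownership overlap between j and k
            numerator = sum(inv.get(j, 0.0) * inv.get(k, 0.0)
                            for inv in ownership.values())
            # Step 6: denominator -- concentration of j's own investor base
            denominator = sum(inv.get(j, 0.0) ** 2
                              for inv in ownership.values())
            if denominator == 0:
                continue
            # Step 7: cross-MHHI delta for the (j, k) pairing
            total += shares[j] * shares[k] * numerator / denominator
    # Steps 8-9: sum across pairings and rescale to HHI "points"
    return total * 10_000
```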

I will now illustrate this nine-step process by working through a concrete example.

An Example

Suppose four airlines—American, Delta, Southwest, and United—service a particular market. American and Delta each have 30% of the market; Southwest and United each have a market share of 20%.

Five funds are invested in the market, and each holds stock in all four airlines. Fund 1 owns 1% of each airline’s stock. Fund 2 owns 2% of American and 1% of each of the others. Fund 3 owns 2% of Delta and 1% of each of the others. Fund 4 owns 2% of Southwest and 1% of each of the others. And Fund 5 owns 2% of United and 1% of each of the others. None of the airlines has any other significant stockholder.

Step 1: List firms and market shares.

  1. American – 30% market share
  2. Delta – 30% market share
  3. Southwest – 20% market share
  4. United – 20% market share

Step 2: List investors’ ownership percentages.

  1. Fund 1 – 1% of American, 1% of Delta, 1% of Southwest, 1% of United
  2. Fund 2 – 2% of American, 1% of Delta, 1% of Southwest, 1% of United
  3. Fund 3 – 1% of American, 2% of Delta, 1% of Southwest, 1% of United
  4. Fund 4 – 1% of American, 1% of Delta, 2% of Southwest, 1% of United
  5. Fund 5 – 1% of American, 1% of Delta, 1% of Southwest, 2% of United

Step 3: Catalogue potential competitive pairings.

  1. American-Delta (AD)
  2. American-Southwest (AS)
  3. American-United (AU)
  4. Delta-American (DA)
  5. Delta-Southwest (DS)
  6. Delta-United (DU)
  7. Southwest-American (SA)
  8. Southwest-Delta (SD)
  9. Southwest-United (SU)
  10. United-American (UA)
  11. United-Delta (UD)
  12. United-Southwest (US)

Steps 4 and 5: Figure the numerator for determining each cross-MHHI∆.

For the American/Delta pairing, the investor-by-investor products are .0001 (Fund 1) + .0002 (Fund 2) + .0002 (Fund 3) + .0001 (Fund 4) + .0001 (Fund 5), which sum to .0007. Because each pairing includes exactly two funds holding a 2% stake in one of its members (every other fund holds 1% of both), the numerator is .0007 for every pairing.

Step 6: Figure the denominator for determining each cross-MHHI∆.

For each airline, sum the squares of its investors’ stakes: (.02)² + (.01)² + (.01)² + (.01)² + (.01)² = .0004 + .0001 + .0001 + .0001 + .0001 = .0008. Because every airline has one 2% holder and four 1% holders, the denominator is .0008 for all four.

Steps 7 and 8: Determine cross-MHHI∆s for each potential pairing, and then sum all.

  1. AD: .09(.0007/.0008) = .07875
  2. AS: .06(.0007/.0008) = .0525
  3. AU: .06(.0007/.0008) = .0525
  4. DA: .09(.0007/.0008) = .07875
  5. DS: .06(.0007/.0008) = .0525
  6. DU: .06(.0007/.0008) = .0525
  7. SA: .06(.0007/.0008) = .0525
  8. SD: .06(.0007/.0008) = .0525
  9. SU: .04(.0007/.0008) = .035
  10. UA: .06(.0007/.0008) = .0525
  11. UD: .06(.0007/.0008) = .0525
  12. US: .04(.0007/.0008) = .035
    SUM = .6475

Step 9: Multiply by 10,000.

MHHI∆ = 6475.

(NOTE: HHI in this market would total (30)(30) + (30)(30) + (20)(20) + (20)(20) = 2600. MHHI would total 9075.)
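As a cross-check, feeding this example into the mhhi_delta sketch above (with the airlines and funds as dictionary keys) reproduces the same figure:

```python
shares = {"American": 0.30, "Delta": 0.30, "Southwest": 0.20, "United": 0.20}

# Each fund owns 1% of every airline, except for a 2% stake in "its" airline.
ownership = {
    "Fund 1": {"American": 0.01, "Delta": 0.01, "Southwest": 0.01, "United": 0.01},
    "Fund 2": {"American": 0.02, "Delta": 0.01, "Southwest": 0.01, "United": 0.01},
    "Fund 3": {"American": 0.01, "Delta": 0.02, "Southwest": 0.01, "United": 0.01},
    "Fund 4": {"American": 0.01, "Delta": 0.01, "Southwest": 0.02, "United": 0.01},
    "Fund 5": {"American": 0.01, "Delta": 0.01, "Southwest": 0.01, "United": 0.02},
}

print(mhhi_delta(shares, ownership))  # approximately 6475 (floating-point rounding aside)
```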

***

I mentioned earlier that neither MHHI nor MHHI∆ is subject to an upper limit of 10,000. For example, if there are four firms in a market, five institutional investors that each own 5% of the first three firms and 1% of the fourth, and no other investors holding significant stakes in any of the firms, MHHI∆ will be 15,500 and MHHI 18,000.  (Hat tip to Steve Salop, who helped create the MHHI metric, for reminding me to point out that MHHI and MHHI∆ are not limited to 10,000.)

Today the European Commission launched its latest salvo against Google, issuing a decision in its three-year antitrust investigation into the company’s agreements for distribution of the Android mobile operating system. The massive fine levied by the Commission will dominate the headlines, but the underlying legal theory and proposed remedies are just as notable — and just as problematic.

The nirvana fallacy

It is sometimes said that the most important question in all of economics is “compared to what?” UCLA economist Harold Demsetz — one of the most important regulatory economists of the past century — coined the term “nirvana fallacy” to critique would-be regulators’ tendency to compare messy, real-world economic circumstances to idealized alternatives, and to justify policies on the basis of the discrepancy between them. Wishful thinking, in other words.

The Commission’s Android decision falls prey to the nirvana fallacy. It conjures a world in which Google offers its Android operating system on unrealistic terms, prohibits the company from doing otherwise, and neglects the actual consequences of such a demand.

The idea at the core of the Commission’s decision is that by making its own services (especially Google Search and Google Play Store) easier to access than competing services on Android devices, Google has effectively foreclosed rivals from effective competition. In order to correct that claimed defect, the Commission demands that Google refrain from engaging in practices that favor its own products in its Android licensing agreements:

At a minimum, Google has to stop and to not re-engage in any of the three types of practices. The decision also requires Google to refrain from any measure that has the same or an equivalent object or effect as these practices.

The basic theory is straightforward enough, but its application here reflects a troubling departure from the underlying economics and a romanticized embrace of industrial policy that is unsupported by the realities of the market.

In a recent interview, European Commission competition chief, Margrethe Vestager, offered a revealing insight into her thinking about her oversight of digital platforms, and perhaps the economy in general: “My concern is more about whether we get the right choices,” she said. Asked about Facebook, for example, she specified exactly what she thinks the “right” choice looks like: “I would like to have a Facebook in which I pay a fee each month, but I would have no tracking and advertising and the full benefits of privacy.”

Some consumers may well be sympathetic with her preference (and even share her specific vision of what Facebook should offer them). But what if competition doesn’t result in our — or, more to the point, Margrethe Vestager’s — preferred outcomes? Should competition policy nevertheless enact the idiosyncratic consumer preferences of a particular regulator? What if offering consumers the “right” choices comes at the expense of other things they value, like innovation, product quality, or price? And, if so, can antitrust enforcers actually engineer a better world built around these preferences?

Android’s alleged foreclosure… that doesn’t really foreclose anything

The Commission’s primary concern is with the terms of Google’s deal: In exchange for royalty-free access to Android and a set of core, Android-specific applications and services (like Google Search and Google Maps), Google imposes a few contractual conditions.

Google allows manufacturers to use the Android platform — in which the company has invested (and continues to invest) billions of dollars — for free. It does not require device makers to include any of its core, Google-branded features. But if a manufacturer does decide to use any of them, it must include all of them, and make Google Search the device default. In another (much smaller) set of agreements, Google also offers device makers a small share of its revenue from Search if they agree to pre-install only Google Search on their devices (although users remain free to download and install any competing services they wish).

Essentially, that’s it. Google doesn’t allow device makers to pick and choose between parts of the ecosystem of Google products, free-riding on Google’s brand and investments. But manufacturers are free to use the Android platform and to develop their own competing brand built upon Google’s technology.

Other apps may be installed in addition to Google’s core apps. Google Search need not be the exclusive search service, but it must be offered out of the box as the default. Google Play and Chrome must be made available to users, but other app stores and browsers may be pre-installed and even offered as the default. And device makers who choose to do so may share in Search revenue by pre-installing Google Search exclusively — but users can and do install a different search service.

Alternatives to all of Google’s services (including Search) abound on the Android platform. It’s trivial both to install them and to set them as the default. Meanwhile, device makers regularly choose to offer these apps alongside Google’s services, and some, like Samsung, have developed entire customized app suites of their own. Still others, like Amazon, pre-install no Google apps and use Android without any of these constraints; Amazon’s Google-free tablets are regularly ranked among the best-rated and most popular in Europe.

By contrast, Apple bundles its operating system with its devices, bypasses third-party device makers entirely, and offers consumers access to its operating system only if they pay (lavishly) for one of the very limited number of devices the company offers. It is perhaps not surprising — although it is enlightening — that Apple earns more revenue in an average quarter from iPhone sales than Google is reported to have earned in total from Android since it began offering it in 2008.

Reality — and the limits it imposes on efforts to manufacture nirvana

The logic behind Google’s approach to Android is obvious: It is the extension of Google’s “advertisers pay” platform strategy to mobile. Rather than charging device makers (and thus consumers) directly for its services, Google earns its revenue by charging advertisers for targeted access to users via Search. Remove Search from mobile devices and you remove the mechanism by which Google gets paid.

It’s true that most device makers opt to offer Google’s suite of services to European users, and that most users opt to keep Google Search as the default on their devices — that is, indeed, the hoped-for effect, and necessary to ensure that Google earns a return on its investment.

That users often choose to keep using Google services instead of installing alternatives, and that device makers typically choose to engineer their products around the Google ecosystem, isn’t primarily the result of a Google-imposed mandate; it’s the result of consumer preferences for Google’s offerings in lieu of readily available alternatives.

The EU decision against Google appears to imagine a world in which Google will continue to develop Android and allow device makers to use the platform and Google’s services for free, even if the likelihood of recouping its investment is diminished.

The Commission also assessed in detail Google’s arguments that the tying of the Google Search app and Chrome browser were necessary, in particular to allow Google to monetise its investment in Android, and concluded that these arguments were not well founded. Google achieves billions of dollars in annual revenues with the Google Play Store alone, it collects a lot of data that is valuable to Google’s search and advertising business from Android devices, and it would still have benefitted from a significant stream of revenue from search advertising without the restrictions.

For the Commission, Google’s earned enough [trust me: you should follow the link. It’s my favorite joke…].

But that world in which Google won’t alter its investment decisions based on a government-mandated reduction in its allowable return on investment doesn’t exist; it’s a fanciful Nirvana.

Google’s real alternatives to the status quo are charging for the use of Android, closing the Android platform and distributing it (like Apple) only on a fully integrated basis, or discontinuing Android.

In reality, and compared to these actual alternatives, Google’s restrictions are trivial. Remember, Google doesn’t insist that Google Search be exclusive, only that it benefit from a “leg up” by being pre-installed as the default. And on this thin reed Google finances the development and maintenance of the (free) Android operating system and all of the other (free) apps from which Google otherwise earns little or no revenue.

It’s hard to see how consumers, device makers, or app developers would be made better off without Google’s restrictions, at least not in the real world, in which the alternative is one of the three manifestly less desirable options mentioned above.

Missing the real competition for the trees

What’s more, while ostensibly aimed at increasing competition, the Commission’s proposed remedy — like the conduct it addresses — doesn’t relate to Google’s most significant competitors at all.

Facebook, Instagram, Firefox, Amazon, Spotify, Yelp, and Yahoo, among many others, are some of the most popular apps on Android phones, including in Europe. They aren’t foreclosed by Google’s Android distribution terms, and it’s even hard to imagine that they would be more popular if only Android phones didn’t come with, say, Google Search pre-installed.

It’s a strange anticompetitive story that has Google allegedly foreclosing insignificant competitors while apparently ignoring its most substantial threats.

The primary challenges Google now faces are from Facebook drawing away the most valuable advertising and Amazon drawing away the most valuable product searches (and increasingly advertising, as well). The fact that Google’s challenged conduct has never shifted in order to target these competitors as their threat emerged, and has had no apparent effect on these competitive dynamics, says all one needs to know about the merits of the Commission’s decision and the value of its proposed remedy.

In reality, as Demsetz suggested, Nirvana cannot be designed by politicians, especially in complex, modern technology markets. Consumers’ best hope for something close — continued innovation, low prices, and voluminous choice — lies in the evolution of markets spurred by consumer demand, not regulators’ efforts to engineer them.

Our story begins on the morning of January 9, 2007. Few people knew it at the time, but the world of wireless communications was about to change forever. Steve Jobs walked on stage wearing his usual turtleneck, and proceeded to reveal the iPhone. The rest, as they say, is history. The iPhone moved the wireless communications industry towards a new paradigm. No more physical keyboards, clamshell bodies, and protruding antennae. All of these were replaced by a beautiful black design, a huge touchscreen (3.5” was big for that time), a rear-facing camera, and (a little bit later) a revolutionary new way to consume applications: the App Store. Sales soared and Apple’s stock started an upward trajectory that would see it become one of the world’s most valuable companies.

The story could very well have ended there. If it had, we might all be using iPhones today. However, years before, Google had commenced its own march into the wireless communications space by purchasing a small startup called Android. A first phone had initially been slated for release in late 2007. But Apple’s iPhone announcement sent Google back to the drawing board. It took Google and its partners until 2010 to come up with a competitive answer – the Google Nexus One produced by HTC.

Understanding the strategy that Google put in place during this three-year timespan is essential to understanding the European Commission’s Google Android decision.

How to beat one of the great innovations?

In order to overthrow — or even merely compete with — the iPhone, Google faced the same dilemma that most second movers have to contend with: imitate or differentiate. Its solution was a mix of both. It took the touchscreen, camera, and applications, but departed in one key respect. Whereas Apple controls the iPhone from end to end, Google opted for a licensed, open-source operating system that substitutes a more decentralized approach for Apple’s so-called “walled garden.”

Google and a number of partners founded the Open Handset Alliance (“OHA”) in November 2007. This loose association of network operators, software companies and handset manufacturers became the driving force behind the Android OS. Through the OHA, Google and its partners have worked to develop minimal specifications for OHA-compliant Android devices in order to ensure that all levels of the device ecosystem — from device makers to app developers — function well together. As its initial press release boasts, through the OHA:

Handset manufacturers and wireless operators will be free to customize Android in order to bring to market innovative new products faster and at a much lower cost. Developers will have complete access to handset capabilities and tools that will enable them to build more compelling and user-friendly services, bringing the Internet developer model to the mobile space. And consumers worldwide will have access to less expensive mobile devices that feature more compelling services, rich Internet applications and easier-to-use interfaces — ultimately creating a superior mobile experience.

The open source route has a number of advantages — notably the improved division of labor — but it is not without challenges. One key difficulty lies in coordinating and incentivizing the dozens of firms that make up the alliance. Google must not only keep the diverse Android ecosystem directed toward a common, compatible goal, it also has to monetize a product that, by its very nature, is given away free of charge. It is Google’s answers to these two problems that set off the Commission’s investigation.

The first problem is a direct consequence of Android’s decentralization. Whereas there are only a small number of iPhones (the couple of models which Apple markets at any given time) running the same operating system, Android comes in a jaw-dropping array of flavors. Some devices are produced by Google itself; others are the fruit of high-end manufacturers such as Samsung and LG; there are also so-called “flagship killers” like OnePlus and budget phones from the likes of Motorola and Honor (one of Huawei’s brands). The differences don’t stop there. Manufacturers like Samsung, Xiaomi, and LG (to name but a few) have tinkered with the basic Android setup. Samsung phones heavily incorporate its Bixby virtual assistant, while Xiaomi packs in a novel user interface. The upshot is that the Android marketplace is tremendously diverse.

Managing this variety is challenging, to say the least (preventing projects from unravelling into a myriad of forks is always an issue for open source projects). Google and the OHA have come up with an elegant solution. The alliance penalizes so-called “incompatible” devices — that is, handsets whose software or hardware stray too far from a predetermined series of specifications. When this is the case, Google may refuse to license its proprietary applications (most notably the Play Store). This minimum level of uniformity ensures that apps will run smoothly on all devices. It also provides users with a consistent experience (thereby protecting the Android brand) and reduces the cost of developing applications for Android. Unsurprisingly, Android developers have lauded these “anti-fragmentation” measures, branding the Commission’s case a disaster.

A second important problem stems from the fact that the Android OS is an open source project. Device manufacturers can thus license the software free of charge. This is no small advantage. It shaves precious dollars from the price of Android smartphones, thus opening up the budget end of the market. Although there are numerous factors at play, it should be noted that a top-of-the-range Samsung Galaxy S9+ is roughly 30% cheaper ($819) than its Apple counterpart, the iPhone X ($1,165).

Offering a competitive operating system free of charge might provide a fantastic deal for consumers, but it poses obvious business challenges. How can Google and other members of the OHA earn a return on the significant amounts of money poured into developing, improving, and marketing the Android OS and Android devices? As is often the case with open source projects, they essentially rely on complementarities. Google produces the Android OS in the hope that it will boost users’ consumption of its profitable, ad-supported services (Google Search in particular). This is sometimes referred to as a loss leader or complementary goods strategy.

Google uses two important sets of contractual provisions to cement this loss leader strategy. First, it seemingly bundles a number of proprietary applications together. Manufacturers must pre-load the Google Search and Chrome apps in order to obtain the Play Store app (the lynchpin on which the Android ecosystem sits). Second, Google has concluded a number of “revenue sharing” deals with manufacturers and network operators. These companies receive monetary compensation when Google Search is displayed prominently on a user’s home screen. In effect, they are receiving a cut of the marginal revenue that the use of this search bar generates for Google. Both of these measures ultimately nudge users — but do not force them, as neither prevents users from installing competing apps — into using Google’s most profitable services.

Readers would be forgiven for thinking that this is a win-win situation. Users get a competitive product free of charge, while Google and other members of the OHA earn enough money to compete against Apple.

The Commission is of another mind, however.

Commission’s hubris

The European Commission believes that Google is hurting competition. Though the text of the decision is not yet available, the thrust of its argument is that Google’s anti-fragmentation measures prevent software developers from launching competing OSs, while the bundling and revenue sharing both thwart rival search engines.

This analysis runs counter to some rather obvious facts:

  • For a start, the Android ecosystem is vibrant. Numerous firms have launched forked versions of Android, both with and without Google’s apps. Amazon’s Fire line of devices is a notable example.
  • Second, although Google’s behavior does have an effect on the search engine market, there is nothing anticompetitive about it. Yahoo could very well have avoided its high-profile failure if, way back in 2005, it had understood the importance of the mobile internet. At the time, it still had a 30% market share, compared to Google’s 36%. Firms that fail to seize upon business opportunities will fall out of the market. This is not a bug; it is possibly the most important feature of market economies. It reveals the products that consumers prefer and stops resources from being allocated to less valuable propositions.
  • Last but not least, Google’s behavior does not prevent other search engines from placing their own search bars or virtual assistants on smartphones. This is essentially what Samsung has done by ditching Google’s assistant in favor of its Bixby service. In other words, Google is merely competing with other firms to place key apps on or near the home screen of devices.

Even if the Commission’s reasoning were somehow correct, the competition watchdog is using a sledgehammer to crack a nut. The potential repercussions for Android, the software industry, and European competition law are great:

  • For a start, the Commission risks significantly weakening Android’s competitive position relative to Apple. Android is a complex ecosystem. The idea that it is possible to bring incremental changes to its strategy without threatening the viability of the whole is a sign of the Commission’s hubris.
  • More broadly, the harsh treatment of Google could have significant incentive effects for other tech platforms. As others have already pointed out, the Commission’s decision rests on the idea that dominant firms should not be allowed to favor their own services compared to those of rivals. Taken at face value, this anti-discrimination policy will push firms to design closed platforms. If rivals are excluded from the very start, there is no one against whom to discriminate, and antitrust watchdogs are thus kept at bay (which is why the Commission is acting against Google’s marginal preference for its own services, rather than Apple’s far-more-substantial preferencing of its own services). Moving to a world of only walled gardens might harm users and innovators alike.

Over the next couple of days and weeks, many will jump to the Commission’s defense. They will see its action as a necessary step against the abstract “power” of Silicon Valley’s tech giants. Rivals will feel vindicated. But when all is done and dusted, there seems to be little doubt that the decision is misguided. The Commission will have struck a blow to the heart of the most competitive offering in the smartphone space. And consumers will be the biggest losers.

This is not what the competition laws were intended to achieve.

Today would have been Henry Manne’s 90th birthday. When he passed away in 2015 he left behind an immense and impressive legacy. In 1991, at the inaugural meeting of the American Law & Economics Association (ALEA), Manne was named a Life Member of ALEA and, along with Nobel Laureate Ronald Coase and federal appeals court judges Richard Posner and Guido Calabresi, one of the four Founders of Law and Economics. The organization I founded, the International Center for Law & Economics, is dedicated to his memory, along with that of his great friend and mentor, UCLA economist Armen Alchian.

Manne is best known for his work in corporate governance and securities law and regulation, of course. But sometimes forgotten is that his work on the market for corporate control was motivated by concerns about analytical flaws in merger enforcement. As former FTC commissioners Maureen Ohlhausen and Joshua Wright noted in a 2015 dissenting statement:

The notion that the threat of takeover would induce current managers to improve firm performance to the benefit of shareholders was first developed by Henry Manne. Manne’s pathbreaking work on the market for corporate control arose out of a concern that antitrust constraints on horizontal mergers would distort its functioning. See Henry G. Manne, Mergers and the Market for Corporate Control, 73 J. POL. ECON. 110 (1965).

But Manne’s focus on antitrust didn’t end in 1965. Moreover, throughout his life he was a staunch critic of misguided efforts to expand the power of government, especially when these efforts claimed to have their roots in economic reasoning — which, invariably, was hopelessly flawed. As his obituary notes:

In his teaching, his academic writing, his frequent op-eds and essays, and his work with organizations like the Cato Institute, the Liberty Fund, the Institute for Humane Studies, and the Mont Pèlerin Society, among others, Manne advocated tirelessly for a clearer understanding of the power of markets and competition and the importance of limited government and economically sensible regulation.

Thus it came to be, in 1974, that Manne was called to testify before the Senate Judiciary Committee, Subcommittee on Antitrust and Monopoly, on Michigan Senator Philip A. Hart’s proposed Industrial Reorganization Act. His testimony is a tour de force, and a prescient rejoinder to the faddish advocates of today’s “hipster antitrust”— many of whom hearken longingly back to the antitrust of the 1960s and its misguided “gurus.”

Henry Manne’s trenchant testimony critiquing the Industrial Reorganization Act and its (ostensible) underpinnings is reprinted in full in this newly released ICLE white paper (with introductory material by Geoffrey Manne):

Henry G. Manne: Testimony on the Proposed Industrial Reorganization Act of 1973 — What’s Hip (in Antitrust) Today Should Stay Passé

Sen. Hart proposed the Industrial Reorganization Act in order to address perceived problems arising from industrial concentration. The bill was rooted in the belief that industry concentration led inexorably to monopoly power; that monopoly power, however obtained, posed an inexorable threat to freedom and prosperity; and that the antitrust laws (i.e., the Sherman and Clayton Acts) were insufficient to address the purported problems.

That sentiment — rooted in the reflexive application of the largely discredited structure-conduct-performance (SCP) paradigm — had already become largely passé among economists in the 70s, but it has resurfaced today as the asserted justification for similar (although less onerous) antitrust reform legislation and the general approach to antitrust analysis commonly known as “hipster antitrust.”

The critiques leveled against the asserted economic underpinnings of efforts like the Industrial Reorganization Act are as relevant today as they were then. As Henry Manne notes in his testimony:

To be successful in this stated aim [“getting the government out of the market”] the following dreams would have to come true: The members of both the special commission and the court established by the bill would have to be satisfied merely to complete their assigned task and then abdicate their tremendous power and authority; they would have to know how to satisfactorily define and identify the limits of the industries to be restructured; the Government’s regulation would not sacrifice significant efficiencies or economies of scale; and the incentive for new firms to enter an industry would not be diminished by the threat of a punitive response to success.

The lessons of history, economic theory, and practical politics argue overwhelmingly against every one of these assumptions.

Both the subject matter of and impetus for the proposed bill (as well as Manne’s testimony explaining its economic and political failings) are eerily familiar. The preamble to the Industrial Reorganization Act asserts that

competition… preserves a democratic society, and provides an opportunity for a more equitable distribution of wealth while avoiding the undue concentration of economic, social, and political power; [and] the decline of competition in industries with oligopoly or monopoly power has contributed to unemployment, inflation, inefficiency, an underutilization of economic capacity, and the decline of exports….

The echoes in today’s efforts to rein in corporate power by adopting structural presumptions are unmistakable. Compare, for example, this language from Sen. Klobuchar’s Consolidation Prevention and Competition Promotion Act of 2017:

[C]oncentration that leads to market power and anticompetitive conduct makes it more difficult for people in the United States to start their own businesses, depresses wages, and increases economic inequality;

undue market concentration also contributes to the consolidation of political power, undermining the health of democracy in the United States; [and]

the anticompetitive effects of market power created by concentration include higher prices, lower quality, significantly less choice, reduced innovation, foreclosure of competitors, increased entry barriers, and monopsony power.

Remarkably, Sen. Hart introduced his bill as “an alternative to government regulation and control.” Somehow, it was the antithesis of “government control” to introduce legislation that, in Sen. Hart’s words,

involves changing the life styles of many of our largest corporations, even to the point of restructuring whole industries. It involves positive government action, not to control industry but to restore competition and freedom of enterprise in the economy

Like today’s advocates of increased government intervention to design the structure of the economy, Sen. Hart sought — without a trace of irony — to “cure” the problem of politicized, ineffective enforcement by doubling down on the power of the enforcers.

Henry Manne was having none of it. As he pointedly notes in his testimony, the worst problems of monopoly power are of the government’s own making. The real threat to democracy, freedom, and prosperity is at least as much the political power amassed in the bureaucratic apparatus that frequently confers monopoly as the monopoly power it spawns:

[I]t takes two to make that bargain [political protection and subsidies in exchange for lobbying]. And as we look around at various industries we are constrained to ask who has not done this. And more to the point, who has not succeeded?

It is unhappily almost impossible to name a significant industry in the United States that has not gained some degree of protection from the rigors of competition from Federal, State or local governments.

* * *

But the solution to inefficiencies created by Government controls cannot lie in still more controls. The politically responsible task ahead for Congress is to dismantle our existing regulatory monster before it strangles us.

We have spawned a gigantic bureaucracy whose own political power threatens the democratic legitimacy of government.

We are rapidly moving toward the worst features of a centrally planned economy with none of the redeeming political, economic, or ethical features usually claimed for such systems.

The new white paper includes Manne’s testimony in full, including his exchange with Sen. Hart and committee staffers following his prepared remarks.

It is, sadly, nearly as germane today as it was then.

One final note: The subtitle for the paper is a reference to the song “What Is Hip?” by Tower of Power. Its lyrics are decidedly apt:

You done went and found you a guru,

In your effort to find you a new you,

And maybe even managed

To raise your conscious level.

While you’re striving to find the right road,

There’s one thing you should know:

What’s hip today

Might become passé.

— Tower of Power, What Is Hip? (Emilio Castillo, John David Garibaldi & Stephen M. Kupka, What Is Hip? (Bob-A-Lew Songs 1973), from the album TOWER OF POWER (Warner Bros. 1973))

And here’s the song, in all its glory: