President Joe Biden named his post-COVID-19 agenda “Build Back Better,” but his proposals to prioritize support for government-run broadband service “with less pressure to turn profits” and to “reduce Internet prices for all Americans” will slow broadband deployment and leave taxpayers with an enormous bill.

Policymakers should pay particular heed to this danger, amid news that the Senate is moving forward with a $1.2 trillion bipartisan infrastructure package, and that the Federal Communications Commission, the U.S. Commerce Department’s National Telecommunications and Information Administration, and the U.S. Agriculture Department’s Rural Utilities Service will coordinate on spending broadband subsidy dollars.

In order to ensure that broadband subsidies lead to greater buildout and adoption, policymakers must correctly understand the state of competition in broadband and not assume that increasing the number of firms in a market will necessarily lead to better outcomes for consumers or the public.

A recent white paper published by us here at the International Center for Law & Economics makes the case that concentration is a poor predictor of competitiveness, while offering alternative policies for reaching Americans who don’t have access to high-speed Internet service.

The data show that the state of competition in broadband is generally healthy. ISPs routinely invest billions of dollars per year in building, maintaining, and upgrading their networks to be faster, more reliable, and more available to consumers. FCC data show that average speeds available to consumers, as well as the number of competitors providing higher-speed tiers, have increased each year. And prices for broadband, as measured by price-per-Mbps, have fallen precipitously, dropping 98% over the last 20 years. None of this would make sense if the facile narrative about the absence of competition were true.

In our paper, we argue that the real public policy issue for broadband isn’t curbing the pursuit of profits or adopting price controls, but making sure Americans have broadband access and encouraging adoption. In areas where it is very costly to build out broadband networks, like rural areas, there tend to be fewer firms in the market. But having only one or two ISPs available is far less of a problem than having none at all. Understanding the underlying market conditions and how subsidies can both help and hurt the availability and adoption of broadband is an important prerequisite to good policy.

The basic problem is that those who have decried the lack of competition in broadband often look at the number of ISPs in a given market to determine whether a market is competitive. But this is not how economists think of competition. Instead, economists look at competition as a dynamic process where changes in supply and demand factors are constantly pushing the market toward new equilibria.

In general, where a market is “contestable”—that is, where existing firms face potential competition from the threat of new entry—even just a single existing firm may have to act as if it faces vigorous competition. Such markets often have characteristics (e.g., price, quality, and level of innovation) similar or even identical to those with multiple existing competitors. This dynamic competition, driven by changes in technology or consumer preferences, ensures that such markets are regularly disrupted by innovative products and services—a process that does not always favor incumbents.

Proposals focused on increasing the number of firms providing broadband can actually reduce consumer welfare. Whether through overbuilding—by allowing new private entrants to free-ride on the initial investment by incumbent companies—or by going into the Internet business itself through municipal broadband, government subsidies can increase the number of firms providing broadband. But they can’t do so without costs—which include not just the cost of the subsidies themselves, which ultimately come from taxpayers, but also the reduced incentives for unsubsidized private firms to build out broadband in the first place.

If underlying supply and demand conditions in rural areas lead to a situation where only one provider can profitably exist, artificially adding another completely reliant on subsidies will likely just lead to the exit of the unsubsidized provider. Or, where a community already has municipal broadband, it is unlikely that a private ISP will want to enter and compete with a firm that doesn’t have to turn a profit.

A much better alternative for policymakers is to increase the demand for buildout through targeted user subsidies, while reducing regulatory barriers to entry that limit supply.

For instance, policymakers should consider offering connectivity vouchers to unserved households in order to stimulate broadband deployment and consumption. Current subsidy programs rely largely on subsidizing the supply side, but this requires the government to determine the who and where of entry. Connectivity vouchers would put the choice in the hands of consumers, while encouraging more buildout to areas that may currently be uneconomic to reach due to low population density or insufficient demand due to low adoption rates.

Local governments could also facilitate broadband buildout by reducing unnecessary regulatory barriers. Local building codes could adopt more connection-friendly standards. Local governments could also reduce the cost of access to existing poles and other infrastructure. Eligible Telecommunications Carrier (ETC) requirements could also be eliminated, because they deter potential providers from seeking funds for buildout (and don’t offer countervailing benefits).

Albert Einstein once said: “If I were given one hour to save the planet, I would spend 59 minutes defining the problem, and one minute resolving it.” When it comes to encouraging broadband buildout, policymakers should make sure they are solving the right problem. The problem is that the cost of building out broadband to unserved areas is too high or the demand too low—not that there are too few competitors.

On November 22, the FTC filed its answering brief in the FTC v. Qualcomm litigation. As we’ve noted before, it has always seemed a little odd that the current FTC is so vigorously pursuing this case, given some of the precedents it might set and the Commission majority’s apparent views on such issues. But this may also help explain why the FTC has now opted to eschew the district court’s decision and pursue a novel, but ultimately baseless, legal theory in its brief.

The FTC’s decision to abandon the district court’s reasoning constitutes an important admission: contrary to the district court’s finding, there is no legal basis to find an antitrust duty to deal in this case. As Qualcomm stated in its reply brief (p. 12), “the FTC disclaims huge portions of the decision.” In its effort to try to salvage its case, however, the FTC reveals just how bad its arguments have been from the start, and why the case should be tossed out on its ear.

What the FTC now argues

The FTC’s new theory is that SEP holders that fail to honor their FRAND licensing commitments should be held liable under “traditional Section 2 standards,” even though they do not have an antitrust duty to deal with rivals who are members of the same standard-setting organizations (SSOs) under the “heightened” standard laid out by the Supreme Court in Aspen and Trinko:  

To be clear, the FTC does not contend that any breach of a FRAND commitment is a Sherman Act violation. But Section 2 liability is appropriate when, as here, a monopolist SEP holder commits to license its rivals on FRAND terms, and then implements a blanket policy of refusing to license those rivals on any terms, with the effect of substantially contributing to the acquisition or maintenance of monopoly power in the relevant market…. 

The FTC does not argue that Qualcomm had a duty to deal with its rivals under the Aspen/Trinko standard. But that heightened standard does not apply here, because—unlike the defendants in Aspen, Trinko, and the other duty-to-deal precedents on which it relies—Qualcomm entered into a voluntary contractual commitment to deal with its rivals as part of the SSO process, which is itself a derogation from normal market competition. And although the district court applied a different approach, this Court “may affirm on any ground finding support in the record.” Cigna Prop. & Cas. Ins. Co. v. Polaris Pictures Corp., 159 F.3d 412, 418-19 (9th Cir. 1998) (internal quotation marks omitted) (emphasis added) (pp.69-70).

In other words, according to the FTC, because Qualcomm engaged in the SSO process—which is itself “a derogation from normal market competition”—its evasion of the constraints of that process (i.e., the obligation to deal with all comers on FRAND terms) is “anticompetitive under traditional Section 2 standards.”

The most significant problem with this new standard is not that it deviates from the basis upon which the district court found Qualcomm liable; it’s that it is entirely made up and has no basis in law.

Absent an antitrust duty to deal, patent law grants patentees the right to exclude rivals from using patented technology

Part of the bundle of rights connected with the property right in patents is the right to exclude, and along with it, the right of a patent holder to decide whether, and on what terms, to sell licenses to rivals. The law curbs that right only in select circumstances. Under antitrust law, such a duty to deal, in the words of the Supreme Court in Trinko, “is at or near the outer boundary of §2 liability.” The district court’s ruling, however, is based on the presumption of harm arising from an SEP holder’s refusal to license, rather than an actual finding of anticompetitive effect under §2. The duty to deal it finds imposes upon patent holders an antitrust obligation to license their patents to competitors. (While, of course, participation in an SSO may contractually obligate an SEP holder to license its patents to competitors, that is an entirely different issue from whether it operates under a mandatory requirement to do so as a matter of public policy.)

The right of patentees to exclude is well-established, and injunctions enforcing that right are regularly issued by courts. Although the rate of permanent injunctions has decreased since the Supreme Court’s eBay decision, research has found that federal district courts still grant them over 70% of the time after a patent holder prevails on the merits. And for patent litigation involving competitors, the same research finds that injunctions are granted 85% of the time.  In principle, even SEP holders can receive injunctions when infringers do not act in good faith in FRAND negotiations. See Microsoft Corp. v. Motorola, Inc., 795 F.3d 1024, 1049 n.19 (9th Cir. 2015):

We agree with the Federal Circuit that a RAND commitment does not always preclude an injunctive action to enforce the SEP. For example, if an infringer refused to accept an offer on RAND terms, seeking injunctive relief could be consistent with the RAND agreement, even where the commitment limits recourse to litigation. See Apple Inc., 757 F.3d at 1331–32

Aside from the FTC, federal agencies largely agree with this approach to the protection of intellectual property. For instance, the Department of Justice, the US Patent and Trademark Office, and the National Institute of Standards and Technology recently released their 2019 Joint Policy Statement on Remedies for Standards-Essential Patents Subject to Voluntary F/RAND Commitments, which clarifies that:

All remedies available under national law, including injunctive relief and adequate damages, should be available for infringement of standards-essential patents subject to a F/RAND commitment, if the facts of a given case warrant them. Consistent with the prevailing law and depending on the facts and forum, the remedies that may apply in a given patent case include injunctive relief, reasonable royalties, lost profits, enhanced damages for willful infringement, and exclusion orders issued by the U.S. International Trade Commission. These remedies are equally available in patent litigation involving standards-essential patents. While the existence of F/RAND or similar commitments, and conduct of the parties, are relevant and may inform the determination of appropriate remedies, the general framework for deciding these issues remains the same as in other patent cases. (emphasis added).

By broadening the antitrust duty to deal well beyond the bounds set by the Supreme Court, the district court opinion (and the FTC’s preferred approach, as well) eviscerates the right to exclude inherent in patent rights. In the words of retired Federal Circuit Judge Paul Michel in an amicus brief in the case: 

finding antitrust liability premised on the exercise of valid patent rights will fundamentally abrogate the patent system and its critical means for promoting and protecting important innovation.

And as we’ve noted elsewhere, this approach would seriously threaten consumer welfare:

Of course Qualcomm conditions the purchase of its chips on the licensing of its intellectual property; how could it be any other way? The alternative would require Qualcomm to actually facilitate the violation of its property rights by forcing it to sell its chips to device makers even if they refuse its patent license terms. In that world, what device maker would ever agree to pay more than a pittance for a patent license? The likely outcome is that Qualcomm charges more for its chips to compensate (or simply stops making them). Great, the FTC says; then competitors can fill the gap and — voila: the market is more competitive, prices will actually fall, and consumers will reap the benefits.

Except it doesn’t work that way. As many economists, including both the current [now former] and a prominent former chief economist of the FTC, have demonstrated, forcing royalty rates lower in such situations is at least as likely to harm competition as to benefit it. There is no sound theoretical or empirical basis for concluding that using antitrust to move royalty rates closer to some theoretical ideal will actually increase consumer welfare. All it does for certain is undermine patent holders’ property rights, virtually ensuring there will be less innovation.

The FTC realizes the district court doesn’t have the evidence to support its duty-to-deal analysis

Antitrust law does not abrogate the right of a patent holder to exclude and to choose when and how to deal with rivals, unless there is a proper finding of a duty to deal. In order to find a duty to deal, there must be a harm to competition, not just a competitor, which, under the Supreme Court’s Aspen and Trinko cases, can be inferred in the duty-to-deal context only where the challenged conduct leads to a “profit sacrifice.” But the record does not support such a finding. As we wrote in our amicus brief:

[T]he Supreme Court has identified only a single scenario from which it may plausibly be inferred that defendant’s refusal to deal with rivals harms consumers: The existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for defendant. 

A monopolist’s willingness to forego (short-term) profits plausibly permits an inference that conduct is not procompetitive, because harm to a rival caused by an increase in efficiency should lead to higher—not lower—profits for defendant. And “[i]f a firm has been ‘attempting to exclude rivals on some basis other than efficiency,’ it’s fair to characterize its behavior as predatory.” Aspen Skiing, 472 U.S. at 605 (quoting Robert Bork, The Antitrust Paradox 138 (1978)).

In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.” Slip op. at 137. 

But it is not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. See Trinko, 540 U.S. at 409 (“a willingness to forsake short-term profits”); Aspen Skiing, 472 U.S. at 610–11 (“it was willing to sacrifice short-run benefits”)…

The record here uniformly indicates Qualcomm expected to maximize its royalties by dealing with OEMs rather than rival chip makers; it neither anticipated nor endured short-term loss. As the district court itself concluded, Qualcomm’s licensing practices avoided patent exhaustion and earned it “humongously more lucrative” royalties. Slip op. at 1243–254. That Qualcomm anticipated greater profits from its conduct precludes an inference of anticompetitive harm.

Moreover, Qualcomm didn’t refuse to allow rivals to use its patents; it simply didn’t sell them explicit licenses to do so. As discussed in several places by the district court:

According to Andrew Hong (Legal Counsel at Samsung Intellectual Property Center), during license negotiations, Qualcomm made it clear to Samsung that “Qualcomm’s standard business practice was not to provide licenses to chip manufacturers.” Hong Depo. 161:16-19. Instead, Qualcomm had an “unwritten policy of not going after chip manufacturers.” Id. at 161:24-25… (p.123)

* * *

Alex Rogers (QTL President) testified at trial that as part of the 2018 Settlement Agreement between Samsung and Qualcomm, Qualcomm did not license Samsung, but instead promised only that Qualcomm would offer Samsung a FRAND license before suing Samsung: “Qualcomm gave Samsung an assurance that should Qualcomm ever seek to assert its cellular SEPs against that component business, against those components, we would first make Samsung an offer on fair, reasonable, and non-discriminatory terms.” Tr. at 1989:5-10. (p.124)

This is an important distinction. Qualcomm allows rivals to use its patented technology by not asserting its patent rights against them—which is to say: instead of licensing its technology for a fee, Qualcomm allows rivals to use its technology to develop their own chips royalty-free (and recoups its investment by licensing the technology to OEMs that choose to implement the technology in their devices). 

The irony of this analysis, of course, is that the district court effectively suggests that Qualcomm must charge rivals a positive, explicit price in exchange for a license in order to facilitate competition, while allowing rivals to use its patented technology for free (or at the “cost” of some small reduction in legal certainty, perhaps) is anticompetitive.

Nonetheless, the district court’s factual finding that Qualcomm’s licensing scheme was “humongously” profitable shows there was no profit sacrifice as required for a duty-to-deal finding. The general presumption that patent holders can exclude rivals is not subject to an antitrust duty to deal where there is no profit sacrifice by the patent holder. Here, however, Qualcomm did not sacrifice profits by adopting the challenged licensing scheme.

It is perhaps unsurprising that the FTC chose not to support the district court’s duty-to-deal argument, even though its holding was in the FTC’s favor. But, while the FTC was correct not to countenance the district court’s flawed arguments, the FTC’s alternative argument in its answering brief is even worse.

The FTC’s novel theory of harm is unsupported and weak

As noted, the FTC’s alternative theory is that Qualcomm violated Section 2 simply by failing to live up to its contractual SSO obligations. For the FTC, because Qualcomm joined an SSO, it may no longer legally refuse to deal. Moreover, there is no need to engage in an Aspen/Trinko analysis in order to find liability. Instead, according to the FTC’s brief, liability arises because the evasion of an exogenous pricing constraint (such as an SSO’s FRAND obligation) constitutes an antitrust harm:

Of course, a breach of contract, “standing alone,” does not “give rise to antitrust liability.” City of Vernon v. S. Cal. Edison Co., 955 F.2d 1361, 1368 (9th Cir. 1992); cf. Br. 52 n.6. Instead, a monopolist’s conduct that breaches such a contractual commitment is anticompetitive only when it satisfies traditional Section 2 standards—that is, only when it “tends to impair the opportunities of rivals and either does not further competition on the merits or does so in an unnecessarily restrictive way.” Cascade Health, 515 F.3d at 894. The district court’s factual findings demonstrate that Qualcomm’s breach of its SSO commitments satisfies both elements of that traditional test. (emphasis added)

To begin, it must be noted that the operative language quoted by the FTC from Cascade Health is attributed in Cascade Health to Aspen Skiing. In other words, even Cascade Health recognizes that Aspen Skiing represents the Supreme Court’s interpretation of that language in the duty-to-deal context. And in that case—in contrast to the FTC’s argument in its brief—the Court interpreted that language to require a showing that a defendant “was not motivated by efficiency concerns and that it was willing to sacrifice short-run benefits and consumer goodwill in exchange for a perceived long-run impact on its… rival.” (Aspen Skiing at 610-11) (emphasis added).

The language quoted by the FTC cannot simultaneously justify an appeal to an entirely different legal standard separate from that laid out in Aspen Skiing. As such, rather than dispensing with the duty to deal requirements laid out in that case, Cascade Health actually reinforces them.

Second, to support its argument the FTC points to Broadcom v. Qualcomm, 501 F.3d 297 (3d Cir. 2007), as an example of a court upholding an antitrust claim based on a defendant’s violation of FRAND terms.

In Broadcom, relying on the FTC’s enforcement action against Rambus before it was overturned by the D.C. Circuit, the Third Circuit found that there was an actionable issue when Qualcomm deceived other members of an SSO by promising to

include its proprietary technology in the… standard by falsely agreeing to abide by the [FRAND policies], but then breached those agreements by licensing its technology on non-FRAND terms. The intentional acquisition of monopoly power through deception… violates antitrust law. (emphasis added)

Even assuming Broadcom were good law post-Rambus, the case is inapposite. In Broadcom the court found that Qualcomm could be held to violate antitrust law by deceiving the SSO (by falsely promising to abide by FRAND terms) in order to induce it to accept Qualcomm’s patent in the standard. The court’s concern was that, by falsely inducing the SSO to adopt its technology, Qualcomm deceptively acquired monopoly power and limited access to competing technology:

When a patented technology is incorporated in a standard, adoption of the standard eliminates alternatives to the patented technology…. Firms may become locked in to a standard requiring the use of a competitor’s patented technology. 

Key to the court’s finding was that the alleged deception induced the SSO to adopt the technology in its standard:

We hold that (1) in a consensus-oriented private standard-setting environment, (2) a patent holder’s intentionally false promise to license essential proprietary technology on FRAND terms, (3) coupled with an SDO’s reliance on that promise when including the technology in a standard, and (4) the patent holder’s subsequent breach of that promise, is actionable conduct. (emphasis added)

Here, the claim is different. There is no allegation that Qualcomm engaged in deceptive conduct that affected the incorporation of its technology into the relevant standard. Indeed, there is no allegation that Qualcomm’s alleged monopoly power arises from its challenged practices; only that it abused its lawful monopoly power to extract supracompetitive prices. Even if an SEP holder may be found liable for falsely promising not to evade a commitment to deal with rivals in order to acquire monopoly power from its inclusion in a technological standard under Broadcom, that does not mean that it can be held liable for evading a commitment to deal with rivals unrelated to its inclusion in a standard, nor that such a refusal to deal should be evaluated under any standard other than that laid out in Aspen Skiing.

Moreover, the FTC nowhere mentions the D.C. Circuit’s subsequent Rambus decision overturning the FTC and calling the holding in Broadcom into question, nor does it discuss the Supreme Court’s NYNEX decision in any depth. Yet these cases stand clearly for the opposite proposition: a court cannot infer competitive harm from a company’s evasion of a FRAND pricing constraint. As we wrote in our amicus brief:

In Rambus Inc. v. FTC, 522 F.3d 456 (D.C. Cir. 2008), the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.” Id. at 466 (citation omitted). NYNEX and Rambus reinforce the Court’s repeated holding that an inference is permissible only where it points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” do not permit a court to undermine “[t]he freedom to switch suppliers [which] lies close to the heart of the competitive process that the antitrust laws seek to encourage. . . . Thus, this Court has refused to apply per se reasoning in cases involving that kind of activity.” NYNEX, 525 U.S. at 137 (citations omitted).

Essentially, the FTC’s brief alleges that Qualcomm’s conduct amounts to an evasion of the constraint imposed by FRAND terms—without which the SSO process itself is presumptively anticompetitive. Indeed, according to the FTC, it is only the FRAND obligation that saves the SSO agreement from being inherently anticompetitive. 

In fact, when a firm has made FRAND commitments to an SSO, requiring the firm to comply with its commitments mitigates the risk that the collaborative standard-setting process will harm competition. Product standards—implicit “agreement[s] not to manufacture, distribute, or purchase certain types of products”—“have a serious potential for anticompetitive harm.” Allied Tube, 486 U.S. at 500 (citation and footnote omitted). Accordingly, private SSOs “have traditionally been objects of antitrust scrutiny,” and the antitrust laws tolerate private standard-setting “only on the understanding that it will be conducted in a nonpartisan manner offering procompetitive benefits,” and in the presence of “meaningful safeguards” that prevent the standard-setting process from falling prey to “members with economic interests in stifling product competition.” Id. at 500-01, 506-07; see Broadcom, 501 F.3d at 310, 314-15 (collecting cases).

FRAND commitments are among the “meaningful safeguards” that SSOs have adopted to mitigate this serious risk to competition…. 

Courts have therefore recognized that conduct that breaches or otherwise “side-steps” these safeguards is appropriately subject to conventional Sherman Act scrutiny, not the heightened Aspen/Trinko standard… (pp.83-84)

In defense of the proposition that courts apply “traditional antitrust standards to breaches of voluntary commitments made to mitigate antitrust concerns,” the FTC’s brief cites not only Broadcom, but also two other cases:

While this Court has long afforded firms latitude to “deal or refuse to deal with whomever [they] please[] without fear of violating the antitrust laws,” FountWip, Inc. v. Reddi-Wip, Inc., 568 F.2d 1296, 1300 (9th Cir. 1978) (citing Colgate, 250 U.S. at 307), it, too, has applied traditional antitrust standards to breaches of voluntary commitments made to mitigate antitrust concerns. In Mount Hood Stages, Inc. v. Greyhound Corp., 555 F.2d 687 (9th Cir. 1977), this Court upheld a judgment holding that Greyhound violated Section 2 by refusing to interchange bus traffic with a competing bus line after voluntarily committing to do so in order to secure antitrust approval from the Interstate Commerce Commission for proposed acquisitions. Id. at 697; see also, e.g., Biovail Corp. Int’l v. Hoechst Aktiengesellschaft, 49 F. Supp. 2d 750, 759 (D.N.J. 1999) (breach of commitment to deal in violation of FTC merger consent decree exclusionary under Section 2). (pp.85-86)

The cases the FTC cites to justify the proposition all deal with companies sidestepping obligations in order to falsely acquire monopoly power. The two cases cited above both involve companies making promises to government agencies to win merger approval and then failing to follow through. And, as noted, Broadcom deals with the acquisition of monopoly power by making false promises to an SSO to induce the choice of proprietary technology in a standard. While such conduct in the acquisition of monopoly power may be actionable under Broadcom (though this is highly dubious post-Rambus), none of these cases supports the FTC’s claim that an SEP holder violates antitrust law any time it evades an SSO obligation to license its technology to rivals. 

Conclusion

Put simply, the district court’s opinion in FTC v. Qualcomm runs headlong into the Supreme Court’s Aspen decision and founders there. This is why the FTC is trying to avoid analyzing the case under Aspen and subsequent duty-to-deal jurisprudence (including Trinko, the 9th Circuit’s MetroNet decision, and the 10th Circuit’s Novell decision): because it knows that if the appellate court applies those standards, the district court’s duty-to-deal analysis will fail. The FTC’s basis for applying a different standard is unsupportable, however. And even if its logic for applying a different standard were valid, the FTC’s proffered alternative theory is groundless in light of Rambus and NYNEX. The Ninth Circuit should vacate the district court’s finding of liability. 

FTC v. Qualcomm

Last week the International Center for Law & Economics (ICLE) and twelve noted law and economics scholars filed an amicus brief in the Ninth Circuit in FTC v. Qualcomm, in support of appellant (Qualcomm) and urging reversal of the district court’s decision. The brief was authored by Geoffrey A. Manne, President & founder of ICLE, and Ben Sperry, Associate Director, Legal Research of ICLE. Jarod M. Bona and Aaron R. Gott of Bona Law PC collaborated in drafting the brief and they and their team provided invaluable pro bono legal assistance, for which we are enormously grateful. Signatories on the brief are listed at the end of this post.

We’ve written about the case several times on Truth on the Market, as have a number of guest bloggers, in our ongoing blog series on the case.

The ICLE amicus brief focuses on the ways that the district court exceeded the “error cost” guardrails erected by the Supreme Court to minimize the risk and cost of mistaken antitrust decisions, particularly those that wrongly condemn procompetitive behavior. As the brief notes at the outset:

The district court’s decision is disconnected from the underlying economics of the case. It improperly applied antitrust doctrine to the facts, and the result subverts the economic rationale guiding monopolization jurisprudence. The decision—if it stands—will undercut the competitive values antitrust law was designed to protect.  

The antitrust error cost framework was most famously elaborated by Frank Easterbrook in his seminal article, The Limits of Antitrust (1984). It has since been squarely adopted by the Supreme Court—most significantly in Brooke Group (1993), Trinko (2004), and linkLine (2009).  

In essence, the Court’s monopolization case law implements the error cost framework by (among other things) obliging courts to operate under certain decision rules that limit the use of inferences about the consequences of a defendant’s conduct except when the circumstances create what game theorists call a “separating equilibrium.” A separating equilibrium is a 

solution to a game in which players of different types adopt different strategies and thereby allow an uninformed player to draw inferences about an informed player’s type from that player’s actions.

Baird, Gertner & Picker, Game Theory and the Law
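To make the separating-equilibrium idea concrete, it can be sketched in a few lines of code. The payoffs below are purely hypothetical, chosen only to illustrate the structure of the inference; they are not drawn from the case record:

```python
# Hypothetical payoffs (illustration only). Two "types" of monopolist decide
# whether to keep dealing with a rival; only the anticompetitive type expects
# future monopoly rents from refusing.
PAYOFFS = {
    # (type, action): total payoff
    ("procompetitive", "deal"): 10,       # keeps a profitable relationship
    ("procompetitive", "refuse"): 4,      # forgoes profit, gains nothing later
    ("anticompetitive", "deal"): 10,      # same short-run profit...
    ("anticompetitive", "refuse"): 4 + 9, # ...but refusal buys future rents
}

def best_action(player_type):
    """Each type plays its payoff-maximizing action."""
    return max(["deal", "refuse"], key=lambda a: PAYOFFS[(player_type, a)])

choices = {t: best_action(t) for t in ("procompetitive", "anticompetitive")}

# A separating equilibrium exists when the types choose different actions,
# so an uninformed observer (here, a court) can infer type from conduct.
separating = len(set(choices.values())) == len(choices)
print(choices, separating)
```

Here the two types' best responses differ, so observed conduct reveals type; if the payoffs instead led both types to the same action (a "pooling" outcome), conduct would reveal nothing, and inferring anticompetitive intent from it would be unreliable.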

The key problem in antitrust is that while the consequence of complained-of conduct for competition (i.e., consumers) is often ambiguous, its deleterious effect on competitors is typically quite evident—whether it is actually anticompetitive or not. The question is whether (and when) it is appropriate to infer anticompetitive effect from discernible harm to competitors. 

Except in the narrowly circumscribed (by Trinko) instance of a unilateral refusal to deal, anticompetitive harm under the rule of reason must be proven. It may not be inferred from harm to competitors, because such an inference is too likely to be mistaken—and “mistaken inferences are especially costly, because they chill the very conduct the antitrust laws are designed to protect.” (Brooke Group, quoting yet another key Supreme Court antitrust error cost case, Matsushita (1986)). 

Yet, as the brief discusses, in finding Qualcomm liable the district court did not demand or find proof of harm to competition. Instead, the court’s opinion relies on impermissible inferences from ambiguous evidence to find that Qualcomm had (and violated) an antitrust duty to deal with rival chip makers and that its conduct resulted in anticompetitive foreclosure of competition. 

We urge you to read the brief (it’s pretty short—maybe the length of three blog posts) to get the whole argument. Below we draw attention to a few points we make in the brief that are especially significant. 

The district court bases its approach entirely on Microsoft — which it misinterprets in clear contravention of Supreme Court case law

The district court doesn’t stay within the strictures of the Supreme Court’s monopolization case law. In fact, although it obligingly recites some of the error cost language from Trinko, it quickly moves away from Supreme Court precedent and bases its approach entirely on its reading of the D.C. Circuit’s Microsoft (2001) decision. 

Unfortunately, the district court’s reading of Microsoft is mistaken and impermissible under Supreme Court precedent. Indeed, both the Supreme Court and the D.C. Circuit make clear that a finding of illegal monopolization may not rest on an inference of anticompetitive harm.

The district court cites Microsoft for the proposition that

Where a government agency seeks injunctive relief, the Court need only conclude that Qualcomm’s conduct made a “significant contribution” to Qualcomm’s maintenance of monopoly power. The plaintiff is not required to “present direct proof that a defendant’s continued monopoly power is precisely attributable to its anticompetitive conduct.”

It’s true Microsoft held that, in government actions seeking injunctions, “courts [may] infer ‘causation’ from the fact that a defendant has engaged in anticompetitive conduct that ‘reasonably appears capable of making a significant contribution to maintaining monopoly power.’” (Emphasis added). 

But Microsoft never suggested that anticompetitiveness itself may be inferred.

“Causation” and “anticompetitive effect” are not the same thing. Indeed, Microsoft addresses “anticompetitive conduct” and “causation” in separate sections of its decision. And whereas Microsoft allows that courts may infer “causation” in certain government actions, it makes no such allowance with respect to “anticompetitive effect.” In fact, it explicitly rules it out:

[T]he plaintiff… must demonstrate that the monopolist’s conduct indeed has the requisite anticompetitive effect…; no less in a case brought by the Government, it must demonstrate that the monopolist’s conduct harmed competition, not just a competitor.

The D.C. Circuit subsequently reinforced this clear conclusion of its holding in Microsoft in Rambus:

Deceptive conduct—like any other kind—must have an anticompetitive effect in order to form the basis of a monopolization claim…. In Microsoft… [t]he focus of our antitrust scrutiny was properly placed on the resulting harms to competition.

Finding causation entails connecting evidentiary dots, while finding anticompetitive effect requires an economic assessment. Without such analysis it’s impossible to distinguish procompetitive from anticompetitive conduct, and basing liability on such an inference effectively writes “anticompetitive” out of the law.

Thus, the district court is correct when it holds that it “need not conclude that Qualcomm’s conduct is the sole reason for its rivals’ exits or impaired status.” But it is simply wrong to hold—in the same sentence—that it can thus “conclude that Qualcomm’s practices harmed competition and consumers.” The former claim is consistent with Microsoft; the latter is emphatically not.

Under Trinko and Aspen Skiing the district court’s finding of an antitrust duty to deal is impermissible 

Because finding that a company operates under a duty to deal essentially permits a court to infer anticompetitive harm without proof, such a finding “comes dangerously close to being a form of ‘no-fault’ monopolization,” as Herbert Hovenkamp has written. It is also thus seriously disfavored by the Court’s error cost jurisprudence.

In Trinko, the Supreme Court interpreted its holding in Aspen Skiing as identifying essentially a single scenario from which it may plausibly be inferred that a monopolist’s refusal to deal with rivals harms consumers: the existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for the monopolist.

In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.”

But it’s not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. In short, what the Court requires is that the defendant exhibit behavior that, but for the expectation of future anticompetitive returns, is irrational.

It should be noted, as John Lopatka (here) and Alan Meese (here) (both of whom joined the amicus brief) have written, that even the Supreme Court’s approach is likely insufficient to permit a court to distinguish between procompetitive and anticompetitive conduct. 

But what is certain is that the district court’s approach in no way permits such an inference.

“Evasion of a competitive constraint” is not an antitrust-relevant refusal to deal

In order to infer anticompetitive effect, it’s not enough that a firm may have a “duty” to deal, as that term is colloquially used, based on some obligation other than an antitrust duty, because it can in no way be inferred from the evasion of that obligation that conduct is anticompetitive.

The district court bases its determination that Qualcomm’s conduct is anticompetitive on the fact that it enables the company to avoid patent exhaustion, FRAND commitments, and thus price competition in the chip market. But this conclusion is directly precluded by the Supreme Court’s holding in NYNEX.

Indeed, in Rambus, the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.”

As Josh Wright has noted:

[T]he objection to the “evasion” of any constraint approach is… that it opens the door to enforcement actions applied to business conduct that is not likely to harm competition and might be welfare increasing.

Thus NYNEX and Rambus (and linkLine) reinforce the Court’s repeated holding that an inference of harm to competition is permissible only where conduct points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” do not suffice.

The district court’s elaborate theory of harm rests fundamentally on the claim that Qualcomm injures rivals—and the record is devoid of evidence demonstrating actual harm to competition. Instead, the court infers it from what it labels “unreasonably high” royalty rates, enabled by Qualcomm’s evasion of competition from rivals. In turn, the court finds that that evasion of competition can be the source of liability if what Qualcomm evaded was an antitrust duty to deal. And, in impermissibly circular fashion, the court finds that Qualcomm indeed evaded an antitrust duty to deal—because its conduct allowed it to sustain “unreasonably high” prices. 

The Court’s antitrust error cost jurisprudence—from Brooke Group to NYNEX to Trinko & linkLine—stands for the proposition that no such circular inferences are permitted.

The district court’s foreclosure analysis also improperly relies on inferences in lieu of economic evidence

Because the district court doesn’t perform a competitive effects analysis, it fails to demonstrate the requisite “substantial” foreclosure of competition required to sustain a claim of anticompetitive exclusion. Instead the court once again infers anticompetitive harm from harm to competitors. 

The district court makes no effort to establish the quantity of competition foreclosed as required by the Supreme Court. Nor does the court demonstrate that the alleged foreclosure harms competition, as opposed to just rivals. Foreclosure per se is not impermissible and may be perfectly consistent with procompetitive conduct.

Again citing Microsoft, the district court asserts that a quantitative finding is not required. Yet, as the court’s citation to Microsoft should have made clear, in its stead a court must find actual anticompetitive effect; it may not simply assert it. As Microsoft held: 

It is clear that in all cases the plaintiff must… prove the degree of foreclosure. This is a prudential requirement; exclusivity provisions in contracts may serve many useful purposes. 

The court essentially infers substantiality from the fact that Qualcomm entered into exclusive deals with Apple (actually, volume discounts), from which the court concludes that Qualcomm foreclosed rivals’ access to a key customer. But its inference that this led to substantial foreclosure is based on internal business statements—so-called “hot docs”—characterizing the importance of Apple as a customer. Yet, as Geoffrey Manne and Marc Williamson explain, such documentary evidence is unreliable as a guide to economic significance or legal effect: 

Business people will often characterize information from a business perspective, and these characterizations may seem to have economic implications. However, business actors are subject to numerous forces that influence the rhetoric they use and the conclusions they draw….

There are perfectly good reasons to expect to see “bad” documents in business settings when there is no antitrust violation lurking behind them.

Assuming such language has the requisite economic or legal significance is unsupportable—especially when, as here, the requisite standard demands a particular quantitative significance.

Moreover, the court’s “surcharge” theory of exclusionary harm rests on assumptions regarding the mechanism by which the alleged surcharge excludes rivals and harms consumers. But the court incorrectly asserts that only one mechanism operates—and it makes no effort to quantify it. 

The court cites “basic economics” via Mankiw’s Principles of Microeconomics text for its conclusion:

The surcharge affects demand for rivals’ chips because as a matter of basic economics, regardless of whether a surcharge is imposed on OEMs or directly on Qualcomm’s rivals, “the price paid by buyers rises, and the price received by sellers falls.” Thus, the surcharge “places a wedge between the price that buyers pay and the price that sellers receive,” and demand for such transactions decreases. Rivals see lower sales volumes and lower margins, and consumers see less advanced features as competition decreases.

But even assuming the court is correct that Qualcomm’s conduct entails such a surcharge, basic economics does not hold that decreased demand for rivals’ chips is the only possible outcome. 

In actuality, an increase in the cost of an input for OEMs can have three possible effects:

  1. OEMs can pass all or some of the cost increase on to consumers in the form of higher phone prices. Assuming some elasticity of demand, this would mean fewer phone sales and thus less demand by OEMs for chips, as the court asserts. But the extent of that effect would depend on consumers’ demand elasticity and the magnitude of the cost increase as a percentage of the phone price. If demand is highly inelastic at this price (i.e., relatively insensitive to the relevant price change), it may have a tiny effect on the number of phones sold and thus the number of chips purchased—approaching zero as price insensitivity increases.
  2. OEMs can absorb the cost increase and realize lower profits but continue to sell the same number of phones and purchase the same number of chips. This would not directly affect demand for chips or their prices.
  3. OEMs can respond to a price increase by purchasing fewer chips from rivals and more chips from Qualcomm. While this would affect rivals’ chip sales, it would not necessarily affect consumer prices, the total number of phones sold, or OEMs’ margins—that result would depend on whether Qualcomm’s chips cost more or less than its rivals’. If the latter, it would even increase OEMs’ margins and/or lower consumer prices and increase output.

Alternatively, of course, the effect could be some combination of these.
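The textbook wedge mechanics, and why the size of the quantity effect turns on elasticity, can be sketched with linear demand and supply. The parameters below are illustrative assumptions, not figures from the record:

```python
# Linear inverse demand P = A - B*Q and inverse supply P = C + D*Q
# (illustrative functional forms, not estimates from the case).
def wedge_equilibrium(A, B, C, D, t=0.0):
    """A per-unit surcharge t makes buyers pay t more than sellers receive."""
    q = (A - C - t) / (B + D)   # quantity where the wedged market clears
    p_sellers = C + D * q       # price received by sellers
    p_buyers = p_sellers + t    # price paid by buyers
    return q, p_buyers, p_sellers

# Flat (elastic) demand: the surcharge cuts quantity by t/(B+D) = 5 units.
q0, pb0, ps0 = wedge_equilibrium(A=100, B=1, C=20, D=1)        # q = 40
q1, pb1, ps1 = wedge_equilibrium(A=100, B=1, C=20, D=1, t=10)  # q = 35

# Steep (inelastic) demand: the same surcharge cuts quantity by only 1 unit;
# the output effect shrinks toward zero as demand grows less price-sensitive.
q2, _, _ = wedge_equilibrium(A=100, B=9, C=20, D=1)            # q = 8
q3, _, _ = wedge_equilibrium(A=100, B=9, C=20, D=1, t=10)      # q = 7

print(q0 - q1, q2 - q3)
```

In both cases buyers pay more and sellers receive less, exactly as the Mankiw passage says; but the magnitude of the effect on rivals' sales depends on elasticities and on who absorbs the wedge, which is the quantification the court never attempted.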

Whether any of these outcomes would substantially exclude rivals is inherently uncertain to begin with. But demonstrating a reduction in rivals’ chip sales is a necessary but not sufficient condition for proving anticompetitive foreclosure. The FTC didn’t even demonstrate that rivals were substantially harmed, let alone that there was any effect on consumers—nor did the district court make such findings. 

Doing so would entail consideration of whether decreased demand for rivals’ chips flows from reduced consumer demand or OEMs’ switching to Qualcomm for supply, how consumer demand elasticity affects rivals’ chip sales, and whether Qualcomm’s chips were actually less or more expensive than rivals’. Yet the court determined none of these. 

Conclusion

Contrary to established Supreme Court precedent, the district court’s decision relies on mere inferences to establish anticompetitive effect. The decision, if it stands, would render a wide range of potentially procompetitive conduct presumptively illegal and thus harm consumer welfare. It should be reversed by the Ninth Circuit.

Joining ICLE on the brief are:

  • Donald J. Boudreaux, Professor of Economics, George Mason University
  • Kenneth G. Elzinga, Robert C. Taylor Professor of Economics, University of Virginia
  • Janice Hauge, Professor of Economics, University of North Texas
  • Justin (Gus) Hurwitz, Associate Professor of Law, University of Nebraska College of Law; Director of Law & Economics Programs, ICLE
  • Thomas A. Lambert, Wall Chair in Corporate Law and Governance, University of Missouri Law School
  • John E. Lopatka, A. Robert Noll Distinguished Professor of Law, Penn State University Law School
  • Daniel Lyons, Professor of Law, Boston College Law School
  • Geoffrey A. Manne, President and Founder, International Center for Law & Economics; Distinguished Fellow, Northwestern University Center on Law, Business & Economics
  • Alan J. Meese, Ball Professor of Law, William & Mary Law School
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics Emeritus, Emory University
  • Vernon L. Smith, George L. Argyros Endowed Chair in Finance and Economics, Chapman University School of Business; Nobel Laureate in Economics, 2002
  • Michael Sykuta, Associate Professor of Economics, University of Missouri


While we all wait on pins and needles for the DC Circuit to issue its long-expected ruling on the FCC’s Open Internet Order, another federal appeals court has pushed back on Tom Wheeler’s FCC for its unremitting “just trust us” approach to federal rulemaking.

The case, round three of Prometheus, et al. v. FCC, involves the FCC’s long-standing rules restricting common ownership of local broadcast stations and their extension by Tom Wheeler’s FCC to the use of joint sales agreements (JSAs). (For more background see our previous post here). Once again the FCC lost (it’s now only 1 for 3 in this case…), as the Third Circuit Court of Appeals took the Commission to task for failing to establish that its broadcast ownership rules were still in the public interest, as required by law, before it decided to extend those rules.

While much of the opinion deals with the FCC’s unreasonable delay (of more than 7 years) in completing two Quadrennial Reviews in relation to its diversity rules, the court also vacated the FCC’s expansion of its duopoly rule (or local television ownership rule) to ban joint sales agreements without first undertaking those reviews.

We (the International Center for Law and Economics, along with affiliated scholars of law, economics, and communications) filed an amicus brief arguing for precisely this result, noting that

the 2014 Order [] dramatically expands its scope by amending the FCC’s local ownership attribution rules to make the rule applicable to JSAs, which had never before been subject to it. The Commission thereby suddenly declares unlawful JSAs in scores of local markets, many of which have been operating for a decade or longer without any harm to competition. Even more remarkably, it does so despite the fact that both the DOJ and the FCC itself had previously reviewed many of these JSAs and concluded that they were not likely to lessen competition. In doing so, the FCC also fails to examine the empirical evidence accumulated over the nearly two decades some of these JSAs have been operating. That evidence shows that many of these JSAs have substantially reduced the costs of operating TV stations and improved the quality of their programming without causing any harm to competition, thereby serving the public interest.

The Third Circuit agreed that the FCC utterly failed to justify its continued foray into banning potentially pro-competitive arrangements, finding that

the Commission violated § 202(h) by expanding the reach of the ownership rules without first justifying their preexisting scope through a Quadrennial Review. In Prometheus I we made clear that § 202(h) requires that “no matter what the Commission decides to do to any particular rule—retain, repeal, or modify (whether to make more or less stringent)—it must do so in the public interest and support its decision with a reasoned analysis.” Prometheus I, 373 F.3d at 395. Attribution of television JSAs modifies the Commission’s ownership rules by making them more stringent. And, unless the Commission determines that the preexisting ownership rules are sound, it cannot logically demonstrate that an expansion is in the public interest. Put differently, we cannot decide whether the Commission’s rationale—the need to avoid circumvention of ownership rules—makes sense without knowing whether those rules are in the public interest. If they are not, then the public interest might not be served by closing loopholes to rules that should no longer exist.

Perhaps this decision will be a harbinger of good things to come. The FCC — and especially Tom Wheeler’s FCC — has a history of failing to justify its rules with anything approaching rigorous analysis. The Open Internet Order is a case in point. We will all be better off if courts begin to hold the Commission’s feet to the fire and throw out its rules when it fails to do the work needed to justify them.

A number of blockbuster mergers have received (often negative) attention from media and competition authorities in recent months. From the recently challenged Staples-Office Depot merger to the abandoned Comcast-Time Warner merger to the heavily scrutinized Aetna-Humana merger (among many others), there has been a wave of potential mega-mergers throughout the economy—many of them met with regulatory resistance. We’ve discussed several of these mergers at TOTM (see, e.g., here, here, here and here).

Many reporters, analysts, and even competition authorities have adopted various degrees of the usual stance that big is bad, and bigger is even badder. But worse yet, once this presumption applies, agencies have been skeptical of claimed efficiencies, placing a heightened burden on the merging parties to prove them and often ignoring them altogether. And, of course (and perhaps even worse still), there is the perennial problem of (often questionable) market definition — which tanked the Sysco/US Foods merger and which undergirds the FTC’s challenge of the Staples/Office Depot merger.

All of these issues are at play in the proposed acquisition of British aluminum can manufacturer Rexam PLC by American can manufacturer Ball Corp., which has likewise drawn the attention of competition authorities around the world — including those in Brazil, the European Union, and the United States.

But the Ball/Rexam merger has met with some important regulatory successes. Just recently the members of CADE, Brazil’s competition authority, unanimously approved the merger with limited divestitures. The most recent reports also indicate that the EU will likely approve it, as well. It’s now largely down to the FTC, which should approve the merger and not kill it or over-burden it with required divestitures on the basis of questionable antitrust economics.

The proposed merger raises a number of interesting issues in the surprisingly complex beverage container market. But this merger merits regulatory approval.

The International Center for Law & Economics recently released a research paper entitled, The Ball-Rexam Merger: The Case for a Competitive Can Market. The white paper offers an in-depth assessment of the economics of the beverage packaging industry; the place of the Ball-Rexam merger within this remarkably complex, global market; and the likely competitive effects of the deal.

The upshot is that the proposed merger is unlikely to have anticompetitive effects, and any competitive concerns that do arise can be readily addressed by a few targeted divestitures.

The bottom line

The production and distribution of aluminum cans is a surprisingly dynamic industry, characterized by evolving technology, shifting demand, complex bargaining dynamics, and significant changes in the costs of production and distribution. Despite the superficial appearance that the proposed merger will increase concentration in aluminum can manufacturing, we conclude that a proper understanding of the marketplace dynamics suggests that the merger is unlikely to have actual anticompetitive effects.

All told, and as we summarize in our Executive Summary, we found at least seven specific reasons for this conclusion:

  1. Because the appropriately defined product market includes not only stand-alone can manufacturers, but also vertically integrated beverage companies, as well as plastic and glass packaging manufacturers, the actual increase in concentration from the merger will be substantially less than suggested by the change in the number of nationwide aluminum can manufacturers.
  2. Moreover, in nearly all of the relevant geographic markets (which are much smaller than the typically nationwide markets from which concentration numbers are derived), the merger will not affect market concentration at all.
  3. While beverage packaging isn’t a typical, rapidly evolving, high-technology market, technological change is occurring. Coupled with shifting consumer demand (often driven by powerful beverage company marketing efforts), and considerable (and increasing) buyer power, historical beverage packaging market shares may have little predictive value going forward.
  4. The key importance of transportation costs and the effects of current input prices suggest that expanding demand can be effectively met only by expanding the geographic scope of production and by economizing on aluminum supply costs. These, in turn, suggest that increasing overall market concentration is consistent with increased, rather than decreased, competitiveness.
  5. The markets in which Ball and Rexam operate are dominated by a few large customers, who are themselves direct competitors in the upstream marketplace. These companies have shown a remarkable willingness and ability to invest in competing packaging supply capacity and to exert their substantial buyer power to discipline prices.
  6. For this same reason, complaints leveled against the proposed merger by these beverage giants — which are as much competitors as they are customers of the merging companies — should be viewed with skepticism.
  7. Finally, the merger should generate significant managerial and overhead efficiencies, and the merged firm’s expanded geographic footprint should allow it to service larger geographic areas for its multinational customers, thus lowering transaction costs and increasing its value to these customers.
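
The first point can be illustrated with the standard HHI merger screen. The shares below are hypothetical placeholders, not actual industry data; they show only how the same merger looks far less concentrating when the relevant market is drawn more broadly:

```python
# Hypothetical market shares (in percent) -- illustration only, not
# actual figures for the aluminum can or beverage packaging industry.

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared percentage shares."""
    return sum(s * s for s in shares)

def merger_delta(shares, i, j):
    """Change in HHI when firms i and j merge: 2 * s_i * s_j."""
    return 2 * shares[i] * shares[j]

# Narrow market: stand-alone can manufacturers only (made-up shares).
narrow = [40, 30, 20, 10]
# Broad market: the same two merging firms re-based over a market that also
# includes vertically integrated beverage companies and glass/plastic makers.
broad = [20, 15, 10, 5, 25, 15, 10]

print(merger_delta(narrow, 0, 1))  # the merger looks highly concentrating
print(merger_delta(broad, 0, 1))   # the same merger, much smaller delta
```

With the narrow definition the delta is 2,400 points; with the broader one it is 600, a difference that alone can move a deal across the agencies' screening thresholds. The lesson is not that these numbers describe Ball/Rexam, but that the measured concentration effect is an artifact of market definition.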

Distinguishing Ardagh: The interchangeability of aluminum and glass

An important potential sticking point for the FTC’s review of the merger is its recent decision to challenge the Ardagh-Saint Gobain merger. The cases are superficially similar, in that they both involve beverage packaging. But Ardagh should not stand as a model for the Commission’s treatment of Ball/Rexam. The FTC made a number of mistakes in Ardagh (including market definition and the treatment of efficiencies — the latter of which brought out a strenuous dissent from Commissioner Wright). But even on its own (questionable) terms, Ardagh shouldn’t mean trouble for Ball/Rexam.

As we noted in our December 1st letter to the FTC on the Ball/Rexam merger, and as we discuss in detail in the paper, the situation in the aluminum can market is quite different than the (alleged) market for “(1) the manufacture and sale of glass containers to Brewers; and (2) the manufacture and sale of glass containers to Distillers” at issue in Ardagh.

Importantly, the FTC found (almost certainly incorrectly, at least for the brewers) that other container types (e.g., plastic bottles and aluminum cans) were not part of the relevant product market in Ardagh. But in the markets in which aluminum cans are a primary form of packaging (most notably, soda and beer), our research indicates that glass, plastic, and aluminum are most definitely substitutes.

The Big Four beverage companies (Coca-Cola, PepsiCo, Anheuser-Busch InBev, and MillerCoors), which collectively make up 80% of the U.S. market for Ball and Rexam, are all vertically integrated to some degree, and provide much of their own supply of containers (a situation significantly different than the distillers in Ardagh). These companies exert powerful price discipline on the aluminum packaging market by, among other things, increasing (or threatening to increase) their own container manufacturing capacity, sponsoring new entry, and shifting production (and, via marketing, consumer demand) to competing packaging types.

For soda, Ardagh is obviously inapposite, as soda packaging wasn’t at issue there. But the FTC’s conclusion in Ardagh that aluminum cans (which in fact make up 56% of the beer packaging market) don’t compete with glass bottles for beer packaging is also suspect.

For aluminum can manufacturers Ball and Rexam, aluminum can’t be excluded from the market (obviously), and much of the beer in the U.S. that is packaged in aluminum is quite clearly also packaged in glass. The FTC claimed in Ardagh that glass and aluminum are consumed in distinct situations, so they don’t exert price pressure on each other. But that ignores the considerable ability of beer manufacturers to influence consumption choices, as well as the reality that consumer preferences for each type of container (whether driven by beer company marketing efforts or not) are merging, with cost considerations dominating other factors.

In fact, consumers consume beer in both packaging types largely interchangeably (with a few limited exceptions — e.g., poolside drinking demands aluminum or plastic), and beer manufacturers readily switch between the two types of packaging as the relative production costs shift.

Craft brewers, to take one important example, are rapidly switching to aluminum from glass, despite a supposed stigma surrounding canned beers. Some craft brewers (particularly the larger ones) package at least some of their beers in both glass and cans, while for many others it’s one or the other. Yet there’s no indication that craft beer consumption has fallen off because consumers won’t drink beer from cans in some situations — and obviously the prospect of this outcome hasn’t stopped craft brewers from abandoning bottles entirely in favor of more economical cans, nor has it induced them, as a general rule, to offer both types of packaging.

A very short time ago it might have seemed that aluminum wasn’t in the same market as glass for craft beer packaging. But, as recent trends have borne out, that differentiation wasn’t primarily a function of consumer preference (either at the brewer or end-consumer level). Rather, it was a function of bottling/canning costs (until recently the machinery required for canning was prohibitively expensive), materials costs (at various times glass has been cheaper than aluminum, depending on volume), and transportation costs (which cut against glass, but the relative attractiveness of different packaging materials is importantly a function of variable transportation costs). To be sure, consumer preference isn’t irrelevant, but the ease with which brewers have shifted consumer preferences suggests that it isn’t a strong constraint.

Transportation costs are key

Transportation costs, in fact, are a key part of the story — and of the conclusion that the Ball/Rexam merger is unlikely to have anticompetitive effects. First of all, transporting empty cans (or bottles, for that matter) is tremendously inefficient — which means that the relevant geographic markets for assessing the competitive effects of the Ball/Rexam merger are essentially the largely non-overlapping 200-mile circles around the companies’ manufacturing facilities. Because there are very few markets in which the two companies both have plants, the merger doesn’t change the extent of competition in the vast majority of relevant geographic markets.

But transportation costs are also relevant to the interchangeability of packaging materials. Glass is more expensive to transport than aluminum, and this is true not just for empty bottles, but for full ones, of course. So, among other things, by switching to cans (even if it entails up-front cost), smaller breweries can expand their geographic reach, potentially expanding sales enough to more than cover switching costs. The merger would further lower the costs of cans (and thus of geographic expansion) by enabling beverage companies to transact with a single company across a wider geographic range.

The reality is that the most important factor in packaging choice is cost, and that the packaging alternatives are functionally interchangeable. As a result, and given that the direct consumers of beverage packaging are beverage companies rather than end-consumers, relatively small cost changes readily spur changes in packaging choices. While there are some switching costs that might impede these shifts, they are readily overcome. For large beverage companies that already use multiple types and sizes of packaging for the same product, the costs are trivial: They already have packaging designs, marketing materials, distribution facilities and the like in place. For smaller companies, a shift can be more difficult, but innovations in labeling, mobile canning/bottling facilities, outsourced distribution and the like significantly reduce these costs.  

“There’s a great future in plastics”

All of this is even more true for plastic — even in the beer market. In fact, in 2010, 10% of the beer consumed in Europe was sold in plastic bottles, as was 15% of all beer consumed in South Korea. We weren’t able to find reliable numbers for the U.S., but particularly for cheaper beers, U.S. brewers are increasingly moving to plastic. And plastic bottles are the norm at stadiums and arenas. Whatever the exact numbers, clearly plastic holds a small fraction of the beer container market compared to glass and aluminum. But that number is just as clearly growing, and as cost considerations impel them (and technology enables them), giant, powerful brewers like AB InBev and MillerCoors are certainly willing and able to push consumers toward plastic.

Meanwhile, soda companies like Coca-Cola and Pepsi have successfully moved their markets so that today a majority of packaged soda is sold in plastic containers. There’s no evidence that this shift came about as a result of end-consumer demand, nor that the shift to plastic was delayed by a lack of demand elasticity; rather, it was primarily a function of these companies’ ability to realize bigger profits on sales in plastic containers (not least because they own their own plastic packaging production facilities).

And while it’s not at issue in Ball/Rexam because spirits are rarely sold in aluminum packaging, the FTC’s conclusion in Ardagh that

[n]on-glass packaging materials, such as plastic containers, are not in this relevant product market because not enough spirits customers would switch to non-glass packaging materials to make a SSNIP in glass containers to spirits customers unprofitable for a hypothetical monopolist

is highly suspect — which suggests the Commission may have gotten it wrong in other ways, too. For example, as one report notes:

But the most noteworthy inroads against glass have been made in distilled liquor. In terms of total units, plastic containers, almost all of them polyethylene terephthalate (PET), have surpassed glass and now hold a 56% share, which is projected to rise to 69% by 2017.

True, most of this must be tiny-volume airplane bottles, but by no means all of it is, and it’s clear that the cost advantages of plastic are driving a shift in distilled liquor packaging, as well. Some high-end brands are even moving to plastic. Whatever resistance may have existed in the past because of glass’s “image” (and this is true for beer, too) is breaking down: Don’t forget that even high-quality wines are now often sold with screw-tops or even in boxes — something that was once thought impossible.
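The FTC’s hypothetical-monopolist (SSNIP) conclusion quoted above is, at bottom, an empirical claim, and it is commonly operationalized with a critical-loss calculation: a price increase of t is unprofitable if the share of sales lost to substitutes exceeds t/(t + m), where m is the percentage margin. A minimal sketch of that arithmetic (the 40% margin below is purely illustrative, not a figure from the Ardagh record):

```python
def critical_loss(ssnip: float, margin: float) -> float:
    """Share of sales a hypothetical monopolist can afford to lose before a
    SSNIP (small but significant non-transitory increase in price) becomes
    unprofitable: ssnip / (ssnip + margin)."""
    return ssnip / (ssnip + margin)

# A 5% price increase on glass containers, with an illustrative 40% margin:
# the increase is unprofitable if more than ~11% of customers switch away
# (e.g., spirits makers shifting from glass to PET plastic).
loss_threshold = critical_loss(0.05, 0.40)
print(f"{loss_threshold:.1%}")  # 11.1%
```

Given how quickly plastic has taken share in distilled liquor, an actual loss above that threshold hardly seems implausible.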

The overall point is that the beverage packaging market faced by can makers like Ball and Rexam is remarkably complex, and, crucially, the presence of powerful, vertically integrated customers means that past or current demand by end-users is a poor indicator of what the market will look like in the future as input costs and other considerations faced by these companies shift. Right now, for example, over 50% of the world’s soda is packaged in plastic bottles, and this share is set to increase: The global plastic packaging market (not limited to just beverages) is expected to grow at a CAGR of 5.2% between 2014 and 2020, while aluminum packaging is expected to grow at just 2.9%.
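As a rough sanity check on what those growth rates imply over the whole period (assuming, as a simplification on our part, that the 2014–2020 window represents six annual compounding periods):

```python
def cumulative_growth(cagr: float, years: int) -> float:
    """Convert a compound annual growth rate into total growth over a period."""
    return (1 + cagr) ** years - 1

# Plastic packaging: 5.2% CAGR over the six years from 2014 to 2020
plastic = cumulative_growth(0.052, 6)   # ~0.355, i.e., roughly 35% total growth

# Aluminum packaging: 2.9% CAGR over the same period
aluminum = cumulative_growth(0.029, 6)  # ~0.187, i.e., roughly 19% total growth

print(f"Plastic: {plastic:.1%}, Aluminum: {aluminum:.1%}")
```

In other words, even modest-looking differences in annual growth rates compound into a substantial divergence in packaging shares over just a few years.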

A note on efficiencies

As noted above, the proposed Ball/Rexam merger also holds out the promise of substantial efficiencies (estimated at $300 million by the merging parties, due mainly to decreased transportation costs). There is a risk, however, that the FTC may effectively disregard those efficiencies, as it did in Ardagh (and in St. Luke’s before it), by saddling them with a higher burden of proof than it requires of its own prima facie claims. If the goal of antitrust law is to promote consumer welfare, competition authorities can’t ignore efficiencies in merger analysis.

In his Ardagh dissent, Commissioner Wright noted that:

Even when the same burden of proof is applied to anticompetitive effects and efficiencies, of course, reasonable minds can and often do differ when identifying and quantifying cognizable efficiencies as appears to have occurred in this case. My own analysis of cognizable efficiencies in this matter indicates they are significant. In my view, a critical issue highlighted by this case is whether, when, and to what extent the Commission will credit efficiencies generally, as well as whether the burden faced by the parties in establishing that proffered efficiencies are cognizable under the Merger Guidelines is higher than the burden of proof facing the agencies in establishing anticompetitive effects. After reviewing the record evidence on both anticompetitive effects and efficiencies in this case, my own view is that it would be impossible to come to the conclusions about each set forth in the Complaint and by the Commission — and particularly the conclusion that cognizable efficiencies are nearly zero — without applying asymmetric burdens.

The Commission shouldn’t make the same mistake here. In fact, here, where can manufacturers are squeezed between powerful companies both upstream (e.g., Alcoa) and downstream (e.g., AB InBev), and where transportation costs limit the opportunities for expanding the customer base of any particular plant, the ability to capitalize on economies of scale and geographic scope is essential to independent manufacturers’ abilities to efficiently meet rising demand.

Read our complete assessment of the merger’s effect here.

Last week concluded round 3 of Congressional hearings on mergers in the healthcare provider and health insurance markets. Much like the previous rounds, the hearing saw predictable representatives, of predictable constituencies, saying predictable things.

The pattern is pretty clear: The American Hospital Association (AHA) makes the case that mergers in the provider market are good for consumers, while mergers in the health insurance market are bad. A scholar or two decries all consolidation in both markets. Another interested group, like maybe the American Medical Association (AMA), also criticizes the mergers. And it’s usually left to a representative of the insurance industry, typically one or more of the merging parties themselves, or perhaps a scholar from a free market think tank, to defend the merger.

Lurking behind the public and politicized airings of these mergers, and especially the pending Anthem/Cigna and Aetna/Humana health insurance mergers, is the Affordable Care Act (ACA). Unfortunately, the partisan politics surrounding the ACA, particularly during this election season, may be trumping the sensible economic analysis of the competitive effects of these mergers.

In particular, the partisan assessments of the ACA’s effect on the marketplace have greatly colored the Congressional (mis-)understandings of the competitive consequences of the mergers.  

Witness testimony and questions from members of Congress at the hearings suggest that there is widespread agreement that the ACA is encouraging increased consolidation in healthcare provider markets, for example, but there is nothing approaching unanimity of opinion in Congress or among interested parties regarding what, if anything, to do about it. Congressional Democrats, for their part, have insisted that stepped up vigilance, particularly of health insurance mergers, is required to ensure that continued competition in health insurance markets isn’t undermined, and that the realization of the ACA’s objectives in the provider market aren’t undermined by insurance companies engaging in anticompetitive conduct. Meanwhile, Congressional Republicans have generally been inclined to imply (or outright state) that increased concentration is bad, so that they can blame increasing concentration and any lack of competition on the increased regulatory costs or other effects of the ACA. Both sides appear to be missing the greater complexities of the story, however.

While the ACA may be creating certain impediments in the health insurance market, it’s also creating some opportunities for increased health insurance competition, and implementing provisions that should serve to hold down prices. Furthermore, even if the ACA is encouraging more concentration, those increases in concentration can’t be assumed to be anticompetitive. Mergers may very well be the best way for insurers to provide benefits to consumers in a post-ACA world — that is, the world we live in. The ACA may have plenty of negative outcomes, and there may be reasons to attack the ACA itself, but there is no reason to assume that any increased concentration it may bring about is a bad thing.

Asking the right questions about the ACA

We don’t need more self-serving and/or politicized testimony. We need instead to apply an economic framework to the competition issues arising from these mergers in order to understand their actual, likely effects on the health insurance marketplace we have. This framework has to answer questions like:

  • How do we understand the effects of the ACA on the marketplace?
    • In what ways does the ACA require us to alter our understanding of the competitive environment in which health insurance and healthcare are offered?
    • Does the ACA promote concentration in health insurance markets?
    • If so, is that a bad thing?
  • Do efficiencies arise from increased integration in the healthcare provider market?
  • Do efficiencies arise from increased integration in the health insurance market?
  • How do state regulatory regimes affect the understanding of what markets are at issue, and what competitive effects are likely, for antitrust analysis?
  • What are the potential competitive effects of increased concentration in the health care markets?
  • Does increased health insurance market concentration exacerbate or counteract those effects?
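Several of these questions turn on “concentration,” which in merger review is conventionally measured by the Herfindahl-Hirschman Index (HHI): the sum of the squared market shares of all firms in the market. A minimal sketch, using purely hypothetical shares rather than figures from any actual insurance market:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent).
    Ranges from near 0 (atomistic competition) to 10,000 (monopoly)."""
    return sum(s ** 2 for s in shares_pct)

# Hypothetical pre-merger market: four insurers with 40/30/20/10 shares
pre = hhi([40, 30, 20, 10])   # 1600 + 900 + 400 + 100 = 3000

# Suppose the 20% and 10% insurers merge
post = hhi([40, 30, 30])      # 1600 + 900 + 900 = 3400

# Under the 2010 Horizontal Merger Guidelines, a post-merger HHI above 2500
# combined with an increase above 200 triggers a presumption of enhanced
# market power -- a presumption, not a conclusion about actual effects.
print(post - pre)  # 400
```

As the rest of this post argues, however, the index says nothing by itself about whether a given increase in concentration harms or benefits consumers.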

Beginning with this post, at least a few of us here at TOTM will take on some of these issues, as part of a blog series aimed at better understanding the antitrust law and economics of the pending health insurance mergers.

Today, we will focus on the ambiguous competitive implications of the ACA. Although not a comprehensive analysis, in this post we will discuss some key insights into how the ACA’s regulations and subsidies should inform our assessment of the competitiveness of the healthcare industry as a whole, and the antitrust review of health insurance mergers in particular.

The ambiguous effects of the ACA

It’s an understatement to say that the ACA is an issue of great political controversy. While many Democrats argue that it has been nothing but a boon to consumers, Republicans usually have nothing good to say about the law’s effects. But both sides miss important but ambiguous effects of the law on the healthcare industry. And because they miss (or disregard) this ambiguity for political reasons, they risk seriously misunderstanding the legal and economic implications of the ACA for healthcare industry mergers.

To begin with, there are substantial negative effects, of course. Requiring insurance companies to accept patients with pre-existing conditions reduces the ability of insurance companies to manage risk. This has led to upward pricing pressure for premiums. While the mandate to buy insurance was supposed to help bring more young, healthy people into the risk pool, so far the projected signups haven’t been realized.

The ACA’s redefinition of what is an acceptable insurance policy has also caused many consumers to lose the policy of their choice. And the ACA’s many regulations, such as the medical loss ratio rule requiring insurance companies to spend at least 80% of premiums on healthcare, have squeezed the profit margins of many insurance companies, leading, in some cases, to exit from the marketplace altogether and, in others, to a reduction of new marketplace entry or competition in other submarkets.
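The mechanics of the 80% requirement (the “medical loss ratio”) are straightforward: if an insurer spends less than the required share of premium revenue on care, it must rebate the shortfall to policyholders. A simplified sketch (actual MLR accounting adjusts for taxes, fees, quality-improvement spending, and multi-year credibility, all of which we ignore here):

```python
def mlr_rebate(premiums: float, medical_spending: float, floor: float = 0.80) -> float:
    """Rebate owed when spending on care falls below the required share of
    premiums. Simplified: real MLR calculations involve many adjustments."""
    return max(0.0, floor * premiums - medical_spending)

# An insurer collects $100M in premiums but spends only $75M on care:
# its MLR is 75%, so it owes policyholders the $5M shortfall as a rebate.
print(mlr_rebate(100e6, 75e6))  # 5000000.0
```

This is why the rule squeezes margins: administrative costs and profit must fit inside the remaining 20% of premium revenue, however large or small the insurer.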

On the other hand, there may be benefits from the ACA. While many insurers participated in private exchanges even before the ACA-mandated health insurance exchanges, the increased consumer education from the government’s efforts may have helped enrollment even in private exchanges, and may also have helped to keep premiums from increasing as much as they would have otherwise. At the same time, the increased subsidies for individuals have helped lower-income people afford those premiums. Some have even argued that increased participation in the on-demand economy can be linked to the ability of individuals to buy health insurance directly. On top of that, there has been some entry into certain health insurance submarkets due to lower barriers to entry (because there is less need for agents to sell in a new market with the online exchanges). And the changes in how Medicare pays, with a greater focus on outcomes rather than services provided, have led to the adoption of value-based pricing by both health care providers and health insurance companies.

Further, some of the ACA’s effects have  decidedly ambiguous consequences for healthcare and health insurance markets. On the one hand, for example, the ACA’s compensation rules have encouraged consolidation among healthcare providers, as noted. One reason for this is that the government gives higher payments for Medicare services delivered by a hospital versus an independent doctor. Similarly, increased regulatory burdens have led to higher compliance costs and more consolidation as providers attempt to economize on those costs. All of this has happened perhaps to the detriment of doctors (and/or patients) who wanted to remain independent from hospitals and larger health network systems, and, as a result, has generally raised costs for payors like insurers and governments.

But much of this consolidation has also arguably led to increased efficiency and greater benefits for consumers. For instance, the integration of healthcare networks leads to increased sharing of health information and better analytics, better care for patients, reduced overhead costs, and other efficiencies. Ultimately these should translate into higher quality care for patients. And to the extent that they do, they should also translate into lower costs for insurers and lower premiums — provided health insurers are not prevented from obtaining sufficient bargaining power to impose pricing discipline on healthcare providers.

In other words, both the AHA and AMA could be right as to different aspects of the ACA’s effects.

Understanding mergers within the regulatory environment

But what they can’t say is that increased consolidation per se is clearly problematic, nor that, even if it is correlated with sub-optimal outcomes, it is consolidation causing those outcomes, rather than something else (like the ACA) that is causing both the sub-optimal outcomes as well as consolidation.

In fact, it may well be the case that increased consolidation improves overall outcomes in healthcare provider and health insurance markets relative to what would happen under the ACA absent consolidation. For Congressional Democrats and others interested in bolstering the ACA and offering the best possible outcomes for consumers, reflexively challenging health insurance mergers because consolidation is “bad,” may be undermining both of these objectives.

Meanwhile, and for the same reasons, Congressional Republicans who decry Obamacare should be careful that they do not likewise condemn mergers under what amounts to a “big is bad” theory that is inconsistent with the rigorous law and economics approach that they otherwise generally support. To the extent that the true target is not health insurance industry consolidation, but rather underlying regulatory changes that have encouraged that consolidation, scoring political points by impugning mergers threatens both health insurance consumers in the short run, as well as consumers throughout the economy in the long run (by undermining the well-established economic critiques of a reflexive “big is bad” response).

It is simply not clear that ACA-induced health insurance mergers are likely to be anticompetitive. In fact, because the ACA builds on state regulation of insurance providers, requiring greater transparency and regulatory review of pricing and coverage terms, it seems unlikely that health insurers would be free to engage in anticompetitive price increases or reduced coverage that could harm consumers.

On the contrary, the managerial and transactional efficiencies from the proposed mergers, combined with greater bargaining power against now-larger providers, are likely to lead to both better quality care and cost savings passed on to consumers. Increased entry, at least in part due to the ACA in most of the markets in which the merging companies will compete, along with integrated health networks themselves entering and threatening entry into insurance markets, will almost certainly lead to more consumer cost savings. In the current regulatory environment created by the ACA, in other words, insurance mergers have considerable upside potential, with little downside risk.

Conclusion

In sum, regardless of what one thinks about the ACA and its likely effects on consumers, it is not clear that health insurance mergers, especially in a post-ACA world, will be harmful.

Rather, assessing the likely competitive effects of health insurance mergers entails consideration of many complicated (and, unfortunately, politicized) issues. In future blog posts we will discuss (among other things): the proper treatment of efficiencies arising from health insurance mergers, the appropriate geographic and product markets for health insurance merger reviews, the role of state regulations in assessing likely competitive effects, and the strengths and weaknesses of arguments for potential competitive harms arising from the mergers.

Recent years have seen an increasing interest in incorporating privacy into antitrust analysis. The FTC and regulators in Europe have rejected these calls so far, but certain scholars and activists continue their attempts to breathe life into this novel concept. Elsewhere we have written at length on the scholarship addressing the issue and found the case for incorporation wanting. Among the errors proponents make is a persistent (and woefully unsubstantiated) assertion that online data can amount to a barrier to entry, insulating incumbent services from competition and ensuring that only the largest providers thrive. This data barrier to entry, it is alleged, can then allow firms with monopoly power to harm consumers, either directly through “bad acts” like price discrimination, or indirectly by raising the costs of advertising, which then get passed on to consumers.

A case in point was on display at last week’s George Mason Law & Economics Center Briefing on Big Data, Privacy, and Antitrust. Building on their growing body of advocacy work, Nathan Newman and Allen Grunes argued that this hypothesized data barrier to entry actually exists, and that it prevents effective competition from search engines and social networks that are interested in offering services with heightened privacy protections.

According to Newman and Grunes, network effects and economies of scale ensure that dominant companies in search and social networking (they specifically named Google and Facebook — implying that they are in separate markets) operate without effective competition. This results in antitrust harm, they assert, because it precludes competition on the non-price factor of privacy protection.

In other words, according to Newman and Grunes, even though Google and Facebook offer their services for a price of $0 and constantly innovate and upgrade their products, consumers are nevertheless harmed because the business models of less-privacy-invasive alternatives are foreclosed by insufficient access to data (an almost self-contradicting and silly narrative for many reasons, including the big question of whether consumers prefer greater privacy protection to free stuff). Without access to, and use of, copious amounts of data, Newman and Grunes argue, the algorithms underlying search and targeted advertising are necessarily less effective and thus the search product without such access is less useful to consumers. And even more importantly to Newman, the value to advertisers of the resulting consumer profiles is diminished.

Newman has put forth a number of other possible antitrust harms that purportedly result from this alleged data barrier to entry, as well. Among these is the increased cost of advertising to those who wish to reach consumers. Presumably this would harm end users who have to pay more for goods and services because the costs of advertising are passed on to them. On top of that, Newman argues that ad networks inherently facilitate price discrimination, an outcome that he asserts amounts to antitrust harm.

FTC Commissioner Maureen Ohlhausen (who also spoke at the George Mason event) recently made the case that antitrust law is not well-suited to handling privacy problems. She argues — convincingly — that competition policy and consumer protection should be kept separate to preserve doctrinal stability. Antitrust law deals with harms to competition through the lens of economic analysis. Consumer protection law is tailored to deal with broader societal harms and aims at protecting the “sanctity” of consumer transactions. Antitrust law can, in theory, deal with privacy as a non-price factor of competition, but this is an uneasy fit because of the difficulties of balancing quality over two dimensions: Privacy may be something some consumers want, but others would prefer a better algorithm for search and social networks, and targeted ads with free content, for instance.

In fact, there is general agreement with Commissioner Ohlhausen on her basic points, even among critics like Newman and Grunes. But, as mentioned above, views diverge over whether there are some privacy harms that should nevertheless factor into competition analysis, and on whether there is in fact  a data barrier to entry that makes these harms possible.

As we explain below, however, the notion of data as an antitrust-relevant barrier to entry is simply a myth. And, because all of the theories of “privacy as an antitrust harm” are essentially predicated on this, they are meritless.

First, data is useful to all industries — this is not some new phenomenon particular to online companies

It bears repeating (because critics seem to forget it in their rush to embrace “online exceptionalism”) that offline retailers also receive substantial benefit from, and greatly benefit consumers by, knowing more about what consumers want and when they want it. Through devices like coupons and loyalty cards (to say nothing of targeted mailing lists and the age-old practice of data mining check-out receipts), brick-and-mortar retailers can track purchase data and better serve consumers. Not only do consumers receive better deals for using them, but retailers know what products to stock and advertise and when and on what products to run sales. For instance:

  • Macy’s analyzes tens of millions of terabytes of data every day to gain insights from social media and store transactions. Over the past three years, the use of big data analytics alone has helped Macy’s boost its revenue growth by 4 percent annually.
  • Following its acquisition of Kosmix in 2011, Walmart established @WalmartLabs, which created its own product search engine for online shoppers. In the first year of its use alone, the number of customers buying a product on Walmart.com after researching a purchase increased by 20 percent. According to Ron Bensen, the vice president of engineering at @WalmartLabs, the combination of in-store and online data could give brick-and-mortar retailers like Walmart an advantage over strictly online stores.
  • Panera and a whole host of restaurants, grocery stores, drug stores and retailers use loyalty cards to advertise and learn about consumer preferences.

And of course there is a host of other uses for data, as well, including security, fraud prevention, product optimization, risk reduction for the insured, knowing what content is most interesting to readers, etc. The importance of data stretches far beyond the online world, and far beyond mere retail uses more generally. To describe even online giants like Amazon, Apple, Microsoft, Facebook and Google as having a monopoly on data is silly.

Second, it’s not the amount of data that leads to success but building a better mousetrap

The value of knowing someone’s birthday, for example, is not in that tidbit itself, but in the fact that you know this is a good day to give that person a present. Most of the data that supports the advertising networks underlying the Internet ecosphere is of this sort: Information is important to companies because of the value that can be drawn from it, not for the inherent value of the data itself. Companies don’t collect information about you to stalk you, but to better provide goods and services to you.

Moreover, data itself is not only less important than what can be drawn from it, but data is also less important than the underlying product it informs. For instance, Snapchat created a challenger to Facebook so successfully (and in such a short time) that Facebook attempted to buy it for $3 billion (Google offered $4 billion). But Facebook’s interest in Snapchat wasn’t about its data. Instead, Snapchat was valuable — and a competitive challenge to Facebook — because it cleverly incorporated the (apparently novel) insight that many people wanted to share information in a more private way.

Relatedly, Twitter, Instagram, LinkedIn, Yelp, Pinterest (and Facebook itself) all started with little (or no) data, and they have had a lot of success. Meanwhile, despite its supposed data advantages, Google’s attempt at social networking — Google+ — has never caught up to Facebook in popularity with users (and thus with advertisers, either). And scrappy social network Ello is starting to build a significant base without any data collection for advertising at all.

At the same time it’s simply not the case that the alleged data giants — the ones supposedly insulating themselves behind data barriers to entry — actually have the type of data most relevant to startups anyway. As Andres Lerner has argued, if you wanted to start a travel business, the data from Kayak or Priceline would be far more relevant. Or if you wanted to start a ride-sharing business, data from cab companies would be more useful than the broad, market-cross-cutting profiles Google and Facebook have. Consider companies like Uber, Lyft and Sidecar that had no customer data when they began to challenge established cab companies that did possess such data. If data were really so significant, they could never have competed successfully. But Uber, Lyft and Sidecar have been able to effectively compete because they built products that users wanted to use — they came up with an idea for a better mousetrap. The data they have accrued came after they innovated, entered the market and mounted their successful challenges — not before.

In reality, those who complain about data facilitating unassailable competitive advantages have it exactly backwards. Companies need to innovate to attract consumer data, otherwise consumers will switch to competitors (including both new entrants and established incumbents). As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results: The continued explosion of new products, services and other apps is evidence that data is not a bottleneck to competition but a spur to drive it.

Third, competition online is one click or thumb swipe away; that is, barriers to entry and switching costs are low

Somehow, in the face of alleged data barriers to entry, competition online continues to soar, with newcomers constantly emerging and triumphing. This suggests that the barriers to entry are not so high as to prevent robust competition.

Again, despite the supposed data-based monopolies of Facebook, Google, Amazon, Apple and others, there exist powerful competitors in the marketplaces they compete in:

  • If consumers want to make a purchase, they are more likely to do their research on Amazon than Google.
  • Google flight search has failed to seriously challenge — let alone displace —  its competitors, as critics feared. Kayak, Expedia and the like remain the most prominent travel search sites — despite Google having literally purchased ITA’s trove of flight data and data-processing acumen.
  • People looking for local reviews go to Yelp and TripAdvisor (and, increasingly, Facebook) as often as Google.
  • Pinterest, one of the most highly valued startups today, is now a serious challenger to traditional search engines when people want to discover new products.
  • With its recent acquisition of the shopping search engine, TheFind, and test-run of a “buy” button, Facebook is also gearing up to become a major competitor in the realm of e-commerce, challenging Amazon.
  • Likewise, Amazon recently launched its own ad network, “Amazon Sponsored Links,” to challenge other advertising players.

Even assuming for the sake of argument that data creates a barrier to entry, there is little evidence that consumers cannot easily switch to a competitor. While there are sometimes network effects online, like with social networking, history still shows that people will switch. MySpace was considered a dominant network until it made a series of bad business decisions and everyone ended up on Facebook instead. Similarly, Internet users can and do use Bing, DuckDuckGo, Yahoo, and a plethora of more specialized search engines on top of and instead of Google. And don’t forget that Google itself was once an upstart new entrant that replaced once-household names like Yahoo and AltaVista.

Fourth, access to data is not exclusive

Critics like Newman have compared Google to Standard Oil and argued that government authorities need to step in to limit Google’s control over data. But the comparison of data to oil is deeply flawed. If Exxon drills and extracts oil from the ground, that oil is no longer available to BP. Data is not finite in the same way; it is non-rivalrous. To use an earlier example, Google knowing my birthday doesn’t limit the ability of Facebook to know my birthday, as well. While databases may be proprietary, the underlying data is not. And what matters more than the data itself is how well it is analyzed.

This is especially important when discussing data online, where multi-homing is ubiquitous, meaning many competitors end up voluntarily sharing access to data. For instance, I can use the friend-finder feature on WordPress to find Facebook friends, Google connections, and people I’m following on Twitter who also use the site for blogging. Using this feature allows WordPress to access a user’s contact lists on these major online platforms.

[Image: WordPress friend-finder feature]

Further, it is not apparent that Google’s competitors have less data available to them. Microsoft, for instance, has admitted that it may actually have more data. And, importantly for this discussion, Microsoft may have actually garnered some of its data for Bing from Google.

If Google has a high cost per click, then perhaps it’s because it is worth it to advertisers: There are more eyes on Google because of its superior search product. Contra Newman and Grunes, Google may just be more popular for consumers and advertisers alike because the algorithm makes it more useful, not because it has more data than everyone else.

Fifth, the data barrier to entry argument does not have workable antitrust remedies

The misguided logic of data barrier to entry arguments leaves a lot of questions unanswered. Perhaps most important among these is the question of remedies. What remedy would apply to a company found guilty of leveraging its market power with data?

It’s actually quite difficult to conceive of a practical means for a competition authority to craft remedies that would address the stated concerns without imposing enormous social costs. In the unilateral conduct context, the most obvious remedy would involve the forced sharing of data.

On the one hand, as we’ve noted, it’s not clear this would actually accomplish much. If competitors can’t actually make good use of data, simply having more of it isn’t going to change things. At the same time, such a result would reduce the incentive to build data networks to begin with. In their startup stage, companies like Uber and Facebook required several months and hundreds of thousands, if not millions, of dollars to design and develop just the first iteration of the products consumers love. Would any of them have done it if they had to share their insights? In fact, it may well be that access to these free insights is what competitors actually want; it’s not the data they’re lacking, but the vision or engineering acumen to use it.

Other remedies limiting the collection and use of data are not only outside the normal scope of antitrust remedies; they would also involve extremely costly court supervision and may entail problematic “collisions between new technologies and privacy rights,” as last year’s White House Report on Big Data and Privacy put it.

It is equally unclear what an antitrust enforcer could do in the merger context. As Commissioner Ohlhausen has argued, blocking specific transactions does not necessarily stop data transfer or promote privacy interests. Parties could simply house data in a standalone entity and enter into licensing arrangements. And conditioning transactions with forced data sharing requirements would lead to the same problems described above.

If antitrust doesn’t provide a remedy, then it is not clear why it should apply at all. The absence of workable remedies is in fact a strong indication that data and privacy issues are not suitable for antitrust. Instead, such concerns would be better dealt with under consumer protection law or by targeted legislation.

In short, all of this hand-wringing over privacy is largely a tempest in a teapot — especially when one considers the extent to which the White House and other government bodies have studiously ignored the real threat: government misuse of data à la the NSA. It’s almost as if the White House is deliberately shifting the public’s gaze from the reality of extensive government spying by directing it toward a fantasy world of nefarious corporations abusing private information….

The White House’s proposed bill is emblematic of many government “fixes” to largely non-existent privacy issues, and it exhibits the same core defects that undermine both its claims and its proposed solutions. As a result, the proposed bill vastly overemphasizes regulation to the dangerous detriment of the innovative benefits of Big Data for consumers and society at large.
