
Qualcomm is currently in the midst of a high-profile antitrust case against the FTC. At the heart of these proceedings lies Qualcomm’s so-called “No License, No Chips” (NLNC) policy, whereby it purportedly refuses to sell chips to OEMs that have not concluded a license agreement covering its underlying intellectual property. According to the FTC and Qualcomm’s opponents, this ultimately thwarts competition in the chipset market.

But Qualcomm’s critics fail to explain convincingly how NLNC thwarts competition — a failing that is particularly evident in the short hypothetical put forward in the amicus brief penned by Mark Lemley, Douglas Melamed, and Steven Salop. This blog post responds to their brief.

The amici’s hypothetical

In order to highlight the most salient features of the case against Qualcomm, the brief’s authors offer the following stylized example:

A hypothetical example can illustrate how Qualcomm’s strategy increases the royalties it is able to charge OEMs. Suppose that the reasonable royalty Qualcomm could charge OEMs if it licensed the patents separately from its chipsets is $2, and that the monopoly price of Qualcomm’s chips is $18 for an all-in monopoly cost to OEMs of $20. Suppose that a new chipmaker entrant is able to manufacture chipsets of comparable quality at a cost of $11 each. In that case, the rival chipmaker entrant could sell its chips to OEMs for slightly more than $11. An OEM’s all-in cost of buying from the new entrant would be slightly above $13 (i.e., the Qualcomm reasonable license royalty of $2 plus the entrant chipmaker’s price of slightly more than $11). This entry into the chipset market would induce price competition for chips. Qualcomm would still be entitled to its patent royalties of $2, but it would no longer be able to charge the monopoly all-in price of $20. The competition would force Qualcomm to reduce its chipset prices from $18 down to something closer to $11 and its all-in price from $20 down to something closer to $13.

Qualcomm’s NLNC policy prevents this competition. To illustrate, suppose instead that Qualcomm implements the NLNC policy, raising its patent royalty to $10 and cutting the chip price to $10. The all-in cost to an OEM that buys Qualcomm chips will be maintained at the monopoly level of $20. But the OEM’s cost of using the rival entrant’s chipsets now will increase to a level above $21 (i.e., the slightly higher than $11 price for the entrant’s chipset plus the $10 royalty that the OEM pays to Qualcomm of $10). Because the cost of using the entrant’s chipsets will exceed Qualcomm’s all-in monopoly price, Qualcomm will face no competitive pressure to reduce its chipset or all-in prices.
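To keep the amici’s numbers straight before examining them, here is a minimal sketch that simply restates the two scenarios in the quoted hypothetical. Every figure comes from the brief’s example, not from the actual record.

```python
# A minimal restatement of the amici's hypothetical (their numbers, not record evidence).

FRAND_ROYALTY = 2      # the amici's assumed "reasonable" per-chip royalty
ENTRANT_PRICE = 11     # entrant's per-chip cost, and roughly its price

# Scenario 1: Qualcomm licenses at the FRAND rate and competes on chips.
qualcomm_all_in_monopoly = 18 + FRAND_ROYALTY    # $20 all-in monopoly price
entrant_all_in = ENTRANT_PRICE + FRAND_ROYALTY   # slightly above $13
print(qualcomm_all_in_monopoly, entrant_all_in)  # 20 13 -> entry constrains the all-in price

# Scenario 2 (NLNC, per the amici): royalty raised to $10, chip price cut to $10.
nlnc_royalty, nlnc_chip_price = 10, 10
qualcomm_all_in_nlnc = nlnc_chip_price + nlnc_royalty  # still $20
entrant_all_in_nlnc = ENTRANT_PRICE + nlnc_royalty     # above $21
print(qualcomm_all_in_nlnc, entrant_all_in_nlnc)       # 20 21 -> the entrant is priced out
```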

A close inspection reveals that this hypothetical is deeply flawed

There appear to be five steps in the amici’s reasoning:

  1. Chips and IP are complementary goods that are bought in fixed proportions, so buyers have a single reserve price for both;
  2. Because of its FRAND pledges, Qualcomm is unable to directly charge a monopoly price for its IP;
  3. But, according to the amici, Qualcomm can obtain these monopoly profits by keeping competitors out of the chipset market [this would give Qualcomm a chipset monopoly and, theoretically at least, enable it to charge the combined (IP + chips) monopoly price for its chips alone, thus effectively evading its FRAND pledges];
  4. To keep rivals out of the chipset market, Qualcomm undercuts them on chip prices and recoups its losses by charging supracompetitive royalty rates on its IP;
  5. This is allegedly made possible by the “No License, No Chips” policy, which forces firms to obtain a license from Qualcomm, even when they purchase chips from rivals.

While points 1 and 3 of the amici’s reasoning are uncontroversial, points 2 and 4 are mutually exclusive. This flaw ultimately undermines their entire argument, notably point 5. 

The contradiction between points 2 and 4 is evident. The amici argue (using hypothetical but representative numbers) that Qualcomm’s FRAND pledges should prevent it from charging more than $2 in royalties per chip (“the reasonable royalty Qualcomm could charge OEMs if it licensed the patents separately from its chipsets is $2”), and that Qualcomm deters entry in the chip market by charging $10 in royalties per chip sold (“raising its patent royalty to $10 and cutting the chip price to $10”).

But these statements cannot both be true. Qualcomm either can or it cannot charge more than $2 in royalties per chip. 

There is, however, one important exception (discussed below): parties can mutually agree to depart from FRAND pricing. But let us momentarily ignore this limitation, and discuss two baseline scenarios: One where Qualcomm can evade its FRAND pledges and one where it cannot. Comparing these two settings reveals that Qualcomm cannot magically increase its profits by shifting revenue from chips to IP.

For a start, if Qualcomm cannot raise the price of its IP beyond the hypothetical FRAND benchmark ($2, in the amici’s hypo), then it cannot use its standard essential technology to compensate for foregone revenue in the chipset market. Any supracompetitive profits that it earns must thus result from its competitive position in the chipset market.

Conversely, if it can raise its IP revenue above the $2 benchmark, then it does not require a strong chipset position to earn supracompetitive profits. 

It is worth unpacking this second point. If Qualcomm can indeed evade its FRAND pledges and charge royalties of $10 per chip, then it need not exclude chipset rivals to obtain supracompetitive profits. 

Take the amici’s hypothetical numbers and assume further that Qualcomm has the same cost as its chipset rivals (i.e. $11), and that there are 100 potential buyers with a uniform reserve price of $20 (the reserve price assumed by the amici). 

As the amici point out, Qualcomm can earn the full monopoly profits by charging $10 for IP and $10 for chips. Qualcomm would thus pocket a total of $900 in profits ((10+10-11)*100). What the amici brief fails to acknowledge is that Qualcomm could also earn the exact same profits by staying out of the chipset market. Qualcomm could let its rivals charge $11 per chip (their cost), and demand $9 for its IP. It would thus earn the same $900 of profits (9*100). 

In this hypothetical, the only reason for Qualcomm to enter the chip market is if it is a more efficient chipset producer than its chipset rivals, or if it can out-compete them with better chipsets. For instance, if Qualcomm’s costs are only $10 per chip, Qualcomm could earn a total of $1000 in profits by driving out these rivals ((10+10-10)*100). Or, if it can produce better chips, though at higher cost and price (say, $12 per chip), it could earn the same $1000 in profits ((10+12-12)*100). Both of these situations would benefit purchasers, of course. Conversely, at a higher production cost of $12 per chip, but without any quality improvement, Qualcomm would earn only $800 in profits ((10+10-12)*100) and would thus do better to exit the chipset market.
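For readers who want to check the arithmetic, the following short sketch reproduces the profit comparisons above. It uses only the numbers assumed in this post (100 buyers, a $20 reserve price, a $10 royalty) and is purely illustrative.

```python
# Reproduces the profit comparisons above, using this post's assumed numbers.

BUYERS = 100
RESERVE_PRICE = 20
ROYALTY = 10  # the above-FRAND royalty assumed in the hypothetical

def profit_with_chip_sales(chip_price, chip_cost):
    """Qualcomm's profit when it sells chips itself and licenses its IP."""
    return (ROYALTY + chip_price - chip_cost) * BUYERS

# Same cost as rivals ($11): selling chips adds nothing over licensing alone.
print(profit_with_chip_sales(chip_price=10, chip_cost=11))  # 900
print((RESERVE_PRICE - 11) * BUYERS)                        # 900: license at $9, rivals sell at $11

# More efficient ($10 cost), or better chips sold at $12 with a $12 cost: entering pays.
print(profit_with_chip_sales(chip_price=10, chip_cost=10))  # 1000
print(profit_with_chip_sales(chip_price=12, chip_cost=12))  # 1000

# Less efficient ($12 cost, no quality edge): better to stay out of chips.
print(profit_with_chip_sales(chip_price=10, chip_cost=12))  # 800
```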

Let us recap:

  • If Qualcomm can easily evade its FRAND pledges, then it need not enter the chipset market to earn supracompetitive profits; 
  • If it cannot evade these FRAND obligations, then it will be hard-pressed to leverage its IP bottleneck so as to dominate chipsets. 

The upshot is that Qualcomm would need to benefit from exceptional circumstances in order to improperly leverage its FRAND-encumbered IP and impose anticompetitive harm by excluding its rivals in the chipset market.

The NLNC policy

According to the amici, that exceptional circumstance is the NLNC policy. In their own words:

The competitive harm is a result of the royalty being higher than it would be absent the NLNC policy.

This is best understood by adding an important caveat to our previous hypothetical: The $2 FRAND benchmark of the amici’s hypothetical is only a fallback option that can be obtained via litigation. Parties are thus free to agree upon a higher rate, for instance $10. This could, notably, be the case if Qualcomm offset the IP increase by reducing its chipset price, such that OEMs who purchase both chipsets and IP from Qualcomm were indifferent between contracts with either of the two royalty rates.

At first sight, this caveat may appear to significantly improve the FTC’s case against Qualcomm — it raises the specter of Qualcomm charging predatory prices on its chips and then recouping its losses on IP. But further examination suggests that this is an unlikely scenario.

Though firms may nominally be paying $10 for Qualcomm’s IP and $10 for its chips, there is no escaping the fact that buyers have an outside option in both the IP and chip segments (respectively, litigation to obtain FRAND rates, and buying chips from rivals). As a result, Qualcomm will be unable to charge a total price that is significantly above the price of rivals’ chips, plus the FRAND rate for its IP (and expected litigation costs).

This is where the amici’s hypothetical is most flawed. 

It is one thing to argue that Qualcomm can charge $10 per chipset and $10 per license to firms that purchase all of their chips and IP from it (or, as the amici point out, charge a single price of $20 for the bundle). It is another matter entirely to argue — as the amici do — that Qualcomm can charge $10 for its IP to firms that receive little or no offset in the chip market because they purchase few or no chips from Qualcomm, and who have the option of suing Qualcomm, thus obtaining a license at $2 per chip (if that is, indeed, the maximum FRAND rate). Firms would have to be foolish to ignore this possibility and to acquiesce to contracts at substantially higher rates. 
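A tiny illustrative calculation makes the constraint concrete. The $11 rival chip price and the $2 FRAND benchmark come from the hypothetical above; the per-chip litigation-cost figure is my own assumption, added only to show how the outside option caps Qualcomm’s all-in price.

```python
# Illustrative only: the OEM's outside option caps Qualcomm's total (chip + IP) price.

rival_chip_price = 11         # from the hypothetical above
frand_rate = 2                # the fallback rate obtainable via litigation
litigation_cost_per_chip = 1  # assumed figure, purely for illustration

max_all_in_price = rival_chip_price + frand_rate + litigation_cost_per_chip
print(max_all_in_price)  # 14 -- far below the $20 the amici assume Qualcomm can sustain
```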

Indeed, two of the largest and most powerful OEMs — Apple and Samsung — have entered into such contracts with Qualcomm. Given their ability (and, indeed, willingness) to sue for FRAND violations and to produce their own chips or assist other manufacturers in doing so, it is difficult to conclude that they have assented to supracompetitive terms. (The fact that they would prefer even lower rates, and have supported this and other antitrust suits against Qualcomm, doesn’t change this conclusion; it just means they see antitrust as a tool to reduce their costs. And the fact that Apple settled its own FRAND and antitrust suit against Qualcomm (and paid Qualcomm $4.5 billion and entered into a global licensing agreement with it) after just one day of trial further supports this conclusion).

Double counting

The amici attempt to overcome this weakness by implicitly framing their argument in terms of exclusivity, strategic entry deterrence, and tying:

An OEM cannot respond to Qualcomm’s NLNC policy by purchasing chipsets only from a rival chipset manufacturer and obtaining a license at the reasonable royalty level (i.e., $2 in the example). As the district court found, OEMs needed to procure at least some 3G CDMA and 4G LTE chipsets from Qualcomm.

* * *

The surcharge burdens rivals, leads to anticompetitive effects in the chipset markets, deters entry, and impedes follow-on innovation. 

* * *

As an economic matter, Qualcomm’s NLNC policy is analogous to the use of a tying arrangement to maintain monopoly power in the market for the tying product (here, chipsets).

But none of these arguments totally overcomes the flaw in their reasoning. Indeed, as Aldous Huxley once pointed out, “several excuses are always less convincing than one”.

For a start, the amici argue that Qualcomm uses its strong chipset position to force buyers into accepting its supracompetitive IP rates, even in those instances where they purchase chipsets from rivals. 

In making this point, the amici fall prey to the “double counting fallacy” that Robert Bork famously warned about in The Antitrust Paradox: Monopolists cannot simultaneously charge a monopoly price AND purchase exclusivity (or other contractual restrictions) from their buyers/suppliers.

The amici fail to recognize the important sacrifices that Qualcomm would have to make in order for the above strategy to be viable. In simple terms, Qualcomm would have to offset every dollar it charges above the FRAND benchmark in the IP segment with an equivalent price reduction in the chipset segment.

This has important ramifications for the FTC’s case.

Qualcomm would have to charge lower — not higher — IP fees to OEMs who purchased a large share of their chips from third party chipmakers. Otherwise, there would be no carrot to offset its greater-than-FRAND license fees, and these OEMs would have significant incentives to sue (especially in a post-eBay world, where the threat of an injunction is reduced even if they happen to lose). 

And yet, this is the exact opposite of what the FTC alleged:

Qualcomm sometimes expressly charged higher royalties on phones that used rivals’ chips. And even when it did not, its provision of incentive funds to offset its license fees when OEMs bought its chips effectively resulted in a discriminatory surcharge. (emphasis added)

The infeasibility of alternative explanations

One theoretical workaround would be for Qualcomm to purchase exclusivity from its OEMs, in an attempt to foreclose chipset rivals. 

Once again, Bork’s double counting argument suggests that this would be particularly onerous. By accepting exclusivity-type requirements, OEMs would not only be reducing potential competition in the chipset market, they would also be contributing to an outcome where Qualcomm could evade its FRAND pledges in the IP segment of the market. This is particularly true for pivotal OEMs (such as Apple and Samsung), who may single-handedly affect the market’s long-term trajectory. 

The amici completely overlook this possibility, while the FTC argues that this may explain the rebates that Qualcomm gave to Apple. 

But even if the rebates Qualcomm gave Apple amounted to de facto exclusivity, there are still important objections. Authorities would notably need to prove that Qualcomm could recoup its initial losses (i.e. that the rebate maximized Qualcomm’s long-term profits). If this were not the case, then the rebates may simply be due to either efficiency considerations or Apple’s significant bargaining power (Apple is routinely cited as a potential source of patent holdout; see, e.g., here and here). 

Another alternative would be for Qualcomm to evict its chipset rivals through strategic entry deterrence or limit pricing (see here and here, respectively). But while the economic literature suggests that incumbents may indeed forgo short-term profits in order to deter rivals from entering the market, these theories generally rest on assumptions of imperfect information and/or strategic commitments. Neither of these factors was alleged in the case at hand.

In particular, there is no sense that Qualcomm’s purported decision to shift royalties from chips to IP somehow harms its short-term profits, or that it is merely a strategic device used to deter the entry of rivals. As the amici themselves seem to acknowledge, the pricing structure maximizes Qualcomm’s short-term revenue (even ignoring potential efficiency considerations). 

Note that this is not just a matter of economic policy. The case law relating to unilateral conduct infringements — be it Brooke Group, Alcoa, or Aspen Skiing — almost invariably requires some form of profit sacrifice on the part of the monopolist. (For a legal analysis of this issue in the Qualcomm case, see ICLE’s Amicus brief, and yesterday’s blog post on the topic).

The amici are thus left with the argument that Qualcomm could structure its prices differently, so as to maximize the profits of its rivals. Why it would choose to do so, or should indeed be forced to, is a whole other matter.

Finally, the amici refer to the strategic tying literature (here), typically associated with the Microsoft case and the so-called “platform threat”. But this analogy is highly problematic. 

Unlike Microsoft and its Internet Explorer browser, Qualcomm’s IP is de facto — and necessarily — tied to the chips that practice its technology. This is not a bug, it is a feature of the patent system. Qualcomm is entitled to royalties, whether it manufactures chips itself or leaves that task to rival manufacturers. In other words, there is no counterfactual world where OEMs could obtain Qualcomm-based chips without entering into some form of license agreement (whether directly or indirectly) with Qualcomm. The fact that OEMs must acquire a license that covers Qualcomm’s IP — even when they purchase chips from rivals — is part and parcel of the IP system.

In any case, there is little reason to believe that Qualcomm’s decision to license its IP at the OEM level is somehow exclusionary. The gist of the strategic tying literature is that incumbents may use their market power in a primary market to thwart entry in the market for a complementary good (and ultimately prevent rivals from using their newfound position in the complementary market in order to overthrow the incumbent in the primary market; Carlton & Waldman, 2002). But this is not the case here.

Qualcomm does not appear to be using what little power it might have in the IP segment in order to dominate its rivals in the chip market. As has already been explained above, doing so would imply some profit sacrifice in the IP segment in order to encourage OEMs to accept its IP/chipset bundle, rather than rivals’ offerings. This is the exact opposite of what the FTC and amici allege in the case at hand. The facts thus cut against a conjecture of strategic tying.

Conclusion

So where does this leave the amici and their brief? 

Absent further evidence, their conclusion that Qualcomm injured competition is untenable. There is no evidence that Qualcomm’s pricing structure — enacted through the NLNC policy — significantly harmed competition to the detriment of consumers. 

When all is said and done, the amici’s brief ultimately amounts to an assertion that Qualcomm should be made to license its intellectual property at a rate that — in their estimation — is closer to the FRAND benchmark. That judgment is a matter of contract law, not antitrust.

On November 22, the FTC filed its answering brief in the FTC v. Qualcomm litigation. As we’ve noted before, it has always seemed a little odd that the current FTC is so vigorously pursuing this case, given some of the precedents it might set and the Commission majority’s apparent views on such issues. But this may also help explain why the FTC has now opted to eschew the district court’s decision and pursue a novel, but ultimately baseless, legal theory in its brief.

The FTC’s decision to abandon the district court’s reasoning constitutes an important admission: contrary to the district court’s finding, there is no legal basis to find an antitrust duty to deal in this case. As Qualcomm stated in its reply brief (p. 12), “the FTC disclaims huge portions of the decision.” In its effort to try to salvage its case, however, the FTC reveals just how bad its arguments have been from the start, and why the case should be tossed out on its ear.

What the FTC now argues

The FTC’s new theory is that SEP holders that fail to honor their FRAND licensing commitments should be held liable under “traditional Section 2 standards,” even though they do not have an antitrust duty to deal with rivals who are members of the same standard-setting organizations (SSOs) under the “heightened” standard laid out by the Supreme Court in Aspen and Trinko:  

To be clear, the FTC does not contend that any breach of a FRAND commitment is a Sherman Act violation. But Section 2 liability is appropriate when, as here, a monopolist SEP holder commits to license its rivals on FRAND terms, and then implements a blanket policy of refusing to license those rivals on any terms, with the effect of substantially contributing to the acquisition or maintenance of monopoly power in the relevant market…. 

The FTC does not argue that Qualcomm had a duty to deal with its rivals under the Aspen/Trinko standard. But that heightened standard does not apply here, because—unlike the defendants in Aspen, Trinko, and the other duty-to-deal precedents on which it relies—Qualcomm entered into a voluntary contractual commitment to deal with its rivals as part of the SSO process, which is itself a derogation from normal market competition. And although the district court applied a different approach, this Court “may affirm on any ground finding support in the record.” Cigna Prop. & Cas. Ins. Co. v. Polaris Pictures Corp., 159 F.3d 412, 418-19 (9th Cir. 1998) (internal quotation marks omitted) (emphasis added) (pp.69-70).

In other words, according to the FTC, because Qualcomm engaged in the SSO process—which is itself “a derogation from normal market competition”—its evasion of the constraints of that process (i.e., the obligation to deal with all comers on FRAND terms) is “anticompetitive under traditional Section 2 standards.”

The most significant problem with this new standard is not that it deviates from the basis upon which the district court found Qualcomm liable; it’s that it is entirely made up and has no basis in law.

Absent an antitrust duty to deal, patent law grants patentees the right to exclude rivals from using patented technology

Part of the bundle of rights connected with the property right in patents is the right to exclude, and along with it, the right of a patent holder to decide whether, and on what terms, to sell licenses to rivals. The law curbs that right only in select circumstances. Under antitrust law, such a duty to deal, in the words of the Supreme Court in Trinko, “is at or near the outer boundary of §2 liability.” The district court’s ruling, however, is based on the presumption of harm arising from an SEP holder’s refusal to license, rather than an actual finding of anticompetitive effect under §2. The duty to deal it finds imposes upon patent holders an antitrust obligation to license their patents to competitors. (While, of course, participation in an SSO may contractually obligate an SEP holder to license its patents to competitors, that is an entirely different issue than whether it operates under a mandatory requirement to do so as a matter of public policy).  

The right of patentees to exclude is well-established, and injunctions enforcing that right are regularly issued by courts. Although the rate of permanent injunctions has decreased since the Supreme Court’s eBay decision, research has found that federal district courts still grant them over 70% of the time after a patent holder prevails on the merits. And for patent litigation involving competitors, the same research finds that injunctions are granted 85% of the time.  In principle, even SEP holders can receive injunctions when infringers do not act in good faith in FRAND negotiations. See Microsoft Corp. v. Motorola, Inc., 795 F.3d 1024, 1049 n.19 (9th Cir. 2015):

We agree with the Federal Circuit that a RAND commitment does not always preclude an injunctive action to enforce the SEP. For example, if an infringer refused to accept an offer on RAND terms, seeking injunctive relief could be consistent with the RAND agreement, even where the commitment limits recourse to litigation. See Apple Inc., 757 F.3d at 1331–32

Aside from the FTC, federal agencies largely agree with this approach to the protection of intellectual property. For instance, the Department of Justice, the US Patent and Trademark Office, and the National Institute of Standards and Technology recently released their 2019 Joint Policy Statement on Remedies for Standards-Essential Patents Subject to Voluntary F/RAND Commitments, which clarifies that:

All remedies available under national law, including injunctive relief and adequate damages, should be available for infringement of standards-essential patents subject to a F/RAND commitment, if the facts of a given case warrant them. Consistent with the prevailing law and depending on the facts and forum, the remedies that may apply in a given patent case include injunctive relief, reasonable royalties, lost profits, enhanced damages for willful infringement, and exclusion orders issued by the U.S. International Trade Commission. These remedies are equally available in patent litigation involving standards-essential patents. While the existence of F/RAND or similar commitments, and conduct of the parties, are relevant and may inform the determination of appropriate remedies, the general framework for deciding these issues remains the same as in other patent cases. (emphasis added).

By broadening the antitrust duty to deal well beyond the bounds set by the Supreme Court, the district court opinion (and the FTC’s preferred approach, as well) eviscerates the right to exclude inherent in patent rights. In the words of retired Federal Circuit Judge Paul Michel in an amicus brief in the case: 

finding antitrust liability premised on the exercise of valid patent rights will fundamentally abrogate the patent system and its critical means for promoting and protecting important innovation.

And as we’ve noted elsewhere, this approach would seriously threaten consumer welfare:

Of course Qualcomm conditions the purchase of its chips on the licensing of its intellectual property; how could it be any other way? The alternative would require Qualcomm to actually facilitate the violation of its property rights by forcing it to sell its chips to device makers even if they refuse its patent license terms. In that world, what device maker would ever agree to pay more than a pittance for a patent license? The likely outcome is that Qualcomm charges more for its chips to compensate (or simply stops making them). Great, the FTC says; then competitors can fill the gap and — voila: the market is more competitive, prices will actually fall, and consumers will reap the benefits.

Except it doesn’t work that way. As many economists, including both the current [now former] and a prominent former chief economist of the FTC, have demonstrated, forcing royalty rates lower in such situations is at least as likely to harm competition as to benefit it. There is no sound theoretical or empirical basis for concluding that using antitrust to move royalty rates closer to some theoretical ideal will actually increase consumer welfare. All it does for certain is undermine patent holders’ property rights, virtually ensuring there will be less innovation.

The FTC realizes the district court doesn’t have the evidence to support its duty to deal analysis

Antitrust law does not abrogate the right of a patent holder to exclude and to choose when and how to deal with rivals, unless there is a proper finding of a duty to deal. In order to find a duty to deal, there must be a harm to competition, not just a competitor, which, under the Supreme Court’s Aspen and Trinko cases, can be inferred in the duty-to-deal context only where the challenged conduct leads to a “profit sacrifice.” But the record does not support such a finding. As we wrote in our amicus brief:

[T]he Supreme Court has identified only a single scenario from which it may plausibly be inferred that defendant’s refusal to deal with rivals harms consumers: The existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for defendant. 

A monopolist’s willingness to forego (short-term) profits plausibly permits an inference that conduct is not procompetitive, because harm to a rival caused by an increase in efficiency should lead to higher—not lower—profits for defendant. And “[i]f a firm has been ‘attempting to exclude rivals on some basis other than efficiency,’ it’s fair to characterize its behavior as predatory.” Aspen Skiing, 472 U.S. at 605 (quoting Robert Bork, The Antitrust Paradox 138 (1978)).

In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.” Slip op. at 137. 

But it is not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. See Trinko, 540 U.S. at 409 (“a willingness to forsake short-term profits”); Aspen Skiing, 472 U.S. at 610–11 (“it was willing to sacrifice short-run benefits”)…

The record here uniformly indicates Qualcomm expected to maximize its royalties by dealing with OEMs rather than rival chip makers; it neither anticipated nor endured short-term loss. As the district court itself concluded, Qualcomm’s licensing practices avoided patent exhaustion and earned it “humongously more lucrative” royalties. Slip op. at 1243–254. That Qualcomm anticipated greater profits from its conduct precludes an inference of anticompetitive harm.

Moreover, Qualcomm didn’t refuse to allow rivals to use its patents; it simply didn’t sell them explicit licenses to do so. As discussed in several places by the district court:

According to Andrew Hong (Legal Counsel at Samsung Intellectual Property Center), during license negotiations, Qualcomm made it clear to Samsung that “Qualcomm’s standard business practice was not to provide licenses to chip manufacturers.” Hong Depo. 161:16-19. Instead, Qualcomm had an “unwritten policy of not going after chip manufacturers.” Id. at 161:24-25… (p.123)

* * *

Alex Rogers (QTL President) testified at trial that as part of the 2018 Settlement Agreement between Samsung and Qualcomm, Qualcomm did not license Samsung, but instead promised only that Qualcomm would offer Samsung a FRAND license before suing Samsung: “Qualcomm gave Samsung an assurance that should Qualcomm ever seek to assert its cellular SEPs against that component business, against those components, we would first make Samsung an offer on fair, reasonable, and non-discriminatory terms.” Tr. at 1989:5-10. (p.124)

This is an important distinction. Qualcomm allows rivals to use its patented technology by not asserting its patent rights against them—which is to say: instead of licensing its technology for a fee, Qualcomm allows rivals to use its technology to develop their own chips royalty-free (and recoups its investment by licensing the technology to OEMs that choose to implement the technology in their devices). 

The irony of this analysis, of course, is that the district court effectively suggests that Qualcomm must charge rivals a positive, explicit price in exchange for a license in order to facilitate competition, while allowing rivals to use its patented technology for free (or at the “cost” of some small reduction in legal certainty, perhaps) is anticompetitive.

Nonetheless, the district court’s factual finding that Qualcomm’s licensing scheme was “humongously” profitable shows there was no profit sacrifice as required for a duty to deal finding. The general presumption that patent holders can exclude rivals is not subject to an antitrust duty to deal where there is no profit sacrifice by the patent holder. Here, however, Qualcomm did not sacrifice profits by adopting the challenged licensing scheme. 

It is perhaps unsurprising that the FTC chose not to support the district court’s duty-to-deal argument, even though its holding was in the FTC’s favor. But, while the FTC was correct not to countenance the district court’s flawed arguments, the FTC’s alternative argument in its reply brief is even worse.

The FTC’s novel theory of harm is unsupported and weak

As noted, the FTC’s alternative theory is that Qualcomm violated Section 2 simply by failing to live up to its contractual SSO obligations. For the FTC, because Qualcomm joined an SSO, it is no longer in a position lawfully to refuse to deal. Moreover, there is no need to engage in an Aspen/Trinko analysis in order to find liability. Instead, according to the FTC’s brief, liability arises because the evasion of an exogenous pricing constraint (such as an SSO’s FRAND obligation) constitutes an antitrust harm:

Of course, a breach of contract, “standing alone,” does not “give rise to antitrust liability.” City of Vernon v. S. Cal. Edison Co., 955 F.2d 1361, 1368 (9th Cir. 1992); cf. Br. 52 n.6. Instead, a monopolist’s conduct that breaches such a contractual commitment is anticompetitive only when it satisfies traditional Section 2 standards—that is, only when it “tends to impair the opportunities of rivals and either does not further competition on the merits or does so in an unnecessarily restrictive way.” Cascade Health, 515 F.3d at 894. The district court’s factual findings demonstrate that Qualcomm’s breach of its SSO commitments satisfies both elements of that traditional test. (emphasis added)

To begin, it must be noted that the operative language quoted by the FTC from Cascade Health is attributed in Cascade Health to Aspen Skiing. In other words, even Cascade Health recognizes that Aspen Skiing represents the Supreme Court’s interpretation of that language in the duty-to-deal context. And in that case—in contrast to the FTC’s argument in its brief—the Court required demonstration of such a standard to mean that a defendant “was not motivated by efficiency concerns and that it was willing to sacrifice short-run benefits and consumer goodwill in exchange for a perceived long-run impact on its… rival.” (Aspen Skiing at 610-11) (emphasis added).

The language quoted by the FTC cannot simultaneously justify an appeal to an entirely different legal standard separate from that laid out in Aspen Skiing. As such, rather than dispensing with the duty to deal requirements laid out in that case, Cascade Health actually reinforces them.

Second, to support its argument the FTC points to Broadcom v. Qualcomm, 501 F.3d 297 (3rd Cir. 2007) as an example of a court upholding an antitrust claim based on a defendant’s violation of FRAND terms. 

In Broadcom, relying on the FTC’s enforcement action against Rambus before it was overturned by the D.C. Circuit, the Third Circuit found that there was an actionable issue when Qualcomm deceived other members of an SSO by promising to

include its proprietary technology in the… standard by falsely agreeing to abide by the [FRAND policies], but then breached those agreements by licensing its technology on non-FRAND terms. The intentional acquisition of monopoly power through deception… violates antitrust law. (emphasis added)

Even assuming Broadcom were good law post-Rambus, the case is inapposite. In Broadcom the court found that Qualcomm could be held to violate antitrust law by deceiving the SSO (by falsely promising to abide by FRAND terms) in order to induce it to accept Qualcomm’s patent in the standard. The court’s concern was that, by falsely inducing the SSO to adopt its technology, Qualcomm deceptively acquired monopoly power and limited access to competing technology:

When a patented technology is incorporated in a standard, adoption of the standard eliminates alternatives to the patented technology…. Firms may become locked in to a standard requiring the use of a competitor’s patented technology. 

Key to the court’s finding was that the alleged deception induced the SSO to adopt the technology in its standard:

We hold that (1) in a consensus-oriented private standard-setting environment, (2) a patent holder’s intentionally false promise to license essential proprietary technology on FRAND terms, (3) coupled with an SDO’s reliance on that promise when including the technology in a standard, and (4) the patent holder’s subsequent breach of that promise, is actionable conduct. (emphasis added)

Here, the claim is different. There is no allegation that Qualcomm engaged in deceptive conduct that affected the incorporation of its technology into the relevant standard. Indeed, there is no allegation that Qualcomm’s alleged monopoly power arises from its challenged practices; only that it abused its lawful monopoly power to extract supracompetitive prices. Even if an SEP holder may be found liable for falsely promising not to evade a commitment to deal with rivals in order to acquire monopoly power from its inclusion in a technological standard under Broadcom, that does not mean that it can be held liable for evading a commitment to deal with rivals unrelated to its inclusion in a standard, nor that such a refusal to deal should be evaluated under any standard other than that laid out in Aspen Skiing.

Moreover, the FTC nowhere mentions the DC Circuit’s subsequent Rambus decision overturning the FTC and calling the holding in Broadcom into question, nor does it discuss the Supreme Court’s NYNEX decision in any depth. Yet these cases stand clearly for the opposite proposition: a court cannot infer competitive harm from a company’s evasion of a FRAND pricing constraint. As we wrote in our amicus brief:

In Rambus Inc. v. FTC, 522 F.3d 456 (D.C. Cir. 2008), the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.” Id. at 466 (citation omitted). NYNEX and Rambus reinforce the Court’s repeated holding that an inference is permissible only where it points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” do not permit a court to undermine “[t]he freedom to switch suppliers [which] lies close to the heart of the competitive process that the antitrust laws seek to encourage. . . . Thus, this Court has refused to apply per se reasoning in cases involving that kind of activity.” NYNEX, 525 U.S. at 137 (citations omitted).

Essentially, the FTC’s brief alleges that Qualcomm’s conduct amounts to an evasion of the constraint imposed by FRAND terms—without which the SSO process itself is presumptively anticompetitive. Indeed, according to the FTC, it is only the FRAND obligation that saves the SSO agreement from being inherently anticompetitive. 

In fact, when a firm has made FRAND commitments to an SSO, requiring the firm to comply with its commitments mitigates the risk that the collaborative standard-setting process will harm competition. Product standards—implicit “agreement[s] not to manufacture, distribute, or purchase certain types of products”—“have a serious potential for anticompetitive harm.” Allied Tube, 486 U.S. at 500 (citation and footnote omitted). Accordingly, private SSOs “have traditionally been objects of antitrust scrutiny,” and the antitrust laws tolerate private standard-setting “only on the understanding that it will be conducted in a nonpartisan manner offering procompetitive benefits,” and in the presence of “meaningful safeguards” that prevent the standard-setting process from falling prey to “members with economic interests in stifling product competition.” Id. at 500-01, 506-07; see Broadcom, 501 F.3d at 310, 314-15 (collecting cases). 

FRAND commitments are among the “meaningful safeguards” that SSOs have adopted to mitigate this serious risk to competition…. 

Courts have therefore recognized that conduct that breaches or otherwise “side-steps” these safeguards is appropriately subject to conventional Sherman Act scrutiny, not the heightened Aspen/Trinko standard… (p.83-84)

In defense of the proposition that courts apply “traditional antitrust standards to breaches of voluntary commitments made to mitigate antitrust concerns,” the FTC’s brief cites not only Broadcom, but also two other cases:

While this Court has long afforded firms latitude to “deal or refuse to deal with whomever [they] please[] without fear of violating the antitrust laws,” FountWip, Inc. v. Reddi-Wip, Inc., 568 F.2d 1296, 1300 (9th Cir. 1978) (citing Colgate, 250 U.S. at 307), it, too, has applied traditional antitrust standards to breaches of voluntary commitments made to mitigate antitrust concerns. In Mount Hood Stages, Inc. v. Greyhound Corp., 555 F.2d 687 (9th Cir. 1977), this Court upheld a judgment holding that Greyhound violated Section 2 by refusing to interchange bus traffic with a competing bus line after voluntarily committing to do so in order to secure antitrust approval from the Interstate Commerce Commission for proposed acquisitions. Id. at 697; see also, e.g., Biovail Corp. Int’l v. Hoechst Aktiengesellschaft, 49 F. Supp. 2d 750, 759 (D.N.J. 1999) (breach of commitment to deal in violation of FTC merger consent decree exclusionary under Section 2). (p.85-86)

The cases the FTC cites to justify the proposition all deal with companies sidestepping obligations in order to falsely acquire monopoly power. The two cases cited above both involve companies making promises to government agencies to win merger approval and then failing to follow through. And, as noted, Broadcom deals with the acquisition of monopoly power by making false promises to an SSO to induce the choice of proprietary technology in a standard. While such conduct in the acquisition of monopoly power may be actionable under Broadcom (though this is highly dubious post-Rambus), none of these cases supports the FTC’s claim that an SEP holder violates antitrust law any time it evades an SSO obligation to license its technology to rivals. 

Conclusion

Put simply, the district court’s opinion in FTC v. Qualcomm runs headlong into the Supreme Court’s Aspen decision and founders there. This is why the FTC is trying to avoid analyzing the case under Aspen and subsequent duty-to-deal jurisprudence (including Trinko, the 9th Circuit’s MetroNet decision, and the 10th Circuit’s Novell decision): because it knows that if the appellate court applies those standards, the district court’s duty-to-deal analysis will fail. The FTC’s basis for applying a different standard is unsupportable, however. And even if its logic for applying a different standard were valid, the FTC’s proffered alternative theory is groundless in light of Rambus and NYNEX. The Ninth Circuit should vacate the district court’s finding of liability. 

[TOTM: The following is the seventh in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Cento Veljanovski, Managing Partner, Case Associates and IEA Fellow in Law and Economics, Institute of Economic Affairs.

The concept of a “good” or “efficient” cartel is generally regarded by competition authorities as an oxymoron. A cartel is seen as the worst type of antitrust violation and one that warrants zero tolerance. Agreements between competitors to raise prices and share the market are assumed unambiguously to reduce economic welfare. On this view, even if these agreements are ineffective, the law should come down hard on attempts to rig prices. In this post, I argue that this view goes too far and that even ‘hard core’ cartels that lower output and increase prices can be efficient and pro-competitive. I discuss three examples of settings in which hard core cartels may be efficient.

Resuscitating the efficient cartel

Basic economic theory tells us that coordination can be efficient in many instances, and this is accepted in law, e.g. joint ventures and agreements on industry standards. But where competitors agree on prices and the volume of sales – so-called “hard core” cartels – there is intolerance.

Nonetheless, there is a recognition that cartel-like arrangements can promote efficiency. For example, Article 101(3)TFEU exempts anticompetitive agreements or practices whose economic and/or technical benefits outweigh their restrictions on competition, provided a fair share of those benefits is passed on to consumers. However, this so-called ‘efficiency defence’ is highly unlikely to be accepted for hard core cartels, nor are wider economic or non-economic considerations likely to be taken into account. But as will be shown, there are classes of hard core cartels and restrictive agreements which, while they reduce output, raise prices and foreclose entry, are nonetheless efficient and not anticompetitive.

Destructive competition and the empty core

The claim that cartels have beneficial effects precedes US antitrust law. Trusts were justified as necessary to prevent ‘ruinous’ or ‘destructive’ competition in industries with high fixed costs subject to frequent ‘price wars’. This was the unsuccessful defence in Trans-Missouri (166 U.S. 290 (1897)), where 18 US railroad companies formed a trust to set their rates, arguing that absent their agreement there would be ruinous competition, eventual monopoly and even higher prices. Since then, industries such as steel, cement, paper, shipping and airlines have at various times claimed that competition was unsustainable and wasteful.

These seem patently self-serving claims. But the idea that some industries are unstable without a competitive equilibrium has long been appreciated by economists. Nearly a century after Trans-Missouri, economist Lester Telser (1996) refreshed the idea that cooperative arrangements among firms in some industries were not attempts to impose monopoly prices but a response to their inherent structural inefficiency. This was based on the concept of an ‘empty core’. While Telser’s article uses some hideously dense mathematical game theory, the idea is simple to state. A market is said to have a ‘core’ if there is a set of transactions between buyers and sellers such that there are no other transactions that could make some of the buyers or sellers better off. Such a core will survive in a competitive market if all firms can make zero economic profits. In a market with an empty core no coalition of firms will be able to earn zero profits; some firms will be able to earn a surplus and thereby attract entry, but because the core is empty the new entry will inflict losses on all firms. When firms exit due to their losses, the remaining firms again earn economic profits, and attract entry. There are no competitive long-run stable equilibria for these industries.
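The entry-and-exit cycle described in that last sentence can be illustrated with a deliberately crude toy model. The numbers below (capacity, fixed cost, demand, reservation price) are my own assumptions chosen purely for illustration; this is not Telser’s formal core analysis, just a sketch of the instability the text describes.

```python
# Toy illustration (assumed numbers, not Telser's model): with lumpy capacity and
# high fixed costs, profits attract entry, entry drives price to incremental cost,
# every firm then loses money and exits, and the cycle repeats -- no stable outcome.

FIXED_COST = 10         # per-period cost of keeping capacity in the market
CAPACITY = 2            # each firm's fixed production capacity
DEMAND = 3              # units demanded at any price up to the reservation price
RESERVATION_PRICE = 10
INCREMENTAL_COST = 0

def per_firm_profit(n_firms):
    """Scarce capacity lets firms charge the reservation price; excess capacity
    pushes price down to incremental cost."""
    price = RESERVATION_PRICE if n_firms * CAPACITY <= DEMAND else INCREMENTAL_COST
    sales_per_firm = min(CAPACITY, DEMAND / n_firms)
    return (price - INCREMENTAL_COST) * sales_per_firm - FIXED_COST

n = 1
for period in range(6):
    profit = per_firm_profit(n)
    print(f"period {period}: {n} firm(s), per-firm profit {profit:+.1f}")
    n = n + 1 if profit > 0 else max(1, n - 1)  # profits attract entry; losses force exit
```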

The literature suggests that an industry is likely to have an empty core where: (1) firms have fixed production capacities; (2) those capacities are large relative to demand; (3) there are scale economies in production; (4) incremental costs are low; (5) demand is uncertain and fluctuates markedly; and (6) output cannot be stored cheaply. Industries which have frequently been cartelised share many of these features (see above).

In the 1980s several academic studies applied empty core theory to antitrust. Bittlingmayer (1982) claimed that the US iron pipe industry had an empty core and that the famous Addyston Pipe case was thus wrongly decided, and responsible for mergers in the industry.

Sjostrom (1989) and others have argued that liner conferences were not attempts to overcharge shippers but to counteract an empty core that led to volatile market shares and freight rates due to excess capacity and fixed schedules. This type of analysis formed the basis for their exemption from competition laws. Since the nineteenth century, liner conferences had been permitted to fix prices and regulate capacity on routes between Europe, North America and the Far East. The EU block exemption (Council Regulation 4056/86) allowed them to set common freight rates, to take joint decisions on the limitation of supply and to coordinate timetables. However, the justifications for these exemptions have worn thin. As of October 2008, these EU exemptions were removed, based on scepticism that liner shipping is an empty core industry, particularly because, with the rise of modern leasing and chartering techniques to manage capacity, the addition of shipping capacity is no longer a lumpy process. 

While the empty core argument may have merit, it is highly unlikely to persuade European competition authorities, and the experience with legal cartels that have been allowed in order to rationalise production and costs has not been good.

Where there are environmental problems

Cartels in industries with significant environmental problems – which produce economic ‘bads’ rather than goods – can have beneficial effects. Restricting the output of an economic bad is good. Take an extreme example. When most people hear the word cartel, they think of a Colombian drugs cartel. A drugs cartel reduces drug trafficking to keep its profits high. Competition in supply would lead to an over-supply of cheaper drugs, so a cartel that charges higher prices and supplies less is superior to the competitive outcome.

The same logic applies also to industries in which bads, such as pollution, are a by-product of otherwise legitimate and productive activities.  An industry which generates pollution does not take the full costs of its activities into account, and hence output is over-expanded and prices too low. Economic efficiency requires a reduction in the harmful activities and the associated output.  It also requires the product’s price to increase to incorporate the pollution costs. A cartel that raises prices can move such an industry’s output and harm closer to the efficient level, although this would not be in response to higher pollution-inclusive costs – which makes this a second-best solution.
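A toy numerical example can make this second-best point concrete. All of the numbers below (the demand curve, the private cost, the per-unit pollution cost) are assumptions invented for illustration; the point is only that restricting output can move the market closer to the efficient quantity when firms ignore an external cost.

```python
# Toy example (assumed numbers) of the second-best argument above: with a per-unit
# pollution cost that competitive firms ignore, a cartel that restricts output can
# end up closer to the efficient quantity than competition does.

PRIVATE_MC = 4       # marginal cost the firms actually bear
POLLUTION_COST = 6   # external harm per unit, ignored by competitive firms
SOCIAL_MC = PRIVATE_MC + POLLUTION_COST

def inverse_demand(q):
    return 20 - q    # willingness to pay for the q-th unit

def welfare(q):
    """Total surplus: consumption value minus full social cost, summed unit by unit."""
    return sum(inverse_demand(x) - SOCIAL_MC for x in range(q))

q_competitive = 16   # price driven down to private MC: 20 - q = 4
q_efficient = 10     # price equal to social MC:        20 - q = 10
q_cartel = 8         # cartel sets MR = private MC:     20 - 2q = 4

for label, q in [("competitive", q_competitive), ("efficient", q_efficient), ("cartel", q_cartel)]:
    print(f"{label:>11}: output {q:2d}, welfare {welfare(q)}")
# The cartel's output (8) is much closer to the efficient level (10) than the
# competitive outcome (16), even though the cartel is not pricing the pollution itself.
```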

There has been a fleeting recognition that competition in the presence of external costs is not efficient and that restricting output does not necessarily distort competition. In 1999, the European Commission almost uniquely exempted a cartel-like restrictive agreement among producers and importers of washing machines under Article 101(3)TFEU (Case IV.F.1/36.718 – CECED). The agreement not to produce or import the least energy-efficient washing machines, which represented 10-11% of then-EC sales, would on its face adversely affect competition and increase prices, since the most polluting machines were also the least expensive ones; the Commission nonetheless exempted it on the basis that the energy-saving and environmental benefits outweighed those restrictions.

The Commission has since rowed back from its broad application of Article 101(3)TFEU in CECED. In its 2001 Guidelines on Horizontal Agreements, it devoted a chapter to environmental agreements; that chapter was removed from the revised 2011 Guidelines (para 329), which instead treated CECED as a standardisation agreement.

Common property industries

A more clear-cut case of an efficient cartel is where firms compete over a common property resource for which property rights are ill-defined or absent, such as is often the case for fisheries. In these industries, competition leads to excessive entry, over-exploitation, and the dissipation of the economic returns (rents). A cartel – a ‘club’ of fishermen – having sole control of the fishing grounds would unambiguously increase efficiency even though it increased prices, reduced production and foreclosed entry. 
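The rent-dissipation logic can be shown with a stripped-down open-access model in the Gordon-Schaefer spirit. The catch function and cost figure below are invented for illustration; the qualitative result (free entry drives rents to zero, while a sole controller restricts effort and earns a positive rent) is the standard one.

```python
# Toy open-access fishery (made-up numbers): free entry dissipates the rent,
# while a 'club' with sole control fishes less and earns a positive rent.

PRICE = 1           # value of one unit of catch
COST_PER_BOAT = 6   # cost of one unit of fishing effort (one boat)

def catch(effort):
    return 10 * effort - effort ** 2   # crowding makes each extra boat add less

def rent(effort):
    return PRICE * catch(effort) - COST_PER_BOAT * effort

# Open access: boats keep entering as long as the average boat at least breaks even.
open_access_effort = max(e for e in range(1, 10) if rent(e) >= 0)
# Sole control: the 'club' picks the effort level that maximises the total rent.
club_effort = max(range(1, 10), key=rent)

print(open_access_effort, catch(open_access_effort), rent(open_access_effort))  # 4 boats, catch 24, rent 0
print(club_effort, catch(club_effort), rent(club_effort))                       # 2 boats, catch 16, rent 4
```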

The benefits of such cartels have not been accepted in law by competition authorities. The Dutch competition authority’s (NMa Case No. 2269/330) and the European Commission’s (Case COMP/39633 Shrimps) shrimps decisions in 2013-14 imposed fines on Dutch shrimp fleet and wholesalers’ organisations for agreeing quotas and prices. One study showed that the Dutch agreement reduced the fishing catch by at least 12%-16% during the cartel period and increased wholesale prices. However, this output reduction and increase in prices was not welfare-reducing if the competitive outcome resulted in over-fishing. This and subsequent cases have resulted in a vigorous policy debate in the Netherlands over the use of Article 101(3)TFEU to take the wider benefits into account (ACM Position Paper 2014). 

Sustainability and Article 101(3)

There is a growing debate over the conflict between antitrust and other policy objectives, such as sustainability and industrial policy. One strand of this debate focuses on expanding the efficiency defence under Article 101(3)TFEU. As currently framed, it has not allowed reductions in pollution costs or resource over-exploitation to justify exempting restrictive agreements that distort competition, even though those agreements may be efficient. In the pollution case, the benefits are generalised ones to third parties, not consumers, and are difficult to quantify. In the fisheries case, the short-term welfare of existing consumers is unambiguously reduced as they pay higher prices for less fish; the benefits are long term (a more sustainable fish stock which can continue to be consumed) and may not be realized at all by current consumers but rather will accrue to future generations.

To accommodate sustainability concerns and other efficiency factors, Article 101(3)TFEU would have to be expanded into a public interest defence based on a wider total welfare objective, not just consumers’ welfare as it is now, one which took into account the long-run interests of consumers and third parties potentially affected by a restrictive agreement. This would mark a radical and open-ended expansion of the objectives of European antitrust and the grounds for exemption. It would put sustainability on the same footing as the clamour that industrial policy be taken into account by antitrust authorities, which has been firmly resisted so far. This is not to belittle the economic and environmental grounds for a public interest defence; it is just to recognise that it is difficult to see how they can be coherently incorporated into Article 101(3)TFEU while at the same time preserving the integrity and focus of European antitrust.

[TOTM: The following is the sixth in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Kristian Stout, Associate Director at the International Center for Law & Economics.

There is a push underway to punish big tech firms, both for alleged wrongdoing and in an effort to prevent future harm. But the movement to use antitrust law to punish big tech firms is far more about political expediency than it is about sound competition policy. 

For a variety of reasons, there is a current of dissatisfaction in society with respect to the big tech companies, some of it earned, and some of it unearned. Between March 2019 and September 2019, polls suggested that Americans were increasingly willing to entertain breaking up or otherwise increasing regulation on the big tech firms. No doubt, some significant share of this movement in popular opinion is inspired by increasingly negative reporting from major news outlets (see, for a small example, 1, 2, 3, 4, 5, 6, 7, 8). But the fact that these companies make missteps does not justify using any means at hand to punish them. 

Further, not every tool is equally suited to dealing with the harms these companies could cause, and we must be mindful that, even when some harm occurs, these companies generate a huge amount of social welfare. Our policy approaches to dealing with misconduct, therefore, must be appropriately measured. 

To listen to the media, politicians, and activists, however, one wouldn’t know that anything except extreme action — often using antitrust law — is required. Presidential hopefuls want to smash the big tech companies, while activists and academics see evidence of anticompetitive conduct in every facet of these companies’ behavior. Indeed, some claim that the firms themselves are per se a harm to democracy.

The confluence of consumer dissatisfaction and activist zeal leads to a toxic result: not wanting to let a good crisis go to waste, activists and politicians push the envelope on the antitrust theories they want to apply to the detriment of the rule of law. 

Consumer concerns

Missteps by the big tech companies, both perceived and real, have led to some degree of consumer dissatisfaction. In terms of real harms, data breaches and privacy scandals have gained more attention in recent years and are undoubtedly valid concerns of consumers. 

In terms of perceived harms, it has, for example, become increasingly popular to blame big tech companies for tilting the communications landscape in favor of one or another political preference. Ironically, the accusations leveled against big tech are frequently at odds. Some progressives blame big tech for helping Donald Trump to be elected president, while some conservatives believe a pervasive bias in Silicon Valley in favor of progressive policies harms conservative voices. 

But, at the same time, consumers are well familiar with the benefits that search engines, the smartphone revolution, and e-commerce have provided to society. The daily life of the average consumer is considerably better today than it was in past decades thanks to digital services and low-cost technology that are within reach of even the poorest among us. 

So why do consumers appear to be listening to the heated rhetoric of the antitrust populists?

Paul Seabright pointed to one of the big things that I think is motivating consumer willingness to listen to populist attacks on otherwise well-regarded digital services. In his keynote speech at ICLE’s “Dynamic Competition and Online Platforms” conference earlier this month, he discussed the role of trust in the platform ecosystem. According to Seabright, 

Large digital firms create anxiety in proportion to how much they meet our needs… They are strong complements to many of our talents and activities – but they also threaten to provide lots of easy substitutes for us and our talents and activities… The more we trust them the more we (rightly) fear the abuse of their trust.

Extending this insight, we imbue these platforms with a great deal of trust because they are so important to our daily lives. And we have a tendency to respond dramatically to (perceived or actual) violations of trust by these platforms because they do such a great job in nearly every respect. When a breach of that trust happens — even if its relative impact on our lives is small, and the platform continues to provide a large amount of value — we respond not in terms of its proportionate effect on our lives, but in the emotional terms of one who has been wronged by a confidant.

It is that emotional lever that populist activists and politicians are able to press. The populists can frame the failure of the firms as the sum total of their existence, and push for extreme measures that otherwise (and even a few short years ago) would have been unimaginable. 

Political opportunism

The populist crusade is fueled by the underlying sentiment of consumers, but has its own separate ends. Some critics of the state of antitrust law are seeking merely a realignment of priorities within existing doctrine. The pernicious crusade of antitrust populists, however, seeks much more. These activists (and some presidential hopefuls) want nothing short of breaking up big tech and of returning the country to some ideal of “democracy” imagined as having existed in the hazy past when antitrust laws were unevenly enforced.

It is a laudable goal to ensure that the antitrust laws are being properly administered on their own terms, but it is an entirely different project to crusade to make antitrust great again based on the flawed understandings from a century ago. 

In few areas of life would most of us actually yearn to reestablish the political and social order of times gone by — notwithstanding presidential rhetoric. The sepia-toned crusade to smash tech companies into pieces inherits its fervor from Louis Brandeis and his fellow travelers who took on the mustache-twisting villains of their time: Carnegie, Morgan, Mellon and the rest of the allegedly dirty crew of scoundrels. 

Matt Stoller’s recent book Goliath captures this populist dynamic well. He describes the history of antitrust passionately, as a morality play between the forces of light and those of darkness. On one side are heroes like Wright Patman, a scrappy poor kid from Texas who went to a no-name law school and rose to prominence in Washington as an anti-big-business crusader. On the other side are shadowy characters like Andrew Mellon, who cynically manipulated his way into government after growing bored with administering his vast, immorally acquired economic empire. 

A hundred years ago the populist antitrust quest was a response to the success of industrial titans, and their concentration of wealth in the hands of relatively few. Today, a similar set of arguments is directed at the so-called big tech companies. Stoller sees the presence of large tech firms as inimical to democracy itself — “If we don’t do something about big tech, something meaningful, we’ll just become a fascist society. It’s fairly simple.” Tim Wu has made similar claims (which my colleague Alec Stapp has ably rebutted).

In the imagination of the populists, there are good guys and bad guys and the optimal economy would approach atomistic competition. In such a world, the “little guy” can have his due and nefarious large corporations cannot become too economically powerful relative to the state.

Politicians enter this mix of consumer sentiment and populist activism with their own unique priors. Consumer dissatisfaction makes big tech a ripe target for scoring easy political points. It’s a hot topic that fits easily into fundraising pitches. After all, who really cares if the billionaires lose a couple of million dollars through state intervention?

In truth, I suspect that politicians are ambivalent about what exactly to do to make good on their anti-big tech rhetoric. They will be forced to admit that these companies provide an enormous amount of social value, and if they destroy that value, fickle voters will punish them. The threat at hand is that politicians will allow themselves to be seduced by the simplistic policy recommendations of the populists.

Applying the right tool to the job

Antitrust is a seductive tool to use against politically disfavored companies. It is an arcane area of law which, to the average observer, will be just so much legalese. It is, therefore, relatively easy to covertly import broader social preferences through antitrust action and pretend that the ends of the law aren’t being corrupted. 

But this would be a mistake. 

The complicated problem with the big tech companies is that they indeed could be part of the broader set of social ills mentioned above. It’s complicated because it’s highly unlikely that these platforms are the cause of the problems in society, or that any convenient legal tool like antitrust will do much to actually remedy the problems we struggle with. 

Antitrust is a goal-focused body of law, and the goal it seeks—optimizing consumer welfare—is distinctly outside the populist agenda. The real danger in the populist campaign is not just the social losses we will incur if they successfully smash productive firms, but the long-term harm to the rule of law. 

The American system of law is fundamentally predicated on an idea of promulgating rules of general applicability, and resorting to sector- or issue-specific regulations when those general bodies of law are found to be inapplicable or ineffective. 

Banking regulation is a prime example. Banks are subject to general regulation from entities like the FDIC and the Federal Reserve, but, for particular issues, are subject to other agencies and laws. Requirements for deterring money laundering, customer privacy obligations, and rules mandating the separation of commercial banking from investment activities all were enacted through specific legislation aimed to tailor the regulatory regime that banks faced. 

Under many of the same theories being propounded by the populists, antitrust should have been used for at least some of these ends. Couldn’t you frame the “problem” of mixing commercial banking and investment as one of impermissible integration that harms the competitive process? Wouldn’t concerns for the privacy of bank consumers sound in exactly the same manner as that proposed by advocates who claim that concentrated industries lack the incentive to properly include privacy as a dimension of product quality?

But if we hew to a rigorous interpretation of competition policy, the problem for critics is that their claims that actually sound in antitrust – that Amazon predatorily prices, or that Google engages in anticompetitive tying, for example – are highly speculative and not at all an easy play if pressed in litigation. So they try “new” theories of antitrust as a way to achieve preferred policy ends. Changing well-accepted doctrine, such as removing the recoupment requirement from predatory pricing in order to favor small firms, or introducing broad privacy or data-sharing obligations as part of competition “remedies”, is a terrible path for the stability of society and the health of the rule of law. 

Concerns about privacy, hate speech, and, more broadly, the integrity of the democratic process are critical issues to wrestle with. But these aren’t antitrust problems. If we lived in a different sort of society, where the rule of law meant far less than it does here, it’s conceivable that we could use whatever legal tool was at hand to right the wrongs of society. But this isn’t a good answer if you take constitutional design seriously; allowing antitrust law to “solve” broader social problems is to have Congress give away its power to a relatively opaque enforcement process. 

We should not have our constitution redesigned by antitrust lawyers.

[TOTM: The following is the fifth in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Ramsi Woodcock, Assistant Professor, College of Law, and Assistant Professor, Department of Management at Gatton College of Business & Economics, University of Kentucky.

When in 2011 Paul Krugman attacked the press for bending over backwards to give equal billing to conservative experts on social security, even though the conservatives were plainly wrong, I celebrated. Social security isn’t the biggest part of the government’s budget, and calls to privatize it in order to save the country from bankruptcy were blatant fear mongering. Why should the press report those calls with a neutrality that could mislead readers into thinking the position reasonable?

Journalists’ ethic of balanced reporting looked, at the time, like gross negligence at best, and deceit at worst. But lost in the pathos of the moment was the rationale behind that ethic, which is not so much to ensure that the truth gets into print as to prevent the press from making policy. For if journalists do not practice balance, then they ultimately decide the angle to take.

And journalists, like the rest of us, will choose their own.

The dark underbelly of the engaged journalism unleashed by progressives like Krugman has nowhere been more starkly exposed than in the unfolding assault of journalists, operating as a special interest, on Google, Facebook, and Amazon, three companies that writers believe have decimated their earnings over the past decade.

In story after story, journalists have manufactured an antitrust movement aimed at breaking up these companies, even though virtually no expert in antitrust law or economics, on either the right or the left, can find an antitrust case against them, and virtually no expert would place any of these three companies at the top of the genuinely long list of monopolies in America that are due for an antitrust reckoning.

Bitter ledes

Headlines alone tell the story. We have: “What Happens After Amazon’s Domination Is Complete? Its Bookstore Offers Clues”; “Be Afraid, Jeff Bezos, Be Very Afraid”; “How Should Big Tech Be Reined In? Here Are 4 Prominent Ideas”;  “The Case Against Google”; and “Powerful Coalition Pushes Back on Anti-Tech Fervor.”

My favorite is: “It’s Time to Break Up Facebook.” Unlike the others, it belongs to an Op-Ed, so a bias is appropriate. Not appropriate, however, is the howler, contained in the article’s body, that “a host of legal scholars like Lina Khan, Barry Lynn and Ganesh Sitaraman are plotting a way forward” toward breakup. Lina Khan has never held an academic appointment. Barry Lynn does not even have a law degree. And Ganesh Sitaraman’s academic specialty is constitutional law, not antitrust. But editors let it through anyway.

As this unguarded moment shows, the press has treated these and other members of a small network of activists and legal scholars who operate on antitrust’s fringes as representative of scholarly sentiment regarding antitrust action. The only real antitrust scholar among them is Tim Wu, who, when you look closely at his public statements, has actually gone no further than to call for Facebook to unwind its acquisitions of Instagram and WhatsApp.

In more sober moments, the press has acknowledged that the law does not support antitrust attacks on the tech giants. But instead of helping readers to understand why, the press instead presents this as a failure of the law. “To Take Down Big Tech,” read one headline in The New York Times, “They First Need to Reinvent the Law.” I have documented further instances of unbalanced reporting here.

This is not to say that we don’t need more antitrust in America. Herbert Hovenkamp, whom the New York Times once recognized as “the dean of American antitrust law,” but has since downgraded to “an antitrust expert” after he came out against the breakup movement, has advocated stronger monopsony enforcement across labor markets. Einer Elhauge at Harvard is pushing to prevent index funds from inadvertently generating oligopolies in markets ranging from airlines to pharmacies. NYU economist Thomas Philippon has called for deconcentration of banking. Yale’s Fiona Morton has pointed to rising markups across the economy as a sign of lax antitrust enforcement. Jonathan Baker has argued with great sophistication for more antitrust enforcement in general.

But no serious antitrust scholar has traced America’s concentration problem to the tech giants.

Advertising monopolies old and new

So why does the press have an axe to grind with the tech giants? The answer lies in the creative destruction wrought by Amazon on the publishing industry, and Google and Facebook upon the newspaper industry.

Newspapers were probably the most durable monopolies of the 20th century, so lucrative that Warren Buffett famously picked them as his preferred example of businesses with “moats” around them. But that wasn’t because readers were willing to pay top dollar for newspapers’ reporting. Instead, that was because, incongruously for organizations dedicated to exposing propaganda of all forms on their front pages, newspapers have long striven to fill every other available inch of newsprint with that particular kind of corporate propaganda known as commercial advertising.

It was a lucrative arrangement. Newspapers exhibit powerful network effects, meaning that the more people read a paper the more advertisers want to advertise in it. As a result, many American cities came to have but one major newspaper monopolizing the local advertising market.

One such local paper, the Lorain Journal of Lorain, Ohio, sparked a case that has since become part of the standard antitrust curriculum in law schools. The paper tried to leverage its monopoly to destroy a local radio station that was competing for its advertising business. The Supreme Court affirmed liability for monopolization.

In the event, neither radio nor television ultimately undermined newspapers’ advertising monopolies. But the internet is different. Radio, television, and newspaper advertising can coexist, because they can target only groups, and often not the same ones, minimizing competition between them. The internet, by contrast, reaches individuals, making it strictly superior to group-based advertising. The internet also lets at least some firms target virtually all individuals in the country, allowing those firms to compete with all comers.

You might think that newspapers, which quickly became an important web destination, were perfectly positioned to exploit the new functionality. But being a destination turned out to be a problem. Consumers reveal far more valuable information about themselves to web gateways, like search and social media, than to particular destinations, like newspaper websites. But consumer data is the key to targeted advertising.

That gave Google and Facebook a competitive advantage, and because these companies also enjoy network effects—search and social media get better the more people use them—they inherited the newspapers’ old advertising monopolies.

That was a catastrophe for journalists, whose earnings and employment prospects plummeted. It was also a catastrophe for the public, because newspapers have a tradition of plowing their monopoly profits into investigative journalism that protects democracy, whereas Google and Facebook have instead invested their profits in new technologies like self-driving cars and cryptocurrencies.

The catastrophe of countervailing power

Amazon has found itself in journalists’ crosshairs for disrupting another industry that feeds writers: publishing. Book distribution was Amazon’s first big market, and Amazon won it, driving most brick-and-mortar booksellers to bankruptcy. Publishing, long dominated by a few big houses that used their power to extract high wholesale prices from booksellers (passing some of the resulting profit on to authors as royalties), now faced a distribution industry that was even more concentrated and powerful than publishing itself. The Department of Justice stamped out a desperate attempt by publishers to cartelize in response, and profits, and author royalties, have continued to fall. 

Journalists, of course, are writers, and the disruption of publishing, taken together with the disruption of news, has left journalists with the impression that they have nowhere to turn to escape the new economy.

The abuse of antitrust

Except antitrust.

Unschooled in the fine points of antitrust policy, journalists find it obvious that the Armageddon in newspapers and publishing is a problem of monopoly and that antitrust enforcers should do something about it. 

Only it isn’t and they shouldn’t. The courts have gone to great lengths over the past 130 years to distinguish between doing harm to competition, which is prohibited by the antitrust laws, and doing harm to competitors, which is not.

Disrupting markets by introducing new technologies that make products better is no antitrust violation, even if doing so does drive legacy firms into bankruptcy, and throws their employees out of work and into the streets. Because disruption is really the only thing capitalism has going for it. Disruption is the mechanism by which market economies generate technological advances and improve living standards in the long run. The antitrust laws are not there to preserve old monopolies and oligopolies such as those long enjoyed by newspapers and publishers.

In fact, by tearing down barriers to market entry, the antitrust laws strive to do the opposite: to speed the destruction and replacement of legacy monopolies with new and more innovative ones.

That’s why the entire antitrust establishment has stayed on the sidelines regarding the tech fight. It’s hard to think of three companies that have more obviously risen to prominence over the past generation by disrupting markets using superior technologies than Amazon, Google, and Facebook. It may be possible to find an anticompetitive practice here or there—I certainly have—but no serious antitrust scholar thinks the heart of these firms’ continued dominance lies other than in their technical savvy. The nuclear option of breaking up these firms just makes no sense.

Indeed, the disruption inflicted by these firms on newspapers and publishing is a measure of the extent to which these firms have improved book distribution and advertising, just as the vast disruption created by the industrial revolution was a symptom of the extraordinary technological advances of that period. Few people, not even Karl Marx, thought that the solution to those disruptions lay with Ned Ludd. The solution to the disruption wrought by Google, Amazon, and Facebook today similarly does not lie in using the antitrust laws to smash the machines.

Governments eventually learned to address the disruption created by the original industrial revolution not by breaking up the big firms that brought that revolution about, but by using tax and transfer, and rate regulation, to ensure that the winners share their gains with the losers. However the press’s campaign turns out, rate regulation, not antitrust, is ultimately the approach that government will take to Amazon, Google, and Facebook if these companies continue to grow in power. Because we don’t have to decide between social justice and technological advance. We can have both. And voters will demand it.

The anti-progress wing of the progressive movement

Alas, smashing the machines is precisely what journalists and their supporters are demanding in calling for the breakup of Amazon, Google, and Facebook. Zephyr Teachout, for example, recently told an audience at Columbia Law School that she would ban targeted advertising except for newspapers. That would restore newspapers’ old advertising monopolies, but also make targeted advertising less effective, for the same reason that Google and Facebook are the preferred choice of advertisers today. (Of course, making advertising more effective might not be a good thing. More on this below.)

This contempt for technological advance has been coupled with a broader anti-intellectualism, best captured by an extraordinary remark made by Barry Lynn, director of the pro-breakup Open Markets Institute and sometime advocate for the Authors Guild. The Times quotes him saying that because the antitrust laws once contained a presumption against mergers resulting in market shares in excess of 25%, all policymakers have to do to get antitrust right is “be able to count to four. We don’t need economists to help us count to four.”

But size really is not a good measure of monopoly power. Ask Nokia, which controlled more than half the market for cell phones in 2007, on the eve of Apple’s introduction of the iPhone, but saw its share fall almost to zero by 2012. Or Walmart, the nation’s largest retailer and a monopolist in many smaller retail markets, which nevertheless saw its stock fall after Amazon announced one-day shipping.

Journalists themselves acknowledge that size does not always translate into power when they wring their hands about the Amazon-driven financial troubles of large retailers like Macy’s. Determining whether a market lacks competition really does require more than counting the number of big firms in the market.

I keep waiting for a devastating critique of arguments that Amazon operates in highly competitive markets to emerge from the big tech breakup movement. But that’s impossible for a movement that rejects economics as a corporate plot. Indeed, even an economist as pro-antitrust as Thomas Philippon, who advocates a return to antitrust’s mid-20th century golden age of massive breakups of firms like Alcoa and AT&T, affirms in a new book that American retail is actually a bright spot in an otherwise concentrated economy.

But you won’t find journalists highlighting that. The headline of a Times column promoting Philippon’s book? “Big Business Is Overcharging you $5000 a Year.” I tend to agree. But given all the anti-tech fervor in the press, Philippon’s chapter on why the tech giants are probably not an antitrust problem ought to get a mention somewhere in the column. It doesn’t.

John Maynard Keynes famously observed that “though no one will believe it—economics is a technical and difficult subject.” So too antitrust. A failure to appreciate the field’s technical difficulty is manifest also in Democratic presidential candidate Elizabeth Warren’s antitrust proposals, which were heavily influenced by breakup advocates.

Warren has argued that no large firm should be able to compete on its own platforms, not seeming to realize that doing business means competing on your own platforms. To show up to work in the morning in your own office space is to compete on a platform, your office, from which you exclude competitors. The rule that large firms (defined by Warren as those with more than $25 billion in revenues) cannot compete on their own platforms would just make doing large amounts of business illegal, a result that Warren no doubt does not desire.

The power of the press

The press’s campaign against Amazon, Google, and Facebook is working. Because while they may not be as well financed as Amazon, Google, or Facebook, writers can offer their friends something more valuable than money: publicity.

That appears to have induced a slew of politicians, including both Senator Warren on the left and Senator Josh Hawley on the right, to pander to breakup advocates. The House antitrust investigation into the tech giants, led by a congressman who is simultaneously championing legislation advocated by the News Media Alliance, a newspaper trade group, to give newspapers an exemption from the antitrust laws, may also have similar roots. So too the investigations announced by dozens of elected state attorneys general.

The investigations recently opened by the FTC and Department of Justice may signal no more than a desire not to look idle while so many others act. Which is why the press has the power to turn fiction into reality. Moreover, under the current Administration, the Department of Justice has already undertaken two suspiciously partisan antitrust investigations, and President Trump has made clear his hatred for the liberal bastions that are Amazon, Google and Facebook. The fact that the press has made antitrust action against the tech giants a progressive cause provides convenient cover for the President to take down some enemies.

The future of the news

Rate regulation of Amazon, Google, or Facebook is the likely long-term resolution of concerns about these firms’ power. But that won’t bring back newspapers, which henceforth will always play the loom to Google and Facebook’s textile mills, at least in the advertising market.

Journalists and their defenders, like Teachout, have been pushing to restore newspapers’ old monopolies by government fiat. No doubt that would make existing newspapers, and their staffs, very happy. But what is good for Big News is not necessarily good for journalism in the long run.

The silver lining to the disruption of newspapers’ old advertising monopolies is that it has created an opportunity for newspapers to wean themselves off a funding source that has always made little sense for organizations dedicated to helping Americans make informed, independent decisions, free of the manipulation of others.

For advertising has always had a manipulative function, alongside its function of disseminating product information to consumers. And, as I have argued elsewhere, now that the vast amounts of product information available for free on the internet have made advertising obsolete as a source of product information, manipulation is now advertising’s only real remaining function.

Manipulation causes consumers to buy products they don’t really want, giving firms that advertise a competitive advantage that they don’t deserve. That makes for an antitrust problem, this time with real consequences not just for competitors, but also for technological advance, as manipulative advertising drives dollars away from superior products toward advertised products, and away from investment in innovation and toward investment in consumer seduction.

The solution is to ban all advertising, targeted or not, rather than to give newspapers an advertising monopoly. And to give journalism the state subsidies that, like all public goods, from defense to highways, are journalism’s genuine due. The BBC provides a model of how that can be done without fear of government influence.

Indeed, Teachout’s proposed newspaper advertising monopoly is itself just a government subsidy, but a subsidy extracted through an advertising medium that harms consumers. Direct government subsidization achieves the same result, without the collateral consumer harm.

The press’s brazen advocacy of antitrust action against the tech giants, without making clear how much the press itself has to gain from that action and in the utter absence of any expert support for this approach, represents an abdication by the press of its responsibility to create an informed citizenry that is every bit as profound as the press’s lapses on social security a decade ago.

I’m glad we still have social security. But I’m also starting to miss balanced journalism.

1/3/2020: Editor’s note – this post was edited for clarification and minor copy edits.

[TOTM: The following is the fourth in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Valentin Mircea, a Senior Partner at Mircea and Partners Law Firm, Bucharest, Romania.

The enforcement of competition rules in the European Union is at historic heights. Competition enforcers at the European Commission seem to think that they have reached a point of perfect equilibrium, or perfection in enforcement. “Everything we do is right,” they seem to say, because for decades no significant competition decision by the Commission has been annulled on substance. Meanwhile, the objectives of EU competition law multiply continuously, as DG Competition assumes more and more public policy objectives. Indeed, so wide is DG Competition’s remit that it has become a kind of government in itself, charged with many policy areas and confronted with a host of problems in search of a cure.

The consumer welfare standard is merely affirmed and rarely pursued in the enforcement of the EU competition rules, where even abuse of dominance has tended to be treated as a per se infringement, at least until the European Court of Justice had its say in Intel. It helps that this standard has always been of secondary importance in the European Union, where the objective of market integration has prevailed over time.

Now other issues are catching the eye of the European Commission, and the easiest way to handle things such as the increasing power of the technology companies has been to make use of the EU competition enforcement toolkit. A technology giant such as Google has already been hit three times with significant fines; but beyond the transient glory of those decisions, nothing significant happened in the market, to other companies, or to consumers. Or did it? I’m not sure, and nobody seems to check or even care. But the impetus to investigate and fine the technology companies is unshaken — and is likely to remain so at least until the European Court of Justice has its say in a new roster of cases, which will not happen very soon.

The EU competition rules look both over- and under-enforced. This seeming paradox is explained by the formalistic approach of the European Commission and its willingness to serve political purposes, often the result of lobbying from various industries. In the European Union, competition enforcement increasingly resembles a Swiss Army knife: good for quick fixes of various problems, while not entirely solving any of them. 

The pursuit of political goals is not necessarily bad in itself; it seems obvious that competition enforcers should listen to the worries of the societies in which they live. Once objectives such as welfare seem to have been attained, it is thus not entirely surprising that enforcement should move towards fixing other societal problems. Take the case of the antitrust laws in the United States, the enactment of which was not driven by an overwhelming concern for consumer welfare or economic efficiency but by powerful lobbies that convinced Congress to act as a referee for their long-lasting disputes with different industries. In spite of this not-so-glorious origin, the resultant antitrust rules have generated many benefits throughout the world and are an essential part of the efforts to keep markets competitive and ensure a level playing field. So why worry that the European Commission – and, more recently, even certain national competition authorities (such as Germany’s) – have developed a tendency to use powerful competition rules to impose order in other areas whenever public opinion, whether or not it is aware of the real causes of concern, demands it?

But in fact, what is happening today is bad and is setting precedents never seen before.  The speed at which new fronts are being opened, where the enforcement of the EU competition rules is an essential part of the weaponry, gives rise to two main areas of concern.

First, EU competition enforcers are generally ill-equipped to address sensitive technical issues that even leading experts in the field do not properly understand, such as the use of Big Data (itself a vague concept, open to various interpretations). While creating a different set of rules and a new toolkit for the digital economy does not seem to be warranted (debates are still raging on this subject), a dose of humility about the level of knowledge required for a proper understanding of the relevant interactions, and for proper enforcement, would be most welcome. Venturing into territories where conventional economics does not apply to its full extent (for instance, where there is no price, an essential element of competition) requires a prudent and diligent enforcer to hold back, advance cautiously, and act only where necessary, in an appropriate and proportionate way. So doing is more likely to have an observably beneficial impact, in contrast to the illusory glory of simply confronting the tech giants.

Second, given the limited resources of the European Commission and the national competition authorities in the Member States, exaggerated attention to cases in the technology and digital economy sectors will result in less enforcement in the traditional economy, where cartels and other harmful behaviors still occur, often with more visible negative effects on consumers and the economy. It is no longer fashionable to tackle such cases, as they do not draw the same attention from the media and their outcomes are not likely to bring the same fame to EU competition enforcers.

More recently, in an interesting move, the new European Commission unified the competition and the digital economy portfolios under the astute supervision of Commissioner Margrethe Vestager. Beyond the anomaly of combining ex ante and ex post powers, the move signals an even greater propensity towards using competition enforcement tools to investigate, and to try to rein in, the power of the behemoths of the digital economy. The change is a powerful political message that EU competition enforcement will be even more prone to cases and decisions motivated by the pursuit of various public policy goals.

I am not saying that the approach taken by the EU competition enforcers has no chance of generating benefits for European consumers. But I am worried that moving ahead with the same determination, and with the same limited expertise among case handlers as has so far been demonstrated, is unlikely to deliver such a beneficial outcome. Moreover, contrary to the stated intention of the policy, it is likely to further chill the prospects for EU technology ventures. 

Last but not least, courageous enforcement of the EU competition rules is not a panacea, and it cannot offset weaknesses on the evidentiary front, which might endanger the credibility of this enforcement, its most valuable feature. Indeed, EU competition enforcement may be at its heights, but there is no certainty that it won’t fall from there — and the fall could be as spectacular as the cases that brought the European Commission to this point. I thus advocate for DG Competition to be wise and humble, to take one step at a time, to acknowledge that markets are generally able to self-correct, and to remember that the history of the economy is little more than a cemetery of forgotten giants that were once assumed to be unshakeable and unstoppable.

[TOTM: The following is the second in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Luigi Zingales, Robert C. McCormack Distinguished Service Professor of Entrepreneurship and Finance and Charles M. Harper Faculty Fellow at the University of Chicago Booth School of Business, and Director of the George J. Stigler Center for the Study of the Economy and the State; and Filippo Maria Lancieri, Fellow at the George J. Stigler Center for the Study of the Economy and the State and JSD Candidate at The University of Chicago Law School.

This symposium discusses the “The Politicization of Antitrust.” As the invite itself stated, this is an umbrella topic that encompasses a wide range of subjects: from incorporating environmental or labor concerns in antitrust enforcement, to political pressure in enforcement decision-making, to national security laws (CFIUS-type enforcement), protectionism, federalism, and more. This contribution will focus on the challenges of designing a system that protects the open markets and democracy that are the foundation of modern economic and social development.

The “Chicago School of antitrust” was highly critical of the antitrust doctrine prevailing during the Warren-era Supreme Court. A key objection was that the vague legal standards of the Sherman, Clayton and the Federal Trade Commission Acts allowed for the enforcement of antitrust policy based on what Bork called “inferential analysis from casuistic observations.” That is, without clearly defined goals and without objective standards against which to measure these goals, antitrust enforcement would become arbitrary or even a tool that governments could wield against a political enemy. To address this criticism, Bork and other key members of the Chicago School narrowed the scope of antitrust to a single objective—the maximization of allocative efficiency/total welfare (coined as “consumer welfare”)—and advocated the use of price theory as a method to reduce judicial discretion. It was up to markets and Congress/politics, not judges (and antitrust), to redistribute economic surplus or protect small businesses. Developments in economic theory and econometrics over the next decades increased the number of tools regulators and Courts could rely on to measure the short-term price/output impacts of many specific types of conduct. A more conservative judiciary translated much of the Chicago School’s teaching into policy, including the triumph of Bork’s narrow interpretation of “consumer welfare.”

The Chicago School’s criticism of traditional antitrust struck many correct points. Some of the Warren-era Supreme Court cases are perplexing to say the least (e.g., Brown Shoe, Von’s Grocery, Utah Pie, Schwinn). Antitrust is a very powerful tool that covers almost the entire economy. In the United States, enforcement can be initiated by multiple federal and state regulators and by private parties (for whom treble damages encourage litigation). If used without clear and objective standards, antitrust remedies could easily add an extra layer of uncertainty or could even outright prohibit perfectly legitimate conduct, which would depress competition, investment, and growth. The Chicago School was also right in warning against the creation of what it understood as extensive and potentially unchecked governmental powers to intervene in the economic sphere. At best, such extensive powers can generate rent-seeking and cronyism. At worst, they can become an instrument of political vendettas. While these concerns are always present, they are particularly worrisome now: a time of increased polarization, dysfunctional politics, and constant weakening of many governmental institutions. If “politicizing antitrust” is understood as advocating for a politically driven, uncontrolled enforcement policy, we are similarly concerned about it. Changes to antitrust policy that rely primarily on vague objectives may lead to an unmitigated disaster.

Administrability is certainly a key feature of any regulatory regime hoping to actually increase consumer welfare. Bork’s narrow interpretation of “consumer welfare” unquestionably has three important features: Its objectives are i) clearly defined, ii) clearly ranked, and iii) (somewhat) objectively measurable. Yet, whilst certainly representing some gains over previous definitions, Bork’s “consumer welfare” is not the end of history for antitrust policy. Indeed, even the triumph of “consumer welfare” is somewhat bittersweet. With time, academics challenged many of the doctrine’s key tenets. US antitrust policy also constantly accepts some form of external influences that are antagonistic to this narrow, efficiency-focused “consumer welfare” view—the “post-Chicago” United States has explicit exemptions for export cartels, State Action, the Noerr-Pennington doctrine, and regulated markets (solidified in Trinko), among others. Finally, as one of us has indicated elsewhere, while prevailing in the United States, Chicago School ideas find limited footing around the world. While there certainly are irrational or highly politicized regimes, there is little evidence that antitrust enforcement in mature jurisdictions such as the EU or even Brazil is arbitrary, is employed in political vendettas, or reflects outright protectionist policies.

Governments do not function in a vacuum. As economic, political, and social structures change, so must public policies such as antitrust. It must be possible to develop a well-designed and consistent antitrust policy that focuses on goals other than imperfectly measured short-term price/output effects—one that sits in between a narrow “consumer welfare” and uncontrolled “politicized antitrust.” An example is provided by the Stigler Committee on Digital Platforms Final Report, which defends changes to current US antitrust enforcement as a way to increase competition in digital markets. There are many similarly well-grounded proposals for changes to other specific areas, such as vertical relationships. We have not yet seen an all-encompassing, well-grounded, and generalizable framework to move beyond the “consumer welfare” standard. Nonetheless, this is simply the current state of the art, not an impossibility theorem. Academia contributes the most to society when it provides new ways to tackle hard, important questions. The Chicago School certainly did so a few decades ago. There is no reason why academia and policymakers cannot do it again.   

This is exactly why we are dedicating the 2020 Stigler Center annual antitrust conference to the topic of “monopolies and politics.” Competitive markets and democracy are often (and rightly) celebrated as the most important engines of economic and social development. Still, until recently, the relationship between the two was all but ignored. This topic had been popular in the 1930s and 1940s because many observers linked the rise of Hitler, Mussolini, and the nationalist government in Japan to the industrial concentration in the three Axis countries. Indeed, after WWII, the United States set up a “Decartelization Office” in Germany and passed the Celler-Kefauver Act to prevent gigantic conglomerates from destroying democracies. In 1949, Congressman Emanuel Celler, who sponsored the Act, declared:

“There are two main reasons why l am concerned about concentration of economic power in the United States. One is that concentration of business unavoidably leads to some kind of socialism, which is not the desire of the American people. The other is that a concentrated system is inefficient, compared with a system of free competition.

We have seen what happened in the other industrial countries of the Western World. They allowed a free growth of monopolies and cartels; until these private concentrations grew so strong that either big business would own the government or the government would have to seize control of big business. The most extreme case was in Germany, where the big business men thought they could take over the government by using Adolf Hitler as their puppet. So Germany passed from private monopoly to dictatorship and disaster.”

There are many reasons why these concerns around monopolies and democracy are resurfacing now. A key one is that freedom is in decline worldwide, and so is trust in democracy, particularly amongst newer generations. At the same time, there is growing evidence that market concentration is on the rise. Correlation is not causation, so we cannot jump to hasty conclusions. Yet the stakes are so high that these coincidences need to be investigated further.  

Moreover, even if the correlation between monopolies and fascism were spurious, the correlation between economic concentration and political dissatisfaction in democracy might not be. The fraction of people who feel their interests are represented in government fell from almost 80% in the 1950s to 20% today. Whilst this dynamic is impacted by many different drivers, one of them could certainly be increased market concentration.

Political capture is a reality, and it seems straightforward to assume that firms’ ability to influence the political system greatly depends not only on their size but also on the degree of concentration of the markets they operate in. The reasons are numerous. In concentrated markets, legislators only hear one version of the story, and there are fewer sophisticated stakeholders to ring the alarm when wrongdoing is present, thus making it easier for the incumbents to have their way. Similarly, in concentrated markets, the one or two incumbent firms represent the main or only source of employment for retiring regulators, ensuring an incumbent’s long-term influence over policy. Concentrated markets also restrict the pool of potential employers/customers for technical experts, making it difficult for them to survive if they are hostile to the incumbent behemoths—an issue particularly concerning in complex markets where talent is both necessary and scarce. Finally, firms with market power can use their increased rents to influence public policy through lobbying or some other legal form of campaign contributions.

In other words, as markets become more concentrated, incumbent firms become better at distorting the political process in their favor. Therefore, an increase in dissatisfaction with democracy might not just be a coincidence, but might partially reflect increases in market concentration that drive politicians and regulators away from the preference of voters and closer to that of behemoths.   

We are well aware that, at the moment, these are just theories—albeit quite plausible ones. For this reason, the first day of the 2020 Stigler Center Antitrust Conference will be dedicated to presenting and critically reviewing the evidence currently available on the connections between market concentration and adverse political outcomes.

If a connection is established, then the question becomes how an antitrust (or other similar) policy aimed at preserving free markets and democracy can be implemented in a rational and consistent manner. The “consumer welfare” standard has generated measures of concentration and measures of possible harm to be used in trial. The “democratic welfare” approach would have to do the same. Fortunately, in the last 50 years political science and political economy have made great progress, so there is a growing number of potential alternative theories, evidence, and methods. For this reason, the second day of the 2020 Stigler Center Antitrust Conference will be dedicated to discussing the pros and cons of these alternatives. We are hoping to use the conference to spur further reflection on how to develop a methodology that is predictable, restricts discretion, and makes a “democratic antitrust” administrable.  As mentioned above, we agree that simply “politicizing” the current antitrust regime would be very dangerous for the economic well-being of nations. Yet, ignoring the political consequences of economic concentration on democracy can be even more dangerous—not just for the economic, but also for the democratic well-being of nations. Progress is not achieved by returning to the past nor by staying religiously fixed on the current status quo, but by moving forward: by laying new bricks on the layers of knowledge accumulated in the past. The Chicago School helped build some important foundations of modern antitrust policy. Those foundations should not become a prison; instead, they should be the base for developing new standards capable of enhancing both economic welfare and democratic values in the spirit of what Senator John Sherman, Congressman Emanuel Celler, and other early antitrust advocates envisioned.

[TOTM: The following is the third in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Geoffrey A. Manne, president and founder of the International Center for Law & Economics, and Alec Stapp, Research Fellow at the International Center for Law & Economics.


Is there a relationship between concentrated economic power and political power? Do big firms have success influencing politicians and regulators to a degree that smaller firms — or even coalitions of small firms — could only dream of? That seems to be the narrative that some activists, journalists, and scholars are pushing of late. And, to be fair, it makes some intuitive sense (before you look at the data). The biggest firms have the most resources — how could they not have an advantage in the political arena?

The argument that corporate power leads to political power faces at least four significant challenges, however. First, the little empirical research there is does not support the claim. Second, there is almost no relationship between market capitalization (a proxy for economic power) and lobbying expenditures (an admittedly weak proxy for political power). Third, the absolute level of spending on lobbying is surprisingly low in the US given the potential benefits from rent-seeking (this is known as the Tullock paradox). Lastly, the proposed remedy for this supposed problem is to make antitrust more political — an intervention that is likely to make the problem worse rather than better (assuming there is a problem to begin with).

The claims that political power follows economic power

The claim that large firms or industry concentration causes political power (and thus that under-enforcement of antitrust laws is a key threat to our democratic system of government) is often repeated, and accepted as a matter of faith. Take, for example, Robert Reich’s March 2019 Senate testimony on “Does America Have a Monopoly Problem?”:

These massive corporations also possess substantial political clout. That’s one reason they’re consolidating: They don’t just seek economic power; they also seek political power.

Antitrust laws were supposed to stop what’s been going on.

* * *

[S]uch large size and gigantic capitalization translate into political power. They allow vast sums to be spent on lobbying, political campaigns, and public persuasion. (emphasis added)

Similarly, in an article in August of 2019 for The Guardian, law professor Ganesh Sitaraman argued there is a tight relationship between economic power and political power:

[R]eformers recognized that concentrated economic power — in any form — was a threat to freedom and democracy. Concentrated economic power not only allowed for localized oppression, especially of workers in their daily lives, it also made it more likely that big corporations and wealthy people wouldn’t be subject to the rule of law or democratic controls. Reformers’ answer to the concentration of economic power was threefold: break up economic power, rein it in through regulation, and tax it.

It was the reformers of the Gilded Age and Progressive Era who invented America’s antitrust laws — from the Sherman Antitrust Act of 1890 to the Clayton Act and Federal Trade Commission Acts of the early 20th century. Whether it was Republican trust-buster Teddy Roosevelt or liberal supreme court justice Louis Brandeis, courageous leaders in this era understood that when companies grow too powerful they threatened not just the economy but democratic government as well. Break-ups were a way to prevent the agglomeration of economic power in the first place, and promote an economic democracy, not just a political democracy. (emphasis added)

Luigi Zingales made a similar argument in his 2017 paper “Towards a Political Theory of the Firm”:

[T]he interaction of concentrated corporate power and politics is a threat to the functioning of the free market economy and to the economic prosperity it can generate, and a threat to democracy as well. (emphasis added)

The assumption that economic power leads to political power is not a new one. Not only, as Zingales points out, have political thinkers since Adam Smith asserted versions of the same, but more modern social scientists have continued the claims with varying (but always indeterminate) degrees of quantification. Zingales quotes Adolf Berle and Gardiner Means’ 1932 book, The Modern Corporation and Private Property, for example:

The rise of the modern corporation has brought a concentration of economic power which can compete on equal terms with the modern state — economic power versus political power, each strong in its own field. 

Russell Pittman (an economist at the DOJ Antitrust Division) argued in 1988 that rent-seeking activities would be undertaken only by firms in highly concentrated industries because:

if the industry in question is unconcentrated, then the firm may decide that the level of benefits accruing to the industry will be unaffected by its own level of contributions, so that the benefits may be enjoyed without incurrence of the costs. Such a calculation may be made by other firms in the industry, of course, with the result that a free-rider problem prevents firms individually from making political contributions, even if it is in their collective interest to do so.

For the most part, the claims are almost entirely theoretical and their support anecdotal. Reich, for example, supports his claim with two thin anecdotes from which he draws a firm (but, in fact, unsupported) conclusion: 

To take one example, although the European Union filed fined [sic] Google a record $2.7 billion for forcing search engine users into its own shopping platforms, American antitrust authorities have not moved against the company.

Why not?… We can’t be sure why the FTC chose not to pursue Google. After all, section 5 of the Federal Trade Commission Act of 1914 gives the Commission broad authority to prevent unfair acts or practices. One distinct possibility concerns Google’s political power. It has one of the biggest lobbying powerhouses in Washington, and the firm gives generously to Democrats as well as Republicans.

A clearer example of an abuse of power was revealed last November when the New York Times reported that Facebook executives withheld evidence of Russian activity on their platform far longer than previously disclosed.

Even more disturbing, Facebook employed a political opposition research firm to discredit critics. How long will it be before Facebook uses its own data and platform against critics? Or before potential critics are silenced even by the possibility? As the Times’s investigation made clear, economic power cannot be separated from political power. (emphasis added)

The conclusion — that “economic power cannot be separated from political power” — simply does not follow from the alleged evidence. 

The relationship between economic power and political power is extremely weak

Few of these assertions of the relationship between economic and political power are backed by empirical evidence. Pittman’s 1988 paper is empirical (as is his previous 1977 paper looking at the relationship between industry concentration and contributions to Nixon’s re-election campaign), but it is also in direct contradiction to several other empirical studies (Zardkoohi (1985); Munger (1988); Esty and Caves (1983)) that find no correlation between concentration and political influence; Pittman’s 1988 paper is indeed a response to those papers, in part. 

In fact, as one study (Grier, Munger & Roberts (1991)) summarizes the evidence:

[O]f ten empirical investigations by six different authors/teams…, relatively few of the studies find a positive, significant relation between contributions/level of political activity and concentration, though a variety of measures of both are used…. 

There is little to recommend most of these studies as conclusive one way or the other on the question of interest. Each one suffers from a sample selection or estimation problem that renders its results suspect. (emphasis added)

And, as they point out, there is good reason to question the underlying theory of a direct correlation between concentration and political influence:

[L]egislation or regulation favorable to an industry is from the perspective of a given firm a public good, and therefore subject to Olson’s collective action problem. Concentrated industries should suffer less from this difficulty, since their sparse numbers make bargaining cheaper…. [But at the same time,] concentration itself may affect demand, suggesting that the predicted correlation between concentration and political activity may be ambiguous, or even negative. 

* * *

The only conclusion that seems possible is that the question of the correct relation between the structure of an industry and its observed level of political activity cannot be resolved theoretically. While it may be true that firms in a concentrated industry can more cheaply solve the collective action problem that inheres in political action, they are also less likely to need to do so than their more competitive brethren…. As is so often the case, the interesting question is empirical: who is right? (emphasis added)

The results of Grier, Munger & Roberts’ (1991) own empirical study are ambiguous at best (and relate only to political participation, not success, and thus not actual political power):

[A]re concentrated industries more or less likely to be politically active? Numerous previous studies have addressed this topic, but their methods are not comparable and their results are flatly contradictory. 

On the side of predicting a positive correlation between concentration and political activity is the theory that Olson’s “free rider” problem has more bite the larger the number of participants and the smaller their respective individual benefits. Opposing this view is the claim that it is precisely because such industries are concentrated that they have less need for government intervention. They can act on their own to garner the benefits of cartelization that less concentrated industries can secure only through political activity. 

Our results indicate that both sides are right, over some range of concentration. The relation between political activity and concentration is a polynomial of degree 2, rising and then falling, achieving a peak at a four-firm concentration ratio slightly below 0.5. (emphasis added)
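To make the quoted finding concrete, here is a minimal sketch (with invented coefficients and market shares, not the study's data) of an inverted-U relationship between the four-firm concentration ratio (CR4) and political activity, peaking just below 0.5 as Grier, Munger & Roberts report:

    # A minimal sketch (hypothetical coefficients and market shares) of the
    # inverted-U relationship reported by Grier, Munger & Roberts: political
    # activity rises with the four-firm concentration ratio (CR4), then falls,
    # peaking slightly below 0.5.

    def cr4(shares):
        """Four-firm concentration ratio: sum of the four largest market shares."""
        return sum(sorted(shares, reverse=True)[:4])

    def political_activity(c4, b=4.0, c=-4.2):
        """Hypothetical degree-2 polynomial in CR4 (coefficients are invented)."""
        return b * c4 + c * c4 ** 2

    # A downward-opening parabola b*x + c*x^2 peaks at x = -b / (2c).
    peak = -4.0 / (2 * -4.2)

    shares = [0.20, 0.15, 0.08, 0.05, 0.03, 0.02]        # a hypothetical industry
    print(f"CR4 for this industry: {cr4(shares):.2f}")    # 0.48
    print(f"Activity at CR4=0.2: {political_activity(0.2):.2f}, "
          f"at CR4=0.9: {political_activity(0.9):.2f}")   # 0.63 vs. 0.20
    print(f"Peak activity at CR4 = {peak:.3f}")           # ~0.476, i.e., below 0.5

The point of the sketch is only the shape: moderately concentrated industries predict the most political activity, while both atomistic and highly concentrated industries predict less.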

Despite all of this, Zingales (like others) explicitly claims that there is a clear and direct relationship between economic power and political power:

In the last three decades in the United States, the power of corporations to shape the rules of the game has become stronger… [because] the size and market share of companies has increased, which reduces the competition across conflicting interests in the same sector and makes corporations more powerful vis-à-vis consumers’ interest.

But a quick look at the empirical data continues to call this assertion into serious question. Indeed, if we look at the lobbying expenditures of the top 50 companies in the US by market capitalization, we see an extremely weak (at best) relationship between firm size and political power (as proxied by lobbying expenditures):

Of course, once again, this says little about the effectiveness of efforts to exercise political power, which could, in theory, correlate with market power but not with expenditures. Yet the evidence on this suggests that, while concentration “increases both [political] activity and success…, [n]either firm size nor industry size has a robust influence on political activity or success.” (emphasis added). There are also enormous and well-known problems with measuring industry concentration, and it’s not clear that even this attribute is well correlated with political activity or success. (Interestingly for the argument that profits are a big part of the story, i.e., that firms in more concentrated industries realize higher profits thanks to lax antitrust and thus have more money to spend on political influence, even concentration in the Esty and Caves study is not correlated with political expenditures.)

Indeed, a couple of examples show the wide range of lobbying expenditures for a given firm size. Costco, which currently has a market cap of $130 billion, has spent only $210,000 on lobbying so far in 2019. By contrast, Amgen, which has a $144 billion market cap, has spent $8.54 million, or more than 40 times as much. As shown in the chart above, this variance is the norm. 
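As a quick back-of-the-envelope check of those figures (using only the numbers cited above, nothing more):

    # Back-of-the-envelope check of the Costco/Amgen comparison, using only
    # the figures cited in the text (2019 lobbying totals and market caps).
    firms = {
        "Costco": {"market_cap": 130e9, "lobbying": 210_000},
        "Amgen":  {"market_cap": 144e9, "lobbying": 8_540_000},
    }

    ratio = firms["Amgen"]["lobbying"] / firms["Costco"]["lobbying"]
    print(f"Amgen outspends Costco by roughly {ratio:.0f}x")   # ~41x

    for name, data in firms.items():
        share = data["lobbying"] / data["market_cap"]
        # Lobbying is a vanishingly small fraction of firm value either way.
        print(f"{name}: lobbying = {share:.5%} of market cap")

Either way, lobbying amounts to a minuscule fraction of one percent of firm value.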

However, discussing the relative differences between these companies is less important than pointing out the absolute levels of expenditure. Spending eight and a half million dollars per year would not be prohibitive for literally thousands of firms in the US. If access is this cheap, what’s going on here?

Why is there so little money in US politics?

The Tullock paradox asks why, if the return to rent-seeking is so high (as it plausibly is, given that the government spends trillions of dollars each year), so little money is spent on influencing policymakers.

Considering the value of public policies at stake and the reputed influence of campaign contributors in policymaking, Gordon Tullock (1972) asked, why is there so little money in U.S. politics? In 1972, when Tullock raised this question, campaign spending was about $200 million. Assuming a reasonable rate of return, such an investment could have yielded at most $250-300 million over time, a sum dwarfed by the hundreds of billions of dollars worth of public expenditures and regulatory costs supposedly at stake.

A recent article by Scott Alexander updated the numbers for 2019 and compared the total to the $12 billion US almond industry:

[A]ll donations to all candidates, all lobbying, all think tanks, all advocacy organizations, the Washington Post, Vox, Mic, Mashable, Gawker, and Tumblr, combined, are still worth a little bit less than the almond industry.

Maybe it’s because spending money on donations, lobbying, think tanks, journalism and advocacy is ineffective on net (i.e., spending by one group is counterbalanced by spending by another group) and businesses know it?

In his paper on elections, Ansolabehere focuses on the corporate perspective. He argues that money neither makes a candidate much more likely to win, nor buys much influence with a candidate who does win. Corporations know this, which is why they don’t bother spending more. (emphasis added)

To his credit, Zingales acknowledges this issue:

To the extent that US corporations are exercising political influence, it seems that they are choosing less-visible but perhaps more effective ways. In fact, since Gordon Tullock’s (1972) famous article, it has been a puzzle in political science why there is so little money in politics (as discussed in this journal by Ansolabehere, de Figueiredo, and Snyder 2003).

So, what are these “less-visible but perhaps more effective” ways? Unfortunately, the evidence in support of this claim is anecdotal and unconvincing. As noted above, Reich offers only speculation and extremely weak anecdotal assertions. Meanwhile, Zingales tells the story of Robert (mistakenly identified in the paper as “Richard”) Rubin pushing through repeal of Glass-Steagall to benefit Citigroup, then getting hired for $15 million a year when he left the government. Assuming the implication is actually true, is that amount really beyond the reach of all but the largest companies? How many banks with an interest in the repeal of Glass-Steagall were really unable, at the time, to credibly offer such future compensation because they were likely to go out of business? Very few, and no doubt some of the biggest and most powerful were arguably at greater risk of bankruptcy than some of the smaller banks.

Maybe only big companies have an interest in doing this kind of thing because they have more to lose? But in concentrated industries they also have more to lose by conferring the benefit on their competitors. And it’s hard to make the repeal or passage of a law, say, apply only to you and not everyone else in the industry. Maybe they collude? Perhaps, but is there any evidence of this? Zingales offers only pure speculation here, as well. For example, why was the US Google investigation dropped but not the EU one? Clearly because of White House visits, says Zingales. OK — but how much do these visits cost firms? If that’s the source of political power, it surely doesn’t require monopoly profits to obtain it. And it’s virtually impossible that direct relationships of this kind are beyond the reach of coalitions of smaller firms, or even small firms, full stop.  

In any case, the political power explanation turns mostly on doling out favors in exchange for individuals’ payoffs — which just aren’t that expensive, and it’s doubtful that the size of a firm correlates with the quality of its one-on-one influence brokering, except to the extent that causation might run the other way — which would be an indictment not of size but of politics. Of course, in the Hobbesian world of political influence brokering, as in the Hobbesian world of pre-political society, size alone is not determinative so long as alliances can be made or outcomes turn on things other than size (e.g., weapons in the pre-political world; family connections in the world of political influence).

The Noerr–Pennington doctrine is highly relevant here as well. In Noerr, the Court ruled that “no violation of the [Sherman] Act can be predicated upon mere attempts to influence the passage or enforcement of laws” and “[j]oint efforts to influence public officials do not violate the antitrust laws even though intended to eliminate competition.” This would seem to explain, among other things, the existence of trade associations and other entities used by coalitions of small (and large) firms to influence the policymaking process.

If what matters for influence peddling is ultimately individual relationships and lobbying power, why aren’t the biggest firms in the world the lobbying firms and consultant shops? Why is Rubin selling out for $15 million a year if the benefit to Citigroup is in the billions? And, if concentration is the culprit, why isn’t it plausibly also the solution? It isn’t only the state that keeps the power of big companies in check; it’s other big companies, too. What Henry G. Manne said in his testimony on the Industrial Reorganization Act of 1973 remains true today: 

There is simply no correlation between the concentration ratio in an industry, or the size of its firms, and the effectiveness of the industry in the halls of Government.

Beyond the data presented earlier, this analysis would be incomplete if it did not also mention the role of advocacy groups in influencing outcomes, the importance and size of large foundations, the role of unions, and the role of individual relationships.

Maybe voters matter more than money?

The National Rifle Association spends very little on direct lobbying efforts (less than $10 million over the most recent two-year cycle). The organization’s total annual budget is around $400 million. In the grand scheme of things, these are not overwhelming resources. But the NRA is widely-regarded as one of the most powerful political groups in the country, particularly within the Republican Party. How could this be? In short, maybe it’s not Sturm Ruger, Remington Outdoor, and Smith & Wesson — the three largest gun manufacturers in the US — that influence gun regulations; maybe it’s the highly-motivated voters who like to buy guns. 

The NRA has 5.5 million members, many of whom vote in primaries with gun rights as one of their top issues  — if not the top issue. And with low turnout in primaries — only 8.7% of all registered voters participated in 2018 Republican primaries — a candidate seeking the Republican nomination all but has to secure an endorsement from the NRA. On this issue at least, the deciding factor is the intensity of voter preferences, not the magnitude of campaign donations from rent-seeking corporations.

The NRA is not the only counterexample to arguments like those from Zingales. Auto dealers are a constituency that is powerful not necessarily due to its raw size but through its dispersed nature. At the state level, almost every political district has an auto dealership (and the owners are some of the wealthiest and best-connected individuals in the area). It’s no surprise then that most states ban the direct sale of cars from manufacturers (i.e., you have to go through a dealer). This results in higher prices for consumers and lower output for manufacturers. But the auto dealership industry is not highly concentrated at the national level. The dealers don’t need to spend millions of dollars lobbying federal policymakers for special protections; they can do it on the local level — on a state-by-state basis — for much less money (and without merging into consolidated national chains).

Another, more recent, case highlights the factors besides money that may affect political decisions. President Trump has been highly critical of Jeff Bezos and the Washington Post (which Bezos owns) since the beginning of his administration because he views the newspaper as a political enemy. In October, Microsoft beat out Amazon for a $10 billion contract to provide cloud infrastructure for the Department of Defense (DoD). Now, Amazon is suing the government, claiming that Trump improperly influenced the competitive bidding process and cost the company a fair shot at the contract. This case is a good example of how money may not be determinative at the margin, and also how multiple “monopolies” may have conflicting incentives and we don’t know how they net out.

Politicizing antitrust will only make this problem worse

At the FTC’s “Hearings on Competition and Consumer Protection in the 21st Century,” Barry Lynn of the Open Markets Institute advocated using antitrust to counter the political power of economically powerful firms:

[T]he main practical goal of antimonopoly is to extend checks and balances into the political economy. The foremost goal is not and must never be efficiency. Markets are made, they do not exist in any platonic ether. The making of markets is a political and moral act.

In other words, the goal of breaking up economic power is not to increase economic benefits but to decrease political influence. 

But as the author of one of the empirical analyses of the relationship between economic and political power notes, the asserted “solution” to the unsupported “problem” of excess political influence by economically powerful firms — more and easier antitrust enforcement — may actually make the alleged problem worse:

Economic rents may be obtained through the process of market competition or be obtained by resorting to governmental protection. Rational firms choose the least costly alternative. Collusion to obtain governmental protection will be less costly, the higher the concentration, ceteris paribus. However, high concentration in itself is neither necessary nor sufficient to induce governmental protection.

The result that rent-seeking activity is triggered when firms are affected by government regulation has a clear implication: to reduce rent-seeking waste, governmental interference in the market place needs to be attenuated. Pittman’s suggested approach, however, is “to maintain a vigorous antitrust policy” (p. 181). In fact, a more strict antitrust policy may exacerbate rent-seeking. For example, the firms which will be affected by a vigorous application of antitrust laws would have incentive to seek moderation (or rents) from Congress or from the enforcement officials.

Rent-seeking by smaller firms could thus become more prevalent and, paradoxically, could ultimately lead to increased concentration. And imbuing antitrust with an ill-defined set of vague political objectives (as many proponents of these arguments desire) would also turn antitrust into a sort of “meta-legislation.” As a result, the return on influencing a handful of government appointments with authority over antitrust becomes huge — increasing both the ability and the incentive to do so. 

And if the underlying basis for antitrust enforcement is extended beyond economic welfare effects, how long can we expect to resist calls to restrain enforcement precisely to further those goals? With an expanded basis for increased enforcement, the effort and ability to get exemptions will be massively increased as the persuasiveness of the claimed justifications for those exemptions, which already encompass non-economic goals, will be greatly enhanced. We might find that we end up with even more concentration because the exceptions could subsume the rules. All of which of course highlights the fundamental, underlying irony of claims that we need to diminish the economic content of antitrust in order to reduce the political power of private firms: If you make antitrust more political, you’ll get less democratic, more politically determined, results.

[TOTM: The following is the first in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Steven J. Cernak, Partner at Bona Law and Adjunct Professor, University of Michigan Law School and Western Michigan University Thomas M. Cooley Law School. This paper represents the current views of the author alone and not necessarily the views of any past, present or future employer or client.

When some antitrust practitioners hear “the politicization of antitrust,” they cringe while imagining, say, merger approval hanging on the size of the bribe or closeness of the connection with the right politician.  Even a more benign interpretation of the phrase “politicization of antitrust” might drive some antitrust technocrats up the wall:  “Why must the mainstream media and, heaven forbid, politicians start weighing in on what antitrust interpretations, policy and law should be?  Don’t they know that we have it all figured out and, if we decide it needs any tweaks, we’ll make those over drinks at the ABA Antitrust Section Spring Meeting?”

While I agree with the reaction to the cringe-worthy interpretation of “politicization,” I think members of the antitrust community should not be surprised or hostile to the second interpretation, that is, all the new attention from new people.  Such attention is not unusual historically; more importantly, it provides an opportunity to explain the benefits and limits of antitrust enforcement and the competitive process it is meant to protect. 

The Sherman Act itself, along with its state-level predecessors, was the product of a political reaction to perceived problems of the late 19th Century – hence all of today’s references to a “new gilded age” as echoes of the political arguments of 1890.  Since then, the Sherman Act has not been immutable.  The U.S. antitrust laws have changed – and new antitrust enforcers have even been added – when the political debates convinced enough people that change was necessary.  Today’s political discussion may be surprising to many members of the antitrust community because they were not even alive when the last major change was debated and passed.

More generally, the U.S. political position on other government regulation of – or intervention or participation in – free markets has varied considerably over the years.  While controversial when they were passed, we now take Medicare and Medicaid for granted and debate “Medicare for all” – why shouldn’t an overhaul of the Sherman Act also be a legitimate political discussion?  The Interstate Commerce Commission might be gone and forgotten but at one time it garnered political support to regulate the most powerful industries of the late 19th and early 20th Century – why should a debate on new ways to regulate today’s powerful industries be out of the question? 

So today’s antitrust practitioners should avoid the temptation to proclaim an “end of history” in which all antitrust policy questions have been asked and answered, and should instead, as some of us have been suggesting since at least the last election cycle, join the political debate.  But now, for those of us who are generally supportive of the U.S. antitrust status quo, the question is how? 

Some have been pushing back on the supposed evidence that a change in antitrust or other governmental policies is necessary.  For instance, in late 2015 the White House Council of Economic Advisers published a paper on increased concentration in many industries which others have used as evidence of a failure of antitrust law to protect competition.  Josh Wright has used several platforms to point out that the industry measurement was too broad and the concentration level too low to be useful in these discussions.  Also, he reminded readers that concentration and levels of competition are different concepts that are not necessarily linked.  On questions surrounding inequality and stagnation of standards of living, Russ Roberts has produced a series of videos that try to explain why any such questions are difficult to answer with the easy numbers available and why, perhaps, it is not correct that “the rich got all the gains.” 

Others, like Dan Crane, have advanced the debate by trying to get those commentators who are unhappy with the status quo to explain what they see as the problems and the proposed fixes.  While it might be too much to ask for unanimity among a diverse group of commentators, the debate might be more productive now that some more specific complaints and solutions have begun to emerge.

Even if the problems are properly identified, we should not allow anyone to blithely assume that any – or any particular – increase in government oversight will solve them without creating different issues.  The Federal Trade Commission tackled this issue in its final hearing on Competition and Consumer Protection in the 21st Century with a panel on Frank Easterbrook’s seminal “Limits of Antitrust” paper.  I was fortunate enough to be on that panel and tried to summarize the ongoing importance of “Limits,” and advance the broader debate, by encouraging those who would change antitrust policy and increase supervision of the market to have appropriate “regulatory humility” (a term borrowed from former FTC Chairman Maureen Ohlhausen) about what can be accomplished.

I identified three varieties of humility present in “Limits” and pertinent here.  First, there is the humility to recognize that mastering anything as complex as an economy or any significant industry will require knowledge of innumerable items, some unseen or poorly understood, and so could be impossible.  Here, Easterbrook echoes Friedrich Hayek’s “Pretense of Knowledge” Nobel acceptance speech. 

Second, there is the humility to recognize that any judge or enforcer, like any other human being, is subject to her own biases and predilections, whether based on experience or the institutional framework within which she works.  While market participants might not be perfect, great thinkers from Madison to Kovacic have recognized that “men (or any agency leaders) are not angels” either.  As Thibault Schrepel has explained, it would be “romantic” to assume that any newly-empowered government enforcer will always act in the best interest of her constituents. 

Finally, there is the humility to recognize that humanity has been around a long time and faced a number of issues and that we might learn something from how our predecessors reacted to what appear to be similar issues in history.  Given my personal history and current interests, I have focused on events from the automotive industry; however, the story of the unassailable power (until it wasn’t) of A&P and how it spawned the Robinson-Patman Act, ably told by Tim Muris and Jonathan Nuechterlein, might be more pertinent here.  So challenging those advocating for big changes to explain why they are so confident this time around can be useful. 

But while all those avenues of argument can be effective in explaining why greater government intervention in the form of new antitrust policies might be worse than the status quo, we also must do a better job at explaining why antitrust and the market forces it protects are actually good for society.  If democratic capitalism really has “lengthened the life span, made the elimination of poverty and famine thinkable, enlarged the range of human choice” as claimed by Michael Novak in The Spirit of Democratic Capitalism, we should do more to spread that good news. 

Maybe we need to spend more time telling and retelling the “I, Pencil” or “It’s a Wonderful Loaf” stories about how well markets can and do work at coordinating the self-interested behavior of many to the benefit of even more.  Then we can illustrate the limited role of antitrust in that complex effort – say, punishing any collusion among the mills or bakers in those two stories to ensure the process works as beautifully and simply displayed.  For the first time in decades, politicians and real people, like the consumers whose welfare we are supposed to be protecting, are paying attention to our wonderful world of antitrust.  We should seize the opportunity to explain what we do and why it matters and discuss if any improvements can be made.

The operative text of the Sherman Antitrust Act of 1890 is a scant 100 words:

Section 1:

Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is declared to be illegal. Every person who shall make any contract or engage in any combination or conspiracy hereby declared to be illegal shall be deemed guilty of a felony…

Section 2:

Every person who shall monopolize, or attempt to monopolize, or combine or conspire with any other person or persons, to monopolize any part of the trade or commerce among the several States, or with foreign nations, shall be deemed guilty of a felony…

Its short length and broad implications (“Every contract… in restraint of trade… is declared to be illegal”) didn’t give the courts much to go on in terms of textualism. As for originalism, the legislative history of the Sherman Act is mixed, and no consensus currently exists among experts. In practice, that means enforcement of the antitrust laws in the US has been a product of the evolutionary common law process (and has changed over time due to economic learning). 

Over the last fifty years, academics, judges, and practitioners have generally converged on the consumer welfare standard as the best approach for protecting market competition. Although some early supporters of aggressive enforcement (e.g., Brandeis and, more recently, Pitofsky) advocated for a more political conception of antitrust, that conception of the law has been decisively rejected by the courts as the contours of the law have evolved through judicial decisionmaking. 

In the last few years, however, a movement has reemerged to expand antitrust beyond consumer welfare to include political and social issues, ranging from broadly macroeconomic matters like rising income inequality and declining wages, to sociopolitical concerns like increasing political concentration, environmental degradation, a struggling traditional news industry, and declining localism. 

Although we at ICLE are decidedly in the consumer welfare camp, the contested “original intent” of the antitrust laws and the simple progress of evolving interpretation could conceivably support a broader, more-political interpretation. It is, at the very least, a timely and significant question whether and how political and social issues might be incorporated into antitrust law. Yet much of the discussion of politics and antitrust has been heavy on rhetoric and light on substance; it is dominated by non-expert, ideologically driven opinion. 

In this blog symposium we seek to offer a more substantive and balanced discussion of the issue. To that end, we invited a number of respected economists, legal scholars, and practitioners to offer their perspectives. 

The symposium comprises posts by Steve Cernak, Luigi Zingales and Filippo Maria Lancieri, Geoffrey A. Manne and Alec Stapp, Valentin Mircea, Ramsi Woodcock, Kristian Stout, and Cento Veljanovski.

Steve Cernak and the team of Zingales and Lancieri both offer big-picture perspectives. Cernak sees the current debate as “an opportunity to explain the benefits and limits of antitrust enforcement and the competitive process it is meant to protect.” He then urges “regulatory humility” and outlines what this means in the context of antitrust.  

Zingales and Lancieri note that “simply ‘politicizing’ the current antitrust regime would be very dangerous for the economic well-being of nations.” More specifically, they observe that “If used without clear and objective standards, antitrust remedies could easily add an extra layer of uncertainty or could even outright prohibit perfectly legitimate conduct, which would depress competition, investment, and growth.” Nonetheless, they argue that nuanced changes to the application of antitrust law may be justified because “as markets become more concentrated, incumbent firms become better at distorting the political process in their favor.”

Manne and Stapp question the existence of a causal relationship between market concentration and political power, noting that there is little empirical support for such a claim.  Moreover, they warn that politicizing antitrust will inevitably result in more politicized antitrust enforcement actions to the detriment of consumers and democracy. 

Mircea argues that antitrust enforcement in the EU is already too political and that enforcement has been too focused on “Big Tech” companies. The result has been to chill investment in technology firms in the EU while failing to address legitimate antitrust violations in other sectors. 

Woodcock argues that the excessive focus on “Big Tech” companies as antitrust villains has come in no small part from a concerted effort by “Big Ink” (i.e. media companies), who resent the loss of advertising revenue that has resulted from the emergence of online advertising platforms. Woodcock suggests that the solution to this problem is to ban advertising. (We suspect that this cure would be worse than the disease but will leave substantive criticism to another blog post.)

Stout argues that while consumers may have legitimate grievances with Big Tech companies, these grievances do not justify widening the scope of antitrust, noting that “Concerns about privacy, hate speech, and, more broadly, the integrity of the democratic process are critical issues to wrestle with. But these aren’t antitrust problems.”

Finally, Veljanovski highlights potential problems with per se rules against cartels, noting that in some cases (most notably regulation of common pool resources such as fisheries), long-run consumer welfare may be improved by permitting certain kinds of cartel. However, he notes that in the case of polluting firms, a cartel that raises prices and lowers output is not likely to be the most efficient way to reduce the harms associated with pollution. This is of relevance given the DOJ’s case against certain automobile manufacturers, which are accused of colluding with California to set emission standards that are stricter than required under federal law.

It is tempting to conclude that U.S. antitrust law is not fundamentally broken and so does not require a major fix. Indeed, if any fix is needed, it is that the consumer welfare standard should be more widely applied, both in the U.S. and internationally.

An oft-repeated claim of conferences, media, and left-wing think tanks is that lax antitrust enforcement has led to a substantial increase in concentration in the US economy of late, strangling the economy, harming workers, and saddling consumers with greater markups in the process. But what if rising concentration (and the current level of antitrust enforcement) were an indication of more competition, not less?

By now the concentration-as-antitrust-bogeyman story is virtually conventional wisdom, echoed, of course, by political candidates such as Elizabeth Warren trying to cash in on the need for a government response to such dire circumstances:

In industry after industry — airlines, banking, health care, agriculture, tech — a handful of corporate giants control more and more. The big guys are locking out smaller, newer competitors. They are crushing innovation. Even if you don’t see the gears turning, this massive concentration means prices go up and quality goes down for everything from air travel to internet service.  

But the claim that lax antitrust enforcement has led to increased concentration in the US and that it has caused economic harm has been debunked several times (for some of our own debunking, see Eric Fruits’ posts here, here, and here). Or, more charitably to those who tirelessly repeat the claim as if it is “settled science,” it has been significantly called into question.

Most recently, several working papers that look at the concentration data in detail and attempt to identify the likely cause of the observed trends show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

The most recent — and, I believe, most significant — corrective to the conventional story comes from economists Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University. As they write in a recent paper titled, “The Industrial Revolution in Services”: 

We show that new technologies have enabled firms that adopt them to scale production over a large number of establishments dispersed across space. Firms that adopt this technology grow by increasing the number of local markets that they serve, but on average are smaller in the markets that they do serve. Unlike Henry Ford’s revolution in manufacturing more than a hundred years ago when manufacturing firms grew by concentrating production in a given location, the new industrial revolution in non-traded sectors takes the form of horizontal expansion across more locations. At the same time, multi-product firms are forced to exit industries where their productivity is low or where the new technology has had no effect. Empirically we see that top firms in the overall economy are more focused and have larger market shares in their chosen sectors, but their size as a share of employment in the overall economy has not changed. (pp. 42-43) (emphasis added).

This makes perfect sense. And it has the benefit of not second-guessing structural changes made in response to technological change. Rather, it points to technological change as doing what it regularly does: improving productivity.

The implementation of new technology seems to be conferring benefits — it’s just that these benefits are not evenly distributed across all firms and industries. But the assumption that larger firms are causing harm (or even that there is any harm in the first place, whatever the cause) is unmerited. 

What the authors find is that the apparent rise in national concentration doesn’t tell the relevant story, and the data certainly aren’t consistent with assumptions that anticompetitive conduct is either a cause or a result of structural changes in the economy.

Hsieh and Rossi-Hansberg point out that increased concentration is not happening everywhere, but is being driven by just three industries:

First, we show that the phenomena of rising concentration . . . is only seen in three broad sectors – services, wholesale, and retail. . . . [T]op firms have become more efficient over time, but our evidence indicates that this is only true for top firms in these three sectors. In manufacturing, for example, concentration has fallen.

Second, rising concentration in these sectors is entirely driven by an increase [in] the number of local markets served by the top firms. (p. 4) (emphasis added).

These findings are a gloss on a (then) working paper — The Fall of the Labor Share and the Rise of Superstar Firms — by David Autor, David Dorn, Lawrence F. Katz, Christina Patterson, and John Van Reenen (now forthcoming in the QJE). Autor et al. (2019) finds that concentration is rising, and that it is the result of increased productivity:

If globalization or technological changes push sales towards the most productive firms in each industry, product market concentration will rise as industries become increasingly dominated by superstar firms, which have high markups and a low labor share of value-added.

We empirically assess seven predictions of this hypothesis: (i) industry sales will increasingly concentrate in a small number of firms; (ii) industries where concentration rises most will have the largest declines in the labor share; (iii) the fall in the labor share will be driven largely by reallocation rather than a fall in the unweighted mean labor share across all firms; (iv) the between-firm reallocation component of the fall in the labor share will be greatest in the sectors with the largest increases in market concentration; (v) the industries that are becoming more concentrated will exhibit faster growth of productivity; (vi) the aggregate markup will rise more than the typical firm’s markup; and (vii) these patterns should be observed not only in U.S. firms, but also internationally. We find support for all of these predictions. (emphasis added).

This alone is quite important (and seemingly often overlooked). Autor et al. (2019) finds that rising concentration is a result of increased productivity that weeds out less-efficient producers. This is a good thing. 

But Hsieh & Rossi-Hansberg drill down into the data to find something perhaps even more significant: the rise in concentration itself is limited to just a few sectors, and, where it is observed, it is predominantly a function of more efficient firms competing in more — and more localized — markets. This means that competition is increasing, not decreasing, whether it is accompanied by an increase in concentration or not. 
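To see how national and local concentration can move in opposite directions, consider a toy example (hypothetical employment figures, not data from the paper): a top firm that expands from one city into a second raises the national Herfindahl-Hirschman Index (HHI) even as it lowers concentration in the market it enters.

    # A toy illustration (hypothetical employment figures) of the Hsieh &
    # Rossi-Hansberg mechanism: a top firm expanding into a second city raises
    # NATIONAL concentration while LOWERING concentration in that local market.

    def hhi(employment_by_firm):
        """Herfindahl-Hirschman Index (0-1 scale) from firm sizes."""
        total = sum(employment_by_firm.values())
        return sum((e / total) ** 2 for e in employment_by_firm.values())

    def national(*cities):
        """Aggregate firm sizes across local markets."""
        combined = {}
        for city in cities:
            for firm, employment in city.items():
                combined[firm] = combined.get(firm, 0) + employment
        return combined

    # Before: the top firm serves only City A.
    city_a_before = {"TopFirm": 60, "LocalA": 40}
    city_b_before = {"LocalB1": 60, "LocalB2": 40}

    # After: the top firm enters City B, smaller there than the incumbents were.
    city_a_after = {"TopFirm": 60, "LocalA": 40}
    city_b_after = {"TopFirm": 40, "LocalB1": 40, "LocalB2": 30}

    print("National HHI:", round(hhi(national(city_a_before, city_b_before)), 2),
          "->", round(hhi(national(city_a_after, city_b_after)), 2))  # 0.26 -> 0.32
    print("City B HHI:  ", round(hhi(city_b_before), 2),
          "->", round(hhi(city_b_after), 2))                          # 0.52 -> 0.34

National concentration rises, but consumers in City B now face three competing sellers instead of two.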

No matter how many times and under how many monikers the antitrust populists try to revive it, the Structure-Conduct-Performance paradigm remains as moribund as ever. Indeed, on this point, as one of the new antitrust agonists’ own, Fiona Scott Morton, has written (along with co-authors Martin Gaynor and Steven Berry):

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration. As Bresnahan (1989) argued three decades ago, no clear interpretation of the impact of concentration is possible without a clear focus on equilibrium oligopoly demand and “supply,” where supply includes the list of the marginal cost functions of the firms and the nature of oligopoly competition. 

Some of the recent literature on concentration, profits, and markups has simply reasserted the relevance of the old-style structure-conduct-performance correlations. For economists trained in subfields outside industrial organization, such correlations can be attractive. 

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates. Such correlations will not produce information about the causal estimates that policy demands. It is these causal relationships that will help us understand what, if anything, may be causing markups to rise. (emphasis added).

Indeed! And one reason for the enduring irrelevance of market concentration measures is well laid out in Hsieh and Rossi-Hansberg’s paper:

This evidence is consistent with our view that increasing concentration is driven by new ICT-enabled technologies that ultimately raise aggregate industry TFP. It is not consistent with the view that concentration is due to declining competition or entry barriers . . . , as these forces will result in a decline in industry employment. (pp. 4-5) (emphasis added)

The net effect is that there is essentially no change in concentration by the top firms in the economy as a whole. The “super-star” firms of today’s economy are larger in their chosen sectors and have unleashed productivity growth in these sectors, but they are not any larger as a share of the aggregate economy. (p. 5) (emphasis added)

Thus, to begin with, the claim that increased concentration leads to monopsony in labor markets (and thus unemployment) appears to be false. Hsieh and Rossi-Hansberg again:

[W]e find that total employment rises substantially in industries with rising concentration. This is true even when we look at total employment of the smaller firms in these industries. (p. 4)

[S]ectors with more top firm concentration are the ones where total industry employment (as a share of aggregate employment) has also grown. The employment share of industries with increased top firm concentration grew from 70% in 1977 to 85% in 2013. (p. 9)

Firms throughout the size distribution increase employment in sectors with increasing concentration, not only the top 10% firms in the industry, although by definition the increase is larger among the top firms. (p. 10) (emphasis added)

Again, what actually appears to be happening is that national-level growth in concentration is actually being driven by increased competition in certain industries at the local level:

93% of the growth in concentration comes from growth in the number of cities served by top firms, and only 7% comes from increased employment per city. . . . [A]verage employment per county and per establishment of top firms falls. So necessarily more than 100% of concentration growth has to come from the increase in the number of counties and establishments served by the top firms. (p.13)
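A toy decomposition (with made-up numbers, not the paper's data) shows how more than 100% of growth can come from the extensive margin: if a top firm's total employment is the number of counties it serves times its average employment per county, and the per-county figure falls while the firm expands, the county margin must contribute more than all of the observed growth.

    # A toy decomposition (made-up numbers) of the logic in the passage above:
    # top-firm employment = (counties served) x (average employment per county).
    # If employment per county falls while the firm expands, the county margin
    # must account for MORE than 100% of the firm's total employment growth.
    import math

    counties_before, per_county_before = 100, 1_000
    counties_after,  per_county_after  = 250, 900

    total_growth      = math.log((counties_after * per_county_after)
                                 / (counties_before * per_county_before))
    county_margin     = math.log(counties_after / counties_before)
    per_county_margin = math.log(per_county_after / per_county_before)

    print(f"From serving more counties:    {county_margin / total_growth:.0%}")     # ~113%
    print(f"From size within each county:  {per_county_margin / total_growth:.0%}") # ~-13%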

The net effect is a decrease in the power of top firms relative to the economy as a whole, as the largest firms specialize more, and are dominant in fewer industries:

Top firms produce in more industries than the average firm, but less so in 2013 compared to 1977. The number of industries of a top 0.001% firm (relative to the average firm) fell from 35 in 1977 to 17 in 2013. The corresponding number for a top 0.01% firm is 21 industries in 1977 and 9 industries in 2013. (p. 17)

Thus, summing up, technology has led to increased productivity as well as greater specialization by large firms, especially in relatively concentrated industries (exactly the opposite of the pessimistic stories):  

[T]op firms are now more specialized, are larger in the chosen industries, and these are precisely the industries that have experienced concentration growth. (p. 18)

Unsurprisingly (except to some…), the increase in concentration in certain industries does not translate into an increase in concentration in the economy as a whole. In other words, workers can shift jobs between industries, and there is enough geographic and firm mobility to prevent monopsony. (Despite rampant assumptions that increased concentration is constraining labor competition everywhere…).

Although the employment share of top firms in an average industry has increased substantially, the employment share of the top firms in the aggregate economy has not. (p. 15)

It is also simply not clearly the case that concentration is causing prices to rise or otherwise causing any harm. As Hsieh and Rossi-Hansberg note:

[T]he magnitude of the overall trend in markups is still controversial . . . and . . . the geographic expansion of top firms leads to declines in local concentration . . . that could enhance competition. (p. 37)

Indeed, recent papers such as Traina (2018), Gutiérrez and Philippon (2017), and the IMF (2019) have found increasing markups over the last few decades but at much more moderate rates than the famous De Loecker and Eeckhout (2017) study. Other parts of the anticompetitive narrative have been challenged as well. Karabarbounis and Neiman (2018) finds that profits have increased, but are still within their historical range. Rinz (2018) shows decreased wages in concentrated markets but also points out that local concentration has been decreasing over the relevant time period.

None of this should be so surprising. Has antitrust enforcement gotten more lax, leading to greater concentration? According to Vita and Osinski (2018), not so much. And how about the stagnant rate of new firms? Are incumbent monopolists killing off new startups? The more likely — albeit mundane — explanation, according to Hopenhayn et al. (2018), is that increased average firm age is due to an aging labor force. Lastly, the paper from Hsieh and Rossi-Hansberg discussed above is only the latest in a series of papers, including Bessen (2017), Van Reenen (2018), and Autor et al. (2019), that show a rise in fixed costs due to investments in proprietary information technology, which correlates with increased concentration. 

So what is the upshot of all this?

  • First, as noted, employment has not decreased because of increased concentration; quite the opposite. Employment has increased in the industries that have experienced the most concentration at the national level.
  • Second, this result suggests that the rise in concentrated industries has not led to increased market power over labor.
  • Third, concentration itself needs to be understood more precisely. It is not explained by a simple narrative that the economy as a whole has experienced a great deal of concentration and this has been detrimental for consumers and workers. Specific industries have experienced national level concentration, but simultaneously those same industries have become more specialized and expanded competition into local markets. 

Surprisingly (because their paper has been around for a while and yet this conclusion is rarely recited by advocates for more intervention — although they happily use the paper to support claims of rising concentration), Autor et al. (2019) finds the same thing:

Our formal model, detailed below, generates superstar effects from increases in the toughness of product market competition that raise the market share of the most productive firms in each sector at the expense of less productive competitors. . . . An alternative perspective on the rise of superstar firms is that they reflect a diminution of competition, due to a weakening of U.S. antitrust enforcement (Dottling, Gutierrez and Philippon, 2018). Our findings on the similarity of trends in the U.S. and Europe, where antitrust authorities have acted more aggressively on large firms (Gutierrez and Philippon, 2018), combined with the fact that the concentrating sectors appear to be growing more productive and innovative, suggests that this is unlikely to be the primary explanation, although it may be important in some specific industries (see Cooper et al, 2019, on healthcare for example). (emphasis added).

The popular narrative among Neo-Brandeisian antitrust scholars that lax antitrust enforcement has led to concentration detrimental to society is at base an empirical one. The findings of these empirical papers severely undermine the persuasiveness of that story.

Antitrust populists have a long list of complaints about competition policy, including: laws aren’t broad enough or tough enough, enforcers are lax, and judges tend to favor defendants over plaintiffs or government agencies. The populist push got a bump with the New York Times coverage of Lina Khan’s “Amazon’s Antitrust Paradox” in which she advocated breaking up Amazon and applying public utility regulation to platforms. Khan’s ideas were picked up by Sen. Elizabeth Warren, who has a plan for similar public utility regulation and promised to unwind earlier acquisitions by Amazon (Whole Foods and Zappos), Facebook (WhatsApp and Instagram), and Google (Waze, Nest, and DoubleClick).

Khan, Warren, and the other Break Up Big Tech populists don’t clearly articulate how consumers, suppliers — or anyone for that matter — would be better off with their mandated spinoffs. The Khan/Warren plan, however, requires a unique alignment of many factors: Warren must win the White House, Democrats must control both houses of Congress, and judges must substantially shift their thinking. It’s like turning a supertanker on a dime in the middle of a storm. Instead of publishing manifestos and engaging in antitrust hashtag hipsterism, maybe — just maybe — the populists can do something.

The populists seem to have three main grievances:

  • Small firms cannot enter the market or cannot thrive once they enter;
  • Suppliers, including workers, are getting squeezed; and
  • Speculation that someday firms will wake up, realize they have a monopoly, and begin charging noncompetitive prices to consumers.

Each of these grievances can be, and already has been, addressed by antitrust and competition litigation. And, in many cases, these grievances were addressed through private antitrust litigation. For example:

In the US, private actions are available for a wide range of alleged anticompetitive conduct, including coordinated conduct (e.g., price-fixing), single-firm conduct (e.g., predatory pricing), and mergers that would substantially lessen competition. 

If the antitrust populists are so confident that concentration is rising and firms are behaving anticompetitively and consumers/suppliers/workers are being harmed, then why don’t they organize an antitrust lawsuit against the worst of the worst violators? If anticompetitive activity is so obvious and so pervasive, finding compelling cases should be easy.

For example, earlier this year, Shaoul Sussman, a law student at Fordham University, published “Prime Predator: Amazon and the Rationale of Below Average Variable Cost Pricing Strategies Among Negative-Cash Flow Firms” in the Journal of Antitrust Enforcement. Why not put Sussman’s theory to the test by building an antitrust case around it? The discovery process would unleash a treasure trove of cost data and probably more than a few “hot docs.”

Khan argues:

While predatory pricing technically remains illegal, it is extremely difficult to win predatory pricing claims because courts now require proof that the alleged predator would be able to raise prices and recoup its losses. 

However, in her criticism of the court in the Apple e-books litigation, she lays out a clear rationale for courts to revise their thinking on predatory pricing [emphasis added]:

Judge Cote, who presided over the district court trial, refrained from affirming the government’s conclusion. Still, the government’s argument illustrates the dominant framework that courts and enforcers use to analyze predation—and how it falls short. Specifically, the government erred by analyzing the profitability of Amazon’s e-book business in the aggregate and by characterizing the conduct as “loss leading” rather than potentially predatory pricing. These missteps suggest a failure to appreciate two critical aspects of Amazon’s practices: (1) how steep discounting by a firm on a platform-based product creates a higher risk that the firm will generate monopoly power than discounting on non-platform goods and (2) the multiple ways Amazon could recoup losses in ways other than raising the price of the same e-books that it discounted.

Why not put Khan’s cross-subsidy theory to the test by building an antitrust case around it? Surely there’d be a document explaining how the firm expects to recoup its losses. Or, maybe not. Maybe by the firm’s accounting, it’s not losing money on the discounted products. Without evidence, it’s just speculation.
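For concreteness, the recoupment element that Khan says courts now require boils down to a present-value comparison: would the discounted stream of post-predation profits exceed the discounted losses incurred while pricing below cost? Here is a minimal sketch of that arithmetic, with entirely hypothetical numbers not drawn from any actual case:

    # A stylized sketch of recoupment arithmetic (all numbers hypothetical):
    # do discounted post-predation profits exceed the discounted losses
    # incurred during the below-cost pricing period?

    def present_value(cashflows, rate):
        """Present value of {year: cashflow} at a constant discount rate."""
        return sum(cf / (1 + rate) ** t for t, cf in cashflows.items())

    rate = 0.10                                          # assumed discount rate
    predation_losses = {1: 500e6, 2: 500e6}              # below-cost losses, years 1-2
    monopoly_profits = {t: 300e6 for t in range(3, 8)}   # extra profits, years 3-7

    loss_pv   = present_value(predation_losses, rate)
    recoup_pv = present_value(monopoly_profits, rate)
    print(f"PV of predation losses:  ${loss_pv / 1e6:,.0f}M")    # ~$868M
    print(f"PV of recouped profits:  ${recoup_pv / 1e6:,.0f}M")  # ~$940M
    print("Recoupment plausible?", recoup_pv > loss_pv)          # True, on these numbers

Discovery is what would supply real values for those placeholders; without them, as noted, it's just speculation.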

In fairness, one can argue that recent court decisions have made pursuing private antitrust litigation more difficult. For example, the Supreme Court’s decision in Twombly requires an antitrust plaintiff to show more than mere speculation based on circumstantial evidence in order to move forward to discovery. Decisions in matters such as Ashcroft v. Iqbal have made it more difficult for plaintiffs to maintain antitrust claims. Wal-Mart v. Dukes and Comcast Corp. v. Behrend subject antitrust class actions to more rigorous analysis. In Ohio v. Amex, the court ruled that antitrust plaintiffs can’t meet the burden of proof by showing only some effect on some part of a two-sided market.

At the same time, Jeld-Wen indicates that third-party plaintiffs can be awarded damages and obtain divestitures, even after mergers clear. In Jeld-Wen, a competitor filed suit to challenge the consummated Jeld-Wen/Craftmaster merger four years after the DOJ approved the merger without conditions. The challenge was lengthy, but successful, and a district court ordered damages and the divestiture of one of the combined firm’s manufacturing facilities six years after the merger closed.

Despite the possible challenges of pursuing a private antitrust suit, Daniel Crane’s review of US federal court workload statistics concludes the incidence of private antitrust enforcement in the United States has been relatively stable since the mid-1980s — in the range of 600 to 900 new private antitrust filings a year. He also finds resolution by trial has been relatively stable at an average of less than 1 percent a year. Thus, it’s not clear that recent decisions have erected insurmountable barriers to antitrust plaintiffs.

In the US, third parties may fund private antitrust litigation and plaintiffs’ attorneys are allowed to work under a contingency fee arrangement, subject to court approval. A compelling case could be funded by deep-pocketed supporters of the populists’ agenda, big tech haters, or even investors. Perhaps the most well-known example is Peter Thiel’s bankrolling of Hulk Hogan’s takedown of Gawker. Before that, the savings and loan crisis led to a number of forced mergers which were later challenged in court, with the costs partially funded by the issuance of litigation tracking warrants.

The antitrust populist ranks are chock-a-block with economists, policy wonks, and go-getter attorneys. If they are so confident in their claims of rising concentration, bad behavior, and harm to consumers, suppliers, and workers, then they should put those ideas to the test with some slam dunk litigation. The fact that they haven’t suggests they may not have a case.