Archives For Robert Bork

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

Things are heating up in the antitrust world. There is considerable pressure to pass the American Innovation and Choice Online Act (AICOA) before the congressional recess in August—a short legislative window before members of Congress shift their focus almost entirely to campaigning for the mid-term elections. While it would not be impossible to advance the bill after the August recess, it would be a steep uphill climb.

But whether it passes or not, some of the damage from AICOA may already be done. The bill has moved the antitrust dialogue in a direction that will harm innovation and consumers. In this post, I first explain AICOA’s fundamental flaws. Next, I discuss the negative impact the legislation is likely to have if passed, even if courts and agencies do not aggressively enforce its provisions. Finally, I show how AICOA has already handed an intellectual victory to the approach articulated in the European Union’s (EU) Digital Markets Act (DMA). It has built momentum for a dystopian regulatory framework to break up and break into U.S. superstar firms designated as “gatekeepers,” at the expense of innovation and consumers.

The Unseen of AICOA

AICOA’s drafters argue that, once passed, it will deliver numerous economic benefits. Sen. Amy Klobuchar (D-Minn.)—the bill’s main sponsor—has stated that it will “ensure small businesses and entrepreneurs still have the opportunity to succeed in the digital marketplace. This bill will do just that while also providing consumers with the benefit of greater choice online.”

Section 3 of the bill would provide “business users” of the designated “covered platforms” with a wide range of entitlements. These include a prohibition on the covered platform offering any services or products that a business user could provide (the so-called “self-preferencing” prohibition); a right for business users to access the covered platform’s proprietary data; and an entitlement for business users to receive “preferred placement” on a covered platform without having to use any of that platform’s services.

These entitlements would provide non-platform businesses what are effectively claims on the platform’s proprietary assets, notwithstanding the covered platform’s own investments to collect data, create services, and invent products—in short, the platform’s innovative efforts. As such, AICOA is redistributive legislation that creates the conditions for unfair competition in the name of “fair” and “open” competition. It treats the behavior of “covered platforms” differently than identical behavior by their competitors, without considering the deterrent effect such a framework will have on innovation and consumers. Thus, AICOA offers rent-seeking rivals a formidable avenue to reap considerable benefits at innovators’ expense, weaponizing antitrust to subvert, rather than improve, competition.

In mandating that covered platforms make their data and proprietary assets freely available to “business users” and rivals, AICOA undermines the underpinnings of free markets in pursuit of the misguided goal of “open markets.” The inevitable result will be a tragedy of the commons. If covered platforms cannot benefit from their entrepreneurial endeavors, the law no longer encourages innovation. As Joseph Schumpeter seminally predicted: “perfect competition implies free entry into every industry … But perfectly free entry into a new field may make it impossible to enter it at all.”

To illustrate, if business users can freely access, say, a special status on the covered platforms’ ancillary services without having to use any of the covered platform’s services (as required under Section 3(a)(5)), then platforms are disincentivized from inventing zero-priced services, since they cannot cross-monetize these services with existing services. Similarly, if, under Section 3(a)(1) of the bill, business users can stop covered platforms from pre-installing or preferencing an app whenever they happen to offer a similar app, then covered platforms will be discouraged from investing in or creating new apps. Thus, the bill would generate a considerable deterrent effect for covered platforms to invest, invent, and innovate.

AICOA’s most detrimental consequences may not be immediately apparent; they could instead manifest in larger and broader downstream impacts that will be difficult to undo. As the 19th-century French economist Frederic Bastiat wrote: “a law gives birth not only to an effect but to a series of effects. Of these effects, the first only is immediate; it manifests itself simultaneously with its cause—it is seen. The others unfold in succession—they are not seen; it is well for us if they are foreseen … it follows that the bad economist pursues a small present good, which will be followed by a great evil to come, while the true economist pursues a great good to come, at the risk of a small present evil.”

To paraphrase Bastiat, AICOA offers ill-intentioned rivals a “small present good”–i.e., unconditional access to the platforms’ proprietary assets–while society suffers the loss of a greater good–i.e., incentives to innovate and welfare gains to consumers. The logic is akin to that of those who advocate abolishing intellectual-property rights: the immediate (and seen) gain is obvious, namely the wider dissemination of innovations and a reduction in their price, while the subsequent (and unseen) evil remains opaque, as the destruction of the institutional premises for innovation will generate considerable long-term innovation costs.

Fundamentally, AICOA weakens the benefits of scale by pursuing vertical disintegration of the covered platforms to the benefit of short-term static competition. In the long term, however, the bill would dampen dynamic competition, ultimately harming consumer welfare and the capacity for innovation. The measure’s opportunity cost is the covered platforms’ forgone innovations, which would otherwise have benefited business users and consumers. These forgone innovations personify the “unseen,” as Bastiat put it: “[they are] always in the shadow, and who, personifying what is not seen, [are] an essential element of the problem. [They make] us understand how absurd it is to see a profit in destruction.”

The costs could well amount to hundreds of billions of dollars for the U.S. economy, even before accounting for the costs of deterred innovation. The unseen is costly, the seen is cheap.

A New Robinson-Patman Act?

Most antitrust laws are terse, vague, and old: The Sherman Act of 1890, the Federal Trade Commission Act, and the Clayton Act of 1914 deal largely in generalities, leaving considerable room for courts to elaborate, in the common-law tradition, on what “restraints of trade,” “monopolization,” or “unfair methods of competition” mean.

In 1936, Congress passed the Robinson-Patman Act, designed to protect competitors from the then-disruptive competition of large firms that—thanks to scale and practices such as price differentiation—upended traditional incumbents to the benefit of consumers. Passed after “Congress made no factual investigation of its own, and ignored evidence that conflicted with accepted rhetoric,” the law prohibits price differentials that would benefit buyers, and ultimately consumers, in order to shield less efficient firms from the more vigorous competition of more productive rivals. Indeed, under the Robinson-Patman Act, manufacturers cannot give a bigger discount to a distributor who would pass these savings on to consumers, even if the distributor performs extra services relative to others.

President Gerald Ford declared in 1975 that the Robinson-Patman Act “is a leading example of [a law] which restrain[s] competition and den[ies] buyers substantial savings. … It discourages both large and small firms from cutting prices, making it harder for them to expand into new markets and pass on to customers the cost-savings on large orders.” Despite this, calls to amend or repeal the Robinson-Patman Act—supported by, among others, competition scholars like Herbert Hovenkamp and Robert Bork—have failed.

In the 1983 Abbott decision, Justice Lewis Powell wrote: “The Robinson-Patman Act has been widely criticized, both for its effects and for the policies that it seeks to promote. Although Congress is aware of these criticisms, the Act has remained in effect for almost half a century.”

Nonetheless, the act’s enforcement dwindled, thanks to wise restraint from the antitrust agencies and the courts. While it is seldom enforced today, the act continues to create considerable legal uncertainty, as it raises regulatory risks for companies that engage in behavior that may conflict with its provisions. Indeed, many of the same so-called “neo-Brandeisians” who support passage of AICOA also advocate reinvigorating Robinson-Patman. More specifically, the new FTC majority has signaled that it is eager to revitalize Robinson-Patman, even though the law protects less efficient competitors. In other words, the Robinson-Patman Act is a zombie law: dead, but still moving.

Even if the antitrust agencies and courts ultimately follow the same path of regulatory and judicial restraint on AICOA that they have on Robinson-Patman, the legal uncertainty its existence will engender will act as a powerful deterrent to disruptive competition that dynamically benefits consumers and innovation. In short, as with the Robinson-Patman Act, antitrust agencies and courts will either enforce AICOA (thus generating the law’s adverse effects on consumers and innovation) or refrain from enforcing it, in which case the legal uncertainty will still lead to unseen, harmful effects on innovation and consumers.

For instance, the bill’s prohibition on “self-preferencing” in Section 3(a)(1) will prevent covered platforms from offering consumers new products and services that happen to compete with incumbents’ products and services. Self-preferencing often is a pro-competitive, pro-efficiency practice that companies widely adopt—a reality that AICOA seems to ignore.

Would AICOA prevent, e.g., Apple from offering a bundled subscription to Apple One, which includes Apple Music, so that the company can effectively compete with incumbents like Spotify? As with Robinson-Patman, antitrust agencies and courts will have to choose whether to enforce a productivity-decreasing law, or to ignore congressional intent but, in the process, generate significant legal uncertainties.

Judge Bork once wrote that Robinson-Patman was “antitrust’s least glorious hour” because, rather than improving competition and innovation, it reduced competition from firms that happened to be more productive, innovative, and efficient than their rivals. The law infamously protected inefficient competitors rather than competition. But from the perspective of legislative history, AICOA may be antitrust’s new “least glorious hour.” If adopted, it will adversely affect innovation and consumers, as opportunistic rivals will be able to prevent cost-saving practices by the covered platforms.

As with Robinson-Patman, calls to amend or repeal AICOA may follow its passage. But the Robinson-Patman Act illustrates the path dependency of bad antitrust laws. However costly and damaging, AICOA would likely stay in place, with regular calls for either stronger or weaker enforcement, depending on whether momentum lies with populist antitrust or with an antitrust more consistent with dynamic competition.

Victory of the Brussels Effect

The future of AICOA does not bode well for markets, either from a historical perspective or from a comparative-law perspective. The EU’s DMA similarly targets a few large tech platforms but it is broader, harsher, and swifter. In the competition between these two examples of self-inflicted techlash, AICOA will pale in comparison with the DMA. Covered platforms will be forced to align with the DMA’s obligations and prohibitions.

Consequently, AICOA is a victory for the DMA and for the Brussels effect in general. AICOA effectively crowns the DMA as the all-encompassing regulatory assault on digital gatekeepers. While members of Congress have introduced numerous antitrust bills aimed at gatekeepers, the DMA is the one-stop-shop regulation that encompasses multiple antitrust bills and imposes broader prohibitions and stronger obligations on gatekeepers. In other words, the DMA outcompetes AICOA.

Commentators seldom lament the extraterritorial impact of European regulations. When it comes to regulating digital gatekeepers, U.S. officials should have pushed back against the innovation-stifling, welfare-decreasing effects of the DMA on U.S. tech companies, in particular, and on U.S. technological innovation, in general. To be fair, a few U.S. officials, such as Commerce Secretary Gina Raimondo, did voice opposition to the DMA. Indeed, well aware of the DMA’s protectionist intent and its potential to break up and break into tech platforms, Raimondo stressed that antitrust should not be about protecting competitors and deterring innovation, but rather about protecting the process of competition, however disruptive it may be.

The influential neo-Brandeisians and radical antitrust reformers, however, lashed out at Raimondo and effectively shamed the Biden administration into embracing the DMA (and its sister regulation, AICOA). Brussels did not have to exert its regulatory overreach; the U.S. administration happily imports and emulates European overregulation. There is no better way for European officials to see their dreams come true: a techlash against U.S. digital platforms that enjoys the support of local officials.

In that regard, AICOA has already played a significant role in shaping the intellectual mood in Washington and in altering the course of U.S. antitrust. Members of Congress designed AICOA along the lines pioneered by the DMA. Sen. Klobuchar has argued that America should emulate European competition policy regarding tech platforms. Lina Khan, now chair of the FTC, co-authored the U.S. House Antitrust Subcommittee report, which recommended adopting the European concept of “abuse of dominant position” in U.S. antitrust. In her current position, Khan now praises the DMA. Tim Wu, competition counsel for the White House, has praised European competition policy and officials. Indeed, the neo-Brandeisians have not only praised the European Commission’s fines against U.S. tech platforms (despite early criticisms from former President Barack Obama) but have, more dramatically, called for the United States to imitate the European regulatory framework.

In this regulatory race to inefficiency, the standard is set in Brussels with the blessing of U.S. officials. Not even the precedent set by the EU’s General Data Protection Regulation (GDPR) fully captures the effects the DMA will have. Privacy laws passed by U.S. states have mostly reacted to the reality of the GDPR. With AICOA, Congress is proactively anticipating, emulating, and welcoming the DMA before it has even been adopted. The intellectual and policy shift is historic, and so is the policy error.

AICOA and the Boulevard of Broken Dreams

AICOA is a failure similar to the Robinson-Patman Act and a victory for the Brussels effect and the DMA. Consumers will be the collateral damage, and the unseen effects on innovation will take years to materialize. Calls to amend or repeal AICOA are likely to fail, so its inevitable costs will be borne indefinitely by consumers and by the dynamics of innovation.

AICOA illustrates the neo-Brandeisian opposition to large innovative companies. Joseph Schumpeter warned against such hostility and its tendency to disincentivize entrepreneurs from innovating when he wrote:

Faced by the increasing hostility of the environment and by the legislative, administrative, and judicial practice born of that hostility, entrepreneurs and capitalists—in fact the whole stratum that accepts the bourgeois scheme of life—will eventually cease to function. Their standard aims are rapidly becoming unattainable, their efforts futile.

President William Howard Taft once said, “the world is not going to be saved by legislation.” AICOA will not save antitrust, nor will it save consumers. To paraphrase Schumpeter, the bill’s drafters “walked into our future as we walked into the war, blindfolded.” AICOA’s promises of greater competition, a fairer marketplace, greater consumer choice, and more consumer benefits will ultimately scatter across the boulevard of broken dreams.

The Baron de Montesquieu once wrote that legislators should only change laws with a “trembling hand”:

It is sometimes necessary to change certain laws. But the case is rare, and when it happens, they should be touched only with a trembling hand: such solemnities should be observed, and such precautions taken, that the people will naturally conclude that the laws are indeed sacred, since it takes so many formalities to abrogate them.

AICOA’s drafters had a clumsy hand, coupled with what Friedrich Hayek would call “a pretense of knowledge.” They were certain they were doing social good and incapable of imagining that they might do social harm. The future will remember AICOA as the new antitrust’s least glorious hour, in which consumers and innovation were sacrificed on the altar of a revitalized populist view of antitrust.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

In Free to Choose, Milton Friedman famously noted that there are four ways to spend money[1]:

  1. Spending your own money on yourself. For example, buying groceries or lunch. There is a strong incentive to economize and to get full value.
  2. Spending your own money on someone else. For example, buying a gift for another. There is a strong incentive to economize, but perhaps less to achieve full value from the other person’s point of view. Altruism is admirable, but it differs from value maximization, since—strictly speaking—giving cash would maximize the other’s value. Perhaps the point of a gift is precisely that it is not cash, and thus not simply the maximization of the other person’s welfare from their own point of view.
  3. Spending someone else’s money on yourself. For example, an expensed business lunch. “Pass me the filet mignon and Chateau Lafite! Do you have one of those menus without any prices?” There is a strong incentive to get maximum utility, but there is little incentive to economize.
  4. Spending someone else’s money on someone else. For example, applying the proceeds of taxes or donations. There may be an indirect desire to see utility, but incentives for quality and cost management are often diminished.

This framework can be criticized. Altruism has a role. Not all motives are selfish. There is an important role for action to help those less fortunate, which might mean, for instance, that a charity gains more utility from category (4) (assisting the needy) than from category (3) (the charity’s holiday party). It always depends on the facts and the context. However, there is certainly a grain of truth in the observation that charity begins at home and that, in the final analysis, people are best at managing their own affairs.

How would this insight apply to data interoperability? The difficult cases of assisting the needy do not arise here: there is no serious sense in which data interoperability does, or does not, result in destitution. Thus, Friedman’s observations seem to ring true: when spending data, those whose data it is seem most likely to maximize its value. This is especially so where collection of data responds to incentives—that is, the amount of data collected and processed responds to how much control over the data is possible.

The obvious exception to this would be a case of market power. If there is a monopoly with persistent barriers to entry, then the incentive may not be to maximize total utility; the monopolist may instead limit data handling so that a higher price can be charged for the lesser amount of data that remains available. This has arguably been seen with some data-handling rules: the “Jedi Blue” agreement on advertising bidding, Apple’s Intelligent Tracking Prevention and App Tracking Transparency, and Google’s proposed Privacy Sandbox all restrict the ability of others to handle data. Indeed, they may fail Friedman’s framework, since they amount to the platform deciding how to spend others’ data—in this case, by not allowing them to collect and process it at all.

It should be emphasized, though, that this is a special case. It depends on market power, and existing antitrust and competition laws speak to it. The courts will decide whether cases like Daily Mail v Google and Texas et al. v Google show illegal monopolization of data flows, so as to fall within this special case of market power. Outside the United States, cases like the U.K. Competition and Markets Authority’s Google Privacy Sandbox commitments and the European Union’s proposed commitments with Amazon seek to allow others to continue to handle their data and to prevent exclusivity from arising from platform dynamics, which could happen if a large platform prevents others from deciding how to account for data they are collecting. It will be recalled that even Robert Bork thought that there was risk of market power harms from the large Microsoft Windows platform a generation ago.[2] Where market power risks are proven, there is a strong case that data exclusivity raises concerns because of an artificial barrier to entry. It would only be if the benefits of centralized data control were to outweigh the deadweight loss from data restrictions that this would be untrue (though query how well the legal processes verify this).

Yet the latest proposals go well beyond this. A broad interoperability right amounts to “open season” for spending others’ data. This makes perfect sense in the European Union, where there is no large domestic technology platform, meaning that the data is essentially owned by foreign entities (mostly, the shareholders of successful U.S. and Chinese companies). It must be very tempting to run an industrial policy on the basis that “we’ll never be Google” and thus to embrace “sharing is caring” as to others’ data.

But this would transgress the warning from Friedman: would people optimize data collection if it is open to mandatory sharing even without proof of market power? It is deeply concerning that the EU’s DATA Act is accompanied by an infographic that suggests that coffee-machine data might be subject to mandatory sharing, to allow competition in services related to the data (e.g., sales of pods; spare-parts automation). There being no monopoly in coffee machines, this simply forces vertical disintegration of data collection and handling. Why put a data-collection system into a coffee maker at all, if it is to be a common resource? Friedman’s category (4) would apply: the data is taken and spent by another. There is no guarantee that there would be sensible decision making surrounding the resource.

It will be interesting to see how common-law jurisdictions approach this issue. At the risk of stating the obvious, the polity in continental Europe differs from that in the English-speaking democracies when it comes to whether the collective, or the individual, should be in the driving seat. A close read of the UK CMA’s Google commitments is interesting, in that paragraph 30 requires no self-preferencing in data collection and requires future data-handling systems to be designed with impacts on competition in mind. No doubt the CMA is seeking to prevent data-handling exclusivity on the basis that this prevents companies from using their data collection to compete. This is far from the EU DATA Act’s position in that it is certainly not a right to handle Google’s data: it is simply a right to continue to process one’s own data.

U.S. proposals are at an earlier stage. It would seem important, as a matter of principle, not to make arbitrary decisions about vertical integration in data systems, and to identify specific market-power concerns instead, in line with common-law approaches to antitrust.

It might be very attractive to the EU to spend others’ data on their behalf, but that does not make it right. Those working on the U.S. proposals would do well to ensure that there is a meaningful market-power gate to avoid unintended consequences.

Disclaimer: The author was engaged for expert advice relating to the UK CMA’s Privacy Sandbox case on behalf of the complainant Marketers for an Open Web.


[1] Milton Friedman, Free to Choose, 1980, pp. 115-119.

[2] Comments at the Yale Law School conference, Robert H. Bork’s influence on Antitrust Law, Sep. 27-28, 2013.

In a recent op-ed, Robert Bork Jr. laments the Biden administration’s drive to jettison the Consumer Welfare Standard that has formed nearly half a century of antitrust jurisprudence. The move can be seen in the near-revolution at the Federal Trade Commission, in the president’s executive order on competition enforcement, and in several of the major antitrust bills currently before Congress.

Bork notes that the Competition and Antitrust Law Enforcement Reform Act, introduced by Sen. Amy Klobuchar (D-Minn.), would “outlaw any mergers or acquisitions for the more than 80 large U.S. companies valued over $100 billion.”

Bork is correct that it will be more than 80 companies, but it is likely to be way more. While the Klobuchar bill does not explicitly outlaw such mergers, it does, under certain circumstances, shift the burden of proof to the merging parties, who must demonstrate that the benefits of the transaction outweigh the potential risks. Under current law, the burden is on the government to demonstrate that the potential costs outweigh the potential benefits.

One of the measure’s specific triggers for this burden-shifting is if the acquiring party has a market capitalization, assets, or annual net revenue of more than $100 billion and seeks a merger or acquisition valued at $50 million or more. About 120 or more U.S. companies satisfy at least one of these conditions. The end of this post provides a list of publicly traded companies, according to Zacks’ stock screener, that would likely be subject to the shift in burden of proof.
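As a rough illustration of the burden-shifting trigger just described (a sketch based only on this summary of the bill, not on the statutory text; the company figures in the example are hypothetical):

```python
# Rough sketch of the burden-shifting trigger described above, based only on
# this post's summary of the Klobuchar bill (not the statutory language).
# All figures are in U.S. dollars; the example at the bottom is hypothetical.

ACQUIRER_SIZE_THRESHOLD = 100_000_000_000  # $100 billion
DEAL_VALUE_THRESHOLD = 50_000_000          # $50 million


def burden_shifts(market_cap: float, assets: float, net_revenue: float,
                  deal_value: float) -> bool:
    """Return True if, under the summary above, the burden of proof would
    shift to the merging parties."""
    acquirer_is_large = max(market_cap, assets, net_revenue) > ACQUIRER_SIZE_THRESHOLD
    deal_is_large_enough = deal_value >= DEAL_VALUE_THRESHOLD
    return acquirer_is_large and deal_is_large_enough


# Hypothetical example: a $120B-market-cap acquirer buying a $60M target.
print(burden_shifts(market_cap=120e9, assets=30e9, net_revenue=20e9,
                    deal_value=60e6))  # True
```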

If the goal is to go after Big Tech, the Klobuchar bill hits the mark. All of the FAANG companies—Facebook, Amazon, Apple, Netflix, and Alphabet (formerly known as Google)—satisfy one or more of the criteria. So do Microsoft and PayPal.

But even some smaller tech firms will be subject to the shift in burden of proof. Zoom and Square have market caps that would trigger the burden shift under Klobuchar’s bill, and Snap is hovering around $100 billion in market cap. Twitter and eBay, however, are well under any of the thresholds. Likewise, privately owned Advance Publications, owner of Reddit, would also likely fall short of any of the triggers.

Snapchat has a little more than 300 million monthly active users. Twitter and Reddit each have about 330 million monthly active users. Nevertheless, under the Klobuchar bill, Snapchat is presumed to have more market power than either Twitter or Reddit, simply because the market assigns a higher valuation to Snap.

But this bill is about more than Big Tech. Tesla, which sold its first car only 13 years ago, is now considered big enough that it will face the same antitrust scrutiny as the Big 3 automakers. Walmart, Costco, and Kroger would be subject to the shifted burden of proof, while Safeway and Publix would escape such scrutiny. An acquisition by U.S.-based Nike would be put under the microscope, but a similar acquisition by Germany’s Adidas would not fall under the Klobuchar bill’s thresholds.

Tesla accounts for less than 2% of the vehicles sold in the United States. I have no idea what Walmart, Costco, Kroger, or Nike’s market share is, or even what comprises “the” market these companies compete in. What we do know is that the U.S. Department of Justice and Federal Trade Commission excel at narrowly crafting market definitions so that just about any company can be defined as dominant.

So much of the recent interest in antitrust has focused on Big Tech. But even the biggest of Big Tech firms operate in dynamic and competitive markets. None of my four children use Facebook or Twitter. My wife and I don’t use Snapchat. We all use Netflix, but we also use Hulu, Disney+, HBO Max, YouTube, and Amazon Prime Video. None of these services have a monopoly on our eyeballs, our attention, or our pocketbooks.

The antitrust bills currently working their way through Congress abandon the long-standing balancing of pro- versus anti-competitive effects of mergers in favor of a “big is bad” approach. While the Klobuchar bill appears to provide clear guidance on the thresholds triggering a shift in the burden of proof, the arbitrary nature of those thresholds will result in arbitrary application of the burden of proof. If the bill passes, we will soon be faced with a case in which two firms that differ only in market cap, assets, or sales are subject to very different antitrust scrutiny, resulting in regulatory chaos.

Publicly traded companies with more than $100 billion in market capitalization

3M | Danaher Corp. | PepsiCo
Abbott Laboratories | Deere & Co. | Pfizer
AbbVie | Eli Lilly and Co. | Philip Morris International
Adobe Inc. | ExxonMobil | Procter & Gamble
Advanced Micro Devices | Facebook Inc. | Qualcomm
Alphabet Inc. | General Electric Co. | Raytheon Technologies
Amazon | Goldman Sachs | Salesforce
American Express | Honeywell | ServiceNow
American Tower | IBM | Square Inc.
Amgen | Intel | Starbucks
Apple Inc. | Intuit | Target Corp.
Applied Materials | Intuitive Surgical | Tesla Inc.
AT&T | Johnson & Johnson | Texas Instruments
Bank of America | JPMorgan Chase | The Coca-Cola Co.
Berkshire Hathaway | Lockheed Martin | The Estée Lauder Cos.
BlackRock | Lowe’s | The Home Depot
Boeing | Mastercard | The Walt Disney Co.
Bristol Myers Squibb | McDonald’s | Thermo Fisher Scientific
Broadcom Inc. | Medtronic | T-Mobile US
Caterpillar Inc. | Merck & Co. | Union Pacific Corp.
Charles Schwab Corp. | Microsoft | United Parcel Service
Charter Communications | Morgan Stanley | UnitedHealth Group
Chevron Corp. | Netflix | Verizon Communications
Cisco Systems | NextEra Energy | Visa Inc.
Citigroup | Nike Inc. | Walmart
Comcast | Nvidia | Wells Fargo
Costco | Oracle Corp. | Zoom Video Communications
CVS Health | PayPal

Publicly traded companies with more than $100 billion in current assets

Ally Financial | Freddie Mac
American International Group | KeyBank
BNY Mellon | M&T Bank
Capital One | Northern Trust
Citizens Financial Group | PNC Financial Services
Fannie Mae | Regions Financial Corp.
Fifth Third Bank | State Street Corp.
First Republic Bank | Truist Financial
Ford Motor Co. | U.S. Bancorp

Publicly traded companies with more than $100 billion in sales

AmerisourceBergen | Dell Technologies
Anthem | General Motors
Cardinal Health | Kroger
Centene Corp. | McKesson Corp.
Cigna | Walgreens Boots Alliance

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

As one of the few economic theorists in this symposium, I believe my comparative advantage lies there: economic theory. In this post, I want to remind people of the basic economic theories that we have at our disposal, “off the shelf,” to make sense of the U.S. Department of Justice’s lawsuit against Google. I do not mean this to be a proclamation of “what economics has to say about X,” but merely to help us frame the issue.

In particular, I’m going to focus on the economic concerns of Google paying phone manufacturers (Apple, in particular) to be the default search engine installed on phones. While there is not a large literature on the economic effects of default contracts, there is a large literature on something that I will argue is similar: trade promotions, such as slotting contracts, where a manufacturer pays a retailer for shelf space. Despite all the bells and whistles of the Google case, I will argue that, from an economic point of view, the contracts that Google signed are just trade promotions. No more, no less. And trade promotions are well-established as part of a competitive process that ultimately helps consumers. 

However, it is theoretically possible that such trade promotions hurt customers, so it is theoretically possible that Google’s contracts hurt consumers. Ultimately, the theoretical possibility of anticompetitive behavior that harms consumers does not seem plausible to me in this case.

Default Status

There are two reasons that Google paying Apple to be its default search engine is similar to a trade promotion. First, the deal brings awareness to the product, which nudges certain consumers/users to choose the product when they would not otherwise do so. Second, the deal does not prevent consumers from choosing the other product.

In the case of retail trade promotions, a promotional space given to Coca-Cola makes it marginally easier for consumers to pick Coke, and therefore some consumers will switch from Pepsi to Coke. But it does not reduce any consumer’s choice. The store will still have both items.

This is the same for a default search engine. The marginal searchers, who do not have a strong preference for either search engine, will stick with the default. But anyone can still install a new search engine, install a new browser, etc. It takes a few clicks, just as it takes a few steps to walk down the aisle to get the Pepsi; it is still an available choice.

If we were to stop the analysis there, we could conclude that consumers are worse off (if just a tiny bit): some customers will have to change the default app. But we also need to remember that this contract is part of a more general competitive process. The retail stores are also competing with one another, as are smartphone manufacturers.

Despite popular claims to the contrary, Apple cannot charge anything it wants for its phone. It is competing with Samsung, etc. Therefore, Apple has to pass through some of Google’s payments to customers in order to compete with Samsung. Prices are lower because of this payment. As I phrased it elsewhere, Google is effectively subsidizing the iPhone. This cross-subsidization is a part of the competitive process that ultimately benefits consumers through lower prices.

These contracts lower consumer prices, even if we assume that Apple has market power. Those who recall their Econ 101 know that a monopolist chooses a quantity where marginal revenue equals marginal cost. With a payment from Google, the marginal cost of producing a phone is lower, so Apple will increase quantity and lower the price. This is shown below:
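A minimal algebraic sketch of that standard result, assuming linear demand and a constant per-unit payment from Google (my own illustration of the textbook logic, not the post’s original figure):

```latex
% Linear inverse demand P = a - bQ, constant marginal cost c, and a per-unit
% payment s from Google that lowers Apple's effective marginal cost to c - s.
% The monopolist sets marginal revenue equal to marginal cost:
%   MR = a - 2bQ = c - s
\[
Q^{*} = \frac{a - c + s}{2b},
\qquad
P^{*} = a - bQ^{*} = \frac{a + c - s}{2}.
\]
% Relative to no payment (s = 0), quantity rises by s/(2b) and price falls by
% s/2: part of Google's payment is passed through to phone buyers even when
% Apple has market power.
```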

One of the surprising things about markets is that buyers’ and sellers’ incentives can be aligned, even though it seems like they must be adversarial. Companies can indirectly bargain for their consumers. Commenting on Standard Fashion Co. v. Magrane-Houston Co., where a retail store contracted to only carry Standard’s products, Robert Bork (1978, pp. 306–7) summarized this idea as follows:

The store’s decision, made entirely in its own interest, necessarily reflects the balance of competing considerations that determine consumer welfare. Put the matter another way. If no manufacturer used exclusive dealing contracts, and if a local retail monopolist decided unilaterally to carry only Standard’s patterns because the loss in product variety was more than made up in the cost saving, we would recognize that decision was in the consumer interest. We do not want a variety that costs more than it is worth … If Standard finds it worthwhile to purchase exclusivity … the reason is not the barring of entry, but some more sensible goal, such as obtaining the special selling effort of the outlet.

How trade promotions could harm customers

Since Bork’s writing, many theoretical papers have shown exceptions to Bork’s logic. There are times when retailers’ incentives are not aligned with those of their customers. And we need to take those possibilities seriously.

The most common way to show the harm of these deals (or more commonly exclusivity deals) is to assume:

  1. There are large, fixed costs so that a firm must acquire a sufficient number of customers in order to enter the market; and
  2. An incumbent can lock in enough customers to prevent the entrant from reaching an efficient size.

Consumers can be locked in because there is some fixed cost of changing suppliers or because of some coordination problems. If that’s true, customers can be made worse off, on net, because the Google contracts reduce consumer choice.

To understand the logic, let’s simplify the model to just search engines and searchers. Suppose there are two search engines (Google and Bing) and 10 searchers. However, to operate profitably, each search engine needs at least three searchers. If Google can entice eight searchers to use its product, Bing cannot operate profitably, even if Bing provides a better product. This holds even if everyone knows Bing would be a better product. The consumers are stuck in a coordination failure.
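A toy numerical rendering of that story, using only the numbers from the paragraph above (an illustrative sketch, not a claim about actual market data):

```python
# Toy version of the coordination-failure story above: 10 searchers, two
# engines, and a minimum viable scale of 3 searchers per engine. Quality is
# irrelevant to viability once enough users are locked in. The numbers come
# from the post; the model itself is only an illustrative sketch.

TOTAL_SEARCHERS = 10
MIN_VIABLE_SCALE = 3


def entrant_viable(locked_to_incumbent: int) -> bool:
    """Can the entrant (Bing) reach minimum viable scale, given how many
    searchers are locked in to the incumbent (Google)?"""
    reachable = TOTAL_SEARCHERS - locked_to_incumbent
    return reachable >= MIN_VIABLE_SCALE


for locked in range(TOTAL_SEARCHERS + 1):
    print(f"locked to Google: {locked:2d} -> Bing viable: {entrant_viable(locked)}")
# With 8 (or more) searchers locked in, Bing is not viable even if it is the
# better product -- the coordination failure described above.
```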

We should be skeptical of coordination failure models of inefficient outcomes. The problem with any story of coordination failures is that it is highly sensitive to the exact timing of the model. If Bing can preempt Google and offer customers an even better deal (the new entrant is better by assumption), then the coordination failure does not occur.

To argue that Bing could not execute a similar contract, the most common appeal is that the new entrant does not have the capital to pay upfront for these contracts, since it will only make money from its higher-quality search engine down the road. That makes sense until you remember that we are talking about Microsoft. I’m skeptical that capital is the real constraint. It seems much more likely that Google just has a more popular search engine.

The other problem with coordination failure arguments is that they are almost non-falsifiable. There is no way to tell, in the model, whether Google is used because of a coordination failure or whether it is used because it is a better product. If Google is a better product, then the outcome is efficient. The two outcomes are “observationally equivalent.” Compare this to the standard theory of monopoly, where we can (in principle) establish an inefficiency if the price is greater than marginal cost. While it is difficult to measure marginal cost, it can be done.

There is a general economic idea in these models that we need to pay attention to. If Google takes an action that prevents Bing from reaching efficient size, that may be an externality, sometimes called a network effect, and so that action may hurt consumer welfare.

I’m not sure how seriously to take these network effects. If more searchers allow Bing to make a better product, then literally any action (competitive or not) by Google is an externality. Making a better product that takes away consumers from Bing lowers Bing’s quality. That is, strictly speaking, an externality. Surely, that is not worthy of antitrust scrutiny simply because we find an externality.

And Bing also “takes away” searchers from Google, thus lowering Google’s possible quality. With network effects, bigger is better and it may be efficient to have only one firm. Surely, that’s not an argument we want to put forward as a serious antitrust analysis.

Put more generally, it is not enough to scream “NETWORK EFFECT!” and then have the antitrust authority come in, lawsuits-a-blazing. Well, it shouldn’t be enough.

For me to take the network effect argument seriously from an economic point of view (as opposed to a legal one), I would need to see a real restriction on consumer choice, not just an externality. One needs to argue that:

  1. No competitor can cover their fixed costs to make a reasonable search engine; and
  2. These contracts are what prevent the competing search engines from reaching that size.

That’s the challenge I would like to put forward to supporters of the lawsuit. I’m skeptical.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.]

This post is authored by William J. Kolasky (Partner, Hughes Hubbard & Reed; former Deputy Assistant Attorney General, DOJ Antitrust Division), and Philip A. Giordano (Partner, Hughes Hubbard & Reed LLP).

[Kolasky & Giordano: The authors thank Katherine Taylor, an associate at Hughes Hubbard & Reed, for her help in researching this article.]

On January 10, the Department of Justice (DOJ) withdrew the 1984 DOJ Non-Horizontal Merger Guidelines, and, together with the Federal Trade Commission (FTC), released new draft 2020 Vertical Merger Guidelines (“DOJ/FTC draft guidelines”) on which it seeks public comment by February 26.[1] In announcing these new draft guidelines, Makan Delrahim, the Assistant Attorney General for the Antitrust Division, acknowledged that while many vertical mergers are competitively beneficial or neutral, “some vertical transactions can raise serious concern.” He went on to explain that, “The revised draft guidelines are based on new economic understandings and the agencies’ experience over the past several decades and better reflect the agencies’ actual practice in evaluating proposed vertical mergers.” He added that he hoped these new guidelines, once finalized, “will provide more clarity and transparency on how we review vertical transactions.”[2]

While we agree with the DOJ and FTC that the 1984 Non-Horizontal Merger Guidelines are now badly outdated and that a new set of vertical merger guidelines is needed, we question whether the draft guidelines released on January 10 will provide the desired “clarity and transparency.” In our view, the proposed guidelines give insufficient recognition to the wide range of efficiencies that flow from most, if not all, vertical mergers. In addition, the guidelines fail to provide sufficiently clear standards for challenging vertical mergers, thereby leaving too much discretion in the hands of the agencies as to when they will challenge a vertical merger and too much uncertainty for businesses contemplating a vertical merger.

What is most troubling is that this did not need to be so. In 2008, the European Commission, as part of its merger process reform initiative, issued an excellent set of non-horizontal merger guidelines that adopt basically the same analytical framework as the new draft guidelines for evaluating vertical mergers.[3] The EU guidelines, however, lay out in much more detail the factors the Commission will consider and the standards it will apply in evaluating vertical transactions. That being so, it is difficult to understand why the DOJ and FTC did not propose a set of vertical merger guidelines that more closely mirror those of the European Commission, rather than try to reinvent the wheel with a much less complete set of guidelines.

Rather than making the same mistake ourselves, we will try to summarize the EU vertical merger guidelines and to explain why we believe they are markedly better than the draft guidelines the DOJ and FTC have proposed. We would urge the DOJ and FTC to consider revising their draft guidelines to make them more consistent with the EU vertical merger guidelines. Doing so would, among other things, promote greater convergence between the two jurisdictions, which is very much in the interest of both businesses and consumers in an increasingly global economy.

The principal differences between the draft joint guidelines and the EU vertical merger guidelines

1. Acknowledgement of the key differences between horizontal and vertical mergers

The EU guidelines begin with an acknowledgement that, “Non-horizontal mergers are generally less likely to significantly impede effective competition than horizontal mergers.” As they explain, this is because of two key differences between vertical and horizontal mergers.

  • First, unlike horizontal mergers, vertical mergers “do not entail the loss of direct competition between the merging firms in the same relevant market.”[4] As a result, “the main source of anti-competitive effect in horizontal mergers is absent from vertical and conglomerate mergers.”[5]
  • Second, vertical mergers are more likely than horizontal mergers to provide substantial, merger-specific efficiencies, without any direct reduction in competition. The EU guidelines explain that these efficiencies stem from two main sources, both of which are intrinsic to vertical mergers. The first is that, “Vertical integration may thus provide an increased incentive to seek to decrease prices and increase output because the integrated firm can capture a larger fraction of the benefits.”[6] The second is that, “Integration may also decrease transaction costs and allow for a better co-ordination in terms of product design, the organization of the production process, and the way in which the products are sold.”[7]

The DOJ/FTC draft guidelines do not acknowledge these fundamental differences between horizontal and vertical mergers. The 1984 DOJ non-horizontal guidelines, by contrast, contained an acknowledgement of these differences very similar to that found in the EU guidelines. First, the 1984 guidelines acknowledge that, “By definition, non-horizontal mergers involve firms that do not operate in the same market. It necessarily follows that such mergers produce no immediate change in the level of concentration in any relevant market as defined in Section 2 of these Guidelines.”[8] Second, the 1984 guidelines acknowledge that, “An extensive pattern of vertical integration may constitute evidence that substantial economies are afforded by vertical integration. Therefore, the Department will give relatively more weight to expected efficiencies in determining whether to challenge a vertical merger than in determining whether to challenge a horizontal merger.”[9] Neither of these acknowledgements can be found in the new draft guidelines.

These key differences have also been acknowledged by the courts of appeals for both the Second and D.C. circuits in the agencies’ two most recent litigated vertical mergers challenges: Fruehauf Corp. v. FTC in 1979[10] and United States v. AT&T in 2019.[11] In both cases, the courts held, as the D.C. Circuit explained in AT&T, that because of these differences, the government “cannot use a short cut to establish a presumption of anticompetitive effect through statistics about the change in market concentration” – as it can in a horizontal merger case – “because vertical mergers produce no immediate change in the relevant market share.”[12] Instead, in challenging a vertical merger, “the government must make a ‘fact-specific’ showing that the proposed merger is ‘likely to be anticompetitive’” before the burden shifts to the defendants “to present evidence that the prima facie case ‘inaccurately predicts the relevant transaction’s probable effect on future competition,’ or to ‘sufficiently discredit’ the evidence underlying the prima facie case.”[13]

While the DOJ/FTC draft guidelines acknowledge that a vertical merger may generate efficiencies, they propose that the parties to the merger bear the burden of identifying and substantiating those efficiencies under the same standards applied by the 2010 Horizontal Merger Guidelines. Meeting those standards in the case of a horizontal merger can be very difficult. For that reason, it is important that the DOJ/FTC draft guidelines be revised to make it clear that before the parties to a vertical merger are required to establish efficiencies meeting the horizontal merger guidelines’ evidentiary standard, the agencies must first show that the merger is likely to substantially lessen competition, based on the type of fact-specific evidence the courts required in both Fruehauf and AT&T.

2. Safe harbors

Although they do not refer to it as a “safe harbor,” the DOJ/FTC draft guidelines state that, 

The Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market.[14] 

If we understand this statement correctly, it means that the agencies may challenge a vertical merger in any case where one party has a 20% share in a relevant market and the other party has a 20% or higher share of any “related product,” i.e., any “product or service” that is supplied by the other party to firms in that relevant market. 

By contrast, the EU guidelines state that,

The Commission is unlikely to find concern in non-horizontal mergers . . . where the market share post-merger of the new entity in each of the markets concerned is below 30% . . . and the post-merger HHI is below 2,000.[15] 

Both the EU guidelines and the DOJ/FTC draft guidelines are careful to explain that these statements do not create any “legal presumption” that vertical mergers below these thresholds will not be challenged or that vertical mergers above those thresholds are likely to be challenged.
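For readers less familiar with the HHI figure in the EU safe harbor, here is a brief sketch of how it is computed and how the two EU thresholds would be checked together (the formula is standard; the market shares in the example are hypothetical):

```python
# The Herfindahl-Hirschman Index (HHI) is the sum of squared market shares,
# with shares expressed in percentage points (a pure monopoly scores 10,000).
# The shares below are hypothetical, purely to illustrate the EU safe harbor.

def hhi(shares_in_percent: list[float]) -> float:
    return sum(s ** 2 for s in shares_in_percent)


def within_eu_safe_harbor(merged_share: float, post_merger_shares: list[float]) -> bool:
    """EU non-horizontal guidelines: concern is unlikely if the merged entity's
    share in each affected market is below 30% and the post-merger HHI is
    below 2,000 (with no legal presumption either way)."""
    return merged_share < 30 and hhi(post_merger_shares) < 2000


# Hypothetical market with post-merger shares of 25, 20, 20, 15, 10, 10 percent:
shares = [25, 20, 20, 15, 10, 10]
print(hhi(shares))                        # 1850
print(within_eu_safe_harbor(25, shares))  # True
```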

The EU guidelines are more consistent than the DOJ/FTC draft guidelines both with U.S. case law and with the actual practice of both the DOJ and FTC. It is important to remember that the raising rivals’ costs theory of vertical foreclosure was first developed nearly four decades ago by two young economists, David Scheffman and Steve Salop, as a theory of exclusionary conduct that could be used against dominant firms in place of the more simplistic theories of vertical foreclosure that the courts had previously relied on and which by 1979 had been totally discredited by the Chicago School for the reasons stated by the Second Circuit in Fruehauf.[16] 

As the Second Circuit explained in Fruehauf, it was “unwilling to assume that any vertical foreclosure lessens competition” because 

[a]bsent very high market concentration or some other factor threatening a tangible anticompetitive effect, a vertical merger may simply realign sales patterns, for insofar as the merger forecloses some of the market from the merging firms’ competitors, it may simply free up that much of the market, in which the merging firm’s competitors and the merged firm formerly transacted, for new transactions between the merged firm’s competitors and the merging firm’s competitors.[17] 

Or, as Robert Bork put it more colorfully in The Antitrust Paradox, in criticizing the FTC’s decision in A.G. Spalding & Bros., Inc.,[18]:

We are left to imagine eager suppliers and hungry customers, unable to find each other, forever foreclosed and left languishing. It would appear the commission could have cured this aspect of the situation by throwing an industry social mixer.[19]

Since David Scheffman and Steve Salop first began developing their raising rivals’ cost theory of exclusionary conduct in the early 1980s, gallons of ink have been spilled in legal and economic journals discussing and evaluating that theory.[20] The general consensus of those articles is that while raising rivals’ cost is a plausible theory of exclusionary conduct, proving that a defendant has engaged in such conduct is very difficult in practice. It is even more difficult to predict whether, in evaluating a proposed merger, the merged firm is likely to engage in such conduct at some time in the future. 

Consistent with the Second Circuit’s decision in Fruehauf and with this academic literature, the courts, in deciding cases challenging exclusive dealing arrangements under either a vertical foreclosure theory or a raising rivals’ cost theory, have generally been willing to credit claims that the alleged exclusive dealing arrangements violated section 1 of the Sherman Act only in cases where the defendant had a dominant or near-dominant share of a highly concentrated market — usually meaning a share of 40 percent or more.[21] Likewise, all but one of the vertical mergers challenged by either the FTC or DOJ since 1996 have involved parties that had dominant or near-dominant shares of a highly concentrated market.[22] A majority of these involved mergers that were not purely vertical, but in which there was also a direct horizontal overlap between the two parties.

One of the few exceptions is AT&T/Time Warner, a challenge the DOJ lost in both the district court and the D.C. Circuit.[23] The outcome of that case illustrates the difficulty the agencies face in trying to prove a raising rivals’ cost theory of vertical foreclosure where the merging firms do not have a dominant or near-dominant share in either of the affected markets.

Given these court decisions and the agencies’ historical practice of challenging vertical mergers only between companies with dominant or near-dominant shares in highly concentrated markets, we would urge the DOJ and FTC to consider raising the market share threshold below which it is unlikely to challenge a vertical merger to at least 30 percent, in keeping with the EU guidelines, or to 40 percent in order to make the vertical merger guidelines more consistent with the U.S. case law on exclusive dealing.[24] We would also urge the agencies to consider adding a market concentration HHI threshold of 2,000 or higher, again in keeping with the EU guidelines.

3. Standards for applying a raising rivals’ cost theory of vertical foreclosure

Another way in which the EU guidelines are markedly better than the DOJ/FTC draft guidelines is in explaining the factors taken into consideration in evaluating whether a vertical merger will give the parties both the ability and incentive to raise their rivals’ costs in a way that will enable the merged entity to increase prices to consumers. Most importantly, the EU guidelines distinguish clearly between input foreclosure and customer foreclosure, and devote an entire section to each. For brevity, we will focus only on input foreclosure to show why we believe the more detailed approach the EU guidelines take is preferable to the more cursory discussion in the DOJ/FTC draft guidelines.

In discussing input foreclosure, the EU guidelines correctly distinguish between whether a vertical merger will give the merged firm the ability to raise rivals’ costs in a way that may substantially lessen competition and, if so, whether it will give the merged firm an incentive to do so. These are two quite distinct questions, which the DOJ/FTC draft guidelines unfortunately seem to lump together.

The ability to raise rivals’ costs

The EU guidelines identify four important conditions that must exist for a vertical merger to give the merged firm the ability to raise its rivals’ costs. First, the alleged foreclosure must concern an important input for the downstream product, such as one that represents a significant cost factor relative to the price of the downstream product. Second, the merged entity must have a significant degree of market power in the upstream market. Third, the merged entity must be able, by reducing access to its own upstream products or services, to affect negatively the overall availability of inputs for rivals in the downstream market in terms of price or quality. Fourth, the agency must examine the degree to which the merger may free up capacity of other potential input suppliers. If that capacity becomes available to downstream competitors, the merger may simply realign purchase patterns among competing firms, as the Second Circuit recognized in Fruehauf.

The incentive to foreclose access to inputs

The EU guidelines recognize that the incentive to foreclose depends on the degree to which foreclosure would be profitable. In making this determination, the vertically integrated firm will take into account how its supplies of inputs to competitors downstream will affect not only the profits of its upstream division, but also of its downstream division. Essentially, the merged entity faces a trade-off between the profit lost in the upstream market due to a reduction of input sales to (actual or potential) rivals and the profit gained from expanding sales downstream or, as the case may be, raising prices to consumers. This trade-off is likely to depend on the margins the merged entity obtains on upstream and downstream sales. Other things constant, the lower the margins upstream, the lower the loss from restricting input sales. Similarly, the higher the downstream margins, the higher the profit gain from increasing market share downstream at the expense of foreclosed rivals.

The EU guidelines recognize that the incentive for the integrated firm to raise rivals’ costs further depends on the extent to which downstream demand is likely to be diverted away from foreclosed rivals and the share of that diverted demand the downstream division of the integrated firm can capture. This share will normally be higher the less capacity constrained the merged entity will be relative to non-foreclosed downstream rivals and the more the products of the merged entity and foreclosed competitors are close substitutes. The effect on downstream demand will also be higher if the affected input represents a significant proportion of downstream rivals’ costs or if it otherwise represents a critical component of the downstream product.

The EU guidelines recognize that the incentive to foreclose actual or potential rivals may also depend on the extent to which the downstream division of the integrated firm can be expected to benefit from higher price levels downstream as a result of a strategy to raise rivals’ costs. The greater the market shares of the merged entity downstream, the greater the base of sales on which to enjoy increased margins. However, an upstream monopolist that is already able to fully extract all available profits in vertically related markets may not have any incentive to foreclose rivals following a vertical merger. Therefore, the ability to extract available profits from consumers does not follow immediately from a very high market share; to come to that conclusion requires a more thorough analysis of the actual and future constraints under which the monopolist operates.

Finally, the EU guidelines require the Commission to examine not only the incentives to adopt such conduct, but also the factors liable to reduce, or even eliminate, those incentives, including the possibility that the conduct is unlawful. In this regard, the Commission will consider, on the basis of a summary analysis: (i) the likelihood that this conduct would clearly be unlawful under Community law, (ii) the likelihood that this illegal conduct could be detected, and (iii) the penalties that could be imposed.

Overall likely impact on effective competition: 

Finally, the EU guidelines recognize that a vertical merger will raise foreclosure concerns only when it would lead to increased prices in the downstream market. This normally requires that the foreclosed suppliers play a sufficiently important role in the competitive process in the downstream market. In general, the higher the proportion of rivals that would be foreclosed in the downstream market, the more likely the merger can be expected to result in a significant price increase in the downstream market and, therefore, to significantly impede effective competition. 

In making these determinations, the Commission must under the EU guidelines also assess the extent to which a vertical merger may raise barriers to entry, a criterion that is also found in the 1984 DOJ non-horizontal merger guidelines but is strangely missing from the DOJ/FTC draft guidelines. As the 1984 guidelines recognize, a vertical merger can raise entry barriers if the anticipated input foreclosure would create a need to enter at both the downstream and the upstream level in order to compete effectively in either market.

* * * * *

Rather than issue a set of incomplete vertical merger guidelines, we would urge the DOJ and FTC to follow the lead of the European Commission and develop a set of guidelines setting out in more detail the factors the agencies will consider and the standards they will use in evaluating vertical mergers. The EU non-horizontal merger guidelines provide an excellent model for doing so.


[1] U.S. Department of Justice & Federal Trade Commission, Draft Vertical Merger Guidelines, available at https://www.justice.gov/opa/press-release/file/1233741/download (hereinafter cited as “DOJ/FTC draft guidelines”).

[2] U.S. Department of Justice, Office of Public Affairs, “DOJ and FTC Announce Draft Vertical Merger Guidelines for Public Comment,” Jan. 10, 2020, available at https://www.justice.gov/opa/pr/doj-and-ftc-announce-draft-vertical-merger-guidelines-public-comment.

[3] See European Commission, Guidelines on the assessment of non-horizontal mergers under the Council Regulation on the control of concentrations between undertakings (2008) (hereinafter cited as “EU guidelines”), available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52008XC1018(03)&from=EN.

[4] Id. at § 12.

[5] Id.

[6] Id. at § 13.

[7] Id. at § 14. The insight that transactions costs are an explanation for both horizontal and vertical integration in firms first occurred to Ronald Coase in 1932, while he was a student at the London School of Economics. See Ronald H. Coase, Essays on Economics and Economists 7 (1994). Coase took five years to flesh out his initial insight, which he then published in 1937 in a now-famous article, The Nature of the Firm. See Ronald H. Coase, The Nature of the Firm, Economica 4 (1937). The implications of transactions costs for antitrust analysis were explained in more detail four decades later by Oliver Williamson in a book he published in 1975. See Oliver E. Williamson, Markets and Hierarchies: Analysis and Antitrust Implications (1975) (explaining how vertical integration, either by ownership or contract, can, for example, protect a firm from free riding and other opportunistic behavior by its suppliers and customers). Both Coase and Williamson later received Nobel Prizes for Economics for their work recognizing the importance of transactions costs, not only in explaining the structure of firms, but in other areas of the economy as well. See, e.g., Ronald H. Coase, The Problem of Social Cost, J. Law & Econ. 3 (1960) (using transactions costs to explain the need for governmental action to force entities to internalize the costs their conduct imposes on others).

[8] U.S. Department of Justice, Antitrust Division, 1984 Merger Guidelines, § 4, available at https://www.justice.gov/archives/atr/1984-merger-guidelines.

[9] EU guidelines, at § 4.24.

[10] Fruehauf Corp. v. FTC, 603 F.2d 345 (2d Cir. 1979).

[11] United States v. AT&T, Inc., 916 F.3d 1029 (D.C. Cir. 2019).

[12] Id. at 1032; accord, Fruehauf, 603 F.2d, at 351 (“A vertical merger, unlike a horizontal one, does not eliminate a competing buyer or seller from the market . . . . It does not, therefore, automatically have an anticompetitive effect.”) (emphasis in original) (internal citations omitted).

[13] AT&T, 916 F.3d, at 1032 (internal citations omitted).

[14] DOJ/FTC draft guidelines, at 3.

[15] EU guidelines, at § 25.

[16] See Steven C. Salop & David T. Scheffman, Raising Rivals’ Costs, 73 AM. ECON. REV. 267 (1983).

[17] Fruehauf, supra note 10, 603 F.2d at 353 n.9 (emphasis added).

[18] 56 F.T.C. 1125 (1960).

[19] Robert H. Bork, The Antitrust Paradox: A Policy at War with Itself 232 (1978).

[20] See, e.g., Alan J. Meese, Exclusive Dealing, the Theory of the Firm, and Raising Rivals’ Costs: Toward a New Synthesis, 50 Antitrust Bull. 371 (2005); David T. Scheffman and Richard S. Higgins, Twenty Years of Raising Rivals’ Costs: History, Assessment, and Future, 12 George Mason L. Rev. 371 (2003); David Reiffen & Michael Vita, Comment: Is There New Thinking on Vertical Mergers, 63 Antitrust L.J. 917 (1995); Thomas G. Krattenmaker & Steven Salop, Anticompetitive Exclusion: Raising Rivals’ Costs to Achieve Power Over Price, 96 Yale L. J. 209, 219-25 (1986).

[21] See, e.g., United States v. Microsoft, 87 F. Supp. 2d 30, 50-53 (D.D.C. 1999) (summarizing law on exclusive dealing under section 1 of the Sherman Act); id. at 52 (concluding that modern case law requires finding that exclusive dealing contracts foreclose rivals from 40% of the marketplace); Omega Envtl, Inc. v. Gilbarco, Inc., 127 F.3d 1157, 1162-63 (9th Cir. 1997) (finding 38% foreclosure insufficient to make out prima facie case that exclusive dealing agreement violated the Sherman and Clayton Acts, at least where there appeared to be alternate channels of distribution).

[22] See, e.g., United States, et al. v. Comcast, 1:11-cv-00106 (D.D.C. Jan. 18, 2011) (Comcast had over 50% of MVPD market), available at https://www.justice.gov/atr/case-document/competitive-impact-statement-72; United States v. Premdor, Civil No.: 1-01696 (GK) (D.D.C. Aug. 3, 2002) (Masonite manufactured more than 50% of all doorskins sold in the U.S.; Premdor sold 40% of all molded doors made in the U.S.), available at https://www.justice.gov/atr/case-document/final-judgment-151.

[23] See United States v. AT&T, Inc., 916 F.3d 1029 (D.C. Cir. 2019).

[24] See Brown Shoe Co. v. United States, 370 U.S. 294 (1962) (relying on earlier Supreme Court decisions involving exclusive dealing and tying claims under section 3 of the Clayton Act for guidance as to what share of a market must be foreclosed before a vertical merger can be found unlawful under section 7).

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Herbert Hovenkamp (James G. Dinan University Professor, University of Pennsylvania School of Law and the Wharton School).]

In its 2019 AT&T/Time-Warner merger decision the D.C. Circuit Court of Appeals mentioned something that antitrust enforcers have known for years: We need a new set of Agency Guidelines for vertical mergers. The vertical merger Guidelines were last revised in 1984 at the height of Chicago School hostility toward harsh antitrust treatment of vertical restraints. In January, 2020, the Agencies issued a set of draft vertical merger Guidelines for comment. At this writing the Guidelines are not final, and the Agencies are soliciting comments on the draft and will be holding at least two workshops to discuss them before they are finalized.

1. What the Guidelines contain

a. “Relevant markets” and “related products”

The draft Guidelines borrow heavily from the 2010 Horizontal Merger Guidelines concerning general questions of market definition, entry barriers, partial acquisitions, treatment of efficiencies and the failing company defense. Both the approach to market definition and the necessity for it are treated somewhat differently than for horizontal mergers, however. First, the Guidelines do not generally speak of vertical mergers as linking two different “markets,” such as an upstream market and a downstream market. Instead, they use the term “relevant market” to speak of the market that is of competitive concern, and the term “related product” to refer to some product, service, or grouping of sales that is either upstream or downstream from this market:

A related product is a product or service that is supplied by the merged firm, is vertically related to the products and services in the relevant market, and to which access by the merged firm’s rivals affects competition in the relevant market.

So, for example, if a truck trailer manufacturer should acquire a maker of truck wheels and the market of concern was trailer manufacturing, the Agencies would identify that as the relevant market and wheels as the “related product.” (Cf. Fruehauf Corp. v. FTC).

b. 20% market share threshold

The Guidelines then suggest (§3) that the Agencies would be

unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20 percent and the related product is used in less than 20 percent of the relevant market.

The choice of 20% is interesting but quite defensible as a statement of enforcement policy, and very likely represents a compromise between extreme positions. First, 20% is considerably higher than the numbers that supported enforcement during the 1960s and earlier (see, e.g., Brown Shoe (less than 4%); Bethlehem Steel (10% in one market; as little as 1.8% in another market)). Nevertheless, it is also considerably lower than the numbers that commentators such as Robert Bork would have approved (see Robert H. Bork, The Antitrust Paradox: A Policy at War with Itself at pp. 219, 232-33; see also Herbert Hovenkamp, Robert Bork and Vertical Integration: Leverage, Foreclosure, and Efficiency), and lower than the numbers generally used to evaluate vertical restraints such as tying or exclusive dealing (see Jefferson Parish (30% insufficient); see also 9 Antitrust Law ¶1709 (4th ed. 2018)).

The Agencies do appear to be admonished by the Second Circuit’s Fruehauf decision, now 40 years old but nevertheless the last big, fully litigated vertical merger case prior to AT&T/Time Warner: foreclosure numbers standing alone do not mean very much, at least not unless they are very large. Instead, there must be some theory about how foreclosure leads to lower output and higher prices. These draft Guidelines provide several examples and illustrations.

Significantly, the Guidelines do not state that they will challenge vertical mergers crossing the 20% threshold, but only that they are unlikely to challenge mergers that fall short of it. Even here, they leave open the possibility of challenge in unusual situations where the share numbers may understate the concern, such as where the related product “is relatively new,” and its share is rapidly growing. The Guidelines also note (§3) that if the merging parties serve different geographic areas, then the relevant share may not be measured by a firm’s gross sales everywhere, but rather by its shares in the other firm’s market in which anticompetitive effects are being tested. 

These numbers, as well as the qualifications, seem quite realistic, particularly in product-differentiated markets where market shares tend to understate power, especially in vertical distribution.

c. Unilateral effects

The draft Vertical Guidelines then divide the universe of adverse competitive effects into Unilateral Effects (§5) and Coordinated Effects (§7). The discussion of unilateral effects is based on bargaining theory similar to that used in the treatment of unilateral effects from horizontal mergers in the 2010 Horizontal Merger Guidelines. Basically, a price increase is more profitable if the losses that accrue to one merging participant are offset by gains to the merged firm as a whole. These principles have been a relatively uncontroversial part of industrial organization economics and game theory for decades. The Draft Vertical Guidelines recognize both foreclosure and raising rivals’ costs as concerns, as well as access to competitively sensitive information (§5).

 The Draft Guidelines note:

A vertical merger may diminish competition by allowing the merged firm to profitably weaken or remove the competitive constraint from one or more of its actual or potential rivals in the relevant market by changing the terms of those rivals’ access to one or more related products. For example, the merged firm may be able to raise its rivals’ costs by charging a higher price for the related products or by lowering service or product quality. The merged firm could also refuse to supply rivals with the related products altogether (“foreclosure”).

Where sufficient data are available, the Agencies may construct economic models designed to quantify the likely unilateral price effects resulting from the merger….

The draft Guidelines note that these models need not rely on a particular market definition. As in the case of unilateral effects horizontal mergers, they compare the firms’ predicted bargaining position before and after the merger, assuming that the firms seek maximization of profits or value. They then query whether equilibrium prices in the post-merger market will be higher than those prior to the merger. 

In making that determination the Guidelines suggest (§4a) that the Agency could look at several factors, including:

  1. The merged firm’s foreclosure of, or raising costs of, one or more rivals would cause those rivals to lose sales (for example, if they are forced out of the market, if they are deterred from innovating, entering or expanding, or cannot finance these activities, or if they have incentives to pass on higher costs through higher prices), or to otherwise compete less aggressively for customers’ business;
  2. The merged firm’s business in the relevant market would benefit (for example if some portion of those lost sales would be diverted to the merged firm);
  3. Capturing this benefit through merger may make foreclosure, or raising rivals’ costs, profitable even though it would not have been profitable prior to the merger; and,
  4. The magnitude of likely foreclosure or raising rivals’ costs is not de minimis such that it would substantially lessen competition.

This approach, which reflects important developments in empirical economics, does entail that there will be increasing reliance on economic experts to draft, interpret, and dispute the relevant economic models.

In a brief section the Draft Guidelines also state a concern for mergers that will provide a firm with access or control of sensitive business information that could be used anticompetitively. The Guidelines do not provide a great deal of elaboration on this point.

d. Elimination of double marginalization

The Vertical Guidelines also have a separate section (§6) discussing an offset for elimination of double marginalization. They note what has come to be the accepted economic wisdom: elimination of double marginalization can result in higher output and lower prices when it applies, but it does not invariably apply.
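
A standard textbook example shows the logic. The sketch below uses hypothetical linear demand and costs chosen only for illustration (nothing in the Guidelines relies on these numbers) and compares two successive monopoly markups with a single integrated markup:

# Hypothetical linear demand P = 20 - Q, upstream marginal cost of 4,
# and no other downstream costs.
a, c = 20.0, 4.0

# Vertically integrated monopolist: a single markup over cost.
q_integrated = (a - c) / 2           # 8 units
p_integrated = a - q_integrated      # retail price of 12

# Separate upstream and downstream monopolists: two successive markups.
q_separate = (a - c) / 4             # 4 units
p_separate = a - q_separate          # retail price of 16

print(f"Two markups: Q = {q_separate:.0f}, retail price = {p_separate:.0f}")
print(f"One markup:  Q = {q_integrated:.0f}, retail price = {p_integrated:.0f}")

Here, eliminating the double markup doubles output and lowers the retail price; as the Guidelines note, though, the offset is not automatic — if, for instance, the downstream firm was not buying the upstream input at a marked-up price to begin with, there is no double margin to eliminate.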

e. Coordinated effects

Finally, the draft Guidelines note (§7) a concern that certain vertical mergers may enable collusion. This could occur, for example, if the merger eliminated a maverick buyer who formerly played rival sellers off against one another. In other cases the merger may give one of the partners access to information that could be used to facilitate collusion or discipline cartel cheaters. The Guidelines offer this example:

Example 7: The merger brings together a manufacturer of components and a maker of final products. If the component manufacturer supplies rival makers of final products, it will have information about how much they are making, and will be better able to detect cheating on a tacit agreement to limit supplies. As a result the merger may make the tacit agreement more effective.

2. Conclusion: An increase in economic sophistication

These draft Guidelines are relatively short, but that is in substantial part because they incorporate by reference many of the relevant points from the 2010 Guidelines for horizontal mergers. In any event, they may not provide as much detail as federal courts might hope for, but they are an important step toward specifying the increasingly economic approaches that the agencies take toward merger analysis, one in which direct estimates play a larger role, with a comparatively reduced role for more traditional approaches depending on market definition and market share.

They also avoid both rhetorical extremes: being too hostile or too sanguine about the anticompetitive potential of vertical acquisitions. First, while the new draft Guidelines leave the overall burden of proof with the challenger, they have clearly weakened the presumption that vertical mergers are invariably benign, particularly in highly concentrated markets or where the products in question are differentiated. Second, the draft Guidelines emphasize approaches that are more economically sophisticated and empirical. Consistent with that, foreclosure concerns are once again taken more seriously.

The Department of Justice began its antitrust case against IBM on January 17, 1969. The DOJ sued under the Sherman Antitrust Act, claiming IBM tried to monopolize the market for “general-purpose digital computers.” The case lasted almost thirteen years, ending on January 8, 1982 when Assistant Attorney General William Baxter declared the case to be “without merit” and dropped the charges. 

The case lasted so long, and expanded in scope so much, that by the time the trial began, “more than half of the practices the government raised as antitrust violations were related to products that did not exist in 1969.” Baltimore law professor Robert Lande said it was “the largest legal case of any kind ever filed.” Yale law professor Robert Bork called it “the antitrust division’s Vietnam.”

As the case dragged on, IBM was faced with increasingly perverse incentives. As NYU law professor Richard Epstein pointed out (emphasis added), 

Oddly enough, IBM was able to strengthen its antitrust-related legal position by reducing its market share, which it achieved through raising prices. When the suit was discontinued that share had fallen dramatically since 1969 from about 50 percent of the market to 37 percent in 1982. Only after the government suit ended did IBM lower its prices in order to increase market share.

Source: Levy & Welzer

In an interview with Vox, Tim Wu claimed that without the IBM case, Apple wouldn’t exist and we might still be using mainframe computers (emphasis added):

Vox: You said that Apple wouldn’t exist without the IBM case.

Wu: Yeah, I did say that. The case against IBM took 13 years and we didn’t get a verdict but in that time, there was the “policeman at the elbow” effect. IBM was once an all-powerful company. It’s not clear that we would have had an independent software industry, or that it would have developed that quickly, the idea of software as a product, [without this case]. That was one of the immediate benefits of that excavation.

And then the other big one is that it gave a lot of room for the personal computer to get started, and the software that surrounds the personal computer — two companies came in, Apple and Microsoft. They were sort of born in the wake of the IBM lawsuit. You know they were smart guys, but people did need the pressure off their backs.

Nobody is going to start in the shadow of Facebook and get anywhere. Snap’s been the best, but how are they doing? They’ve been halted. I think it’s a lot harder to imagine this revolutionary stuff that happened in the ’80s. If IBM had been completely unwatched by regulators, by enforcement, doing whatever they wanted, I think IBM would have held on and maybe we’d still be using mainframes, or something — a very different situation.

Steven Sinofsky, a former Microsoft executive and current Andreessen Horowitz board partner, had a different take on the matter, attributing IBM’s (belated) success in PCs to its utter failure in minicomputers (emphasis added):

IBM chose to prevent third parties from interoperating with mainframes sometimes at crazy levels (punch card formats). And then chose to defend until the end their business model of leasing … The minicomputer was a direct threat not because of technology but because of those attributes. I’ve heard people say IBM went into PCs because the antitrust loss caused them to look for growth or something. Ha. PCs were spun up because IBM was losing Minis. But everything about the PC was almost a fluke organizationally and strategically. The story of IBM regulation is told as though PCs exist because of the case.

The more likely story is that IBM got swamped by the paradigm shift from mainframes to PCs. IBM was dominant in mainframe computers which were sold to the government and large enterprises. Microsoft, Intel, and other leaders in the PC market sold to small businesses and consumers, which required an entirely different business model than IBM was structured to implement.

ABB – Always Be Bundling (Or Unbundling)

“There’s only two ways I know of to make money: bundling and unbundling.” – Jim Barksdale

In 1969, IBM unbundled its software and services from hardware sales. As many industry observers note, this action precipitated the rise of the independent software development industry. But would this have happened regardless of whether there was an ongoing antitrust case? Given that bundling and unbundling is ubiquitous in the history of the computer industry, the answer is likely yes.

As the following charts show, IBM first created an integrated solution in the mainframe market, controlling everything from raw materials and equipment to distribution and service. When PCs disrupted mainframes, the entire value chain was unbundled. Later, Microsoft bundled its operating system with applications software. 

Source: Clayton Christensen

The first smartphone to disrupt the PC market was the Apple iPhone — an integrated solution. And once the technology became “good enough” to meet the average consumer’s needs, Google modularized everything except the operating system (Android) and the app store (Google Play).

Source: SlashData
Source: Jake Nielson

Another key prong in Tim Wu’s argument that the government served as an effective “policeman at the elbow” in the IBM case is that the company adopted an open model when it entered the PC market and did not require an exclusive license from Microsoft to use its operating system. But exclusivity is only one term in a contract negotiation. In an interview with Playboy magazine in 1994, Bill Gates explained how he was able to secure favorable terms from IBM (emphasis added):

Our restricting IBM’s ability to compete with us in licensing MS-DOS to other computer makers was the key point of the negotiation. We wanted to make sure only we could license it. We did the deal with them at a fairly low price, hoping that would help popularize it. Then we could make our move because we insisted that all other business stay with us. We knew that good IBM products are usually cloned, so it didn’t take a rocket scientist to figure out that eventually we could license DOS to others. We knew that if we were ever going to make a lot of money on DOS it was going to come from the compatible guys, not from IBM. They paid us a fixed fee for DOS. We didn’t get a royalty, even though we did make some money on the deal. Other people paid a royalty. So it was always advantageous to us, the market grew and other hardware guys were able to sell units.

In this version of the story, IBM refrained from demanding an exclusive license from Microsoft not because it was fearful of antitrust enforcers but because Microsoft made significant concessions on price and capped its upside by agreeing to a fixed fee rather than a royalty. These economic and technical explanations for why IBM wasn’t able to leverage its dominant position in mainframes into the PC market are more consistent with the evidence than Wu’s “policeman at the elbow” theory.

In my next post, I will discuss the other major antitrust case that came to an end in 1982: AT&T.

Qualcomm is currently in the midst of a high-profile antitrust case against the FTC. At the heart of these proceedings lies Qualcomm’s so-called “No License, No Chips” (NLNC) policy, whereby it purportedly refuses to sell chips to OEMs that have not concluded a license agreement covering its underlying intellectual property. According to the FTC and Qualcomm’s opponents, this ultimately thwarts competition in the chipset market.

But Qualcomm’s critics fail to convincingly explain how NLNC harms competition — a failing that is particularly evident in the short hypothetical put forward in the amicus brief penned by Mark Lemley, Douglas Melamed, and Steven Salop. This blog post responds to their brief.

The amici’s hypothetical

In order to highlight the most salient features of the case against Qualcomm, the brief’s authors offer the following stylized example:

A hypothetical example can illustrate how Qualcomm’s strategy increases the royalties it is able to charge OEMs. Suppose that the reasonable royalty Qualcomm could charge OEMs if it licensed the patents separately from its chipsets is $2, and that the monopoly price of Qualcomm’s chips is $18 for an all-in monopoly cost to OEMs of $20. Suppose that a new chipmaker entrant is able to manufacture chipsets of comparable quality at a cost of $11 each. In that case, the rival chipmaker entrant could sell its chips to OEMs for slightly more than $11. An OEM’s all-in cost of buying from the new entrant would be slightly above $13 (i.e., the Qualcomm reasonable license royalty of $2 plus the entrant chipmaker’s price of slightly more than $11). This entry into the chipset market would induce price competition for chips. Qualcomm would still be entitled to its patent royalties of $2, but it would no longer be able to charge the monopoly all-in price of $20. The competition would force Qualcomm to reduce its chipset prices from $18 down to something closer to $11 and its all-in price from $20 down to something closer to $13.

Qualcomm’s NLNC policy prevents this competition. To illustrate, suppose instead that Qualcomm implements the NLNC policy, raising its patent royalty to $10 and cutting the chip price to $10. The all-in cost to an OEM that buys Qualcomm chips will be maintained at the monopoly level of $20. But the OEM’s cost of using the rival entrant’s chipsets now will increase to a level above $21 (i.e., the slightly higher than $11 price for the entrant’s chipset plus the $10 royalty that the OEM pays to Qualcomm of $10). Because the cost of using the entrant’s chipsets will exceed Qualcomm’s all-in monopoly price, Qualcomm will face no competitive pressure to reduce its chipset or all-in prices.

A close inspection reveals that this hypothetical is deeply flawed

There appear to be five steps in the amici’s reasoning:

  1. Chips and IP are complementary goods that are bought in fixed proportions, so buyers have a single reserve price for both;
  2. Because of its FRAND pledges, Qualcomm is unable to directly charge a monopoly price for its IP;
  3. But, according to the amici, Qualcomm can obtain these monopoly profits by keeping competitors out of the chipset market [this would give Qualcomm a chipset monopoly and, theoretically at least, enable it to charge the combined (IP + chips) monopoly price for its chips alone, thus effectively evading its FRAND pledges];
  4. To keep rivals out of the chipset market, Qualcomm undercuts them on chip prices and recoups its losses by charging supracompetitive royalty rates on its IP; and
  5. This is allegedly made possible by the “No License, No Chips” policy, which forces firms to obtain a license from Qualcomm, even when they purchase chips from rivals.

While points 1 and 3 of the amici’s reasoning are uncontroversial, points 2 and 4 are mutually exclusive. This flaw ultimately undermines their entire argument, notably point 5. 

The contradiction between points 2 and 4 is evident. The amici argue (using hypothetical but representative numbers) that Qualcomm’s FRAND pledges should prevent it from charging more than $2 in royalties per chip (“the reasonable royalty Qualcomm could charge OEMs if it licensed the patents separately from its chipsets is $2”), and that Qualcomm deters entry in the chip market by charging $10 in royalties per chip sold (“raising its patent royalty to $10 and cutting the chip price to $10”).

But these statements cannot both be true. Qualcomm either can or it cannot charge more than $2 in royalties per chip. 

There is, however, one important exception (discussed below): parties can mutually agree to depart from FRAND pricing. But let us momentarily ignore this limitation, and discuss two baseline scenarios: One where Qualcomm can evade its FRAND pledges and one where it cannot. Comparing these two settings reveals that Qualcomm cannot magically increase its profits by shifting revenue from chips to IP.

For a start, if Qualcomm cannot raise the price of its IP beyond the hypothetical FRAND benchmark ($2, in the amici’s hypo), then it cannot use its standard essential technology to compensate for foregone revenue in the chipset market. Any supracompetitive profits that it earns must thus result from its competitive position in the chipset market.

Conversely, if it can raise its IP revenue above the $2 benchmark, then it does not require a strong chipset position to earn supracompetitive profits. 

It is worth unpacking this second point. If Qualcomm can indeed evade its FRAND pledges and charge royalties of $10 per chip, then it need not exclude chipset rivals to obtain supracompetitive profits. 

Take the amici’s hypothetical numbers and assume further that Qualcomm has the same cost as its chipset rivals (i.e., $11), and that there are 100 potential buyers with a uniform reserve price of $20 (the reserve price assumed by the amici).

As the amici point out, Qualcomm can earn the full monopoly profits by charging $10 for IP and $10 for chips. Qualcomm would thus pocket a total of $900 in profits ((10+10-11)*100). What the amici brief fails to acknowledge is that Qualcomm could also earn the exact same profits by staying out of the chipset market. Qualcomm could let its rivals charge $11 per chip (their cost), and demand $9 for its IP. It would thus earn the same $900 of profits (9*100). 

In this hypothetical, the only reason for Qualcomm to enter the chip market is if it is a more efficient chipset producer than its chipset rivals, or if it can out-compete them with better chipsets. For instance, if Qualcomm’s costs are only $10 per chip, Qualcomm could earn a total of $1000 in profits by driving out these rivals ((10+10-10)*100). Or, if it can produce better chips, though at higher cost and price (say, $12 per chip), it could earn the same $1000 in profits ((10+12-12)*100). Both of the situations would benefit purchasers, of course. Conversely, at a higher production cost of $12 per chip, but without any quality improvement, Qualcomm would earn only $800 in profits ((10+10-12)*100) and would thus do better to exit the chipset market.
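
For readers who want to check the arithmetic, the short sketch below reproduces each scenario, using only the hypothetical figures already introduced (100 buyers and the amici’s assumed prices and costs) and assuming, as the text does, that all 100 buyers purchase in every scenario:

BUYERS = 100

def qualcomm_profit(royalty, chip_price=0.0, chip_cost=0.0, sells_chips=True):
    # Per-unit profit is the royalty plus, if Qualcomm sells chips, the chip margin.
    per_unit = royalty + ((chip_price - chip_cost) if sells_chips else 0.0)
    return per_unit * BUYERS

print(qualcomm_profit(10, 10, 11))            # 900: $10 royalty, $10 chip price, $11 cost
print(qualcomm_profit(9, sells_chips=False))  # 900: exit chips, charge $9 royalty, rivals sell at cost
print(qualcomm_profit(10, 10, 10))            # 1000: more efficient chip producer ($10 cost)
print(qualcomm_profit(10, 12, 12))            # 1000: better chips at higher cost and price ($12)
print(qualcomm_profit(10, 10, 12))            # 800: higher cost ($12), no quality improvement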

Let us recap:

  • If Qualcomm can easily evade its FRAND pledges, then it need not enter the chipset market to earn supracompetitive profits; 
  • If it cannot evade these FRAND obligations, then it will be hard-pressed to leverage its IP bottleneck so as to dominate chipsets. 

The upshot is that Qualcomm would need to benefit from exceptional circumstances in order to improperly leverage its FRAND-encumbered IP and impose anticompetitive harm by excluding its rivals in the chipset market.

The NLNC policy

According to the amici, that exceptional circumstance is the NLNC policy. In their own words:

The competitive harm is a result of the royalty being higher than it would be absent the NLNC policy.

This is best understood by adding an important caveat to our previous hypothetical: The $2 FRAND benchmark of the amici’s hypothetical is only a fallback option that can be obtained via litigation. Parties are thus free to agree upon a higher rate, for instance $10. This could, notably, be the case if Qualcomm offset the IP increase by reducing its chipset price, such that OEMs who purchased both chipsets and IP from Qualcomm were indifferent between contracts at either of the two royalty rates.

At first sight, this caveat may appear to significantly improve the FTC’s case against Qualcomm — it raises the specter of Qualcomm charging predatory prices on its chips and then recouping its losses on IP. But further examination suggests that this is an unlikely scenario.

Though firms may nominally be paying $10 for Qualcomm’s IP and $10 for its chips, there is no escaping the fact that buyers have an outside option in both the IP and chip segments (respectively, litigation to obtain FRAND rates, and buying chips from rivals). As a result, Qualcomm will be unable to charge a total price that is significantly above the price of rivals’ chips, plus the FRAND rate for its IP (and expected litigation costs).
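
That constraint can be put in stylized terms. The sketch below reuses the hypothetical figures from the example above and adds an assumed per-unit litigation cost, which is our own illustrative addition rather than a number from the amici’s brief:

rival_chip_price = 11.0          # price of a comparable rival chipset
frand_royalty = 2.0              # fallback FRAND rate obtainable through litigation
litigation_cost_per_unit = 1.0   # assumed per-unit cost of enforcing that rate

# An OEM's outside option: buy rival chips and litigate for the FRAND rate.
outside_option = rival_chip_price + frand_royalty + litigation_cost_per_unit

def oem_accepts(chip_price, royalty):
    # The OEM takes Qualcomm's offer only if its all-in cost beats the outside option.
    return (chip_price + royalty) <= outside_option

print(oem_accepts(10, 10))   # False: a $20 all-in price exceeds the roughly $14 outside option
print(oem_accepts(11, 3))    # True: the all-in price is disciplined by the outside option

Whatever figure one assumes for litigation costs, the point is the same: the ceiling on Qualcomm’s all-in price is the OEM’s outside option, not the $20 reserve price.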

This is where the amici’s hypothetical is most flawed. 

It is one thing to argue that Qualcomm can charge $10 per chipset and $10 per license to firms that purchase all of their chips and IP from it (or, as the amici point out, charge a single price of $20 for the bundle). It is another matter entirely to argue — as the amici do — that Qualcomm can charge $10 for its IP to firms that receive little or no offset in the chip market because they purchase few or no chips from Qualcomm, and who have the option of suing Qualcomm, thus obtaining a license at $2 per chip (if that is, indeed, the maximum FRAND rate). Firms would have to be foolish to ignore this possibility and to acquiesce to contracts at substantially higher rates. 

Indeed, two of the largest and most powerful OEMs — Apple and Samsung — have entered into such contracts with Qualcomm. Given their ability (and, indeed, willingness) to sue for FRAND violations and to produce their own chips or assist other manufacturers in doing so, it is difficult to conclude that they have assented to supracompetitive terms. (The fact that they would prefer even lower rates, and have supported this and other antitrust suits against Qualcomm, doesn’t change this conclusion; it just means they see antitrust as a tool to reduce their costs. And the fact that Apple settled its own FRAND and antitrust suit against Qualcomm (and paid Qualcomm $4.5 billion and entered into a global licensing agreement with it) after just one day of trial further supports this conclusion.)

Double counting

The amici attempt to overcome this weakness by implicitly framing their argument in terms of exclusivity, strategic entry deterrence, and tying:

An OEM cannot respond to Qualcomm’s NLNC policy by purchasing chipsets only from a rival chipset manufacturer and obtaining a license at the reasonable royalty level (i.e., $2 in the example). As the district court found, OEMs needed to procure at least some 3G CDMA and 4G LTE chipsets from Qualcomm.

* * *

The surcharge burdens rivals, leads to anticompetitive effects in the chipset markets, deters entry, and impedes follow-on innovation. 

* * *

As an economic matter, Qualcomm’s NLNC policy is analogous to the use of a tying arrangement to maintain monopoly power in the market for the tying product (here, chipsets).

But none of these arguments totally overcomes the flaw in their reasoning. Indeed, as Aldous Huxley once pointed out, “several excuses are always less convincing than one”.

For a start, the amici argue that Qualcomm uses its strong chipset position to force buyers into accepting its supracompetitive IP rates, even in those instances where they purchase chipsets from rivals. 

In making this point, the amici fall prey to the “double counting fallacy” that Robert Bork famously warned about in The Antitrust Paradox: Monopolists cannot simultaneously charge a monopoly price AND purchase exclusivity (or other contractual restrictions) from their buyers/suppliers.

The amici fail to recognize the important sacrifices that Qualcomm would have to make in order for the above strategy to be viable. In simple terms, Qualcomm would have to offset every dollar it charges above the FRAND benchmark in the IP segment with an equivalent price reduction in the chipset segment.

This has important ramifications for the FTC’s case.

Qualcomm would have to charge lower — not higher — IP fees to OEMs who purchased a large share of their chips from third party chipmakers. Otherwise, there would be no carrot to offset its greater-than-FRAND license fees, and these OEMs would have significant incentives to sue (especially in a post-eBay world where the threat of injunctions is reduced if they happen to lose). 

And yet, this is the exact opposite of what the FTC alleged:

Qualcomm sometimes expressly charged higher royalties on phones that used rivals’ chips. And even when it did not, its provision of incentive funds to offset its license fees when OEMs bought its chips effectively resulted in a discriminatory surcharge. (emphasis added)

The infeasibility of alternative explanations

One theoretical workaround would be for Qualcomm to purchase exclusivity from its OEMs, in an attempt to foreclose chipset rivals. 

Once again, Bork’s double counting argument suggests that this would be particularly onerous. By accepting exclusivity-type requirements, OEMs would not only be reducing potential competition in the chipset market, they would also be contributing to an outcome where Qualcomm could evade its FRAND pledges in the IP segment of the market. This is particularly true for pivotal OEMs (such as Apple and Samsung), who may single-handedly affect the market’s long-term trajectory. 

The amici completely overlook this possibility, while the FTC argues that this may explain the rebates that Qualcomm gave to Apple.

But even if the rebates Qualcomm gave Apple amounted to de facto exclusivity, there are still important objections. Authorities would notably need to prove that Qualcomm could recoup its initial losses (i.e. that the rebate maximized Qualcomm’s long-term profits). If this was not the case, then the rebates may simply be due to either efficiency considerations or Apple’s significant bargaining power (Apple is routinely cited as a potential source of patent holdout; see, e.g., here and here). 

Another alternative would be for Qualcomm to evict its chipset rivals through strategic entry deterrence or limit pricing (see here and here, respectively). But while the economic literature suggests that incumbents may indeed forgo short-term profits in order to deter rivals from entering the market, these theories generally rest on assumptions of imperfect information and/or strategic commitments. Neither of these factors was alleged in the case at hand.

In particular, there is no sense that Qualcomm’s purported decision to shift royalties from chips to IP somehow harms its short-term profits, or that it is merely a strategic device used to deter the entry of rivals. As the amici themselves seem to acknowledge, the pricing structure maximizes Qualcomm’s short term revenue (even ignoring potential efficiency considerations). 

Note that this is not just a matter of economic policy. The case law relating to unilateral conduct infringements — be it Brooke Group, Alcoa, or Aspen Skiing — almost systematically requires some form of profit sacrifice on the part of the monopolist. (For a legal analysis of this issue in the Qualcomm case, see ICLE’s Amicus brief, and yesterday’s blog post on the topic).

The amici are thus left with the argument that Qualcomm could structure its prices differently, so as to maximize the profits of its rivals. Why it would choose to do so, or should indeed be forced to, is a whole other matter.

Finally, the amici refer to the strategic tying literature (here), typically associated with the Microsoft case and the so-called “platform threat”. But this analogy is highly problematic. 

Unlike Microsoft and its Internet Explorer browser, Qualcomm’s IP is de facto — and necessarily — tied to the chips that practice its technology. This is not a bug, it is a feature of the patent system. Qualcomm is entitled to royalties, whether it manufactures chips itself or leaves that task to rival manufacturers. In other words, there is no counterfactual world where OEMs could obtain Qualcomm-based chips without entering into some form of license agreement (whether directly or indirectly) with Qualcomm. The fact that OEMs must acquire a license that covers Qualcomm’s IP — even when they purchase chips from rivals — is part and parcel of the IP system.

In any case, there is little reason to believe that Qualcomm’s decision to license its IP at the OEM level is somehow exclusionary. The gist of the strategic tying literature is that incumbents may use their market power in a primary market to thwart entry in the market for a complementary good (and ultimately prevent rivals from using their newfound position in the complementary market in order to overthrow the incumbent in the primary market; Carlton & Waldman, 2002). But this is not the case here.

Qualcomm does not appear to be using what little power it might have in the IP segment in order to dominate its rivals in the chip market. As has already been explained above, doing so would imply some profit sacrifice in the IP segment in order to encourage OEMs to accept its IP/chipset bundle, rather than rivals’ offerings. This is the exact opposite of what the FTC and amici allege in the case at hand. The facts thus cut against a conjecture of strategic tying.

Conclusion

So where does this leave the amici and their brief? 

Absent further evidence, their conclusion that Qualcomm injured competition is untenable. There is no evidence that Qualcomm’s pricing structure — enacted through the NLNC policy — significantly harmed competition to the detriment of consumers. 

When all is done and dusted, the amici’s brief ultimately amounts to an assertion that Qualcomm should be made to license its intellectual property at a rate that — in their estimation — is closer to the FRAND benchmark. That judgment is a matter of contract law, not antitrust.

On November 22, the FTC filed its answering brief in the FTC v. Qualcomm litigation. As we’ve noted before, it has always seemed a little odd that the current FTC is so vigorously pursuing this case, given some of the precedents it might set and the Commission majority’s apparent views on such issues. But this may also help explain why the FTC has now opted to eschew the district court’s decision and pursue a novel, but ultimately baseless, legal theory in its brief.

The FTC’s decision to abandon the district court’s reasoning constitutes an important admission: contrary to the district court’s finding, there is no legal basis to find an antitrust duty to deal in this case. As Qualcomm stated in its reply brief (p. 12), “the FTC disclaims huge portions of the decision.” In its effort to try to salvage its case, however, the FTC reveals just how bad its arguments have been from the start, and why the case should be tossed out on its ear.

What the FTC now argues

The FTC’s new theory is that SEP holders that fail to honor their FRAND licensing commitments should be held liable under “traditional Section 2 standards,” even though they do not have an antitrust duty to deal with rivals who are members of the same standard-setting organizations (SSOs) under the “heightened” standard laid out by the Supreme Court in Aspen and Trinko:  

To be clear, the FTC does not contend that any breach of a FRAND commitment is a Sherman Act violation. But Section 2 liability is appropriate when, as here, a monopolist SEP holder commits to license its rivals on FRAND terms, and then implements a blanket policy of refusing to license those rivals on any terms, with the effect of substantially contributing to the acquisition or maintenance of monopoly power in the relevant market…. 

The FTC does not argue that Qualcomm had a duty to deal with its rivals under the Aspen/Trinko standard. But that heightened standard does not apply here, because—unlike the defendants in Aspen, Trinko, and the other duty-to-deal precedents on which it relies—Qualcomm entered into a voluntary contractual commitment to deal with its rivals as part of the SSO process, which is itself a derogation from normal market competition. And although the district court applied a different approach, this Court “may affirm on any ground finding support in the record.” Cigna Prop. & Cas. Ins. Co. v. Polaris Pictures Corp., 159 F.3d 412, 418-19 (9th Cir. 1998) (internal quotation marks omitted) (emphasis added) (pp.69-70).

In other words, according to the FTC, because Qualcomm engaged in the SSO process—which is itself “a derogation from normal market competition”—its evasion of the constraints of that process (i.e., the obligation to deal with all comers on FRAND terms) is “anticompetitive under traditional Section 2 standards.”

The most significant problem with this new standard is not that it deviates from the basis upon which the district court found Qualcomm liable; it’s that it is entirely made up and has no basis in law.

Absent an antitrust duty to deal, patent law grants patentees the right to exclude rivals from using patented technology

Part of the bundle of rights connected with the property right in patents is the right to exclude, and along with it, the right of a patent holder to decide whether, and on what terms, to sell licenses to rivals. The law curbs that right only in select circumstances. Under antitrust law, such a duty to deal, in the words of the Supreme Court in Trinko, “is at or near the outer boundary of §2 liability.” The district court’s ruling, however, is based on the presumption of harm arising from a SEP holder’s refusal to license, rather than an actual finding of anticompetitive effect under §2. The duty to deal it finds imposes upon patent holders an antitrust obligation to license their patents to competitors. (While, of course, participation in an SSO may contractually obligate an SEP-holder to license its patents to competitors, that is an entirely different issue than whether it operates under a mandatory requirement to do so as a matter of public policy).  

The right of patentees to exclude is well-established, and injunctions enforcing that right are regularly issued by courts. Although the rate of permanent injunctions has decreased since the Supreme Court’s eBay decision, research has found that federal district courts still grant them over 70% of the time after a patent holder prevails on the merits. And for patent litigation involving competitors, the same research finds that injunctions are granted 85% of the time.  In principle, even SEP holders can receive injunctions when infringers do not act in good faith in FRAND negotiations. See Microsoft Corp. v. Motorola, Inc., 795 F.3d 1024, 1049 n.19 (9th Cir. 2015):

We agree with the Federal Circuit that a RAND commitment does not always preclude an injunctive action to enforce the SEP. For example, if an infringer refused to accept an offer on RAND terms, seeking injunctive relief could be consistent with the RAND agreement, even where the commitment limits recourse to litigation. See Apple Inc., 757 F.3d at 1331–32

Aside from the FTC, federal agencies largely agree with this approach to the protection of intellectual property. For instance, the Department of Justice, the US Patent and Trademark Office, and the National Institute for Standards and Technology recently released their 2019 Joint Policy Statement on Remedies for Standards-Essential Patents Subject to Voluntary F/RAND Commitments, which clarifies that:

All remedies available under national law, including injunctive relief and adequate damages, should be available for infringement of standards-essential patents subject to a F/RAND commitment, if the facts of a given case warrant them. Consistent with the prevailing law and depending on the facts and forum, the remedies that may apply in a given patent case include injunctive relief, reasonable royalties, lost profits, enhanced damages for willful infringement, and exclusion orders issued by the U.S. International Trade Commission. These remedies are equally available in patent litigation involving standards-essential patents. While the existence of F/RAND or similar commitments, and conduct of the parties, are relevant and may inform the determination of appropriate remedies, the general framework for deciding these issues remains the same as in other patent cases. (emphasis added).

By broadening the antitrust duty to deal well beyond the bounds set by the Supreme Court, the district court opinion (and the FTC’s preferred approach, as well) eviscerates the right to exclude inherent in patent rights. In the words of retired Federal Circuit Judge Paul Michel in an amicus brief in the case: 

finding antitrust liability premised on the exercise of valid patent rights will fundamentally abrogate the patent system and its critical means for promoting and protecting important innovation.

And as we’ve noted elsewhere, this approach would seriously threaten consumer welfare:

Of course Qualcomm conditions the purchase of its chips on the licensing of its intellectual property; how could it be any other way? The alternative would require Qualcomm to actually facilitate the violation of its property rights by forcing it to sell its chips to device makers even if they refuse its patent license terms. In that world, what device maker would ever agree to pay more than a pittance for a patent license? The likely outcome is that Qualcomm charges more for its chips to compensate (or simply stops making them). Great, the FTC says; then competitors can fill the gap and — voila: the market is more competitive, prices will actually fall, and consumers will reap the benefits.

Except it doesn’t work that way. As many economists, including both the current [now former] and a prominent former chief economist of the FTC, have demonstrated, forcing royalty rates lower in such situations is at least as likely to harm competition as to benefit it. There is no sound theoretical or empirical basis for concluding that using antitrust to move royalty rates closer to some theoretical ideal will actually increase consumer welfare. All it does for certain is undermine patent holders’ property rights, virtually ensuring there will be less innovation.

The FTC realizes the district court doesn’t have the evidence to support its duty to deal analysis

Antitrust law does not abrogate the right of a patent holder to exclude and to choose when and how to deal with rivals, unless there is a proper finding of a duty to deal. In order to find a duty to deal, there must be a harm to competition, not just a competitor, which, under the Supreme Court’s Aspen and Trinko cases can be inferred in the duty-to-deal context only where the challenged conduct leads to a “profit sacrifice.” But the record does not support such a finding. As we wrote in our amicus brief:

[T]he Supreme Court has identified only a single scenario from which it may plausibly be inferred that defendant’s refusal to deal with rivals harms consumers: The existence of a prior, profitable course of dealing, and the termination and replacement of that arrangement with an alternative that not only harms rivals, but also is less profitable for defendant. 

A monopolist’s willingness to forego (short-term) profits plausibly permits an inference that conduct is not procompetitive, because harm to a rival caused by an increase in efficiency should lead to higher—not lower—profits for defendant. And “[i]f a firm has been ‘attempting to exclude rivals on some basis other than efficiency,’ it’s fair to characterize its behavior as predatory.” Aspen Skiing, 472 U.S. at 605 (quoting Robert Bork, The Antitrust Paradox 138 (1978)).

In an effort to satisfy this standard, the district court states that “because Qualcomm previously licensed its rivals, but voluntarily stopped licensing rivals even though doing so was profitable, Qualcomm terminated a voluntary and profitable course of dealing.” Slip op. at 137. 

But it is not enough merely that the prior arrangement was profitable. Rather, Trinko and Aspen Skiing hold that when a monopolist ends a profitable relationship with a rival, anticompetitive exclusion may be inferred only when it also refuses to engage in an ongoing arrangement that, in the short run, is more profitable than no relationship at all. The key is the relative value to the monopolist of the current options on offer, not the value to the monopolist of the terminated arrangement. See Trinko, 540 U.S. at 409 (“a willingness to forsake short-term profits”); Aspen Skiing, 472 U.S. at 610–11 (“it was willing to sacrifice short-run benefits”)…

The record here uniformly indicates Qualcomm expected to maximize its royalties by dealing with OEMs rather than rival chip makers; it neither anticipated nor endured short-term loss. As the district court itself concluded, Qualcomm’s licensing practices avoided patent exhaustion and earned it “humongously more lucrative” royalties. Slip op. at 1243–254. That Qualcomm anticipated greater profits from its conduct precludes an inference of anticompetitive harm.

Moreover, Qualcomm didn’t refuse to allow rivals to use its patents; it simply didn’t sell them explicit licenses to do so. As discussed in several places by the district court:

According to Andrew Hong (Legal Counsel at Samsung Intellectual Property Center), during license negotiations, Qualcomm made it clear to Samsung that “Qualcomm’s standard business practice was not to provide licenses to chip manufacturers.” Hong Depo. 161:16-19. Instead, Qualcomm had an “unwritten policy of not going after chip manufacturers.” Id. at 161:24-25… (p.123)

* * *

Alex Rogers (QTL President) testified at trial that as part of the 2018 Settlement Agreement between Samsung and Qualcomm, Qualcomm did not license Samsung, but instead promised only that Qualcomm would offer Samsung a FRAND license before suing Samsung: “Qualcomm gave Samsung an assurance that should Qualcomm ever seek to assert its cellular SEPs against that component business, against those components, we would first make Samsung an offer on fair, reasonable, and non-discriminatory terms.” Tr. at 1989:5-10. (p.124)

This is an important distinction. Qualcomm allows rivals to use its patented technology by not asserting its patent rights against them—which is to say: instead of licensing its technology for a fee, Qualcomm allows rivals to use its technology to develop their own chips royalty-free (and recoups its investment by licensing the technology to OEMs that choose to implement the technology in their devices). 

The irony of this analysis, of course, is that the district court effectively suggests that Qualcomm must charge rivals a positive, explicit price for a license in order to facilitate competition, while treating as anticompetitive Qualcomm’s practice of allowing rivals to use its patented technology for free (or at the “cost” of some small reduction in legal certainty, perhaps).

In any event, the district court’s own factual finding that Qualcomm’s licensing scheme was “humongously” profitable shows there was no profit sacrifice of the sort required for a duty-to-deal finding. The general presumption that patent holders may exclude rivals is not displaced by an antitrust duty to deal absent a profit sacrifice by the patent holder, and Qualcomm sacrificed no profits by adopting the challenged licensing scheme.

It is perhaps unsurprising that the FTC chose not to support the district court’s duty-to-deal argument, even though its holding was in the FTC’s favor. But, while the FTC was correct not to countenance the district court’s flawed arguments, the FTC’s alternative argument in its reply brief is even worse.

The FTC’s novel theory of harm is unsupported and weak

As noted, the FTC’s alternative theory is that Qualcomm violated Section 2 simply by failing to live up to its contractual SSO obligations. For the FTC, because Qualcomm joined an SSO, it may no longer lawfully refuse to deal. Moreover, there is no need to engage in an Aspen/Trinko analysis in order to find liability. Instead, according to the FTC’s brief, liability arises because the evasion of an exogenous pricing constraint (such as an SSO’s FRAND obligation) constitutes an antitrust harm:

Of course, a breach of contract, “standing alone,” does not “give rise to antitrust liability.” City of Vernon v. S. Cal. Edison Co., 955 F.2d 1361, 1368 (9th Cir. 1992); cf. Br. 52 n.6. Instead, a monopolist’s conduct that breaches such a contractual commitment is anticompetitive only when it satisfies traditional Section 2 standards—that is, only when it “tends to impair the opportunities of rivals and either does not further competition on the merits or does so in an unnecessarily restrictive way.” Cascade Health, 515 F.3d at 894. The district court’s factual findings demonstrate that Qualcomm’s breach of its SSO commitments satisfies both elements of that traditional test. (emphasis added)

To begin, it must be noted that the operative language quoted by the FTC from Cascade Health is attributed in Cascade Health to Aspen Skiing. In other words, even Cascade Health recognizes that Aspen Skiing represents the Supreme Court’s interpretation of that language in the duty-to-deal context. And in that case—in contrast to the FTC’s argument in its brief—the Court read that standard to require a showing that a defendant “was not motivated by efficiency concerns and that it was willing to sacrifice short-run benefits and consumer goodwill in exchange for a perceived long-run impact on its… rival.” (Aspen Skiing at 610-11) (emphasis added).

The language quoted by the FTC cannot simultaneously justify an appeal to an entirely different legal standard separate from that laid out in Aspen Skiing. As such, rather than dispensing with the duty to deal requirements laid out in that case, Cascade Health actually reinforces them.

Second, to support its argument, the FTC points to Broadcom v. Qualcomm, 501 F.3d 297 (3d Cir. 2007), as an example of a court upholding an antitrust claim based on a defendant’s violation of FRAND terms.

In Broadcom, relying on the FTC’s enforcement action against Rambus before it was overturned by the D.C. Circuit, the Third Circuit found that there was an actionable issue when Qualcomm deceived other members of an SSO by promising to

include its proprietary technology in the… standard by falsely agreeing to abide by the [FRAND policies], but then breached those agreements by licensing its technology on non-FRAND terms. The intentional acquisition of monopoly power through deception… violates antitrust law. (emphasis added)

Even assuming Broadcom were good law post-Rambus, the case is inapposite. In Broadcom the court found that Qualcomm could be held to violate antitrust law by deceiving the SSO (by falsely promising to abide by FRAND terms) in order to induce it to accept Qualcomm’s patent in the standard. The court’s concern was that, by falsely inducing the SSO to adopt its technology, Qualcomm deceptively acquired monopoly power and limited access to competing technology:

When a patented technology is incorporated in a standard, adoption of the standard eliminates alternatives to the patented technology…. Firms may become locked in to a standard requiring the use of a competitor’s patented technology. 

Key to the court’s finding was that the alleged deception induced the SSO to adopt the technology in its standard:

We hold that (1) in a consensus-oriented private standard-setting environment, (2) a patent holder’s intentionally false promise to license essential proprietary technology on FRAND terms, (3) coupled with an SDO’s reliance on that promise when including the technology in a standard, and (4) the patent holder’s subsequent breach of that promise, is actionable conduct. (emphasis added)

Here, the claim is different. There is no allegation that Qualcomm engaged in deceptive conduct that affected the incorporation of its technology into the relevant standard. Indeed, there is no allegation that Qualcomm’s alleged monopoly power arises from its challenged practices; only that it abused its lawful monopoly power to extract supracompetitive prices. Even if, under Broadcom, an SEP holder may be held liable for falsely promising to abide by a commitment to deal with rivals in order to acquire monopoly power through its inclusion in a technological standard, that does not mean that it can be held liable for evading a commitment to deal with rivals unrelated to its inclusion in a standard, nor that such a refusal to deal should be evaluated under any standard other than that laid out in Aspen Skiing.

Moreover, the FTC nowhere mentions the D.C. Circuit’s subsequent Rambus decision overturning the FTC and calling the holding in Broadcom into question, nor does it discuss the Supreme Court’s NYNEX decision in any depth. Yet these cases stand clearly for the opposite proposition: a court cannot infer competitive harm from a company’s evasion of a FRAND pricing constraint. As we wrote in our amicus brief:

In Rambus Inc. v. FTC, 522 F.3d 456 (D.C. Cir. 2008), the D.C. Circuit, citing NYNEX, rejected the FTC’s contention that it may infer anticompetitive effect from defendant’s evasion of a constraint on its monopoly power in an analogous SEP-licensing case: “But again, as in NYNEX, an otherwise lawful monopolist’s end-run around price constraints, even when deceptive or fraudulent, does not alone present a harm to competition.” Id. at 466 (citation omitted). NYNEX and Rambus reinforce the Court’s repeated holding that an inference is permissible only where it points clearly to anticompetitive effect—and, bad as they may be, evading obligations under other laws or violating norms of “business morality” do not permit a court to undermine “[t]he freedom to switch suppliers [which] lies close to the heart of the competitive process that the antitrust laws seek to encourage. . . . Thus, this Court has refused to apply per se reasoning in cases involving that kind of activity.” NYNEX, 525 U.S. at 137 (citations omitted).

Essentially, the FTC’s brief alleges that Qualcomm’s conduct amounts to an evasion of the constraint imposed by FRAND terms—without which the SSO process itself is presumptively anticompetitive. Indeed, according to the FTC, it is only the FRAND obligation that saves the SSO agreement from being inherently anticompetitive:

In fact, when a firm has made FRAND commitments to an SSO, requiring the firm to comply with its commitments mitigates the risk that the collaborative standard-setting process will harm competition. Product standards—implicit “agreement[s] not to manufacture, distribute, or purchase certain types of products”—“have a serious potential for anticompetitive harm.” Allied Tube, 486 U.S. at 500 (citation and footnote omitted). Accordingly, private SSOs “have traditionally been objects of antitrust scrutiny,” and the antitrust laws tolerate private standard-setting “only on the understanding that it will be conducted in a nonpartisan manner offering procompetitive benefits,” and in the presence of “meaningful safeguards” that prevent the standard-setting process from falling prey to “members with economic interests in stifling product competition.” Id. at 500-01, 506-07; see Broadcom, 501 F.3d at 310, 314-15 (collecting cases). 

FRAND commitments are among the “meaningful safeguards” that SSOs have adopted to mitigate this serious risk to competition…. 

Courts have therefore recognized that conduct that breaches or otherwise “side-steps” these safeguards is appropriately subject to conventional Sherman Act scrutiny, not the heightened Aspen/Trinko standard… (p.83-84)

In defense of the proposition that courts apply “traditional antitrust standards to breaches of voluntary commitments made to mitigate antitrust concerns,” the FTC’s brief cites not only Broadcom, but also two other cases:

While this Court has long afforded firms latitude to “deal or refuse to deal with whomever [they] please[] without fear of violating the antitrust laws,” FountWip, Inc. v. Reddi-Wip, Inc., 568 F.2d 1296, 1300 (9th Cir. 1978) (citing Colgate, 250 U.S. at 307), it, too, has applied traditional antitrust standards to breaches of voluntary commitments made to mitigate antitrust concerns. In Mount Hood Stages, Inc. v. Greyhound Corp., 555 F.2d 687 (9th Cir. 1977), this Court upheld a judgment holding that Greyhound violated Section 2 by refusing to interchange bus traffic with a competing bus line after voluntarily committing to do so in order to secure antitrust approval from the Interstate Commerce Commission for proposed acquisitions. Id. at 697; see also, e.g., Biovail Corp. Int’l v. Hoechst Aktiengesellschaft, 49 F. Supp. 2d 750, 759 (D.N.J. 1999) (breach of commitment to deal in violation of FTC merger consent decree exclusionary under Section 2). (p.85-86)

The cases the FTC cites to justify the proposition all deal with companies sidestepping obligations in order to falsely acquire monopoly power. The two cases cited above both involve companies making promises to government agencies to win merger approval and then failing to follow through. And, as noted, Broadcom deals with the acquisition of monopoly power by making false promises to an SSO to induce the choice of proprietary technology in a standard. While such conduct in the acquisition of monopoly power may be actionable under Broadcom (though this is highly dubious post-Rambus), none of these cases supports the FTC’s claim that an SEP holder violates antitrust law any time it evades an SSO obligation to license its technology to rivals. 

Conclusion

Put simply, the district court’s opinion in FTC v. Qualcomm runs headlong into the Supreme Court’s Aspen decision and founders there. This is why the FTC is trying to avoid analyzing the case under Aspen and subsequent duty-to-deal jurisprudence (including Trinko, the 9th Circuit’s MetroNet decision, and the 10th Circuit’s Novell decision): because it knows that if the appellate court applies those standards, the district court’s duty-to-deal analysis will fail. The FTC’s basis for applying a different standard is unsupportable, however. And even if its logic for applying a different standard were valid, the FTC’s proffered alternative theory is groundless in light of Rambus and NYNEX. The Ninth Circuit should vacate the district court’s finding of liability. 

Ours is not an age of nuance.  It’s an age of tribalism, of teams—“Yer either fer us or agin’ us!”  Perhaps I should have been less surprised, then, when I read the unfavorable review of my book How to Regulate in, of all places, the Federalist Society Review.

I had expected some positive feedback from reviewer J. Kennerly Davis, a contributor to the Federalist Society’s Regulatory Transparency Project.  The “About” section of the Project’s website states:

In the ultra-complex and interconnected digital age in which we live, government must issue and enforce regulations to protect public health and safety.  However, despite the best of intentions, government regulation can fail, stifle innovation, foreclose opportunity, and harm the most vulnerable among us.  It is for precisely these reasons that we must be diligent in reviewing how our policies either succeed or fail us, and think about how we might improve them.

I might not have expressed these sentiments in such pro-regulation terms.  For example, I don’t think government should regulate, even “to protect public health and safety,” absent (1) a market failure and (2) confidence that systematic governmental failures won’t cause the cure to be worse than the disease.  I agree, though, that regulation is sometimes appropriate, that government interventions often fail (in systematic ways), and that regulatory policies should regularly be reviewed with an eye toward reducing the combined costs of market and government failures.

Those are, in fact, the central themes of How to Regulate.  The book sets forth an overarching goal for regulation (minimize the sum of error and decision costs) and then catalogues, for six oft-cited bases for regulating, what regulatory tools are available to policymakers and how each may misfire.  For every possible intervention, the book considers the potential for failure from two sources—the knowledge problem identified by F.A. Hayek and public choice concerns (rent-seeking, regulatory capture, etc.).  It ends up arguing:

  • for property rights-based approaches to environmental protection (versus the command-and-control status quo);
  • for increased reliance on the private sector to produce public goods;
  • that recognizing property rights, rather than allocating usage, is the best way to address the tragedy of the commons;
  • that market-based mechanisms, not shareholder suits and mandatory structural rules like those imposed by Sarbanes-Oxley and Dodd-Frank, are the best way to constrain agency costs in the corporate context;
  • that insider trading restrictions should be left to corporations themselves;
  • that antitrust law should continue to evolve in the consumer welfare-focused direction Robert Bork recommended;
  • against the FCC’s recently abrogated net neutrality rules;
  • that occupational licensure is primarily about rent-seeking and should be avoided;
  • that incentives for voluntary disclosure will usually obviate the need for mandatory disclosure to correct information asymmetry;
  • that the claims of behavioral economics do not justify paternalistic policies to protect people from themselves; and
  • that “libertarian-paternalism” is largely a ruse that tends to morph into hard paternalism.

Given the congruence of my book’s prescriptions with the purported aims of the Regulatory Transparency Project—not to mention the laundry list of specific market-oriented policies the book advocates—I had expected a generally positive review from Mr. Davis (whom I sincerely thank for reading and reviewing the book; book reviews are a ton of work).

I didn’t get what I’d expected.  Instead, Mr. Davis denounced my book for perpetuating “progressive assumptions about state and society” (“wrongheaded” assumptions, the editor’s introduction notes).  He responded to my proposed methodology with a “meh,” noting that it “is not clearly better than the status quo.”  His one compliment, which I’ll gladly accept, was that my discussion of economic theory was “generally accessible.”

Following are a few thoughts on Mr. Davis’s critiques.

Are My Assumptions Progressive?

According to Mr. Davis, my book endorses three progressive concepts:

(i) the idea that market based arrangements among private parties routinely misallocate resources, (ii) the idea that government policymakers are capable of formulating executive directives that can correct private ordering market failures and optimize the allocation of resources, and (iii) the idea that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.

I agree with Mr. Davis that these are progressive ideas.  If my book embraced them, it might be fair to label it “progressive.”  But it doesn’t.  Not one of them.

  1. Market Failure

Nothing in my book suggests that “market based arrangements among private parties routinely misallocate resources.”  I do say that “markets sometimes fail to work well,” and I explain how, in narrow sets of circumstances, market failures may emerge.  Understanding exactly what may happen in those narrow sets of circumstances helps to identify the least restrictive option for addressing problems and would thus seem a prerequisite to effective policymaking for a conservative or libertarian.  My mere invocation of the term “market failure,” however, was enough for Mr. Davis to kick me off the team.

Mr. Davis ignored altogether the many points where I explain how private ordering fixes situations that could lead to poor market performance.  At the end of the information asymmetry chapter, for example, I write,

This chapter has described information asymmetry as a problem, and indeed it is one.  But it can also present an opportunity for profit.  Entrepreneurs have long sought to make money—and create social value—by developing ways to correct informational imbalances and thereby facilitate transactions that wouldn’t otherwise occur.

I then describe the advent of companies like Carfax, Airbnb, and Uber, all of which offer privately ordered solutions to instances of information asymmetry that might otherwise create lemons problems.  I conclude:

These businesses thrive precisely because of information asymmetry.  By offering privately ordered solutions to the problem, they allow previously under-utilized assets to generate heretofore unrealized value.  And they enrich the people who created and financed them.  It’s a marvelous thing.

That theme—that potential market failures invite privately ordered solutions that often obviate the need for any governmental fix—permeates the book.  In the public goods chapter, I spend a great deal of time explaining how privately ordered devices like assurance contracts facilitate the production of amenities that are non-rivalrous and non-excludable.  In discussing the tragedy of the commons, I highlight Elinor Ostrom’s work showing how “groups of individuals have displayed a remarkable ability to manage commons goods effectively without either privatizing them or relying on government intervention.”  In the chapter on externalities, I spend a full seven pages explaining why Coasean bargains are more likely than most people think to prevent inefficiencies from negative externalities.  In the chapter on agency costs, I explain why privately ordered solutions like the market for corporate control would, if not precluded by some ill-conceived regulations, constrain agency costs better than structural rules from the government.

Disregarding all this, Mr. Davis chides me for assuming that “markets routinely fail.”  And, for good measure, he explains that government interventions are often a bigger source of failure, a point I repeatedly acknowledge, as it is a—perhaps the—central theme of the book.

  2. Trust in Experts

In what may be the strangest (and certainly the most misleading) part of his review, Mr. Davis criticizes me for placing too much confidence in experts by giving short shrift to the Hayekian knowledge problem and the insights of public choice.

          a.  The Knowledge Problem

According to Mr. Davis, the approach I advocate “is centered around fully functioning experts.”  He continues:

This progressive trust in experts is misplaced.  It is simply false to suppose that government policymakers are capable of formulating executive directives that effectively improve upon private arrangements and optimize the allocation of resources.  Friedrich Hayek and other classical liberals have persuasively argued, and everyday experience has repeatedly confirmed, that the information needed to allocate resources efficiently is voluminous and complex and widely dispersed.  So much so that government experts acting through top down directives can never hope to match the efficiency of resource allocation made through countless voluntary market transactions among private parties who actually possess the information needed to allocate the resources most efficiently.

Amen and hallelujah!  I couldn’t agree more!  Indeed, I said something similar when I came to the first regulatory tool my book examines (and criticizes), command-and-control pollution rules.  I wrote:

The difficulty here is an instance of a problem that afflicts regulation generally.  At the end of the day, regulating involves centralized economic planning:  A regulating “planner” mandates that productive resources be allocated away from some uses and toward others.  That requires the planner to know the relative value of different resource uses.  But such information, in the words of Nobel laureate F.A. Hayek, “is not given to anyone in its totality.”  The personal preferences of thousands or millions of individuals—preferences only they know—determine whether there should be more widgets and fewer gidgets, or vice-versa.  As Hayek observed, voluntary trading among resource owners in a free market generates prices that signal how resources should be allocated (i.e., toward the uses for which resource owners may command the highest prices).  But centralized economic planners—including regulators—don’t allocate resources on the basis of relative prices.  Regulators, in fact, generally assume that prices are wrong due to the market failure the regulators are seeking to address.  Thus, the so-called knowledge problem that afflicts regulation generally is particularly acute for command-and-control approaches that require regulators to make refined judgments on the basis of information about relative costs and benefits.

That was just the first of many times I invoked the knowledge problem to argue against top-down directives and in favor of market-oriented policies that would enable individuals to harness local knowledge to which regulators would not be privy.  The index to the book includes a “knowledge problem” entry with no fewer than nine sub-entries (e.g., “with licensure regimes,” “with Pigouvian taxes,” “with mandatory disclosure regimes”).  There are undoubtedly more mentions of the knowledge problem than those listed in the index, for the book assesses the degree to which the knowledge problem creates difficulties for every regulatory approach it considers.

Mr. Davis does mention one time where I “acknowledge[] the work of Hayek” and “recognize[] that context specific information is vitally important,” but he says I miss the point:

Having conceded these critical points [about the importance of context-specific information], Professor Lambert fails to follow them to the logical conclusion that private ordering arrangements are best for regulating resources efficiently.  Instead, he stops one step short, suggesting that policymakers defer to the regulator most familiar with the regulated party when they need context-specific information for their analysis.  Professor Lambert is mistaken.  The best information for resource allocation is not to be found in the regional office of the regulator.  It resides with the persons who have long been controlled and directed by the progressive regulatory system.  These are the ones to whom policymakers should defer.

I was initially puzzled by Mr. Davis’s description of how my approach would address the knowledge problem.  It’s inconsistent with the way I described the problem (the “regional office of the regulator” wouldn’t know people’s personal preferences, etc.), and I couldn’t remember ever suggesting that regulatory devolution—delegating decisions down toward local regulators—was the solution to the knowledge problem.

When I checked the citation in the sentences just quoted, I realized that Mr. Davis had misunderstood the point I was making in the passage he cited (my own fault, no doubt, not his).  The cited passage was at the very end of the book, where I was summarizing the book’s contributions.  I claimed to have set forth a plan for selecting regulatory approaches that would minimize the sum of error and decision costs.  I wanted to acknowledge, though, the irony of promulgating a generally applicable plan for regulating in a book that, time and again, decries top-down imposition of one-size-fits-all rules.  Thus, I wrote:

A central theme of this book is that Hayek’s knowledge problem—the fact that no central planner can possess and process all the information needed to allocate resources so as to unlock their greatest possible value—applies to regulation, which is ultimately a set of centralized decisions about resource allocation.  The very knowledge problem besetting regulators’ decisions about what others should do similarly afflicts pointy-headed academics’ efforts to set forth ex ante rules about what regulators should do.  Context-specific information to which only the “regulator on the spot” is privy may call for occasional departures from the regulatory plan proposed here.

As should be obvious, my point was not that the knowledge problem can generally be fixed by regulatory devolution.  Rather, I was acknowledging that the general regulatory approach I had set forth—i.e., the rules policymakers should follow in selecting among regulatory approaches—may occasionally misfire and should thus be implemented flexibly.

           b.  Public Choice Concerns

A second problem with my purported trust in experts, Mr. Davis explains, stems from the insights of public choice:

Actual policymakers simply don’t live up to [Woodrow] Wilson’s ideal of the disinterested, objective, apolitical, expert technocrat.  To the contrary, a vast amount of research related to public choice theory has convincingly demonstrated that decisions of regulatory agencies are frequently shaped by politics, institutional self-interest and the influence of the entities the agencies regulate.

Again, huzzah!  Those words could have been lifted straight out of the three full pages of discussion I devoted to public choice concerns with the very first regulatory intervention the book considered.  A snippet from that discussion:

While one might initially expect regulators pursuing the public interest to resist efforts to manipulate regulation for private gain, that assumes that government officials are not themselves rational, self-interest maximizers.  As scholars associated with the “public choice” economic tradition have demonstrated, government officials do not shed their self-interested nature when they step into the public square.  They are often receptive to lobbying in favor of questionable rules, especially since they benefit from regulatory expansions, which tend to enhance their job status and often their incomes.  They also tend to become “captured” by powerful regulatees who may shower them with personal benefits and potentially employ them after their stints in government have ended.

That’s just a slice.  Elsewhere in those three pages, I explain (1) how the dynamic of concentrated benefits and diffuse costs allows inefficient protectionist policies to persist, (2) how firms that benefit from protectionist regulation are often assisted by “pro-social” groups that will make a public interest case for the rules (Bruce Yandle’s Bootleggers and Baptists syndrome), and (3) the “[t]wo types of losses [that] result from the sort of interest-group manipulation public choice predicts.”  And that’s just the book’s initial foray into public choice.  The entry for “public choice concerns” in the book’s index includes eight sub-entries.  As with the knowledge problem, I addressed the public choice issues that could arise from every major regulatory approach the book considered.

For Mr. Davis, though, that was not enough to keep me out of the camp of Wilsonian progressives.  He explains:

Professor Lambert devotes a good deal of attention to the problem of “agency capture” by regulated entities.  However, he fails to acknowledge that a symbiotic relationship between regulators and regulated is not a bug in the regulatory system, but an inherent feature of a system defined by extensive and continuing government involvement in the allocation of resources.

To be honest, I’m not sure what that last sentence means.  Apparently, I didn’t recite some talismanic incantation that would indicate that I really do believe public choice concerns are a big problem for regulation.  I did say this in one of the book’s many discussions of public choice:

A regulator that has both regular contact with its regulatees and significant discretionary authority over them is particularly susceptible to capture.  The regulator’s discretionary authority provides regulatees with a strong motive to win over the regulator, which has the power to hobble the regulatee’s potential rivals and protect its revenue stream.  The regular contact between the regulator and the regulatee provides the regulatee with better access to those in power than that available to parties with opposing interests.  Moreover, the regulatee’s preferred course of action is likely (1) to create concentrated benefits (to the regulatee) and diffuse costs (to consumers generally), and (2) to involve an expansion of the regulator’s authority.  The upshot is that those who bear the cost of the preferred policy are less likely to organize against it, and regulators, who benefit from turf expansion, are more likely to prefer it.  Rate-of-return regulation thus involves the precise combination that leads to regulatory expansion at consumer expense: broad and discretionary government power, close contact between regulators and regulatees, decisions that generally involve concentrated benefits and diffuse costs, and regular opportunities to expand regulators’ power and prestige.

In light of this combination of features, it should come as no surprise that the history of rate-of-return regulation is littered with instances of agency capture and regulatory expansion.

Even that was not enough to convince Mr. Davis that I reject the Wilsonian assumption of “disinterested, objective, apolitical, expert technocrat[s].”  I don’t know what more I could have said.

  3. Social Welfare

Mr. Davis is right when he says, “Professor Lambert’s ultimate goal for his book is to provide policymakers with a resource that will enable them to make regulatory decisions that produce greater social welfare.”  But nowhere in my book do I suggest, as he says I do, “that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.”  What I mean by “social welfare” is the aggregate welfare of all the individuals in a society.  And I’m careful to point out that only they know what makes them better off.  (At one point, for example, I write that “[g]overnment planners have no way of knowing how much pleasure regulatees derive from banned activities…or how much displeasure they experience when they must comply with an affirmative command…. [W]ith many paternalistic policies and proposals…government planners are really just guessing about welfare effects.”)

I agree with Mr. Davis that “[t]here is no single generally accepted methodology that anyone can use to determine objectively how and to what extent the welfare of society will be affected by a particular regulatory directive.”  For that reason, nowhere in the book do I suggest any sort of “metes and bounds” measurement of social welfare.  (I certainly do not endorse the use of GDP, which Mr. Davis rightly criticizes; that term appears nowhere in the book.)

Rather than prescribing any sort of precise measurement of social welfare, my book operates at the level of general principles:  We have reasons to believe that inefficiencies may arise when conditions are thus; there is a range of potential government responses to this situation—from doing nothing, to facilitating a privately ordered solution, to mandating various actions; based on our experience with these different interventions, the likely downsides of each (stemming from, for example, the knowledge problem and public choice concerns) are so-and-so; all things considered, the aggregate welfare of the individuals within this group will probably be greatest with policy x.

It is true that the thrust of the book is consequentialist, not deontological.  But it’s a book about policy, not ethics.  And its version of consequentialism is rule, not act, utilitarianism.  Is a consequentialist approach to policymaking enough to render one a progressive?  Should we excise John Stuart Mill’s On Liberty from the classical liberal canon?  I surely hope not.

Is My Proposed Approach an Improvement?

Mr. Davis’s second major criticism of my book—that what it proposes is “just the status quo”—has more bite.  By that, I mean two things.  First, it’s a more painful criticism to receive.  It’s easier for an author to hear “you’re saying something wrong” than “you’re not saying anything new.”

Second, there may be more merit to this criticism.  As Mr. Davis observes, I noted in the book’s introduction that “[a]t times during the drafting, I … wondered whether th[e] book was ‘original’ enough.”  I ultimately concluded that it was because it “br[ought] together insights of legal theorists and economists of various stripes…and systematize[d] their ideas into a unified, practical approach to regulating.”  Mr. Davis thinks I’ve overstated the book’s value, and he may be right.

The current regulatory landscape would suggest, though, that my book’s approach to selecting among potential regulatory policies isn’t “just the status quo.”  The approach I recommend would generate the specific policies catalogued at the outset of this response (in the bullet points).  The fact that those policies haven’t been implemented under the existing regulatory approach suggests that what I’m recommending must be something different than the status quo.

Mr. Davis observes—and I acknowledge—that my recommended approach resembles the review required of major executive agency regulations under Executive Order 12866, President Clinton’s revised version of President Reagan’s Executive Order 12291.  But that order is quite limited in its scope.  It doesn’t cover “minor” executive agency rules (those with expected costs of less than $100 million) or rules from independent agencies or from Congress or from courts or at the state or local level.  Moreover, I understand from talking to a former administrator of the Office of Information and Regulatory Affairs, which is charged with implementing the order, that it has actually generated little serious consideration of less restrictive alternatives, something my approach emphasizes.

What my book proposes is not some sort of governmental procedure; indeed, I emphasize in the conclusion that the book “has not addressed … how existing regulatory institutions should be reformed to encourage the sort of analysis th[e] book recommends.”  Instead, I propose a way to think through specific areas of regulation, one that is informed by a great deal of learning about both market and government failures.  The best audience for the book is probably law students who will someday find themselves influencing public policy as lawyers, legislators, regulators, or judges.  I am thus heartened that the book is being used as a text at several law schools.  My guess is that few law students receive significant exposure to Hayek, public choice, etc.

So, who knows?  Perhaps the book will make a difference at the margin.  Or perhaps it will amount to sound and fury, signifying nothing.  But I don’t think a classical liberal could fairly say that the analysis it counsels “is not clearly better than the status quo.”

A Truly Better Approach to Regulating

Mr. Davis ends his review with a stirring call to revamp the administrative state to bring it “in complete and consistent compliance with the fundamental law of our republic embodied in the Constitution, with its provisions interpreted to faithfully conform to their original public meaning.”  Among other things, he calls for restoring the separation of powers, which has been erased in agencies that combine legislative, executive, and judicial functions, and for eliminating unchecked government power, which results when the legislature delegates broad rulemaking and adjudicatory authority to politically unaccountable bureaucrats.

Once again, I concur.  There are major problems—constitutional and otherwise—with the current state of administrative law and procedure.  I’d be happy to tear down the existing administrative state and begin again on a constitutionally constrained tabula rasa.

But that’s not what my book was about.  I deliberately set out to write a book about the substance of regulation, not the process by which rules should be imposed.  I took that tack for two reasons.  First, there are numerous articles and books, by scholars far more expert than I, on the structure of the administrative state.  I could add little value on administrative process.

Second, the less-addressed substantive question—what, as a substantive matter, should a policy addressing x do?—would exist even if Mr. Davis’s constitutionally constrained regulatory process were implemented.  Suppose that we got rid of independent agencies, curtailed delegations of rulemaking authority to the executive branch, and returned to a system in which Congress wrote all rules, the executive branch enforced them, and the courts resolved any disputes.  Someone would still have to write the rule, and that someone (or group of people) should have some sense of the pros and cons of one approach over another.  That is what my book seeks to provide.

A hard core Hayekian—one who had immersed himself in Law, Legislation, and Liberty—might respond that no one should design regulation (purposive rules that Hayek would call thesis) and that efficient, “purpose-independent” laws (what Hayek called nomos) will just emerge as disputes arise.  But that is not Mr. Davis’s view.  He writes:

A system of governance or regulation based on the rule of law attains its policy objectives by proscribing actions that are inconsistent with those objectives.  For example, this type of regulation would prohibit a regulated party from discharging a pollutant in any amount greater than the limiting amount specified in the regulation.  Under this proscriptive approach to regulation, any and all actions not specifically prohibited are permitted.

Mr. Davis has thus contemplated a purposive rule, crafted by someone.  That someone should know the various policy options and the upsides and downsides of each.  How to Regulate could help.

Conclusion

I’m not sure why Mr. Davis viewed my book as no more than dressed-up progressivism.  Maybe he was triggered by the book’s cover art, which he says “is faithful to the progressive tradition,” resembling “the walls of public buildings from San Francisco to Stalingrad.”  Maybe it was a case of Sunstein Derangement Syndrome.  (Progressive legal scholar Cass Sunstein had nice things to say about the book, despite its criticisms of a number of his ideas.)  Or perhaps it was that I used the term “market failure.”  Many conservatives and libertarians fear, with good reason, that conceding the existence of market failures invites all sorts of government meddling.

At the end of the day, though, I believe we classical liberals should stop pretending that market outcomes are always perfect, that pure private ordering is always and everywhere the best policy.  We should certainly sing markets’ praises; they usually work so well that people don’t even notice them, and we should point that out.  We should continually remind people that government interventions also fail—and in systematic ways (e.g., the knowledge problem and public choice concerns).  We should insist that a market failure is never a sufficient condition for a governmental fix; one must always consider whether the cure will be worse than the disease.  In short, we should take and promote the view that government should operate “under a presumption of error.”

That view, economist Aaron Director famously observed, is the essence of laissez faire.  It’s implicit in the purpose statement of the Federalist Society’s Regulatory Transparency Project.  And it’s the central point of How to Regulate.

So let’s go easy on the friendly fire.

Allen Gibby is a Senior Fellow at the International Center for Law & Economics

Modern agriculture companies like Monsanto, DuPont, and Syngenta develop cutting-edge seeds containing genetic traits that make them resistant to insecticides and herbicides. They also develop crop protection chemicals to use throughout the life of the crop to further safeguard it from pests, weeds and grasses, and disease. No single company has a monopoly on all the high-demand seeds and traits or crop protection products. Thus, in order for Company A to produce a variety of corn that is resistant to Company B’s herbicide, it may have to license a trait patented by Company B in order to even begin researching its product, and it may need further licenses (and other inputs) from Company B as its research progresses in unpredictable directions.

While the agriculture industry has a long history of successful cross-licensing arrangements between agricultural input providers, licensing talks can break down (and do so for any number of reasons), potentially thwarting a nascent product before research has even begun — or, possibly worse, well into its development. The cost of such a breakdown isn’t merely the loss of the intended product; it’s also the loss of the other products Company A could have been developing, as well as the costs of negotiation.

To avoid this outcome, as well as other challenges such as waiting years for Company B to fully develop and make a chemical available before it engages in arm’s-length negotiations with Company A, one solution is for Company A and Company B to merge and combine their expertise to design novel seeds and traits and complementary crop protection products.

The potential for this type of integration seems evident in the proposed Dow-DuPont and Bayer-Monsanto deals where, of the companies merging, one earns most of its revenue from seeds and traits (DuPont and Monsanto) and the other from crop protection (Dow and Bayer).

Do the complementary aspects inherent in these deals increase the likelihood that the merged entities will gain the ability and the incentive to prevent entry, foreclose competitors, and thereby harm consumers?  

Diana Moss, who will surely have more to say on this in her post, believes the answer is yes. She recently voiced concerns during a Senate hearing that the Dow-DuPont and Bayer-Monsanto mergers would have negative conglomerate effects. According to Moss’s testimony, the mergers would create:

substantial vertical integration between traits, seeds, and chemicals. The resulting “platforms” will likely be engineered for the purpose of creating exclusive packages of traits, seeds and chemicals for farmers that do not “interoperate” with rival products. This will likely raise barriers for smaller innovators and increase the risk that they are foreclosed from access to technology and other resources to compete effectively.

Decades of antitrust policy and practice present a different perspective, however. While it’s true that the combined entities certainly might offer combined stacks of products to farmers, doing so would enable Dow-DuPont and Bayer-Monsanto to vigorously innovate and compete with each other, a combined ChemChina-Syngenta, and an increasing number of agriculture and biotechnology startups (per AgFunder, investments in such startups totaled $719 million in 2016, representing a 150% increase from 2015’s figure).

More importantly, this objection assumes that the only, or predominant, effect of such integration would be to erect barriers to entry, rather than to improve product quality, offer expanded choices to consumers, and enhance competition.

Concerns about conglomerate effects making life harder for small businesses are not new. From 1965 to 1975, the United States experienced numerous conglomerate mergers. Among the theories of competitive harm advanced by the courts and antitrust authorities to address their envisioned negative effects was entrenchment. Under this theory, mergers could be blocked if they strengthened an incumbent firm through increased efficiencies not available to other firms, access to a broader line of products, or increased financial muscle to discourage entry.

Entrenchment was a nice theory, but for over a decade the DoJ could not identify any conditions under which conglomerate effects would give the merged firm the ability and incentive to raise price and restrict output. The DoJ determined that the harms of foreclosure and barriers to smaller businesses were remote and easily outweighed by the potential benefits, which include

providing infusions of capital, improving management efficiency either through replacement of mediocre executives or reinforcement of good ones with superior financial control and management information systems, transfer of technical and marketing know-how and best practices across traditional industry lines; meshing of research and distribution; increasing ability to ride out economic fluctuations through diversification; and providing owners-managers a market for selling the enterprises they created, thus encouraging entrepreneurship and risk-taking.

Consequently, in its 1982 Merger Guidelines, the DoJ concluded that it should rarely, if ever, interfere to mitigate conglomerate effects.

In the Dow-DuPont and Bayer-Monsanto deals, there are no overwhelming factors that would contradict the presumption that the conglomerate effects of improved product quality and expanded choices for farmers outweigh the potential harms.

To find such harms, the DoJ reasoned, would require satisfying a highly attenuated chain of causation that “invites competition authorities to speculate about what the future is likely to bring.” Such speculation — which includes but is not limited to: weighing whether rivals can match the merged firm’s costs, whether rivals will exit, whether firms will not re-enter the market in response to price increases above pre-merger levels, and whether what buyers gain through prices set below pre-merger levels is less than what they later lose through paying higher than pre-merger prices — does not inspire confidence that even the most clairvoyant regulator would properly make trade-offs that would ultimately benefit consumers.

Moss’s argument also presumes that the merger would compel farmers to purchase the potentially “exclusive packages of traits, seeds and chemicals… that do not ‘interoperate’ with rival products.” But while there aren’t a large number of “platform” competitors in agribusiness, there are still enough to provide viable alternatives to any “exclusive packages” and cross-licensed combinations of seeds, traits, and chemicals that Dow-DuPont and Bayer-Monsanto may attempt to sell.

First, even if a rival fails to offer an equally “good deal” or suffers a loss of sales or market share, it would be illogical, the DoJ concluded, to condemn mergers that promote benefits such as resource savings, more efficient production modes, and efficient bundling (i.e., bundling that benefits customers by offering them improved products, lower prices or lower transactions costs due to the purchase of a combined stack through a “one-stop shop”). As Robert Bork put it, far from “frightening smaller companies into semi-paralysis,” conglomerate mergers that generate greater efficiencies will force smaller competitors to compete more effectively, making consumers better off.

Second, it is highly unlikely these deals will adversely affect the long-standing prevalence of cross-licensing arrangements between agricultural input providers. Agriculture companies have a long history of supplying competitors with products while simultaneously competing with them. For decades, antitrust scholars have been skeptical of claims that firms have incentives to deal unreasonably with providers of complementary products, and the ag-biotech industry seems to bear this out. This is because discriminating anticompetitively against complements often devalues the firm’s own platform. For example, Apple’s App Store is more valuable to iPhone users because it includes messaging apps like WeChat, WhatsApp, and Facebook Messenger, even though they compete directly with iMessage and FaceTime. By excluding these apps, Apple would devalue the iPhone to hundreds of millions of its users who also use these apps.

In the case of the pending mergers, not only would a combined Dow-DuPont and Bayer-Monsanto offer their own combined stacks, but their platforms would also increase in value by providing a broad suite of alternative cross-licensed product combinations. And, of course, the combined stack (independent of whether it’s entirely produced by a Dow-DuPont or Bayer-Monsanto) that offers sufficiently increased value to farmers over other packages or non-packaged alternatives will — and should — win in the end.

The Dow-DuPont and Bayer-Monsanto mergers are an opportunity to remember why, decades ago, the DoJ concluded that it should rarely, if ever, interfere to mitigate conglomerate effects and an occasion to highlight the incentives that providers of complementary products have to deal reasonably with one another.

 

For several decades, U.S. federal antitrust enforcers, on a bipartisan basis, have publicly supported the proposition that antitrust law seeks to advance consumer welfare by promoting economic efficiency and vigorous competition on the merits.  This reflects an economic interpretation of the antitrust laws adopted by the Supreme Court beginning in the late 1970s, inspired by the scholarship of Robert Bork and other law and economics experts.  As leading antitrust scholars Judge (and Professor) Douglas Ginsburg and Professor Joshua Wright have explained (footnotes omitted), the “economic approach” to antitrust has benefited the American economy and consumers:

The promotion of economic welfare as the lodestar of antitrust laws—to the exclusion of social, political, and protectionist goals—transformed the state of the law and restored intellectual coherence to a body of law Robert Bork had famously described as paradoxical. Indeed, there is now widespread agreement that this evolution toward welfare and away from noneconomic considerations has benefitted consumers and the economy more broadly. Welfare-based standards have led to greater predictability in judicial and agency decision making. They also rule out theories of liability (e.g., a transaction will tend to reduce the number of small businesses in a market) and defenses (e.g., the restraint upon trade is necessary to save consumers from the consequences of competition) that would significantly harm consumers.

It is therefore most regrettable that the Attorney General of the United States, who oversees U.S. Executive Branch antitrust enforcement (which is carried out by the U.S. Justice Department’s Antitrust Division), recently delivered a speech on federal antitrust enforcement that is, at the very least, in severe tension with the (up to now) bipartisan federal antitrust enforcement consensus regarding the efficiency-centered goal of antitrust.  In an April 6 keynote luncheon address to the Spring Meeting of the American Bar Association’s (ABA) Antitrust Section, Attorney General Loretta E. Lynch focused instead on the themes of “fairness” and “economic justice” in discussing American antitrust enforcement:

[The ABA Antitrust Section] ha[s] always stood at the forefront of the Bar’s [laudable] efforts to guarantee fair competition; to encourage transparent business practices; and, above all, to secure economic justice. . . .  [O]ur choices have always been steeped in fundamental fairness.  The Sherman [Antitrust] Act was also a landmark in the history of the Department of Justice, adding the maintenance of a level economic playing field to our fundamental mission of upholding the law and seeking justice.  And the principle that it embodied – that the people of this country deserve the freedom to navigate their own path and chart their own future – still stands at the core of our work.  Today, the Department of Justice is as committed to fair, open and competitive markets as it has ever been. . . .  All of us in this room have a responsibility to stand up for people where they cannot stand up for themselves.  We have a duty to defend the institutions that make this country strong . . . [including] markets that allow for competition that is fair, . . . [and] a nation where every person has a meaningful chance to succeed and to thrive. . . .  [A]ll of you are making a significant and lasting contribution to a stronger and more just society. 

“Fairness” and “economic justice” may be laudable (albeit ill-defined) social goals in the abstract, but antitrust is ill-suited to advance them.  Indeed, history demonstrates that invocation of those goals was associated with welfare-inimical American antitrust enforcement policies that ill-served the American public.  Prior to the 1970s, “fairness,” “justice,” and related concepts (such as “a level playing field”) were often cited by the courts and public enforcers to justify antitrust interventions aimed at protecting entrenched small businesses from more efficient competitors, and at precluding the aggressive exploitation of efficiencies by large innovative companies.  This often resulted in higher prices to consumers, sluggish economic productivity, and slower innovation and economic growth, to the detriment of the overall American economy.

Admittedly, modern U.S. federal antitrust case law holdings and enforcement tools emphasize economic efficiency, rather than “fairness” and “justice,” so one might be tempted to dismiss the Attorney General’s remarks as unfortunate but of no real consequence.  (In fairness, the Attorney General did pay lip service to the importance of competition and to recent enforcement victories by the Antitrust Division, although inexplicably she had nothing to say about cartel prosecutions – the one area of antitrust that is most clearly welfare-enhancing.)  Unfortunately, however, many foreign antitrust enforcement officials and practitioners attended her speech, which by now has been disseminated throughout the global antitrust enforcement community.  Significantly, a number of major foreign jurisdictions have recently employed antitrust concepts of “unfair competition” and “superior bargaining position” to attack efficient, economic welfare-enhancing business arrangements, such as patent licensing restrictions, by major companies (including U.S. multinationals).  When American competition experts urge foreign antitrust officials to eschew such tactics in favor of efficiency-based antitrust rules, it would not be surprising to see those officials invoke Attorney General Lynch’s unfortunate paean to “fairness” in defense of their approach.  (For this reason, U.S. Federal Trade Commissioner Maureen Ohlhausen has stressed that American officials should be careful in their public antitrust pronouncements, a warning that obviously went unheeded by the Attorney General’s April 6 speechwriter.)

One may only hope that going forward, Attorney General Lynch, and the U.S. antitrust enforcers who report to her, will keep these concerns in mind and publicly reaffirm their dedication to the accepted mainstream consensus view that American antitrust policy is based on efficiency and consumer welfare considerations, not on bygone populist nostrums of “fairness.”  In so doing, U.S. officials should emphasize that efficiency-based antitrust strengthens innovation, advances consumer welfare, and fosters strong economies, considerations that ideally should prove attractive to public officials from all jurisdictions.