Archives For commercialization

The leading contribution to sound competition policy made by former Assistant U.S. Attorney General Makan Delrahim was his enunciation of the “New Madison Approach” to patent-antitrust enforcement—and, in particular, to the antitrust treatment of standard essential patent licensing (see, for example, here, here, and here). In short (citations omitted):

The New Madison Approach (“NMA”) advanced by former Assistant Attorney General for Antitrust Makan Delrahim is a simple analytical framework for understanding the interplay between patents and antitrust law arising out of standard setting. A key aspect of the NMA is its rejection of the application of antitrust law to the “hold-up” problem, whereby patent holders demand supposedly supra-competitive licensing fees to grant access to their patents that “read on” a standard – standard essential patents (“SEPs”). This scenario is associated with an SEP holder’s prior commitment to a standard setting organization (“SSO”), that is: if its patented technology is included in a proposed new standard, it will license its patents on fair, reasonable, and non-discriminatory (“FRAND”) terms. “Hold-up” is said to arise subsequently, when the SEP holder reneges on its FRAND commitment and demands that a technology implementer pay higher-than-FRAND licensing fees to access its SEPs.

The NMA has four basic premises that are aimed at ensuring that patent holders have adequate incentives to innovate and create welfare-enhancing new technologies, and that licensees have appropriate incentives to implement those technologies:

1. Hold-up is not an antitrust problem. Accordingly, an antitrust remedy is not the correct tool to resolve patent licensing disputes between SEP-holders and implementers of a standard.

2. SSOs should not allow collective actions by standard-implementers to disfavor patent holders in setting the terms of access to patents that cover a new standard.

3. A fundamental element of patent rights is the right to exclude. As such, SSOs and courts should be hesitant to restrict SEP holders’ right to exclude implementers from access to their patents by, for example, seeking injunctions.

4. Unilateral and unconditional decisions not to license a patent should be per se legal.

Delrahim emphasizes that the threat of antitrust liability, specifically treble damages, distorts the incentives associated with good faith negotiations with SSOs over patent inclusion. Contract law, he goes on to note, is perfectly capable of providing an ex post solution to licensing disputes between SEP holders and implementers of a standard. Unlike antitrust law, a contract law framework allows all parties equal leverage in licensing negotiations.

As I have explained elsewhere, the NMA is best seen as a set of policies designed to spark dynamic economic growth:

[P]atented technology serves as a catalyst for the wealth-creating diffusion of innovation. This occurs through numerous commercialization methods; in the context of standardized technologies, the development of standards is a process of discovery. At each [SSO], the process of discussion and negotiation between engineers, businesspersons, and all other relevant stakeholders reveals the relative value of alternative technologies and tends to result in the best patents being integrated into a standard.

The NMA supports this process of discovery and implementation of the best patented technology born of the labors of the innovators who created it. As a result, the NMA ensures SEP valuations that allow SEP holders to obtain an appropriate return for the new economic surplus that results from the commercialization of standard-engendered innovations. It recognizes that dynamic economic growth is fostered through the incentivization of innovative activities backed by patents.

In sum, the NMA seeks to promote innovation by offering incentives for SEP-driven technological improvements. As such, it rejects as ill-founded prior Federal Trade Commission (FTC) litigation settlements and Obama-era U.S. Justice Department (DOJ) Antitrust Division policy statements that artificially favored implementer licensees’ interests over those of SEP licensors (see here).

In light of the NMA, DOJ cooperated with the U.S. Patent and Trademark Office and National Institute of Standards and Technology (NIST) in issuing a 2019 SEP Policy Statement clarifying that an SEP holder’s promise to license a patent on fair, reasonable, and non-discriminatory (FRAND) terms does not bar it from seeking any available remedy for patent infringement, including an injunction. This signaled that SEPs and non-SEP patents enjoy equivalent legal status.

DOJ also issued a 2020 supplement to its 2015 Institute of Electrical and Electronics Engineers (IEEE) business review letter. The 2015 letter had found no legal fault with revised IEEE standard-setting policies that implicitly favored implementers of standardized technology over SEP holders. The 2020 supplement characterized key elements of the 2015 letter as “outdated,” and noted that the anti-SEP bias of that document could “harm competition and chill innovation.”   

Furthermore, DOJ issued a July 2019 Statement of Interest before the 9th U.S. Circuit Court of Appeals in FTC v. Qualcomm, explaining that unilateral and unconditional decisions not to license a patent are legal under the antitrust laws. In August 2020, the 9th Circuit reversed a district court decision and rejected the FTC’s monopolization suit against Qualcomm. The circuit court, among other findings, held that Qualcomm had no antitrust duty to license its SEPs to competitors.

Regrettably, the Biden administration appears to be close to rejecting the NMA and to reinstituting the anti-strong-patent, SEP-skeptical views of the Obama administration (see here and here). DOJ already has effectively repudiated the 2020 supplement to the 2015 IEEE letter and the 2019 SEP Policy Statement. Furthermore, written responses to Senate Judiciary Committee questions by assistant attorney general nominee Jonathan Kanter suggest support for renewed antitrust scrutiny of SEP licensing. These developments are highly problematic if one supports dynamic economic growth.

Conclusion

The NMA represents a pro-American, pro-growth innovation policy prescription. Its abandonment would reduce incentives to invest in patents and standard-setting activities, to the detriment of the U.S. economy. Such a development would be particularly unfortunate at a time when U.S. Supreme Court decisions have weakened American patent rights (see here); China is taking steps to strengthen Chinese patents and raise incentives to obtain Chinese patents (see here); and China is engaging in litigation to weaken key U.S. patents and undermine American technological leadership (see here).

The rejection of the NMA would also be in tension with the logic of the 5th U.S. Circuit Court of Appeals’ 2021 HTC v. Ericsson decision, which held that the non-discrimination portion of the FRAND commitment did not require Ericsson to give HTC the same licensing terms as those given to larger mobile-device manufacturers. Furthermore, recent important European court decisions are generally consistent with NMA principles (see here).

Given the importance of dynamic competition in an increasingly globalized world economy, Biden administration officials may wish to take a closer look at the economic arguments supporting the NMA before taking final action to condemn it. Among other things, the administration might take note that major U.S. digital platforms, which are the subject of multiple U.S. and foreign antitrust enforcement investigations, tend to firmly oppose strong patent rights. As one major innovation economist recently pointed out:

If policymakers and antitrust gurus are so concerned about stemming the rising power of Big Tech platforms, they should start by first stopping the relentless attack on IP. Without the IP system, only the big and powerful have the privilege to innovate[.]

Bad Blood at the FTC

Thom Lambert —  9 June 2021

John Carreyrou’s marvelous book Bad Blood chronicles the rise and fall of Theranos, the one-time Silicon Valley darling that was revealed to be a house of cards.[1] Theranos’s Svengali-like founder, Elizabeth Holmes, convinced scores of savvy business people (mainly older men) that her company was developing a machine that could detect all manner of maladies from a small quantity of a patient’s blood. Turns out it was a fraud. 

I had a couple of recurring thoughts as I read Bad Blood. First, I kept thinking about how Holmes’s fraud might impair future medical innovation. Something like Theranos’s machine would eventually be developed, I figured, but Holmes’s fraud would likely set things back by making investors leery of blood-based, multi-disease diagnostics.

I also had a thought about the causes of Theranos’s spectacular failure. A key problem, it seemed, was that the company tried to do too many things at once: develop diagnostic technologies, design an elegant machine (Holmes was obsessed with Steve Jobs and insisted that Theranos’s machine resemble a sleek Apple device), market the product, obtain regulatory approval, scale the operation by getting Theranos machines in retail chains like Safeway and Walgreens, and secure third-party payment from insurers.

A thought that didn’t occur to me while reading Bad Blood was that a multi-disease blood diagnostic system would soon be developed but would be delayed, or possibly even precluded from getting to market, by an antitrust enforcement action based on things the developers did to avoid the very problems that doomed Theranos. 

Sadly, that’s where we are with the Federal Trade Commission’s misguided challenge to the merger of Illumina and Grail.

Founded in 1998, San Diego-based Illumina is a leading provider of products used in genetic sequencing and genomic analysis. Illumina produces “next generation sequencing” (NGS) platforms that are used for a wide array of applications (genetic tests, etc.) developed by itself and other companies.

In 2015, Illumina founded Grail for the purpose of developing a blood test that could detect cancer in asymptomatic individuals—the “holy grail” of cancer diagnosis. Given the superior efficacy and lower cost of treatments for early- versus late-stage cancers, success by Grail could save millions of lives and billions of dollars.

Illumina created Grail as a separate entity in which it initially held a controlling interest (having provided the bulk of Grail’s $100 million Series A funding). Legally separating Grail in this fashion, rather than running it as an Illumina division, offered a number of benefits. It limited Illumina’s liability for Grail’s activities, enabling Grail to take greater risks. It mitigated the Theranos problem of managers’ being distracted by too many tasks: Grail managers could concentrate exclusively on developing a viable cancer-screening test, while Illumina’s management continued focusing on that company’s core business. It made it easier for Grail to attract talented managers, who would rather come in as corporate officers than as division heads. (Indeed, Grail landed Jeff Huber, a high-profile Google executive, as its initial CEO.) Structuring Grail as a majority-owned subsidiary also allowed Illumina to attract outside capital, with the prospect of raising more money in the future by selling new Grail stock to investors.

In 2017, Grail did exactly that, issuing new shares to investors in exchange for $1 billion. While this capital infusion enabled the company to move forward with its promising technologies, the creation of new shares meant that Illumina no longer held a controlling interest in the firm. Its ownership interest dipped below 20 percent and now stands at about 14.5 percent of Grail’s voting shares.  

Setting up Grail so as to facilitate outside capital formation and attract top managers who could focus single-mindedly on product development has paid off. Grail has now developed a blood test that, when processed on Illumina’s NGS platform, can accurately detect a number of cancers in asymptomatic individuals. Grail predicts that this “liquid biopsy,” called Galleri, will eventually be able to detect up to 50 cancers before physical symptoms manifest. Grail is also developing other blood-based cancer tests, including one that confirms cancer diagnoses in patients suspected to have cancer and another designed to detect cancer recurrence in patients who have undergone treatment.

Grail now faces a host of new challenges. In addition to continuing to develop its tests, Grail needs to:  

  • Engage in widespread testing of its cancer-detection products on up to 50 different cancers;
  • Process and present the information from its extensive testing in formats that will be acceptable to regulators;
  • Navigate the pre-market regulatory approval process in different countries across the globe;
  • Secure commitments from third-party payors (governments and private insurers) to provide coverage for its tests;
  • Develop means of manufacturing its products at scale;
  • Create and implement measures to ensure compliance with FDA’s Quality System Regulation (QSR), which governs virtually all aspects of medical device production (design, testing, production, process controls, quality assurance, labeling, packaging, handling, storage, distribution, installation, servicing, and shipping); and
  • Market its tests to hospitals and health-care professionals.

These steps are all required to secure widespread use of Grail’s tests. And, importantly, such widespread use will actually improve the quality of the tests. Grail’s tests analyze the DNA in a patient’s blood to look for methylation patterns that are known to be associated with cancer. In essence, the tests work by comparing the methylation patterns in a test subject’s DNA against a database of genomic data collected from large clinical studies. With enough comparison data, the tests can indicate not only the presence of cancer but also where in the body the cancer signal is coming from. And because Grail’s tests use machine learning to hone their algorithms in response to new data collected from test usage, the greater the use of Grail’s tests, the more accurate, sensitive, and comprehensive they become.     
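To make the comparison step concrete, here is a deliberately toy Python sketch of the general idea described above: matching a subject’s methylation profile against reference profiles estimated from clinical-study data. Every number, label, and the nearest-profile decision rule below are hypothetical stand-ins for Grail’s actual (far more sophisticated) machine-learning classifiers.

```python
import numpy as np

# Hypothetical reference data: a per-site average methylation level for each
# class, as if estimated from large clinical studies. Real MCED classifiers
# use vastly larger feature sets and trained machine-learning models.
reference_profiles = {
    "no cancer signal": np.array([0.10, 0.15, 0.12, 0.08, 0.11]),
    "lung signal":      np.array([0.55, 0.20, 0.60, 0.10, 0.45]),
    "liver signal":     np.array([0.12, 0.70, 0.15, 0.65, 0.10]),
}

def classify(subject: np.ndarray) -> str:
    """Return the class whose reference profile lies closest to the subject's
    profile; Euclidean distance stands in for a trained model's decision rule."""
    return min(reference_profiles,
               key=lambda label: float(np.linalg.norm(subject - reference_profiles[label])))

# A hypothetical subject whose blood-derived profile resembles the lung reference.
print(classify(np.array([0.50, 0.22, 0.58, 0.12, 0.40])))  # -> "lung signal"
```

The scale argument falls out of this structure: more test usage means better-estimated reference profiles (and, in a real system, better-trained models), which is why widespread deployment should make the tests more accurate.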

To assist with the various tasks needed to achieve speedy and widespread use of its tests, Grail decided to reunite with Illumina. In September 2020, the companies entered a merger agreement under which Illumina would acquire the 85.5 percent of Grail voting shares it does not already own for cash and stock worth $7.1 billion and additional contingent payments of $1.2 billion to Grail’s non-Illumina shareholders.

Recombining with Illumina will allow Grail—which has appropriately focused heretofore solely on product development—to accomplish the tasks now required to get its tests to market. Illumina has substantial laboratory capacity that Grail can access to complete the testing needed to refine its products and establish their effectiveness. As the leading global producer of NGS platforms, Illumina has unparalleled experience in navigating the regulatory process for NGS-related products, producing and marketing those products at scale, and maintaining compliance with complex regulations like FDA’s QSR. With nearly 3,000 international employees located in 26 countries, it has obtained regulatory authorizations for NGS-based tests in more than 50 jurisdictions around the world.  It also has long-standing relationships with third-party payors, health systems, and laboratory customers. Grail, by contrast, has never obtained FDA approval for any products, has never manufactured NGS-based tests at scale, has only a fledgling regulatory affairs team, and has far less extensive contacts with potential payors and customers. By remaining focused on its key objective (unlike Theranos), Grail has achieved product-development success. Recombining with Illumina will now enable it, expeditiously and efficiently, to deploy its products across the globe, generating user data that will help improve the products going forward.

In addition to these benefits, the combination of Illumina and Grail will eliminate a problem that occurs when producers of complementary products each operate in markets that are not fully competitive: double marginalization. When sellers of products that are used together each possess some market power due to a lack of competition, their uncoordinated pricing decisions may result in less surplus for each of them and for consumers of their products. Combining so that they can coordinate pricing will leave them and their customers better off.

Unlike a producer participating in a competitive market, a producer that faces little competition can enhance its profits by raising its price above its incremental cost.[2] But there are limits on its ability to do so. As the well-known monopoly pricing model shows, even a monopolist has a “profit-maximizing price” beyond which any incremental price increase would lose money.[3] Raising price above that level would hurt both consumers and the monopolist.
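For readers who want the algebra behind footnote 3, the standard textbook statement (nothing here is specific to this case) is that the monopolist chooses output $Q$ to maximize profit, and the “profit-maximizing price” is then read off the demand curve:

$$\max_{Q}\;\pi(Q) = P(Q)\,Q - C(Q) \quad\Longrightarrow\quad \underbrace{P(Q^{*}) + P'(Q^{*})\,Q^{*}}_{\text{marginal revenue}} = \underbrace{C'(Q^{*})}_{\text{marginal cost}}, \qquad P^{*} = P(Q^{*}).$$

Pushing price above $P^{*}$ shrinks the quantity sold by enough that the monopolist’s own profit falls, which is the limit on monopoly pricing described above.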

When consumers are deciding whether to purchase products that must be used together, they assess the final price of the overall bundle. This means that when two sellers of complementary products both have market power, there is an above-cost, profit-maximizing combined price for their products. If the complement sellers individually raise their prices so that the combined price exceeds that level, they will reduce their own aggregate welfare and that of their customers.

This unfortunate situation is likely to occur when market power-possessing complement producers are separate companies that cannot coordinate their pricing. In setting its individual price, each separate firm will attempt to capture as much surplus for itself as possible. This will cause the combined price to rise above the profit-maximizing level. If they could unite, the complement sellers would coordinate their prices so that the combined price was lower and the sellers’ aggregate profits higher.

Here, Grail and Illumina provide complementary products (cancer-detection tests and the NGS platforms on which they are processed), and each faces little competition. If they price separately, their aggregate prices are likely to exceed the profit-maximizing combined price for the cancer test and NGS platform access. If they combine into a single firm, that firm would maximize its profits by lowering prices so that the aggregate test/platform price is the profit-maximizing combined price.  This would obviously benefit consumers.
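A minimal numerical sketch, using made-up linear demand and costs rather than any figures from the case, shows why separately priced complements end up above the joint profit-maximizing price:

```python
# Linear demand for the test/platform bundle: Q = a - (p1 + p2).
# Firm 1 (tests) has unit cost c1; firm 2 (platform access) has unit cost c2.
# All numbers are hypothetical illustrations, not estimates from the case.
a, c1, c2 = 100.0, 20.0, 10.0

# Merged firm: choose the total price P to maximize (P - c1 - c2) * (a - P).
P_merged = (a + c1 + c2) / 2            # first-order condition
Q_merged = a - P_merged
profit_merged = (P_merged - c1 - c2) * Q_merged

# Separate firms: each sets its own price taking the other's as given.
# Firm i's first-order condition implies p_i - c_i = Q; solving both gives:
P_separate = (2 * a + c1 + c2) / 3      # combined price p1 + p2
Q_separate = a - P_separate
profit_separate = 2 * Q_separate ** 2   # each firm earns Q^2; sum the two

print(f"Merged:   price {P_merged:.1f}, output {Q_merged:.1f}, joint profit {profit_merged:.1f}")
print(f"Separate: price {P_separate:.1f}, output {Q_separate:.1f}, joint profit {profit_separate:.1f}")
# Merged:   price 65.0, output 35.0, joint profit 1225.0
# Separate: price 76.7, output 23.3, joint profit 1088.9
```

With these hypothetical numbers, the merged firm charges less, sells more, and earns higher joint profit than the two separate firms combined, which is exactly the double-marginalization logic described above.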

In light of the social benefits the Grail/Illumina merger offers—speeding up and lowering the cost of getting Grail’s test approved and deployed at scale, enabling improvement of the test with more extensive user data, eliminating double marginalization—one might expect policymakers to cheer the companies’ recombination. The FTC, however, is trying to block it.  In late March, the commission brought an action claiming that the merger would violate Section 7 of the Clayton Act by substantially reducing competition in a line of commerce.

The FTC’s theory is that recombining Illumina and Grail will impair competition in the market for “multi-cancer early detection” (MCED) tests. The commission asserts that the combined company would have both the opportunity and the motivation to injure rival producers of MCED tests.

The opportunity to do so would stem from the fact that MCED tests must be processed on NGS platforms, which are produced exclusively by Illumina. Illumina could charge Grail’s rivals or their customers higher prices for access to its NGS platforms (or perhaps deny access altogether) and could withhold the technical assistance rivals would need to secure both regulatory approval of their tests and coverage by third-party payors.

But why would Illumina take this tack, given that it would be giving up profits on transactions with producers and users of other MCED tests? The commission asserts that the losses a combined Illumina/Grail would suffer in the NGS platform market would be more than offset by gains stemming from reduced competition in the MCED test market. Thus, the combined company would have a motive, as well as an opportunity, to cause anticompetitive harm.

There are multiple problems with the FTC’s theory. As an initial matter, the market the commission claims will be impaired doesn’t exist. There is no MCED test market for the simple reason that there are no commercializable MCED tests. If allowed to proceed, the Illumina/Grail merger may create such a market by facilitating the approval and deployment of the first MCED test. At present, however, there is no such market, and the chances of one ever emerging will be diminished if the FTC succeeds in blocking the recombination of Illumina and Grail.

Because there is no existing market for MCED tests, the FTC’s claim that a combined Illumina/Grail would have a motivation to injure MCED rivals—potential consumers of Illumina’s NGS platforms—is rank speculation. The commission has no idea what profits Illumina would earn from NGS platform sales related to MCED tests, what profits Grail would earn on its own MCED tests, and how the total profits of the combined company would be affected by impairing opportunities for rival MCED test producers.

In the only relevant market that does exist—the cancer-detection market—there can be no question about the competitive effect of an Illumina/Grail merger: It would enhance competition by speeding the creation of a far superior offering that promises to save lives and substantially reduce health-care costs. 

There is yet another problem with the FTC’s theory of anticompetitive harm. The commission’s concern that a recombined Illumina/Grail would foreclose Grail’s rivals from essential NGS platforms and needed technical assistance is obviated by Illumina’s commitments. Specifically, Illumina has irrevocably offered current and prospective oncology customers 12-year contract terms that would guarantee them the same access to Illumina’s sequencing products that they now enjoy, with no price increase. Indeed, the offered terms obligate Illumina not only to refrain from raising prices but also to lower them by at least 43% by 2025 and to provide regulatory and technical assistance requested by Grail’s potential rivals. Illumina’s continued compliance with its firm offer will be subject to regular audits by an independent auditor.

In the end, then, the FTC’s challenge to the Illumina/Grail merger is unjustified. The initial separation of Grail from Illumina encouraged the managerial focus and capital accumulation needed for successful test development. Recombining the two firms will now expedite and lower the costs of the regulatory approval and commercialization processes, permitting Grail’s tests to be widely used, which will enhance their quality. Bringing Grail’s tests and Illumina’s NGS platforms within a single company will also benefit consumers by eliminating double marginalization. Any foreclosure concerns are entirely speculative and are obviated by Illumina’s contractual commitments.

In light of all these considerations, one wonders why the FTC challenged this merger (and on a 4-0 vote) in the first place. Perhaps it was the populist forces from left and right that are pressuring the commission to generally be more aggressive in policing mergers. Some members of the commission may also worry, legitimately, that if they don’t act aggressively on a vertical merger, Congress will amend the antitrust laws in a deleterious fashion. But the commission has picked a poor target. This particular merger promises tremendous benefit and threatens little harm. The FTC should drop its challenge and encourage its European counterparts to do the same. 


[1] If you don’t have time for Carreyrou’s book (and you should make time if you can), HBO’s Theranos documentary is pretty solid.

[2] This ability is market power.  In a perfectly competitive market, any firm that charges an above-cost price will lose sales to rivals, who will vie for business by lowering their prices down to the level of their cost.

[3] Under the model, this is the price that emerges at the output level where the producer’s marginal revenue equals its marginal cost.

With the COVID-19 vaccine made by Moderna joining the one from Pfizer and BioNTech in gaining emergency-use authorization from the U.S. Food and Drug Administration, it should be time to celebrate the U.S. system of pharmaceutical development. The system’s incentives—notably granting patent rights to firms that invest in new and novel discoveries—have worked to an astonishing degree, producing not just one but as many as three or four effective approaches to ending a pandemic caused by a virus that, just a year ago, was completely unknown.

Alas, it appears not all observers agree. Now that we have the vaccines, some advocate suspending or limiting patent rights—for example, by imposing a compulsory licensing scheme—with the argument that this is the only way for the vaccines to be produced in mass quantities worldwide. Some critics even assert that abolishing or diminishing property rights in pharmaceuticals is needed to end the pandemic.

In truth, we can effectively and efficiently distribute the vaccines while still maintaining the integrity of our patent system. 

What the false framing ignores are the important commercialization and distribution functions that patents provide, as well as the deep, long-term incentives the patent system provides to create medical innovations and develop a robust pharmaceutical supply chain. Unless we are sure this is the last pandemic we will ever face, repealing intellectual property rights now would be a catastrophic mistake.

The supply chains necessary to adequately scale drug production are incredibly complex, and do not appear overnight. The coordination and technical expertise needed to support worldwide distribution of medicines depends on an ongoing pipeline of a wide variety of pharmaceuticals to keep the entire operation viable. Public-spirited officials may in some cases be able to piece together facilities sufficient to produce and distribute a single medicine in the short term, but over the long term, global health depends on profit motives to guarantee the commercialization pipeline remains healthy. 

But the real challenge is in maintaining proper incentives to develop new drugs. It has long been understood that information goods, such as the inventions intellectual property law protects, will be undersupplied without sufficient legal protections. Innovators and those that commercialize innovations—like researchers and pharmaceutical companies—have less incentive to discover and market new medicines as the likelihood that they will be able to realize a return for their efforts diminishes. Without those returns, it’s far less certain the COVID vaccines would have been produced so quickly, or at all. The same holds for the vaccines we will need for the next crisis, or for badly needed treatments for other deadly diseases.

Patents are not the only way to structure incentives, as can be seen with the current vaccines. Pharmaceutical companies also took financial incentives from various governments, in the form of direct payments or purchase guarantees. But this strengthens, rather than diminishes, the larger argument. There need to be adequate returns for those who engage in large, risky undertakings like creating a new drug.

Some critics would prefer to limit pharmaceutical companies’ returns solely to those early government investments, but there are problems with this approach. It is difficult for governments to know beforehand what level of profit is needed to properly incentivize firms to engage in producing these innovations.  To the extent that direct government investment is useful, it often will be as an additional inducement that encourages new entry by multiple firms who might each pursue different technologies. 

Thus, in the case of coronavirus vaccines, government subsidies may have enticed more competitors to enter more quickly, or not to drop out as quickly, in hopes that they would still realize a profit, notwithstanding the risks. Where there might have been only one or two vaccines produced in the United States, it appears likely we will see as many as four.

But there will always be necessary trade-offs. Governments cannot know how to set proper incentives to encourage development of every possible medicine for every possible condition by every possible producer.  Not only do we not know which diseases and which firms to prioritize, but we have no idea how to determine which treatment approaches to encourage. 

The COVID-19 vaccines provide a clear illustration of this problem. We have seen development of both traditional vaccines and experimental mRNA treatments to combat the virus. Thankfully, both have shown positive results, but there was no way to know that in March. In this perennial state of ignorance, markets generally have provided the best—though still imperfect—way to make decisions.

The patent system’s critics sometimes claim that prizes would offer a better way to encourage discovery. But if we relied solely on government-directed prizes, we might never have had the needed research into the technology that underlies mRNA. As one recent report put it, “before messenger RNA was a multibillion-dollar idea, it was a scientific backwater.” Simply put, without patent rights as the backstop to purely academic or government-led innovation and commercialization, it is far less likely that we would have seen successful COVID vaccines developed as quickly.

It is difficult for governments to be prepared for the unknown. Abolishing or diminishing pharmaceutical patents would leave us even less prepared for the next medical crisis. That would only add to the lasting damage that the COVID-19 pandemic has already wrought on the world.

In the hands of a wise philosopher-king, the Sherman Act’s hard-to-define prohibitions of “restraints of trade” and “monopolization” are tools that will operate inevitably to advance the public interest in competitive markets. In the hands of real-world litigators, regulators and judges, those same words can operate to advance competitors’ private interests in securing commercial advantages through litigation that could not be secured through competition in the marketplace. If successful, this strategy may yield outcomes that run counter to antitrust law’s very purpose.

The antitrust lawsuit filed by Epic Games against Apple in August 2020, and Apple’s antitrust lawsuit against Qualcomm (settled in April 2019), suggest that antitrust law is heading in this unfortunate direction.

From rent-minimization to rent-maximization

The first step in converting antitrust law from an instrument to minimize rents to an instrument to maximize rents lies in expanding the statute’s field of application on the apparently uncontroversial grounds of advancing the public interest in “vigorous” enforcement. In surprisingly short order, this largely unbounded vision of antitrust’s proper scope has become the dominant fashion in policy discussions, at least as expressed by some legislators, regulators, and commentators.

According to the new conventional wisdom, antitrust law has pursued over the past decades an overly narrow path, consequently overlooking and exacerbating a panoply of social ills that extend well beyond the mission to “merely” protect the operation of the market pricing mechanism. This line of argument is typically coupled with the assertion that courts, regulators and scholars have been led down this path by incumbents that welcome the relaxed scrutiny of a purportedly deferential antitrust policy.

This argument, and the related theory of regulatory capture, has things roughly backwards.

Placing antitrust law at the service of a largely undefined range of social purposes set by judicial and regulatory fiat threatens to render antitrust a tool that can be easily deployed to favor the private interests of competitors rather than the public interest in competition. Without the intellectual discipline imposed by the consumer welfare standard (and, outside of per se illegal restraints, operationalized through the evidentiary requirement of competitive harm), the rhetoric of antitrust provides excellent cover for efforts to re-engineer the rules of the game in lieu of seeking to win the game as it has been played.

Epic Games v. Apple

A nascent symptom of this expansive form of antitrust is provided by the much-publicized lawsuit brought by Epic Games, the maker of the wildly popular video game Fortnite, against Apple, the operator of the even more wildly popular App Store. On August 13, 2020, Epic added a “direct” payment processing services option to its Fortnite game, in violation of the developer terms of use that govern the App Store. In response, Apple exercised its contractual right to remove Fortnite from the App Store, triggering Epic’s antitrust suit. The same sequence has ensued between Epic Games and Google in connection with the Google Play Store. Both litigations are best understood as breach-of-contract disputes cloaked in the guise of antitrust causes of action.

In suggesting that a jury trial would be appropriate in Epic Games’ suit against Apple, the district court judge reportedly stated that the case is “on the frontier of antitrust law” and that “[i]t is important enough to understand what real people think.” That statement seems to suggest that this is a close case under antitrust law. I respectfully disagree. Based on currently available information and applicable law, Epic’s argument suffers from two serious vulnerabilities that would seem to be difficult for the plaintiff to overcome.

A contestably narrow market definition

Epic states three related claims: (1) Apple has a monopoly in the relevant market, defined as the App Store, (2) Apple maintains its monopoly by contractually precluding developers from distributing iOS-compatible versions of their apps outside the App Store, and (3) Apple maintains a related monopoly in the payment processing services market for the App Store by contractually requiring developers to use Apple’s processing service.

This market definition, and the associated chain of reasoning, is subject to significant doubt, both as a legal and factual matter.

Epic’s narrow definition of the relevant market as the App Store (rather than app distribution platforms generally) conveniently results in a 100% market share for Apple. Inconveniently, federal courts have generally been reluctant to adopt single-brand market definitions. While the Supreme Court recognized a single-brand market in Eastman Kodak Co. v. Image Technical Services (1992), the case is widely considered to be an outlier in light of subsequent case law. As a federal district court observed in Spahr v. Leegin Creative Leather Products (E.D. Tenn. 2008): “Courts have consistently refused to consider one brand to be a relevant market of its own when the brand competes with other potential substitutes.”

The App Store would seem to fall into this typical category. The customer base of existing and new Fortnite users can still access the game through multiple platforms and on multiple devices other than the iPhone, including a PC, laptop, game console, and non-Apple mobile devices. (While Google has also removed Fortnite from the Google Play store due to the added direct payment feature, users can, at some inconvenience, access the game manually on Android phones.)

Given these alternative distribution channels, it is at a minimum unclear whether Epic is foreclosed from reaching a substantial portion of its consumer base, which may already access the game on alternative platforms or could potentially do so at moderate incremental transaction costs. In the language of platform economics, it appears to be technologically and economically feasible for the target consumer base to “multi-home.” If multi-homing and related switching costs are low, even a 100% share of the App Store submarket would not translate into market power in the broader and potentially more economically relevant market for app distribution generally.

An implausible theory of platform lock-in

Even if it were conceded that the App Store is the relevant market, Epic’s claim is not especially persuasive, both as an economic and a legal matter. That is because there is no evidence that Apple is exploiting any such hypothetically attributed market power to increase the rents extracted from developers and indirectly impose deadweight losses on consumers.

In the classic scenario of platform lock-in, a three-step sequence is observed: (1) a new firm acquires a high market share in a race for platform dominance, (2) the platform winner is protected by network effects and switching costs, and (3) the entrenched platform “exploits” consumers by inflating prices (or imposing other adverse terms) to capture monopoly rents. This economic model is reflected in the case law on lock-in claims, which typically requires that the plaintiff identify an adverse change by the defendant in pricing or other terms after users were allegedly locked-in.

The history of the App Store does not conform to this model. Apple has always assessed a 30% fee, and the same is true of every other leading distributor of games for the mobile and PC market, including the Google Play Store, the App Store’s rival in the mobile market, and Steam, the dominant distributor of video games in the PC market. This long-standing market practice suggests that the 30% fee most likely reflects an efficiency-driven business rationale, rather than an effort to entrench a monopoly position that Apple did not enjoy when the practice was first adopted. That is: even if Apple is deemed to be a “monopolist” for Section 2 purposes, it is not taking any “illegitimate” actions that could constitute monopolization or attempted monopolization.

The logic of the 70/30 split

Uncovering the business logic behind the 70/30 split in the app distribution market is not too difficult.

The 30% fee appears to be a low transaction-cost practice that enables the distributor to fund a variety of services, including app development tools, marketing support, and security and privacy protections, all of which are supplied at no separately priced fee and therefore do not require service-by-service negotiation and renegotiation. The same rationale credibly applies to the integrated payment processing services that Apple supplies for purposes of in-app purchases.

These services deliver significant value and would otherwise be difficult to replicate cost-effectively, protect the App Store’s valuable stock of brand capital (which yields positive spillovers for app developers on the site), and lower the costs of joining and participating in the App Store. Additionally, the 30% fee cross-subsidizes the delivery of these services to the approximately 80% of apps on the App Store that are ad-based and for which no fee is assessed, which in turn lowers entry costs and expands the number and variety of product options for platform users. These would all seem to be attractive outcomes from a competition policy perspective.

Epic’s objection

Epic would object to this line of argument by observing that it only charges a 12% fee to distribute other developers’ games on its own Epic Games Store.

Yet Epic’s lower fee is reportedly conditioned, at least in some cases, on the developer offering the game exclusively on the Epic Games Store for a certain period of time. Moreover, the services provided on the Epic Games Store may not be comparable to the extensive suite of services provided on the App Store and other leading distributors that follow the 30% standard. Additionally, the user base a developer can expect to access through the Epic Games Store is in all likelihood substantially smaller than the audience that can be reached through the App Store and other leading app and game distributors, which is then reflected in the higher fees charged by those platforms.

Hence, even the large fee differential may simply reflect the more extensive services and larger audiences available on the App Store, Google Play Store, and other leading platforms, as compared to the Epic Games Store, rather than the unilateral extraction of market rents at developers’ and consumers’ expense.

Antitrust is about efficiency, not distribution

Epic says the standard 70/30 split between game publishers and app distributors is “excessive” while others argue that it is historically outdated.

Neither of these is a credible antitrust argument. Renegotiating the division of economic surplus between game suppliers and distributors is not the concern of antitrust law, which (as properly defined) should only take an interest if either (i) Apple is colluding on the 30% fee with other app distributors, or (ii) Apple is taking steps that preclude entry into the apps distribution market and lack any legitimate business justification. No one claims there is evidence of the former possibility and, without further evidence, the latter possibility is not especially compelling given the uniform use of the 70/30 split across the industry (which, as noted, can be derived from a related set of credible efficiency justifications). It is even less compelling in the face of evidence that output is rapidly accelerating, not declining, in the gaming app market: in the first half of 2020, approximately 24,500 new games were added to the App Store.

If this conclusion is right, then Epic’s lawsuit against Apple does not seem to have much to do with the public interest in preserving market competition.

But it clearly has much to do with the business interest of an input supplier in minimizing its distribution costs and maximizing its profit margin. That category includes not only Epic Games but Tencent, the world’s largest video game publisher and the holder of a 40% equity stake in Epic. Tencent also owns Riot Games (the publisher of “League of Legends”), an 84% stake in Supercell (the publisher of “Clash of Clans”), and a 5% stake in Activision Blizzard (the publisher of “Call of Duty”). It is unclear how an antitrust claim that, if successful, would simply redistribute economic value from leading game distributors to leading game developers has any necessary relevance to antitrust’s objective to promote consumer welfare.

The prequel: Apple v. Qualcomm

Ironically (and, as Dirk Auer has similarly observed), there is a symmetry between Epic’s claims against Apple and the claims previously pursued by Apple (and, concurrently, the Federal Trade Commission) against Qualcomm.

In that litigation, Apple contested the terms of the licensing arrangements under which Qualcomm made available its wireless communications patents to Apple (more precisely, Foxconn, Apple’s contract manufacturer), arguing that the terms were incompatible with Qualcomm’s commitment to “fair, reasonable and nondiscriminatory” (“FRAND”) licensing of its “standard-essential” patents (“SEPs”). Like Epic v. Apple, Apple v. Qualcomm was fundamentally a contract dispute, with the difference that Apple was in the position of a third-party beneficiary of the commitment that Qualcomm had made to the governing standard-setting organization. Like Epic, Apple sought to recharacterize this contractual dispute as an antitrust question, arguing that Qualcomm’s licensing practices constituted anticompetitive actions to “monopolize” the market for smartphone modem chipsets.

Theory meets evidence

The rhetoric used by Epic in its complaint echoes the rhetoric used by Apple in its briefs and other filings in the Qualcomm litigation. Apple (like the FTC) had argued that Qualcomm imposed a “tax” on competitors by requiring that any purchaser of Qualcomm’s chipsets concurrently enter into a license for Qualcomm’s SEP portfolio relating to 3G and 4G/LTE-enabled mobile communications devices.

Yet the history and performance of the mobile communications market simply did not track Apple’s (and the FTC’s continuing) characterization of Qualcomm’s licensing fee as a socially costly drag on market growth and, by implication, consumer welfare.

If this assertion had merit, then the decades-old wireless market should have exhibited a dismal history of increasing prices, slow user adoption, and lagging innovation. In actuality, the wireless market since its inception has grown continuously, characterized by declining quality-adjusted prices, expanding output, relentless innovation, and rapid adoption across a broad range of income segments.

Given this compelling real-world evidence, the only remaining line of argument (still being pursued by the FTC) that could justify antitrust intervention is a theoretical conjecture that the wireless market might have grown even faster under some alternative IP licensing arrangement. This assertion rests precariously on the speculative assumption that any such arrangement would have induced the same or higher level of aggregate investment in innovation and commercialization activities. That fragile chain of “what if” arguments hardly seems a sound basis on which to rewrite the legal infrastructure behind the billions of dollars of licensing transactions that support the economically thriving smartphone market and the even larger ecosystem that has grown around it.

Antitrust litigation as business strategy

Given the absence of compelling evidence of competitive harm from Qualcomm’s allegedly anticompetitive licensing practices, Apple’s litigation would seem to be best interpreted as an economically rational attempt by a downstream producer to renegotiate a downward adjustment in the fees paid to an upstream supplier of critical technology inputs. (In fact, those are precisely the terms on which Qualcomm in 2015 settled the antitrust action brought against it by China’s competition regulator, to the obvious benefit of local device producers.) The Epic Games litigation is a mirror image fact pattern in which an upstream supplier of content inputs seeks to deploy antitrust law strategically for the purposes of minimizing the fees it pays to a leading downstream distributor.

Both litigations suffer from the same flaw. Private interests concerning the division of an existing economic value stream—a business question that is a matter of indifference from an efficiency perspective—are erroneously (or, at least, reflexively) conflated with the public interest in preserving the free play of competitive forces that maximizes the size of the economic value stream.

Conclusion: Remaking the case for “narrow” antitrust

The Epic v. Apple and Apple v. Qualcomm disputes illustrate the unproductive rent-seeking outcomes to which antitrust law will inevitably be led if, as is being widely advocated, it is decoupled from its well-established foundation in promoting consumer welfare—and not competitor welfare.

Some proponents of a more expansive approach to antitrust enforcement are convinced that expanding the law’s scope of application will improve market efficiency by providing greater latitude for expert regulators and courts to reengineer market structures to the public benefit. Yet any substitution of top-down expert wisdom for the bottom-up trial-and-error process of market competition can easily yield “false positives” in which courts and regulators take actions that counterproductively intervene in markets that are already operating under reasonably competitive conditions. Additionally, an overly expansive approach toward the scope of antitrust law will induce private firms to shift resources toward securing advantages over competitors through lobbying and litigation, rather than seeking to win the race to deliver lower-cost and higher-quality products and services. Neither outcome promotes the public’s interest in a competitive marketplace.

This blog post summarizes the findings of a paper published in Volume 21 of the Federalist Society Review. The paper was co-authored by Dirk Auer, Geoffrey A. Manne, Julian Morris, & Kristian Stout. It uses the analytical framework of law and economics to discuss recent patent law reforms in the US, and their negative ramifications for inventors. The full paper can be found on the Federalist Society’s website, here.

Property rights are a pillar of the free market. As Harold Demsetz famously argued, they spur specialization, investment and competition throughout the economy. And the same holds true for intellectual property rights (IPRs). 

However, despite the many social benefits that have been attributed to intellectual property protection, the past decades have witnessed the birth and growth of a powerful intellectual movement seeking to reduce the legal protections offered to inventors by patent law.

These critics argue that excessive patent protection is holding back western economies. For instance, they posit that the owners of standard essential patents (“SEPs”) are charging their commercial partners too much for the rights to use their patents (this is referred to as patent holdup and royalty stacking). Furthermore, they argue that so-called patent trolls (“patent-assertion entities” or “PAEs”) are deterring innovation by small startups by employing “extortionate” litigation tactics.

Unfortunately, this movement has led to a deterioration of appropriate remedies in patent disputes.

The many benefits of patent protection

While patents likely play an important role in providing inventors with incentives to innovate, their role in enabling the commercialization of ideas is probably even more important.

By creating a system of clearly defined property rights, patents empower market players to coordinate their efforts in order to collectively produce innovations. In other words, patents greatly reduce the cost of concluding mutually-advantageous deals, whereby firms specialize in various aspects of the innovation process. Critically, these deals occur in the shadow of patent litigation and injunctive relief. The threat of these ensures that all parties have an incentive to take a seat at the negotiating table.

This is arguably nowhere more apparent than in the standardization space. Many of the most high-profile modern technologies are the fruit of large-scale collaboration coordinated through standards developing organizations (SDOs). These include technologies such as Wi-Fi, 3G, 4G, 5G, Blu-Ray, USB-C, and Thunderbolt 3. The coordination necessary to produce technologies of this sort is hard to imagine without some form of enforceable property right in the resulting inventions.

The shift away from injunctive relief

Of the many recent reforms to patent law, the most consequential has arguably been the significant limitation of patent holders’ ability to obtain permanent injunctions. This is particularly true in the case of so-called standard essential patents (SEPs).

However, intellectual property laws are meaningless without the ability to enforce them and remedy breaches. And injunctions are almost certainly the most powerful, and important, of these remedies.

The significance of injunctions is perhaps best understood by highlighting the weakness of damages awards when applied to intangible assets. Indeed, it is often difficult to establish the appropriate size of an award of damages when intangible property—such as invention and innovation in the case of patents—is the core property being protected. This is because these assets are almost always highly idiosyncratic. By blocking all infringing uses of an invention, injunctions thus prevent courts from having to act as price regulators. In doing so, they also ensure that innovators are adequately rewarded for their technological contributions.

Unfortunately, the Supreme Court’s 2006 ruling in eBay Inc. v. MercExchange, LLC significantly narrowed the circumstances under which patent holders could obtain permanent injunctions. This predictably led lower courts to grant fewer permanent injunctions in patent litigation suits. 

But while critics of injunctions had hoped that reducing their availability would spur innovation, empirical evidence suggests that this has not been the case so far. 

Other reforms

And injunctive relief is not the only area of patent law that has witnessed a gradual shift against the interests of patent holders. Much of the same could be said about damages awards, revised fee shifting standards, and the introduction of Inter Partes Review.

Critically, the intellectual movement to soften patent protection has also had ramifications outside of the judicial sphere. It is notably behind several legislative reforms, particularly the America Invents Act. Moreover, it has led numerous private parties – most notably SDOs – to adopt stances that have advanced the interests of technology implementers at the expense of inventors.

For instance, one of the most noteworthy changes has been IEEE’s sweeping reform of its IP policy in 2015. The new rules notably prevented SEP holders from seeking permanent injunctions against so-called “willing licensees”. They also mandated that royalties pertaining to SEPs should be based upon the value of the smallest saleable component that practices the patented technology. Both of these measures ultimately sought to tilt the bargaining range in license negotiations in favor of implementers.

Concluding remarks

The developments discussed in this article might seem like small details, but they are part of a wider trend whereby U.S. patent law is becoming increasingly inhospitable for inventors. This is particularly true when it comes to the enforcement of SEPs by means of injunction.

While the short-term effect of these various reforms has yet to be quantified, there is a real risk that, by decreasing the value of patents and increasing transaction costs, these changes may ultimately limit the diffusion of innovations and harm incentives to invent.

This likely explains why some legislators have recently put forward bills that seek to reinforce the U.S. patent system (here and here).

Despite these initiatives, the fact remains that there is today a strong undercurrent pushing for weaker or less certain patent protection. If left unchecked, this threatens to undermine the utility of patents in facilitating the efficient allocation of resources for innovation and its commercialization. Policymakers should thus pay careful attention to the changes this trend may bring about and move swiftly to recalibrate the patent system where needed in order to better protect the property rights of inventors and yield more innovation overall.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Daniel Takash (regulatory policy fellow at the Niskanen Center, where he manages Niskanen’s Captured Economy Project, https://capturedeconomy.com; you can follow him @danieltakash or @capturedecon).]

The pharmaceutical industry should be one of the most well-regarded industries in America. It helps bring drugs to market that improve, and often save, people’s lives. Yet last year a Gallup poll found that of 25 major industries, the pharmaceutical industry was the most unpopular, trailing behind fossil fuels, lawyers, and even the federal government. The opioid crisis dominated the headlines for the past few years, but the high price of drugs is a top-of-mind issue that generates significant animosity toward the pharmaceutical industry. The effects of high drug prices are felt not just at every trip to the pharmacy, but also by those who are priced out of life-saving treatments. Many Americans simply can’t afford what their doctors prescribe. The pharmaceutical industry helps save lives, but it has also been credibly accused of anticompetitive behavior, not just by generic manufacturers but even by other brand manufacturers.

These extraordinary times are an opportunity to right the ship. AbbVie, roundly criticized for building a patent thicket around Humira, has donated its patent rights to a promising COVID-19 treatment. This is to be celebrated, yet pharma’s bad reputation is defined by its worst behaviors and the frequent apologetics for overusing the patent system. Hopefully corporate social responsibility will prevail, and such abuses will cease in the future.

The most effective long-term treatment for COVID-19 will be a vaccine. We also need drugs to treat those afflicted with COVID-19 to improve recovery and lower mortality rates for those that get sick before a vaccine is developed and widely available. This requires rapid drug development through effective public-private partnerships to bring these treatments to market.

Without a doubt, these solutions will come from the pharmaceutical industry. Increased funding for the National Institutes of Health, nonprofit research institutions, and private pharmaceutical researchers is likely needed to help accelerate the development of these treatments. But we must be careful to ensure that whatever necessary upfront public support is given to these entities results in a fair trade-off for Americans. The U.S. taxpayer is one of the largest investors in early- to mid-stage drug research, and we need to make sure that we are a good investor.

Basic research into the costs of drug development, especially when taxpayer subsidies are involved, is a necessary start. This is a feature of the We PAID Act, introduced by Senators Rick Scott (R-FL) and Chris Van Hollen (D-MD), which requires the Department of Health and Human Services to enter into a contract with the National Academy of Medicine to determine the reasonable price of drugs developed with taxpayer support. This reasonable price would include a suitable reward to the private companies that did the important work of finishing drug development and gaining FDA approval. This is important, as setting a price too low would reduce investment in indispensable research and development. But this must be balanced against the risk of using patents to charge prices above and beyond those necessary to finance research, development, and commercialization.

A little sunshine can go a long way. We should trust that pharmaceutical companies will develop a vaccine and treatments for coronavirus, but we must also verify through public scrutiny that these are affordable and accessible. Take the drug manufacturer Gilead Sciences’ about-face on its application for orphan drug status for the possible COVID-19 treatment remdesivir. Remdesivir, developed in part with public funds and already covered by three Gilead patents, technically satisfied the definition of “orphan drug,” as COVID-19 (at the time of the application) afflicted fewer than 200,000 patients. In a pandemic that could infect tens of millions of Americans, this designation is obviously absurd, and public outcry led Gilead to ask the FDA to rescind the application. Gilead claimed it sought the designation to speed up FDA review, and that might be true. Regardless, public attention meant that the FDA would give remdesivir expedited review without Gilead needing a designation that looks unfair to the American people.

The success of this isolated effort is absolutely worth celebrating. But we need more research to better comprehend the pharmaceutical industry’s needs, and this is just what the study provisions of We PAID would provide.

There is indeed some existing research on this front. For example, the Pharmaceutical Researchers and Manufacturers of America (PhRMA) estimates that it costs an average of $2.6 billion to bring a new drug to market, while research published in the Journal of the American Medical Association puts the average closer to $1.3 billion, with a median development cost of $985 million.

But a thorough analysis provided under We PAID is the best way for us to fully understand just how much support the pharmaceutical industry needs, and just how successful it has been thus far. The NIH, one of the major sources of publicly funded research, invests about $41.7 billion annually in medical research. We need to better understand how these efforts link up, and how the torch is passed from public to private efforts.

Patents are essential to the functioning of the pharmaceutical industry: they incentivize drug development by granting temporary periods of exclusivity. But it is equally essential, in light of the considerable investment already made by taxpayers in drug research and development, to make sure we understand the effects of these incentives and calibrate them to balance the interests of patients and pharmaceutical companies. Most drugs require research funding from both public and private sources as well as patent protection. And the U.S. is one of the biggest investors in drug research worldwide (even compared to drug companies), yet Americans pay the highest prices in the world. Are these prices justified, and can we improve patent policy to bring these costs down without harming innovation?

Beyond a thorough analysis of drug pricing, what makes We PAID one of the most promising solutions to the problem of excessively high drug prices is its set of accountability mechanisms. The bill, if made law, would establish a Drug Access and Affordability Committee. The Committee would use the methodology from the joint HHS and NAM study to determine a reasonable price for affected drugs (around 20 percent of drugs currently on the market, if the bill were law today). Any company that priced a drug granted patent exclusivity above the reasonable price would lose that exclusivity.

This may seem like a price control at first blush, but it isn’t, for two reasons. First, it only applies to drugs developed with taxpayer dollars, which any COVID-19 treatments or cures almost certainly would be, considering the $785 million spent by the NIH since 2002 researching coronaviruses. It’s an accountability mechanism that would ensure the government is getting its money’s worth. This tool is akin to ensuring that a government contractor is not charging more than would be reasonable, lest it lose its contract.

Second, it is even less stringent than pulling a contract with a private firm overcharging the government for the services provided. Why? Losing a patent does not mean losing the ability to make a drug, or any other patented invention for that matter. This basic fact is often lost in the patent debate, but it cannot be stressed enough.

If patents functioned as licenses to operate, every patent expiration would mean another product going off the market. In reality, expiration simply means that any other firm can compete and use the previously patented design. Even if a firm violated the price regulations included in the bill and lost its patent, it could continue manufacturing the drug. And so could any other firm, bringing down prices for all consumers by opening the market to competition.

The We PAID Act could be a dramatic change for the drug industry, and because of that many in Congress may want to debate the particulars of the bill first. This is fine, so long as this promising legislation isn’t watered down beyond recognition. But any objections to the Drug Access and Affordability Committee and its reasonable pricing regulations are no excuse for failing, at a bare minimum, to pass the study included in the bill as part of future coronavirus packages, if not sooner. It is an inexpensive way to gather good information in a single, reputable source that would allow us to shape good policy.

Good information is needed for good policy. When the government lays the groundwork for future innovations by financing research and development, it can be compared to a venture capitalist providing the financing necessary for an innovative product or service. But just like in the private sector, the government should know what it’s getting for its (read: taxpayers’) money and make recipients of such funding accountable to investors.

The COVID-19 outbreak will be the most pressing issue for the foreseeable future, but determining how pharmaceuticals developed with public research are priced is necessary in good times and bad. The final prices for these important drugs might be fair, but the public will never know without a trusted source examining this information. Trust, but verify. The pharmaceutical industry’s efforts in fighting the COVID-19 pandemic might be the first step to improving Americans’ relationship with the industry. But we need good information to make that happen. Americans need to know when they are being treated fairly, and that policymakers are able to protect them when they are treated unfairly. The government needs to become a better-informed investor, and that won’t happen without something like the We PAID Act.

Last Thursday and Friday, Truth on the Market hosted a symposium analyzing the Draft Vertical Merger Guidelines from the FTC and DOJ. The relatively short draft guidelines provided ample opportunity for discussion, as evidenced by the stellar roster of authors thoughtfully weighing in on the topic. 

We want to thank all of the participants for their excellent contributions. All of the posts are collected here, and below I briefly summarize each in turn. 

Symposium Day 1

Herbert Hovenkamp on the important advance of economic analysis in the draft guidelines

Hovenkamp views the draft guidelines as a largely positive development for the state of antitrust enforcement. Beginning with an observation — as was common among participants in the symposium — that the existing guidelines are outdated, Hovenkamp believes that the inclusion of 20% thresholds for market share and related product use represents a reasonable middle position between the extremes of zealous antitrust enforcement and non-enforcement.

Hovenkamp also observes that, despite their relative brevity, the draft guidelines contain much by way of reference to the 2010 Horizontal Merger Guidelines. Ultimately Hovenkamp believes that, despite the relative lack of detail in some respects, the draft guidelines are an important step in elaborating the “economic approaches that the agencies take toward merger analysis, one in which direct estimates play a larger role, with a comparatively reduced role for more traditional approaches depending on market definition and market share.”

Finally, he notes that, while the draft guidelines leave the current burden of proof in the hands of challengers, the presumption that vertical mergers are “invariably benign, particularly in highly concentrated markets or where the products in question are differentiated” has been weakened.

Full post.

Jonathan E. Nuechterlein on the lack of guidance in the draft vertical merger guidelines

Nuechterlein finds it hard to square elements of the draft vertical merger guidelines with both the past forty years of US enforcement policy and the empirical work confirming the largely beneficial nature of vertical mergers. Relatedly, the draft guidelines lack genuine limiting principles when describing speculative theories of harm. Without better specificity, the draft guidelines will do little as a source of practical guidance.

One criticism from Nuechterlein is that the draft guidelines blur the distinction between “harm to competition” and “harm to competitors” by, for example, focusing on changes to rivals’ access to inputs and lost sales.

Nuechterlein also takes issue with what he characterizes as the “arbitrarily low” 20 percent thresholds. In particular, he argues that linking the two separate 20 percent thresholds (relevant market and related product) leaves too small a set of situations in which firms might qualify for the safe harbor. By linking the two thresholds, he believes, the provision does more to facilitate the agencies’ discretion than to provide clarity to firms and consumers.
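
A toy simulation makes the point concrete. Assuming, purely for illustration, that the two shares are independent and uniformly distributed (the draft guidelines specify no such distribution), requiring both tests to be met shrinks the safe harbor dramatically:

```python
# Hypothetical sketch: how often would a random pair of shares fall inside
# a disjunctive safe harbor (either share under 20%) versus the draft
# guidelines' conjunctive one (both shares under 20%)?
import random

random.seed(0)
trials = 100_000
either = both = 0
for _ in range(trials):
    relevant = random.random()  # share in the relevant market (illustrative draw)
    related = random.random()   # share of the relevant market using the related product
    either += (relevant < 0.20) or (related < 0.20)
    both += (relevant < 0.20) and (related < 0.20)

print(f"under either test alone: {either / trials:.0%}")  # ~36% of draws
print(f"under both tests linked: {both / trials:.0%}")    # ~4% of draws
```

On these stylized assumptions, linking the thresholds leaves roughly a tenth as many transactions inside the safe harbor as either test alone would.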

Full post.

William J. Kolasky and Philip A. Giordano discuss the need to look to the EU for a better model for the draft guidelines

While Kolasky and Giordano believe that the 1984 guidelines are badly outdated, they also believe that the draft guidelines fail to recognize important efficiencies, and fail to give sufficiently clear standards for challenging vertical mergers.

By contrast, Kolasky and Giordano believe that the 2008 EU vertical merger guidelines provide much greater specificity; in some respects, the 1984 guidelines were better aligned with the 2008 EU guidelines than the new draft is. Losing that specificity in the new draft guidelines sets the standards back. As such, they recommend that the DOJ and FTC adopt the EU vertical merger guidelines as a model for the US.

To take one example, the draft guidelines lose some of the important economic distinctions between vertical and horizontal mergers and need to be clarified, in particular with respect to burdens of proof related to efficiencies. The EU guidelines also provide superior guidance on how to distinguish between a firm’s ability and its incentive to raise rivals’ costs.

Full post.

Margaret Slade believes that the draft guidelines are a step in the right direction, but uneven on critical issues

Slade welcomes the new draft guidelines and finds them to be a good effort, if in need of some refinement. She believes the agencies were correct to defer to the 2010 Horizontal Merger Guidelines for the conceptual foundations of market definition and concentration, but argues that the 20 percent thresholds don’t reveal enough information. In her view, it would be helpful “to have a list of factors that could be used to determine which mergers that fall below those thresholds are more likely to be investigated, and vice versa.”

Slade also takes issue with the way the draft guidelines deal with EDM. Although she does not believe that EDM should always be automatically assumed, she finds that the guidelines do not offer enough detail to determine the cases in which it should not be.

For Slade, the guidelines also fail to include a wide range of efficiencies that can arise from vertical integration. For instance “organizational efficiencies, such as mitigating contracting, holdup, and renegotiation costs, facilitating specific investments in physical and human capital, and providing appropriate incentives within firms” are important considerations that the draft guidelines should acknowledge.

Slade also advises caution when simulating vertical mergers. They are much more complex than horizontal simulations, which means that “vertical merger simulations have to be carefully crafted to fit the markets that are susceptible to foreclosure and that a one-size-fits-all model can be very misleading.”

Full post.

Joshua D. Wright, Douglas H. Ginsburg, Tad Lipsky, and John M. Yun on how to extend the economic principles present in the draft vertical merger guidelines

Wright et al. commend the agencies for highlighting important analytical factors while avoiding “untested merger assessment tools or theories of harm.”

They do, however, offer some points for improvement. First, EDM should be clearly incorporated into the unilateral effects analysis. The way the draft guidelines are currently structured improperly leaves the role of EDM in a sort of “limbo” between effects analysis and efficiencies analysis that could confuse courts and lead to an incomplete and unbalanced assessment of unilateral effects.

Second, Wright et al. also argue that the 20 percent thresholds in the draft guidelines do not have any basis in evidence or theory, nor are they of “any particular importance to predicting competitive effects.”

Third, by abandoning the 1984 guidelines’ acknowledgement of the generally beneficial effects of vertical mergers, the draft guidelines reject the weight of modern antitrust literature and fail to recognize “the empirical reality that vertical relationships are generally procompetitive or neutral.”

Finally, the draft guidelines should be more specific in recognizing that there are transaction costs associated with integration via contract. Properly conceived, the guidelines should more readily recognize that efficiencies arising from integration via merger are cognizable and merger specific.

Full post.

Gregory J. Werden and Luke M. Froeb on the conspicuous silences of the proposed vertical merger guidelines

A key criticism offered by Werden and Froeb in their post is that “the proposed Guidelines do not set out conditions necessary or sufficient for the agencies to conclude that a merger likely would substantially lessen competition.” The draft guidelines refer to factors the agencies may consider as part of their deliberation, but ultimately do not give an indication as to how those different factors will be weighed. 

Further, Werden and Froeb believe that the draft guidelines fail even to communicate how the agencies generally view the competitive process — in particular, how they view the critical differences between horizontal and vertical mergers.

Full post.

Jonathan M. Jacobson and Kenneth Edelson on the missed opportunity to clarify merger analysis in the draft guidelines

Jacobson and Edelson begin with an acknowledgement that the guidelines are outdated and that there is a dearth of useful case law, thus leading to a need for clarified rules. Unfortunately, they do not feel that the current draft guidelines do nearly enough to satisfy this need for clarification. 

Generally positive about the 20% thresholds in the draft guidelines, Jacobson and Edelson nonetheless feel that this “loose safe harbor” leaves some problematic ambiguity. For example, the draft guidelines endorse a unilateral foreclosure theory of harm, but leave unspecified what actually qualifies as a harm. Also, while the Baker Hughes burden shifting framework is widely accepted, the guidelines fail to specify how burdens should be allocated in vertical merger cases. 

The draft guidelines also miss an important opportunity to specify whether or not EDM should be presumed to exist in vertical mergers, and whether it should be presumptively credited as merger-specific.

Full post.

Symposium Day 2

Timothy Brennan on the complexities of enforcement for “pure” vertical mergers

Brennan’s post focuses on what he refers to as “pure” vertical mergers, those that do not include concerns about expansion into upstream or downstream markets. Brennan notes the highly complex nature of the speculative theories of harm that can arise from vertical mergers. Consequently, he concludes that, with respect to blocking pure vertical mergers,

“[I]t is not clear that we are better off expending the resources to see whether something is bad, rather than accepting the cost of error from adopting imperfect rules — even rules that imply strict enforcement. Pure vertical merger may be an example of something that we might just want to leave be.”

Full post.

Steven J. Cernak on the burden of proof for EDM

Cernak’s post examines the absences and ambiguities in the draft guidelines as compared to the 1984 guidelines. He notes the absence of some theories of harm (for instance, the threat of regulatory evasion) and then points out the ambiguity in how the draft guidelines deal with pleading and proving EDM.

Specifically, the draft guidelines are unclear as to how EDM should be treated. Is EDM an affirmative defense, or is it a factor that agencies are required to include as part of their own analysis? In Cernak’s opinion, the agencies should be clearer on the point. 

Full post.

Eric Fruits on messy mergers and muddled guidelines

Fruits observes that the draft guidelines’ attempt to clarify how the Agencies think about mergers and competition actually demonstrates how complex markets, related products, and dynamic competition are.

Fruits goes on to describe how the assumptions necessary to support the speculative theories of harm on which the draft guidelines may rely are vulnerable to change. Ultimately, relying on such theories and strong assumptions may make market definition of even “obvious” markets and products a fraught exercise that devolves into a battle of experts.

Full post.

Pozen, Cornell, Concklin, and Van Arsdall on the missed opportunity to harmonize with international law

Pozen et al. believe that the draft guidelines inadvisably move the US away from accepted international standards. The 20 percent threshold in the draft guidelines is “arbitrarily low” given the generally procompetitive nature of vertical combinations.

Instead, DOJ and the FTC should consider following the approaches taken by the EU, Japan, and Chile by favoring a 30 percent threshold for challenges along with a post-merger HHI measure below 2000.
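
For readers unfamiliar with the measure, the Herfindahl-Hirschman Index (HHI) is simply the sum of squared market shares expressed in percentage points. A minimal sketch with hypothetical shares shows how such a screen would operate:

```python
# HHI = sum of squared market shares (in percentage points).
# The post-merger shares below are hypothetical, chosen only to
# illustrate how an HHI-below-2000 screen would be applied.

def hhi(shares_pct):
    return sum(s * s for s in shares_pct)

post_merger_shares = [30, 25, 20, 15, 10]
print(hhi(post_merger_shares))  # 2250 -> above a 2000 screen, so no safe harbor
```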

Full post.

Scott Sher and Matthew McDonald write about the implications of the Draft Vertical Merger Guidelines for vertical mergers involving technology start-ups

Sher and McDonald describe how the draft vertical merger guidelines miss a valuable opportunity to clarify speculative theories of harm based on “potential competition.”

In particular, the draft guidelines should address the literature demonstrating that vertical acquisition of small tech firms by large tech firms is largely complementary and procompetitive. Large tech firms are good at process innovation, while smaller firms are good at product innovation, leading to specialization and the realization of efficiencies through acquisition.

Further, innovation in tech markets is driven by commercialization and exit strategy. Acquisition has become an important way for investors and startups to profit from their innovation. Vertical merger policy that is biased against vertical acquisition threatens this ecosystem and the draft guidelines should be updated to reflect this reality.

Full post.

Rybnicek on how the draft vertical merger guidelines might do more harm than good

Rybnicek notes the common calls to withdraw the 1984 Non-Horizontal Merger Guidelines, but is skeptical that replacing them will be beneficial. In particular, he believes there are major flaws in the draft guidelines that would lead to suboptimal merger policy at the Agencies.

One concern is that the draft guidelines could easily lead to the impression that vertical mergers are as likely to lead to harm as horizontal mergers. But that is false and easily refuted by economic evidence and logic. By focusing on vertical transactions more than the evidence suggests is necessary, the Agencies will waste resources and spend less time pursuing enforcement of actually anticompetitive transactions.

Rybnicek also notes that the 20 percent threshold “safe harbor” is not only economically unsound but will likely create a problematic “sufficient condition” for enforcement.

Rybnicek believes that the draft guidelines minimize the significant role of EDM and efficiencies by pointing to the 2010 Horizontal Merger Guidelines for analytical guidance. In the horizontal context, efficiencies are exceedingly difficult to prove, and it is unwarranted to apply the same skeptical treatment of efficiencies in the vertical merger context.

Ultimately, Rybnicek concludes that the draft guidelines do little to advance an understanding of how the agencies will look at a vertical transaction, while also undermining the economics and theory that have guided antitrust law. 

Full post.

Lawrence J. White on the missing market definition standard in the draft vertical guidelines

White believes that there is a gaping absence in the draft guidelines insofar as they lack an adequate market definition paradigm. White notes that markets need to be defined in a way that permits a determination of market power (or not) post-merger, but the guidelines refrain from recommending a vertical-specific method for drawing market definition.

Instead, the draft guidelines point to the 2010 Horizontal Merger Guidelines for a market definition paradigm. Unfortunately, that paradigm is inapplicable in the vertical merger context. The way that markets are defined in the horizontal and vertical contexts is very different. There is a significant chance that an improperly drawn market definition based on the Horizontal Guidelines could understate the risk of harm from a given vertical merger.

Full post.

Manne & Stout 1 on the important differences between integration via contract and integration via merger

Manne & Stout believe that there is a great deal of ambiguity in the proposed guidelines that could lead either to uncertainty as to how the agencies will exercise their discretion, or, more troublingly, could lead courts to take seriously speculative theories of harm. 

Among these, Manne & Stout believe that the Agencies should specifically address the alleged equivalence of integration via contract and integration via merger. They need to either repudiate this theory, or else more fully explain the extremely complex considerations that factor into different integration decisions for different firms.

In particular, there is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. It would be a categorical mistake for the draft guidelines to permit an inference that simply because an integration could be achieved by contract, it follows that integration by merger deserves greater scrutiny per se.

A whole host of efficiency and non-efficiency related goals are involved in a choice of integration methods. But adopting a presumption against integration via merger necessarily leads to (1) an erroneous assumption that efficiencies are functionally achievable in both situations and (2) a more concerning creation of discretion in the hands of enforcers to discount the non-efficiency reasons for integration.

Therefore, the agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.

Full post.

Manne & Stout 2 on the problematic implication of incorporating a contract/merger equivalency assumption into the draft guidelines

Manne & Stout begin by observing that, while the Agencies have the opportunity to enforce in either the case of merger or contract, defendants can frequently only realize efficiencies in the case of merger. Therefore, calling for a contract/merger equivalency amounts to a preference for more enforcement per se, and is less solicitous of concerns about the loss of procompetitive arrangements. Moreover, Manne & Stout point out that there is currently no empirical basis for weighting enforcement so heavily against vertical mergers.

Manne & Stout further observe that vertical merger enforcement is more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante because we lack fundamental knowledge about the effects of market structure and firm organization on innovation and dynamic competition. 

Instead, the draft guidelines should adopt Williamson’s view of economic organizations: eschew the formal orthodox neoclassical economic lens in favor of organizational theory that focuses on complex contracts (including vertical mergers). Without this view, “We are more likely to miss it when mergers solve market inefficiencies, and more likely to see it when they impose static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.”

Critically, Manne & Stout argue that the guidelines’ focus on market share thresholds leads to an overly narrow view of competition. Instead of relying on static market analyses, the Agencies should include a richer set of observations, including those that involve “organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets.”

Ultimately Manne & Stout suggest that the draft guidelines should be clarified to guide the Agencies and courts away from applying inflexible, formalistic logic that will lead to suboptimal enforcement.

Full post.

In our first post, we discussed the weaknesses of an important theoretical underpinning of efforts to expand vertical merger enforcement (including, possibly, the proposed guidelines): the contract/merger equivalency assumption.

In this post we discuss the implications of that assumption and some of the errors it leads to — including some incorporated into the proposed guidelines.

There is no theoretical or empirical justification for more vertical enforcement

Tim Brennan makes a fantastic and regularly overlooked point in his post: If it’s true, as many claim (see, e.g., Steve Salop), that firms can generally realize vertical efficiencies by contracting instead of merging, then it’s also true that they can realize anticompetitive outcomes the same way. While efficiencies have to be merger-specific in order to be relevant to the analysis, so too do harms. But where the assumption is that the outcomes of integration can generally be achieved by the “less-restrictive” means of contracting, that would apply as well to any potential harms, thus negating the transaction-specificity required for enforcement. As Dennis Carlton notes:

There is a symmetry between an evaluation of the harms and benefits of vertical integration. Each must be merger-specific to matter in an evaluation of the merger’s effects…. If transaction costs are low, then vertical integration creates neither benefits nor harms, since everything can be achieved by contract. If transaction costs exist to prevent the achievement of a benefit but not a harm (or vice-versa), then that must be accounted for in a calculation of the overall effect of a vertical merger. (Dennis Carlton, Transaction Costs and Competition Policy)

Of course, this also means that those (like us) who believe that it is not so easy to accomplish by contract what may be accomplished by merger must also consider the possibility that a proposed merger may be anticompetitive because it overcomes an impediment to achieving anticompetitive goals via contract.

There’s one important caveat, though: The potential harms that could arise from a vertical merger are the same as those that would be cognizable under Section 2 of the Sherman Act. Indeed, for a vertical merger to cause harm, it must be expected to result in conduct that would otherwise be illegal under Section 2. This means there is always the possibility of a second bite at the apple when it comes to thwarting anticompetitive conduct. 

The same cannot be said of procompetitive conduct that can arise only through merger: if a merger is erroneously prohibited, that conduct never gets a chance to happen at all.

Interestingly, Salop himself — the foremost advocate today for enhanced vertical merger enforcement — recognizes the issue raised by Brennan: 

Exclusionary harms and certain efficiency benefits also might be achieved with vertical contracts and agreements without the need for a vertical merger…. It [] might be argued that the absence of premerger exclusionary contracts implies that the merging firms lack the incentive to engage in conduct that would lead to harmful exclusionary effects. But anticompetitive vertical contracts may face the same types of impediments as procompetitive ones, and may also be deterred by potential Section 1 enforcement. Neither of these arguments thus justify a more or less intrusive vertical merger policy generally. Rather, they are factors that should be considered in analyzing individual mergers. (Salop & Culley, Potential Competitive Effects of Vertical Mergers)

In the same article, however, Salop also points to the reasons why it should be considered insufficient to leave enforcement to Sections 1 and 2, instead of addressing them at their incipiency under Clayton Section 7:

While relying solely on post-merger enforcement might have appealing simplicity, it obscures several key facts that favor immediate enforcement under Section 7.

  • The benefit of HSR review is to prevent the delays and remedial issues inherent in after-the-fact enforcement….
  • There may be severe problems in remedying the concern….
  • Section 1 and Section 2 legal standards are more permissive than Section 7 standards….
  • The agencies might well argue that anticompetitive post-merger conduct was caused by the merger agreement, so that it would be covered by Section 7….

All in all, failure to address these kinds of issues in the context of merger review could lead to significant consumer harm and underdeterrence.

The points are (mostly) well-taken. But they also essentially amount to a preference for more and tougher enforcement against vertical restraints than the judicial interpretations of Sections 1 & 2 currently countenance — a preference, in other words, for the use of Section 7 to bolster enforcement against vertical restraints of any sort (whether contractual or structural).

The problem with that, as others have pointed out in this symposium (see, e.g., Nuechterlein; Werden & Froeb; Wright, et al.), is that there’s simply no empirical basis for adopting a tougher stance against vertical restraints in the first place. Over and over again the empirical research shows that vertical restraints and vertical mergers are unlikely to cause anticompetitive harm: 

In reviewing this literature, two features immediately stand out: First, there is a paucity of support for the proposition that vertical restraints/vertical integration are likely to harm consumers. . . . Second, a far greater number of studies found that the use of vertical restraints in the particular context studied improved welfare unambiguously. (Cooper, et al, Vertical Restrictions and Antitrust Policy: What About the Evidence?)

[W]e did not have a particular conclusion in mind when we began to collect the evidence, and we… are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing, vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view…. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. (Francine Lafontaine & Margaret Slade, Vertical Integration and Firm Boundaries: The Evidence)

[Table 1 in this paper] indicates that voluntarily adopted restraints are associated with lower costs, greater consumption, higher stock returns, and better chances of survival. (Daniel O’Brien, The Antitrust Treatment of Vertical Restraints: Beyond the Possibility Theorems)

In sum, these papers from 2009-2018 continue to support the conclusions from Lafontaine & Slade (2007) and Cooper et al. (2005) that consumers mostly benefit from vertical integration. While vertical integration can certainly foreclose rivals in theory, there is only limited empirical evidence supporting that finding in real markets. (GAI Comment on Vertical Mergers)

To the extent that the proposed guidelines countenance heightened enforcement relative to the status quo, they fall prey to the same defect. And while it is unclear from the fairly terse guidelines whether this is animating them, the removal of language present in the 1984 Non-Horizontal Merger Guidelines acknowledging the relative lack of harm from vertical mergers (“[a]lthough non-horizontal mergers are less likely than horizontal mergers to create competitive problems…”) is concerning.  

The shortcomings of orthodox economics and static formal analysis

There is also a further reason to think that vertical merger enforcement may be more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante (i.e., where arrangements among vertical firms are by contract): Our lack of knowledge about the effects of market structure and firm organization on innovation and dynamic competition, and the relative hostility to nonstandard contracting, including vertical integration:

[T]he literature addressing how market structure affects innovation (and vice versa) in the end reveals an ambiguous relationship in which factors unrelated to competition play an important role. (Katz & Shelanski, Mergers and Innovation)

The fixation on the equivalency of the form of vertical integration (i.e., merger versus contract) is likely to lead enforcers to focus on static price and cost effects, and miss the dynamic organizational and informational effects that lead to unexpected, increased innovation across and within firms. 

In the hands of Oliver Williamson, this means that understanding firms in the real world entails taking an organization theory approach, in contrast to the “orthodox” economic perspective:

The lens of contract approach to the study of economic organization is partly complementary but also partly rival to the orthodox [neoclassical economic] lens of choice. Specifically, whereas the latter focuses on simple market exchange, the lens of contract is predominantly concerned with the complex contracts. Among the major differences is that non‐standard and unfamiliar contractual practices and organizational structures that orthodoxy interprets as manifestations of monopoly are often perceived to serve economizing purposes under the lens of contract. A major reason for these and other differences is that orthodoxy is dismissive of organization theory whereas organization theory provides conceptual foundations for the lens of contract. (emphasis added)

We are more likely to miss it when mergers solve market inefficiencies, and more likely to see it when they impose static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.

The competition that takes place in the real world and between various groups ultimately depends upon the institution of private contracts, many of which, including the firm itself, are nonstandard. Innovation includes the discovery of new organizational forms and the application of old forms to new contexts. Such contracts prevent or attenuate market failure, moving the market toward what economists would deem a more competitive result. Indeed, as Professor Coase pointed out, many markets deemed “perfectly competitive” are in fact the end result of complex contracts limiting rivalry between competitors. This contractual competition cannot produce perfect results — no human institution ever can. Nonetheless, the result is superior to that which would obtain in a (real) world without nonstandard contracting. These contracts do not depend upon the creation or enhancement of market power and thus do not produce the evils against which antitrust law is directed. (Alan Meese, Price Theory Competition & the Rule of Reason)

Or, as Oliver Williamson more succinctly puts it:

[There is a] rebuttable presumption that nonstandard forms of contracting have efficiency purposes. (Oliver Williamson, The Economic Institutions of Capitalism)

The pinched focus of the guidelines on narrow market definition misses the bigger picture of dynamic competition over time

The proposed guidelines (and the theories of harm undergirding them) focus upon indicia of market power that may not be accurate if assessed in more realistic markets or over more relevant timeframes, and, if applied too literally, may bias enforcement against mergers with dynamic-innovation benefits but static-competition costs.  

Similarly, the proposed guidelines’ enumeration of potential efficiencies doesn’t really begin to cover the categories implicated by the organization of enterprise around dynamic considerations

The proposed guidelines’ efficiencies section notes that:

Vertical mergers bring together assets used at different levels in the supply chain to make a final product. A single firm able to coordinate how these assets are used may be able to streamline production, inventory management, or distribution, or create innovative products in ways that would have been hard to achieve through arm’s length contracts. (emphasis added)

But it is not clear that any of these categories encompasses organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets.

As Thomas Jorde and David Teece write:

For innovations to be commercialized, the economic system must somehow assemble all the relevant complementary assets and create a dynamically-efficient interactive system of learning and information exchange. The necessary complementary assets can conceivably be assembled by either administrative or market processes, as when the innovator simply licenses the technology to firms that already own or are willing to create the relevant assets. These organizational choices have received scant attention in the context of innovation. Indeed, the serial model relies on an implicit belief that arm’s-length contracts between unaffiliated firms in the vertical chain from research to customer will suffice to commercialize technology. In particular, there has been little consideration of how complex contractual arrangements among firms can assist commercialization — that is, translating R&D capability into profitable new products and processes….

* * *

But in reality, the market for know-how is riddled with imperfections. Simple unilateral contracts where technology is sold for cash are unlikely to be efficient. Complex bilateral and multilateral contracts, internal organization, or various hybrid structures are often required to shore up obvious market failures and create procompetitive efficiencies. (Jorde & Teece, Rule of Reason Analysis of Horizontal Arrangements: Agreements Designed to Advance Innovation and Commercialize Technology) (emphasis added)

When IP protection for a given set of valuable pieces of “know-how” is strong — easily defendable, unique patents, for example — firms can rely on property rights to efficiently contract with vertical buyers and sellers. But in cases where the valuable “know-how” is less easily defended as IP — e.g., business process innovation, managerial experience, distributed knowledge, corporate culture, and the like — the ability to partially vertically integrate through contract becomes more difficult, if not impossible.

Perhaps employing these assets is part of what is meant in the draft guidelines by “streamline.” But the very mention of innovation only in the technological context of product innovation is at least some indication that organizational innovation is not clearly contemplated.  

This is a significant lacuna. The impact of each organizational form on knowledge transfers creates a particularly strong division between integration and contract. As Enghin Atalay, Ali Hortaçsu & Chad Syverson point out:

That vertical integration is often about transfers of intangible inputs rather than physical ones may seem unusual at first glance. However, as observed by Arrow (1975) and Teece (1982), it is precisely in the transfer of nonphysical knowledge inputs that the market, with its associated contractual framework, is most likely to fail to be a viable substitute for the firm. Moreover, many theories of the firm, including the four “elemental” theories as identified by Gibbons (2005), do not explicitly invoke physical input transfers in their explanations for vertical integration. (Enghin Atalay, et al., Vertical Integration and Input Flows) (emphasis added)

There is a large economics and organization theory literature discussing how organizations are structured with respect to these sorts of intangible assets. And the upshot is that, while we start — not end, as some would have it — with the Coasian insight that firm boundaries are necessarily a function of production processes and not a hard limit, we quickly come to realize that it is emphatically not the case that integration-via-contract and integration-via-merger are always, or perhaps even often, viable substitutes.

Conclusion

The contract/merger equivalency assumption, coupled with a “least-restrictive alternative” logic that favors contract over merger, puts a thumb on the scale against vertical mergers. While the proposed guidelines as currently drafted do not necessarily portend the inflexible, formalistic application of this logic, they offer little to guide enforcers or courts away from the assumption in the important (and perhaps numerous) cases where it is unwarranted.   

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Scott Sher (Partner, Wilson Sonsini Goodrich & Rosati) and Matthew McDonald (Associate, Wilson Sonsini Goodrich & Rosati).]

On January 10, 2020, the United States Department of Justice (“DOJ”) and the Federal Trade Commission (“FTC”) (collectively, “the Agencies”) released their joint draft guidelines outlining their “principal analytical techniques, practices and enforcement policy” with respect to vertical mergers (“Draft Guidelines”). While the Draft Guidelines describe and formalize the Agencies’ existing approaches when investigating vertical mergers, they leave several policy questions unanswered. In particular, the Draft Guidelines do not address how the Agencies might approach the issue of acquisition of potential or nascent competitors through vertical mergers. As many technology mergers are motivated by the desire to enter new industries or add new tools or features to an existing platform (i.e., the Buy-Versus-Build dilemma), the omission leaves a significant hole in the Agencies’ enforcement policy agenda, and leaves the tech industry, in particular, without adequate guidance as to how the Agencies may address these issues.

This is notable, given that the Horizontal Merger Guidelines explicitly address potential competition theories of harm (e.g., at § 1 (referencing mergers and acquisitions “involving actual or potential competitors”); § 2 (“The Agencies consider whether the merging firms have been, or likely will become absent the merger, substantial head-to-head competitors.”)). Indeed, the Agencies have recently challenged several proposed horizontal mergers based on nascent competition theories of harm. 

Further, there has been much debate regarding whether increased antitrust scrutiny of vertical acquisitions of nascent competitors, particularly in technology markets, is warranted (See, e.g., Open Markets Institute, The Urgent Need for Strong Vertical Merger Guidelines (“Enforcers should be vigilant toward dominant platforms’ acquisitions of seemingly small or marginal firms and be ready to block acquisitions that may be part of a monopoly protection strategy. Dominant firms should not be permitted to expand through vertical acquisitions and cut off budding threats before they have a chance to bloom.”); Caroline Holland, Taking on Big Tech Through Merger Enforcement (“Vertical mergers that create market power capable of stifling competition could be particularly pernicious when it comes to digital platforms.”)). 

Thus, further policy guidance from the Agencies on this issue is needed. As the Agencies formulate guidance, they should take note that vertical mergers involving technology start-ups generally promote efficiency and innovation, and that any potential competitive harm almost always can be addressed with easy-to-implement behavioral remedies.

The agencies’ draft vertical merger guidelines

The Draft Guidelines outline the following principles that the Agencies will apply when analyzing vertical mergers:

  • Market definition. The Agencies will identify a relevant market and one or more “related products.” (§ 2) This is a product that is supplied by the merged firm, is vertically related to the product in the relevant market, and to which access by the merged firm’s rivals affects competition in the relevant market. (§ 2)
  • Safe harbor. Unlike horizontal merger cases, the Agencies cannot rely on changes in concentration in the relevant market as a screen for competitive effects. Instead, the Agencies consider measures of the competitive significance of the related product. (§ 3) The Draft Guidelines propose a safe harbor, stating that the Agencies are unlikely to challenge a vertical merger “where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market.” (§ 3) However, shares exceeding the thresholds, taken alone, do not support an inference that the vertical merger is anticompetitive. (§ 3)
  • Theories of unilateral harm. Vertical mergers can result in unilateral competitive effects, including raising rivals’ costs (charging rivals in the relevant market a higher price for the related product) or foreclosure (refusing to supply rivals with the related product altogether). (§ 5.a) Another potential unilateral effect is access to competitively sensitive information: The combined firm may, through the acquisition, gain access to sensitive business information about its upstream or downstream rivals that was unavailable to it before the merger (for example, a downstream rival of the merged firm may have been a premerger customer of the upstream merging party). (§ 5.b)
  • Theories of coordinated harm. Vertical mergers can also increase the likelihood of post-merger coordinated interaction. For example, a vertical merger might eliminate or hobble a maverick firm that would otherwise play an important role in limiting anticompetitive coordination. (§ 7)
  • Procompetitive effects. Vertical mergers can have procompetitive effects, such as the elimination of double marginalization (“EDM”). A merger of vertically related firms can create an incentive for the combined entity to lower prices on the downstream product, because it will capture the additional margins from increased sales on the upstream product. (§ 6) EDM thus may benefit both the merged firm and buyers of the downstream product. (§ 6) (A numeric sketch of this effect appears just after this list.)
  • Efficiencies. Vertical mergers have the potential to create cognizable efficiencies; the Agencies will evaluate such efficiencies using the standards set out in the Horizontal Merger Guidelines. (§ 8)
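
The EDM effect described in § 6 is easiest to see with numbers. Below is a minimal sketch, assuming linear demand (Q = a - b*P) and a constant upstream marginal cost c; all parameter values are illustrative, not drawn from the Draft Guidelines:

```python
# Double marginalization vs. integration, under linear demand Q = a - b*P
# and upstream marginal cost c. Parameter values are purely illustrative.

a, b, c = 100.0, 1.0, 20.0

# Separate firms: the upstream monopolist picks wholesale price w,
# then the downstream monopolist adds its own markup on top of w.
w = (a / b + c) / 2                # profit-maximizing wholesale price
p_separate = (a / b + w) / 2       # downstream retail price given input cost w
q_separate = a - b * p_separate
profit_separate = (w - c) * q_separate + (p_separate - w) * q_separate

# Merged firm: a single markup over the true marginal cost c.
p_merged = (a / b + c) / 2
q_merged = a - b * p_merged
profit_merged = (p_merged - c) * q_merged

print(f"separate firms: price={p_separate:.0f}, quantity={q_separate:.0f}, profit={profit_separate:.0f}")
print(f"merged firm:    price={p_merged:.0f}, quantity={q_merged:.0f}, profit={profit_merged:.0f}")
# separate firms: price=80, quantity=20, profit=1200
# merged firm:    price=60, quantity=40, profit=1600
```

With these hypothetical numbers, the merged firm charges less, sells more, and earns a higher combined profit than the two separate firms, which is precisely why EDM can benefit both the merged firm and buyers of the downstream product.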

Implications for vertical mergers involving nascent start-ups

At present, the Draft Guidelines do not address theories of nascent or potential competition. To the extent the Agencies provide further guidance regarding the treatment of vertical mergers involving nascent start-ups, they should take note of the following facts:

First, empirical evidence from strategy literature indicates that technology-related vertical mergers are likely to be efficiency-enhancing. In a survey of the strategy literature on vertical integration, Professor D. Daniel Sokol observed that vertical acquisitions involving technology start-ups are “largely complementary, combining the strengths of the acquiring firm in process innovation with the product innovation of the target firms.” (p. 1372) The literature shows that larger firms tend to be relatively poor at developing new and improved products outside of their core expertise, but are relatively strong at process innovation (developing new and improved methods of production, distribution, support, and the like). (Sokol, p. 1373) Larger firms need acquisitions to help with innovation; acquisition is more efficient than attempting to innovate through internal efforts. (Sokol, p. 1373)

Second, vertical merger policy towards nascent competitor acquisitions has important implications for the rate of start-up formation, and the innovation that results. Entrepreneurship in technology markets is motivated by the opportunity for commercialization and exit. (Sokol, p. 1362 (“[T]he purpose of such investment [in start-ups] is to reap the rewards of scaling a venture to exit.”))

In recent years, as IPO activity has declined, vertical mergers have become the default method of entrepreneurial exit. (Sokol, p. 1376) Increased vertical merger enforcement against start-up acquisitions thus closes off the primary exit strategy for entrepreneurs. As Prof. Sokol concluded in his study of vertical mergers:

When antitrust agencies, judges, and legislators limit the possibility of vertical mergers as an exit strategy for start-up firms, it creates risk for innovation and entrepreneurship…. it threatens entrepreneurial exits, particularly for tech companies whose very business model is premised upon vertical mergers for purposes of a liquidity event. (p. 1377)

Third, to the extent that the vertical acquisition of a start-up raises competitive concerns, a behavioral remedy is usually preferable to a structural one. As explained above, vertical acquisitions typically result in substantial efficiencies, and these efficiencies are likely to overwhelm any potential competitive harm. Further, a structural remedy is likely infeasible in the case of a start-up acquisition. Thus, behavioral relief is the only way of preserving the deal’s efficiencies while remedying the potential competitive harm. (The Agencies have recognized as much; see DOJ Antitrust Division, Policy Guide to Merger Remedies, p. 20 (“Stand-alone conduct relief is only appropriate when a full-stop prohibition of the merger would sacrifice significant efficiencies and a structural remedy would similarly eliminate such efficiencies or is simply infeasible.”)) Appropriate behavioral remedies for vertical acquisitions of start-ups would include firewalls (restricting the flow of competitively sensitive information between the upstream and downstream units of the combined firm) or a fair dealing or non-discrimination remedy (requiring the merging firm to supply an input or grant customer access to competitors in a non-discriminatory way) with clear benchmarks to ensure compliance. (See Policy Guide to Merger Remedies, pp. 22-24)

To be sure, some vertical mergers may cause harm to competition, and there should be enforcement when the facts justify it. But vertical mergers involving technology start-ups generally enhance efficiency and promote innovation. Antitrust’s goals of promoting competition and innovation are thus best served by taking a measured approach towards vertical mergers involving technology start-ups. (Sokol, pp. 1362–63) (“Thus, a general inference that makes vertical acquisitions, particularly in tech, more difficult to approve leads to direct contravention of antitrust’s role in promoting competition and innovation.”)

Why Data Is Not the New Oil

Alec Stapp — 8 October 2019

“Data is the new oil,” said Jaron Lanier in a recent op-ed for The New York Times. Lanier’s use of this metaphor is only the latest instance of what has become the dumbest meme in tech policy. As the digital economy becomes more prominent in our lives, it is not unreasonable to seek to understand one of its most important inputs. But this analogy to the physical economy is fundamentally flawed. Worse, introducing regulations premised upon faulty assumptions like this will likely do far more harm than good. Here are seven reasons why “data is the new oil” misses the mark:

1. Oil is rivalrous; data is non-rivalrous

If someone uses a barrel of oil, it can’t be consumed again. But, as Alan McQuinn, a senior policy analyst at the Information Technology and Innovation Foundation, noted, “when consumers ‘pay with data’ to access a website, they still have the same amount of data after the transaction as before. As a result, users have an infinite resource available to them to access free online services.” Imposing restrictions on data collection makes this infinite resource finite. 

2. Oil is excludable; data is non-excludable

Oil is highly excludable because, as a physical commodity, it can be stored in ways that prevent use by non-authorized parties. However, as my colleagues pointed out in a recent comment to the FTC: “While databases may be proprietary, the underlying data usually is not.” They go on to argue that this can lead to under-investment in data collection:

[C]ompanies that have acquired a valuable piece of data will struggle both to prevent their rivals from obtaining the same data as well as to derive competitive advantage from the data. For these reasons, it also means that firms may well be more reluctant to invest in data generation than is socially optimal. In fact, to the extent this is true there is arguably more risk of companies under-investing in data generation than of firms over-investing in order to create data troves with which to monopolize a market. This contrasts with oil, where complete excludability is the norm.

3. Oil is fungible; data is non-fungible

Oil is a commodity, so, by definition, one barrel of oil of a given grade is equivalent to any other barrel of that grade. Data, on the other hand, is heterogeneous. Each person’s data is unique and may consist of a practically unlimited number of different attributes that can be collected into a profile. This means that oil will follow the law of one price, while a dataset’s value will be highly contingent on its particular properties and commercialization potential.

4. Oil has positive marginal costs; data has zero marginal costs

There is a significant expense to producing and distributing an additional barrel of oil (as low as $5.49 per barrel in Saudi Arabia; as high as $21.66 in the U.K.). Data is merely encoded information (bits of 1s and 0s), so gathering, storing, and transferring it is nearly costless (though, to be clear, setting up systems for collecting and processing data can be a large fixed cost). Under perfect competition, the market clearing price is equal to the marginal cost of production, which is why data is traded for free services while oil still requires cold, hard cash.

5. Oil is a search good; data is an experience good

Oil is a search good, meaning its value can be assessed prior to purchasing. By contrast, data tends to be an experience good because companies don’t know how much a new dataset is worth until it has been combined with pre-existing datasets and deployed using algorithms (from which value is derived). This is one reason why purpose limitation rules can have unintended consequences. If firms are unable to predict what data they will need in order to develop new products, then restricting what data they’re allowed to collect is per se anti-innovation.

6. Oil has constant returns to scale; data has rapidly diminishing returns

As an energy input into a mechanical process, oil has relatively constant returns to scale (e.g., when oil is used as the fuel source to power a machine). When data is used as an input for an algorithm, it shows rapidly diminishing returns, as the charts collected in a presentation by Google’s Hal Varian demonstrate. The initial training data is hugely valuable for increasing an algorithm’s accuracy. But as you increase the dataset by a fixed amount each time, the improvements steadily decline (because new data is only helpful insofar as it’s differentiated from the existing dataset).
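The pattern is easy to reproduce. Below is a minimal sketch in Python, assuming a power-law learning curve of the form a - b * n^(-alpha), a common stylized model of accuracy as a function of training-set size; the constants are made up for illustration, not taken from Varian’s charts:

# Hypothetical learning curve: accuracy(n) = a - b * n**(-alpha).
# The constants are illustrative, not estimates from any real model.
a, b, alpha = 0.95, 0.60, 0.35

def accuracy(n: int) -> float:
    """Stylized accuracy of a model trained on n examples."""
    return a - b * n ** (-alpha)

# Grow the dataset in fixed increments of 10,000 examples and watch
# the marginal improvement shrink with each step.
prev = accuracy(10_000)
for n in range(20_000, 110_000, 10_000):
    curr = accuracy(n)
    print(f"{n:>7,} examples: accuracy {curr:.4f} (+{curr - prev:.4f})")
    prev = curr

Each additional block of 10,000 examples buys a smaller accuracy gain than the last, which is exactly the diminishing-returns pattern the charts show.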

7. Oil is valuable; data is worthless

The features detailed above — rivalrousness, fungibility, marginal cost, returns to scale — all lead to perhaps the most important distinction between oil and data: The average barrel of oil is valuable (currently $56.49) and the average dataset is worthless (on the open market). As Will Rinehart showed, putting a price on data is a difficult task. But when data brokers and other intermediaries in the digital economy do try to value data, the prices are almost uniformly low. The Financial Times had the most detailed numbers on what personal data is sold for in the market:

  • “General information about a person, such as their age, gender and location is worth a mere $0.0005 per person, or $0.50 per 1,000 people.”
  • “A person who is shopping for a car, a financial product or a vacation is more valuable to companies eager to pitch those goods. Auto buyers, for instance, are worth about $0.0021 a pop, or $2.11 per 1,000 people.”
  • “Knowing that a woman is expecting a baby and is in her second trimester of pregnancy, for instance, sends the price tag for that information about her to $0.11.”
  • “For $0.26 per person, buyers can access lists of people with specific health conditions or taking certain prescriptions.”
  • “The company estimates that the value of a relatively high Klout score adds up to more than $3 in word-of-mouth marketing value.”
  • “[T]he sum total for most individuals often is less than a dollar.”

Data is a specific asset, meaning it has “a significantly higher value within a particular transacting relationship than outside the relationship.” We only think data is so valuable because tech companies are so valuable. In reality, it is the combination of high-skilled labor, large capital expenditures, and cutting-edge technologies (e.g., machine learning) that makes those companies so valuable. Yes, data is an important component of these production functions. But to claim that data is responsible for all the value created by these businesses, as Lanier does in his NYT op-ed, is farcical (and reminiscent of the labor theory of value). 

Conclusion

People who analogize data to oil or gold may merely be trying to convey that data is as valuable in the 21st century as those commodities were in the 20th century (though, as argued above, even that is a dubious proposition). If the comparison stopped there, it would be relatively harmless. But there is a real risk that policymakers might take the analogy literally and regulate data in the same way they regulate commodities. As this article shows, data has many unique properties that are simply incompatible with 20th-century modes of regulation.

A better — though imperfect — analogy, as author Bernard Marr suggests, would be renewable energy. The sources of renewable energy are all around us — solar, wind, hydroelectric — and there is more available than we could ever use. We just need the right incentives and technology to capture it. The same is true for data. We leave our digital fingerprints everywhere — we just need to dust for them.

Over the past few weeks, Truth on the Market has had several posts related to harm reduction policies, with a focus on tobacco, e-cigarettes, and other vapor products.

Harm reduction policies are used to manage a wide range of behaviors including recreational drug use and sexual activity. Needle-exchange programs reduce the spread of infectious diseases among users of heroin and other injected drugs. Opioid replacement therapy substitutes illegal opioids, such as heroin, with a longer acting but less euphoric opioid. Safer sex education and condom distribution in schools are designed to reduce teenage pregnancy and reduce the spread of sexually transmitted infections. None of these harm reduction policies stop the risky behavior, nor do the policies eliminate the potential for harm. Nevertheless, the policies intend to reduce the expected harm.

Carrie Wade, Director of Harm Reduction Policy and Senior Fellow at the R Street Institute, draws a parallel between opiate harm reduction strategies and potential policies related to tobacco harm reduction. She notes that with successful one-year quit rates hovering around 10 percent, harm reduction strategies offer ways to transition more smokers off the most dangerous nicotine delivery device: the combustible cigarette.

Most of the harm from smoking is caused by the inhalation of toxicants released through the combustion of tobacco. Use of non-combustible nicotine delivery systems, such as e-cigarettes and smokeless tobacco, is generally considered to be significantly less harmful than smoking cigarettes. The U.K. government agency Public Health England has concluded that e-cigarettes are around 95 percent less harmful than combustible cigarettes.

In the New England Journal of Medicine, Fairchild et al. (2018) identify a continuum of potential policies regarding the regulation of vapor products, such as e-cigarettes, shown in the figure below. They note that the most restrictive policies would effectively eliminate e-cigarettes as a viable alternative to smoking, while the most permissive may promote e-cigarette usage and potentially encourage young people—who would not do so otherwise—to take up e-cigarettes. In between these extremes are policies that may discourage young people from initiating use of e-cigarettes, while encouraging current smokers to switch to less harmful vapor products.

[Figure 1 from Fairchild et al. (2018): the continuum of policy options for regulating e-cigarettes, from most restrictive to most permissive.]

International Center for Law & Economics chief economist Eric Fruits notes in his blog post that more than 20 countries have introduced taxation on e-cigarettes and other vapor products. In the United States, several states and local jurisdictions have enacted e-cigarette taxes. His post is based on a recently released ICLE white paper entitled Vapor products, harm reduction, and taxation: Principles, evidence and a research agenda.

Under a harm reduction principle, Fruits argues that e-cigarettes and other vapor products should face no taxes or low taxes relative to conventional cigarettes, to guide consumers toward a safer alternative to smoking.

In contrast to harm reduction principles, the precautionary principle as well as principles of tax equity point toward the taxation of vapor products at rates similar to conventional cigarettes.

On the one hand, some policymakers claim that the objective of taxing nicotine products is to reduce nicotine consumption. On the other hand, Dan Mitchell, co-founder of the Center for Freedom and Prosperity, points out that some politicians are concerned that they will lose tax revenue if a substantial number of smokers switch to options such as vaping.

Often missed in the policy discussion is the effect of fiscal policies on innovation and the development and commercialization of harm-reducing products. Also, often missed are the consequences for current consumers of nicotine products, including smokers seeking to quit using harmful conventional cigarettes.

Policy decisions regarding taxation of vapor products should take into account both long-term fiscal effects and broader economic and welfare effects. These effects might (or might not) suggest very different tax policies to those that have been enacted or are under consideration. These considerations, however, are frustrated by unreliable and wildly divergent empirical estimates of consumer demand in the face of changing prices and/or rising taxes.

Along the lines of uncertain—if not surprising—impacts, Fritz Laux, professor of economics at Northeastern State University, provides an explanation of why smoke-free air laws have not been found to adversely affect revenues or employment in the restaurant and hospitality industries.

He argues that social norms regarding smoking in restaurants have changed to the point that many smokers themselves support bans on smoking in restaurants. In this way, he hypothesizes, smoke-free air laws do not impose a significant constraint on consumer behavior or business activity. We might likewise infer, by extension, that policies which do not prohibit vaping in public spaces (leaving such decisions to the discretion of business owners and managers) could encourage switching by people who otherwise would have to exit buildings in order to vape or smoke—without adversely affecting businesses.

Principles of harm reduction recognize that every policy proposal has uncertain outcomes as well as potential spillovers and unforeseen consequences. With such high risks and costs associated with cigarette and other combustible use, taxes and regulations must be developed in an environment of uncertainty and with an eye toward a net reduction in harm, rather than an unattainable goal of zero harm or in an overt pursuit of tax revenues.

 

ICLE has released a white paper entitled Vapor products, harm reduction, and taxation: Principles, evidence and a research agenda, authored by ICLE Chief Economist Eric Fruits.

More than 20 countries have introduced taxation on e-cigarettes and other vapor products. In the United States, several states and local jurisdictions have enacted e-cigarette taxes.

The concept of tobacco harm reduction began in 1976 when Michael Russell, a psychiatrist and lecturer at the Addiction Research Unit of Maudsley Hospital in London, wrote: “People smoke for nicotine but they die from the tar.” Russell hypothesized that reducing the ratio of tar to nicotine could be the key to safer smoking.

Since then, much of the harm from smoking has been well-established as caused almost exclusively by toxicants released through the combustion of tobacco. Public Health England and the American Cancer Society have concluded that non-combustible tobacco products, as well as pure nicotine products, are considerably less harmful than combustible products. Earlier this year, the American Cancer Society shifted its position on e-cigarettes, recommending that individuals who do not quit smoking “… should be encouraged to switch to the least harmful form of tobacco product possible; switching to the exclusive use of e-cigarettes is preferable to continuing to smoke combustible products.”

In contrast, some public health advocates urge a precautionary approach in which the introduction and sale of e-cigarettes be limited or halted until the products are demonstrably safe.

Policymakers face a wide range of strategies regarding the taxation of vapor products. On the one hand, principles of harm reduction suggest vapor products should face no taxes or low taxes relative to conventional cigarettes, to guide consumers toward a safer alternative to smoking. The U.K. House of Commons Science and Technology Committee concludes:

The level of taxation on smoking-related products should directly correspond to the health risks that they present, to encourage less harmful consumption. Applying that logic, e-cigarettes should remain the least-taxed and conventional cigarettes the most, with heat-not-burn products falling between the two.

In contrast, the precautionary principle as well as principles of tax equity point toward the taxation of vapor products at rates similar to conventional cigarettes.

Analysis of tax policy issues is complicated by divergent—and sometimes obscured—intentions of such policies. Some policymakers claim that the objective of taxing nicotine products is to reduce nicotine consumption. Other policymakers indicate the objective is to raise revenues to support government spending. Often missed in the policy discussion is the effect of fiscal policies on innovation and the development and commercialization of harm-reducing products. Also, often missed are the consequences for current consumers of nicotine products, including smokers seeking to quit using harmful conventional cigarettes.

Policy decisions regarding taxation of vapor products should take into account both long-term fiscal effects and broader economic and welfare effects. These effects might (or might not) suggest very different tax policies to those that have been enacted or are under consideration.

Apart from being a significant source of revenue, cigarette taxes have been promoted as “sin” taxes to discourage consumption, either because of externalities caused by smoking (increased costs for third-party health payers and other health consequences) or out of paternalism. According to the U.S. Centers for Disease Control and Prevention, smoking-related illness in the U.S. costs more than $300 billion each year, including: (1) nearly $170 billion for direct medical care for adults and (2) more than $156 billion in lost productivity, including $5.6 billion in lost productivity due to secondhand smoke exposure.

The CDC’s cost estimates raise important questions regarding who bears the burden of smoking related illness. Much of the cost is borne by private insurance, which charges steeper premiums for customers who smoke. In addition, the CDC estimates reflect costs imposed by people who have smoked for decades—many of whom have now quit. A proper accounting of the costs vis-à-vis tax policy would measure the incremental discounted costs imposed by today’s smokers.
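In symbols (a standard present-value calculation, where c_t is the incremental cost imposed by today’s smokers in year t and r is the discount rate):

$$PV = \sum_{t=0}^{T} \frac{c_t}{(1+r)^{t}}$$

Costs incurred decades ago by smokers who have since quit belong to the historical tally, not to the present-value figure relevant for setting today’s tax.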

According to Levy et al. (2017), a strategy of replacing cigarette smoking with e-cigarettes would yield substantial life-year gains, even under pessimistic assumptions regarding cessation, initiation, and relative harm. Increased longevity does not simply extend the individual’s years of retirement and reliance on government transfers; it also means greater work effort and productivity, together with higher tax payments on consumption.

Vapor products that cause less direct harm or have lower externalities (e.g., the absence of “secondhand smoke”) should be subject to a lower “sin” tax. A cost-benefit analysis of the desired excise tax rate on vapor products would count reduced health spending as an offset against the excise tax revenue forgone by applying a lower rate to those products.

State and local governments in the U.S. collect more than $18 billion a year in tobacco taxes. While some jurisdictions earmark a portion of tobacco taxes for prevention and cessation efforts, in practice most tobacco taxes are treated by policymakers as general revenues to be spent in whatever way the legislative body determines.

In the long-run, the goals of reducing or eliminating consumption of the taxed good and generating revenues are in conflict. If the tax is successful in reducing consumption, it falls short in generating revenue. Similarly, if the tax succeeds in generating revenues, it falls short in reducing or eliminating consumption.
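The conflict can be seen in one line of algebra (a stylized sketch, with t the per-unit tax and Q(t) the quantity consumed at that tax):

$$R(t) = t\,Q(t), \qquad \frac{dR}{dt} = Q(t) + t\,\frac{dQ}{dt}$$

Because dQ/dt < 0, every increment of tax that succeeds in cutting consumption also shrinks the base on which revenue is collected; once t|dQ/dt| exceeds Q(t), a higher tax reduces revenue outright.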

Substitutability is another consideration. An increase in the tax on spirits will result in an increase in beer and wine purchases. A high toll on a road will divert traffic to untolled streets that may not be designed for increased traffic volumes. Evidence from the U.S. and Europe indicates that high or rising tobacco taxes in one jurisdiction will result in increased sales in bordering jurisdictions, as well as increased illegal cross-jurisdiction sales or smuggling.

As of March 2018, eight U.S. states and the District of Columbia had enacted taxes on e-cigarettes:

  • California: 65.08% of wholesale price
  • Delaware: $0.05 per milliliter
  • District of Columbia: 70% of wholesale price
  • Kansas: $0.05 per milliliter
  • Louisiana: $0.05 per milliliter
  • Minnesota: 95% of wholesale price
  • North Carolina: $0.05 per milliliter
  • Pennsylvania: 40% of wholesale price
  • West Virginia: $0.075 per milliliter

In addition, 22 countries outside of the U.S. have introduced taxation on e-cigarettes.

The effects of different types of taxation on usage, and thus on economic outcomes, vary. Research to date finds a wide range of own-price and cross-price elasticities for e-cigarettes. While most researchers conclude that the demand for e-cigarettes is more elastic than the demand for combustible cigarettes, some studies find inelastic demand and some find highly elastic demand. Economic theory would point to e-cigarettes as a substitute for combustible cigarettes. Some empirical research supports this hypothesis, while other research concludes the two products are complements.
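For reference, the parameters at issue are (standard definitions, with q_e the quantity of e-cigarettes demanded, p_e their price, and p_c the price of combustible cigarettes):

$$\varepsilon_{\text{own}} = \frac{\partial q_e}{\partial p_e}\cdot\frac{p_e}{q_e}, \qquad \varepsilon_{\text{cross}} = \frac{\partial q_e}{\partial p_c}\cdot\frac{p_c}{q_e}$$

A positive cross-price elasticity marks the products as substitutes (a cigarette price increase raises e-cigarette sales); a negative one marks them as complements. As noted above, the published studies disagree even about this sign.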

In addition to e-cigarettes, little cigars and smokeless tobacco are also potential substitutes for cigarettes. The results from Zheng et al. (2016) suggest increases in sales of little cigars and smokeless tobacco products would account for about 14 percent of the decline in cigarette sales associated with a hypothetical 10 percent increase in the price of cigarettes. On the other hand, another study using a seemingly identical data set (Zheng et al., 2017) suggests that sales of little cigars and smokeless tobacco would decrease in the face of an increase in cigarette prices.

The wide range of estimated elasticities calls into question the reliability of published estimates. As a nascent area of research, the policy debate would benefit from additional research that involves larger samples with better statistical power, reflects the dynamic nature of this relatively new product category, and accounts for the wide variety of vapor products.

More importantly, demand and supply conditions for e-cigarettes, heated tobacco products, and other electronic nicotine delivery products have been changing rapidly over the past few years—and are expected to change rapidly for the foreseeable future. Thus, estimates of demand parameters, such as own-price and cross-price elasticities, are almost certain to vary over time as users gain knowledge and experience and as new products and suppliers enter the market.

Because the market for e-cigarettes and other vapor products is small and developing, the tax-bearing capacity of these new product segments is untested and unknown. Moreover, current tax levels and prices could also be misleading given the relatively sparse empirical data, in which case more data points and further evaluation are needed. One can argue, given the slow growth rates of these segments in many markets, that current prices of e-cigarettes and heat-not-burn products are relatively high when compared to cigarettes, and that a new tax or an increase in an existing tax would slow the segments’ growth or even lead to a decline.

Separately, the challenges in assessing a tax on electronic nicotine delivery products indicate the costs of collecting the tax, especially an excise tax, may be much higher than for similar taxes levied on combustible cigarettes. In addition, as discussed above, heavy taxation of this relatively new industry would likely stifle innovation in a way that is contrary to the goal of harm reduction.

Principles of harm reduction recognize that every proposal has uncertain outcomes as well as potential spillovers and unforeseen consequences. Nevertheless, the basic principle of harm reduction is a focus on safer rather than safe. Policymakers must make their decisions weighing the expected benefits and expected costs. With such high risks and costs associated with cigarette and other combustible use, taxes and regulations must be developed in an environment of uncertainty and with an eye toward a net reduction in harm, rather than an unattainable goal of zero harm.

Read the full report.