The lame duck is not yet dead, and the Federal Trade Commission (FTC) is supposed to be an independent agency. Work continues. The Commission has announced a partly open oral argument in the Illumina-Grail matter. That is, parts of the argument will be open to the public, via webcast, and parts won’t. This is what’s known as translucency in government.
Enquiring minds: I have several questions about Illumina-Grail. First, for anyone reading this column, am I the only one who cannot think of the case without thinking of Monty Python’s grail-shaped beacon? Asking for a friend who worries about me.
Second, why seek to unwind this merger? My ICLE colleagues Geoff Manne and Gus Hurwitz are members of a distinguished group of law & economics scholars who filed a motion for leave to file an amicus brief in the matter. They question the merits of the case on a number of grounds.
Pertinent, not dispositive: this is a vertical merger. Certainly, it’s possible for vertical mergers to harm competition, but theory suggests that they entail at least some efficiencies, and the empirical evidence from Francine Lafontaine and others tends to suggest that most have been beneficial for firms and consumers alike. One might wonder about the extent to which this case is built on analysis of the facts and circumstances rather than on Chair Lina Khan’s well-publicized antipathy to vertical mergers.
There’s also a question of whether the FTC’s likely foreclosure argument is all that likely. Illumina, which created Grail and had retained a substantial interest in it all along, would have strong commercial incentives against barring Grail’s future competitors from its platform. Moreover, Illumina made an open offer—contractually binding—to continue providing access for 12 years to its NGS platform and other products, on terms substantially similar to those available pre-merger. That would seem to undercut the possibility of foreclosure. Complaint counsel discounts this as a remedy (with behavioral remedies disfavored), but it is relatively straightforward and not really a remedy at all, with terms that both private parties and the FTC might enforce. Thom Lambert and Jonathan Barnett both have interesting posts on the matter.
This is about a future market and potential (presumed) competitors. And it’s an area of biologics commerce where the deep pockets and regulatory sophistication necessary for development and approval frequently militate in favor of acquisition of a small innovator by a larger, established firm. As I noted in a prior column, “[p]otential competition cases are viable given the right facts, and in areas where good grounds to predict significant entry are well-established.” It can be hard to second-guess rule-of-reason cases from the outside, but there are reasons to think this is one of those matters where the preconditions to a strong potential-competition argument are absent, while the merger-related efficiencies are real.
What else is going on at the FTC? Law360 reports on a staff brief urging the Commission not to pitch a new standard of review in Altria-Juul, on what look to be sensible grounds, independent of the merits of the Section 1 case. The Commission had asked to be briefed on the possibility of switching to a claim of a per se violation or, in the alternative, a quick look. The staff brief recommends maintaining the rule-of-reason approach, the same approach the Commission’s ALJ found unpersuasive in dismissing the Commission’s case, which will now be heard by the Commission itself. I have no non-public information on the matter. There’s a question of whether this signals any real tension between the staff’s analysis and the Commission’s preferred approach, or simply reflects the Commission asking questions about pushing boundaries and the staff providing good counsel. I don’t know, but it could be business as usual.
And just this week, FTC announced that it is bringing a case to block Microsoft’s acquisition of Activision. More on that to follow.
What’s pressing is not so clear. The Commission announced the agenda for a Dec. 14 open meeting. On it is a vote on regulatory review of the “green guides,” which provide guidance on environmental-marketing claims. But there’s nothing further on the various ANPRs announced in September, or about rulemaking that the Chair has hinted at for noncompete clauses in employment contracts. And, of course, we’re still waiting for merger guidelines to replace the ones that have been withdrawn—likely joint FTC/DOJ guidelines that will likely range over both horizontal and vertical mergers.
There’s the Altria matter, Meta, Meta-Within, the forthcoming Supreme Court opinion in Axon, etc. The FTC’s request for an injunction in Meta-Within will be heard in federal district court in California over the next couple of weeks. It’s a novel (read, speculative) complaint. I had a few paragraphs on Meta-Within in my first roundup column; Gus Hurwitz covered it, as well. We shall see.
Wandering up Pennsylvania Avenue onto the Hill, various bills seem not so much lame ducks as dead ones. But perhaps one or more is not dead yet. The Journalism Competition and Preservation Act (JCPA) might be one such bill, its conspicuous defects notwithstanding. “Might be.” First, a bit of FTC history. Way back in 2010, the FTC held a series of workshops on the Future of Journalism. There were many interesting issues there, if no obvious room for antitrust. I reveal no secrets in saying THOSE WORKSHOPS WERE NOT THE STAFF’S IDEA. We failed to recommend any intervention, although the staff did publish a clarification of its discussion draft:
The FTC has not endorsed the idea of making any policy recommendation or recommended any of the proposals in the discussion draft
My own take at the time: many newspapers were struggling, and that was unfortunate, but much of the struggle had to do with the papers’ loss of local print-advertising monopolies, which tended to offer high advertising prices but not high quality. Remember the price of classified ads? For decades, many of the holders of market power happened to turn large portions of their rents over to their news divisions. Then came the internet, then Craigslist, etc., etc., and down went the rents. Antitrust intervention seemed no answer at all.
Back to the bill. In brief, as currently drafted, the JCPA would permit certain “digital journalism providers” to form cartels to negotiate prices with large online platforms, and to engage in group boycotts, without being liable under the federal antitrust laws, at least for four years. Dirk Auer and Ben Sperry have an overview here.
This would be an exemption for some sources of journalism, but not all, and its benefits would not be equally distributed. I am a paying consumer of digital (and even print) journalism. On the one hand, I enjoy it when others subsidize my preferences. On the other, I’m not sure why they should. As I said in a prior column, “antitrust exemptions help the special interests receiving them but not a living soul besides those special interests. That’s it, full stop.”
Moreover, as Brian Albrecht points out, the bill’s mandatory final-offer arbitration provision is likely to lead to a form of price regulation.
England v. France on Saturday. Allez les bleus or we few, we happy few? Cheers.
Welcome to the FTC UMC Roundup, our new weekly update of news and events relating to antitrust and, more specifically, to the Federal Trade Commission’s (FTC) newfound interest in “revitalizing” the field. Each week we will bring you a brief recap of the week that was and a preview of the week to come. All with a bit of commentary and news of interest to regular readers of Truth on the Market mixed in.
This week’s headline? Of course it’s that Alvaro Bedoya has been confirmed as the FTC’s fifth commissioner—notably breaking the commission’s 2-2 tie between Democrats and Republicans and giving FTC Chair Lina Khan the majority she has been lacking. Politico and Gibson Dunn both offer some thoughts on what to expect next—though none of the predictions are surprising: more aggressive merger review and litigation; UMC rulemakings on a range of topics, including labor, right-to-repair, and pharmaceuticals; and privacy-related consumer protection. The real question is how quickly and aggressively the FTC will implement this agenda. Will we see a flurry of rulemakings in the next week, or will they be rolled out over a period of months or years? Will the FTC risk major litigation questions with a “go big or go home” attitude, or will it take a more incrementalist approach to boiling the frog?
Much of the rest of this week’s action happened on the Hill. Khan, joined by Securities and Exchange Commission (SEC) Chair Gary Gensler, made the regular trip to Congress to ask for a bigger budget to support more hires. (FTC, Law360) Sen. Mike Lee (R-Utah) asked for unanimous consent on his State Antitrust Enforcement Venue Act, but met resistance from Sen. Amy Klobuchar (D-Minn.), who wants that bill paired with her own American Innovation and Choice Online Act. This follows reports that Senate Majority Leader Chuck Schumer (D-N.Y.) is pushing Klobuchar to get support in line for both AICOA and the Open App Markets Act to be brought to the Senate floor. Of course, if they had the needed support, we probably wouldn’t be talking so much about whether they have the needed support.
Questions about the climate at the FTC continue following release of the Office of Personnel Management’s (OPM) Federal Employee Viewpoint Survey. Sen. Roger Wicker (R-Miss.) wants to know what has caused staff satisfaction at the agency to fall precipitously. And former senior FTC staffer Eileen Harrington issued a stern rebuke of the agency at this week’s open meeting, saying of the relationship between leadership and staff that: “The FTC is not a failed agency but it’s on the road to becoming one. This is a crisis.”
Perhaps the only thing experiencing greater inflation than the dollar is interest in the FTC doing something about inflation. Alden Abbott and Andrew Mercado remind us that these calls are misplaced. But that won’t stop politicians from demanding the FTC do something about high gas prices. Or beef production. Or utilities. Or baby formula.
A little further afield, the 5th U.S. Circuit Court of Appeals issued an opinion this week in a case involving SEC administrative-law judges that took broad issue with them on delegation, due process, and “take care” grounds. It may come as a surprise that this has led to much overwrought consternation that the opinion would dismantle the administrative state. But given that it is often the case that the SEC and FTC face similar constitutional issues (recall that Kokesh v. SEC was the precursor to AMG Capital), the 5th Circuit case could portend future problems for FTC adjudication. Add this to the queue with the Supreme Court’s pending review of whether federal district courts can consider constitutional challenges to an agency’s structure. The court was already scheduled to consider this question with respect to the FTC this next term in Axon, and agreed this week to hear a similar SEC-focused case next term as well.
Some Navel-Gazing News!
Congratulations to recent University of Michigan Law School graduate Kacyn Fujii, winner of our New Voices competition for contributions to our recent symposium on FTC UMC Rulemaking (hey, this post is actually part of that symposium, as well!). Kacyn’s contribution looked at the statutory basis for FTC UMC rulemaking authority and evaluated the use of such authority as a way to address problematic use of non-compete clauses.
And, one for the academics (and others who enjoy writing academic articles): you might be interested in this call for proposals for a research roundtable on Market Structuring Regulation that the International Center for Law & Economics will host in September. If you are interested in writing on topics that include conglomerate business models, market-structuring regulation, vertical integration, or other topics relating to the regulation and economics of contemporary markets, we hope to hear from you!
In recent years, a growing chorus of voices has argued that existing merger rules fail to apprehend competitively significant mergers, either because they fall below existing merger-filing thresholds or because they affect innovation in ways that are purportedly ignored.
These fears are particularly acute in the pharmaceutical and tech industries, where several high-profile academic articles and reports claim to have identified important gaps in current merger-enforcement rules, particularly with respect to acquisitions involving nascent and potential competitors (here, here, and here, among many others).
Such fears have led activists, lawmakers, and enforcers to call for tougher rules, including the introduction of more stringent merger-filing thresholds and other substantive changes, such as the inversion of the burden of proof when authorities review mergers and acquisitions involving digital platforms.
However, as we discuss in a recent working paper—forthcoming in the Missouri Law Review and available on SSRN—these proposals tend to overlook the important tradeoffs that would ensue from attempts to decrease the number of false positives under existing merger rules and thresholds.
The paper draws from two key strands of economic literature that are routinely overlooked (or summarily dismissed) by critics of the status quo.
For a start, antitrust enforcement is not costless. In the case of merger enforcement, not only is it expensive for agencies to detect anticompetitive deals but, more importantly, overbearing rules may deter beneficial merger activity that creates value for consumers.
Second, critics tend to overlook the possibility that incumbents’ superior managerial or other capabilities (i.e., what made them successful in the first place) make them the ideal acquisition partners for entrepreneurs and startup investors looking to sell.
The result is a body of economic literature that focuses almost entirely on hypothetical social costs, while ignoring the redeeming benefits of corporate acquisitions, as well as the social cost of enforcement.
Kill Zones
One of the most significant allegations leveled against large tech firms is that their very presence in a market may hinder investments, entry, and innovation, creating what some have called a “kill zone.” The strongest expression in the economic literature of this idea of a kill zone stems from a working paper by Sai Krishna Kamepalli, Raghuram Rajan, and Luigi Zingales.
The paper makes two important claims, one theoretical and one empirical. From a theoretical standpoint, the authors argue that the prospect of an acquisition by a dominant platform deters consumers from joining rival platforms, and that this, in turn, hampers the growth of these rivals. The authors then test a similar hypothesis empirically. They find that acquisitions by a dominant platform—such as Google or Facebook—decrease investment levels and venture capital deals in markets that are “similar” to that of the target firm.
But both findings are problematic. For a start, Zingales and his co-authors’ theoretical model is premised on questionable assumptions about the way in which competition develops in the digital space. The first is that early adopters of new platforms—called “techies” in the authors’ parlance—face high switching costs because of their desire to learn these platforms in detail. As an initial matter, it would appear facially contradictory that “techies” are both the group with the highest switching costs and the group that switches the most. The authors further assume that “techies” would incur lower adoption costs if they remained on the incumbent platform and waited for the rival platform to be acquired.
Unfortunately, while these key behavioral assumptions drive the results of the theoretical model, the paper presents no evidence to support their presence in real-world settings. In that sense, the authors commit the same error as previous theoretical work concerning externalities, which has tended to overestimate their frequency.
Second, the empirical analysis put forward in the paper is unreliable for policymaking purposes. The authors notably find that:
[N]ormalized VC investments in start-ups in the same space as the company acquired by Google and Facebook drop by over 40% and the number of deals falls by over 20% in the three years following an acquisition.
However, the results of this study are derived from the analysis of only nine transactions. The study also fails to clearly show that firms in the treatment and control groups are qualitatively similar. In a nutshell, the study compares industry acquisitions exceeding $500 million to Facebook’s and Google’s acquisitions that exceed that amount. This does not tell us whether the mergers in both groups involved target companies with similar valuations or similar levels of maturity. This does not necessarily invalidate the results, but it does suggest that policymakers should be circumspect in interpreting those results.
Finally, the paper presents no evidence that existing antitrust regimes fail to achieve an optimal error-cost balance. The central problem is that the paper has indeterminate welfare implications. For instance, as the authors note, the declines in investment in spaces adjacent to the incumbent platforms occurred during a time of rapidly rising venture capital investment, both in terms of the number of deals and dollars invested. It is entirely plausible that venture capital merely shifted to other sectors.
Put differently, on its own terms, the evidence merely suggests that acquisitions by Google and Facebook affected the direction of innovation, not its overall rate. And there is little to suggest that this shift was suboptimal, from a welfare standpoint.
In short, as the authors themselves conclude: “[i]t would be premature to draw any policy conclusion on antitrust enforcement based solely on our model and our limited evidence.”
Mergers and Potential Competition
Scholars have also posited more direct effects from acquisitions of startups or nascent companies by incumbent firms in technology markets.
Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.
However, these antitrust theories of harm suffer from several important flaws. They rest upon several restrictive assumptions that are not certain to occur in real-world settings. Most are premised on the notion that, in a given market, monopoly profits generally exceed joint duopoly profits. This allegedly makes it profitable, and mutually advantageous, for an incumbent to protect its monopoly position by preemptively acquiring potential rivals.
Accordingly, under these theories, anticompetitive mergers are only possible when the acquired rival could effectively challenge the incumbent. But these are, of course, only potential challengers; there is no guarantee that any one of them could or would mount a viable competitive threat.
Less obviously, it must be the case that the rival can hope to share only duopoly profits, as opposed to completely overthrowing the incumbent or surpassing it with a significantly larger share of the market. Where competition is “for the market” itself, monopoly maintenance would fail to explain a rival’s decision to sell. Because there would be no asymmetry between the expected profits of the incumbent and the rival, monopoly maintenance alone would not give rise to mutually advantageous deals.
Second, potential competition does not always increase consumer welfare. Indeed, while the presence of potential competitors might increase price competition, it can also have supply-side effects that cut in the opposite direction.
For example, as Nobel laureate Joseph Stiglitz observed, a monopolist threatened by potential competition may invest in socially wasteful R&D efforts or entry-deterrence mechanisms, and it may operate at below-optimal scale in anticipation of future competitive entry.
There are also pragmatic objections. Analyzing a merger’s effect on potential competition would compel antitrust authorities and courts to make increasingly speculative assessments concerning the counterfactual setting of proposed acquisitions.
In simple terms, it is far easier to determine whether a merger between McDonald’s and Burger King would lead to increased hamburger prices in the short run than it is to determine whether a gaming platform like Steam or the Epic Games Store might someday compete with video-streaming or music-subscription platforms like Netflix or Spotify. It is not that the above models are necessarily wrong, but rather that applying them to practical cases would require antitrust enforcers to estimate mostly unknowable factors.
Finally, the real test for regulators is not just whether they can identify possibly anticompetitive mergers, but whether they can do so in a cost-effective manner. Whether it is desirable to implement a given legal test is not simply a function of its accuracy, the cost to administer it, and the respective costs of false positives and false negatives. It also critically depends on how prevalent the conduct is that adjudicators would be seeking to foreclose.
Consider two hypothetical settings. Imagine there are 10,000 tech mergers in a given year, of which either 1,000 or 2,500 are anticompetitive (the remainder are procompetitive or competitively neutral). Suppose that authorities can either attempt to identify anticompetitive mergers with 75% accuracy, or perform no test at all—i.e., letting all mergers go through unchallenged.
If there are 1,000 anticompetitive mergers, applying the test would result in 7,500 correct decisions and 2,500 incorrect ones (2,250 false positives and 250 false negatives). Doing nothing would lead to 9,000 correct decisions and 1,000 false negatives. If the number of anticompetitive deals were 2,500, applying the test would lead to the same number of incorrect decisions as not applying it (1,875 false positives and 625 false negatives, versus 2,500 false negatives). The advantage would tilt toward applying the test if anticompetitive mergers were even more widespread.
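For readers who want to see the arithmetic laid out explicitly, here is a minimal sketch of the hypothetical above. The figures (10,000 mergers, a 75%-accurate screen, 1,000 or 2,500 harmful deals) are simply the values used in the example, and the function names are ours, for illustration only.

```python
# A minimal sketch of the error-cost hypothetical above. All figures come from
# the example in the text; the function names are purely illustrative.

def screening_errors(total, harmful, accuracy):
    """Errors when an imperfect screen is applied to every merger."""
    benign = total - harmful
    false_positives = benign * (1 - accuracy)   # benign deals wrongly blocked
    false_negatives = harmful * (1 - accuracy)  # harmful deals wrongly cleared
    return false_positives, false_negatives

def laissez_faire_errors(total, harmful):
    """Errors when every merger is simply allowed to proceed."""
    return 0, harmful  # no false positives, but every harmful deal goes through

for harmful in (1_000, 2_500):
    fp, fn = screening_errors(10_000, harmful, 0.75)
    _, fn0 = laissez_faire_errors(10_000, harmful)
    print(f"{harmful} harmful deals: screen -> {fp + fn:.0f} errors "
          f"({fp:.0f} FP, {fn:.0f} FN); no screen -> {fn0} errors")
```

Running the sketch reproduces the counts in the text: 2,500 errors under the screen in both scenarios, against 1,000 or 2,500 false negatives from doing nothing, which is why the comparison tips toward screening only as harmful deals become more common.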
This hypothetical example holds a simple lesson for policymakers: the rarer the conduct that they are attempting to identify, the more accurate their identification method must be, and the more costly false negatives must be relative to false positives.
As discussed below, current empirical evidence does not suggest that anticompetitive mergers of this sort are particularly widespread, nor does it offer accurate heuristics to detect the ones that are. Finally, there is little sense that the cost of false negatives significantly outweighs that of false positives. In short, there is currently little evidence to suggest that tougher enforcement would benefit consumers.
Killer Acquisitions
Killer acquisitions are, effectively, a subset of the “potential competitor” mergers discussed in the previous section. As defined by Colleen Cunningham, Florian Ederer, and Song Ma, they are those deals where “an incumbent firm may acquire an innovative target and terminate the development of the target’s innovations to preempt future competition.”
Cunningham, Ederer, and Ma’s highly influential paper on killer acquisitions has been responsible for much of the recent renewed interest in the effect that mergers exert on innovation. The authors studied thousands of pharmaceutical mergers and concluded that between 5.3% and 7.4% of them were killer acquisitions. As they write:
[W]e empirically compare development probabilities of overlapping acquisitions, which are, in our theory, motivated by a mix of killer and development intentions, and non-overlapping acquisitions, which are motivated only by development intentions. We find an increase in acquisition probability and a decrease in post-acquisition development for overlapping acquisitions and interpret that as evidence for killer acquisitions. […]
[W]e find that projects acquired by an incumbent with an overlapping drug are 23.4% less likely to have continued development activity compared to drugs acquired by non-overlapping incumbents.
From a policy standpoint, the question is what weight antitrust authorities, courts, and legislators should give to these findings. Stated differently, does the paper provide sufficient evidence to warrant reform of existing merger-filing thresholds and review standards? There are several factors counseling that policymakers should proceed with caution.
To start, the study’s industry-specific methodology means it may not be a useful guide to understanding acquisitions in other industries, such as the tech sector.
Second, even if one assumes that the findings of Cunningham, et al., are correct and apply with equal force in the tech sector (as some official reports have), it remains unclear whether the 5.3–7.4% of mergers they describe warrant a departure from the status quo.
Antitrust enforcers operate under uncertainty. The critical policy question is thus whether this subset of anticompetitive deals can be identified ex-ante. If not, is there a heuristic that would enable enforcers to identify more of these anticompetitive deals without producing excessive false positives?
The authors focus on the effect that overlapping R&D pipelines have on project discontinuations. In the case of non-overlapping mergers, acquired projects continue 17.5% of the time, while this number is 13.4% when there are overlapping pipelines. The authors argue that this gap is evidence of killer acquisitions. But it misses the bigger picture: under the authors’ own numbers and definition of a “killer acquisition,” a vast majority of overlapping acquisitions are perfectly benign; prohibiting them would thus have important social costs.
Third, there are several problems with describing this kind of behavior as harmful. Indeed, Cunningham, et al., acknowledge that such acquisitions could increase innovation by boosting the returns to innovation.
And even if one ignores incentives to innovate, product discontinuations can improve consumer welfare. This question ultimately boils down to identifying the counterfactual to a merger. As John Yun writes:
For instance, an acquisition that results in a discontinued product is not per se evidence of either consumer harm or benefit. The answer involves comparing the counterfactual world without the acquisition with the world with the acquisition. The comparison includes potential efficiencies that were gained from the acquisition, including integration of intellectual property, the reduction of transaction costs, economies of scope, and better allocation of skilled labor.
One of the reasons R&D project discontinuation may be beneficial is simply cost savings. R&D is expensive. Pharmaceutical firms spend up to 27.8% of their annual revenue on R&D. Developing a new drug has an estimated median cost of $985.3 million. Cost-cutting—notably as it concerns R&D—is thus a critical part of pharmaceutical (as well as tech) companies’ businesses. As a report by McKinsey concludes:
The recent boom in M&A in the pharma industry is partly the result of attempts to address short-term productivity challenges. An acquiring or merging company typically designs organization-wide integration programs to capture synergies, especially in costs. Such programs usually take up to three years to complete and deliver results.
Maximizing the efficiency of production labor and equipment is one important way top-quartile drugmakers break out of the pack. Their rates of operational-equipment effectiveness are more than twice those of bottom-quartile companies (Exhibit 1), and when we looked closely we found that processes account for two-thirds of the difference.
In short, pharmaceutical companies do not just compete along innovation-related parameters, though these are obviously important, but also on more traditional grounds such as cost-rationalization. Accordingly, as the above reports suggest, pharmaceutical mergers are often about applying an incumbent’s superior managerial efficiency to the acquired firm’s assets through operation of the market for corporate control.
This cost-cutting (and superior project selection) ultimately enables companies to offer lower prices, thereby benefiting consumers and increasing their incentives to invest in R&D in the first place by making successfully developed drugs more profitable.
In that sense, Henry Manne’s seminal work relating to mergers and the market for corporate control sheds at least as much light on pharmaceutical (and tech) mergers as the killer acquisitions literature. And yet, it is hardly ever mentioned in modern economic literature on this topic.
While Colleen Cunningham and her co-authors do not entirely ignore these considerations, as we discuss in our paper, their arguments for dismissing them are far from watertight.
A natural extension of the killer acquisitions work is to question whether mergers of this sort also take place in the tech industry. Interest in this question is notably driven by the central role that digital markets currently occupy in competition-policy discussion, but also by the significant number of startup acquisitions that take place in the tech industry. However, existing studies provide scant evidence that killer acquisitions are a common occurrence in these markets.
This is not surprising. Unlike in the pharmaceutical industry—where drugs need to go through a lengthy and visible regulatory pipeline before they can be sold—incumbents in digital industries will likely struggle to identify their closest rivals and prevent firms from rapidly pivoting to seize new commercial opportunities. As a result, the basic conditions for killer acquisitions to take place (i.e., firms knowing they are in a position to share monopoly profits) are less likely to be present; it also would be harder to design research methods to detect these mergers.
The empirical literature on killer acquisitions in the tech sector is still in its infancy. But, as things stand, no study directly examines whether killer acquisitions actually take place in digital industries (i.e., whether post-merger project discontinuations are more common in overlapping than non-overlapping tech mergers). This is notably the case for studies by Axel Gautier & Joe Lamesch, and Elena Argentesi and her co-authors. Instead, these studies merely show that product discontinuations are common after an acquisition by a big tech company.
To summarize, while studies of this sort might suggest that the clearance of certain mergers might not have been optimal, it is hardly a sufficient basis on which to argue that enforcement should be tightened.
The reason for this is simple. The fact that some anticompetitive mergers may have escaped scrutiny and/or condemnation is never a sufficient basis to tighten rules. For that, it is also necessary to factor in the administrative costs of increased enforcement, as well as potential false convictions to which it might give rise. As things stand, economic research on killer acquisitions in the tech sector does not warrant tougher antitrust enforcement, though it does show the need for further empirical research on the topic.
Conclusion
Many proposed merger-enforcement reforms risk throwing the baby out with the bathwater. Mergers are largely beneficial to society (here, here and here); anticompetitive ones are rare; and there is little way, at the margin, to tell good from bad. To put it mildly, there is a precious baby that needs to be preserved and relatively little bathwater to throw out.
Take the fulcrum of policy debates that is the pharmaceutical industry. It is not hard to point to pharmaceutical mergers (or long-term agreements) that have revolutionized patient outcomes. Most recently, Pfizer and BioNTech’s efforts to successfully market an mRNA vaccine against COVID-19 offer a case in point.
The deal struck by both firms could naïvely be construed as bearing hallmarks of a killer acquisition or an anticompetitive agreement (long-term agreements can easily fall into either of these categories). Pfizer was a powerful incumbent in the vaccine industry; BioNTech threatened to disrupt the industry with new technology; and the deal likely caused Pfizer to forgo some independent R&D efforts. And yet, it also led to the first approved COVID-19 vaccine and groundbreaking advances in vaccine technology.
Of course, the counterfactual is unclear, and the market might be more competitive absent the deal, just as there might be only one approved mRNA vaccine today instead of two—we simply do not know. More importantly, this counterfactual was even less knowable at the time of the deal. And much the same could be said about countless other pharmaceutical mergers.
The key policy question is how authorities should handle this uncertainty. Critics of the status quo argue that current rules and thresholds leave certain anticompetitive deals unchallenged. But these calls for tougher enforcement fail to satisfy the requirements of the error-cost framework. Critics have so far failed to show that, on balance, mergers harm social welfare—even overlapping ones or mergers between potential competitors—just as they have yet to suggest alternative institutional arrangements that would improve social welfare.
In other words, they mistakenly analyze purported false negatives of merger-enforcement regimes in isolation. In doing so, they ignore how measures that aim to reduce such judicial errors may lead to other errors, as well as higher enforcement costs. In short, they paint a world where policy decisions involve facile tradeoffs, and this undermines their policy recommendations.
Given these significant limitations, this body of academic research should be met with an appropriate degree of caution. For all the criticism it has faced, the current merger-review system is mostly a resounding success. It is administrable, predictable, and timely, and it avoids the vast majority of judicial errors: even its critics concede that false negatives make up only a tiny fraction of decisions. Policymakers must decide whether the benefits of catching the very few arguably anticompetitive mergers that currently escape prosecution outweigh the significant costs that would be required to achieve this goal. There is currently little evidence to suggest that this is, indeed, the case.
U.S. and European competition laws diverge in numerous ways that have important real-world effects. Understanding these differences is vital, particularly as lawmakers in the United States, and the rest of the world, consider adopting a more “European” approach to competition.
In broad terms, the European approach is more centralized and political. The European Commission’s Directorate General for Competition (DG Comp) has significant de facto discretion over how the law is enforced. This contrasts with the common law approach of the United States, in which courts elaborate upon open-ended statutes through an iterative process of case law. In other words, the European system was built from the top down, while U.S. antitrust relies on a bottom-up approach, derived from arguments made by litigants (including the government antitrust agencies) and defendants (usually businesses).
This procedural divergence has significant ramifications for substantive law. European competition law includes more provisions akin to de facto regulation. This is notably the case for the “abuse of dominance” standard, in which a “dominant” business can be prosecuted for “abusing” its position by charging high prices or refusing to deal with competitors. By contrast, the U.S. system places more emphasis on actual consumer outcomes, rather than the nature or “fairness” of an underlying practice.
The American system thus affords firms more leeway to exclude their rivals, so long as this entails superior benefits for consumers. This may make the U.S. system more hospitable to innovation, since there is no built-in regulation of conduct for innovators who acquire a successful market position fairly and through normal competition.
In this post, we discuss some key differences between the two systems—including in areas like predatory pricing and refusals to deal—as well as the discretionary power the European Commission enjoys under the European model.
Exploitative Abuses
U.S. antitrust is, by and large, unconcerned with companies charging what some might consider “excessive” prices. The late Associate Justice Antonin Scalia, writing for the Supreme Court majority in the 2003 case Verizon v. Trinko, observed that:
The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.
This contrasts with European competition-law cases, where firms may be found to have infringed competition law because they charged excessive prices. As the European Court of Justice (ECJ) held in 1978’s United Brands case: “In this case charging a price which is excessive because it has no reasonable relation to the economic value of the product supplied would be such an abuse.”
While United Brands was the EU’s foundational case for excessive pricing, and the European Commission reiterated that these allegedly exploitative abuses were possible when it published its guidance paper on abuse-of-dominance cases in 2009, the commission had for some time displayed little apparent appetite for bringing such cases. In recent years, however, both the European Commission and some national authorities have shown renewed interest in excessive-pricing cases, most notably in the pharmaceutical sector.
European competition law also penalizes so-called “margin squeeze” abuses, in which a dominant upstream supplier charges a price to distributors that is too high for them to compete effectively with that same dominant firm downstream:
[I]t is for the referring court to examine, in essence, whether the pricing practice introduced by TeliaSonera is unfair in so far as it squeezes the margins of its competitors on the retail market for broadband connection services to end users. (Konkurrensverket v TeliaSonera Sverige, 2011)
As Scalia observed in Trinko, forcing firms to charge prices that are below a market’s natural equilibrium affects firms’ incentives to enter markets, notably with innovative products and more efficient means of production. But the problem is not just one of market entry and innovation. Also relevant is the degree to which competition authorities are competent to determine the “right” prices or margins.
As Friedrich Hayek demonstrated in his influential 1945 essay The Use of Knowledge in Society, economic agents use information gleaned from prices to guide their business decisions. It is this distributed activity of thousands or millions of economic actors that enables markets to put resources to their most valuable uses, thereby leading to more efficient societies. By comparison, the efforts of central regulators to set prices and margins are necessarily inferior; there is simply no reasonable way for competition regulators to make such judgments in a consistent and reliable manner.
Given the substantial risk that investigations into purportedly excessive prices will deter market entry, such investigations should be circumscribed. But the court’s precedents, with their myopic focus on ex post prices, do not impose such constraints on the commission. The temptation to “correct” high prices—especially in the politically contentious pharmaceutical industry—may thus induce economically unjustified and ultimately deleterious intervention.
Predatory Pricing
In contrast to its hands-off treatment of high prices, U.S. antitrust law does reach predatory pricing, but only when two stringent conditions are met:
Monopolists must charge prices that are below some measure of their incremental costs; and
There must be a realistic prospect that they will be able to recoup these initial losses.
In laying out its approach to predatory pricing, the U.S. Supreme Court has identified the risk of false positives and the clear cost of such errors to consumers. It thus has particularly stressed the importance of the recoupment requirement. As the court found in 1993’s Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”
Accordingly, U.S. authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme, or that the scheme itself would effectively foreclose rivals from entering the market in the first place. Otherwise, the predator would be undercut by competitors as soon as it attempts to recoup its losses by charging supra-competitive prices.
Without the strong likelihood that a monopolist will be able to recoup lost revenue from underpricing, the overwhelming weight of economic evidence (to say nothing of simple logic) is that predatory pricing is not a rational business strategy. Thus, apparent cases of predatory pricing are most likely not, in fact, predatory; deterring or punishing them would actually harm consumers.
By contrast, the EU employs a more expansive legal standard to define predatory pricing, and almost certainly risks injuring consumers as a result. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory. Even when a firm charges prices that are between its average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of a plan to eliminate a competitor. Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.
[I]t does not follow from the case‑law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive. (France Télécom v Commission, 2009).
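To make the structural difference between the two standards concrete, the screens described above can be sketched as simple decision rules. This is a deliberately stylized simplification offered only to show where the tests diverge; the variable names and boolean structure are ours, not the courts’, and the functions are not a statement of the actual legal tests.

```python
# A stylized sketch of the two predatory-pricing screens discussed above.
# This illustrates their structure; it is not a statement of the legal tests.

def us_predatory_pricing(price, incremental_cost, recoupment_likely):
    # The U.S. standard requires BOTH below-cost pricing and a realistic
    # prospect of recouping the losses incurred during the low-price period.
    return price < incremental_cost and recoupment_likely

def eu_predatory_pricing(price, avg_variable_cost, avg_total_cost, exclusionary_plan):
    # Under the EU approach, pricing below average variable cost is presumed
    # abusive; pricing between average variable and average total cost is
    # abusive if shown to be part of a plan to eliminate a competitor.
    # Recoupment need not be demonstrated in either case.
    if price < avg_variable_cost:
        return True
    if price < avg_total_cost and exclusionary_plan:
        return True
    return False
```

The contrast is visible at a glance: the U.S. rule conjoins below-cost pricing with recoupment, while the EU rule never asks about recoupment at all.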
This aspect of the legal standard has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant Chicago School understanding of predatory pricing. Indeed, strategic predatory pricing still requires some form of recoupment, and the refutation of any convincing business justification offered in response. For example, in a 2017 piece for the Antitrust Law Journal, Steven Salop lays out the “raising rivals’ costs” analysis of predation and notes that recoupment still occurs, just at the same time as predation:
[T]he anticompetitive conditional pricing practice does not involve discrete predatory and recoupment periods, as in the case of classical predatory pricing. Instead, the recoupment occurs simultaneously with the conduct. This is because the monopolist is able to maintain its current monopoly power through the exclusionary conduct.
The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in American antitrust law serves to differentiate aggressive pricing behavior that improves consumer welfare—because it leads to overall price decreases—from predatory pricing that reduces welfare with higher prices. It is, in other words, entirely focused on the welfare of consumers.
The European approach, by contrast, reflects structuralist considerations far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could use aggressive pricing to engender more concentrated markets. It is simply presumed that these more concentrated markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the ECJ’s reasoning on this point:
[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors. (Tetra Pak v Commission, 1996).
Similarly:
[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them. (France Télécom v Commission, 2009).
In short, the European approach leaves less room to analyze the concrete effects of a given pricing scheme, leaving it more prone to false positives than the U.S. standard explicated in the Brooke Group decision. Worse still, the European approach ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory pricing standards may exert on firms that would otherwise seek to use aggressive pricing schemes to attract consumers.
Refusals to Deal
U.S. and EU antitrust law also differ greatly when it comes to refusals to deal. While the United States has limited the ability of either enforcement authorities or rivals to bring such cases, EU competition law sets a far lower threshold for liability.
As Justice Scalia wrote in Trinko:
Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. (Verizon v Trinko, 2003.)
This highlights two key features of American antitrust law with regard to refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine. Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market. Moreover, as Justice Scalia observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.
While even this is not likely the economically appropriate limitation on liability, its impetus—ensuring that liability is found only in situations where procompetitive explanations for the challenged conduct are unlikely—is completely appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.
As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.
In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must not be objective reasons that justify the refusal. Moreover, if the refusal to deal involves intellectual property, it must also prevent the appearance of a new good.
In practice, however, all of these conditions have been relaxed significantly by EU courts and the commission’s decisional practice. This is best evidenced by the lower court’s Microsoft ruling where, as John Vickers notes:
[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.
EU competition law thus shows far less concern for its potential chilling effect on firms’ investments than does U.S. antitrust law.
Vertical Restraints
There are vast differences between U.S. and EU competition law relating to vertical restraints—that is, contractual restraints between firms that operate at different levels of the production process.
On the one hand, since the Supreme Court’s Leegin ruling in 2007, even price-related vertical restraints (such as resale price maintenance (RPM), under which a manufacturer can stipulate the prices at which retailers must sell its products) are assessed under the rule of reason in the United States. Some commentators have gone so far as to say that, in practice, U.S. case law on RPM almost amounts to per se legality.
On the other hand, EU competition law continues to treat RPM as a “hardcore” restriction that is, in practice, prohibited outright. Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist, and economically grounded, principle that inter-brand competition is the appropriate framework to assess vertical restraints:
Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former. (Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, 1966).
This treatment of vertical restrictions flies in the face of longstanding mainstream economic analysis of the subject. As Patrick Rey and Jean Tirole conclude:
Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.
Unlike the EU, the U.S. Supreme Court in Leegin took account of the weight of the economic literature, and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits, writing: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.” Further, the court found that the prior approach to resale price maintenance restraints “hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”
The EU’s continued per se treatment of RPM, by contrast, strongly reflects its “precautionary principle” approach to antitrust. European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, exceedingly unlikely. The U.S. approach, which rests on likelihood rather than mere possibility, is far less likely to condemn beneficial conduct erroneously.
Political Discretion in European Competition Law
EU competition law lacks a coherent analytical framework like that found in U.S. law’s reliance on the consumer welfare standard. The EU process is driven by a number of laterally equivalent—and sometimes mutually exclusive—goals, including industrial policy and the perceived need to counteract foreign state ownership and subsidies. Such a wide array of conflicting aims produces a lack of clarity for firms seeking to conduct business. Moreover, the discretion that attends this fluid arrangement of goals yields an even larger problem.
The Microsoft case illustrates this problem well. In Microsoft, the commission could have chosen to base its decision on various potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.”
The commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains, because it determined “consumer choice” among media players was more important:
Another argument relating to reduced transaction costs consists in saying that the economies made by a tied sale of two products saves resources otherwise spent for maintaining a separate distribution system for the second product. These economies would then be passed on to customers who could save costs related to a second purchasing act, including selection and installation of the product. Irrespective of the accuracy of the assumption that distributive efficiency gains are necessarily passed on to consumers, such savings cannot possibly outweigh the distortion of competition in this case. This is because distribution costs in software licensing are insignificant; a copy of a software programme can be duplicated and distributed at no substantial effort. In contrast, the importance of consumer choice and innovation regarding applications such as media players is high. (Commission Decision No. COMP. 37792 (Microsoft)).
It may be true that tying the products in question was unnecessary. But merely dismissing this decision because distribution costs are near-zero is hardly an analytically satisfactory response. There are many more costs involved in creating and distributing complementary software than those associated with hosting and downloading. The commission also simply asserts that consumer choice among some arbitrary number of competing products is necessarily a benefit. This, too, is not necessarily true, and the decision’s implication that any marginal increase in choice is more valuable than any gains from product design or innovation is analytically incoherent.
The Court of First Instance was only too happy to give the commission a pass in its breezy analysis; it saw no objection to these findings. With little substantive reasoning to support its findings, the court fully endorsed the commission’s assessment:
As the Commission correctly observes (see paragraph 1130 above), by such an argument Microsoft is in fact claiming that the integration of Windows Media Player in Windows and the marketing of Windows in that form alone lead to the de facto standardisation of the Windows Media Player platform, which has beneficial effects on the market. Although, generally, standardisation may effectively present certain advantages, it cannot be allowed to be imposed unilaterally by an undertaking in a dominant position by means of tying.
The Court further notes that it cannot be ruled out that third parties will not want the de facto standardisation advocated by Microsoft but will prefer it if different platforms continue to compete, on the ground that that will stimulate innovation between the various platforms. (Microsoft Corp. v Commission, 2007)
Pointing to these conflicting effects of Microsoft’s bundling decision, without weighing either, is a weak basis to uphold the commission’s decision that consumer choice outweighs the benefits of standardization. Moreover, actions undertaken by other firms to enhance consumer choice at the expense of standardization are, on these terms, potentially just as problematic. The dividing line becomes solely which theory the commission prefers to pursue.
What such a practice does is vest the commission with immense discretionary power. Any given case sets up a “heads, I win; tails, you lose” situation in which defendants are easily outflanked by a commission that can change the rules of its analysis as it sees fit. Defendants can play only the cards that they are dealt. Accordingly, Microsoft could not successfully challenge a conclusion that its behavior harmed consumers’ choice by arguing that it improved consumer welfare, on net.
By selecting, in this instance, “consumer choice” as the standard to be judged, the commission was able to evade the constraints that might have been imposed by a more robust welfare standard. Thus, the commission can essentially pick and choose the objectives that best serve its interests in each case. This vastly enlarges the scope of potential antitrust liability, while also substantially decreasing the ability of firms to predict when their behavior may be viewed as problematic. It leads to what, in U.S. courts, would be regarded as an untenable risk of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms.
It will have some positive effects on economic welfare, to the extent it succeeds in lifting artificial barriers to competition that harm consumers and workers—such as allowing direct sales of hearing aids in drug stores—and helping to eliminate unnecessary occupational licensing restrictions, to name just two of several examples.
But it will likely have substantial negative effects on economic welfare as well. Many aspects of the order appear to emphasize new regulation—such as Net Neutrality requirements that may reduce investment in broadband by internet service providers—and to impose new regulatory requirements on airlines, pharmaceutical companies, digital platforms, banks, railways, shipping, and meat packers, among others. Arbitrarily imposing new rules in these areas, without a cost-benefit appraisal and a showing of market failure, threatens to reduce innovation and slow economic growth, hurting producers and consumers alike. (A careful review of specific regulatory proposals may shed greater light on the justifications for particular regulations.)
Antitrust-related proposals to challenge previously cleared mergers, and to impose new antitrust rulemaking, are likely to raise costly business uncertainty, to the detriment of businesses and consumers. They are a recipe for slower economic growth, not for vibrant competition.
An underlying problem with the order is that it is based on the false premise that competition has diminished significantly in recent decades and that “big is bad.” Economic analysis found in the February 2020 Economic Report of the President, and in other economic studies, debunks this flawed assumption.
In short, the order commits the fundamental mistake of proposing intrusive regulatory solutions for a largely nonexistent problem. Competitive issues are best handled through traditional well-accepted antitrust analysis, which centers on promoting consumer welfare and on weighing procompetitive efficiencies against anticompetitive harm on a case-by-case basis. This approach:
Deals effectively with serious competitive problems; while at the same time
Cabins error costs by taking into account all economically relevant considerations on a case-specific basis.
Rather than using an executive order to direct very specific regulatory approaches without a strong economic and factual basis, the Biden administration would have been better served by raising a host of competitive issues that merit possible study and investigation by expert agencies. Such an approach would have avoided imposing the costs of unwarranted regulation that unfortunately are likely to stem from the new order.
Finally, the order’s call for new regulations and the elimination of various existing legal policies will spawn matter-specific legal challenges, and may, in many cases, not succeed in court. This will impose unnecessary business uncertainty in addition to public and private resources wasted on litigation.
The recent launch of the international Multilateral Pharmaceutical Merger Task Force (MPMTF) is just the latest example of burgeoning cooperative efforts by leading competition agencies to promote convergence in antitrust enforcement. (See my recent paper on the globalization of antitrust, which assesses multinational cooperation and convergence initiatives in greater detail.) In what is a first, the U.S. Federal Trade Commission (FTC), the U.S. Justice Department’s (DOJ) Antitrust Division, offices of state Attorneys General, the European Commission’s Competition Directorate, Canada’s Competition Bureau, and the U.K.’s Competition and Markets Authority (CMA) jointly created the MPMTF in March 2021 “to update their approach to analyzing the effects of pharmaceutical mergers.”
To help inform its analysis, in May 2021 the MPMTF requested public comments concerning the effects of pharmaceutical mergers. The MPMTF sought submissions regarding (among other issues) seven sets of questions:
What theories of harm should enforcement agencies consider when evaluating pharmaceutical mergers, including theories of harm beyond those currently considered?
What is the full range of a pharmaceutical merger’s effects on innovation? What challenges arise when mergers involve proprietary drug discovery and manufacturing platforms?
In pharmaceutical merger review, how should we consider the risks or effects of conduct such as price-setting practices, reverse payments, and other ways in which pharmaceutical companies respond to or rely on regulatory processes?
How should we approach market definition in pharmaceutical mergers, and how is that implicated by new or evolving theories of harm?
What evidence may be relevant or necessary to assess and, if applicable, challenge a pharmaceutical merger based on any new or expanded theories of harm?
What types of remedies would work in the cases to which those theories are applied?
What factors, such as the scope of assets and characteristics of divestiture buyers, influence the likelihood and success of pharmaceutical divestitures to resolve competitive concerns?
My research assistant Andrew Mercado and I recently submitted comments for the record addressing the questions posed by the MPMTF. We concluded:
Federal merger enforcement in general and FTC pharmaceutical merger enforcement in particular have been effective in promoting competition and consumer welfare. Proposed statutory amendments to strengthen merger enforcement not only are unnecessary, but also would, if enacted, tend to undermine welfare and would thus be poor public policy. A brief analysis of seven questions propounded by the Multilateral Pharmaceutical Merger Task Force suggests that: (a) significant changes in enforcement policies are not warranted; and (b) investigators should employ sound law and economics analysis, taking full account of merger-related efficiencies, when evaluating pharmaceutical mergers.
While we leave it to interested readers to review our specific comments, this commentary highlights one key issue which we stressed—the importance of giving due weight to efficiencies (and, in particular, dynamic efficiencies) in evaluating pharma mergers. We also note an important critique by FTC Commissioner Christine Wilson of the treatment accorded merger-related efficiencies by U.S. antitrust enforcers.
Discussion
Innovation in pharmaceuticals and vaccines has immensely significant economic and social consequences, as demonstrated most recently in the handling of the COVID-19 pandemic. As such, it is particularly important that public policy not stand in the way of realizing efficiencies that promote innovation in these markets. This observation applies directly, of course, to pharmaceutical antitrust enforcement, in general, and to pharma merger enforcement, in particular.
Regrettably, however, while general merger-enforcement policy has been sound, it has somewhat undervalued merger-related efficiencies.
Although U.S. antitrust enforcers give lip service to their serious consideration of efficiencies in merger reviews, the reality appears to be quite different, as documented by Commissioner Wilson in a 2020 speech.
Wilson’s General Merger-Efficiencies Critique: According to Wilson, the combination of finding narrow markets and refusing to weigh out-of-market efficiencies has created major “legal and evidentiary hurdles a defendant must clear when seeking to prove offsetting procompetitive efficiencies.” What’s more, the “courts [have] largely continue[d] to follow the Agencies’ lead in minimizing the importance of efficiencies.” Wilson shows that “the Horizontal Merger Guidelines text and case law appear to set different standards for demonstrating harms and efficiencies,” and argues that this “asymmetric approach has the obvious potential consequence of preventing some procompetitive mergers that increase consumer welfare.” Wilson concludes on a more positive note that this problem can be addressed by having enforcers: (1) treat harms and efficiencies symmetrically; and (2) establish clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.
While our filing with the MPMTF did not discuss Wilson’s general treatment of merger efficiencies, one would hope that the task force will appropriately weigh it in its deliberations. Our filing instead briefly addressed two “informational efficiencies” that may arise in the context of pharmaceutical mergers. These include:
More Efficient Resource Reallocation: The theory of the firm teaches that mergers may be motivated by the underutilization or misallocation of assets, or the opportunity to create welfare-enhancing synergies. In the pharmaceutical industry, these synergies may come from joining complementary research and development programs, combining diverse and specialized expertise that may be leveraged for better, faster drug development and more innovation.
Enhanced R&D: Currently, much of the R&D for large pharmaceutical companies is achieved through partnerships or investment in small biotechnology and research firms specializing in a single type of therapy. Whereas large pharmaceutical companies have expertise in marketing, navigating regulation, and undertaking trials of new drugs, small, research-focused firms can achieve greater advancements in medicine with smaller budgets. Furthermore, changes within firms brought about by a merger may increase innovation.
With increases in intellectual property and proprietary data that come from the merging of two companies, smaller research firms that work with the merged entity may have access to greater pools of information, enhancing the potential for innovation without increasing spending. This change not only raises the efficiency of the research being conducted in these small firms, but also increases the probability of a breakthrough without an increase in risk.
Conclusion
U.S. pharmaceutical merger enforcement has been fairly effective in forestalling anticompetitive combinations while allowing consumer welfare-enhancing transactions to go forward. Policy in this area should remain generally the same. Enforcers should continue to base enforcement decisions on sound economic theory fully supported by case-specific facts. Enforcement agencies could benefit, however, by placing a greater emphasis on efficiencies analysis. In particular, they should treat harms and efficiencies symmetrically (as recommended by Commissioner Wilson), and fully take into account likely resource reallocation and innovation-related efficiencies.
Policy discussions about the use of personal data often have “less is more” as a background assumption: that data is overconsumed relative to some hypothetical optimal baseline. This overriding skepticism has been the backdrop for sweeping new privacy regulations, such as the California Consumer Privacy Act (CCPA) and the EU’s General Data Protection Regulation (GDPR).
More recently, as part of the broad pushback against data collection by online firms, some have begun to call for creating property rights in consumers’ personal data or for data to be treated as labor. Prominent backers of the idea include New York City mayoral candidate Andrew Yang and computer scientist Jaron Lanier.
The discussion has escaped the halls of academia and made its way into popular media. During a recent discussion with Tesla founder Elon Musk, comedian and podcast host Joe Rogan argued that Facebook is “one gigantic information-gathering business that’s decided to take all of the data that people didn’t know was valuable and sell it and make f***ing billions of dollars.” Musk appeared to agree.
The animosity exhibited toward data collection might come as a surprise to anyone who has taken Econ 101. Goods ideally end up with those who value them most. A firm finding profitable ways to repurpose unwanted scraps is just the efficient reallocation of resources. This applies as much to personal data as to literal trash.
Unfortunately, in the policy sphere, few are willing to recognize the inherent trade-off between the value of privacy, on the one hand, and the value of various goods and services that rely on consumer data, on the other. Ideally, policymakers would look to markets to find the right balance, which they often can. When the transfer of data is hardwired into an underlying transaction, parties have ample room to bargain.
But this is not always possible. In some cases, transaction costs will prevent parties from bargaining over the use of data. The question is whether such situations are so widespread as to justify the creation of data property rights, with all of the allocative inefficiencies they entail. Critics wrongly assume the solution is both to create data property rights and to allocate them to consumers. But there is no evidence to suggest that, at the margin, heightened user privacy necessarily outweighs the social benefits that new data-reliant goods and services would generate. Recent experience in the worlds of personalized medicine and the fight against COVID-19 help to illustrate this point.
Data Property Rights and Personalized Medicine
The world is on the cusp of a revolution in personalized medicine. Advances such as the improved identification of biomarkers, CRISPR genome editing, and machine learning could usher in a new wave of treatments to markedly improve health outcomes.
Personalized medicine uses information about a person’s own genes or proteins to prevent, diagnose, or treat disease. Genetic-testing companies like 23andMe or Family Tree DNA, with the large troves of genetic information they collect, could play a significant role in helping the scientific community to further medical progress in this area.
However, despite the obvious potential of personalized medicine, many of its real-world applications are still very much hypothetical. While governments could act in any number of ways to accelerate the movement’s progress, recent policy debates have instead focused more on whether to create a system of property rights covering personal genetic data.
Some raise concerns that it is pharmaceutical companies, not consumers, who will reap the monetary benefits of the personalized medicine revolution, and that advances are achieved at the expense of consumers’ and patients’ privacy. They contend that data property rights would ensure that patients earn their “fair” share of personalized medicine’s future profits.
But it’s worth examining the other side of the coin. There are few things people value more than their health. U.S. governmental agencies place the value of a single life at somewhere between $1 million and $10 million. The commonly used quality-adjusted life year metric offers valuations that range from $50,000 to upward of $300,000 per incremental year of life.
It therefore follows that the trivial sums users of genetic-testing kits might derive from a system of data property rights would likely be dwarfed by the value they would enjoy from improved medical treatments. A strong case can be made that policymakers should prioritize advancing the emergence of new treatments, rather than attempting to ensure that consumers share in the profits generated by those potential advances.
These debates drew increased attention last year, when 23andMe signed a strategic agreement with the pharmaceutical company Almirall to license the rights to an antibody that 23andMe had developed. Critics pointed out that 23andMe’s customers, whose data had presumably been used to discover the potential treatment, received no monetary benefits from the deal. Journalist Laura Spinney wrote in The Guardian newspaper:
23andMe, for example, asks its customers to waive all claims to a share of the profits arising from such research. But given those profits could be substantial—as evidenced by the interest of big pharma—shouldn’t the company be paying us for our data, rather than charging us to be tested?
In the deal’s wake, some argued that personal health data should be covered by property rights. A cardiologist quoted in Fortune magazine opined: “I strongly believe that everyone should own their medical data—and they have a right to that.” But this strong belief, however widely shared, ignores important lessons that law and economics has to teach about property rights and the role of contractual freedom.
Why Do We Have Property Rights?
Among the many important features of property rights is that they create “excludability,” the ability of economic agents to prevent third parties from using a given item. In the words of law professor Richard Epstein:
[P]roperty is not an individual conception, but is at root a social conception. The social conception is fairly and accurately portrayed, not by what it is I can do with the thing in question, but by who it is that I am entitled to exclude by virtue of my right. Possession becomes exclusive possession against the rest of the world…
Excludability helps to facilitate the trade of goods, offers incentives to create those goods in the first place, and promotes specialization throughout the economy. In short, property rights create a system of exclusion that supports creating and maintaining valuable goods, services, and ideas.
But property rights are not without drawbacks. Physical or intellectual property can lead to a suboptimal allocation of resources, namely market power (though this effect is often outweighed by increased ex ante incentives to create and innovate). Similarly, property rights can give rise to thickets that significantly increase the cost of amassing complementary pieces of property. Often cited are the historic (but contested) examples of tolling on the Rhine River or the airplane patent thicket of the early 20th century. Finally, strong property rights might also lead to holdout behavior, which can be addressed through top-down tools, like eminent domain, or private mechanisms, like contingent contracts.
In short, though property rights—whether they cover physical or information goods—can offer vast benefits, there are cases where they might be counterproductive. This is probably why, throughout history, property laws have evolved to achieve a reasonable balance between incentives to create goods and to ensure their efficient allocation and use.
Personal Health Data: What Are We Trying to Incentivize?
There are at least three critical questions we should ask about proposals to create property rights over personal health data.
What goods or behaviors would these rights incentivize or disincentivize that are currently over- or undersupplied by the market?
Are goods over- or undersupplied because of insufficient excludability?
Could these rights undermine the efficient use of personal health data?
Much of the current debate centers on data obtained from direct-to-consumer genetic-testing kits. In this context, almost by definition, firms only obtain consumers’ genetic data with their consent. In western democracies, the rights to bodily integrity and to privacy generally make it illegal to administer genetic tests against a consumer or patient’s will. This makes genetic information naturally excludable, so consumers already benefit from what is effectively a property right.
When consumers decide to use a genetic-testing kit, the terms set by the testing firm generally stipulate how their personal data will be used. 23andMe has a detailed policy to this effect, as does Family Tree DNA. In the case of 23andMe, consumers can decide whether their personal information can be used for the purpose of scientific research:
You have the choice to participate in 23andMe Research by providing your consent. … 23andMe Research may study a specific group or population, identify potential areas or targets for therapeutics development, conduct or support the development of drugs, diagnostics or devices to diagnose, predict or treat medical or other health conditions, work with public, private and/or nonprofit entities on genetic research initiatives, or otherwise create, commercialize, and apply this new knowledge to improve health care.
Because this transfer of personal information is hardwired into the provision of genetic-testing services, there is space for contractual bargaining over the allocation of this information. The right to use personal health data will go toward the party that values it most, especially if information asymmetries are weeded out by existing regulations or business practices.
Regardless of data property rights, consumers have a choice: they can purchase genetic-testing services and agree to the provider’s data policy, or they can forgo the services. The service provider cannot obtain the data without entering into an agreement with the consumer. While competition between providers will affect parties’ bargaining positions, and thus the price and terms on which these services are provided, data property rights likely will not.
So, why do consumers transfer control over their genetic data? The main reason is that genetic information is inaccessible and worthless without the addition of genetic-testing services. Consumers must pass through the bottleneck of genetic testing for their genetic data to be revealed and transformed into usable information. It therefore makes sense to transfer the information to the service provider, who is in a much stronger position to draw insights from it. From the consumer’s perspective, the data is not even truly “transferred,” as the consumer had no access to it before the genetic-testing service revealed it. The value of this genetic information is then netted out in the price consumers pay for testing kits.
If personal health data were undersupplied by consumers and patients, testing firms could sweeten the deal and offer them more in return for their data. U.S. copyright law covers original compilations of data, while EU law gives 15 years of exclusive protection to the creators of original databases. Legal protections for trade secrets could also play some role. Thus, firms have some incentives to amass valuable health datasets.
But some critics argue that health data is, in fact, oversupplied. Generally, such arguments assert that agents do not account for the negative privacy externalities suffered by third parties, such as adverse-selection problems in insurance markets. For example, Jay Pil Choi, Doh Shin Jeon, and Byung Cheol Kim argue:
Genetic tests are another example of privacy concerns due to informational externalities. Researchers have found that some subjects’ genetic information can be used to make predictions of others’ genetic disposition among the same racial or ethnic category. … Because of practical concerns about privacy and/or invidious discrimination based on genetic information, the U.S. federal government has prohibited insurance companies and employers from any misuse of information from genetic tests under the Genetic Information Nondiscrimination Act (GINA).
But if these externalities exist (most of the examples cited by scholars are hypothetical), they are likely dwarfed by the tremendous benefits that could flow from the use of personal health data. Put differently, the assertion that “excessive” data collection may create privacy harms should be weighed against the possibility that the same collection may also lead to socially valuable goods and services that produce positive externalities.
In any case, data property rights would do little to limit these potential negative externalities. Consumers and patients are already free to agree to terms that allow or prevent their data from being resold to insurers. It is not clear how data property rights would alter the picture.
Proponents of data property rights often claim they should be associated with some form of collective bargaining. The idea is that consumers might otherwise fail to receive their “fair share” of genetic-testing firms’ revenue. But what critics portray as asymmetric bargaining power might simply be the market signaling that genetic-testing services are in high demand, with room for competitors to enter the market. Shifting rents from genetic-testing services to consumers would undermine this valuable price signal and, ultimately, diminish the quality of the services.
Perhaps more importantly, to the extent that they limit the supply of genetic information—for example, because firms are forced to pay higher prices for data and thus acquire less of it—data property rights might hinder the emergence of new treatments. If genetic data is a key input to develop personalized medicines, adopting policies that, in effect, ration the supply of that data is likely misguided.
Even if policymakers do not directly put their thumb on the scale, data property rights could still harm pharmaceutical innovation. If existing privacy regulations are any guide—notably, the previously mentioned GDPR and CCPA, as well as the federal Health Insurance Portability and Accountability Act (HIPAA)—such rights might increase red tape for pharmaceutical innovators. Privacy regulations routinely limit firms’ ability to put collected data to new and previously unforeseen uses. They also limit parties’ contractual freedom when it comes to gathering consumers’ consent.
At the margin, data property rights would make it more costly for firms to amass socially valuable datasets. This would effectively move the personalized medicine space further away from a world of permissionless innovation, thus slowing down medical progress.
In short, there is little reason to believe health-care data is misallocated. Proposals to reallocate rights to such data based on idiosyncratic distributional preferences threaten to stifle innovation in the name of privacy harms that remain mostly hypothetical.
Data Property Rights and COVID-19
The trade-off between users’ privacy and the efficient use of data also has important implications for the fight against COVID-19. Since the beginning of the pandemic, several promising initiatives have been thwarted by privacy regulations and concerns about the use of personal data. This has potentially prevented policymakers, firms, and consumers from putting information to its optimal social use. High-profile issues have included:
Each of these cases may involve genuine privacy risks. But to the extent that they do, those risks must be balanced against the potential benefits to society. If privacy concerns prevent us from deploying contact tracing or green passes at scale, we should question whether the privacy benefits are worth the cost. The same is true for rules that prohibit amassing more data than is strictly necessary, as is required by data-minimization obligations included in regulations such as the GDPR.
If our initial question was instead whether the benefits of a given data-collection scheme outweighed its potential costs to privacy, incentives could be set such that competition between firms would reduce the amount of data collected—at least, where minimized data collection is, indeed, valuable to users. Yet these considerations are almost completely absent in the COVID-19-related privacy debates, as they are in the broader privacy debate. Against this backdrop, the case for personal data property rights is dubious.
Conclusion
The key question is whether policymakers should make it easier or harder for firms and public bodies to amass large sets of personal data. This requires asking whether personal data is currently under- or over-provided, and whether the additional excludability that would be created by data property rights would offset their detrimental effect on innovation.
Swaths of personal data currently lie untapped. With the proper incentive mechanisms in place, this idle data could be mobilized to develop personalized medicines and to fight the COVID-19 outbreak, among many other valuable uses. By making such data more onerous to acquire, property rights in personal data might stifle the assembly of novel datasets that could be used to build innovative products and services.
On the other hand, when dealing with diffuse and complementary data sources, transaction costs become a real issue and the initial allocation of rights can matter a great deal. In such cases, unlike the genetic-testing kits example, it is not certain that users will be able to bargain with firms, especially where their personal information is exchanged by third parties.
If optimal reallocation is unlikely, should property rights go to the person covered by the data or to the collectors (potentially subject to user opt-outs)? Proponents of data property rights assume the first option is superior. But if the goal is to produce groundbreaking new goods and services, granting rights to data collectors might be a superior solution. Ultimately, this is an empirical question.
As Richard Epstein puts it, the goal is to “minimize the sum of errors that arise from expropriation and undercompensation, where the two are inversely related.” Rather than approach the problem with the preconceived notion that initial rights should go to users, policymakers should ensure that data flows to those economic agents who can best extract information and knowledge from it.
As things stand, there is little to suggest that the trade-offs favor creating data property rights. This is not an argument for requisitioning personal information or preventing parties from transferring data as they see fit, but simply for letting markets function, unfettered by misguided public policies.
The slew of recent antitrust cases in the digital, tech, and pharmaceutical industries has brought significant attention to the investments many firms in these industries make in “intangibles,” such as software and research and development (R&D).
Intangibles are recognized to have an important effect on a company’s (and the economy’s) performance. For example, Jonathan Haskel and Stian Westlake (2017) highlight the increasingly large investments companies have been making in things like programming in-house software, organizational structures, and, yes, a firm’s stock of knowledge obtained through R&D. They also note the considerable difficulties associated with valuing both those investments and the outcomes (such as new operational procedures, a new piece of software, or a new patent) of those investments.
This difficulty in valuing intangibles has gone somewhat under the radar until relatively recently. There has been progress in valuing them at the aggregate level (see Ellen R. McGrattan and Edward C. Prescott (2008)) and in examining their effects at the level of individual sectors (see McGrattan (2020)). It remains difficult, however, to ascertain the value of the entire stock of intangibles held by an individual firm.
There is a method to estimate the value of one component of a firm’s stock of intangibles. Specifically, the “stock of knowledge obtained through research and development” is likely to form a large proportion of most firms’ intangibles. Treating R&D as a “stock” might not be the most common way to frame the subject, but it does have an intuitive appeal.
What a firm knows (i.e., its intellectual property) is an input to its production process, just like physical capital. The most direct way for firms to acquire knowledge is to conduct R&D, which adds to its “stock of knowledge,” as represented by its accumulated stock of R&D. In this way, a firm’s accumulated investment in R&D then becomes a stock of R&D that it can use in production of whatever goods and services it wants. Thankfully, there is a relatively straightforward (albeit imperfect) method to measure a firm’s stock of R&D that relies on information obtained from a company’s accounts, along with a few relatively benign assumptions.
This method (set out by Bronwyn Hall (1990, 1993)) uses a firm’s annual expenditures on R&D (a separate line item in most company accounts) in the “perpetual inventory” method to calculate a firm’s stock of R&D in any particular year. This perpetual inventory method is commonly used to estimate a firm’s stock of physical capital, so applying it to obtain an estimate of a firm’s stock of knowledge—i.e., its stock of R&D—should not be controversial.
All this method requires to obtain a firm’s stock of R&D for this year is knowledge of a firm’s R&D stock and its investment in R&D (i.e., its R&D expenditures) last year. This year’s R&D stock is then the sum of those R&D expenditures and its undepreciated R&D stock that is carried forward into this year.
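To make the recursion concrete, the description above can be written compactly (the notation is mine, not Hall’s): if R(t) is the firm’s R&D spending in year t, K(t) its stock of R&D in year t, and δ the assumed annual depreciation rate, then K(t) = R(t−1) + (1 − δ) × K(t−1). Implementations differ on whether current-year or prior-year spending is added in; the formula here simply mirrors the timing described in the preceding paragraph.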
As some R&D expenditure datasets include, for example, wages paid to scientists and research workers, this is not exactly the same as calculating a firm’s physical capital stock, which would only use a firm’s expenditures on physical capital. But given that paying people to perform R&D also adds to a firm’s stock of R&D through the increased knowledge and expertise of its employees, it seems reasonable to include this in a firm’s stock of R&D.
As mentioned previously, this method requires making certain assumptions. In particular, it is necessary to assume a rate of depreciation of the stock of R&D each period. Hall suggests a depreciation of 15% per year (compared to the roughly 7% per year for physical capital), and estimates presented by Hall, along with Wendy Li (2018), suggest that, in some industries, the figure can be as high as 50%, albeit with a wide range across industries.
The other assumption required for this method is an estimate of the firm’s initial level of stock. To see why such an assumption is necessary, suppose that you have data on a firm’s R&D expenditure running from 1990-2016. This means that you can calculate a firm’s stock of R&D for each year once you have their R&D stock in the previous year via the formula above.
When calculating the firm’s R&D stock for 2016, you need to know what their R&D stock was in 2015, while to calculate their R&D stock for 2015 you need to know their R&D stock in 2014, and so on backward until you reach the first year for which you have data: in this case, 1990.
However, working out the firm’s R&D stock in 1990 requires data on the firm’s R&D stock in 1989. The dataset does not contain any information about 1989, nor the firm’s actual stock of R&D in 1990. Hence, it is necessary to make an assumption regarding the firm’s stock of R&D in 1990.
There are several different assumptions one can make regarding this “starting value.” You could assume it is just a very small number. Or you can assume, as per Hall, that it is the firm’s R&D expenditure in 1990 divided by the sum of the R&D depreciation and average growth rates (the latter being taken as 8% per year by Hall). Note that, given the high depreciation rates for the stock of R&D, it turns out that the exact starting value does not matter significantly (particularly in years toward the end of the dataset) if you have a sufficiently long data series. At a 15% depreciation rate, more than 50% of the initial value disappears after five years.
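For readers who prefer to see the mechanics spelled out, here is a minimal sketch in Python of the method as described above. The expenditure figures are made up, and the 15% depreciation rate, 8% growth rate, and Hall-style starting value are simply the assumptions discussed in the surrounding text, so this should be read as an illustration rather than a reference implementation.

```python
# Illustrative perpetual-inventory estimate of a firm's R&D stock.
# All figures are hypothetical; the depreciation (15%) and growth (8%)
# rates follow the assumptions discussed in the text.

def rd_stock(expenditures, depreciation=0.15, growth=0.08):
    """Return year-by-year R&D stocks from a series of annual R&D expenditures."""
    stocks = []
    for t in range(len(expenditures)):
        if t == 0:
            # Hall-style starting value: first-year spending / (depreciation + growth).
            stock = expenditures[0] / (depreciation + growth)
        else:
            # This year's stock = last year's spending plus the undepreciated
            # stock carried forward (the timing convention described above).
            stock = expenditures[t - 1] + (1 - depreciation) * stocks[-1]
        stocks.append(stock)
    return stocks

# Hypothetical annual R&D expenditures (in $ millions), say 1990 onward.
spending = [100, 110, 125, 130, 150, 160]
print([round(s, 1) for s in rd_stock(spending)])

# Sanity check on the point about starting values: at 15% annual depreciation,
# less than half of any initial stock survives after five years.
print(round(0.85 ** 5, 2))  # ~0.44
```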
Although there are other methods to measure a firm’s stock of R&D, these tend to provide less information or rely on stronger assumptions than the approach described above does. For example, sometimes a firm’s stock of R&D is measured using a simple count of the number of patents they hold. However, this approach does not take into account the “value” of a patent. Since, by definition, each patent is unique (with differing number of years to run, levels of quality, ability to be challenged or worked around, and so on), it is unlikely to be appropriate to use an “average value of patents sold recently” to value it. At least with the perpetual inventory method described above, a monetary value for a firm’s stock of R&D can be obtained.
The perpetual inventory method also provides a way to calculate market shares of R&D in R&D-intensive industries, which can be used alongside current measures. This would be akin to looking at capacity shares in some manufacturing industries. Of course, using market shares in R&D industries can be fraught with issues, such as whether it is appropriate to use a backward-looking measure to assess competitive constraints in a forward-looking industry. This is why any investigation into such industries should also look, for example, at a firm’s research pipeline.
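By way of illustration, once stocks have been estimated for each firm in an industry, the corresponding R&D shares follow directly; the firm names and figures below are, again, purely hypothetical:

```python
# Hypothetical year-end R&D stocks ($ millions) for firms in one industry.
stocks = {"Firm A": 820.0, "Firm B": 415.0, "Firm C": 150.0}
total = sum(stocks.values())
shares = {firm: stock / total for firm, stock in stocks.items()}
print(shares)  # Firm A holds roughly 59% of the industry's estimated R&D stock
```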
Naturally, this only provides for the valuation of the R&D stock and says nothing about valuing other intangibles that are likely to play an important role in a much wider range of industries. Nonetheless, this method could provide another means for competition authorities to assess the current and historical state of R&D stocks in industries in which R&D plays an important part. It would be interesting to see what firms’ shares of R&D stocks look like, for example, in the pharmaceutical and tech industries.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Daniel Takash (regulatory policy fellow at the Niskanen Center. He is the manager of Niskanen’s Captured Economy Project, https://capturedeconomy.com, and you can follow him @danieltakash or @capturedecon).]
The pharmaceutical industry should be one of the most well-regarded industries in America. It helps bring drugs to market that improve, and often save, people’s lives. Yet last year a Gallup poll found that of 25 major industries, the pharmaceutical industry was the most unpopular, trailing behind fossil fuels, lawyers, and even the federal government. The opioid crisis dominated the headlines for the past few years, but the high price of drugs is a top-of-mind issue that generates significant animosity toward the pharmaceutical industry. The effects of high drug prices are felt not just at every trip to the pharmacy, but also by those who are priced out of life-saving treatments. Many Americans simply can’t afford what their doctors prescribe. The pharmaceutical industry helps save lives, but it has also been credibly accused of anticompetitive behavior, not just by generic manufacturers but even by other brand manufacturers.
These extraordinary times are an opportunity to right the ship. AbbVie, roundly criticized for building a patent thicket around Humira, has donated its patent rights to a promising COVID-19 treatment. This is to be celebrated, yet pharma’s bad reputation is defined by its worst behaviors and the frequent apologetics for overusing the patent system. Hopefully, corporate social responsibility will prevail and such abuses will cease in the future.
The most effective long-term treatment for COVID-19 will be a vaccine. We also need drugs to treat those afflicted with COVID-19 to improve recovery and lower mortality rates for those that get sick before a vaccine is developed and widely available. This requires rapid drug development through effective public-private partnerships to bring these treatments to market.
Without a doubt, these solutions will come from the pharmaceutical industry. Increased funding for the National Institutes of Health, nonprofit research institutions, and private pharmaceutical researchers is likely needed to help accelerate the development of these treatments. But we must be careful to ensure that whatever necessary upfront public support is given to these entities results in a fair trade-off for Americans. The U.S. taxpayer is one of the largest investors in early- to mid-stage drug research, and we need to make sure that we are a good investor.
Basic research into the costs of drug development, especially when taxpayer subsidies are involved, is a necessary start. This is a feature of the We PAID Act, introduced by Senators Rick Scott (R-FL) and Chris Van Hollen (D-MD), which requires the Department of Health and Human Services to enter into a contract with the National Academy of Medicine to determine the reasonable price of drugs developed with taxpayer support. This reasonable price would include a suitable reward to the private companies that did the important work of finishing drug development and gaining FDA approval. This is important, as setting a price too low would reduce investments in indispensable research and development. But this must be balanced against the risk of using patents to charge prices above and beyond those necessary to finance research, development, and commercialization.
A little sunshine can go a long way. We should trust that pharmaceutical companies will develop a vaccine and treatments for coronavirus, but we must also verify through public scrutiny that these are affordable and accessible. Take the drug manufacturer Gilead Sciences’ about-face on its application for orphan drug status for the possible COVID-19 treatment remdesivir. Remdesivir, developed in part with public funds and already covered by three Gilead patents, technically satisfied the definition of “orphan drug,” as COVID-19 (at the time of the application) afflicted fewer than 200,000 patients. In a pandemic that could infect tens of millions of Americans, this designation is obviously absurd, and public outcry led Gilead to ask the FDA to rescind the application. Gilead claimed it sought the designation to speed up FDA review, and that might be true. Regardless, public attention meant that the FDA will give remdesivir expedited review without Gilead needing a designation that looks unfair to the American people.
The success of this isolated effort is absolutely worth celebrating. But we need more research to better comprehend the pharmaceutical industry’s needs, and this is just what the study provisions of We PAID would provide.
There is indeed some existing research on this front. For example, the Pharmaceutical Research and Manufacturers of America (PhRMA) estimates it costs an average of $2.6 billion to bring a new drug to market, while research from the Journal of the American Medical Association finds this average to be closer to $1.3 billion, with a median development cost of $985 million.
But a thorough analysis provided under We PAID is the best way for us to fully understand just how much support the pharmaceutical industry needs, and just how successful it has been thus far. The NIH, one of the major sources of publicly funded research, invests about $41.7 billion annually in medical research. We need to better understand how these efforts link up, and how the torch is passed from public to private efforts.
Patents are essential to the functioning of the pharmaceutical industry by incentivizing drug development through temporary periods of exclusivity. But it is equally essential, in light of the considerable investment already made by taxpayers in drug research and development, to make sure we understand the effects of these incentives and calibrate them to balance the interests of patients and pharmaceutical companies. Most drugs require research funding from both public and private sources as well as patent protection. And the U.S. is one of the biggest investors of drug research worldwide (even compared to drug companies), yet Americans pay the highest prices in the world. Are these prices justified, and can we improve patent policy to bring these costs down without harming innovation?
Beyond a thorough analysis of drug pricing, what makes We PAID one of the most promising solutions to the problem of excessively high drug prices are the accountability mechanisms included. The bill, if made law, would establish a Drug Access and Affordability Committee. The Committee would use the methodology from the joint HHS and NAM study to determine a reasonable price for affected drugs (around 20 percent of drugs currently on the market, if the bill were law today). Any companies that price drugs granted exclusivity by a patent above the reasonable price would lose their exclusivity.
This may seem like a price control at first blush, but it isn’t, for two reasons. First, this only applies to drugs developed with taxpayer dollars, which any COVID-19 treatments or cures almost certainly would be, considering the $785 million spent by the NIH since 2002 researching coronaviruses. It’s an accountability mechanism that would ensure the government is getting its money’s worth. This tool is akin to ensuring that a government contractor is not charging more than would be reasonable, lest it lose its contract.
Second, it is even less stringent than pulling a contract with a private firm overcharging the government for the services provided. Why? Losing a patent does not mean losing the ability to make a drug, or any other patented invention for that matter. This basic fact is often lost in the patent debate, but it cannot be stressed enough.
If patents functioned as licenses, then every patent expiration would mean another product going off the market. In reality, losing a patent simply means that any other firm can compete and use the previously patented design. Even if a firm violated the price regulations included in the bill and lost its patent, it could continue manufacturing the drug. And so could any other firm, bringing down prices for all consumers by opening up market competition.
The We PAID Act could be a dramatic change for the drug industry, and because of that many in Congress may want to first debate the particulars of the bill. This is fine, assuming this promising legislation isn’t watered down beyond recognition. But any objections to the Drug Access and Affordability Committee and reasonable-pricing regulations are no excuse not to pass, at a bare minimum, the study included in the bill as part of future coronavirus packages, if not sooner. It is an inexpensive way to get good information in a single, reputable source that would allow us to shape good policy.
Good information is needed for good policy. When the government lays the groundwork for future innovations by financing research and development, it can be compared to a venture capitalist providing the financing necessary for an innovative product or service. But just like in the private sector, the government should know what it’s getting for its (read: taxpayers’) money and make recipients of such funding accountable to investors.
The COVID-19 outbreak will be the most pressing issue for the foreseeable future, but determining how pharmaceuticals developed with public research are priced is necessary in good times and bad. The final prices for these important drugs might be fair, but the public will never know without a trusted source examining this information. Trust, but verify. The pharmaceutical industry’s efforts in fighting the COVID-19 pandemic might be the first step to improving Americans’ relationship with the industry. But we need good information to make that happen. Americans need to know when they are being treated fairly, and that policymakers are able to protect them when they are treated unfairly. The government needs to become a better-informed investor, and that won’t happen without something like the We PAID Act.
We don’t yet know how bad the coronavirus outbreak will be in America. But we do know that the virus is likely to have a major impact on Americans’ access to medication. Currently, 80% of the active ingredients found in the drugs Americans take are made in China, and the virus has disrupted China’s ability to manufacture and supply those ingredients. Generic drugs, which comprise 90% of America’s drugs, are likely to be particularly impacted because most generics are made in India, and Indian drug makers rely heavily on Chinese-made ingredients. Indeed, on Tuesday, March 3, India decided to restrict exports of 26 drugs and drug ingredients because of reductions in China’s supply. This disruption to the generic supply chain could mean that millions of Americans will not get the drugs they need to stay alive and healthy.
Coronavirus-related shortages are only the latest in a series of problems recently afflicting the generic drug industry. In the last few years, there have been many reports of safety issues affecting generic drug quality at both domestic and overseas manufacturing facilities. Numerous studies have uncovered shady practices and quality defects, including generics contaminated with carcinogens, drugs in which the active ingredients were switched for ineffective or unsafe alternatives, and manufacturing facilities that falsify or destroy documents to conceal their misdeeds.
We’ve also been inundated with stories of generic drug makers hiking prices for their products. Although, as a whole, generic drugs are much cheaper than innovative brand products, the prices for many generic drugs are on the increase. For some generics – Martin Shkreli’s Daraprim, heart medication Digoxin, antibiotic Doxycycline, insulin, and many others – prices have increased by several hundred percent. It turns out that many of the price increases are the result of anticompetitive behavior in the generic market. For others, the price increases are due to the increasing difficulty generic drug makers face in earning profits selling low-priced drugs.
Even before the coronavirus outbreak, there were numerous instances of shortages for critical generic drugs. These shortages often result from drug makers’ lack of incentive to manufacture low-priced drugs that don’t earn much profit. The shortages have been growing in frequency and duration in recent years.
As a result of the shortages, 90 percent of U.S. hospitals report having to find alternative drug therapies, costing patients and hospitals over $400 million last year. In other unfortunate situations, reasonable alternatives simply are not available and patients suffer.
With generic drug makers’ growing list of problems, many policy makers have called for significant changes to America’s approach to the generic drug industry. Perhaps the FDA needs to increase its inspection of overseas facilities? Perhaps the FTC and state and federal prosecutors should step up their investigations and enforcement actions against anticompetitive behavior in the industry? Perhaps the FDA should do even more to promote generic competition by expediting generic approvals?
While these actions and other proposals could certainly help, none are aimed at resolving more than one or two of the significant problems vexing the industry. Senator Elizabeth Warren has proposed a more substantial overhaul that would bring the U.S. government into the generic-drug-making business. Under Warren’s plan, the Department of Health and Human Services (HHS) would manufacture or contract for the manufacture of drugs to be sold at lower prices. Nationalizing the generic drug industry in this way would make the inspection of manufacturing facilities much easier and could ideally eliminate drug shortages. In January, California’s governor proposed a similar system under which the state would begin manufacturing or contracting to manufacture generic drugs.
However, critics of public manufacturing argue that manufacturing and distribution infrastructure would be extremely costly to set up, with taxpayers footing the bill. And even after the initial set-up, market dynamics that affect costs, such as increasing raw material costs or supply chain disruptions, would also mean greater costs for taxpayers. Moreover, by removing the profit incentive created under the Hatch-Waxman Act to develop and manufacture generic drugs, it’s not clear that governments could develop or manufacture a sufficient supply of generics (consider the difference in efficiency between the U.S. Postal Service and either UPS or FedEx).
Another approach might be to treat the generic drug industry as a regulated industry. This model has been applied to utilities in the past when unregulated private ownership of utility infrastructure could not provide sufficient supply to meet consumer need, address market failures, or prevent the abuse of monopoly power. Similarly, consumers’ need for safe and affordable medicines, market failures inherent throughout the industry, and industry consolidation that could give rise to market power suggest the regulated model might work well for generic drugs.
Under this approach, Hatch-Waxman incentives could remain in place, granting the first generic drug an exclusivity period during which it could earn significant profits for the generic drug maker. But when the exclusivity period ends, an agency like HHS would assign manufacturing responsibility for a particular drug to a handful of generic drug makers wishing to market in the U.S. These companies would be guaranteed a profit based on a set rate of return on the costs of high-quality domestic manufacturing. In order to maintain their manufacturing rights, facilities would have to meet strict FDA guidelines to ensure high-quality drugs.
Like the Warren and California proposals, this approach would tackle several problems at once. Prices would be kept under control and facilities would face frequent inspections to ensure quality. A guaranteed profit would eliminate generic companies’ financial risk, reducing their incentive to use cheap (and often unsafe) drug ingredients or to engage in illegal anticompetitive behavior. It would also encourage steady production to reduce instances of drug shortages. Unlike the Warren and California proposals, this approach would build on the existing generic infrastructure so that taxpayers don’t have to foot the bill to set up public manufacturing. It would also continue to incentivize the development of generic alternatives by maintaining the Hatch-Waxman exclusivity period, and it would motivate the manufacture of generic drugs by companies seeking a reliable rate of return.
Several issues would need to be worked out with a regulated generic industry approach to prevent manipulation of rates of return, regulatory capture, and political appointees without the incentives or knowledge to regulate the drug makers. However, the recurring crises affecting generic drugs indicate the industry is rife with market failures. Perhaps only a radical new approach will achieve lasting and necessary change.
A pending case in the U.S. Court of Appeals for the 3rd Circuit has raised several interesting questions about the FTC’s enforcement approach and patent litigation in the pharmaceutical industry. The case, FTC v. AbbVie, involves allegations that AbbVie (and Besins) filed sham patent infringement cases against generic manufacturer Teva (and Perrigo) for the purpose of preventing or delaying entry into the testosterone gel market, in which AbbVie’s AndroGel had a monopoly. The FTC further alleges that AbbVie and Teva settled the testosterone gel litigation in AbbVie’s favor while making a large payment to Teva in an unrelated case, behavior that, considered together, amounted to an illegal reverse payment settlement. The district court dismissed the reverse payment claims but concluded that the patent infringement cases were sham litigation. It ordered disgorgement damages of $448 million against AbbVie and Besins, representing the profit they gained from maintaining the AndroGel monopoly.
The 3rd Circuit has been asked to review several elements of the district court’s decision, including whether the original patent infringement cases amounted to sham litigation, whether the payment to Teva in a separate case amounted to an illegal reverse payment, and whether the FTC has the authority to seek disgorgement damages. The decision will help to clarify outstanding issues relating to patent litigation and the FTC’s enforcement abilities, but it also has the potential to chill pro-competitive behavior in the pharmaceutical market encouraged under Hatch-Waxman.
First, the 3rd Circuit will review whether AbbVie’s patent infringement case was sham litigation by asking whether the district court applied the right standard and how plaintiffs must prove that lawsuits are baseless. The district court determined that the case was a sham because it was objectively baseless (AbbVie couldn’t reasonably expect to win) and subjectively baseless (AbbVie brought the cases solely to delay generic entry into the market). AbbVie argues that the district court erred by not requiring affirmative evidence of bad faith and not requiring the FTC to present clear and convincing evidence that AbbVie and its attorneys believed the lawsuits were baseless.
While sham litigation should be penalized and deterred, especially when it produces anticompetitive effects, the 3rd Circuit’s decision, depending on how it comes out, also has the potential to deter brand drug makers from filing patent infringement cases in the first place. This threatens to disrupt the delicate balance that Hatch-Waxman sought to establish between protecting generic entry and encouraging brand competition.
The 3rd Circuit will also determine whether AbbVie’s payment to Teva in a separate case involving cholesterol medicine was an illegal reverse payment, otherwise known as a “pay-for-delay” settlement. The FTC asserts that the actions in the two cases—one involving testosterone gel and the other involving cholesterol medicine—should be considered together, but the district court disagreed and determined there was no illegal reverse payment. True pay-for-delay settlements are anticompetitive and harm consumers by delaying their access to cheaper generic alternatives. However, an overly liberal definition of what constitutes an illegal reverse payment will deter legitimate settlements, thereby increasing expenses for all parties that choose to litigate and possibly dissuading generics from bringing patent challenges in the first place. Moreover, the FTC’s argument that it is suspicious for two settlements to occur in separate cases around the same time overlooks the reality that the pharmaceutical industry has become increasingly concentrated, and drug companies often have more than one pending litigation matter against another company involving entirely different products and circumstances.
Finally, the 3rd Circuit will determine whether the FTC has the authority to seek disgorgement damages for past acts like settled patent litigation. AbbVie has argued that the agency has no right to disgorgement because that remedy isn’t enumerated in the FTC Act and because courts can’t order injunctive relief, including disgorgement, for completed past acts.
The FTC has sought disgorgement damages only sparingly, but both the frequency with which the agency seeks disgorgement and the size of the awards have increased in recent years. Proponents of the FTC’s approach argue that the threat of large disgorgement damages provides a strong deterrent to anticompetitive behavior. That may be true, but FTC-ordered disgorgement (even if permissible) may go too far and end up chilling economic activity by exposing businesses to exorbitant liability without clear guidance on when disgorgement will be awarded. The 3rd Circuit will determine whether the FTC’s enforcement approach is authorized, a decision that has important implications for whether the agency’s enforcement can deter unfair practices without depressing economic activity.
Last week the Senate Judiciary Committee held a hearing, Intellectual Property and the Price of Prescription Drugs: Balancing Innovation and Competition, that explored whether changes to the pharmaceutical patent process could help lower drug prices. The committee’s goal was to evaluate various legislative proposals that might facilitate the entry of cheaper generic drugs, while also recognizing that strong patent rights for branded drugs are essential to incentivize drug innovation. As Committee Chairman Lindsey Graham explained:
One thing you don’t want to do is kill the goose who laid the golden egg, which is pharmaceutical development. But you also don’t want to have a system that extends unnecessarily beyond the ability to get your money back and make a profit, a patent system that drives up costs for the average consumer.
Several proposals that were discussed at the hearing have the potential to encourage competition in the pharmaceutical industry and help rein in drug prices. Below, I discuss these proposals, plus a few additional reforms. I also point out some of the language in the current draft proposals that goes a bit too far and threatens the ability of drug makers to remain innovative.
1. Prevent brand drug makers from blocking generic companies’ access to drug samples. Some brand drug makers have attempted to delay generic entry by restricting generics’ access to the drug samples necessary to conduct FDA-required bioequivalence studies. For example, some have limited the ability of pharmacies or wholesalers to sell samples to generic companies, or have abused the REMS (Risk Evaluation and Mitigation Strategy) program to refuse samples to generics under the auspices of REMS safety requirements. The Creating and Restoring Equal Access To Equivalent Samples (CREATES) Act of 2019 would allow potential generic competitors to bring an action in federal court for both injunctive relief and damages when brand companies block access to drug samples. It also gives the FDA discretion to approve alternative REMS safety protocols for generic competitors that have been denied samples under the brand companies’ REMS protocol. Although the vast majority of brand drug companies do not engage in the delay tactics addressed by CREATES, the Act would prevent the handful that do from thwarting generic competition. Increased generic competition should, in turn, reduce drug prices.
2. Restrict abuses of FDA Citizen Petitions. The citizen petition process was created as a way for individuals and community groups to flag legitimate concerns about drugs awaiting FDA approval. However, critics claim that the process has been misused by some brand drug makers who file petitions about specific generic drugs in the hopes of delaying their approval and market entry. Although the FDA has indicated that citizen petitions rarely delay the approval of generic drugs, a few drug makers, such as Shire ViroPharma, have clearly abused the process and put unnecessary strain on FDA resources. The Stop The Overuse of Petitions and Get Affordable Medicines to Enter Soon (STOP GAMES) Act is intended to prevent such abuses. The Act reinforces the FDA’s and FTC’s ability to crack down on petitions meant to lengthen the approval process of a generic competitor, which should deter abuses of the system that can occasionally delay generic entry. However, lawmakers should make sure that adopted legislation doesn’t limit the ability of stakeholders (including drug makers, which often know more about the safety of drugs than ordinary citizens) to raise serious concerns with the FDA.
3. Curtail Anticompetitive Pay-for-Delay Settlements. The Hatch-Waxman Act incentivizes generic companies to challenge brand drug patents by granting the first successful generic challenger a period of marketing exclusivity. As with litigation generally, many of these patent challenges result in settlements rather than trials. The FTC and some courts have concluded that these settlements can be anticompetitive when the brand company agrees to pay the generic challenger in exchange for the generic company forestalling the launch of its lower-priced drug. Settlements that result in a cash payment are a red flag for anticompetitive behavior, so pay-for-delay settlements have evolved to involve other forms of consideration instead. As a result, the Preserve Access to Affordable Generics and Biosimilars Act aims to make an exchange of anything of value presumptively anticompetitive if the terms include a delay in research, development, manufacturing, or marketing of a generic drug. Deterring obvious pay-for-delay settlements will prevent delays to generic entry, making cheaper drugs available as quickly as possible to patients.
However, the Act’s rigid presumption that any exchange of value is anticompetitive may also prevent legitimate settlements that ultimately benefit consumers. Brand drug makers should be allowed to compensate generic challengers to eliminate litigation risk and escape litigation expenses, and many settlements result in the generic drug coming to market before the expiration of the brand patent, and possibly earlier than if there had been prolonged litigation between the generic and brand company. A rigid presumption of anticompetitive behavior will deter these settlements, thereby increasing expenses for all parties that choose to litigate and possibly dissuading generics from bringing patent challenges in the first place. Indeed, the U.S. Supreme Court has declined to define these settlements as per se anticompetitive, and the FTC’s most recent agreement involving such settlements exempts several forms of exchanges of value. Any adopted legislation should follow the FTC’s lead and recognize that some exchanges of value are pro-consumer and pro-competitive.
4. Restore the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. I have previously discussed how an unbalanced inter partes review (IPR) process for challenging patents threatens to stifle drug innovation. Moreover, current law allows generic challengers to file duplicative claims both in federal court and through the IPR process. And because IPR proceedings do not have a standing requirement, the process has been exploited by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet. The added expense to drug makers of defending both duplicative claims and claims brought by challengers that are exploiting the system increases litigation costs, which may be passed on to consumers in the form of higher prices.
The Hatch-Waxman Integrity Act (HWIA) is designed to restore the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. It requires generic challengers to choose either Hatch-Waxman litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) or an IPR proceeding (which is faster and provides certain pro-challenger provisions). The HWIA would also eliminate the ability of hedge funds and similar entities to file IPR claims while shorting the stock. By reducing duplicative litigation and the exploitation of the IPR process, the HWIA will reduce costs and strengthen innovation incentives for drug makers. This will ensure that patent owners achieve clarity on the validity of their patents, which will spur new drug innovation and make sure that consumers continue to have access to life-improving drugs.
5. Curb illegal product hopping and patent thickets. Two drug maker tactics currently garnering a lot of attention are so-called “product hopping” and “patent thickets.” At its worst, product hopping involves brand drug makers making minor changes to a drug nearing the end of its patent so that they get a new patent on the slightly tweaked drug, and then withdrawing the original drug from the market so that patients shift to the newly patented drug and pharmacists can’t substitute a generic version of the original drug. Similarly, at their worst, patent thickets involve brand drug makers obtaining a web of patents on a single drug to extend the life of their exclusivity and make it too costly for other drug makers to challenge all of the patents associated with a drug. The proposed Affordable Prescriptions for Patients Act of 2019 is meant to stop these abuses of the patent system, which would facilitate generic entry and help to lower drug prices.
However, the Act goes too far by also capturing many legitimate activities in its definitions. For example, the bill defines as anticompetitive product hopping the selling of any improved version of a drug during a window that extends to a year after the launch of the first generic competitor. Presently, to acquire a patent and FDA approval, the improved version of the drug must be sufficiently different from and more innovative than the original drug, yet the Act would prevent the drug maker from selling such a product without satisfying a demanding three-pronged test before the FTC or a district court. Similarly, the Act treats as an anticompetitive patent thicket any new patents filed on a drug in the same general family as the original patent, and this presumption can only be rebutted by providing extensive evidence and satisfying demanding standards before the FTC or a district court. As a result, the Act deters innovation activity that is at all related to an initial patent and, in doing so, ignores the fact that most important drug innovation is incremental innovation based on previous inventions. Thus, the proposal should be redrafted to capture truly anticompetitive product hopping and patent thicket activity, while exempting behavior that is critical for drug innovation.
Reforms that close loopholes in the current patent process should facilitate competition in the pharmaceutical industry and help to lower drug prices. However, lawmakers need to be sure that they don’t restrict patent rights to the extent that they deter innovation; a significant body of research predicts that patients’ health outcomes would suffer as a result.