[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.
Joshua D. Wright is university professor and executive director of the Global Antitrust Institute at George Mason University’s Scalia Law School. He served as a commissioner of the Federal Trade Commission from 2013 through 2015.]
Much of this symposium celebrates Ajit’s contributions as chairman of the Federal Communications Commission and his accomplishments and leadership in that role. And rightly so. But Commissioner Pai, not just Chairman Pai, should also be recognized.
I first met Ajit when we were both minority commissioners at our respective agencies: the FCC and Federal Trade Commission. Ajit had started several months before I was confirmed. I watched his performance in the minority with great admiration. He reached new heights when he shifted from minority commissioner to chairman, and the accolades he will receive for that work are quite appropriate. But I want to touch on his time as a minority commissioner at the FCC and how that should inform the retrospective of his tenure.
Let me not bury the lede: Ajit Pai has been, in my view, the most successful, impactful minority commissioner in the history of the modern regulatory state. And it is that success that has led him to become the most successful and impactful chairman, too.
I must admit all of this success makes me insanely jealous. My tenure as a minority commissioner ran in parallel with Ajit. We joked together about our fierce duel to be the reigning king of regulatory dissents. We worked together fighting against net neutrality. We compared notes on dissenting statements and opinions. I tried to win our friendly competition. I tried pretty hard. And I lost; worse than I care to admit. But we had fun. And I very much admired the combination of analytical rigor, clarity of exposition, and intellectual honesty in his work. Anyway, the jealousy would be all too much if he weren’t also a remarkable person and friend.
The life of a minority commissioner can be a frustrating one. Like Sisyphus, the minority commissioner often wakes up each day to roll the regulatory (well, in this case, deregulatory) boulder up the hill, only to watch it roll down. And then do it again. And again. At times, it is an exhausting series of jousting matches with the windmills of Washington bureaucracy. It is not often that a minority commissioner has as much success as Commissioner Pai did: dissenting opinions ultimately vindicated by judicial review; substantive victories on critical policy issues; paving the way for institutional and procedural reforms.
It is one thing to write a raging dissent about how the majority has lost all principles. Fire and brimstone come cheap when there aren’t too many consequences to what you have to say. Measure a man after he has been granted power and a chance to use it, and only then will you have a true test of character. Ajit passes that test like few in government ever have.
This is part of what makes Ajit Pai so impressive. I have seen his work firsthand. The multitude of successes Ajit achieved as Chairman Pai were predictable, precisely because Commissioner Pai told the world exactly where he stood on important telecommunications policy issues, the reasons why he stood there, and then, well, he did what he said he would. The Pai regime was much more like a Le’Veon Bell run, between the tackles, than a no-look pass from Patrick Mahomes to Tyreek Hill. Commissioner Pai shared his playbook with the world; he told us exactly where he was going to run the ball. And then Chairman Pai did exactly that. And neither bureaucratic red tape nor political pressure—or even physical threat—could stop him.
Here is a small sampling of his contributions, many of them building on groundwork he laid in the minority:
Focus on Economic Analysis
One of Chairman Pai’s most important contributions to the FCC is his work to systematically incorporate economic analysis into FCC decision-making. The triumph of this effort was establishing the Office of Economics and Analytics (OEA) in 2018. The OEA’s economic analyses of the costs, benefits, and economic impacts of the commission’s proposed rules will be a critical part of agency decision-making from here on out. This act alone would be a legacy on which any agency head could comfortably rest. The OEA’s work will shape the agency for decades and ensure that agency decisions are made with the oversight economics provides.
This is a hard thing to do; just hiring economists is not enough. Structure matters. How economists get information to decision-makers determines if it will be taken seriously. To this end, Ajit has taken all the lessons from what has made the economists at the FTC so successful—and the lessons from the structural failures at other agencies—and applied them at the FCC.
Structural independence looks like “involving economists on cross-functional teams at the outset and allowing the economics division to make its own, independent recommendations to decision-makers.” And it is necessary for economics to be taken seriously within an agency structure. Ajit has ensured that FCC decision-making will benefit from economic analysis for years to come.
Narrowing the Digital Divide
Chairman Pai made helping the disadvantaged get connected to the internet and narrowing the digital divide the top priorities during his tenure. And Commissioner Pai was fighting for this long before the pandemic started.
As businesses, schools, work, and even health care have moved online, the need to get Americans connected with high-speed broadband has never been greater. Under Pai’s leadership, the FCC has removed bureaucratic barriers and provided billions in funding to facilitate rural broadband buildout. We are talking about connections to some 700,000 rural homes and businesses in 45 states, many of which are gaining access to high-speed internet for the first time.
Ajit has also made sure to keep an eye out for the little guy, and communities that have been historically left behind. Tribal communities, particularly in the rural West, have been a keen focus of his, as he knows all too well the difficulties and increased costs associated with servicing those lands. He established programs to rebuild and expand networks in the Virgin Islands and Puerto Rico in an effort to bring those islands’ residents to parity with citizens living on the mainland.
You need not take my word for it; he really does talk about this all the time. As he said in a speech at the National Tribal Broadband Summit: “Since my first day in this job, I’ve said that closing the digital divide was my top priority. And as this audience knows all too well, nowhere is that divide more pronounced than on Tribal lands.” That work is not done; it is beyond any one person. But Ajit should be recognized for his work bridging the divide and laying the foundation for future gains.
And again, this work started as minority commissioner. Before he was chairman, Pai proposed projects for rural broadband development; he frequently toured underserved states and communities; and he proposed legislation to extend the promise of the 21st century to economically depressed areas of the country. Looking at Chairman Pai is only half the picture.
Keeping Americans Connected
One would not think that the head of the Federal Communications Commission would be a leader on important health-care issues, but Ajit has made a real difference here too. One of his major initiatives has been the development of telemedicine solutions to expand access to care in critical communities.
Beyond encouraging buildout of networks in less-connected areas, Pai’s FCC has also worked to allocate funding for health-care providers and educational institutions who were navigating the transition to remote services. He ensured that health-care providers’ telecommunications and information services were funded. He worked with the U.S. Department of Education to direct funds for education stabilization and allowed schools to purchase additional bandwidth. And he granted temporary additional spectrum usage to broadband providers to meet the increased demand upon our nation’s networks. Oh, and his Keep Americans Connected Pledge gathered commitments from more than 800 companies to ensure that Americans would not lose their connectivity due to pandemic-related circumstances. As if the list were not long enough, Congress’ January coronavirus relief package will ensure that these and other programs, like Rip and Replace, will remain funded for the foreseeable future.
I might sound like I am beating a dead horse here, but the seeds of this, too, were sown in his work in the minority. Here he is describing his work in a 2015 interview, as a minority commissioner:
My own father is a physician in rural Kansas, and I remember him heading out in his car to visit the small towns that lay 40 miles or more from home. When he was there, he could provide care for people who would otherwise never see a specialist at all. I sometimes wonder, back in the 1970s and 1980s, how much easier it would have been on patients, and him, if broadband had been available so he could provide healthcare online.
Agency Transparency and Democratization
Many minority commissioners like to harp on agency transparency. Some take a different view when they are in charge. But Ajit made good on his complaints about agency transparency when he became Chairman Pai. He did this by circulating draft items well in advance of monthly open meetings, giving people the opportunity to know what the agency was voting on.
You used to need a direct connection with the FCC to even be aware of what orders were being discussed—the worst of the D.C. swamp—but now anyone can read about the working items, in clear language.
These moves toward a more transparent, accessible FCC dispel the impression that the agency is run by Washington insiders who are disconnected from the average person. The meetings may well be dry and technical—they really are—but Chairman Pai’s statements are not only good-natured and humorous, but informative and substantive. The public has been well-served by his efforts here.
Incentivizing Innovation and Next-Generation Technologies
Chairman Pai will be remembered for his encouragement of innovation. Under his chairmanship, the FCC discontinued rules that unnecessarily required carriers to maintain costly older, lower-speed networks and legacy voice services. It streamlined the discontinuance process for lower-speed services if the carrier is already providing higher-speed service or if no customers are using the service. It also okayed streamlined notice following force majeure events like hurricanes to encourage investment and deployment of newer, faster infrastructure and services following destruction of networks. The FCC also approved requests by companies to provide high-speed broadband through non-geostationary orbit satellite constellations and created a streamlined licensing process for small satellites to encourage faster deployment.
This is what happens when you get a tech nerd at the head of an agency he loves and cares for. A serious commitment to good policy with an eye toward the future.
Restoring Internet Freedom
This is a pretty sensitive one for me. You hear less about it now, other than some murmurs from the Biden administration about changing it, but the debate over net neutrality got nasty and apocalyptic.
It was everywhere; people saying Chairman Pai would end the internet as we know it. The whole web blacked out for a day in protest. People mocked up memes showing a 25 cent-per-Google-search charge. And as a result of this over-the-top rhetoric, my friend, and his family, received death threats.
That is truly beyond the pale. One could not blame anyone for leaving public service in such an environment. I cannot begin to imagine what I would have done in Ajit’s place. But Ajit took the threats on his life with grace and dignity, never lost his sense of humor, and continued to serve the public dutifully with remarkable courage. I think that says a lot about him. And the American public is lucky to have benefited from his leadership.
Now, for the policy stuff. Though it should go without saying, the light-touch framework Chairman Pai returned us to—as opposed to the public utility one—will ensure that the United States maintains its leading position on technological innovation in 5G networks and services. The fact that we have endured COVID—and the massive strain on the internet it has caused—with little to no noticeable impact on internet services is all the evidence you need that he made the right choice. Ajit has rightfully earned the title of the “5G Chairman.”
I cannot give Ajit all the praise he truly deserves without sounding sycophantic, or bribed. There are any number of windows into his character, but one rises above the rest for me. And I wanted to take the extra time to thank Ajit for it.
Every year, without question, no matter what was going on—even as chairman—Ajit would come to my classes and talk to my students. At length. In detail. And about any subject they wished. He stayed until he answered all of their questions. If I didn’t politely shove him out of the class to let him go do his real job, I’m sure he would have stayed until the last student left. And if you know anything about how to judge a person’s character, that will tell you all you need to know.
With the COVID-19 vaccine made by Moderna joining the one from Pfizer and BioNTech in gaining approval from the U.S. Food and Drug Administration, it should be time to celebrate the U.S. system of pharmaceutical development. The system’s incentives—notably granting patent rights to firms that invest in new and novel discoveries—have worked to an astonishing degree, producing not just one but as many as three or four effective approaches to end a viral pandemic that, just a year ago, was completely unknown.
Alas, it appears not all observers agree. Now that we have the vaccines, some advocate suspending or limiting patent rights—for example, by imposing a compulsory licensing scheme—with the argument that this is the only way for the vaccines to be produced in mass quantities worldwide. Some critics even assert that abolishing or diminishing property rights in pharmaceuticals is needed to end the pandemic.
In truth, we can effectively and efficiently distribute the vaccines while still maintaining the integrity of our patent system.
What the false framing ignores are the important commercialization and distribution functions that patents provide, as well as the deep, long-term incentives the patent system provides to create medical innovations and develop a robust pharmaceutical supply chain. Unless we are sure this is the last pandemic we will ever face, repealing intellectual property rights now would be a catastrophic mistake.
The supply chains necessary to adequately scale drug production are incredibly complex, and do not appear overnight. The coordination and technical expertise needed to support worldwide distribution of medicines depends on an ongoing pipeline of a wide variety of pharmaceuticals to keep the entire operation viable. Public-spirited officials may in some cases be able to piece together facilities sufficient to produce and distribute a single medicine in the short term, but over the long term, global health depends on profit motives to guarantee the commercialization pipeline remains healthy.
But the real challenge is in maintaining proper incentives to develop new drugs. It has long been understood that information goods like intellectual property will be undersupplied without sufficient legal protections. Innovators and those that commercialize innovations—like researchers and pharmaceutical companies—have less incentive to discover and market new medicines as the likelihood that they will be able to realize a return for their efforts diminishes. Without those returns, it’s far less certain the COVID vaccines would have been produced so quickly, or at all. The same holds for the vaccines we will need for the next crisis or badly needed treatments for other deadly diseases.
Patents are not the only way to structure incentives, as can be seen with the current vaccines. Pharmaceutical companies also accepted financial incentives from various governments in the form of direct payments or purchase guarantees. But this enhances, rather than diminishes, the larger argument. There needs to be adequate returns for those who engage in large, risky undertakings like creating a new drug.
Some critics would prefer to limit pharmaceutical companies’ returns solely to those early government investments, but there are problems with this approach. It is difficult for governments to know beforehand what level of profit is needed to properly incentivize firms to engage in producing these innovations. To the extent that direct government investment is useful, it often will be as an additional inducement that encourages new entry by multiple firms who might each pursue different technologies.
Thus, in the case of coronavirus vaccines, government subsidies may have enticed more competitors to enter more quickly, or not to drop out as quickly, in hopes that they would still realize a profit, notwithstanding the risks. Where there might have been only one or two vaccines produced in the United States, it appears likely we will see as many as four.
But there will always be necessary trade-offs. Governments cannot know how to set proper incentives to encourage development of every possible medicine for every possible condition by every possible producer. Not only do we not know which diseases and which firms to prioritize, but we have no idea how to determine which treatment approaches to encourage.
The COVID-19 vaccines provide a clear illustration of this problem. We have seen development of both traditional vaccines and experimental mRNA treatments to combat the virus. Thankfully, both have shown positive results, but there was no way to know that in March. In this perennial state of ignorance, markets generally have provided the best—though still imperfect—way to make decisions.
The patent system’s critics sometimes claim that prizes would offer a better way to encourage discovery. But if we relied solely on government-directed prizes, we might never have had the needed research into the technology that underlies mRNA. As one recent report put it, “before messenger RNA was a multibillion-dollar idea, it was a scientific backwater.” Simply put, without patent rights as the backstop to purely academic or government-led innovation and commercialization, it is far less likely that we would have seen successful COVID vaccines developed as quickly.
It is difficult for governments to be prepared for the unknown. Abolishing or diminishing pharmaceutical patents would leave us even less prepared for the next medical crisis. That would only add to the lasting damage that the COVID-19 pandemic has already wrought on the world.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Geoffrey A. Manne, (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics); and Dirk Auer, (Senior Fellow of Law & Economics, ICLE)]
Back in 2012, Covidien, a large health care products company and medical device manufacturer, purchased Newport Medical Instruments, a small ventilator developer and manufacturer. (Covidien itself was subsequently purchased by Medtronic in 2015).
Eight years later, in the midst of the coronavirus pandemic, the New York Times has just published an article revisiting the Covidien/Newport transaction, and questioning whether it might have contributed to the current shortage of ventilators.
The article speculates that Covidien’s purchase of Newport, and the subsequent discontinuation of Newport’s “Aura” ventilator — which was then being developed by Newport under a government contract — delayed US government efforts to procure mechanical ventilators until the second half of 2020 — too late to treat the first wave of COVID-19 patients:
And then things suddenly veered off course. A multibillion-dollar maker of medical devices bought the small California company that had been hired to design the new machines. The project ultimately produced zero ventilators.
That failure delayed the development of an affordable ventilator by at least half a decade, depriving hospitals, states and the federal government of the ability to stock up.
* * *
Today, with the coronavirus ravaging America’s health care system, the nation’s emergency-response stockpile is still waiting on its first shipment.
The article has generated considerable interest not so much for what it suggests about government procurement policies or for its relevance to the ventilator shortages associated with the current pandemic, but rather for its purported relevance to ongoing antitrust debates and the arguments put forward by “antitrust populists” and others that merger enforcement in the US is dramatically insufficient.
Only a single sentence in the article itself points to a possible antitrust story — and it does nothing more than report unsubstantiated speculation from unnamed “government officials” and rival companies:
Government officials and executives at rival ventilator companies said they suspected that Covidien had acquired Newport to prevent it from building a cheaper product that would undermine Covidien’s profits from its existing ventilator business.
Nevertheless, and right on cue, various antitrust scholars quickly framed the deal as a so-called “killer acquisition” (see also here and here).
Unsurprisingly, politicians were also quick to jump on the bandwagon. David Cicilline, the powerful chairman of the House Antitrust Subcommittee, opined that:
The public reporting on this acquisition raises important questions about the review of this deal. We should absolutely be looking back to figure out what happened.
These “hot takes” raise a crucial issue. The New York Times story opened the door to a welter of hasty conclusions offered to support the ongoing narrative that antitrust enforcement has failed us — in this case quite literally at the cost of human lives. But are any of these claims actually supportable?
Unfortunately, the competitive realities of the mechanical ventilator industry, as well as a more clear-eyed view of what was likely going on with the failed government contract at the heart of the story, simply do not support the “killer acquisition” story.
What is a “killer acquisition”…?
Let’s take a step back. Because monopoly profits are, by definition, higher than joint duopoly profits (all else equal), economists have long argued that incumbents may find it profitable to acquire smaller rivals in order to reduce competition and increase their profits. More specifically, incumbents may be tempted to acquire would-be entrants in order to prevent them from introducing innovations that might hurt the incumbent’s profits.
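The profit asymmetry driving this logic can be sketched numerically. The toy model below is our own illustration (not from the original post), using a linear-demand Cournot market: the monopolist’s profit exceeds the combined duopoly profits, so an incumbent’s loss from entry is larger than the entrant’s standalone value — leaving room to pay an acquisition premium.

```python
# Hypothetical illustration: linear demand P = a - Q with constant
# marginal cost c. Compare monopoly profit to Cournot duopoly profits
# to show why an incumbent can profitably outbid an entrant's value.

def monopoly_profit(a, c):
    q = (a - c) / 2           # monopolist's optimal quantity
    return (a - q - c) * q    # profit = (price - cost) * quantity

def cournot_profit_per_firm(a, c):
    q = (a - c) / 3           # each duopolist's equilibrium quantity
    return (a - 2 * q - c) * q

a, c = 100, 20  # assumed demand intercept and marginal cost
pi_m = monopoly_profit(a, c)          # 1600.0
pi_d = cournot_profit_per_firm(a, c)  # ~711.1 per firm

# The incumbent's loss from entry exceeds the entrant's standalone
# duopoly profit, so a "killer" premium can still be profitable:
incumbent_gain = pi_m - pi_d  # value to incumbent of preventing entry
print(incumbent_gain > pi_d)  # True: room to pay more than entrant's value
```

Under these assumed numbers, the incumbent would pay up to roughly 889 to stop entry, while the entrant’s standalone value is only about 711 — the asymmetry the theory depends on.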
For this theory to have any purchase, however, a number of conditions must hold. Most importantly, as Colleen Cunningham, Florian Ederer, and Song Ma put it in an influential paper:
“killer acquisitions” can only occur when the entrepreneur’s project overlaps with the acquirer’s existing product…. [W]ithout any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur… because, without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.
Moreover, the authors add that:
Successfully developing a new product draws consumer demand and profits away equally from all existing products. An acquiring incumbent is hurt more by such cannibalization when he is a monopolist (i.e., the new product draws demand away only from his own existing product) than when he already faces many other existing competitors (i.e., cannibalization losses are spread over many firms). As a result, as the number of existing competitors increases, the replacement effect decreases and the acquirer’s development decisions become more similar to those of the entrepreneur.
Finally, the “killer acquisition” terminology is appropriate only when the incumbent chooses to discontinue its rival’s R&D project:
If incumbents face significant existing competition, acquired projects are not significantly more frequently discontinued than independent projects. Thus, more competition deters incumbents from acquiring and terminating the projects of potential future competitors, which leads to more competition in the future.
…And what isn’t a killer acquisition?
What is left out of this account of killer acquisitions is the age-old possibility that an acquirer purchases a rival precisely because it has superior know-how or a superior governance structure that enables it to realize greater return and more productivity than its target. In the case of a so-called killer acquisition, this means shutting down a negative ROI project and redeploying resources to other projects or other uses — including those that may not have any direct relation to the discontinued project.
Such “synergistic” mergers are also — like allegedly “killer” mergers — likely to involve acquirers and targets in the same industry and with technological overlap between their R&D projects; it is in precisely these situations that the acquirer is likely to have better knowledge than the target’s shareholders that the target is undervalued because of poor governance rather than exogenous, environmental factors.
In other words, whether an acquisition is harmful or not — as the epithet “killer” implies it is — depends on whether it is about reducing competition from a rival, on the one hand, or about increasing the acquirer’s competitiveness by putting resources to more productive use, on the other.
As argued below, it is highly unlikely that Covidien’s acquisition of Newport could be classified as a “killer acquisition.” There is thus nothing to suggest that the merger materially impaired competition in the mechanical ventilator market, or that it measurably affected the US’s efforts to fight COVID-19.
The market realities of the ventilator market and its implications for the “killer acquisition” story
1. The mechanical ventilator market is highly competitive
As explained above, “killer acquisitions” are less likely to occur in competitive markets. Yet the mechanical ventilator industry is extremely competitive.
Competition in the medical ventilator market is intense.
The conclusion that the mechanical ventilator industry is highly competitive is further supported by the fact that the five largest producers combined reportedly hold only 50% of the market. In other words, available evidence suggests that none of these firms has anything close to a monopoly position.
Similarly, following preliminary investigations, neither the FTC nor the European Commission saw the need for an in-depth look at the ventilator market when they reviewed Medtronic’s subsequent acquisition of Covidien (which closed in 2015). Although Medtronic did not produce any mechanical ventilators before the acquisition, authorities (particularly the European Commission) could nevertheless have analyzed that market if Covidien’s presumptive market share was particularly high. The fact that they declined to do so tends to suggest that the ventilator market was relatively unconcentrated.
2. The value of the merger was too small
A second strong reason to believe that Covidien’s purchase of Newport wasn’t a killer acquisition is the acquisition’s value of $103 million.
Indeed, if it was clear that Newport was about to revolutionize the ventilator market, then Covidien would likely have been made to pay significantly more than $103 million to acquire it.
As noted above, the crux of the “killer acquisition” theory is that incumbents can induce welfare-reducing acquisitions by offering to acquire their rivals for significantly more than the present value of their rivals’ expected profits. Because an incumbent undertaking a “killer” takeover expects to earn monopoly profits as a result of the transaction, it can offer a substantial premium and still profit from its investment. It is this basic asymmetry that drives the theory.
[Where] a court may lack the expertise to [assess the commercial significance of acquired technology]…, the transaction value… may provide a reasonable proxy. Intuitively, if the startup is a relatively small company with relatively few sales to its name, then a very high acquisition price may reasonably suggest that the startup technology has significant promise.
The strategy only works, however, if the target firm’s shareholders agree that share value properly reflects only “normal” expected profits, and not that the target is poised to revolutionize its market with a uniquely low-cost or high-quality product. Low acquisition prices relative to market size, therefore, tend to reflect low (or normal) expected profits, and a low perceived likelihood of radical innovations occurring.
We can apply this reasoning to Covidien’s acquisition of Newport:
Precise and publicly available figures concerning the mechanical ventilator market are hard to come by. Nevertheless, one estimate finds that the global ventilator market was worth $2.715 billion in 2012. Another report suggests that the global market was worth $4.30 billion in 2018; still another that it was worth $4.58 billion in 2019.
As noted above, Covidien reported to the SEC that it paid $103 million to purchase Newport (a firm that produced only ventilators and apparently had no plans to branch out).
For context, at the time of the acquisition Covidien had annual sales of $11.8 billion overall, and $743 million in sales of its existing “Airways and Ventilation Products.”
If the ventilator market was indeed worth billions of dollars per year, then the comparatively small $103 million paid by Covidien — small even relative to Covidien’s own share of the market — suggests that, at the time of the acquisition, it was unlikely that Newport was poised to revolutionize the market for mechanical ventilators (for instance, by successfully bringing its Aura ventilator to market).
The New York Times article claimed that Newport’s ventilators would be sold (at least to the US government) for $3,000 — a substantial discount from the reportedly then-going rate of $10,000. If selling ventilators at this price seemed credible at the time, then Covidien — as well as Newport’s shareholders — would have known that Newport was about to achieve tremendous cost savings, enabling it to offer ventilators not only to the US government, but to purchasers around the world, at an irresistibly attractive — and profitable — price.
Ventilators at the time typically went for about $10,000 each, and getting the price down to $3,000 would be tough. But Newport’s executives bet they would be able to make up for any losses by selling the ventilators around the world.
“It would be very prestigious to be recognized as a supplier to the federal government,” said Richard Crawford, who was Newport’s head of research and development at the time. “We thought the international market would be strong, and there is where Newport would have a good profit on the product.”
If achievable, Newport thus stood to earn a substantial share of the profits in a multi-billion dollar industry.
Of course, it is necessary to apply a probability to these numbers: Newport’s ventilator was not yet on the market, and had not yet received FDA approval. Nevertheless, if the Times’ numbers seemed credible at the time, then Covidien would surely have had to offer significantly more than $103 million in order to induce Newport’s shareholders to part with their shares.
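A back-of-envelope sketch makes the point concrete. The only figures below taken from the post are the reported $103 million price and the rough market-size estimates; the share, margin, and horizon are our own hypothetical assumptions, and discounting and Newport’s other product lines are ignored for simplicity.

```python
# Illustrative (assumed) expected-value arithmetic: if the Aura had a
# real chance of disrupting a multi-billion-dollar market, the success
# probability implied by a $103M acquisition price would be very low.

acquisition_price = 103e6  # reported purchase price for Newport
market_size = 3e9          # rough annual ventilator market (2012 estimate)
assumed_share = 0.25       # hypothetical share a disruptive entrant wins
assumed_margin = 0.20      # hypothetical net profit margin
years_of_profit = 10       # hypothetical horizon, undiscounted

# Payoff if the Aura succeeds, under the assumptions above:
success_payoff = market_size * assumed_share * assumed_margin * years_of_profit

# Probability of success implied by the acquisition price:
implied_probability = acquisition_price / success_payoff
print(f"{implied_probability:.1%}")  # prints "6.9%" under these assumptions
```

Even with generous assumptions about the upside, the price implies that both parties put long odds on the Aura succeeding — consistent with the "moonshot" reading rather than the "killer acquisition" one.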
Given the low valuation, however, as well as the fact that Newport produced other ventilators — and continues to do so to this day — there is no escaping the conclusion that everyone involved seemed to view Newport’s Aura ventilator as nothing more than a moonshot with, at best, a low likelihood of success.
Crucially, this same reasoning explains why it shouldn’t surprise anyone that the project was ultimately discontinued; recourse to a “killer acquisition” theory is hardly necessary.
3. Lessons from Covidien’s ventilator product decisions
The killer acquisition claims are further weakened by at least four other important pieces of information:
Covidien initially continued to develop Newport’s Aura ventilator, and continued to develop and sell Newport’s other ventilators.
There was little overlap between Covidien’s and Newport’s ventilators — or, at the very least, they were highly differentiated.
Covidien appears to have discontinued production of its own portable ventilator in 2014.
The Newport purchase was part of a billion-dollar series of acquisitions seemingly aimed at expanding Covidien’s in-hospital (i.e., non-portable) device portfolio.
Covidien initially continued to develop Newport’s Aura ventilator
For a start, while the Aura line was indeed discontinued by Covidien, the timeline is important. The acquisition of Newport by Covidien was announced in March 2012, approved by the FTC in April of the same year, and the deal was closed on May 1, 2012.
However, as the FDA’s 510(k) database makes clear, Newport submitted documents for FDA clearance of the Aura ventilator months after its acquisition by Covidien (June 29, 2012, to be precise). And the Aura received FDA 510(k) clearance on November 9, 2012 — many months after the merger.
It would have made little sense for Covidien to invest significant sums in order to obtain FDA clearance for a project that it planned to discontinue (the FDA routinely requires parties to actively cooperate with it, even after 510(k) applications are submitted).
Moreover, if Covidien really did plan to discreetly kill off the Aura ventilator, bungling the FDA clearance procedure would have been the perfect cover under which to do so. Yet that is not what it did.
Covidien continued to develop and sell Newport’s other ventilators
Second, and just as importantly, Covidien (and subsequently Medtronic) continued to sell Newport’s other ventilators. The Newport e360 and HT70 are still sold today. Covidien also continued to improve these products: it appears to have introduced an improved version of the Newport HT70 Plus ventilator in 2013.
If eliminating its competitor’s superior ventilators was the only goal of the merger, then why didn’t Covidien also eliminate these two products from its lineup, rather than continue to improve and sell them?
At least part of the answer, as will be seen below, is that there was almost no overlap between Covidien and Newport’s product lines.
There was little overlap between Covidien’s and Newport’s ventilators
Third — and perhaps the biggest flaw in the killer acquisition story — is that there appears to have been very little overlap between Covidien and Newport’s ventilators.
This decreases the likelihood that the merger was a killer acquisition. When two products are highly differentiated (or not substitutes at all), sales of the first are less likely to cannibalize sales of the other. As Florian Ederer and his co-authors put it:
Importantly, without any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur, neither to “Acquire to Kill” nor to “Acquire to Continue.” This is because without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.
A quick search of the FDA’s 510(k) database reveals that Covidien has three approved lines of ventilators: the Puritan Bennett 980, 840, and 540 (apparently essentially the same as the PB560, the plans to which Medtronic recently made freely available in order to facilitate production during the current crisis). The same database shows that these ventilators differ markedly from Newport’s ventilators (particularly the Aura).
In particular, Covidien manufactured primarily traditional, invasive ICU ventilators (except for the PB540, which is potentially a substitute for the Newport HT70), while Newport made far more portable ventilators suitable for home use (notably the Aura, HT50, and HT70 lines).
Under normal circumstances, critical care and portable ventilators are not substitutes. As the WHO website explains, portable ventilators are:
[D]esigned to provide support to patients who do not require complex critical care ventilators.
A quick glance at Medtronic’s website neatly illustrates the stark differences between these two types of devices.
This is not to say that these devices do not have similar functionalities, or that they cannot become substitutes in the midst of a coronavirus pandemic. However, in normal times (as was the case when Covidien acquired Newport), hospitals likely did not view these devices as substitutes.
The conclusion that Covidien’s and Newport’s ventilators were not substitutes finds further support in documents and statements released at the time of the merger. For instance, Covidien’s CEO explained that:
This acquisition is consistent with Covidien’s strategy to expand into adjacencies and invest in product categories where it can develop a global competitive advantage.
Newport’s products and technology complement our current portfolio of respiratory solutions and will broaden our ventilation platform for patients around the world, particularly in emerging markets.
In short, the fact that almost all of Covidien and Newport’s products were not substitutes further undermines the killer acquisition story. It also tends to vindicate the FTC’s decision to rapidly terminate its investigation of the merger.
Covidien appears to have discontinued production of its own portable ventilator in 2014
Perhaps most tellingly: It appears that Covidien discontinued production of its own competing, portable ventilator, the Puritan Bennett 560, in 2014.
The product is reported on the company’s 2011, 2012 and 2013 annual reports:
Airway and Ventilation Products — airway, ventilator, breathing systems and inhalation therapy products. Key products include: the Puritan Bennett™ 840 line of ventilators; the Puritan Bennett™ 520 and 560 portable ventilator….
Surely if Covidien had intended to capture the portable ventilator market by killing off its competition it would have continued to actually sell its own, competing device. The fact that the only portable ventilators produced by Covidien by 2014 were those it acquired in the Newport deal strongly suggests that its objective in that deal was the acquisition and deployment of Newport’s viable and profitable technologies — not the abandonment of them. This, in turn, suggests that the Aura was not a viable and profitable technology.
(Admittedly, we are unable to determine conclusively that either Covidien or Medtronic stopped producing the PB520/540/560 series of ventilators. But our research strongly indicates that this is indeed the case.)
Putting the Newport deal in context
Finally, although not dispositive, it seems important to put the Newport purchase into context. In the same year as it purchased Newport, Covidien paid more than a billion dollars to acquire five other companies, as well — all of them primarily producing in-hospital medical devices.
That 2012 spending spree came on the heels of a series of previous medical device company acquisitions, apparently totaling some four billion dollars. Although not exclusively so, Covidien’s acquisitions seem to have been primarily targeted at operating-room and in-hospital monitoring and treatment — making the putative focus on cornering the portable (home and emergency) ventilator market an extremely unlikely one.
When Covidien was later purchased by Medtronic, that deal easily cleared antitrust review because of the lack of overlap between the two companies’ products, with Covidien focusing predominantly on in-hospital, “diagnostic, surgical, and critical care” devices and Medtronic on post-acute care.
Newport misjudged the costs associated with its Aura project; Covidien was left to pick up the pieces
So why was the Aura ventilator discontinued?
Although it is almost impossible to know what motivated Covidien’s executives, the Aura ventilator project clearly suffered from many problems.
The Aura project was intended to meet the requirements of the US government’s BARDA program (under the auspices of the U.S. Department of Health and Human Services’ Biomedical Advanced Research and Development Authority). In short, the program sought to create a stockpile of next generation ventilators for emergency situations — including, notably, pandemics. The ventilator would thus have to be designed for events where
mass casualties may be expected, and when shortages of experienced health care providers with respiratory support training, and shortages of ventilators and accessory components may be expected.
The Aura ventilator would thus sit somewhere between Newport’s two other ventilators: the e360 which could be used in pediatric care (for newborns smaller than 5kg) but was not intended for home care use (or the extreme scenarios envisioned by the US government); and the more portable HT70 which could be used in home care environments, but not for newborns.
Unfortunately, the Aura failed to achieve this goal. The FDA’s 510(k) clearance decision clearly states that the Aura was not intended for newborns:
The AURA family of ventilators is applicable for infant, pediatric and adult patients greater than or equal to 5 kg (11 lbs.).
As was reported at the time:
the company was unable to secure FDA approval for use in neonatal populations — a contract requirement.
And the US Government RFP confirms that this was indeed an important requirement:
The device must be able to provide the same standard of performance as current FDA pre-market cleared portable ventilators and shall have the following additional characteristics or features:
• Flexibility to accommodate a wide patient population range from neonate to adult.
Newport also seems to have been unable to deliver the ventilator at the low price it had initially forecasted — a common problem for small companies and/or companies that undertake large R&D programs. It also struggled to complete the project within the agreed-upon deadlines. As the Medtronic press release explains:
Covidien learned that Newport’s work on the ventilator design for the Government had significant gaps between what it had promised the Government and what it could deliver — both in terms of being able to achieve the cost of production specified in the contract and product features and performance. Covidien management questioned whether Newport’s ability to complete the project as agreed to in the contract was realistic.
As Jason Crawford, an engineer and tech industry commentator, put it:
Projects fail all the time. “Supplier risk” should be a standard checkbox on anyone’s contingency planning efforts. This is even more so when you deliberately push the price down to 30% of the market rate. Newport did not even necessarily expect to be profitable on the contract.
The above is mostly Covidien’s “side” of the story, of course. But other pieces of evidence lend some credibility to these claims:
Newport agreed to deliver its Aura ventilator at a per-unit cost of less than $3,000. But, even today, this seems extremely ambitious. For instance, the WHO has estimated that portable ventilators cost between $3,300 and $13,500. If Newport could profitably sell the Aura at such a low price, then there was little reason to discontinue it (readers will recall that development of the ventilator was mostly complete when Covidien put a halt to the project).
Covidien/Newport is not the only firm to have struggled to offer suitable ventilators at such a low price. Philips (which took Newport’s place after the government contract fell through) also failed to achieve this low price. Rather than the $2,000 price sought in the initial RFP, Philips ultimately agreed to produce the ventilators for $3,280. But it has not yet been able to produce a single ventilator under the government contract at that price.
Covidien has repeatedly been forced to recall some of its other ventilators (here, here, and here) — including the Newport HT70. And rival manufacturers have also faced these types of issues (for example, here and here).
Accordingly, Covidien may well have preferred to cut its losses on the already problem-prone Aura project, before similar issues rendered it even more costly.
In short, while it is impossible to prove that these development issues caused Covidien to pull the plug on the Aura project, it is certainly plausible that they did. This further supports the hypothesis that Covidien’s acquisition of Newport was not a killer acquisition.
Ending the Aura project might have been an efficient outcome
As suggested above, moreover, it is entirely possible that Covidien was better able than Newport to assess the Aura project’s poor prospects, and better organized to make the requisite decision to abandon it.
Moreover, the relatively large share of revenue and reputation that Newport — worth $108 million in 2012, versus Covidien’s $11.8 billion in annual sales — stood to gain from fulfilling a substantial US government contract could well have induced it to overestimate the project’s viability and to undertake excessive risk in the (vain) hope of bringing the project to fruition.
While there is a tendency among antitrust scholars, enforcers, and practitioners to look for (and find…) antitrust-related rationales for mergers and other corporate conduct, it remains the case that most corporate control transactions (such as mergers) are driven by the acquiring firm’s expectation that it can manage more efficiently. As Henry G. Manne put it in his seminal article, Mergers and the Market for Corporate Control (1965):
Since, in a world of uncertainty, profitable transactions will be entered into more often by those whose information is relatively more reliable, it should not surprise us that mergers within the same industry have been a principal form of changing corporate control. Reliable information is often available to suppliers and customers as well. Thus many vertical mergers may be of the control takeover variety rather than of the “foreclosure of competitors” or scale-economies type.
Of course, the same information that renders an acquiring firm in the same line of business knowledgeable enough to operate a target more efficiently could also enable it to effect a “killer acquisition” strategy. But the important point is that a takeover by a firm with a competing product line, after which the purchased company’s product line is abandoned, is at least as consistent with a “market for corporate control” story as with a “killer acquisition” story.
“Killer acquisitions” can have a nefarious image, but killing off a rival’s product was probably not the main purpose of the transaction, Ederer said. He raised the possibility that Covidien decided to kill Newport’s innovation upon realising that the development of the devices would be expensive and unlikely to result in profits.
In conclusion, Covidien’s acquisition of Newport offers a cautionary tale about reckless journalism, “blackboard economics,” and government failure.
Reckless journalism because the New York Times clearly failed to do the appropriate due diligence for its story. Its journalists notably missed (or deliberately failed to mention) a number of critical pieces of information — such as the hugely important fact that most of Covidien’s and Newport’s products did not overlap, or the fact that there were numerous competitors in the highly competitive mechanical ventilator industry.
And yet, that did not stop the authors from publishing their extremely alarming story, effectively suggesting that a small medical device merger materially contributed to the loss of many American lives.
“Blackboard economics” because, as Ronald Coase put it:
What is studied is a system which lives in the minds of economists but not on earth.
Numerous commentators rushed to fit the story to their preconceived narratives, failing to undertake even a rudimentary examination of the underlying market conditions before voicing their recriminations.
The only thing that Covidien and Newport’s merger ostensibly had in common with the killer acquisition theory was the fact that a large firm purchased a small rival, and that one of the small firm’s products was discontinued. But this does not even begin to meet the stringent conditions that must be fulfilled for the theory to hold water. Unfortunately, critics appear to have completely ignored all of the contradicting evidence.
Finally, what the New York Times piece does offer is a chilling tale of government failure.
The inception of the US government’s BARDA program dates back to 2008 — twelve years before the COVID-19 pandemic hit the US.
The collapse of the Aura project is no excuse for the fact that, more than six years after the Newport contract fell through, the US government still has not obtained the necessary ventilators. Questions should also be raised about the government’s decision to effectively put all of its eggs in the same basket — twice. If anything, it is thus government failure that was the real culprit.
And yet the New York Times piece and the critics shouting “killer acquisition!” effectively give the US government’s abject failure here a free pass — all in the service of pursuing their preferred “killer story.”
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Geoffrey A. Manne, (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics).]
There has been much (admittedly important) discussion of the economic woes of mass quarantine to thwart the spread and “flatten the curve” of the virus and its health burdens — as well as some extremely interesting discussion of the long-term health woes of quarantine and the resulting economic downturn: see, e.g., previous work by Christopher Ruhm suggesting mortality rates may improve during economic downturns, and this thread on how that might play out differently in the current health crisis.
But there is perhaps insufficient attention being paid to the more immediate problem of medical resource scarcity to treat large, localized populations of acutely sick people — something that will remain a problem for some time in places like New York, no matter how successful we are at flattening the curve.
Yet the fact that we may have failed to prepare adequately for the current emergency does not mean that we can’t improve our ability to respond to the current emergency and build up our ability to respond to subsequent emergencies — both in terms of future, localized outbreaks of COVID-19, as well as for other medical emergencies more broadly.
In what follows I lay out the outlines of a proposal for an OPTN (Organ Procurement and Transplantation Network) analogue for allocating emergency medical resources. In order to make the idea more concrete (and because no doubt there is a limit to the types of medical resources for which such a program would be useful or necessary), let’s call it the VPAN — Ventilator Procurement and Allocation Network.
As quickly as possible in order to address the current crisis — and definitely with enough speed to address the next crisis — we should develop a program to collect relevant data and enable deployment of medical resources where they are most needed, using such data, wherever possible, to enable deployment before shortages become the enormous problem they are today.
Data and information are important tools for mitigating emergencies
Hal’s post, especially in combination with Julian’s, offers a really useful suggestion for using modern information technology to help mitigate one of the biggest problems of the current crisis: The ability to return to economic activity (and a semblance of normalcy) as quickly as possible.
What I like most about his idea (and, again, Julian’s) is its incremental approach: We don’t have to wait until it’s safe for everyone to come outside in order for some people to do so. And, properly collected, assessed, and deployed, information is a key part of making that possible for more and more people every day.
Here I want to build on Hal’s idea to suggest another — perhaps even more immediately crucial — use of data to alleviate the COVID-19 crisis: The allocation of scarce medical resources.
In the current crisis, the “what” of this data is apparent: it is the testing data described by Julian in his post, and implemented in digital form by Hal in his. Thus, whereas Hal’s proposal contemplates using this data solely to allow proprietors (public transportation, restaurants, etc.) to admit entry to users, my proposal contemplates something more expansive: the provision of Hal’s test-verification vendors’ data to a centralized database in order to use it to assess current medical resource needs and to predict future needs.
The apparent ventilator availability crisis
As I have learned at great length from a friend whose spouse is an ICU doctor on the front lines, the current ventilator scarcity in New York City is worrisome (from a personal email, edited slightly for clarity):
When doctors talk about overwhelming a medical system, and talk about making life/death decisions, often they are talking about ventilators. A ventilator costs somewhere between $25K to $50K. Not cheap, but not crazy expensive. Most of the time these go unused, so hospitals have not stocked up on them, even in first-rate medical systems. Certainly not in the US, where equipment has to get used or the hospital does not get reimbursed for the purchase.
With a bad case of this virus you can put somebody — the sickest of the sickest — on one of those for three days and many of them don’t die. That frames a brutal capacity issue in a local area. And that is what has happened in Italy. They did not have enough ventilators in specific cities where the cases spiked. The mortality rates were much higher solely due to lack of these machines. Doctors had to choose who got on the machine and who did not. When you read these stories about a choice of life and death, that could be one reason for it.
Now the brutal part: This is what NYC might face soon. Faster than expected, by the way. Maybe they will ship patients to hospitals in other parts of NY state, and in NJ and CT. Maybe they can send them to the V.A. hospitals. Those are the options for how they hope to avoid this particular capacity issue. Maybe they will flatten the curve just enough with all the social distancing. Hard to know just now. But right now the doctors are pretty scared, and they are planning for the worst.
A 2018 analysis from the Johns Hopkins University Center for Health Security estimated we have around 160,000 ventilators in the U.S. If the “worst-case scenario” were to come to pass in the U.S., “there might not be” enough ventilators, Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, told CNN on March 15.
“If you don’t have enough ventilators, that means [obviously] that people who need it will not be able to get it,” Fauci said. He stressed that it was most important to mitigate the virus’ spread before it could overwhelm American health infrastructure.
Reports say that the American Hospital Association believes almost 1 million COVID-19 patients in the country will require a ventilator. Not every patient will require ventilation at the same time, but the numbers are still concerning. Dr. Daniel Horn, a physician at Massachusetts General Hospital in Boston, warned in a March 22 editorial in The New York Times that “There simply will not be enough of these machines, especially in major cities.”
Medical resource scarcity in the current crisis is a drastic problem. And without significant efforts to ameliorate it, it is likely to get worse before it gets better.
Using data to allocate scarce resources: The basic outlines of a proposed “Ventilator Procurement and Allocation Network”
But that doesn’t mean that the scarce resources we do have can’t be better allocated. As the PBS story quoted above notes, there are some 160,000 ventilators in the US. While that may not be enough in the aggregate, it’s considerably more than are currently needed in, say, New York City — and a great number of them are surely not currently being used, nor likely immediately to need to be used.
The basic outline of the idea for redistributing these resources is fairly simple:
First, register all of the US’s existing ventilators in a centralized database.
Second (using a system like the one Hal describes), collect and update in real time the relevant test results, contact tracing, demographic, and other epidemiological data and input it into a database.
Third, analyze this data using one or more compartmental models (or more targeted, virus-specific models) — (NB: I am the furthest thing from an epidemiologist, so I make no claims about how best to do this; the link above, e.g., is merely meant to be illustrative and not a recommendation) — to predict the demand for ventilators at various geographic levels, ranging from specific hospitals to counties or states. In much the same way, allocation of organs in the OPTN is based on a set of “allocation calculators” (which in turn are intended to implement the “Final Rule” adopted by HHS to govern transplant organ allocation decisions).
Fourth, ask facilities in low-expected-demand areas to send their unused (or excess above the level required to address “normal” demand) ventilators to those in high-expected-demand areas, with the expectation that they will be consistently reallocated across all hospitals and emergency care facilities according to the agreed-upon criteria. Of course, the allocation “algorithm” would be more complicated than this (as is the HHS Final Rule for organ allocation). But in principle this would be the primary basis for allocation.
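The four steps above might be sketched as follows. This is a toy illustration only: the hospital names, inventories, and demand figures are invented, and a real system would derive `predicted_demand` from the epidemiological models described in step three rather than hard-coding it:

```python
# Toy sketch of the reallocation step (step four above). All facility data
# here is invented for illustration.

registry = {                      # step one: registered ventilators per facility
    "NYC General":   {"ventilators": 120, "predicted_demand": 400},
    "Upstate Rural": {"ventilators": 90,  "predicted_demand": 15},
    "Midwest Metro": {"ventilators": 200, "predicted_demand": 60},
}

BASELINE_RESERVE = 20  # ventilators each facility keeps for "normal" demand

def reallocate(registry, reserve=BASELINE_RESERVE):
    """Move surplus ventilators (stock above predicted demand plus a reserve)
    toward facilities whose predicted demand exceeds their stock."""
    surplus, deficit = {}, {}
    for name, h in registry.items():
        need = h["predicted_demand"] + reserve
        if h["ventilators"] > need:
            surplus[name] = h["ventilators"] - need
        elif h["ventilators"] < h["predicted_demand"]:
            deficit[name] = h["predicted_demand"] - h["ventilators"]

    transfers = []
    # Serve the largest predicted shortfalls first.
    for to, short in sorted(deficit.items(), key=lambda kv: -kv[1]):
        for frm in list(surplus):
            if short == 0:
                break
            moved = min(surplus[frm], short)
            if moved:
                transfers.append((frm, to, moved))
                surplus[frm] -= moved
                short -= moved
    return transfers

for frm, to, n in reallocate(registry):
    print(f"{frm} -> {to}: {n} ventilators")
```

Note that in this example the aggregate shortfall exceeds the available surplus, so part of the predicted deficit goes unfilled; as discussed below, no allocation scheme can conjure resources that do not exist in the aggregate, but it can ensure that what exists goes where it is most needed.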
Not surprisingly, some guidelines for the allocation of ventilators in such emergencies already exist — like New York’s Ventilator Allocation Guidelines for triaging ventilators during an influenza pandemic. But such guidelines address the protocols for each facility to use in determining how to allocate its own scarce resources; they do not contemplate the ability to alleviate shortages in the first place by redistributing ventilators across facilities (or cities, states, etc.).
I believe that such a system — like the OPTN — could largely work on a voluntary basis. Of course, I’m quick to point out that the OPTN is a function of a massive involuntary and distortionary constraint: the illegality of organ sales. But I suspect that a crisis like the one we’re currently facing is enough to engender much the same sort of shortage (as if such a constraint were in place with respect to the use of ventilators), and thus that a similar system would be similarly useful. If not, of course, it’s possible that the government could, in emergency situations, actually commandeer privately-owned ventilators in order to effectuate the system. I leave for another day the consideration of the merits and defects of such a regime.
Of course, it need not rely on voluntary participation. There could be any number of feasible means of inducing hospitals that have unused ventilators to put their surpluses into the allocation network, presumably involving some sort of cash or other compensation. Or perhaps, if and when such a system were expanded to include other medical resources, it might involve moving donor hospitals up the queue for some other scarce resources they need that don’t face a current crisis. Surely there must be equipment that a New York City hospital has in relative surplus that a small town hospital covets.
But the key point is this: It doesn’t make sense to produce and purchase enough ventilators so that every hospital in the country can simultaneously address extremely rare peak demands. Doing so would be extraordinarily — and almost always needlessly — expensive. And emergency preparedness is never about ensuring that there are no shortages in the worst-case scenario; it’s about making a minimax calculation (as odious as those are) — i.e., minimizing the maximal cost/risk, not mitigating risk entirely. (For a literature review of emergency logistics in the context of large-scale disasters, see, e.g., here.)
But nor does it make sense — as a policy matter — to allocate the new ventilators that will be produced in response to current demand solely on the basis of current demand. The epidemiological externalities of the current pandemic are substantial, and there is little reason to think that currently over-taxed emergency facilities — or even those preparing for their own expected demand — will make procurement decisions that reflect the optimal national (let alone global) allocation of such resources. A system like the one I outline here would effectively enable the conversion of private, constrained decisions to serve the broader demands required for optimal allocation of scarce resources in the face of epidemiological externalities.
Indeed — and importantly — such a program allows the government to supplement existing and future public and private procurement decisions to ensure an overall optimal level of supply (and, of course, government-owned ventilators — 10,000 of which already exist in the Strategic National Stockpile — would similarly be put into the registry and deployed using the same criteria). Meanwhile, it would allow private facilities to confront emergency scenarios like the current one with far more resources than it would ever make sense for any given facility to have on hand in normal times.
There are, as always, caveats. First, such a program relies on the continued, effective functioning of transportation networks. If any given emergency were to disrupt these — and surely some would — the program would not necessarily function as planned. Of course, some of this can be mitigated by caching emergency equipment in key locations, and, over the course of an emergency, regularly redistributing those caches to facilitate expected deployments as the relevant data comes in. But, to be sure, at the end of the day such a program depends on the ability to transport ventilators.
In addition, there will always be the risk that emergency needs swamp even the aggregate available resources simultaneously (as may yet occur during the current crisis). But at the limit there is nothing that can be done about such an eventuality: Short of having enough ventilators on hand so that every needy person in the country can use one essentially simultaneously, there will always be the possibility that some level of demand will outpace our resources. But even in such a situation — where allocation of resources is collectively guided by epidemiological (or, in the case of other emergencies, other relevant) criteria — the system will work to mitigate the likely overburdening of resources, and ensure that overall resource allocation is guided by medically relevant criteria, rather than merely the happenstance of geography, budget constraints, storage space, or the like.
Finally, no doubt a host of existing regulations make such a program difficult or impossible. Obviously, these should be rescinded. One set of policy concerns is worth noting specifically: privacy. There is an inherent conflict between strong data privacy, in which decisions about the sharing of information belong to each individual, and the data needs of combating an epidemic, in which each person’s privately optimal level of data sharing may result in a socially sub-optimal level of shared data. To the extent that HIPAA or other privacy regulations would stand in the way of a program like this, it seems singularly important to relax them. Much of the relevant data cannot be efficiently collected on an opt-in basis (as is easily done, by contrast, for the OPTN). Certainly appropriate safeguards should be put in place (particularly with respect to the ability of government agencies/law enforcement to access the data). But an individual’s idiosyncratic desire to constrain the sharing of personal data in this context seems manifestly less important than the benefits of, at the very least, a default rule that the relevant data be shared for these purposes.
Appropriate standards for emergency preparedness policy generally
Importantly, such a plan would have broader applicability beyond ventilators and the current crisis. And this is a key aspect of addressing the problem: avoiding a myopic focus on the current emergency at the expense of a more clear-eyed emergency preparedness plan.
It’s important to be thinking not only about the current crisis but also about the next emergency. But it’s equally important not to let political point-scoring and a bias in favor of focusing on the seen over the unseen co-opt any such efforts. A proper assessment entails the following considerations (surely among others) (and hat tip to Ron Cass for bringing most of the following insights to my attention):
Arguably we are overweighting health and safety concerns with respect to COVID-19 compared to our assessments in other areas (such as ordinary flu (on which see this informative thread by Anup Malani), highway safety, heart & coronary artery diseases, etc.). That’s inevitable when one particular concern is currently so omnipresent and so disruptive. But it is important that we not let our preparations for future problems focus myopically on this cause, because the next crisis may be something entirely different.
Nor is it reasonable to expect that we would ever have been (or be in the future) fully prepared for a global pandemic. It may not be an “unknown unknown,” but it is impossible to prepare for all possible contingencies, and simply not sensible to prepare fully for such rare and difficult-to-predict events.
That said, we also shouldn’t be surprised that we’re seeing more frequent global pandemics (a function of broader globalization), and there’s little reason to think that we won’t continue to do so. It makes sense to be optimally prepared for such eventualities, and if this one has shown us anything, it’s that our ability to allocate medical resources that are made suddenly scarce by a widespread emergency is insufficient.
But rather than overreact to such crises — which is difficult, given that overreaction typically aligns with the private incentives of key decision makers, the media, and many in the “chattering class” — we should take a broader, more public-focused view of our response. Moreover, political and bureaucratic incentives not only produce overreactions to visible crises, they also undermine the appropriate preparation for such crises in the future.
Thus, we should create programs that identify and mobilize generically useful emergency equipment not likely to be made obsolete within a short period and likely to be needed whatever the source of the next emergency. In other words, we should continue to focus the bulk of our preparedness on things like quickly deployable ICU facilities, ventilators, and clean blood supplies — not, as we may be wrongly inclined to do given the salience of the current crisis, primarily on specially targeted drugs and test kits. Our predictive capacity for our future demand of more narrowly useful products is too poor to justify substantial investment.
Given the relative likelihood of another pandemic, generic preparedness certainly includes the ability to inhibit overly fast spread of a disease that can clog critical health care facilities. This isn’t disease-specific (or, that is, while the specific rate and contours of infection are specific to each disease, relatively fast and widespread contagion is what causes any such disease to overtax our medical resources, so if we’re preparing for a future virus-related emergency, we’re necessarily preparing for a disease that spreads quickly and widely).
Because the next emergency isn’t necessarily going to be — and perhaps isn’t even likely to be — a pandemic, our preparedness should not be limited to pandemic preparedness. This means, as noted above, overcoming the political and other incentives to focus myopically on the current problem even when nominally preparing for the next one. But doing so is difficult, and requires considerable political will and leadership. It’s hard to conceive of our current federal leadership being up to the task, but it’s certainly not the case that our current problems are entirely the makings of this administration. All governments spend too much time and attention solving — and regulating — the most visible problems, whether doing so is socially optimal or not.
Thus, in addition to (1) providing for the efficient and effective use of data to allocate emergency medical resources (e.g., as described above), and (2) ensuring that our preparedness centers primarily on generically useful emergency equipment, our overall response should also (3) recognize and correct the way current regulatory regimes also overweight visible adverse health effects and inhibit competition and adaptation by industry and those utilizing health services, and (4) make sure that the economic and health consequences of emergency and regulatory programs (such as the current quarantine) are fully justified and optimized.
A proposal like the one I outline above would, I believe, be consistent with these considerations and enable more effective medical crisis response in general.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Hal Singer (Managing Director, Econ One; Adjunct Professor, Georgetown University, McDonough School of Business).]
In these harrowing times, it is natural to fixate on the problem of testing—and how the United States got so far behind South Korea on this front—as a means to arrest the spread of Coronavirus. Under this remedy, once testing becomes ubiquitous, the government could track and isolate everyone who has been in recent contact with someone who has been diagnosed with Covid-19.
A good start, but there are several pitfalls to “contact tracing,” or what I call “standalone testing.” First, it creates an outsized role for government and raises privacy concerns relating to how data on our movements and in-person contacts are shared. Second, unless the test results were instantaneously available and continuously updated, data from the tests would not be actionable. A subject could be clear of the virus on Tuesday, get tested on Wednesday, and be exposed to the virus on Friday.

Third, and a problem easily recognizable to economists, standalone testing does not provide any means by which healthy subjects of the test can credibly signal to their peers that they are now safe to be around. Given the skew of the economy toward services—from restaurants to gyms and yoga studios to coffee bars—it is vital that we interact physically. To return to work, or to enter a restaurant or any other high-density environment, the healthy subject must convey to her peers that she is healthy, and other co-workers or patrons in that high-density environment must signal their health to the subject. Without this mutual trust, healthy workers will be reluctant to return to the workplace or to reintegrate into society. It is not enough for complete strangers to say “I’m safe.” How do I know you are safe?
As law professor Thom Lambert tweeted, this information problem is related to the famous lemons problem identified by Nobel laureate George Akerlof: We “can’t tell ‘quality’ so we assume everyone’s a lemon and act accordingly. We once had that problem with rides from strangers, but entrepreneurship and technology solved the problem.”
Akerlof recognized that markets were prone to failure in the face of “asymmetric information,” or when a seller knows a material fact that the buyer does not. He showed that a market for used cars could degenerate into a market exclusively for lemons, because buyers rationally are not willing to pay the full value of a good car, and the discount they would impose on all sellers would drive good cars away.
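Akerlof’s unraveling logic can be made concrete with a small numeric sketch (the car values and the sellers’ reserve prices below are illustrative assumptions, not figures from Akerlof or this post):

```python
# Illustrative sketch of Akerlof's lemons unraveling (all numbers hypothetical).
# Buyers can't observe quality, so they offer the average value of the cars
# still on the market; any seller whose reserve price exceeds that pooled
# offer withdraws, dragging the average down further.

def surviving_market(car_values, seller_reserve_fraction=0.9):
    """Iterate until no remaining seller wants to withdraw at the pooled price."""
    cars = sorted(car_values)
    while cars:
        offer = sum(cars) / len(cars)                 # buyers pay the average value
        keep = [v for v in cars if v * seller_reserve_fraction <= offer]
        if len(keep) == len(cars):                    # no one withdraws: equilibrium
            return cars, offer
        cars = keep                                   # high-quality sellers exit

# Cars worth $2k (lemons) up to $10k (peaches).
cars, price = surviving_market([2000, 4000, 6000, 8000, 10000])
print(cars, price)  # [2000] 2000.0 — only the lemon survives the pooling price
```

Each round, the best remaining cars are worth more than the pooled offer, so their sellers exit, which lowers the average offer and pushes out the next tier — exactly the degeneration described above.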
To solve this related problem, we need a way to verify our good health. Borrowing Lambert’s analogy, most Americans (barring hitchhikers) would never jump in a random car without knowledge that the driver worked for a reputable ride-hailing service or licensed taxi. When an Uber driver pulls up to the curb, the rider can feel confident that the driver has been verified (and vice versa) by a third party—in this case, Uber—and if there’s any doubt of the driver’s credentials, the driver typically speaks the passenger’s name while the door is still ajar. Uber also mitigated the lemons problem by allowing passengers and drivers to engage in reciprocal rating.
Similarly, when a passenger shows up at the airport, he presents a ticket, typically in electronic form on his phone, to a TSA officer. The phone is scanned by security, and verification of ticket and TSA PreCheck status is confirmed via rapid communication with the airline. The same verification is repeated at stadium venues across America, thanks in part to technology developed by StubHub.
A similar verification technology could be deployed to solve the trust problem relating to Coronavirus. It is meant to complement standalone testing. Here’s how it might work:
Each household would have a designated testing center in its community and potentially a test kit in its own home. Testing would be done routinely and free of charge, so as to ensure that test results are up to date. (Given the positive externalities associated with mass testing and verification, the optimal price is not positive.) Just as an airline sends confirmation of a ticket purchase, the company responsible for administering the test would report the results within an hour to the subject, and the results would be stored for 24 hours in the vendor’s app. In contrast to the invasive role of government in contact tracing, the only role for government here would be to approve qualified vendors of the testing equipment.
Armed with third-party verification of her health status on her phone, the subject could present these results to a gatekeeper at any facility. Suppose the subject typically takes the metro to work and stops at her gym before going home. Under this regime, she would present her phone to three gatekeepers (metro, work, gym) to obtain access. Of course, subjects who test positive for Coronavirus would not gain access to these secure sites until the virus has left their system and they subsequently test negative. That seems harsh for them, but imposing this restriction isn’t really a degradation in mobility relative to the status quo, under which access is denied to everyone.
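The gatekeeper check described above amounts to verifying a signed, time-limited attestation from an approved vendor. The sketch below is purely illustrative — the field names, the shared-secret signing, and the 24-hour window are my assumptions, not a specification from the post (a real deployment would presumably use public-key signatures rather than a shared secret):

```python
import hashlib
import hmac
import json

# Hypothetical sketch: an approved test vendor signs a result; a gatekeeper
# (metro turnstile, workplace, gym) verifies the signature and the 24-hour
# freshness window before granting access.

VENDOR_KEY = b"demo-shared-secret"   # assumption for the sketch only
VALID_SECONDS = 24 * 60 * 60         # results expire after 24 hours

def issue_result(subject_id, negative, tested_at, key=VENDOR_KEY):
    """Vendor side: package a test result with an HMAC signature."""
    payload = json.dumps({"subject": subject_id, "negative": negative,
                          "tested_at": tested_at}, sort_keys=True)
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def gatekeeper_admits(token, now, key=VENDOR_KEY):
    """Gatekeeper side: admit only a genuine, fresh, negative result."""
    expected = hmac.new(key, token["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False                              # forged or tampered token
    data = json.loads(token["payload"])
    fresh = now - data["tested_at"] <= VALID_SECONDS
    return data["negative"] and fresh

token = issue_result("rider-123", negative=True, tested_at=1000.0)
print(gatekeeper_admits(token, now=1000.0 + 3600))  # True: negative and fresh
```

Note how the design mirrors the airline-ticket analogy: the gatekeeper never contacts the vendor in real time; it only checks that the credential was issued by an approved party and has not expired.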
When I floated this idea on Twitter a few days ago, it was generally well received, but even supporters spotted potential shortcomings. For example, users could have a fraudulent app on their phones or otherwise fake a negative result. Yet government sanctioning of a select group of test vendors should prevent this type of fraud. Private gatekeepers such as restaurants presumably would not have to operate under any mandate; they have a clear incentive not only to restrict access to verified patrons, but also to advertise that they have strict rules on admission. By the same token, if they did, for some reason, allow people to enter without verification, they could do so. But patrons’ concern for their own health likely would undermine such a permissive policy.
Other skeptics raised privacy concerns. But if a user voluntarily conveys her health status to a gatekeeper, so long as the information stops there, it’s hard to conceive of a privacy violation. Another potential violation would be an equipment vendor’s sharing information about a user’s health status with third parties. Of course, the government could impose restrictions on a vendor’s data sharing as a condition of granting a license to test and verify. But given the circumstances, such sharing could support contact tracing, or allow supplies to be mobilized to areas where there are outbreaks.
Still others noted that some Americans lack phones. For these Americans, I’d suggest that paper verification would suffice or, better yet, subsidized phones.
No solution is flawless. And it’s incredible that we even have to think this way. But who could have imagined, even a few weeks ago, that we would be pinned in our basements, afraid to interact with the world in close quarters? Desperate times call for creative and economically sound measures.
We don’t yet know how bad the coronavirus outbreak will be in America. But we do know that the virus is likely to have a major impact on Americans’ access to medication. Currently, 80% of the active ingredients found in the drugs Americans take are made in China, and the virus has disrupted China’s ability to manufacture and supply those ingredients. Generic drugs, which comprise 90% of America’s drugs, are likely to be particularly impacted because most generics are made in India, and Indian drug makers rely heavily on Chinese-made ingredients. Indeed, on Tuesday, March 3, India decided to restrict exports of 26 drugs and drug ingredients because of reductions in China’s supply. This disruption to the generic supply chain could mean that millions of Americans will not get the drugs they need to stay alive and healthy.
Coronavirus-related shortages are only the latest in a series of problems recently afflicting the generic drug industry. In the last few years, there have been many reports of safety issues affecting generic drug quality at both domestic and overseas manufacturing facilities. Numerous studies have uncovered shady practices and quality defects, including generics contaminated with carcinogens, drugs in which the active ingredients were switched for ineffective or unsafe alternatives, and manufacturing facilities that falsify or destroy documents to conceal their misdeeds.
We’ve also been inundated with stories of generic drug makers hiking prices for their products. Although, as a whole, generic drugs are much cheaper than innovative brand products, the prices for many generic drugs are on the increase. For some generics – Martin Shkreli’s Daraprim, heart medication Digoxin, antibiotic Doxycycline, insulin, and many others – prices have increased by several hundred percent. It turns out that many of the price increases are the result of anticompetitive behavior in the generic market. For others, the price increases are due to the increasing difficulty of generic drug makers to earn profits selling low-priced drugs.
Even before the coronavirus outbreak, there were reports of shortages of critical generic drugs. These shortages often result from drug makers’ lack of incentive to manufacture low-priced drugs that don’t earn much profit. The shortages have been growing in frequency and duration in recent years.
As a result of the shortages, 90 percent of U.S. hospitals report having to find alternative drug therapies, costing patients and hospitals over $400 million last year. In other unfortunate situations, reasonable alternatives simply are not available and patients suffer.
With generic drug makers’ growing list of problems, many policy makers have called for significant changes to America’s approach to the generic drug industry. Perhaps the FDA needs to increase its inspection of overseas facilities? Perhaps the FTC and state and federal prosecutors should step up their investigations and enforcement actions against anticompetitive behavior in the industry? Perhaps the FDA should do even more to promote generic competition by expediting the approval of generic drugs?
While these actions and other proposals could certainly help, none are aimed at resolving more than one or two of the significant problems vexing the industry. Senator Elizabeth Warren has proposed a more substantial overhaul that would bring the U.S. government into the generic-drug-making business. Under Warren’s plan, the Department of Health and Human Services (HHS) would manufacture or contract for the manufacture of drugs to be sold at lower prices. Nationalizing the generic drug industry in this way would make the inspection of manufacturing facilities much easier and could ideally eliminate drug shortages. In January, California’s governor proposed a similar system under which the state would begin manufacturing or contracting to manufacture generic drugs.
Critics of public manufacturing argue that manufacturing and distribution infrastructure would be extremely costly to set up, with taxpayers footing the bill. And even after the initial set-up, market dynamics that affect costs, such as increasing raw material costs or supply chain disruptions, would also mean greater costs for taxpayers. Moreover, by removing the profit incentive created under the Hatch-Waxman Act to develop and manufacture generic drugs, it’s not clear that governments could develop or manufacture a sufficient supply of generics (consider the difference in efficiency between the U.S. Postal Service and either UPS or FedEx).
Another approach might be to treat the generic drug industry as a regulated industry. This model has been applied to utilities in the past when unregulated private ownership of utility infrastructure could not provide sufficient supply to meet consumer need, address market failures, or prevent the abuse of monopoly power. Similarly, consumers’ need for safe and affordable medicines, market failures inherent throughout the industry, and industry consolidation that could give rise to market power suggest the regulated model might work well for generic drugs.
Under this approach, Hatch-Waxman incentives could remain in place, granting the first generic drug an exclusivity period during which it could earn significant profits for the generic drug maker. But when the exclusivity period ends, an agency like HHS would assign manufacturing responsibility for a particular drug to a handful of generic drug makers wishing to market in the U.S. These companies would be guaranteed a profit based on a set rate of return on the costs of high-quality domestic manufacturing. In order to maintain their manufacturing rights, facilities would have to meet strict FDA guidelines to ensure high-quality drugs.
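Under rate-of-return regulation, the regulated price is typically built up from audited costs plus an allowed return on invested capital. A toy calculation (all figures hypothetical, not drawn from any actual proposal) shows the mechanics:

```python
# Toy rate-of-return pricing for a regulated generic (all numbers hypothetical).
# The regulator lets the manufacturer recover operating costs plus a set
# return on the capital invested in compliant domestic manufacturing.

def regulated_price_per_unit(operating_cost, capital_base, allowed_return, units):
    """Per-unit price implied by a cost-plus-allowed-return revenue requirement."""
    revenue_requirement = operating_cost + capital_base * allowed_return
    return revenue_requirement / units

# $4M annual operating cost, $10M plant, 8% allowed return, 5M doses per year.
price = regulated_price_per_unit(4_000_000, 10_000_000, 0.08, 5_000_000)
print(f"${price:.2f} per dose")  # $0.96 per dose
```

The guaranteed-profit feature of the proposal is visible in the formula: revenue tracks the manufacturer’s actual costs, so raw-material spikes raise the regulated price rather than squeezing margins — which is also why the classic concerns about cost-padding under rate-of-return regulation would apply here.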
Like the Warren and California proposals, this approach would tackle several problems at once. Prices would be kept under control and facilities would face frequent inspections to ensure quality. A guaranteed profit would eliminate generic companies’ financial risk, reducing their incentive to use cheap (and often unsafe) drug ingredients or to engage in illegal anticompetitive behavior. It would also encourage steady production, reducing instances of drug shortages. Unlike the Warren and California proposals, this approach would build on the existing generic infrastructure so that taxpayers don’t have to foot the bill to set up public manufacturing. It would also continue to incentivize the development of generic alternatives by maintaining the Hatch-Waxman exclusivity period, and it would motivate the manufacture of generic drugs by companies seeking a reliable rate of return.
Several issues would need to be worked out with a regulated generic industry approach to prevent manipulation of rates of return, regulatory capture, and political appointees without the incentives or knowledge to regulate the drug makers. However, the recurring crises affecting generic drugs indicate the industry is rife with market failures. Perhaps only a radical new approach will achieve lasting and necessary change.
Last week the Senate Judiciary Committee held a hearing, Intellectual Property and the Price of Prescription Drugs: Balancing Innovation and Competition, that explored whether changes to the pharmaceutical patent process could help lower drug prices. The committee’s goal was to evaluate various legislative proposals that might facilitate the entry of cheaper generic drugs, while also recognizing that strong patent rights for branded drugs are essential to incentivize drug innovation. As Committee Chairman Lindsey Graham explained:
One thing you don’t want to do is kill the goose who laid the golden egg, which is pharmaceutical development. But you also don’t want to have a system that extends unnecessarily beyond the ability to get your money back and make a profit, a patent system that drives up costs for the average consumer.
Several proposals that were discussed at the hearing have the potential to encourage competition in the pharmaceutical industry and help rein in drug prices. Below, I discuss these proposals, plus a few additional reforms. I also point out some of the language in the current draft proposals that goes a bit too far and threatens the ability of drug makers to remain innovative.
1. Prevent brand drug makers from blocking generic companies’ access to drug samples. Some brand drug makers have attempted to delay generic entry by restricting generics’ access to the drug samples necessary to conduct FDA-required bioequivalence studies. Some brand drug manufacturers have limited the ability of pharmacies or wholesalers to sell samples to generic companies or abused the REMS (Risk Evaluation Mitigation Strategy) program to refuse samples to generics under the auspices of REMS safety requirements. The Creating and Restoring Equal Access To Equivalent Samples (CREATES) Act of 2019 would allow potential generic competitors to bring an action in federal court for both injunctive relief and damages when brand companies block access to drug samples. It also gives the FDA discretion to approve alternative REMS safety protocols for generic competitors that have been denied samples under the brand companies’ REMS protocol. Although the vast majority of brand drug companies do not engage in the delay tactics addressed by CREATES, the Act would prevent the handful that do from thwarting generic competition. Increased generic competition should, in turn, reduce drug prices.
2. Restrict abuses of FDA Citizen Petitions. The citizen petition process was created as a way for individuals and community groups to flag legitimate concerns about drugs awaiting FDA approval. However, critics claim that the process has been misused by some brand drug makers who file petitions about specific generic drugs in the hopes of delaying their approval and market entry. Although FDA has indicated that citizen petitions rarely delay the approval of generic drugs, there have been a few drug makers, such as Shire ViroPharma, that have clearly abused the process and put unnecessary strain on FDA resources. The Stop The Overuse of Petitions and Get Affordable Medicines to Enter Soon (STOP GAMES) Act is intended to prevent such abuses. The Act reinforces the FDA and FTC’s ability to crack down on petitions meant to lengthen the approval process of a generic competitor, which should deter abuses of the system that can occasionally delay generic entry. However, lawmakers should make sure that adopted legislation doesn’t limit the ability of stakeholders (including drug makers that often know more about the safety of drugs than ordinary citizens) to raise serious concerns with the FDA.
3. Curtail Anticompetitive Pay-for-Delay Settlements. The Hatch-Waxman Act incentivizes generic companies to challenge brand drug patents by granting the first successful generic challenger a period of marketing exclusivity. Like all litigation, many of these patent challenges result in settlements instead of trials. The FTC and some courts have concluded that these settlements can be anticompetitive when the brand companies agree to pay the generic challenger in exchange for the generic company agreeing to forestall the launch of their lower-priced drug. Settlements that result in a cash payment are a red flag for anti-competitive behavior, so pay-for-delay settlements have evolved to involve other forms of consideration instead. As a result, the Preserve Access to Affordable Generics and Biosimilars Act aims to make an exchange of anything of value presumptively anticompetitive if the terms include a delay in research, development, manufacturing, or marketing of a generic drug. Deterring obvious pay-for-delay settlements will prevent delays to generic entry, making cheaper drugs available as quickly as possible to patients.
However, the Act’s rigid presumption that an exchange of anything of value is presumptively anticompetitive may also prevent legitimate settlements that ultimately benefit consumers. Brand drug makers should be allowed to compensate generic challengers to eliminate litigation risk and escape litigation expenses, and many settlements result in the generic drug coming to market before the expiration of the brand patent and possibly earlier than if there was prolonged litigation between the generic and brand company. A rigid presumption of anticompetitive behavior will deter these settlements, thereby increasing expenses for all parties that choose to litigate and possibly dissuading generics from bringing patent challenges in the first place. Indeed, the U.S. Supreme Court has declined to define these settlements as per se anticompetitive, and the FTC’s most recent agreement involving such settlements exempts several forms of exchanges of value. Any adopted legislation should follow the FTC’s lead and recognize that some exchanges of value are pro-consumer and pro-competitive.
4. Restore the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. I have previously discussed how an unbalanced inter partes review (IPR) process for challenging patents threatens to stifle drug innovation. Moreover, current law allows generic challengers to file duplicative claims in both federal court and through the IPR process. And because IPR proceedings do not have a standing requirement, the process has been exploited by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet. The added expense to drug makers of defending both duplicative claims and claims against challengers that are exploiting the system increases litigation costs, which may be passed on to consumers in the form of higher prices.
The Hatch-Waxman Integrity Act (HWIA) is designed to return the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. It requires generic challengers to choose between either Hatch-Waxman litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) or an IPR proceeding (which is faster and provides certain pro-challenger provisions). The HWIA would also eliminate the ability of hedge funds and similar entities to file IPR claims while shorting the stock. By reducing duplicative litigation and the exploitation of the IPR process, the HWIA will reduce costs and strengthen innovation incentives for drug makers. This will ensure that patent owners achieve clarity on the validity of their patents, which will spur new drug innovation and make sure that consumers continue to have access to life-improving drugs.
5. Curb illegal product hopping and patent thickets. Two drug maker tactics currently garnering a lot of attention are so-called “product hopping” and “patent thickets.” At its worst, product hopping involves brand drug makers making minor changes to a drug nearing the end of its patent so that they get a new patent on the slightly tweaked drug, and then withdrawing the original drug from the market so that patients shift to the newly patented drug and pharmacists can’t substitute a generic version of the original drug. Similarly, at their worst, patent thickets involve brand drug makers obtaining a web of patents on a single drug to extend the life of their exclusivity and make it too costly for other drug makers to challenge all of the patents associated with a drug. The proposed Affordable Prescriptions for Patients Act of 2019 is meant to stop these abuses of the patent system, which would facilitate generic entry and help to lower drug prices.
However, the Act goes too far by also capturing many legitimate activities in its definitions. For example, the bill defines as anticompetitive product-hopping the selling of any improved version of a drug during a window that extends to a year after the launch of the first generic competitor. Presently, to acquire a patent and FDA approval, the improved version of the drug must be different and innovative enough relative to the original drug, yet the Act would prevent the drug maker from selling such a product without satisfying a demanding three-pronged test before the FTC or a district court. Similarly, the Act defines as an anticompetitive patent thicket any new patents filed on a drug in the same general family as the original patent, and this presumption can only be rebutted by providing extensive evidence and satisfying demanding standards before the FTC or a district court. As a result, the Act deters innovation activity that is at all related to an initial patent and, in doing so, ignores the fact that most important drug innovation is incremental innovation based on previous inventions. Thus, the proposal should be redrafted to capture truly anticompetitive product hopping and patent thicket activity, while exempting behavior that is critical for drug innovation.
Reforms that close loopholes in the current patent process should facilitate competition in the pharmaceutical industry and help to lower drug prices. However, lawmakers need to be sure that they don’t restrict patent rights to the extent that they deter innovation because a significant body of research predicts that patients’ health outcomes will suffer as a result.
On March 14, the Federal Circuit will hear oral arguments in BTG International v. Amneal Pharmaceuticals, a case that could dramatically influence the future of duplicative patent litigation in the pharmaceutical industry. The court will determine whether the America Invents Act (AIA) bars patent challengers that succeed in invalidating patents in inter partes review (IPR) proceedings from repeating their winning arguments in district court. Courts and litigants had previously assumed that the AIA’s estoppel provision only prevented unsuccessful challengers from reusing failed arguments. However, in an amicus brief filed in the case last month, the U.S. Patent and Trademark Office (USPTO) argued that, although it seems counterintuitive, under the AIA even parties that succeed in getting patents invalidated in IPR cannot reuse their arguments.
If the Federal Circuit agrees with the USPTO, patent challengers could be strongly deterred from bringing IPR proceedings because it would mean they couldn’t reuse any arguments in district court. This deterrent effect would be especially strong for generic drug makers, who must prevail in district court in order to get approval of their Abbreviated New Drug Applications from the FDA.
Critics of the USPTO’s position assert that it will frustrate the AIA’s purpose of facilitating generic competition. However, if the Federal Circuit adopts the position, it would also reduce the amount of duplicative litigation that plagues the pharmaceutical industry and threatens new drug innovation.
According to a 2017 analysis of over 6,500 IPR challenges filed between 2012 and 2017, approximately 80% of IPR challenges were filed during an ongoing district court case challenging the patent. This duplicative litigation can increase costs for both challengers and patent holders; the median cost for an IPR proceeding that results in a final decision is $500,000, and the median cost for just filing an IPR petition is $100,000. Moreover, because of duplicative litigation, pharmaceutical patent holders face persistent uncertainty about the validity of their patents. Uncertain patent rights will lead to less innovation because drug companies will not spend the billions of dollars it typically costs to bring a new drug to market when they cannot be certain whether the patents for that drug can withstand IPR proceedings that are clearly stacked against them. And if IPR causes drug innovation to decline, a significant body of research predicts that patients’ health outcomes will suffer as a result.
In addition, deterring IPR challenges would help to reestablish balance between
drug patent owners and patent challengers.
As I’ve previously discussed here,
bias in IPR proceedings has led to a significant divergence in patent invalidation
rates between the two pathways; compared to district court challenges, patents
are twice as likely to be found invalid in IPR challenges. The challenger is more likely to prevail in IPR
proceedings because the Patent Trial and Appeal Board (PTAB) applies a lower standard of
proof for invalidity in IPR proceedings than do federal courts. Furthermore, if the
challenger prevails in the IPR proceedings, the PTAB’s decision to invalidate a
patent can often “undo” a prior district court decision in favor of the patent
holder. Finally, although both district court judgments and PTAB
decisions are appealable to the Federal Circuit, the court applies a more
deferential standard of review to PTAB decisions, increasing the likelihood
that they will be upheld compared to the district court decision.
However, the USPTO acknowledges that its position is counterintuitive because it means that a court could not consider invalidity arguments that the PTAB found persuasive. It is unclear whether the Federal Circuit will refuse to adopt this counterintuitive position or whether Congress will amend the AIA to limit estoppel to failed invalidity claims. As a result, a better and more permanent way to eliminate duplicative litigation would be for Congress to enact the Hatch-Waxman Integrity Act of 2019 (HWIA). The HWIA was introduced by Senator Thom Tillis in the Senate and Congressman Bill Flores in the House, and was proposed in the last Congress by Senator Orrin Hatch. The HWIA eliminates the ability of drug patent challengers to file duplicative claims in both federal court and IPR proceedings. Instead, they must choose either district court litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) or IPR proceedings (which are faster and provide certain pro-challenger provisions).
Thus, the HWIA would reduce duplicative litigation that increases costs and uncertainty for drug patent owners. This will ensure that patent owners achieve clarity on the validity of their patents, which will spur new drug innovation and ensure that consumers continue to have access to life-improving drugs.
Drug makers recently announced their 2019 price increases on over 250 prescription drugs. As examples, AbbVie Inc. increased the price of the world’s top-selling drug Humira by 6.2 percent, and Hikma Pharmaceuticals increased the price of blood-pressure medication Enalaprilat by more than 30 percent. Allergan reported an average increase across its portfolio of drugs of 3.5 percent; although the drug maker is keeping most of its prices the same, it raised the prices on 27 drugs by 9.5 percent and on another 24 drugs by 4.9 percent. Other large drug makers, such as Novartis and Pfizer, will announce increases later this month.
So far, the number of price increases is significantly lower than last year when drug makers increased prices on more than 400 drugs. Moreover, on the drugs for which prices did increase, the average price increase of 6.3 percent is only about half of the average increase for drugs in 2018. Nevertheless, some commentators have expressed indignation and President Trump this week summoned advisors to the White House to discuss the increases. However, commentators and the administration should keep in mind what the price increases actually mean and the numerous players that are responsible for increasing drug prices.
First, it is critical to emphasize the difference between drug list prices and net prices. The drug makers recently announced increases in the list, or “sticker” prices, for many drugs. However, the list price is usually very different from the net price that most consumers and/or their health plans actually pay, which depends on negotiated discounts and rebates. For example, whereas drug list prices increased by an average of 6.9 percent in 2017, net drug prices after discounts and rebates increased by only 1.9 percent. The differential between the growth in list prices and net prices has persisted for years. In 2016 list prices increased by 9 percent but net prices increased by 3.2 percent; in 2015 list prices increased by 11.9 percent but net prices increased by 2.4 percent, and in 2014 list price increases peaked at 13.5 percent but net prices increased by only 4.3 percent.
For 2019, the list price
increases for many drugs will actually translate into very small increases in
the net prices that consumers actually pay.
In fact, drug maker Allergan has indicated
that, despite its increase in list prices, the net prices that patients actually
pay will remain about the same as last year.
One might wonder why drug makers would bother to increase list prices if there’s little to no change in net prices. First, at least 40 percent of the American prescription drug market is subject to some form of federal price control. As I’ve previously explained, because these federal price controls generally require percentage rebates off of average drug prices, drug makers have the incentive to set list prices higher in order to offset the mandated discounts that determine what patients pay.
Further, as I discuss in a recent Article, the rebate arrangements between drug makers and pharmacy benefit managers (PBMs) under many commercial health plans create strong incentives for drug makers to increase list prices. PBMs negotiate rebates from drug manufacturers in exchange for giving the manufacturers’ drugs preferred status on a health plan’s formulary. However, because the rebates paid to PBMs are typically a percentage of a drug’s list price, drug makers are compelled to increase list prices in order to satisfy PBMs’ demands for higher rebates. Drug makers assert that they are pressured to increase drug list prices out of fear that, if they do not, PBMs will retaliate by dropping their drugs from the formularies. The value of rebates paid to PBMs has doubled since 2012, with drug makers now paying $150 billion annually. These rebates have grown so large that, today, the drug makers that actually invest in drug innovation and bear the risk of drug failures receive only 39 percent of the total spending on drugs, while 42 percent of the spending goes to these pharmaceutical middlemen.
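The arithmetic behind this incentive can be sketched with a toy example (all numbers here are hypothetical, chosen for illustration, not drawn from the article):

```python
# Illustrative sketch: why rebates calculated as a percentage of LIST
# price push list prices up even when net prices stay flat.
# All figures are hypothetical.

def net_price(list_price, rebate_rate):
    """Net price after the PBM rebate, which is a share of the list price."""
    return list_price * (1 - rebate_rate)

# Year 1: $100 list price, 20% rebate -> $80 net, $20 in rebate dollars.
# Year 2: the PBM demands larger rebate dollars. The maker raises the list
# price to $125 and the rebate rate to 36%: the PBM now collects $45,
# yet the net price is unchanged at $80.
year1_rebate = 100.0 * 0.20   # $20 to the PBM
year2_rebate = 125.0 * 0.36   # $45 to the PBM

assert abs(net_price(100.0, 0.20) - net_price(125.0, 0.36)) < 1e-9
print(year1_rebate, year2_rebate)
```

The point of the sketch is that the rebate base is the list price, so the only way to deliver more rebate dollars at a constant net price is to inflate the list price, which is exactly the price uninsured and deductible-phase patients face.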
Although a portion of the increasing rebate dollars may eventually find its way to patients in the form of lower co-pays, many patients still suffer from the list prices increases. The 29 million Americans without drug plan coverage pay more for their medications when list prices increase. Even patients with insurance typically have cost-sharing obligations that require them to pay 30 to 40 percent of list prices. Moreover, insured patients within the deductible phase of their drug plan pay the entire higher list price until they meet their deductible. Higher list prices jeopardize patients’ health as well as their finances; as out-of-pocket costs for drugs increase, patients are less likely to adhere to their medication routine and more likely to abandon their drug regimen altogether.
Policymakers must realize that the current system of government price controls and distortive rebates creates perverse incentives for drug makers to continue increasing drug list prices. Pointing the finger at drug companies alone ignores the many other players responsible for rising drug prices.
Last week, the DOJ cleared the merger of CVS Health and Aetna (conditional on Aetna’s divesting its Medicare Part D business), a merger that, as I previously noted at a House Judiciary hearing, “presents a creative effort by two of the most well-informed and successful industry participants to try something new to reform a troubled system.” (My full testimony is available here).
Of course it’s always possible that the experiment will fail — that the merger won’t “revolutioniz[e] the consumer health care experience” in the way that CVS and Aetna are hoping. But it’s a low (antitrust) risk effort to address some of the challenges confronting the healthcare industry — and apparently the DOJ agrees.
I discuss the weakness of the antitrust arguments against the merger at length in my testimony. What I particularly want to draw attention to here is how this merger — like many vertical mergers — represents business model innovation by incumbents.
The CVS/Aetna merger is just one part of a growing private-sector movement in the healthcare industry to adopt new (mostly) vertical arrangements that seek to move beyond some of the structural inefficiencies that have plagued healthcare in the United States since World War II. Indeed, ambitious and interesting as it is, the merger arises amidst a veritable wave of innovative, vertical healthcare mergers and other efforts to integrate the healthcare services supply chain in novel ways.
These sorts of efforts (and the current DOJ’s apparent support for them) should be applauded and encouraged. I need not rehash the economic literature on vertical restraints here (see, e.g., Lafontaine & Slade). But especially where government interventions have already impaired the efficient workings of a market (as they surely have, in spades, in healthcare), it is important not to compound the error by trying to micromanage private efforts to restructure around those constraints.
Current trends in private-sector-driven healthcare reform
In the past, the most significant healthcare industry mergers have largely been horizontal (i.e., between two insurance providers, or two hospitals) or “traditional” business model mergers for the industry (i.e., vertical mergers aimed at building out managed care organizations). This pattern suggests a sort of fealty to the status quo, with insurers interested primarily in expanding their insurance business or providers interested in expanding their capacity to provide medical services.
Today’s health industry mergers and ventures seem more frequently to be different in character, and they portend an industry-wide experiment in the provision of vertically integrated healthcare that we should enthusiastically welcome.
But a number of other recent arrangements and business models center around relationships among drug manufacturers, pharmacies, and PBMs, and these tend to minimize the role of insurers. While not a “vertical” arrangement, per se, Walmart’s generic drug program, for example, offers $4 prescriptions to customers regardless of insurance (the typical generic drug copay for patients covered by employer-provided health insurance is $11), and Walmart does not seek or receive reimbursement from health plans for these drugs. It’s been offering this program since 2006, but in 2016 it entered into a joint buying arrangement with McKesson, a pharmaceutical wholesaler (itself vertically integrated with Rexall pharmacies), to negotiate lower prices. The idea, presumably, is that Walmart will entice consumers to its stores with the lure of low-priced generic prescriptions in the hope that they will buy other items while they’re there. That prospect presumably makes it worthwhile to route around insurers and PBMs, and their reimbursements.
Meanwhile, both Express Scripts and CVS Health (two of the country’s largest PBMs) have made moves toward direct-to-consumer sales themselves, establishing pricing for a small number of drugs independently of health plans and often in partnership with drug makers directly.
Also apparently focused on disrupting traditional drug distribution arrangements, Amazon has recently purchased online pharmacy PillPack (out from under Walmart, as it happens), and with it received pharmacy licenses in 49 states. The move introduces a significant new integrated distributor/retailer, and puts competitive pressure on other retailers and distributors and potentially insurers and PBMs, as well.
Whatever its role in driving the CVS/Aetna merger (and I believe it is smaller than many reports like to suggest), Amazon’s moves in this area demonstrate the fluid nature of the market, and the opportunities for a wide range of firms to create efficiencies in the market and to lower prices.
At the same time, the differences between Amazon and CVS/Aetna highlight the scope of product and service differentiation that should contribute to the ongoing competitiveness of these markets following mergers like this one.
While Amazon inarguably excels at logistics and the routinizing of “back office” functions, it seems unlikely for the foreseeable future to be able to offer (or to be interested in offering) a patient interface that can rival the service offerings of a brick-and-mortar CVS pharmacy combined with an outpatient clinic and its staff and bolstered by the capabilities of an insurer like Aetna. To be sure, online sales and fulfillment may put price pressure on important, largely mechanical functions, but, like much technology, it is first and foremost a complement to services offered by humans, rather than a substitute. (In this regard it is worth noting that McKesson has long been offering Amazon-like logistics support for both online and brick-and-mortar pharmacies. “‘To some extent, we were Amazon before it was cool to be Amazon,’ McKesson CEO John Hammergren said” on a recent earnings call).
Other efforts focus on integrating insurance and treatment functions or on bringing together other, disparate pieces of the healthcare industry in interesting ways — all seemingly aimed at finding innovative, private solutions to solve some of the costly complexities that plague the healthcare market.
Walmart, for example, announced a deal with Quest Diagnostics last year to experiment with offering diagnostic testing services and potentially other basic healthcare services inside of some Walmart stores. While such an arrangement may simply be a means of making doctor-prescribed diagnostic tests more convenient, it may also suggest an effort to expand the availability of direct-to-consumer (patient-initiated) testing (currently offered by Quest in Missouri and Colorado) in states that allow it. A partnership with Walmart to market and oversee such services has the potential to dramatically expand their use.
Capping off (for now) a buying frenzy in recent years that included the purchase of the PBM CatamaranRx, UnitedHealth is seeking approval from the FTC for the proposed merger of its Optum unit with the DaVita Medical Group — a move that would significantly expand UnitedHealth’s ability to offer medical services (including urgent care, outpatient surgeries, and health clinic services), give it a sizable group of doctors’ clinics throughout the U.S., and turn UnitedHealth into the largest employer of doctors in the country. But of course this isn’t a traditional managed care merger — it represents a significant bet on the decentralized, ambulatory care model that has been slowly replacing much of the traditional, hospital-centric care model for some time now.
And, perhaps most interestingly, some recent moves are bringing together drug manufacturers and diagnostic and care providers in innovative ways. Swiss pharmaceutical company, Roche, announced recently that “it would buy the rest of U.S. cancer data company Flatiron Health for $1.9 billion to speed development of cancer medicines and support its efforts to price them based on how well they work.” Not only is the deal intended to improve Roche’s drug development process by integrating patient data, it is also aimed at accommodating efforts to shift the pricing of drugs, like the pricing of medical services generally, toward an outcome-based model.
Similarly interesting, and in a related vein, early this year a group of hospital systems including Intermountain Health, Ascension, and Trinity Health announced plans to begin manufacturing generic prescription drugs. This development further reflects the perceived benefits of vertical integration in healthcare markets, and the move toward creative solutions to the unique complexity of coordinating the many interrelated layers of healthcare provision. In this case,
[t]he nascent venture proposes a private solution to ensure contestability in the generic drug market and consequently overcome the failures of contracting [in the supply and distribution of generics]…. The nascent venture, however it solves these challenges and resolves other choices, will have important implications for the prices and availability of generic drugs in the US.
More enforcement decisions like CVS/Aetna and Bayer/Monsanto; fewer like AT&T/Time Warner
In the face of all this disruption, it’s difficult to credit anticompetitive fears like those expressed by the AMA in opposing the CVS-Aetna merger and a recent CEA report on pharmaceutical pricing, both of which are premised on the assumption that drug distribution is unavoidably dominated by a few PBMs in a well-defined, highly concentrated market. Creative arrangements like the CVS-Aetna merger and the initiatives described above (among a host of others) indicate an ease of entry, the fluidity of traditional markets, and a degree of business model innovation that suggest a great deal more competitiveness than static PBM market numbers would suggest.
This kind of incumbent innovation through vertical restructuring is an increasingly important theme in antitrust, and efforts to tar such transactions with purported evidence of static market dominance are simply misguided.
While the current DOJ’s misguided (and, remarkably, continuing) attempt to stop the AT&T/Time Warner merger is an aberrant step in the wrong direction, the leadership at the Antitrust Division generally seems to get it. Indeed, in spite of strident calls for stepped-up enforcement in the always-controversial ag-biotech industry, the DOJ recently approved three vertical ag-biotech mergers in fairly rapid succession.
As I noted in a discussion of those ag-biotech mergers, but equally applicable here, regulatory humility should continue to carry the day when it comes to structural innovation by incumbent firms:
But it is also important to remember that innovation comes from within incumbent firms, as well, and, often, that the overall level of innovation in an industry may be increased by the presence of large firms with economies of scope and scale.
In sum, and to paraphrase Olympia Dukakis’ character in Moonstruck: “what [we] don’t know about [the relationship between innovation and market structure] is a lot.”
What we do know, however, is that superficial, concentration-based approaches to antitrust analysis will likely overweight presumed foreclosure effects and underweight innovation effects.
We shouldn’t fetishize entry, or access, or head-to-head competition over innovation, especially where consumer welfare may be significantly improved by a reduction in the former in order to get more of the latter.
The two-year budget plan passed last week makes important changes to payment obligations in the Medicare Part D coverage gap, also known as the donut hole. While the new plan gives seniors a one-year benefit by reducing their payment obligations a year earlier than previously scheduled, it permanently shifts much of the drug costs that insurance companies were paying onto drug makers. It’s far from clear whether this windfall for insurers will result in lower drug costs for Medicare beneficiaries.
Medicare Part D is voluntary prescription drug insurance for seniors and the permanently disabled provided by private insurance plans that are approved by the Medicare program. Last year, more than 42 million people enrolled in Medicare Part D plans. Payment for prescription drugs under Medicare Part D depends on how much enrollees spend on drugs. In 2018, after hitting a deductible that varies by plan, enrollees pay 25% of their drug costs while the Part D plans pay 75%. However, once the individual and the plan have spent a total of $3,750, enrollees hit the coverage gap that lasts until $8,418 has been spent. In the coverage gap, enrollees pay 35% of brand drug costs, the Part D plans pay 15%, and drug makers are required to offer 50% discounts on brand drugs to cover the rest. Once total spending reaches $8,418, enrollees enter catastrophic coverage in which they pay only 5% of drug costs, the Part D plans pay 15%, and the Medicare program pays the other 80%.
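The 2018 payment tiers described above can be expressed as a simple function. This is only a sketch using the thresholds quoted in the text; deductibles vary by plan (the default below is an illustrative figure), and actual benefit rules have more detail than this captures:

```python
# Sketch of 2018 Medicare Part D cost-sharing for BRAND drugs, using the
# dollar thresholds quoted in the text. Deductibles vary by plan, so the
# deductible is a parameter with an illustrative default.

INITIAL_COVERAGE_LIMIT = 3750.0   # combined enrollee + plan spending before the gap
CATASTROPHIC_THRESHOLD = 8418.0   # total spending at which catastrophic coverage begins

def enrollee_share(total_spending, deductible=405.0):
    """Enrollee's marginal share of the next dollar of brand-drug spending."""
    if total_spending < deductible:
        return 1.00   # enrollee pays everything until the deductible is met
    if total_spending < INITIAL_COVERAGE_LIMIT:
        return 0.25   # initial coverage: enrollee 25%, plan 75%
    if total_spending < CATASTROPHIC_THRESHOLD:
        return 0.35   # coverage gap: enrollee 35%, plan 15%, maker discount 50%
    return 0.05       # catastrophic: enrollee 5%, plan 15%, Medicare 80%

for spending in (100.0, 2000.0, 5000.0, 9000.0):
    print(spending, enrollee_share(spending))
```

Laying the tiers out this way also makes the budget plan’s change easy to see: in the coverage-gap tier, the enrollee share drops to 25% a year early, and the remaining 75% is split 70% maker discount and only 5% plan payment rather than the 50/25 split the ACA anticipated.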
The Affordable Care Act (ACA) included provisions to phase out the coverage gap by 2020, so that enrollees will pay only 25% of drug costs from the time they meet the deductible until they hit the catastrophic coverage level. The budget plan passed last week speeds up this phase out by one year, so enrollees will start paying only 25% in 2019 instead of 2020. The ACA anticipated that with enrollees paying 25% of drug costs and drug maker discounts of 50%, the Part D plans would pay the other 25%. However, last week’s budget plan drastically redistributed the payment responsibilities from the Part D insurance plans to drug makers. Under the new plan drug makers are required to offer 70% discounts so that the plans only have to pay 5% of the total drug costs. That is, the new plan shifts 20% of total drug costs in the coverage gap from insurers to drug makers.
Although the drug spending in each individual’s coverage gap is less than $5,000, with over 42 million people covered, the total spending, and the 20% of spending shifted from insurers to drug makers, is significant. CMS has estimated that when drug makers’ discounts were only covering 50% of drug spending in the gap, the annual total discounts amounted to over $5.6 billion. Requiring drug makers to cover another 20% of drug spending will add several billion dollars more to this total.
A government intervention that forces suppliers to cover 70% of the spending in a market is a surprising move for Republicans—supposed advocates of free markets. Moreover, although reducing prescription drug costs has become a national priority, it’s unclear whether shifting costs from insurers to drug makers will benefit individuals at all. Theoretically, as the individual Part D plans pay less of their enrollees’ drug costs, they should pass on the savings to enrollees in the form of lower premiums. However, several studies suggest that enrollees may not experience a net decrease in drug spending. The Centers for Medicare and Medicaid Services (CMS) has determined that under Medicare Part D, drug makers increase list prices to offset other concessions and to more quickly move enrollees out of the coverage gap where drug makers are required to offer price discounts. Higher list prices mean that enrollees’ total out-of-pocket drug spending increases; even a 5% cost-sharing obligation in the catastrophic coverage for a high-priced drug can be a significant expense. Higher list prices that push enrollees out of the coverage gap also shift more costs onto the Medicare program that pays 80% of drug costs in the catastrophic coverage phase.
A better, more direct way to reduce Medicare Part D enrollees’ out-of-pocket drug spending is to require point-of-sale rebates. Currently, drug makers offer rebates to Part D plans in order to improve their access to the millions of individuals covered by the plans. However, the rebates, which total over $16 billion annually, are paid after the point-of-sale, and evidence shows that only a portion of these rebates get passed through to beneficiaries in the form of reduced insurance premiums. Moreover, a reduction in premiums does little to benefit those enrolled individuals who have the highest aggregate out-of-pocket spending on drugs. (As an aside, in contrast to the typical insurance subsidization of high-cost enrollees by low-cost enrollees, high-spending enrollees under Medicare Part D generate greater rebates for their plans, but then the rebates are spread across all enrollees in the form of lower premiums).
Drug maker rebates will more directly benefit Medicare Part D enrollees if rebates are passed through at the point-of-sale to reduce drug copays. Point-of-sale rebates would ensure that enrollees see immediate savings as they meet their cost-sharing obligations. Moreover, the enrollees with the highest aggregate out-of-pocket spending would be the ones to realize the greatest savings. CMS has recently solicited comments on a plan to require some portion of drug makers’ rebates to be applied at the point of sale, and the President’s budget plan released yesterday proposes point-of-sale rebates to lower Medicare Part D enrollees’ out-of-pocket spending. Ultimately, targeting rebates to consumers at the point-of-sale will more effectively lower drug spending than reducing insurance plans’ payment obligations in hopes that they pass on the savings to enrollees.
Last week, several major drug makers marked the new year by announcing annual increases on list prices. In addition to drug maker Allergan—which pledged last year to confine price increases below 10 percent and, true to its word, reported 2018 price increases of 9.5 percent—several other companies also stuck to single-digit increases. Although list or “sticker” prices generally increased by around 9 percent for most drugs, after discounts negotiated with various health plans, the net prices that consumers and insurers actually pay will see much lower increases. For example, Allergan expects that payors will only see net price increases of 2 to 3 percent in 2018.
However, price increases won’t generate the same returns for brand drug companies that they once did. As insurers and pharmacy benefit managers consolidate and increase their market share, they have been able to capture an increasing share of the money spent on drugs for themselves. Indeed, a 2017 report found that, of the money spent on prescription drugs by patients and health plans at the point of sale, brand drug makers realized only 39 percent. Meanwhile, supply-chain participants, such as pharmacy benefit managers, realized 42 percent of these expenditures. What’s more, year after year, brand drug makers have seen their share of these point-of-sale expenditures decrease while supply-chain entities have kept a growing share of expenditures for themselves.
Brand drug makers have also experienced a dramatic decline in the return on their R&D investment. A recent Deloitte study reports that, for the large drug makers they’ve followed since 2010, R&D returns have dropped from over 10 percent to under 4 percent for the last two years. The ability of supply-chain entities to capture an increasing share of drug expenditures is responsible for at least part of drug makers’ decreasing R&D returns; the study reports that average peak sales for drugs have slowly dropped over time, mirroring drug makers’ decreasing share of expenditures. In addition, the decline in R&D returns can be traced to the increasing cost of bringing drugs to market; for the companies Deloitte studied, the cost to bring a drug to market has increased from just over $1.1 billion in 2010 to almost $2 billion in 2017.
Brand drug makers’ decreasing share of drug expenditures and declining R&D returns reduce incentives to innovate. As the payoff from innovation declines, fewer companies will devote the substantial resources necessary to develop innovative new drugs. In addition, innovation is threatened as brand companies increasingly face uncertainty about the patent rights of the drugs they do bring to market. As I’ve discussed in a previous post, the unbalanced inter partes review (IPR) process created under the Leahy-Smith America Invents Act in 2012 has led to significantly higher patent invalidation rates. Compared to traditional district-court litigation, several pro-challenger provisions under IPR—including a lower standard of proof, a broader claim construction standard, and the ability of patent challengers to force patent owners into duplicative litigation—have resulted in twice as many patents deemed invalid in IPR proceedings. Moreover, the lack of a standing requirement in IPR proceedings has given rise to “reverse patent trolling,” in which entities that are not litigation targets, or even participants in the same industry, threaten to file an IPR petition challenging the validity of a patent unless the patent holder agrees to specific settlement demands. Even supporters of IPR proceedings recognize the flaws with the system; as Senator Orrin Hatch stated in a 2017 speech: “Such manipulation is contrary to the intent of IPR and the very purpose of intellectual property law. . . I think Congress needs to take a look at it.” Although the constitutionality of the IPR process is currently under review by the U.S. Supreme Court, if the unbalanced process remains unchanged, the significant uncertainty it creates for drug makers’ patent rights will lead to less innovation in the pharmaceutical industry. 
Drug makers will have little incentive to spend billions of dollars to bring a new drug to market when they cannot be certain if the patents for that drug can withstand IPR proceedings that are clearly stacked against them.
We are likely to see a renewed push for drug pricing reforms in 2018 as access to affordable drugs remains a top policy priority. Although Congress has yet to come together in support of any specific proposal, several states are experimenting with reforms that aim to lower drug prices by requiring more pricing transparency and notice of price increases. As lawmakers consider these and other reforms, they should consider the current challenges that drug makers already face as their share of drug expenditures and R&D returns decline and patent rights remain uncertain. Reforms that further threaten drug makers’ financial incentives to innovate could reduce our access to life-saving and life-improving new drugs.