
A Brief History of the US Drug Approval Process, and the Birth of Accelerated Approval

This is the second post about the U.S. drug-approval process; the first post is here. It will explore how the Food and Drug Administration (FDA) arose, how disasters drove the expansion of its regulatory oversight, and how the human immunodeficiency virus (HIV) epidemic changed the approval process.

The Arrival of New Medicines

Lone inventors, groups of scientists, serial entrepreneurs and, more recently, large corporations have been introducing new medicines, medical devices, and other products to improve our lives for millennia. Some of the early products were merely placebos, but a few—like tonics and vitamins—helped to prevent or remediate disease. Especially over the past century, many of these products have saved lives and improved the quality and length of human life. But in a few instances, these products have proven deadly or harmful, and led to calls for stricter regulation to ensure safety and efficacy. 

Reputation matters in business. Companies like Crosse & Blackwell in the United Kingdom and Heinz in the United States were among the first to ensure quality. As a result, they thrived, while cost-cutting competitors with often-contaminated products failed. Today, companies like Merck, Pfizer, Novartis, BMS, GSK, and others have reputations for quality. But every company has made mistakes, and establishing the safety, efficacy and, hence, value of a medicine is even harder than doing so for food products. Over time, independent evaluators of quality were both demanded and introduced.

FDA Begins US Regulation of Medicines 

The FDA began its modern regulatory functions with the passage of the 1906 Pure Food and Drugs Act, which prohibited interstate commerce in adulterated and misbranded food and drugs. The FDA has grown enormously since then, in terms of staff and budget. Today, it oversees products that account for roughly a quarter of U.S. consumer spending. The expansion of FDA oversight often followed public-health disasters. The two most notable examples involved diethylene glycol and thalidomide.

Diethylene Glycol and Thalidomide

The first antibiotics had scarcely been invented before they were misused with fatal consequences. In 1937, a newly developed class of antibiotics called sulfonamides was wildly popular, and experiments with hundreds of formulations led to new life-saving products. But one of these new formulations included an even newer compound: diethylene glycol, a sweet-tasting, syrupy solvent.

Chemists at the highly respectable drug company Massengill mixed sulfanilamide with diethylene glycol to make the drug easier for patients to swallow. The problem was that diethylene glycol is fatally toxic; it killed 105 Americans in the year or so after its introduction. At the time, the FDA could only prosecute Massengill for mislabeling its product. Public outcry led to the 1938 Federal Food, Drug, and Cosmetic (FD&C) Act, which required companies to perform product-safety tests prior to marketing. This disaster drove public support for the regulator’s watchdog role.

Diethylene glycol has continued to kill people across the world: in South Africa in 1969 and in Nigeria in 1990. In 2022, 60 children died in the Gambia from the same contaminant. In total, at least 850 deaths in 11 countries have been attributed to drugs (often imported from India) contaminated with diethylene glycol—all in poorer nations without respected and well-funded drug regulators, and without widespread social media or an independent press to provide rapid feedback loops.

Thalidomide

The most significant expansion of the FDA’s authority over drugs came two decades later. Thalidomide was developed as a sleeping pill, and was also expected to treat nausea and headaches. After its introduction in Germany, it was used to treat morning sickness, without a trial in pregnant women. Indeed, pregnant women would never normally be enrolled in a clinical trial, due to potential risks to the fetus. It was nonetheless marketed from 1957 by German manufacturer Chemie Grunenthal as a safe sedative to combat morning sickness in pregnant women. Thalidomide resulted in about 8,000 birth defects in Europe.

Manufacturer Richardson-Merrell had wanted to market the drug in the United States, but FDA reviewer Frances Kelsey read reports of harm published in the British Medical Journal. She ultimately refused to approve the drug for widespread use in the United States because she was not satisfied with the safety reports provided. Not all Americans escaped unscathed, however: 17 American babies were born with birth defects because their mothers took the drug, as some U.S. doctors had been persuaded to use it experimentally.

The media avidly reported the largely averted disaster in the United States, which elevated the political clamor for stronger drug laws and led to the 1962 amendments to the Federal Food, Drug, and Cosmetic (FD&C) Act. Known colloquially as the “Kefauver-Harris amendments,” the legislation established that drugs had to be both safe and effective prior to FDA approval; the 1938 law had only required proof of safety. The major change in the 1962 act would have made no difference in the thalidomide case, because thalidomide’s failure was one of safety, which the 1938 law already addressed. Nevertheless, the experience with thalidomide drove stricter controls for advanced testing of new drugs.

Ironically, larger or more tightly controlled clinical trials would not have found the thalidomide problem, because pregnant women would not have been used in a trial. Testing thalidomide on pregnant mice might have identified the teratogenic problem seen in humans, but expanded human testing would not have found it. Real-world data about pregnant women using the drug, not a controlled trial, is what alerted authorities to the problem.

For an expanded FDA to find a future thalidomide, what was needed was an enhanced feedback mechanism for real-world use of medicines. Problems can be found in clinical trials, but they often arise only when medicines are used by the general population post-trial, whether as intended or—as with thalidomide—“off-label” (to treat a condition for which the drug was not tested in a clinical trial). Post-thalidomide, the FDA’s efforts, budget, staff, and authority all grew enormously, leading to a massive expansion of clinical trials.

Delayed Drug Introduction

Though the initial changes demanded in the 1960s and early 1970s were not too onerous for the drug companies, the continued increase in demands for testing and, hence, costs led to the demise of many firms from 1973 onward. Economists like Sam Peltzman have argued that these changes reduced the flow of new drugs entering the market. And it is possible, perhaps even probable, that the drug lag induced by the 1962 act has caused more deaths than the FDA’s extra caution has saved. This is certainly Peltzman’s conclusion: “FDA’s proof of efficacy requirement was a public health disaster, promoting much more sickness and death than it prevented.” 

It is hard to accurately assess the historical costs and benefits of delayed approvals but, as later posts will discuss, the costs probably significantly outweigh the benefits. What is certain is that the incentive structure for FDA officials is quite simple: you may be criticized for delays, but to allow a second thalidomide-like incident would be terminal to a career. The bias toward caution certainly exists. 

As will be discussed in later posts, there should be a constant tension between safety concerns and faster approvals. Patients and the patient groups representing them want the right to try speculative medicines, but they are also concerned about safety. Patients and the small companies developing cutting-edge therapies might benefit from faster approval, with appropriate liability waivers and risk acceptance by patients. But staff at the FDA and the large multinational companies they oversee benefit from the barrier to entry that stricter safety enforcement creates. In other words, most of the insiders benefit from delay, while only some of the outsiders want faster approval.

What Does the FDA Require?

The FDA implemented the efficacy requirement for new drugs by promulgating regulations that detailed the scientific principles of adequate and well-controlled clinical investigations; these regulations are how the FDA’s standards for demonstrating efficacy through clinical trials evolved. The regulations defined “adequate and well-controlled” clinical investigations as permitting “a valid comparison with a control to provide a quantitative assessment of drug effect.” In practice, this meant studies that typically were randomized, blinded, and placebo-controlled, generating data from which clinical benefit could be assessed.
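To make that requirement concrete, here is a minimal sketch in Python of the quantitative comparison such a study supports. The patient counts, effect size, and outcome measure are illustrative assumptions, not real trial data or FDA-prescribed methodology.

```python
import random
from statistics import mean, stdev

random.seed(0)
n = 200  # assumed number of patients per arm (illustrative)

# Hypothetical outcome: change in a clinical score, where higher is
# better. Patients are randomized between placebo and the active drug.
placebo = [random.gauss(0.0, 1.0) for _ in range(n)]
drug = [random.gauss(0.3, 1.0) for _ in range(n)]  # assumed true effect: +0.3

# "A valid comparison with a control to provide a quantitative
# assessment of drug effect": difference in means with a z-statistic.
diff = mean(drug) - mean(placebo)
se = (stdev(drug) ** 2 / n + stdev(placebo) ** 2 / n) ** 0.5
z = diff / se

print(f"estimated effect: {diff:.2f} (z = {z:.2f})")
# A |z| above roughly 1.96 corresponds to the conventional two-sided
# 5% significance threshold.
```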

The FDA engaged with experts to improve the design and execution of these clinical trials, establishing external advisory committees that changed study designs and drove greater understanding of what data were required across various therapeutic areas. Critically, the FDA wanted trial sponsors to demonstrate clinical benefits, such as improved survival rates or improved function. 

To reach this end, the FDA claimed that more than one adequate and well-controlled investigation was necessary, since a single trial might have biases that falsely demonstrated efficacy. As a result, two clinical trials became the standard. While this lowered the chance of a Type I error (falsely concluding that an ineffective drug works), it massively increased the chance of Type II errors (failing to confirm a genuinely effective drug, delaying patients’ access to it).
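The arithmetic behind that trade-off can be sketched with stylized numbers. Assuming a conventional 5% per-trial false-positive rate and 80% per-trial power (assumptions for illustration, not FDA figures):

```python
alpha = 0.05   # assumed per-trial false-positive (Type I) rate
power = 0.80   # assumed per-trial chance of detecting a real effect

# An ineffective drug must get lucky twice to be falsely approved...
fp_one, fp_two = alpha, alpha ** 2
# ...but an effective drug must also succeed twice to be approved.
hit_one, hit_two = power, power ** 2

print(f"false approval rate: {fp_one:.2%} -> {fp_two:.2%}")  # 5.00% -> 0.25%
print(f"true approval rate:  {hit_one:.0%} -> {hit_two:.0%}")  # 80% -> 64%
```

Under these assumptions, the Type I risk falls twenty-fold, but a genuinely effective drug now fails to clear the bar 36% of the time instead of 20%; that shortfall is borne by waiting patients.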

Demonstrating clinical benefit over two trials takes a long time and is expensive. By the 1980s, it had become obvious that the cost of drug development was a significant barrier to developing drugs for rare diseases (small markets) or for diseases with no current therapies (where the science was new or uncertain).

The Orphan Drug Act of 1983 aimed to simplify approval for drugs with small markets, and wider awareness of patients in dire need of treatment drove calls for further reform, but the impact was minimal at best. There was significant distrust of the pharmaceutical industry—especially that it would cut corners on safety and overprice products. Suggestions or demands that approvals be made more rapidly were met with skepticism. This changed with the arrival of HIV.

How HIV Drove Accelerated Access

While rare diseases attracted some media attention, a disaster like HIV led to more significant concern and, ultimately, reform. From mid-1981 until the end of 1982, the U.S. Centers for Disease Control and Prevention (CDC) received daily reports of HIV cases, more than 650 in all. Roughly 40% of those with HIV who developed AIDS would ultimately die.

The AIDS epidemic was an emergency in health care and a catalyst for change everywhere, including in clinical-trial requirements. AIDS drove a reevaluation of what was essential to demonstrate efficacy, including how the FDA defined “adequate and well-controlled studies.” By 1987, activist sit-ins outside FDA headquarters and widespread media attention to HIV/AIDS had become daily occurrences, building pressure to speed up access.

The FDA created a new class of investigational new drug (IND) application that allowed patients to receive investigational treatments in an unblinded setting, while still allowing trial sponsors to use data collected through such treatments in new drug applications for full approval. The process remained limited, even for diseases like HIV, since the FDA did not want to undermine blinded studies, which remained the gold standard for drug approvals.

By allowing HIV patients to take investigational treatments, the FDA opened up research into how drugs could be approved faster. Trial experts sought ways to streamline trials by focusing on surrogate endpoints, which—while not direct measures of clinical benefit—were demonstrably correlated with improved clinical outcomes.

For example, improved T-cell count was determined to reliably predict fewer infections in AIDS patients and was accepted as a surrogate endpoint that could be used to demonstrate the efficacy of HIV/AIDS drugs. AZT, the first medicine approved to combat HIV, improved T-cell counts and was provisionally approved March 20, 1987. The time between the first demonstration that AZT was active against HIV in the laboratory and its approval was 25 months.  

AZT was approved for all HIV patients in 1990. It was initially administered in significantly higher dosages than today, typically 400 mg every four hours, day and night (2,400 mg per day), compared to the modern dosage of 300 mg twice daily (600 mg per day). While the drug’s side effects (especially anemia) were significant, the decision to approve was widely supported, given the alternative of a slow and painful death from AIDS.

The AZT approval convinced many that success could be predicated on the use of surrogate endpoints. As consensus grew about the utility of surrogate endpoints in clinical-trial design, the FDA came under pressure to accept drug-approval reform. As a result, the FDA formalized the accelerated approval pathway in 1992. 

The FDA could expedite the approval of, and patient access to, drugs intended to treat serious and life-threatening diseases and conditions for which there were unmet medical needs. By relying on surrogate endpoints or other intermediate clinical endpoints that could be measured earlier than irreversible morbidity or mortality, development programs could be accelerated. Patients would generally be well-served, and the “substantial evidence of efficacy” standard would still largely be met.

In short, for patients with serious or life-threatening illnesses, and where there was an unmet medical need, there could be a different risk-benefit calculation: The more serious the illness and the greater the effect of the drug on that illness, the greater the acceptable risk from the drug. If products “provide meaningful therapeutic benefit over existing treatment for a serious or life-threatening disease, a greater risk may also be acceptable.” 

The question of whether accelerated approval would be a success was uncertain in 1992, but it offered hope for AIDS patients otherwise destined to die due to lack of effective treatment.

 
