CHAPTER 3


Drug regulation, development, names, and information


In this chapter we complete our introduction to pharmacology by considering five diverse but important topics. These are (1) drug regulation, (2) new drug development, (3) the annoying problem of drug names, (4) over-the-counter drugs, and (5) sources of drug information.




Landmark drug legislation


The history of drug legislation in the United States reflects an evolution in our national posture toward regulating the pharmaceutical industry. That posture has changed from one of minimal control to one of extensive control. For the most part, increased regulation has been beneficial, resulting in safer and more effective drugs.


The first American law to regulate drugs was the Federal Pure Food and Drug Act of 1906. This law was very weak: It required only that drugs be free of adulterants. The law said nothing about safety or effectiveness.


The Food, Drug and Cosmetic Act, passed in 1938, was much stronger than the Pure Food and Drug Act, and was the first legislation to address drug safety. The motivation behind the 1938 law was a tragedy in which more than 100 people died following use of a new medication. The lethal preparation contained an antibiotic (sulfanilamide) plus a solubilizing agent (diethylene glycol). Tests showed that the solvent was the cause of death. (Diethylene glycol is commonly used as automotive antifreeze.) To reduce the chances of another such tragedy, Congress required that all new drugs undergo testing for toxicity. The results of these tests were to be reviewed by the Food and Drug Administration (FDA), and only those drugs judged safe would receive FDA approval for marketing.


In 1962, Congress passed the Harris-Kefauver Amendments to the Food, Drug and Cosmetic Act. This law was created in response to the thalidomide tragedy that occurred in Europe in the early 1960s. Thalidomide is a sedative now known to cause birth defects and fetal death. Because the drug was used widely by pregnant women, thousands of infants were born with phocomelia, a rare birth defect characterized by the gross malformation or complete absence of arms or legs. This tragedy was especially poignant in that it resulted from nonessential drug use: The women who took thalidomide could have done very well without it. Thalidomide was not a problem in the United States because the drug never received approval by the FDA (see Chapter 107, Box 107–1).


Because of the European experience with thalidomide, the Harris-Kefauver Amendments sought to strengthen all aspects of drug regulation. A major provision of the bill required that drugs be proved effective before marketing. Remarkably, this was the first law to demand that drugs actually offer some benefit. The new act also required that all drugs that had been introduced between 1938 and 1962 undergo testing for effectiveness; any drug that failed to prove useful would be withdrawn. Lastly, the Harris-Kefauver Amendments established rigorous procedures for testing new drugs. These procedures are discussed below under New Drug Development.


In 1970, Congress passed the Controlled Substances Act (Title II of the Comprehensive Drug Abuse Prevention and Control Act). This legislation set rules for the manufacture and distribution of drugs considered to have the potential for abuse. One provision of the law defines five categories of controlled substances, referred to as Schedules I, II, III, IV, and V. Drugs in Schedule I have no accepted medical use in the United States and are deemed to have a high potential for abuse. Examples include heroin, mescaline, and lysergic acid diethylamide (LSD). Drugs in Schedules II through V have accepted medical applications but also have the potential for abuse. The abuse potential of these agents becomes progressively less as we proceed from Schedule II to Schedule V. The Controlled Substances Act is discussed further in Chapter 37 (Drug Abuse: Basic Considerations).


In 1992, FDA regulations were changed to permit accelerated approval of drugs for acquired immunodeficiency syndrome (AIDS) and cancer. Under these guidelines, a drug could be approved for marketing prior to completion of Phase III trials (see below), provided that rigorous follow-up studies (Phase IV trials) were performed. The rationale for this change was that (1) medications are needed, even if their benefits may be marginal, and (2) the unknown risks associated with early approval are balanced by the need for more effective drugs. Although accelerated approval seems like a good idea, in actual practice, it has two significant drawbacks. First, manufacturers often fail to conduct or complete the required follow-up studies. Second, if the follow-up studies—which are more rigorous than the original—fail to confirm a clinical benefit, the guidelines have no clear mechanism for removing the drug from the market.


The Prescription Drug User Fee Act (PDUFA), passed in 1992, was a response to complaints that the FDA takes too long to review applications for new drugs. Under the Act, drug sponsors pay the FDA fees (about $500,000 per drug) that are used to fund additional reviewers. In return, the FDA must adhere to strict review timetables. Because of the PDUFA, new drugs now reach the market much sooner than in the past.


The Food and Drug Administration Modernization Act (FDAMA) of 1997—an extension of the Prescription Drug User Fee Act—called for widespread changes in FDA regulations. Implementation is in progress. For health professionals, four provisions of the act are of particular interest:



• The fast-track system created for AIDS drugs and cancer drugs now includes drugs for other serious and life-threatening illnesses.


• Manufacturers who plan to stop making a drug must inform patients at least 6 months in advance, thereby giving them time to find another source.


• A clinical trial database will be established for drugs directed at serious or life-threatening illnesses. These data will allow clinicians and patients to make informed decisions about using experimental drugs.


• Drug companies can now give prescribers journal articles and certain other information regarding “off-label” uses of drugs. (An “off-label” use is a use that has not been evaluated by the FDA.) Prior to the new act, clinicians were allowed to prescribe a drug for an off-label use, but the manufacturer was not allowed to promote the drug for that use—even if promotion was limited to providing potentially helpful information, including reprints of journal articles. In return for being allowed to give prescribers information regarding off-label uses, manufacturers must promise to do research to support the claims made in the articles.


Two laws—the Best Pharmaceuticals for Children Act (BPCA), passed in 2002, and the Pediatric Research Equity Act (PREA) of 2003—were designed to promote much-needed research on drug efficacy and safety in children. The BPCA offers a 6-month patent extension to manufacturers who evaluate a drug already on the market for its safety, efficacy, and dosage in children. The PREA gives the FDA the power, for the first time, to require drug companies to conduct pediatric clinical trials on new medications that might be used by children. (In the past, drugs were not tested in children. Hence, there is a general lack of reliable information upon which to base therapeutic decisions.)


In 2007, Congress passed the FDA Amendments Act (FDAAA), the most important legislation on drug safety since the Harris-Kefauver Amendments of 1962. The FDAAA expands the mission of the FDA to include rigorous oversight of drug safety after a drug has been approved. (Prior to this act, the FDA focused on drug efficacy and safety prior to approval, but had limited resources and authority to address drug safety after a drug was released for marketing.) Under the new law, the FDA has the legal authority to require postmarketing safety studies, to order changes in a drug’s label to include new safety information, and to restrict distribution of a drug based on safety concerns. In addition, the FDA is required to establish an active postmarketing risk surveillance system, mandated to include 25 million patients by July 2010, and 100 million by July 2012. Because of the FDAAA, adverse effects that were not discovered prior to drug approval will come to light much sooner than in the past, and the FDA now has the authority to take action (eg, limit distribution of a drug) if postmarketing information shows a drug to be less safe than previously understood.


In 2009, Congress passed the Family Smoking Prevention and Tobacco Control Act, which, at long last, allows the FDA to regulate cigarettes, the single most dangerous product available to U.S. consumers. Under the Act, the FDA can now strengthen advertising restrictions, including a prohibition on marketing to youth; require revised and more prominent warning labels; require disclosure of all ingredients in tobacco products and restrict harmful additives; and monitor nicotine yields and mandate gradual reduction of nicotine to nonaddictive levels.



New drug development


The development and testing of new drugs is an expensive and lengthy process, requiring 6 to 12 years for completion. Of the thousands of compounds that undergo testing, only a few enter clinical trials, and of these, only 1 in 5 gains approval. Because of this high failure rate, the cost of developing a new drug can exceed $1 billion.


Rigorous procedures for testing have been established so that newly released drugs might be both safe and effective. Unfortunately, although testing can determine effectiveness, it cannot guarantee that a new drug will be safe: Significant adverse effects may evade detection during testing, only to become apparent after a new drug has been released for general use.



The randomized controlled trial


Randomized controlled trials (RCTs) are the most reliable way to objectively assess drug therapies. Accordingly, RCTs are used to evaluate all new drugs. RCTs have three distinguishing features: use of controls, randomization, and blinding. All three serve to minimize the influence of personal bias on the results.




Use of controls.

When a new drug is under development, we want to know how it compares with a standard drug used for the same disorder, or perhaps how it compares with no treatment at all. In order to make these comparisons, some subjects in the RCT are given the new drug and some are given either (1) a standard treatment or (2) a placebo (ie, an inactive compound formulated to look like the experimental drug). Subjects receiving either the standard drug or the placebo are referred to as controls. Controls are important because they help us determine if the new treatment is more (or less) effective than standard treatments, or at least if the new treatment is better (or worse) than no treatment at all. Likewise, controls allow us to compare the safety of the new drug with that of the old drug, a placebo, or both.



Randomization.

In an RCT, subjects are randomly assigned to either the control group or the experimental group (ie, the group receiving the new drug). The purpose of randomization is to prevent allocation bias, which results when subjects in the experimental group are different from those in the control group. For example, in the absence of randomization, researchers could load the experimental group with patients who have mild disease and load the control group with patients who have severe disease. In this case, any differences in outcome may well be due to the severity of the disease rather than differences in treatment. And even if researchers try to avoid bias by purposely assigning subjects who appear similar to both groups, allocation bias can result from unknown factors that can influence outcome. By assigning subjects randomly to the control and experimental groups, all factors—known and unknown, important and unimportant—should be equally represented in both groups. As a result, the influences of these factors on outcome should tend to cancel each other out, leaving differences in the treatments as the best explanation for any differences in outcome.
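The logic of random assignment can be sketched in a few lines of code. The example below is only an illustration of the principle described above, not the allocation procedure of any actual trial; the subject labels, group sizes, and seed are invented for the example.

```python
# Minimal sketch of simple 1:1 randomization for a two-arm trial.
# Subject IDs and the seed are hypothetical, for illustration only.
import random

def randomize(subject_ids, seed=None):
    """Randomly assign each subject to the experimental or control group."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)                    # random order prevents allocation bias
    half = len(ids) // 2
    return {
        "experimental": ids[:half],     # these subjects receive the new drug
        "control": ids[half:],          # these receive placebo or standard drug
    }

if __name__ == "__main__":
    subjects = [f"S{n:03d}" for n in range(1, 21)]   # 20 hypothetical subjects
    groups = randomize(subjects, seed=42)
    print(groups["experimental"])
    print(groups["control"])
```

Because the shuffle is random, any factor that might influence outcome, whether or not the researchers know about it, ends up spread across both groups by chance rather than by anyone's choice.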



Blinding.

A blinded study is one in which the people involved do not know to which group—control or experimental—individual subjects have been randomized. If only the subjects have been “blinded,” the trial is referred to as single blind. If the researchers as well as the subjects are kept in the dark, the trial is referred to as double blind. Of the two, double-blind trials are more objective. Blinding is accomplished by administering the experimental drug and the control compound (either placebo or comparison drug) in identical formulations (eg, green capsules, purple pills) that bear a numeric code. At the end of the study, the code is accessed to reveal which subjects were controls and which received the experimental drug. When subjects and researchers are not blinded, their preconceptions about the benefits and risks of the new drug can readily bias the results. Hence, blinding is done to minimize the impact of personal bias.
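The coding scheme described above can be sketched the same way. In the hypothetical example below, each identically formulated drug kit receives a numeric code, treatments are assigned to codes at random, and the key linking codes to treatments is kept away from both subjects and researchers until the study ends. The kit labels and group sizes are assumptions for illustration, not part of any real protocol.

```python
# Minimal sketch of double-blind treatment coding (illustrative only).
# Each kit carries a code; the key mapping codes to treatments is held
# separately (eg, by a third party) and opened only at unblinding.
import random

def make_blinded_kits(n_kits, seed=None):
    rng = random.Random(seed)
    treatments = (["experimental"] * (n_kits // 2)
                  + ["control"] * (n_kits - n_kits // 2))
    rng.shuffle(treatments)
    codes = [f"KIT-{i:04d}" for i in range(1, n_kits + 1)]
    key = dict(zip(codes, treatments))   # sealed until the end of the study
    return codes, key

codes, key = make_blinded_kits(8, seed=7)
print(codes)     # all that subjects and researchers see: coded kits
# print(key)     # revealed only when the code is accessed at study end
```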



Stages of new drug development


The testing of new drugs has two principal steps: preclinical testing and clinical testing. Preclinical tests are performed in animals. Clinical tests are done in humans. The steps in drug development are outlined in Table 3–1.





Clinical testing

Clinical trials occur in four phases and may take 2 to 10 years to complete. The first three phases are done before a new drug is marketed. The fourth is done after marketing has begun.






Limitations of the testing procedure


It is important for nurses and other healthcare professionals to appreciate the limitations of the drug development process. Two problems are of particular concern. First, until recently, information on drug use in women and children was limited. Second, new drugs are likely to have adverse effects that were not detected during clinical trials.



Limited information in women and children


Women.

Until recently, very little drug testing was done in women. In almost all cases, women of child-bearing age were excluded from early clinical trials. The reason? Concern for fetal safety. Unfortunately, FDA policy took this concern to an extreme, effectively barring all women of child-bearing age from Phase I and Phase II trials—even if the women were not pregnant and were using adequate birth control. The only women allowed to participate in early clinical trials were those with a life-threatening illness that might respond to the drug under study.


Because of limited drug testing in women, we don’t know with precision how women will respond to drugs. We don’t know if beneficial effects in women will be equivalent to those seen in men. Nor do we know if adverse effects will be equivalent to those in men. We don’t know how timing of drug administration with respect to the menstrual cycle will affect beneficial and adverse responses. We don’t know if drug disposition (absorption, distribution, metabolism, and excretion) will be the same in women as in men. Furthermore, of the drugs that might be used to treat a particular illness, we don’t know if the drugs that are most effective in men will also be most effective in women. Lastly, we don’t know about the safety of drug use during pregnancy.


During the 1990s, the FDA issued a series of guidelines mandating participation of women (and minorities) in trials of new drugs. In addition, the FDA revoked a 1977 guideline that barred women from most trials. Because of these changes, the proportion of women in trials of most new drugs now equals the proportion of women in the population. The data generated since the implementation of the new guidelines have been reassuring: Most gender-related effects have been limited to pharmacokinetics. More importantly, for most drugs, gender has shown little impact on efficacy, safety, or dosage. However, although the new guidelines are an important step forward, even with them, it will take a long time to close the gender gap in our knowledge of drugs.




Failure to detect all adverse effects

The testing procedure cannot detect all adverse effects before a new drug is released. There are three reasons why: (1) during clinical trials, a relatively small number of patients are given the drug; (2) because these patients are carefully selected, they do not represent the full spectrum of individuals who will eventually take the drug; and (3) patients in trials take the drug for a relatively short time. Because of these unavoidable limitations in the testing process, effects that occur infrequently, effects that take a long time to develop, and effects that occur only in certain types of patients can go undetected. Hence, despite our best efforts, when a new drug is released, it may well have adverse effects of which we are as yet unaware. In fact, about half of the drugs that reach the market have serious adverse effects that were not detected until after they were released for general use.
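A simple probability calculation shows why small trials miss rare effects: if an adverse effect occurs in a fraction p of users, the chance that a trial enrolling N patients sees no cases at all is (1 - p)^N. The figures in the sketch below, an incidence of 1 in 10,000 and the trial sizes, are assumed purely for illustration.

```python
# Illustrative arithmetic (assumed figures): how trial size affects the
# chance of observing at least one case of a rare adverse effect.
incidence = 1 / 10_000          # assumed: effect occurs in 1 of 10,000 users

for n_patients in (3_000, 30_000, 3_000_000):
    p_miss = (1 - incidence) ** n_patients   # probability no case is observed
    print(f"{n_patients:>9,} patients: "
          f"{(1 - p_miss) * 100:5.1f}% chance of seeing at least one case")
```

With a few thousand carefully selected patients, the odds favor missing the effect entirely; only after the drug reaches millions of diverse users does the effect become almost certain to appear.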


The hidden dangers in new drugs are illustrated by the data in Table 3–2. This table presents information on 11 drugs that were withdrawn from the U.S. market soon after receiving FDA approval. In all cases, the reason for withdrawal was a serious adverse effect that went undetected in clinical trials. Admittedly, only a few hidden adverse effects are as severe as the ones in the table. Hence, most do not necessitate drug withdrawal. Nonetheless, the drugs in the table should serve as a strong warning about the unknown dangers that a new drug may harbor.

