
Molecules to Medicine: Clinical Trials for Beginners


This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.


Have you ever wondered about the medicines you take–how they are developed and produced? We’ll explore that in “Molecules to Medicine.” This new series could be described as “medicine for muggles,” intended to take the mystery out of clinical research and drug development and to provide background information so that both patients and physicians can make more informed decisions about whether they wish to participate in clinical trials or not.

Why care?

To develop a medicine, from the discovery of the chemical until it reaches your drug store, takes an average of 12-15 years and the participation of thousands of volunteers in the process of clinical trials (Fig 1).


Very few people participate in clinical trials; even among patients with cancer, fewer than 5% do, due to lack of awareness or knowledge about the process. We’ll go into detail about how drugs are developed in later posts.

An inadequate number of volunteers is one of the major bottlenecks in drug development, delaying the product’s release and usefulness to the public. Of course, many people may suffer or even die during this wait, if they have an illness that is not yet otherwise treatable. So if you want new medicines, learn about–and decide if you wish to participate in–the process. I have, as a volunteer subject, researcher, and advocate.

Why are clinical trials done?

People naturally want to find things that will help them feel better, so they concocted brews, sometimes known as “patent meds.” But then some asked whether a medicine actually worked, or what else it did besides its intended use. The questions later became more sophisticated: Might the drug be dangerous, either to people with specific conditions or to people taking other meds? How does the drug work? If it works by a specific mechanism, does that suggest it might be useful for another condition? Are there other unintended consequences? These questions, and many more, are why clinical trials are undertaken.

How clinical trials came to be

In order to understand why trials are done in certain ways, it’s helpful–and interesting–to understand how they evolved.

Experiments were described as far back as biblical times, with Daniel following a diet of pulses and water in lieu of meat and wine. Others followed, comparing observations between groups receiving different treatments. But the first known prospective, controlled clinical trial occurred in 1747, when James Lind gave sailors different dietary supplements in an effort to treat scurvy, an illness due to vitamin C deficiency. While he demonstrated efficacy, Lind didn’t obtain consent from his participants, giving his study the dubious honor of also being the first to be criticized on ethical grounds.

Epidemics of smallpox, a devastating viral infection that has since been eradicated, were common in the 1700s and 1800s. In 1796, Edward Jenner demonstrated that a vaccine made from cowpox, a related but milder illness, could be used to prevent smallpox. In the U.S., attempts were made to develop a vaccine from cowpox scabs imported from England. Because the cowpox virus could not survive very long in dried scabs, it was propagated by arm-to-arm transmission in successive person-to-person inoculations: an infected vaccination lesion on one person was scraped and used as the source of material with which to inoculate the next person.

Under the Vaccine Act of 1813, Congress mandated that an adequate supply of uncontaminated cowpox be maintained and that the vaccine be available to any citizen. Dr. James Smith, a Baltimore physician, propagated cowpox for 20 years via arm-to-arm transmission every 8 days. Unfortunately, in 1821 Dr. Smith mistakenly sent smallpox crusts instead of cowpox vaccine to North Carolina, precipitating a smallpox epidemic as well as the subsequent repeal of the act.

Besides this egregious error of vaccinating people with live smallpox virus rather than attenuated (weakened) cowpox, the arm-to-arm inoculation also often transmitted other infectious diseases along with the cowpox vaccine, dampening enthusiasm for vaccination efforts. (This nicely illustrates the Law of Unintended Consequences; fortunately, such person-to-person propagation is no longer done.)

Many laws were subsequently passed reactively, in response to tragedies, rather than proactively, to prevent problems from occurring.

For example, after American troops fighting in Mexico received ineffective, counterfeit medications for malaria, the Import Drugs Act of 1848 was passed, establishing customs laboratories to verify drugs’ authenticity. Ironically, counterfeit anti-malarials are again a huge problem in Southeast Asia and Africa, but that is for a later story.

In 1880, the first major attempt at a national food and drug law failed, as there was no immediate crisis in the public’s eye.

After the gory, nauseating descriptions in Upton Sinclair’s exposé of Chicago’s meat-processing plants, The Jungle, Congress passed legislation in 1906 forbidding commerce in impure and mislabeled food and drugs, though not requiring efficacy. But the legislation had no real teeth, as the burden of proof was on the FDA to show that a drug’s labeling was “false and fraudulent” before it could be taken off the market. Similarly, there was no requirement to disclose a drug’s ingredients, as they were considered trade secrets, lending the name “patent medicine.” Sound familiar?

There was minimal progress toward consumer protection in 1911, when the Supreme Court, in U.S. v. Johnson, ruled that the 1906 act prohibited false or misleading statements about the ingredients or identity of a drug, but still did not prohibit lying about efficacy.

It wasn’t until 1938, after 107 deaths from “Elixir Sulfanilamide” had occurred, that the FDA was able to require “a manufacturer to prove the safety of a drug before it could be marketed.” This established the need for clinical trials.

Unintended Consequences

Clinical trials seek to learn whether a drug (or device) works as expected; that is unknown until it is tested in people. That’s why early-phase trials use only a few people, with more added as experience is gained. Sometimes unexpected discoveries are made along the way. For example, Rogaine was discovered by an astute clinician-researcher during a clinical trial studying high blood pressure. The drug, minoxidil, originally under study as an anti-hypertensive medication, was serendipitously found to have the unexpected side effect of stimulating hair growth, prompting a whole new line of products for baldness.

Similarly, Viagra was discovered by accident. Sildenafil, its generic name, was being studied as a treatment for angina, as it dilates blood vessels by blocking an enzyme, phosphodiesterase (PDE). While not very effective for angina, it was found to prolong erections, stimulating the whole “lifestyle drug” industry. Fortunately, PDE inhibitors are now being found useful for a host of important medical conditions, ranging from pulmonary hypertension to asthma and muscular dystrophy.

Of course, not all inadvertent discoveries have such rosy outcomes.

For example, diethylstilbestrol (DES), a synthetic estrogen, was commonly prescribed in the U.S. from 1938 to 1971 to help prevent miscarriages. It was only after many years that DES was found to cause a rare type of vaginal cancer in daughters of exposed women. Later, small numbers of other cancers showed up as well.

The tragic effect of thalidomide on developing embryos is perhaps the most notorious and horrible unexpected outcome in the history of drug development. Thalidomide was first released in 1957, with over-the-counter availability in Germany, to treat morning sickness. It was several years before the link was clearly made between thalidomide’s use in early pregnancy and the wave of children born with small, seal-like flippers instead of limbs (phocomelia). Thalidomide was then removed from the market. There was a controversial resurgence of interest in the drug, which the FDA approved in 1998; it is now used for multiple myeloma, and its use is being explored for other serious illnesses.

Good from Evil: Ethical Standards

In 1928, Alexander Fleming discovered penicillin by chance, in a “mold juice” that inhibited the growth of bacteria on Petri dishes. The potential value of penicillin was underappreciated until 1939, when its purification and development began in earnest as part of a war-time effort. War (and friendlier competition between nations) is, all too often, the impetus for research, and has led to many useful inventions. Such advances naturally occurred in vascular surgery, regional anesthesia, and orthopedics, as well as in less obviously related areas such as immunizations and treatments for malaria and other infections.

For example, animal studies in 1940 showed that mice infected with Streptococcus could be effectively treated with penicillin. The first patient received penicillin in 1941, under what would now be called “compassionate use.” Unfortunately, although he initially responded to treatment, there was not enough of the drug available, and he later died of his streptococcal infection.

This “proof of concept” was enough, however, to spur development and extensive cooperation between Britain and the U.S., driven by the desire to have the drug available to treat military injuries in World War II. Spurred by the Office of Scientific Research and Development (OSRD), pharmaceutical companies joined this patriotic, war-time effort; Merck was the first to develop the antibiotic for clinical use, followed soon by Squibb, Pfizer, and Lilly. In 1943, the War Production Board (WPB) assumed responsibility for increasing production to meet the military’s needs. The National Research Council’s chairman, Dr. Chester Keefer, had the thankless job of rationing the limited stocks of penicillin available to those outside the military. Although not via a formal clinical trial, Dr. Keefer, too, gathered data on the civilian patients’ responses. The same process is followed today when a patient receives an experimental drug outside of a trial protocol.

As of March 15, 1945, rationing of penicillin stopped, as there were adequate supplies for both military and public needs. Unfortunately, the “miracle drug” was squandered, and now it, along with many other antibiotics, is of limited use: bacteria have evolved resistance faster than pharmaceutical development has kept pace. Many of us are concerned we are entering the post-antibiotic era; the Infectious Diseases Society of America has been trying to call attention to this critical problem since its “Bad Bugs, No Drugs” campaign began in 2004. In a déjà vu moment, Australian news just reported that international shortages are once again necessitating rationing of penicillin there.

Regulated, standardized clinical trials formally began in response to the horrific research abuses of World War II, and ethical requirements for human research were established by world consensus.

Most drug trials are closely regulated and safe to participate in. Those that aren’t make the headlines, because scandal sells copy. Rebecca Skloot’s fine, captivating tale, The Immortal Life of Henrietta Lacks, is a superb example both of medical research gone awry and of our fascination with the dark side of such stories. But without clinical trials, none of us would have any prescription medicines available.

As we’ve seen, the history of drug development has been checkered at times. Clinical research is neither perfect nor without some degree of risk, but the risks can be minimized, and more safeguards are in place than ever before. There have been huge strides in the development of drugs, medical devices, vaccines and novel therapies in recent decades. Each has gone through a similar process of extensive testing before approval for use by the general public. But because these early phases of testing involve, at most, a few thousand volunteers, unexpected outcomes after market approval are inevitable, as the new drug is taken by millions.
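To see why, consider a quick back-of-the-envelope illustration (the numbers below are my own assumed figures, not data from any particular trial): if a side effect strikes 1 in 10,000 patients, a trial of 3,000 volunteers has only about a one-in-four chance of observing even a single case, yet it is a near certainty once a million patients are exposed. A minimal Python sketch of that arithmetic:

```python
# A rough, illustrative sketch with assumed numbers (not from this article):
# probability that a study of n participants sees at least one case of a side
# effect with true frequency p, under a simple binomial model with independent
# participants.

def chance_of_at_least_one_case(p: float, n: int) -> float:
    """P(at least one event among n participants) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

rare_side_effect = 1 / 10_000   # hypothetical: affects 1 in 10,000 patients
trial_size = 3_000              # illustrative pre-approval trial population
post_market = 1_000_000         # illustrative number of patients after approval

print(f"{trial_size:,} trial volunteers: "
      f"{chance_of_at_least_one_case(rare_side_effect, trial_size):.0%} chance of seeing a case")
print(f"{post_market:,} patients post-approval: "
      f"{chance_of_at_least_one_case(rare_side_effect, post_market):.0%} chance of seeing a case")
# Roughly 26% versus essentially 100%: a rare harm will usually surface only
# after approval, which is why post-marketing surveillance matters.
```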

In future posts, I’ll explain more about how clinical trials are conducted and what questions you should ask if you are ever considering volunteering. I’ll also cover obstacles in drug development, issues related to women and minorities, and ethical issues, including our priorities in drug development. Suggestions for what you might like to discuss are also most welcome.

 

This post is adapted from my book, Conducting Clinical Research: A Practical Guide for Physicians, Nurses, Study Coordinators, and Investigators.

Judy Stone, MD is an infectious disease specialist, experienced in conducting clinical research. She is the author of Conducting Clinical Research, the essential guide to the topic. She survived 25 years in solo practice in rural Cumberland, Maryland, and is now broadening her horizons. She particularly loves writing about ethical issues, and tilting at windmills in her advocacy for social justice. As part of her overall desire to save the world when she grows up, she has become especially interested in neglected tropical diseases. When not slaving over hot patients, she can be found playing with photography, friends' dogs, or in her garden. Follow on Twitter @drjudystone or on her website.
