Developing a drug is a highly regulated, painstaking process. The promise of beneficial effects from compounds discovered in the lab is only the beginning. Scientists have to design and conduct experiments to determine the mechanism of action, ideal delivery method, and dosage and rule out any adverse events or toxic side effects.
And only then can development proceed to preclinical research, followed by clinical trials, then FDA review and monitoring.
Animal testing occurs in the preclinical stage. Since 1938, the FDA has required that all new drugs be tested on animals before being administered to humans in a clinical trial. The intention is to screen out harmful effects early.
But other methods, like assays of cell viability or microdosing, offer superior predictive capabilities when determining the safety and efficacy of new drugs. Is animal testing still necessary?
The argument against cruelty
As you’d expect, some of the fiercest critics of animal testing are animal rights advocates. They argue that the procedures involved in testing drugs on laboratory animals amount to cruelty against other living beings.
Consider the Draize test, which is standard practice when checking for possible eye or skin irritation. In this method, rabbits’ eyelids are forced open, possibly for days, while a test solution is applied directly to the eyes. For skin, the test substance is directly applied to an area that’s been shaved and abraded. In both situations, the animals can experience intense pain, itching, and even bleeding from the affected areas.
And that’s just part of the problem. The Animal Welfare Act provides limited protection by requiring laboratories to comply with standards of care for test animals. But only a few species, including rabbits, are covered. The vast majority of animals used in testing, including rats, mice, birds, fish, and reptiles, receive little to no protection. And ill-treatment can even render testing results inaccurate or make replication impossible.
Those in favor of animal testing would say that it’s done for the greater good of human well-being, as we would be the end-users of the drugs involved. But the evidence strongly suggests that animal testing might not even be necessary.
Needless harm for questionable results
In the example of the Draize test, over three-quarters of the drugs administered were for systemic, not topical, application, rendering the data irrelevant. Yet the test was used in 94% of skin irritation tests and 60% of eye irritation tests simply because of the FDA mandate.
The harm done to animals can also be rendered needless because many adverse effects aren’t screened for in the first place.
The determinants of health can vary greatly from one person to another. We all have differences in our genes, family background, environment, lifestyle, and social influences that can yield markedly different results in our responses to medical interventions.
In defense of animal testing, it’s commonly cited that animals share many anatomical and physiological similarities with humans. Mice, for instance, have genes that are roughly 95% homologous to ours. But if responses among humans themselves can be so varied, testing on a different species is bound to be even less accurate at predicting adverse reactions.
Statistics show that 92% of drugs that pass animal studies will fail at the clinical stage. Volunteers in clinical trials may lose their lives due to substances that seemed safe in animal testing. In 2004, the drug Vioxx was pulled from the market after causing 320,000 adverse events, 140,000 of them fatal, despite convincing animal evidence for its safety.
Holes in the model
The stringent process of drug development can be understood as a way of managing risk, along the lines of the ‘Swiss cheese’ model popularized during the pandemic.
Every barrier or method of screening is like a slice of Swiss cheese: it can catch threats, but it has holes, allowing some to slip through. The more layers you stack together, the lower the chances of having multiple holes line up.
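The stacking intuition above can be sketched with a little arithmetic. Assuming each screening layer independently misses a harmful drug with some probability (its "holes"), the chance a threat slips through the whole stack is the product of the per-layer miss rates. The figures below are illustrative placeholders, not real screening statistics:

```python
def combined_miss_probability(miss_rates):
    """Probability a threat slips past every layer, assuming the
    layers fail independently of one another."""
    p = 1.0
    for rate in miss_rates:
        p *= rate
    return p

# A single layer that misses 30% of threats lets 30% through.
one_layer = combined_miss_probability([0.3])

# Three such layers stacked together let only about 2.7% through.
three_layers = combined_miss_probability([0.3, 0.3, 0.3])
```

Note the independence assumption doing the work here: if two layers' holes are correlated (they tend to line up, as the article later argues is the case for animal testing), the combined miss rate stays much closer to that of a single layer, and the extra layer adds little safety.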
With Covid-19, using the Swiss cheese model meant applying multiple precautions at the same time. Wearing face masks, washing our hands, staying at home as much as possible, and observing physical distancing when going out were all meant to work together to protect our lives.
But with drug development, it seems increasingly likely that animal testing is a redundant layer. Its holes line up with those of other existing processes. It can probably be replaced by newer, better methods to form a more effective stack of safety barriers, protecting the general population when drugs are approved.
All that remains is for regulatory oversight to change. Once the mandate of decades past is updated, labs will start switching over to better methods, for the sake of both efficiency and public safety.