January 15, 2013
Science can be risky business, but it is important to know what those risks are. It is established wisdom that we need to experiment on viruses, for example, to better defend against emerging infectious diseases. But there is a fine line between creating a new strain of avian influenza to better understand how to defend against infectious disease, and using the same strain to cause a deadly pandemic.
Studies like this, which increase a virus's ability to infect and kill, belong to a class of research known as "gain-of-function studies." These studies, and others like them, are "dual-use": the same work can serve both beneficial and harmful ends. Studies into highly pathogenic avian influenza (HPAI) H5N1 viruses, more commonly known as "bird flu," promise to raise awareness of the threat of a bird flu pandemic in humans, improve disease surveillance, and aid in building better countermeasures against flu.
Despite these benefits, however, some fear the research could lead to the accidental or intentional release of a new, deadly strain of bird flu. Yet while recognizing dual-use research is easy, deciding what to do about it is a fraught exercise. How do we regulate research so that we capitalize on the benefits of new discoveries while avoiding or mitigating the harms caused by the malevolent use of science and technology?
This dilemma isn’t a matter of idle speculation, but a live debate with far-reaching implications. Two gain-of-function studies have already shown that bird flu can be modified to become transmissible between ferrets, which often stand in for humans in influenza studies. In one study, bird flu was passed from ferret to ferret until it gained the capacity to spread through the air. The other study showed that the combination of parts of the 2009 H1N1 “swine flu” virus with bird flu creates a strain of bird flu that is transmissible in ferrets.
Both studies were embroiled in controversy in late 2011, when the National Science Advisory Board for Biosecurity (NSABB) recommended that neither piece of research be published (a recommendation it reversed in March 2012). An international moratorium on gain-of-function studies in bird flu remains in place, underscoring the seriousness of this kind of research.
At the end of 2012, the US Department of Health and Human Services (HHS) proposed a new policy framework for determining funding of this dual-use, gain-of-function research into bird flu. Under the new policy, gain-of-function studies must meet a set of criteria in order to secure funding from HHS.
If gain-of-function research meets these criteria, pending a further review the research may then be funded but monitored for its outcomes, transferred to an agency to be conducted as classified research, or denied funding altogether.
The policy is still in its infancy, but it already raises some concerns. Critics have questioned what counts as "significant" to public health, and what constitutes sufficient evidence that a virus strain could occur in nature. There is always likely to be room for disagreement about the extent to which gain-of-function studies are significant, or viruses emergent, but at present the ambiguity is stark enough to be worrying. Without guidance, researchers may be pressed to provide assurances they simply cannot make given the evidence available before research is conducted.
Yet even if we overcome the hurdle of identifying what is beneficial and what is manifestly dangerous, the proposed actions given by the framework are somewhat alarming. The framework allows dangerous gain-of-function research to be transferred to agencies that conduct classified research, such as the Department of Defense or the Department of Homeland Security.
Yet classified government research in the life sciences doesn't have a great track record of serving the public interest: the Defense Intelligence Agency's attempts to make genetically modified anthrax, the Defense Threat Reduction Agency's secret milling of weapons-grade anthrax, and the CIA's creation of Soviet-style "bomblets" used to disperse biological agents (in the name, it was claimed, of assessing their effectiveness against the US) are all examples of deeply troubling classified life sciences research purported to be in the public interest.
The possibility of taking research we have already ascertained to be problematic and handing it to an agency with a history of misusing research is frightening. We should question the new policy insofar as it leaves this option open. If research is risky to public health, or lacks merit with respect to genuinely emerging infectious diseases, why open the way for it to be done in secret?
Finally, it remains to be seen how the international community will respond to the US policy. In the absence of strong international compliance or confidence-building measures around the spread of biological weapons, gain-of-function research may simply move offshore to locales with as much money but less oversight, something that is surely to the detriment of all. Alternatively, other countries may simply ignore the lead of the US in favour of their own research agendas.
These aren't reasons to reject the new framework, but they are difficult problems that need addressing. Ultimately, the reasons for creating policy need to be the right reasons, and they need to generate the right kinds of outcomes: in this case, funding research that really is in the public interest. After all, when regulation succeeds in its aims, no one notices; when it fails, science is derailed, corruption occurs, and people get hurt. So it pays to engage in and question the policy process to ensure the right regulations happen for the right reasons. We should all take a closer look at how HHS and others intend to handle this type of dual-use research. Our lives really might depend on it.