EDITORIAL

Culture of responsibility


Science, 05 Sep 2014: Vol. 345, Issue 6201, p. 1101
DOI: 10.1126/science.1260424

The current Ebola virus crisis vividly illustrates the priority that must be given to infectious diseases because of their potentially devastating consequences for individuals and for society. Few would argue against the need for more research on Ebola and the expedited development of a cure; however, recent incidents in biocontainment laboratories and the global proliferation of such facilities raise concerns about safety and have split the scientific community. Scientists who defend research on dangerous pathogens as vital to protecting populations are opposed by those who fear the devastation that an intentional or unintentional release of a pathogen from the lab could cause. Achieving a “culture of safety,” so often alluded to after recent lapses in biosafety procedures, demands adopting a “culture of responsibility” as well.

“Why are scientists…not required to have ethical training related to the potential risks of research to the public?”


Certainly, the U.S. Centers for Disease Control and Prevention has a far broader mission today than in the 1970s, when it investigated the first outbreaks of Ebola virus in Zaire and Sudan. Nevertheless, its labs, and every lab worldwide, must enforce a culture of safety and responsibility. Lapses have led to illnesses and deaths among lab workers and in the community, as in 2004, when the mishandling of the virus that causes severe acute respiratory syndrome resulted in tertiary infections and the death of an attending physician in China. The risks must be reduced, even if they cannot be eliminated. Training and safe lab operations require greater emphasis. Infrastructure, not just innovation, is critical.

“Gain-of-function” research, in which pathogens are altered to increase transmissibility, has raised the debate about research on dangerous pathogens to an even more contentious level. Those in public health must develop a better understanding of the real and potential benefits of such research, and those engaged in the research should acknowledge more fully its real and potential risks. But a far wider discussion of gain-of-function research is needed. Some have called for a meeting like the Asilomar Conference on Recombinant DNA in 1975, where biosafety guidelines were drawn up. Others have called for greater engagement by the U.S. National Academies, whose work, including the 2003 “Fink report” on research standards and practices for potentially dangerous applications of biotechnology, has been so vital. Some propose that members of the U.S. National Science Advisory Board for Biosecurity, who have already contributed greatly to the discussion, should be engaged but report to an entity independent of funding decisions. All of these recommendations have merit, and there is no one solution. No single meeting or organization is likely to grapple successfully with the conundrum of weighing the risks and benefits of certain lines of research. Meetings held as isolated events have the potential to be more divisive than constructive.

When the Human Genome Project began and complex ethical issues arose, an ongoing program to study the Ethical, Legal, and Social Implications (ELSI) was funded as an integral part of the project. Similar dedication to scholarly work on ethical and societal issues surrounding the controversial study of dangerous pathogens, including gain-of-function research, should be robustly supported. The nascent (and now defunct) Policy, Ethics and Law cores of the Regional Centers of Excellence for Biodefense (funded by the U.S. National Institute of Allergy and Infectious Diseases) offer examples of the contributions that such support can yield, including the engagement and training of scientists in ethical issues related to gain-of-function research, biosafety, and biosecurity.

Why are scientists required to understand the individual risks to participants in a clinical trial but not required to have ethical training related to the potential risks of research to the public? This is a fundamental disconnect in the ethics education of scientists and in the review process of protocols.

Scientists conduct work for the benefit of humanity. When the balance between risks and benefits is unclear, as it currently is, should we not adhere to the principle of “first, do no harm”?

