Biologists are trying to make bird flu easier to spread. Can we not?

This research into viruses could help us understand pandemics better — or it could cause one.

Bird flu is a deadly virus with the potential to spark a global pandemic. Now, thanks to the US government, two labs that have been trying to find ways to make it more dangerous will resume their experiments after years on hold.

It’s a troubling development, and one that highlights the risks of something called “gain-of-function” research: research in which pathogens are manipulated to change their capabilities, usually to make them more transmissible or more deadly.

Science Magazine last week broke the news that the US had quietly approved the two dangerous and controversial experiments. One of them will begin within the next few weeks; the other is expected to begin later this spring. The two had been on hold since 2012 amid a fierce debate in the virology community about gain-of-function research, and in 2014 the US government declared a moratorium on such research.

That was a bad year on the biohazard front. In June 2014, as many as 75 scientists at the Centers for Disease Control and Prevention were exposed to anthrax. A few weeks later, Food and Drug Administration officials ran across 16 forgotten vials of smallpox in storage. Meanwhile, the “largest, most severe and most complex” Ebola outbreak in history was raging across West Africa, and the first patient diagnosed in the US had just been announced.

It was in that context that scientists and biosecurity experts found themselves embroiled in a debate about gain-of-function research. The scientists who do this kind of work argue that making diseases deadlier in the lab helps us anticipate the deadly diseases nature might produce. But many observers, then and since, have concluded that the potential benefits of the research, which look limited, just don’t outweigh the risk of kicking off the next deadly pandemic ourselves.

Though internally divided, the US government came down on the side of caution at the time. It announced a moratorium on funding gain-of-function research, putting potentially dangerous experiments on hold so the world could discuss the risks this research entailed.

But in 2017, the government released new guidelines for gain-of-function research, signaling an end to the blanket moratorium. And the news from last week suggests that dangerous projects are proceeding.

Experts in biosecurity are concerned that the field is heading toward a mistake that could kill innocent people. They argue that, to move ahead with research like this, there should be a transparent process with global stakeholders at the table. After all, if anything goes wrong, the mess we’ll face will certainly be a global one.

The need for caution in biological research

In the early 1970s, biologists were struggling to come to grips with the implications of new techniques in their field. It had become possible to splice DNA from a virus into an unrelated bacterium. What they weren’t sure of was whether doing so was a good idea.

“They didn’t know if they were going to make a fitness advantage that might spread in the wild,” Kevin Esvelt, a researcher at MIT and a pioneer of the gene-editing tool CRISPR, told me. “They didn’t know if they might make a supervirus. They had no idea whether genes spread across species.”

So in 1974, they called for a voluntary moratorium on experiments with recombinant DNA. The new techniques, they believed, could be powerful forces for good, as they indeed turned out to be, enabling the modification of crops to feed more people.

But at the time they simply didn’t know enough yet to be sure that these techniques weren’t also incredibly dangerous. The moratorium was controversial, but it was universally obeyed, and the following year scientists, ethicists, religious leaders, and policymakers met at the Asilomar Conference in California to come up with guiding principles for the field.

Today, we know that those early experiments in genetic engineering were in fact safer than scientists at the time thought, and it’s easy to look back on the Asilomar Conference and the moratorium as unnecessarily paranoid.

That’s dead wrong, Esvelt argues: “Based on the information they had, it was the right call.” When you’re just venturing into a new field and you don’t know how deadly your research might be, you should be exceptionally cautious. As you learn more, you might find that not all of that caution was necessary, but that’s much better than pushing ahead recklessly.

Pushing ahead recklessly is what we’re doing today, and the stakes are higher now. Some scientists are studying diseases with the potential to become pandemics that could kill millions. To better understand how diseases spread, they’re sometimes making these pathogens more dangerous or easier to transmit among animals. Amid continuing controversy, this research is proceeding, and experts argue there isn’t enough transparency about why.

What gain-of-function research looks like

In the late 1990s, an Australian research team went to work on what was intended to be a contraceptive virus for pest control, targeting mice. Instead of sterilizing the mice, the engineered ectromelia (mousepox) virus the scientists were using killed the mice. All of them. (In viruses, lethality and transmissibility tend to trade off, so this was a bad outcome for pest control as well as an alarming one from a safety perspective.)

That left the researchers with a dilemma: should they publish detailed guidelines on how to recreate the deadly virus?

“We went away wondering what to do about it,” one of the researchers, Ian Ramshaw, recalled in an interview a decade later:

In those times there was no pathway in the structure of scientific institutions for resolving a case like this. I gave a talk at a retreat when all our researchers were there. I gave them the results and asked them, “What do we do? Do we publish or don’t we?” We came away with the consensus of the scientists, who probably weren’t qualified, that there was already so much out there that could be used by bioterrorists that, I think I can quote, “One more won’t make a difference.” We informed the military and we never heard anything back. They probably wondered “Who the heck are these people?” or “What the heck is this?”

So the research, which could have been dangerous, was published in 2001, a decision that struck some as profoundly ill-advised.

The ectromelia researchers stumbled across their dangerous discovery by accident. The next teams to raise questions about how to handle dangerous research were doing work whose hazards were easy to predict from the start: they were working with the influenza strain H5N1.

The deadliest influenza pandemic in history was the 1918 flu, estimated to have killed 50 million people. H5N1 has killed more than half of the people known to have been infected with it, and while it would likely become less lethal if modified to be more transmissible, it is very, very dangerous.

In 2011, two different groups of researchers announced plans to publish research in which they’d modified H5N1 to make it transmissible through the air between ferrets (the standard animal model for human flu transmission). Other scientists objected. In an open letter to the Presidential Commission for the Study of Bioethical Issues, signed by leaders in the field including several Nobel laureates, they argued that it is “morally and ethically wrong” to manipulate viruses to make them more deadly. The research teams studying H5N1 countered that a better understanding of the virus would allow for improved strategies to keep us safe.

In 2014, work like this was put on hold by the US government’s moratorium. But now the same two research labs have gotten the green light to continue: the lab of Yoshihiro Kawaoka, of the University of Wisconsin-Madison and the University of Tokyo, and the lab of Ron Fouchier, at Erasmus University Medical Center in the Netherlands.

What good does gain-of-function research do?

Advocates of this kind of gain-of-function research (not all gain-of-function research uses pandemic pathogens) point to a few things that they hope it will enable us to do.

In general, they argue that it will enhance surveillance and monitoring for new potential pandemics. As part of our efforts to thwart pandemics before they start, or before they get severe, we take samples of the viruses currently circulating. If we know which strains out there are the most dangerous, the argument goes, then we’ll be able to monitor for them and prepare a response if it looks like such mutations are arising in the wild.

“As coordination of international surveillance activities and global sharing of viruses improve,” some advocates wrote in mBio, we’ll get better at learning which strains are out there. Then, gain-of-function research will tell us which ones are close to becoming deadly. “GOF data have been used to launch outbreak investigations and allocate resources (e.g., H5N1 in Cambodia), to develop criteria for the Influenza Risk Assessment Tool, and to make difficult and sometimes costly pandemic planning policy decisions,” they argue.

“The United States government weighed the risks and benefits … and developed new oversight mechanisms. We know that it does carry risks. We also believe it is important work to protect human health,” Yoshihiro Kawaoka told Science Magazine.

Others are skeptical. Thomas Inglesby, director of the Center for Health Security at Johns Hopkins, told me that he doesn’t think the benefits for vaccine development hold up in most cases. “I haven’t seen any of the vaccine companies say that they need to do this work in order to make vaccines,” he pointed out. “I have not seen evidence that the information people are pursuing could be put into widespread use in the field.”

Furthermore, a virus has unimaginably many possible variants, of which researchers can identify only a few. Even if we stumble across one way a virus could mutate to become deadlier, we might miss thousands of others. “It’s an open question whether laboratory studies are going to come up with the same solution that nature would,” Esvelt said. “How predictive are these studies really?” For now, no one knows.

And even in the best case, the utility of this work would be sharply limited. “It’s important to keep in mind that many countries do not have mechanisms in place at all — much less a real-time way to identify and reduce or eliminate risks as experiments and new technologies are conceived,” Beth Cameron, the Nuclear Threat Initiative’s vice president for global biological policy and programs, told me.

With the stakes so high, many researchers are frustrated that the US government has not been more transparent about the considerations that led it to approve and fund the research. Is it really necessary to study how to make H5N1, with its eye-popping mortality rate, more transmissible? Will precautions be in place to make it harder for the virus to escape the lab? What are the expected benefits of the research, and which hazards did the experts who approved the work consider? “The HHS panel did not ask that any proposed experiments be removed or modified,” Science Magazine reported when it broke the story. But were any modifications or additional safety practices considered?

“The people proposing the work are highly respected virologists,” Inglesby said, “but laboratory systems are not infallible, and even in the greatest laboratories of the world, there are mistakes.” What measures are in place to prevent that? Will potentially dangerous results be published to the whole world, where unscrupulous actors could follow the instructions?

These are exactly the questions that the review process was supposed to answer. But since none of the reasoning was published, far from laying these questions to rest, the approval has left researchers worrying that these critical decisions are being made by people who underrate the risks.

“What I think is most important is for those deliberations to be made public so that we can understand them,” Inglesby said. “I believe the risks of proceeding in this way on this work are high, very high, and they are not outweighed by the potential benefits of the work. But what is most important now is for this process to be made transparent to the public and to the scientific community so they have a chance to decide if this work is worth the potential consequences.”

It’s no doubt frustrating for scientists to have their research in limbo. But it’s downright frightening that research which, if it went wrong, could result in millions of deaths is approved or rejected through a secret, black-box process, with no way for independent experts to weigh in. “The deliberations and rationale for doing this work need to be made public,” Inglesby said.

Cameron, who was involved in oversight under the US government guidelines and served as senior director for global health security and biodefense on the White House National Security Council staff, agreed. “I’m concerned with the lack of transparency on this in the public,” she said. “There should be a real understanding of why these experiments are necessary for public benefit where there could be a clear public risk.”

There are also the indirect risks. Esvelt fears that the life-saving work of virologists will be endangered, perhaps permanently, if a dangerous mistake claims innocent lives. “What are the costs to science, to trust in science, if we screw up on this? If a laboratory-made virus gets out and kills a lot of people?” he told me.

He asks his fellow researchers, “Do you think we have problems with anti-vaxxers now?” If vaccine research, however well-intentioned, results in dangerous diseases escaping the lab, things could get a great deal worse.


Author: Kelsey Piper
