Journalist Bryan Walsh explains why the threat of extinction is bigger than you think — and why we need to prepare now.
The odds that humanity will go extinct in the near future are low — but not zero. And unless we prepare now, we could manage to turn a survivable catastrophe into one that promises doom for all of us.
That’s the basic argument of End Times: A Brief Guide to the End of the World, a comprehensive, terrifying, but ultimately hopeful new book from Bryan Walsh. A longtime reporter at Time magazine, now an editor at the science magazine OneZero, Walsh goes through every major threat to humankind’s survival, from climate change to nuclear war to “supervolcanoes,” and lays out how each could destroy us, how the threats compare to each other, and what, if anything, we can do now to prevent doom.
These may seem like very unlikely catastrophes, and in some cases they are. But a threat that could end humankind is worth taking seriously, even if the probability is low, simply because the stakes are so incredibly high. And between massive nuclear arsenals and synthetic biology, humans have never had more power to destroy themselves than we do now.
Walsh and I spoke over Skype about the book earlier this month; a transcript, edited for length and clarity, follows.
When you went through these risks one by one, did your sense of which ones were the most serious threats change at all? Were there threats that are bigger than you had thought going in?
I started not too long after Donald Trump was elected. Nuclear war is something that — I grew up in the 1980s and I worried about it, and it was something I was fearful of for sure, and then kind of assumed it went away. I knew that it was a possibility, but it sort of ceased to be an active danger the way it had been for so many years after 1945.
Now I think we can really see that that’s not the case. In terms of how politics have changed, in terms of Donald Trump coming into office, you see that risk in a much more real way. I talked to [former Defense Secretary] William Perry in this book. He lived through the Cuban missile crisis working as an analyst. Now he is 91, and he’s coming back and saying, you know, we have to be wary of this again. The current situation is analogous to, or potentially even riskier than, some of the darker days of the Cold War. I definitely left feeling much more worried about that.
And everything around biotechnology worries me more and more each day. You know, if I were to think of what’s the biggest risk in the near-term future, that would be it.
Did you find some threats that were not as imminent as you had thought going in?
Absolutely. I had been a reporter who worked on climate change for a number of years when I was at Time magazine. I didn’t come away [from writing the book] thinking the climate doesn’t qualify as an existential risk. But I did come around to the idea that it’s not framed in the right way now.
The idea that climate change is something that could end us in a very short period of time is currently getting a lot of attention in the media, in some parts of the scientific community, and definitely in some parts of the activist community. I suspected that wasn’t the case before. It’s definitely not the case now, having spent more time researching it.
[That conclusion] didn’t necessarily make me feel better. Rather, it made me realize that it’s something we have to worry about over the long term. I also came to the realization that the way we’ve been going at it is not going to be terribly successful. What I saw going to [climate] conferences was how hard it really is to get leaders — really, most people on this planet — to act in a way that’s going to restrict their own growth, their own desire to use energy.
I think we have more time, but it’s something where we need to think about larger-scale techno fixes, because I don’t really have a lot of confidence in humanity’s ability to grapple with a risk that always seems to lie in the further future.
Early in the book, you have this quote from the philosopher Derek Parfit asking how much worse it is for all of humanity to die than for 99 percent of humanity to die. Parfit took the stance that it’s dramatically worse if we go extinct, full stop.
Could you lay that out a little bit and explain your own thinking on that question? I do think it affects how much you care about these risks relative to each other, given that some of them might just kill a ton of people but not literally everyone.
It’s hard to wrap your head around, because the idea of a planet where 99 percent of humanity is gone seems about as bad as we could possibly imagine it.
What this requires us to do is actually start thinking about the future. It requires us to realize that if we end now, if either by our own action or our own inaction we allow extinction to occur, then yes, obviously anyone who’s alive now will be the first victim. But then add up the millions, billions potentially — if you really want to believe the transhumanist side of the argument, trillions — of people who could yet live. If we go extinct now, that’s it. It’s over. All that’s gone. I think that is a very effective way to make you understand it.
And it goes against the current of human psychology, which is that we don’t really think about the far future very much. In mainstream economics, we don’t value it very much. I talk about that a little bit in the climate change chapter, in terms of how we discount damages into the future. But at the end of the day, if we’re gone, that’s it. We lose potentially infinite value.
You talk at the end of the book a bit about strategies to avoid outright extinction if there’s some kind of apocalyptic event. Why is that important? What are the kinds of things that seem promising as fail-safes to make sure that some last gasp of humanity keeps going?
It is the response to the scenario Parfit lays out. If there’s a chance to make the difference between 99 percent dying and total extinction, you need to take it.
In terms of the strategies you can use, first and foremost is to look into this question around alternative foods. Kelsey Piper on your site had an interview with David Denkenberger, who I’ve talked to as well, looking at this question. It’s important to do that work now, to figure out how we’ll be able to grow food in the event of one of several catastrophes that could essentially block out sunlight and end agriculture for a certain period of time. We can try to respond to that in the moment, but I don’t think we’d do very well.
The other is refuges, which is something that Robin Hanson has talked about — the idea of keeping a bank of human beings, for lack of a better term, who can start this whole thing over again should the worst happen. These aren’t shelters in the way that we tend to understand them via movies or doomsday preppers. This would be something that can be truly protected from anything that would happen, and where you’re doing it in a forward-thinking way. Imagine a kind of national service where you are going to be the bank of humanity and you’ll be down there for, I don’t know, two years, and you’re just rotating in and out.
During the nuclear era, civil defense kind of got a bad name. The government said, “Go duck and cover and this way we’ll survive a nuclear war.” And you had people like Herman Kahn talking about “tragic but distinguishable postwar states.” But we all came to realize, especially once the Cold War really hit its peak, that given the sheer number of warheads out there, that was absurd — most likely you weren’t going to be able to survive. But that doesn’t mean we shouldn’t try to prepare in advance for other potential disasters.
You’re right that civil defense has gotten a bad rap, and preppers have really gotten a bad rap. Is part of what you’re saying here that we need to sort of reevaluate that, or come up with a new kind of prepper identity that’s somewhat more serious and more committed to playing the role of a reserve in case of catastrophe?
The nature of doomsday prepping in the US seems like a very American phenomenon, which is, “We’re going to hole up, we’re going to have our guns, we’re gonna have our supplies, and somehow we’ll endure this on our own.” There’s also the high-end Silicon Valley wealthy-person version of this, where you have your own ranch and you have your New Zealand bolthole and you’ll just get there by private plane and you’ll just survive the end of the world.
But what I’m talking about is something that really requires an actual national or even global movement. That’s probably why it’s so hard to do. We don’t really want to do that in the US — I don’t think we really trust the government to do that.
These catastrophes are not something that you can survive on your own. You have a serious nuclear winter, you have a major supervolcano, an asteroid strike where you’re looking at significantly reduced temperatures for a long period of time — you’re going to run out of food just like everyone else. That’s where coming together in a national way can make the difference. It’s not about you surviving, of course, it’s about this species surviving. It’s about there being a future, which again is very much not the sort of individualistic model of the prepper.
Let’s talk a bit about the chapter on AI risk, since this is where I find I lose a lot of people who aren’t effective altruist types already persuaded that it’s a serious threat. I think there’s a sense that nuclear weapons and climate change are serious, but why are all these Silicon Valley types worried about these robots that don’t exist yet? Walk me through your argument there and how you became convinced that this was important.
I still don’t actually know if there’s a risk for sure. It may be that machine intelligence is beyond us in some kind of way. Not just, “It’s not gonna happen in the next 10 to 15 years,” but, “It’s not gonna happen for any conceivable future.” That’s one thing that makes it hard to come to grips with.
But if it is possible, then you’re looking at something that would potentially get out of control very quickly. You can kind of prepare for or try to survive most [catastrophes]. But if you have a truly super-powerful artificial intelligence that gets loose and goes wrong in the kind of way that’s outlined by those who do write about this, then there’s no hiding from that. So what you need to do is take the steps now to try to shape it in some way.
Author: Dylan Matthews