Stephen Hawking’s final warning for humanity: AI is coming for us

His posthumously published book about our future offers fear and surprising optimism.

Stephen Hawking has a final message for humanity: If robots don’t get us, climate change will.

Hawking, who died at age 76 of a degenerative neurological disease earlier this year, offers his parting thoughts in a posthumously published book called Brief Answers to the Big Questions, which comes out Tuesday. It’s a message worth heeding from a man who is probably the most renowned scientist since Einstein, best known for his work on the physics of black holes. Hawking’s book A Brief History of Time sold more than 10 million copies and tackled questions as big as “How did the universe begin?” and “What will happen when it ends?” in language simple enough for the average reader.

In an excerpt published in the Times of London over the weekend, he’s funny and optimistic, even as he warns us that artificial intelligence is likely to outsmart us, that the wealthy are bound to develop into a superhuman species, and that the planet is hurtling toward total uninhabitability.

Hawking’s book is ultimately a verdict on humanity’s future. At first blush, the verdict is that we’re doomed. But dig deeper and there’s something else here too, a faith that human wisdom and innovation will thwart our own destruction, even when we seem hellbent on bringing it about.

The robots might be coming for us

Hawking’s biggest warning is about the rise of artificial intelligence: It will either be the best thing that’s ever happened to us, or it will be the worst thing. If we’re not careful, it very well may be the last thing.

Artificial intelligence holds great opportunity for humanity, encompassing everything from Google’s algorithms to self-driving cars to facial recognition software. The AI we have today, however, is still in its primitive stages. Experts worry about what will happen when that intelligence outpaces us. Or, as Hawking puts it, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

This might sound like the stuff of science fiction, but Hawking says dismissing it as such “would be a mistake, and potentially our worst mistake ever.”

Compared to robots, we humans are pretty clunky. Limited by the slow pace of evolution, we take generations to iterate. Robots, on the other hand, can improve on their own design much faster, and soon they’ll probably be able to do so without our help. Hawking says this will create an “intelligence explosion” in which machines could exceed our intelligence “by more than ours exceeds that of snails.”

Many people assume the threat of AI lies in it turning malevolent. Hawking disabuses us of this concern, saying that the “real risk with AI isn’t malice, but competence.” Basically, AI will be very good at accomplishing its goals; if humans happen to stand in the way, we could be in trouble.

“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants,” Hawking writes.

For those still unpersuaded, he offers an anecdote. “Why are we so worried about AI? Surely humans are always able to pull the plug?” a hypothetical person asks him.

Hawking answers: “People asked a computer, ‘Is there a God?’ And the computer said, ‘There is now,’ and fused the plug.”

The end of life on Earth?

If it’s not the robots, it is “almost inevitable that either a nuclear confrontation or environmental catastrophe will cripple the Earth at some point in the next 1,000 years,” Hawking writes.

His warning comes on the heels of last week’s alarming Intergovernmental Panel on Climate Change (IPCC) report, which found that we have only 12 years to make changes massive enough to keep global warming to 1.5 degrees Celsius. Without such changes, extended droughts, more frequent tropical storms, and rising sea levels will only be the beginning.

Runaway climate change is the biggest threat to our planet, he says, and we are acting with “reckless indifference to our future on planet Earth.”

In fact, we might not have a future at all, he says, warning us not to put all our eggs “in one basket.” And yes, that basket is planet Earth. Even if humans figure out a way to escape, “the millions of species that inhabit the Earth” will be doomed, he says. “And that will be on our conscience as a race.”

Hawking’s other warning is no less menacing: we are entering a new phase of “self-designed evolution.” In this stage, we will be able to cast off the chains of traditional evolution and start changing and improving our own DNA now, rather than waiting hundreds of thousands of years.

As with AI, the ability to edit our own DNA holds the potential to fix humanity’s greatest problems. First, and likely in the near future, we’ll be able to repair genetic defects, editing out genes for diseases like muscular dystrophy and amyotrophic lateral sclerosis, or ALS, the disease Hawking was diagnosed with in 1963. He predicts that within this century, we’ll be able to edit intelligence, memory, and length of life. And that’s when things could get really complicated.

Hawking calls the people who will do this “superhumans,” and they’re likely to be the world’s wealthy elites. Regular old humans won’t be able to compete, and will probably “die out, or become unimportant.” At the same time, superhumans will likely be “colonizing other planets and stars.”

If that all sounds pretty depressing, it is. But even as Hawking serves up an apocalyptic prognosis for the planet and everyone on it, his brand of optimism comes through. He has faith that “our ingenious race will have found a way to slip the surly bonds of Earth and will therefore survive the disaster.”

He even believes that, instead of being terrifying, these possibilities are thrilling, and that the prospect “greatly increases the chances of inspiring the new Einstein. Wherever she might be.”

Figuring out a way off planet Earth, and maybe even out of the solar system, is an opportunity to do what the moon landings did: “elevate humanity, bring people and nations together, usher in new discoveries and new technologies.”


Author: Abigail Higgins

