We’re bad at predicting the future and there’s no way around it

Technology improves over time, but it’s hard to know what that means when it comes to calculating the social cost of carbon.

How good will we be at genetically engineering improved crops in the year 2300? How expensive will air conditioning be? How many people will live on the planet?

These are essentially impossible questions to answer, of course. Technology improves over time, but the track record of forecasting specific improvements is distinctly mediocre. Social-science forecasts don’t fare especially well either. We’re best at concrete predictions about the moderately distant future when they concern very precise, measurable scientific phenomena — the rate of global climate change as a result of CO2 emissions, say, or the effects of dangerous chemicals on the ozone layer.

But we’re not very good at it at all when it comes to predicting the pace of innovation, or predicting things that depend on human choices and human priorities. Many crucial predictions that drove policy in the 1960s and 1970s — from concerns about peak oil to estimates about population explosions — turned out to be pretty much entirely incorrect.

This is an unfortunate state of affairs because big societal questions — like the social cost of CO2 emissions — depend on our ability to reason clearly about the moderately distant future. Scientists approach this in a methodical, careful way that’s still far from satisfying, but there’s no real alternative to trying.

The long-term future really matters, and it’s really hard to predict

The difficulty of answering questions about distant future technological and social trends is a huge — and, I think, underrated — challenge when we try to answer far more mundane policy questions, like, say, what the social cost of carbon is.

The social cost of carbon is a measure of how much harm is done in the world when an additional ton of CO2 is released into the atmosphere. That’s a combination of predicting how the additional ton of CO2 will affect the climate over the next several hundred years — which is fairly well understood thanks to a sophisticated science of climate modeling — and predicting how those climate changes will affect human well-being over the next several hundred years, which is much harder.
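Concretely, the standard way these estimates work (this is the general framing, not anything unique to one model) is to add up the extra damage that one additional ton of CO2 causes in each future year, with damages in later years typically weighted less heavily via a discount rate.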

In the most recent report by the EPA on the social cost of carbon, researchers considered global population trends and global income trends out through 2300 (which matters because wealthier countries can adapt more to climate change and also are more able to pay to avoid the deaths of their citizens). Other recent work estimates not only income and population, but also crop yields and mortality out through 2300.

I want to be clear: Those papers that I’ve linked above strike me as excellent, high-quality work. They look squarely at the core challenge of extraordinary uncertainty about the world’s future, and they handle it using the right statistical tools for the job. It’s normal for papers to report a very wide range of possible values. For example, this 2022 Nature paper assumes annualized per capita GDP growth will average 0.17 percent to 2.7 percent between 2020 and 2300 and that a ton of CO2’s economic effects on agriculture will be anywhere between −$23 and $263. (A negative value here means that maybe it’ll have a good effect on agriculture.) They’re able to do useful work even with these large ranges of possibilities.
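To get a rough sense of how much daylight there is between those growth figures once they compound over 280 years: at 0.17 percent a year, the average person in 2300 ends up only about 1.6 times richer than today (1.0017^280 ≈ 1.6), while at 2.7 percent a year they end up roughly 1,700 times richer (1.027^280 ≈ 1,700). That’s just the compound-growth arithmetic applied to the paper’s endpoints, not a figure from the paper itself.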

But are these ranges large enough? The year 2300 is as distant from us as the year 1746. Even the most reasonable statistical techniques would simply break down if you tried to use them in 1746 to predict the world we live in in 2023. I observed above that even our predictions about population trends from the 1960s weren’t very good. How sure should we really be that today’s forecasts of global population trends have finally cracked the puzzle?

There’s no way around trying to predict the future

In a recent, quite good criticism of the Nature climate modeling paper by Kevin Rennert et al., economist David Friedman argues that these models end up implicitly imagining a world without most forms of technological progress, simply because such progress would be impossible to model.

“Rennert sums costs over the next three centuries, with about two-thirds of the total coming after 2100,” he writes. “Their solution to the problem of predicting technological change over that period is, with the exception of their estimates of CO2 production and energy costs, to ignore it, implicitly assume technological stasis. That is the wrong solution — but any projection of technological change that far into the future would be science fiction not science.”

In other words, the staggering uncertainty associated with the year 2300 is a reason models of the social cost of carbon should be narrower in scope: they should primarily try to answer questions about the effects of climate on human lives over the next few decades, and should acknowledge that our uncertainty about the future makes their job nearly impossible by 2100, let alone 2300.

I’m sympathetic to this worry. When I think about the best guesses that people in the year 1746 could have made about the year 2023, it’s hard to imagine them coming up with anything they could usefully have acted on. And similarly, it feels hard to have much confidence in our modeling of the year 2300 and tempting to apply some kind of discounting factor for uncertainty that would end up making the contributions of the year 2300 to our social cost of carbon estimates quite small.
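To see how fast that kind of discounting bites: a 2 percent annual rate (a purely illustrative number, not one drawn from any of these papers) shrinks a dollar of damage in the year 2300 to well under half a cent in today’s terms, since 1.02 compounded over the 277 years between now and then comes to roughly 240.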

There’s no way around the core problem here. We have the power, as a civilization, to change the world we live in, in lasting and potentially irreversible ways; the decisions we make will quite likely affect our distant descendants. Guessing what crop yields will be in 2300 might be nigh impossible, and calculations of the social cost of carbon that depend on those guesses are going to come with extraordinarily high uncertainty.

But declaring that we won’t make those guesses doesn’t mean we don’t affect the world in the distant future, just that we’ve stopped trying to guess how. I tend to think it’s better to have inadequate guesses than no guesses, better to have large ranges of possibility than to treat a question as unknowable and therefore calculate as if its value is zero. But it’s important to proceed with an aching sense of the inadequacy of those guesses.
