Why the left should worry more about AI

Demis Hassabis, the CEO of DeepMind, the Google sister company on the leading edge of artificial intelligence research. | Kim Hee-Chul/Pool/Getty Images

It’s not just a weird libertarian obsession. Corporate AI is already warping our lives and our governments.

I spend a disproportionate amount of time reading and talking to two somewhat niche groups of people in American politics: democratic socialists of the Sen. Bernie Sanders variety (or maybe a bit to the left of that), and left-libertarians from the Bay Area who are interested in effective altruism.

These are both small groups, but their social and intellectual influence is bigger than their numbers. And while from a distance they look similar (I’m sure they both vote for Democrats in general elections, say), there’s one big issue on which they part ways, and where collaboration could be productive: artificial intelligence safety.

Effective altruists have, for complex sociological reasons I explored in a podcast episode, become very interested in AI as a potential “existential risk”: a force that could, in extreme circumstances, wipe out humanity, just as nuclear war or asteroid strikes could.

Kelsey Piper has a comprehensive Vox explainer of these arguments, and I take them seriously, but most friends to my left do not. A typical reaction is that of Elmo Keep, who dismisses AI doomsday arguments as “the stuff of stoned freshmen who’ve read too much Neal Stephenson.”

It doesn’t help that alt-right funder extraordinaire Peter Thiel has long supported research into AI safety, especially at the Machine Intelligence Research Institute, whose founder Eliezer Yudkowsky is nearly as polarizing as Thiel is.

But you don’t have to go full Silicon Valley libertarian to become convinced that the rising power of AI is a pressing social problem. There are distinctively leftist reasons to be worried.

The most obvious is that the best-funded developers of AI, at least in the US and the UK, are private companies. Insofar as the government’s involved in R&D, it’s largely through the Defense Department.

So: do we want DeepMind, a Google sister company, to be developing sophisticated wargaming technology with minimal regulation? Or do we want to democratically weigh how DeepMind should proceed, and set rules for it as a society?

But the leftist case for worry is broader than this. AI safety experts often characterize the problem as one of “alignment”: Is the goal for which an AI is optimizing aligned with the goals of humanity as a whole? To use an oft-repeated thought experiment: would a sufficiently powerful AI, told to produce (say) paperclips, end up hoarding resources, ultimately destroying the world to convert everything on Earth into paperclips?

That’s a ridiculous example and one that I find turns off a lot of the not-already-converted, including some eminent AI researchers who find talk of apocalypse overblown. But here’s a less ridiculous one.

Algorithms are already misaligned with human ethics, and it’s already a problem

Let’s say that Pittsburgh wants to respond more effectively to cases of child neglect, and to triage referrals so its case workers investigate the worst cases first. So the city builds a predictive model, trained on thousands of previous cases, that assigns a risk score to each new case to help prioritize it.

But using previous cases to train the AI means it winds up baking in a lot of prejudices from previous generations of case workers, who may have been biased against poor and black parents.

This method is not very accurate and produces a lot of false positives, but its air of scientific precision means that case workers have started to defer to the algorithm. It has gained power in spite of its weak empirical underpinnings.

This is not a hypothetical example. It is the story of the Allegheny Family Screening Tool (AFST), an algorithm used by Allegheny County, which encompasses Pittsburgh, to triage child abuse and neglect cases. Virginia Eubanks, a political scientist at the University at Albany, chronicles the system and its many limitations in Automating Inequality (2018), the best single book on technology and government I’ve read.
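
To make the mechanism concrete, here is a minimal “bias in, bias out” sketch. To be clear, this is not the actual AFST or its data; every feature, number, and variable name below is hypothetical, chosen only to illustrate how training on past screeners’ decisions, rather than on actual harm, can turn a poverty proxy into a “risk factor”:

```python
# A deliberately toy sketch of how a screening model can inherit bias.
# NOT the real AFST: all features, numbers, and names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 5_000

# Hypothetical features attached to past referrals.
uses_public_benefits = rng.integers(0, 2, size=n)  # crude proxy for poverty
prior_referrals = rng.poisson(1.0, size=n)         # past hotline calls

# The training label is not "was the child actually harmed" -- it is
# "did past case workers screen this family in." If past screeners were
# harsher toward poor families, that prejudice becomes the target.
actually_high_risk = rng.random(n) < 0.10
past_screener_bias = (uses_public_benefits == 1) & (rng.random(n) < 0.15)
screened_in = actually_high_risk | past_screener_bias

X = np.column_stack([uses_public_benefits, prior_referrals])
model = LogisticRegression().fit(X, screened_in)

# The fitted model now treats the poverty proxy as a "risk factor,"
# laundering yesterday's judgment calls into an objective-looking score.
print(dict(zip(["uses_public_benefits", "prior_referrals"], model.coef_[0])))
```

Run it and the coefficient on the poverty proxy comes out positive, even though the underlying risk in this toy setup was assigned at random, independent of poverty.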

Often cases like this are conceptualized through the prism of rights to privacy, or rights to oversight of algorithms; my colleague Sigal Samuel has done incredible work on this front. But fundamentally it’s an AI alignment problem of exactly the same kind, if not the same scale, as the ones MIRI and effective altruists worry about.

Some in the artificial intelligence research community have been urging researchers and advocates to make these connections. “Some critics have argued that long-term concerns about artificial general intelligence (AGI), or superintelligence, are too hypothetical and (in theory) too far removed from current technology for meaningful progress to be made researching them now,” Cambridge’s Stephen Cave and Seán S. ÓhÉigeartaigh wrote in Nature Machine Intelligence earlier this year. “However, a number of recent papers have illustrated not only that there is much fruitful work to be done on the fundamental behaviors and limits of today’s machine learning systems, but also that these insights could have analogues to concerns raised about future AGI systems.”

There is a left/libertarian alliance to be made in translating Bay Area worries about the long-run capabilities of AI into worries about the way it’s being deployed right this minute. We could be using these more primitive technologies to build a precedent for how to handle alignment problems that will become vastly more complex with time.

As the Silicon Valley libertarians set their sights on more specific, near-term cases, leftists should set theirs a bit wider. The automation of governance will not stop with Pittsburgh’s family services system. It will be vastly more dramatic in 10, 20, or 30 years.

And the only way to ensure alignment is to stop treating this as a silly issue, and actively engage with AI safety.

By Dylan Matthews
