Four-year-olds can learn things even the most intelligent machine can’t. It’s time AI researchers took note.
The mathematician and computer science pioneer Alan Turing hit on a promising direction for artificial intelligence research way back in 1950. “Instead of trying to produce a program to simulate the adult mind,” he wrote, “why not rather try to produce one which simulates the child’s?”
Now AI researchers are finally putting Turing’s ideas into action. They’re realizing that by paying attention to how children process information, they can pick up valuable lessons about how to create machines that learn.
DARPA, the Defense Department’s advanced research agency, is embracing this approach. It recently invited proposals from interdisciplinary teams of computer scientists and developmental psychologists, which will work to create AI systems capable of learning the things babies learn in the first few years of life.
To the developmental psychologist Alison Gopnik, this approach is the obvious way to go. She explains why in an essay titled “AIs Versus Four-Year-Olds,” which appears in the anthology Possible Minds: 25 Ways of Looking at AI. Noting that preschoolers can learn things even the most sophisticated AIs can’t, Gopnik argues that studying kids can give programmers useful hints about directions for computer learning.
I was drawn to Gopnik’s essay not only because of the catchy title, but also because she and I both majored in philosophy at McGill University (three decades apart), and because her piece is one of only three in the book written by women. The fact that women represent 12 percent of contributions to the anthology mirrors the gender imbalance in the machine-learning community at large, where only 12 percent of the leading researchers are female, according to Wired.
Gender may go some way toward explaining why it’s taken computer scientists nearly 70 years to act on Turing’s ideas about the importance of children in AI research. Kids have traditionally been considered the domain of women.
I spoke to Gopnik about how her research on children has been perceived by male scientists and how that may finally be changing to the benefit of AI research. A transcript of our conversation, lightly edited for length and clarity, follows.
There’s a lot of fearful talk out there about how AI poses a catastrophic risk to humanity. Do you think that fear is overblown?
I think it’s wildly overblown. We have a lot more to fear from natural stupidity than from AI. Obviously if you have, say, an AI system governing your missile launches, that’s something that could have catastrophic consequences, and it’s very legitimate to be worried about that. But the fantasy that these systems are going to develop a general intelligence and then enslave us all — I think that’s absurd.
Is that because you’ve gotten to see up close just how far our most sophisticated AIs are from the abilities of kids?
Yeah, that’s right. When you give AIs a large data set, they can figure out the difference between cats and dogs. That’s very impressive. But they struggle if you introduce a change, or if the data starts coming from a different source, or if you alter some high-order characteristic of the environment. They have what’s called “catastrophic forgetting” — when they’re trained on something new, they overwrite what they’d already learned and have to relearn it. But somehow kids can be facing a brand new task, something they’ve never seen before, and they can figure out what the right thing to do is.
So, for example, AlphaZero, amazingly, learned how to play chess. But if you change the rules — now the bishop can’t go diagonally, it can only go horizontally — a system like that has a really hard time learning the change, because everything it knows comes from all the games it has played before.
A human chess player, even a kid, will immediately understand how to transfer that new rule to their playing of the game. Flexibility and generalization are something that even human 1-year-olds can do but that the best machine learning systems have a much harder time with.
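The “catastrophic forgetting” Gopnik describes can be seen even in a toy model. The sketch below is a hypothetical illustration, not any real AI system: a one-parameter model y = w·x is fit by gradient descent on task A (y = 2x), then trained on task B (y = −x). Because the same single weight is reused, the task B updates overwrite the task A fit, and the model’s error on task A shoots back up.

```python
# Toy illustration (hypothetical) of catastrophic forgetting:
# one shared parameter, trained sequentially on two tasks.

def mse(w, data):
    """Mean squared error of the model y = w * x on (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, steps=200, lr=0.01):
    """Plain gradient descent on the MSE loss."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

task_a = [(x, 2.0 * x) for x in range(1, 6)]   # task A: y = 2x
task_b = [(x, -1.0 * x) for x in range(1, 6)]  # task B: y = -x

w = train(0.0, task_a)
err_after_a = mse(w, task_a)   # near zero: task A is learned

w = train(w, task_b)           # now train on task B...
err_after_b = mse(w, task_a)   # ...and task A is "forgotten"

print(err_after_a < 1e-3, err_after_b > 1.0)  # → True True
```

A human learner, by contrast, picks up the new rule without losing the old skill — which is the gap Gopnik is pointing at.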
Why are some of the problems that are easy for kids so hard for computers?
It’s called Moravec’s paradox: The things we thought would be hard — like playing chess — are things machines can do very easily, yet they can’t do things a 4-year-old can do.
The puzzle for developmental psychology is that we don’t really understand how it is that kids can easily do what they do. We know they can make these wide-ranging inferences from very small amounts of data. They’ve got a lot of [innate] knowledge through evolution. But how can they make good inferences about things that weren’t part of our evolutionary ancestry? Look at a 4-year-old with a smartphone. They can manage it better than you can! This system hasn’t been out in the world before, yet kids are very good at mastering it. We don’t have a clue how that’s possible.
So given that we know very little about how kids learn the stuff they learn, what hints do you think AI researchers can pick up from paying attention to kids’ learning?
One difference that could be very relevant is that the AI only exists inside a computer. The kids are out in the real world. We know they spend a lot of time doing experiments and getting data that’s relevant to the problems they’re trying to solve. They’re curious. They’re running to get that smartphone, poking it and swiping it and figuring out how it works. Same thing with a pile of blocks or a plug.
Psychologists call that active learning. That ability to go out into the world and experiment, rather than just taking the data someone’s presented to you — that’s something that’s very distinct about how children learn.
People in AI are totally becoming cognizant of that now. My colleagues and I are collaborating with people at Berkeley to see if they can design an AI that’s curious. Not just picking up things from the data it’s fed but going out to find information based on something that doesn’t fit with what it already knows.
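The advantage of the curiosity-driven, active learning Gopnik describes can be sketched in a toy setup (hypothetical, not the Berkeley project itself): a learner must find an unknown integer threshold t by asking yes/no questions of the form “is x ≥ t?”. A curious learner always asks about the point it is most uncertain about — effectively binary search — while a passive learner just takes whatever random data points come along.

```python
import random

def active_queries(t):
    """Curious learner: always query the midpoint of the remaining
    uncertainty interval. Invariant: lo < t <= hi. Returns #questions."""
    lo, hi, n = 0, 100, 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        n += 1
        if mid >= t:   # answer to "is mid >= t?"
            hi = mid
        else:
            lo = mid
    return n           # hi == t once the interval closes

def passive_queries(t, rng):
    """Passive learner: updates its interval from random data points."""
    lo, hi, n = 0, 100, 0
    while hi - lo > 1:
        x = rng.randrange(1, 100)
        n += 1
        if x >= t:
            hi = min(hi, x)
        else:
            lo = max(lo, x)
    return n

t = 37  # the unknown threshold
print(active_queries(t), passive_queries(t, random.Random(0)))
```

The active learner pins down any threshold in at most seven questions (log2 of 100 possibilities), while the passive learner typically needs vastly more — a crude analogue of the child who runs over and pokes the smartphone rather than waiting to be shown data.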
Another difference you highlight between kids and AIs is that kids are social learners, right? They don’t learn in isolation — they learn from other kids in their preschools and from adults.
Right, children are always learning from other people. And it’s not like they’re mindlessly imitating those around them. They’re using theory of mind when deciding whether and how to learn from other people — they’re understanding what’s going on in other people’s minds. Is this person trying to teach me [to perform a certain action,] or are they doing it accidentally?
The machines aren’t even in the same ballpark as being able to understand basic theory of mind, let alone using those inferences to shape whether they would learn from someone else or not.
Kids also don’t just passively obey their teachers or parents; they bend and break the rules. That’s part of creativity.
Creativity is something we’re not even in the ballpark of explaining. My example is Augie, my grandson. When he was 4, he was talking to his grandfather, who said, “I really wish I could be a kid again.” Augie thought about it and said, “Don’t eat any broccoli or green beans. Then you can be a kid again!” He was thinking: Everyone tells me that if I eat healthy food I’ll grow to be a big, strong adult, so maybe if you do the opposite, you can be a kid again.
That’s something an adult would never think of, but it’s not just crazy or random. That ability to generate ideas that are genuinely different from what you’ve been taught before but are not totally random, that’s real creativity, and we don’t have a clue how to understand that. Even the simple “kids say the darnedest things” variety — we don’t understand it.
I think the creativity issue is really core to people’s difficulty grappling with AI. Many of us want to believe there’s more to the human mind than “just” computation. There’s emotion, there’s the ability to express emotion through art. … People have a resistance to the idea that we’ll be able to build a machine that can do that.
We already know there’s a machine that can do that. We’re it! We are this system of relays and neurons that produces creativity and all the rest of it. So we have an existence proof that there is such a machine.
The best way we have for explaining how it’s possible for a machine to do all this is to say it’s a machine that does computation. If that big bet is right, we’ll be able to figure out how things work.
Maybe computation will turn out to be wrong. Maybe the panpsychist argument about consciousness [that it’s a universal feature of all things in the natural world] will turn out to be right. That seems less likely — we have no evidence for it — but I don’t think it’s impossible. I just don’t think it’s necessary to believe there’s something other than computation. It’s an incredibly productive way of thinking about how a biological system can do what we do. And I don’t think its productivity is anywhere near being used up.
I really appreciate the observation that we already make machines that think — we make children, and they’re biological machines and they have consciousness and they have creativity! And given that, it doesn’t seem inconceivable to me that we’ll one day create our children’s mechanical analogues. But I don’t think this way of framing it — AI as just another sort of child, or the child as a biological machine — has really soaked into the public consciousness yet. Why is that?
I’ll be political for a second and say, when I talk about my research to scientists, people still come up to me and say, “You should talk to my wife, she’s a preschool teacher.” I mean, yeah, I probably should, because preschool teachers do fascinating and important work. But it reflects an intellectual pecking order — anything to do with kids is soft, it’s women’s work, it’s at the opposite end of the pecking order from computation.
Do you know that in the 1967 Encyclopedia of Philosophy, children just don’t even show up at all? You’d think humans had reproduced by asexual cloning or something! That way of thinking is still very much out there in the world.
I know what you mean. When I was studying philosophy at McGill — this was a decade ago — there were not a lot of female students in the department, and the few who were there didn’t speak up in class half as much as the guys. I always felt like it had an impact on what kinds of topics were even considered part of philosophy.
I’m 63, so I was in philosophy at McGill in the 1970s, and this has kind of been my dilemma my entire career. I remember thinking to myself, you could go do computation theory or philosophy of mind. But you could solve the same problems by thinking about kids, and that’d be a way of saying: These kids who have not been taken seriously — let’s treat them with the same intellectual seriousness. I very self-consciously decided to take the route that has always been part of what women do, and to show that it’s as intellectually profound as what the guys have done.
Do you feel like you’ve succeeded?
Well, the guys are still telling me to talk to their wives. They’re still treating developmental psychology as if it’s something suitable for an education department. But I do think it’s really changed a lot. The fact that there are more women in computer science and philosophy is making a big difference. There’s no question that people in computer science are excited about this now and are recognizing that children are really important.
It’s appealing to think that 4-year-olds, who are the lowest on the totem pole, the most philosophically neglected, might turn out to actually be the secret to solving problems that these very high-status geeky male computer scientists can’t solve. I think there’s a nice moral inversion there.
Author: Sigal Samuel