Stanford psychology professor Jennifer Eberhardt, the author of Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do, says Nextdoor reduced racial profiling by 75 percent by introducing a tiny bit of friction for users.
No matter how well-intentioned their creators were, tech products can encourage and amplify existing racial biases. And Stanford psychology professor Jennifer Eberhardt says that companies can take meaningful steps to reduce that effect — although it may come at the cost of the twitchy virality that has helped them grow so quickly.
“Bias can kind of migrate to different spaces,” Eberhardt said on the latest episode of Recode Decode with Kara Swisher. “All the problems that we have out in the world and in society make their way online … You’re kind of encouraged to respond to that without thinking and to respond quickly and all of that. That’s another condition under which bias is most likely to be triggered: when you’re forced to make decisions fast.”
In her most recent book Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do, Eberhardt recounts how the local social network Nextdoor successfully reduced racial profiling among its users by 75 percent: It introduced some friction.
“They realized, after talking to me and other people and consulting the literature, that if they wanted to fight racial profiling on the platform that they were going to have to slow people down,” she said. “It used to be the case that if you saw someone suspicious you could just hit the crime and safety tab and then you could shout out to all your neighbors, ‘Suspicious person!’ Oftentimes, the person who was suspicious was a black man, and in the cases where it was racial profiling, this person was doing nothing.”
Requiring those users to complete a three-item checklist — which included an educational definition of racial profiling — shifted the “cultural norm,” Eberhardt explained, away from “see something, say something” and toward “if you see something suspicious, say something specific.”
“Companies have a huge role to play here,” she said. “I think they have a responsibility in all of this because of the power they wield. I mean, they cannot just affect one life but many, many lives, millions of lives. That checklist changed the mindset of all of these people now who are on Nextdoor. I think recognizing that power and marshaling that power is a good thing.”
Below, we’ve shared a lightly edited full transcript of Kara’s conversation with Jennifer.
Kara Swisher: Today in the red chair is Jennifer Eberhardt, a professor at Stanford University’s Department of Psychology. She studies the consequences of the psychological association between race and crime, and has written a book called Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do. Jennifer, welcome to Recode Decode.
Jennifer Eberhardt: Thanks for having me.
Do you want me to call you professor?
No, it’s okay.
I can if you like. I try to give everyone the titles they deserve. So this is a wider-ranging book that you’re talking about, but I wanted to focus in on tech, but I want to first talk a little about your background and how you got to this topic and why you decided to write this book. So why don’t we start with a little bit about you. How did you get into writing about this topic?
I feel we’re living through difficult times now as a country. The Pew Research Center released a report just recently in which they found that six in 10 Americans feel like race relations are generally bad in this country. And a majority of Americans feel like things are getting worse. And so we’re really struggling with these issues right now. And I wrote the book to speak to that struggle.
So tell me how you got there. How did you get to study this topic? You’re a professor at Stanford and how did you get to that spot?
I’ve been interested in issues of bias, issues of race and inequality since I was a little kid, actually. I grew up in Cleveland, in an all-black neighborhood. And when I was 12 years old, my parents announced we were moving to an all-white suburb called Beachwood. That was near the other place. It was actually just a bike ride away, but a world of difference in terms of resources and so forth.
Sometimes it’s just a highway, or a road.
Yeah, it’s definitely true. I think since that time, it just raised a lot of questions for me and I never stopped asking those questions, basically.
But you’re studying from a psychology point of view, there’s all kinds of ways to study this. Social science, things like that. You wanted to get at the heart of what causes bias and tell some of the stories around it. How did you get to Stanford? You studied and then …
Well, I came to Stanford from Yale. I had been there for a few years and my husband and I came together. He was starting out as a professor in the Law School. And so we’re both there and we raised our children there and all of that. So we moved from the East Coast to the West Coast basically because of the wonderful opportunities that Stanford provided for the two of us.
You’re also coming here at a time when tech has sort of amplified … I’ve always talked about weaponizing and amplifying a lot of bad feelings, all kinds of different things in society, and sort of fracturing a lot. Racial issues have been at the top of that list. But let’s talk a little bit. How do you define when you’re saying “biased?”
And when you say Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do, I’ve seen it not hidden. In Silicon Valley, they talk about “unconscious” bias. I think it’s entirely conscious, in a lot of ways. Talk a little bit about how you define it. How do you think about the topic overall?
This sort of “unconscious bias,” or what some people call “implicit bias,” can be defined as the beliefs and feelings we have about social groups that can influence our decision making and our actions, even when we’re not aware of it. That’s the key: it can be something that you’re acting on, something that’s really affecting you and guiding your behavior, without you actually being aware that there’s a bias there influencing what you think and what you do.
So that’s what we’re talking about. You were saying that you feel like it’s entirely conscious. I think …
I think the awareness of sitting around a table with people that are so homogeneous and not noticing it seems, it’s hard to believe sometimes. And at least in tech … but we’ll get to tech in a minute. But when you’re talking about, what you are trying to study here is uncovering where it comes from. So why don’t you talk a little bit about how you look at it from your studying point of view.
Right. So from a psychological point of view, I’m interested in looking at sort of multiple sources for it. And so one source just has to do with how our brains function, how we’re wired. And bias is kind of an outgrowth of that.
So our brains are built to categorize and we categorize all kinds of things, including people. And so we develop these social categories that we slap people in. And once we have those categories developed, we develop beliefs and feelings about those people in those categories. And so that’s called bias, right? That bias can then affect what we do and affect our decision-making.
So part of it just has to do with wiring, and we’re wired in that way because we are constantly confronting an overload of stimuli out in the world. We have to figure out a way to do pattern matching. We have to figure out a way to categorize things so that we can manage it better and the world becomes more predictable and so forth. And so there is a utility to it, if you will. But that categorization …
So if it was like “car,” “tree,” that kind of thing.
Or “danger” or “fire” or something like that. That makes a lot more sense than “people.”
But of course …
But it’s “people,” it’s not like we have a different way of dealing with people. It’s the same brain, right? And so we’re looking for shortcuts everywhere we can. And so categorization is a shortcut, stereotyping is a shortcut. It’s something that’s seen as sort of basic and it’s universal. Most researchers believe that regardless of the country, regardless of the culture, that people categorize, right? And they stereotype other groups, and the groups might change. The actual social groups that are present in that space might change.
And then also the stereotypes that you have about them may change. But the act of stereotyping is something that’s considered fairly basic.
Fairly basic, among everybody.
Yeah. Yeah. But again, the content of those stereotypes can shift quite a bit. And a lot of that has to do with the disparities that are out there in our society. And so we were exposed to those disparities. We develop associations between certain groups and certain types of jobs and that kind of thing. Or having certain traits. And so that is something that is more culturally specific and that can do a lot of harm.
Right. Talk a little bit about some of the major themes you think are important to think about when you’re thinking about it. Because I think it does apply. The reason I think it’s important in tech, you have things like facial recognition coming. You have social media, which has bias, things like services that have bias, Airbnb, Nextdoor, things like that. Can you put the idea of bias in a bigger context? You’re initially saying, this is something people just do for comfort’s sake or to make sense of the world they live in.
Yeah, it’s something they do to function in the world.
Function, right, yeah.
So for example, you can develop, say, an association between African Americans and crime. This is something that I look at a lot and have done a number of studies on, and that association can lead you to see weapons more clearly, for example. If you just expose someone to an African American face, a blurry image of a gun becomes clearer to them.
It can also blind you to certain things. These biases can actually influence something as basic as what you see. But they can also influence how you treat people. They can also influence whether teachers are going to discipline a child in school. It can influence whether you get hired, whether you get promoted. It can influence jurors in making death penalty decisions. All kinds of things in life. It can do so in ways that sometimes are beneath your awareness and sometimes can lead to great harm.
So talk about the ones that are these hidden beneath your awareness. How does that happen from a psychological point of view?
You have these associations that can get activated. The association between African Americans and crime, for example, might be an association that people know about, that they’re aware of. But they’re not always aware of how that association is influencing how they’re making decisions about various things.
So it’s baked in and they don’t know it’s baked in there.
Exactly right. They’re applying it and they don’t know that they’re applying it, that kind of thing. They can think that they’re acting in a way that’s completely fair and they can also not be motivated at all to act in a way that’s unfair, right? Their intentions might be good, but then you can still be influenced.
So how do you realize that? Realize these hidden prejudices and then get rid of them? Because presumably what happens is then someone points to them and says, just like right now, lately, “I’m not racist.”
And then racist actions happen.
How do you uncouple those?
Yeah, it’s hard because it’s one of those things where, when it becomes hidden, it’s hidden not only to the person who might hold the bias and be acting on it, but it also can be hidden to the target of bias. And so it makes it especially difficult to untangle. And that’s one of the reasons that we as social scientists like to study this in a controlled way in a laboratory sometimes, so we can rule out all these other factors that it could be. You could have made some decision based on all kinds of non-racial reasons, right? And it may seem reasonable, but then we can notice in these studies a pattern develop where, okay, for African Americans in the same situation, you’re evaluating them in a more negative way or whatever it is.
And so we can look at how these biases can take shape accounting for all these other differences in the situation that people can point to to say, “Well, my behavior is really motivated by X or Y, or whatever it is. That has nothing to do with race.”
Right. And so when you have a situation like that, when you have these hidden biases that shape what we think … first what we see. There was an expression that was used for me: “Believe what you see; don’t see what you believe.” Something like that. You believe what you actually see versus just what you think you see.

It struck me at the time. It was an editor who was giving me that advice in terms of reporting, because a lot of people go into reporting deciding what the story is before … not seeing what actually is happening.
How do you push against that? Because that seems like you’re talking about sort of an innate human trait to do these things.
Well, that’s the thing about bias. I feel like when you say that it’s something that we do as humans, it’s sort of part of how we function.
I don’t think that, I think it’s on purpose.
Okay. Well, it’s both, right? It makes it hard to fight it too. But I think the issue is, when bias is conscious, you know what you know and you sort of think what you think, right? And you feel what you feel and so it’s on the table. And so you might argue with a person about whether it’s right or wrong, but then they say, “Well, this is my belief about this certain group.” Or, “This is how I feel about this certain group.”
The issue that gets stickier is, if those associations that you’ve picked up out in the world, say, about African Americans and crime, again, since we’re talking about that, that they start to influence you in ways where you can’t actually detect it. And so you think that, “Oh, well, how I’m thinking or how I’m feeling, it’s just kind of the way things are.”
Right? Right. So it’s not the thing that you were talking about earlier, where believing is seeing. They believe that they’re seeing what’s there, what’s present.
The truth, what’s present. So fast-forward to today. Right now, one of the things you were talking about was the idea that things have gotten worse, or that people feel they’re worse. And it does feel that way; I think most people feel that. Why do you think that is now? I want to get into the next section talking about tech, because I think it has to do with the proliferation of social media and everything else. Why do you imagine? Because these issues have been around for a long time. What has happened? What has changed?
I think the worry is that … things are more polarized than they used to be, and I think people are worried about also our norms shifting. And so we used to have these egalitarian norms, and we were really proud of that as Americans and so forth.
Well, alleged egalitarianism.
Right. I mean, so this was the ideal. I’m not saying we always reach that ideal, certainly, but it certainly was a norm that people valued. And now I think people worry that that’s eroding. It’s actually something that doesn’t define us as Americans even anymore. And so, I think that’s part of the issue because once the norms start to go, then our behavior can follow that. And our behavior can follow that, even when we pride ourselves on it being egalitarian, because we’re social creatures and the social environment that we’re in matters.
And so, even if we see ourselves as egalitarian and we’re in a situation, an environment, that is less so, we become less so over time. And so I think that’s the concern: this fight over who we are as Americans and what we should be trying to uphold, even if we don’t always make it there. This idea that this is something to strive for. Once that starts to erode, people get worried.
And why do you feel it’s starting to erode now? Where are we currently in this?
Yeah. So this is the other thing about biases that it’s not something … so we’re all vulnerable to bias, right? But it’s not something that we’re acting on all the time. So there are situations that can trigger it and there are situations that can kind of keep it at bay. And there are lots of things that are happening in the world now where we’re triggered by that bias more than we used to be.
Talk about that.
So, for example, again, the changing or the eroding of the egalitarian norms, that’s a worry. That’s an issue. So that’s a situational trigger of bias. I think also, people are feeling under threat. People are fearful, people are stressed. So even our emotional states can make it more likely that we will act on bias. When people are worried about their status in the world, and about whether they’re going to be able to maintain their way of life and the way of life that they want for their children, that tends to lead to more bias. And so there are lots of conditions of the world that are pushing us in this direction.
How do you un-push it? Because we’ve got a president who is saying things that now people are calling racist, for example. “Bias” is a loaded word. “Racist” is the most loaded word. And not even loaded. It’s what it is. When you’re in that environment, when that happens and then people are arguing over it, how do you remove yourself from it? How do you get out of that?
The thing that’s interesting to me about that is that it also shows that what’s explicit versus implicit bias, that that’s also shifting too. The line is blurring. The line is shifting.
It’s all explicit.
Well, things that we used to think about as explicit and we would all agree, okay, that’s blatant, that’s explicit, now we’re arguing about whether it is. And so that’s what I mean by “the line is blurring.” For some people they don’t think the things that we thought about as explicit bias, for them it’s more implicit. Right?
And so it’s hidden from them even though it’s not hidden from other people. When that starts to happen, I think that can set the stage, or make more permissible, all kinds of things that weren’t permissible before.
And that means everyone can be biased.
And be proud of it.
They don’t have to worry. There’s no tension around it. You can still be …
There’s no shame to it.
You don’t have the shame. You can still be a good person and upstanding and all of this and still act out in this way because now you don’t define this as bias anymore.
Right. Because it just is the way I think. That’s what I think.
Right. So, when you’re studying that, is this a phenomenon that’s new or something that’s historical, goes through cycles where people do this?
It’s one of those things where it’s not new, in the sense that it’s the same social conditions that have gotten us where we are now. If you’re asking whether threat has always been a trigger for bias, I think threat has been a trigger for bias. And the nature of that threat and what’s producing the threat might change across time, but threat makes us a lot more vulnerable to bias than if we’re not threatened. That’s the case for a lot of these triggers too.
Triggers right now.
A lot of people feel that social media, like myself, has weaponized and amplified a lot of what’s going on. There are so many different areas right now in tech that are hitting these problems in different ways. One is artificial intelligence, which I think a lot of people feel is going to solidify the already … putting bad data into this to create more feelings of misrepresenting who is committing crimes and who’s responsible for them. So, putting data into a place where it’s impossible to figure out how these systems come to conclusions. So, AI is one.
One is facial recognition, which I think there are all kinds of controversies around. In the silliest ways, in terms of how pictures are taken, how they design things, to the more serious ones, like identifying people incorrectly. Just recently, Amazon’s Rekognition software identified members of the Congressional Black Caucus as felons, or things like that.
Then there’s the social media itself, which is used by — especially by Donald Trump and some others, as a weapon now, in that regard.
So, let’s talk first about, among these, what do you think when you’re studying bias and the links between race and how people put the link between race and crime, which one do you fear most to be abused? Or do you just fear all of them?
Yeah. I feel like there’s a lot to feel worried about now, actually. It’s interesting, too, because this is also another example of how a bias can kind of migrate to different spaces. We didn’t have an online space before. All the problems that we have out in the world and in society make their way online.
Except it’s worse because it’s anonymous. It’s loud. It’s broadcast.
It is. You’re kind of encouraged to respond to that without thinking and to respond quickly and all of that. That’s another condition under which bias is most likely to be triggered: when you’re forced to make decisions fast. You can’t sit back and think about different …
Yeah. Exactly. Tech encourages that. In tech, speed is king. People are trying to develop tech products that can be used in really straightforward ways, simple ways. You can use it quickly, intuitively. You don’t have to think. Those are the very conditions under which bias is more likely to come alive. That’s a problem.
Let’s talk about each of them. Artificial intelligence, obviously they’re going to input data that is going to create a whole new set of data where crimes might happen, what kind of people are likely to commit crimes, but the whole worry around this is first the designers of these systems are largely white men, essentially. Secondly, the data they’re putting in there is old data that’s generated badly. How do you weed that out, or is it impossible to do so?
I think one of the issues is that it’s not just a lack of ethnic diversity in who’s developing the algorithms; there’s also a lack of diversity in terms of the backgrounds that people have, the disciplines that they come from, their areas of expertise. If you only have people from the tech world developing algorithms that speak to issues of criminal justice or education or housing, all these different areas, or issues in the workplace and so forth, then you only have one way to think about that. You don’t even have the training, actually, to understand the historical realities of the inequality, and you don’t actually conduct research on these issues to try to understand what the sources of that inequality might be. So, you can take bad data, basically, and you can bake …
I call it dirty data.
Yeah. Okay, dirty data. And bake that in and further the problem, rather than alleviate the problem that you’re seeking to address. So, yeah, then it’s an issue.
Then it confirms biases.
It does and it can make that worse, right? Because now …
Because a computer said it.
Right. This is a machine. It’s not a human, so therefore we’re doing everything right, everything’s clean. So that helps people to think, “Okay, well now we really don’t have to think about this issue anymore.” So yeah, those are all real worries.
What about facial recognition? The idea of it being memorialized in a digital way.
First of all, I used to study — I guess I still do — face recognition among humans. With that work, we look at something called the other race effect. This is this idea that people are much better at recognizing faces of their race than they are of recognizing faces of other races. That has been examined in the context of criminal justice for eyewitness testimony …
Yeah. Exactly. You look at people who are on death row, even, who get exonerated. A lot of that, the center of the case, is an eyewitness who thought they saw this person and so forth. You also have an issue with machines doing this. Machines aren’t as good at recognizing faces of members of certain groups as of others. If that’s the case, then there are all kinds of problems that can arise from that. For example, if you have this face recognition technology that law enforcement officers are using …
And using it badly or not using it correctly.
Right. You can stop people and think, this person matches the description and it’s not that person. That has to do with how we develop the technology and what faces that you train on and so forth.
Right. In a review in the Times, it said, “Eberhardt gives striking examples from her research of how racial categories and stereotypes affect perception. In one study, she and her colleagues found that people’s brains were more active when they were looking at a face from someone of their own racial group. This, Eberhardt says, helps explain why people sometimes do poorly at recognizing individuals from other groups, a finding that matters for criminal justice, where mistaken identity is common. In another study, Eberhardt examined the stereotype linking black men and crime. Police officers were asked to look at a computer screen. Half were exposed, subliminally, to crime-related words like ‘apprehend’ and ‘capture,’ shown for a fraction of a second. The other half were exposed to gibberish. The officers then saw two faces side by side, one black, one white. The officers who were primed to think about crime looked more at the black face.”
Explain that, why you did it that way.
Well, because we wanted to really explore how deep these associations go. You can have this association between blackness and crime that is so strong, that’s so powerful, that is directing …
You have to pick them.
Yeah. It’s directing your eyes as to what to look at out in the world. Also, once you look at an object, like I said before, even a blurry image of a gun can become more clear if you’ve just been exposed to an African American face. So, that’s power.
It has to be a gun, right?
It has to be a gun, you’re saying, in other words.
Well, you just …
You just see him.
You see the gun faster, so it lowers your threshold for understanding what’s a gun and what’s not a gun. So it shows us that these associations can influence what we see in these real, literal ways.
Right. Then, the review also notes: “The same stereotypes she discovered affect perceptions of physical movement. Analyzing the data from a New York City police department, Eberhardt learned that black men were far more likely than white men to have been stopped for engaging in what’s called ‘furtive movement,’ suspicious behavior like fidgeting with something at your waistline. Yet among those stopped, whites more often had a weapon.”
Yeah. Actually, we found that only 1 percent of the people stopped for furtive movement actually had a weapon. So it’s a really low hit rate there. A lot of that has to do with just the fact that furtive movement is a subjective standard that they’re using to stop people. It was hard for the department to actually define what furtive movement was and led …
“We just know it when we see it.”
Yeah. That led officers to come up with their own definitions, and those definitions can also differ depending on what area of the city you’re in and who you’re looking at and so forth. Now, they have eliminated that as an option on the form that they complete when they make a stop, so you can no longer stop someone for furtive movement alone.
Those kinds of stops were huge. For example, during the height of stop and frisk in 2010, there were about 600,000 stops. These were all pedestrian stops on the streets of New York City. Over 300,000 of those stops were for furtive movement. It was by far the No. 1 reason people were being stopped on the streets of New York, even though it’s hard to actually define what furtive movement means.
Right. Absolutely. So, when you add sensors and cameras into the situation, it gets even worse, presumably. Cameras are supposed to eventually be able to say what’s furtive, right? For example, when you get on a plane now, they take your picture. They’re looking at your face and figuring out whether you’re going to hijack the plane or just be difficult in coach, or whatever they’re trying to look for. Is that a better thing if the computers are doing it, or not?
Well, it all depends on how it’s being used. The whole thing also, with the body-worn cameras, for example, since you mentioned cameras for police. That can be used in a way where, okay, we’ll have this camera. You can stop people and look at their face and see if that face matches some face of a wanted criminal in a database. So, it could be used in that way. Or, it could be used in a way where you’re trying to improve police community interactions. You’re trying to understand what happens during those interactions when things go awry. What is the cause of that? How can you use language?
For example, with researchers at Stanford, we’ve begun to look at the language that police officers use during routine traffic stops. We use body-worn camera footage to allow us a window into those stops. We found that officers are professional overall. This is in Oakland, California, actually. But there’s a race difference, where they treat white drivers with more respect through their language than black drivers.
The words they use. I saw someone did an art project on this, where they just played the sounds of how officers spoke to African American motorists who were stopped. The words were more diminutive, disrespectful. The words to white motorists were “sir,” “ma’am.” It was really interesting.
Yeah. For black motorists, it was “bro” and “dude,” and those kinds of things didn’t happen with white motorists.
Right. So, these technologies should presumably be able to show people the bias in order to fix it, if they see, “Oh, look what I just did,” but the opposite seems to have happened. No one can agree on what they’re seeing, even when it’s pretty clear. You’d think technology would help. It’s like, “Look what just happened.”
Well, seeing is subjective. That’s what we were talking about before, the whole believing is seeing. It’s not like it’s just … our own histories, our own beliefs influence what we see and how we see. That’s the whole point of those studies we were just talking about where you can prompt police officers to think of “apprehend” and “capture” and “arrest.” That directs their eyes towards black faces or the blurry image of the gun, the fact that an African American face can lead you to see that gun sooner. Those are examples of how this tight association we have between blackness and crime can influence what we see.
Even if there’s absolute proof.
Amazing. I remember the videos that came out of all the … the various videos that were happening of killings of motorists by police. I had at least three people say, “Well, it’s not clear what happened there.” I’m like, “Oh, it’s clear.” It was fascinating. It was sort of the beginning of this entire Trump era. It was really interesting because I was sort of like, “It’s actually right there.”
Right. Well, not just what you see, but again, what you look at, what you’re focused on when you’re looking at one of these viral videos of an officer-involved shooting, for example. What you’re looking at and what you’re looking for and if there’s something ambiguous, how you interpret that and so forth. All that varies.
So, more data doesn’t help people become less biased.
Well, I think data can help, but I don’t think data resolves everything. I think sometimes you need more than that. I think that the same is true for … There’s a push in California, for example, it’s actually a state-wide mandate now that police officers record information about who they’re stopping. The race of the person, the gender, and so on and so forth. The idea is that …
“Look what you’re doing.”
Yeah. That you can have this data now and you can see whether there are racial disparities in stops and racial disparities in searches and arrests and so forth. But people can look at the same data differently. So, collecting that data alone isn’t going to resolve the problem, because some people will look at that data and say, “Well, this shows that there’s bias and that we need police reform.” Then, other people can look at that same data and say, “Well, it doesn’t show bias at all. Police officers are …”
“They’re just doing their jobs.”
“They’re doing their job and they’re stopping the people who are committing crime.” If there were racial disparities in who commits crime, then you would expect that. Again, you have the same data but a huge disagreement about how to see that data. So, you need more than data. I don’t think data’s worthless. It’s an anchor, but it’s not going to completely resolve things for every question.
We’re here with Jennifer Eberhardt. She’s a professor at Stanford University who’s written a book called Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do.
Jennifer, when you’re talking about this, if data doesn’t work, necessarily, or cameras or sensors or tech, how do you solve the problem? It seems like it’s only getting worse because on social media, as you said, it’s twitchy, it creates that, it separates people.
People thought it was going to bring people together, so people would see each other more clearly.
Right. That was the intention for a lot of these tech products. I think that was the intention.
Right, but the intention sometimes is irrelevant. When you have a problem that’s created, you have to focus on what that problem is and how it’s being produced, regardless of the intention. That’s the case for bias too. You can have a bias and you can do something that reflects that bias or you can make some decision that’s infected by bias. The issue isn’t whether you intended it or not, so you have to focus on the impact and have the impact be a motivator for trying to figure out how to correct it.
So how do you do that? Because people, for example, in Silicon Valley with hiring, they’re like, “It just is that way. It’s just there’s more of these people than these people,” and ultimately it leads you to their conclusion, which is that only young white men can invent things, which is untrue. But they’re saying, “Look, this is only the people …” But then you’re like, “Well, you hire them and therefore they’re able to and therefore …” They can’t see the whole systemic problem.
Yeah. I think it’s hard for people to see, especially for Americans to see things at a systemic level, because we’re so socialized to think about ourselves as independent spirits, and we don’t really see the situation that people are in as much as we just kind of see the behavior as a true reflection of the self in the person’s desires and intentions.
“You aren’t rich because you didn’t try hard enough.”
“You didn’t get into college because you aren’t smart enough.” Whatever that idea is, “you didn’t work hard enough” really is at the heart of it.
Right. So, when people see the disparities, oftentimes they interpret those disparities in a way that has to do with how they feel about that group or the associations that are well practiced about that group.
So is it healthy? Because you do see it on social media now, people don’t hold back anymore. They’re saying these things out loud, especially the president and others. And they’re using these tools which naturally allow people to express themselves and the id. The id takes over, I guess, and everything else. How do you solve for that then? Is there a hope to get rid of bias entirely? Or is it just not …
Well, I think you can solve for it in a lot of ways. I mean, like we talked earlier about slowing people down, that people are more likely to express bias when they’re not thinking and they’re just kind of going on these automatic sort of well-rehearsed, well-practiced associations that they have, and so they kind of spring to life and influence how people are making decisions and so forth. So, I’ll give you a good example of this, which is Nextdoor. Right?
Right. This is a company, explain the company for people.
Yeah. So this is an online platform, right? That was created with the intention of bringing people together, of making communities happier and safer, where people could gather and share information.
About their neighborhoods.
About their neighborhoods, and so it’s all kind of neighborhood based. Really great idea, sort of good intentions there. But …
Over the back fence kind of thing. There’s one called Back Fence that was like that too.
Oh, okay. I didn’t know about that one.
So anyways, with Nextdoor, I mean, Nextdoor is, I think, in 95 percent of our neighborhoods in the US right now, and so it spread quite a bit. But they have problems sometimes with racial profiling, right?
They did. They’ve been trying to solve those.
Right. And then how do you solve that?
What happened was, people are allowed to put up videos from their cameras and things like that in a lot of places, and it’s always black people committing crimes. That’s what it degenerated into in a lot of neighborhoods. And then the question was whether people were reporting incorrectly, or whether they were recording too much of those, or whether that was the real thing. And so there was a big debate around that.
Yeah. The debate I heard about was several years ago, the co-founder of Nextdoor, Sarah Leary, reached out to me to ask, “Well, how do we curb racial profiling on the platform?” Because that was an issue and completely … This was the opposite of why they created the platform.
Right, same thing with Airbnb.
Yes, the same thing.
Which is, people didn’t want to rent to people of color, or whether hosts should put pictures of people on it … it went on. It was a similar issue.
Right. Similar issue.
So, with Nextdoor they realized, after talking to me and other people and consulting the literature, that if they wanted to fight racial profiling on the platform that they were going to have to slow people down. Right? But again, we were just talking earlier about how a lot of tech products are built to …
Speed people up.
Yeah, so you don’t have to think.
How do you slow people down in that regard?
So what they decided to do was, it used to be the case that if you saw someone suspicious you could just hit the crime and safety tab and then you could shout out to all your neighbors, “Suspicious person!” Oftentimes, the person who was suspicious was a black man, and in the cases where it was racial profiling, this person was doing nothing. Like it was …
Right. Furtive movements that weren’t very furtive. Which is called “walking” to most people.
Or just the presence in the mostly white community was enough to make that person suspicious. It had nothing to do with any kind of criminal wrongdoing. So that was a problem, right? So, they decided to slow people down by putting up a checklist. It’s a three-item checklist. You have to go through the checklist before you can just shout out to all your neighbors about this suspicious black man.
The first thing on the list is, what is it about the person’s behavior that is making him suspicious? So it can’t be …
It can’t be “black man.” I mean, a lot of times it was just the social category. So your social category can’t make you suspicious. They learned that. And then the second thing was to describe the person in enough detail that you’re actually describing that person’s individual features rather than simply his social category, which often, again, was black male. And then the third and last thing was they gave them a definition of what racial profiling was. And so, a lot of people actually didn’t know what racial profiling was, nor did they understand that they were engaging in it. So just educating people about what it was and what they were doing.
And then so letting them know that this is prohibited on the platform. So that’s getting back to the whole cultural norm thing that we talked about earlier. You’re setting the cultural norms where you’re saying that this is not permissible. They were trying to modify … You’ve seen those signs, “If you see something, say something.” They were trying to modify that so it’s, “If you see something suspicious, say something specific.” That’s what they were going for.
So using this method and trying to slow people down with the checklist, they were able to curb profiling on the website by over 75 percent.
Which makes for a better experience, correct, for people.
Yeah. Yeah, for sure. Because what would happen in those situations is that you would get a lot of incivility, where people end up in shouting matches or something like that. And you’re trying to create a product where you’re bringing neighbors together and not polarizing things.
It’s an interesting thing on Ring, too. Same thing is going on on Ring. It’s just like a video cavalcade of racism. I just, I can’t, it’s amazing when you watch it.
Yeah. I think people feel like tech is going to solve a lot of our problems and make things easier. In some ways it can, but in some ways it raises all these other issues. The bias migrates to these other spaces, and especially if we haven’t really dealt with the issue — and we don’t want to talk about race, we don’t want to talk about bias — it can emerge in these other spaces.
So first just slow down. What else?
We talked about furtive movements, so not using subjective standards to evaluate the behavior of other people. This idea of what’s furtive and all that was subjective and it actually led to huge, huge racial disparities in who got stopped by the police. They decided to take that off the form so that you can’t stop someone based on furtive movement alone.
“Just looks sneaky.”
Yeah, and that’s a good thing, right? So you want to evaluate others and evaluate yourself, too, on a basis of objective standards rather than subjective standards.
Such as, changing it to what? Rather than furtive, what other reasons?
So, if they have a gun and they’re pointing it at someone.
That would be a sign.
So there are other criteria officers can use to determine whether somebody is worthy of stopping.
Okay. So that, and what else?
I mean, I think, so we got speed, we got subjective standards, we got … Also, I think accountability is a big issue. Bias is more likely to happen when you don’t have accountability and you don’t have metrics to actually measure some of the things that you care about. So for example, I think we started out talking about just how there’s not much diversity in tech, at these tech companies at all. We know that partly because a lot of the tech companies have started to keep track of what they’re doing. That accountability, using those metrics so that they’re transparent about what’s happening, is a good thing, because it allows us to see how bad their problem is. It allows them to create goals for themselves for where they want to be.
They’re just telling us that.
Well, without the data you have nothing, right? You don’t really know. It’s hard to hold yourself accountable to something that you can’t see or you don’t want to see.
Well, you can see it, you just don’t want to say …
And you don’t want to focus on it. But having those metrics allows not just them to focus on it, but us as a society to focus on it and think about sort of how can we …
Okay, so accountability?
Yeah. So that’s, I think, huge. I mean, there are a lot of these. I think increasing positive contact across groups, too, is a big one. That’s something from social psychology we’ve known for many decades now, that not just bringing people into contact with each other, but establishing the right kind of contact, actually can reduce bias.
What’s the difference between the wrong [kind of contact] …?
Well, if you bring people together and they’re two different groups, say, and they’re of unequal status, that’s not the best contact. So, sometimes if it’s unequal status or if you know it’s competitive or if people in leadership positions in that context don’t condone the contact and all of that, that’s negative, that’s bad contact. In those situations you can actually make bias worse.
“I knew those people were like that.”
Yeah. Yeah. Now you have proof that they’re actually, exactly what you thought they were and so forth. So, that can backfire. It has to be equal status contact where it’s people working together cooperatively for common goals and it has to be contact that’s sanctioned by leadership. There’s a long sort of laundry list of conditions that make contact either bad or good.
Right. And then finally, when you think about all these things, it seems as if we’re past a point of no return, but that’s probably not the case because we’re right in the middle of it. Right? It feels like that. And it does feel like, especially, tech creates that situation of people being … There’s no analog contact as much. There’s digital contact and so it’s very hard to … Again, it was supposed to bring you together, it drives you apart, because there’s no physical contact, there’s no looking at each other.
Right. The anonymity issue is huge, and we’ve known about that for a while as researchers as well.
Anytime it’s anonymous, it’s ugly. Yeah. It brings out your worst self, actually. Why is that?
Well, people are responding to social norms. They are responding to how they’re seen. They’re responding to the image of themselves and whether they’re going to be liked or shunned and all of this. You don’t have those concerns if you’re anonymous. People don’t know who you are, and so you’re more likely to …
Not respond to shame or something.
Which is interesting, although there’s a whole culture of saying that you shouldn’t shame people online. I’m like, “Well, maybe you should.”
But will the shame even work if it’s anonymous?
That’s right. If it’s anonymous, yeah, it’s really interesting. So lastly, when you think about this idea of bias … What are you studying next?
So I’m trying to look at ways that we can use science to help us to understand bias and to help us to mitigate it.
More drugs. No, I’m kidding. Everybody’s on LSD, that’s the answer from Silicon Valley, in case you’re interested.
Sure. It gets rid of the ego, and then we’re all id. We’re all id, I guess. But what do you think? Do you have any clue right now of how you use science to do that?
Yeah, I mean, I think we’ve talked about some of the clues, right? We talked about figuring out ways to slow people down so that they don’t do these things, and we can do that as individuals ourselves or we can slow people down in the social environments that we control. So Nextdoor, right?
It worked, yeah.
It’s in 95 percent of our communities, and now they have figured out a way to slow people down so that there’s less profiling. So that’s huge, right? And I feel like companies have a huge role to play here. I think they have a responsibility in all of this because of the power they wield. I mean, they cannot just affect one life but many, many lives, millions of lives. That checklist changed the mindset of all of these people now who are on Nextdoor. I think recognizing that power and marshaling that power is a good thing.
They don’t like to feel like they have impact when they have enormous impact, which I think is difficult. One of the things I’m obsessed with is, you know when you fill in a Google box? If you type “black people are …” “women are …” Try it some time.
You won’t like it.
There you go. It’s fascinating, the results. And of course they all say, “Well that’s what people are searching for.” And you’re like, “Yeah, but you can stop them from …”
You know what I mean? You can suggest other things or don’t suggest …
You’re kind of aiding and abetting that, right?
Or don’t say anything at all. Don’t let anything be filled in. Saying that’s the way people are is sort of like, “Well, people do terrible things to each other.” It doesn’t mean we can’t mitigate it.
And you’re giving people a tool to express that.
And so do you have some responsibility in that? I mean, that’s …
I know this is gonna sound like a really crazy question, but is there any good use of bias? Is it ever good? Categorization I get, being able to … This is a car, this is a lion, that’s a cat. That’s okay. These are all good things. Don’t get eaten.
Yeah, and there’s bias that comes with that, with the cats and the lions and all that, because once you categorize them, you have some idea about whether they’re dangerous.
Dangerous or you should pet them.
Yeah, exactly. And so, in that sense you can say stereotyping has a function, right? Once you put people in a category, you develop beliefs about the people, or the animals, who are in that category.
It’s easier with animals.
Yeah, so that helps you, right? It makes things more predictable for you. People talk about stereotyping as an energy-saving device where you don’t have to think of everything that you’re confronted with fresh. You can think about what kinds of things go with the animals, say, that are in that category. And so it saves us time and saves our brain resources, it makes things more predictable. So there’s a function to it.
And we have all kinds of biases. We were spending a lot of time here talking about racial bias, right? But there’s hindsight bias and there’s confirmation bias. There are all kinds of biases that we are …
Wait, what’s hindsight bias?
It’s just basically when you’re thinking about, after something has happened or occurred …
“I knew that would happen.”
Yeah, you knew that and you think, “Oh, it should have been clear from the beginning,” and so forth. And then confirmation bias is you have a theory about how …
And you look for things.
Yeah, and you look for certain things and not others. And even when you’re confronted with something that’s inconsistent with what you thought, you ignore that, you minimize it. So there are all these biases that we have, including this racial bias that we want to be aware of.
Right. Thank you so much, Professor Eberhardt. It’s really good to talk to you. I think the robots should just take over at this point. They’ll be biased in a whole different kind of way, but at least it’ll seem fair.
Author: Eric Johnson