It can be hard for female scientists to get promoted. This study may help.
This piece was first published in August 2019 and has been lightly updated.
Quick, close your eyes and picture a scientist.
Did you just picture a man?
There’s a pretty good chance you did. Many of us unconsciously associate the concept “science” with the concept “male,” even if we would consciously reject that association. Unfortunately, the “science = male” stereotype is making it harder for female scientists to get promotions they deserve. Yes, even in 2020.
A two-year study published in Nature Human Behaviour examined how 40 scientific evaluation committees decided which researchers should get promoted to plum positions. It found that most scientists on the committees — whether they were men or women, and whether they worked in particle physics or political science — unconsciously associated science with men.
That implicit bias affected their promotion decisions, but only when committee members didn't consciously believe there were external barriers (like discrimination) holding back women in science. Interestingly, among members who acknowledged that such barriers exist, the implicit bias did not influence their decisions.
Basically, if someone can say, “Yes, gender bias exists — women really do get discriminated against on the basis of gender,” the simple fact of acknowledging that can undercut their unconscious tendency to discriminate against women. Aware that such bias can exist, they’ll seek to counteract it.
The findings are both disheartening and potentially very heartening. Although they show that both male and female scientists still harbor gender stereotypes that are hampering the careers of brilliant women, they also show that these stereotypes can be combated.
“We highlight the need for efforts to educate committees and governing bodies about the existence and consequences of these biases,” the authors write. “Recognizing the role that such biases can play might enable committees to set them aside at the time of final decisions, thereby facilitating gender equity and diversity.”
The authors’ suggestion here — that educating scientists about gender biases may cause the biases to lose their swaying power — needs further study. Research into implicit bias and how to effectively counter it has become a controversial and heated field in recent years, not least because it’s often racial bias that’s come in for scrutiny.
Implicit bias, explained
Before we dive into the methods and findings of the new gender bias study, it’ll help to get a bit more grounding in the field of implicit bias.
Implicit biases are associations that get activated automatically in our minds and can lead us to discriminate against people we subconsciously associate with negative traits (like aggressiveness or laziness) even though we have no conscious intention to do so.
These days, plenty of companies are aware that many of us harbor implicit biases against different racial groups, and they’re trying to “train” employees out of them. Last year, after two African American men were arrested at a Starbucks in Philadelphia just for waiting around for a business associate, in an incident that went viral, the chain’s 8,000-plus US stores closed for an afternoon so employees could attend an anti-bias training to “address implicit bias.”
But as my colleague Julia Belluz has explained, anti-bias trainings typically don’t work. In fact, they can sometimes backfire by making people think more about stereotypes. A better approach may be to make the stores more racially integrated and put more minorities in leadership positions, because there’s evidence that we become less prejudiced when we interact with members of other groups. That’s known as the contact hypothesis.
A common way to measure people’s subconscious bias is the implicit association test (IAT), a social psychology test for detecting unconscious associations between different concepts. For years, this computer-based test was popular: people all over the world used it to find out whether they were, perhaps unwittingly, biased against certain racial groups. The so-called “racist test” appeared on TV shows from Oprah to King of the Hill.
Just one problem: The test can’t really predict your individual bias, as Vox’s German Lopez has reported. He took it several times and got several different answers — he was either slightly biased against black people, slightly biased against white people, or not biased at all, depending on the day. Here’s what he later discovered:
It turns out the IAT might not tell individuals much about their individual bias. According to a growing body of research and the researchers who created the test and maintain it at the Project Implicit website, the IAT is not good for predicting individual biases based on just one test. It requires a collection — an aggregate — of tests before it can really make any sort of conclusions.
For individuals, this means they would have to take the test many times — maybe dozens of times — and average out the results to get a clear indication of their bias and potentially how that bias guides behavior. For a broader population, it’s similar: You’d have to collect the results of individuals in the population and average those out to get an idea of the overall population’s bias and potential behavior.
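That averaging logic is easy to see in a toy simulation. The sketch below uses made-up numbers (a hypothetical “true” bias score and noise level, not real IAT data) purely to illustrate why one sitting of a noisy test can point in either direction, while an average over many sittings settles near the underlying value.

```python
import random

random.seed(0)

# Hypothetical illustration, not real IAT data: suppose a person's
# underlying bias score is 0.3, but each sitting of the test adds
# measurement noise that is large relative to that score.
TRUE_SCORE = 0.3
NOISE_SD = 0.5

def single_sitting():
    # One noisy administration of the test
    return TRUE_SCORE + random.gauss(0, NOISE_SD)

# A single result can easily land on the "wrong side" of zero,
# which is why one-off scores flip between "biased" and "unbiased".
one_off = single_sitting()

# Averaging many sittings cancels much of the noise and recovers
# a stable estimate close to the underlying score.
sittings = [single_sitting() for _ in range(100)]
estimate = sum(sittings) / len(sittings)

print(f"one sitting:    {one_off:+.2f}")
print(f"average of 100: {estimate:+.2f}")
```

The same reasoning applies at the population level: averaging one result from each of many people washes out individual measurement noise in the same way.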
Even in aggregate, some researchers doubt how reliable the IAT really is. Others insist that it does yield valuable data, just as testing your blood pressure repeatedly over several days and then averaging the results yields a solid reading. For now, the IAT might be the best tool we’ve got for measuring subconscious bias, at least in aggregate.
The debate over implicit gender bias in science — and how the new study worked
Over the past five years or so, we’ve seen several studies and lots of discussion about how and why women are underrepresented in STEM fields. People have started to take action. They’ve compiled lists and databases full of women experts on every subject imaginable, so that no one can use the “I just couldn’t find a qualified woman” excuse. And in June, National Institutes of Health director Francis Collins vowed that he would never again appear on an all-male panel.
But some scientists still argue that “academic science isn’t sexist.”
In France, a group of behavioral scientists decided to use a real-world context — the country’s annual competition for promotion to elite research positions — to shed light on the role that implicit bias plays in deciding the professional fate of female scientists.
Every member of the National Committee for Scientific Research, a collective body that includes 40 specialized committees and that plays a major role in French science, was invited to take part in a study. They were told it would examine whether committees’ promotion decisions were biased against women. Half of the members — 414 scientists — participated.
These scientists were asked to complete an IAT measuring associations between the concepts “male,” “female,” “science,” and “liberal arts.” Words representing each of the concepts flashed on a computer screen and the scientists had to categorize them very quickly — too quickly to allow for conscious deliberation. (As noted above, the IAT is controversial and should be taken with a grain of salt, though in this instance, it was used to come up with an aggregate measure of a committee’s level of unconscious bias.)
The scientists were also asked to fill out questionnaires about why they think women are underrepresented in STEM fields: Is it because they’re discriminated against? Because familial duties burden their time? Because they’re unwilling to choose scientific careers? Because men and women don’t have the same scientific abilities?
While the IAT elicited the participants’ unconscious beliefs, the questionnaire elicited their conscious beliefs.
Crucially, these tests were only administered in year one of the two-year study. In year two, participants weren’t given any reminder that there was still a study underway. They just went about their usual business, deciding who should be promoted and who shouldn’t.
The study authors analyzed the change in promotion decisions from year one to year two:
There was a numerical trend for committees with stronger implicit biases, paired with a lower belief that barriers are a problem, to initially favor women in their selection decisions made at year 1 when the study of gender bias was first announced. These committees showed the opposite numerical trend at year 2 when not under scrutiny.
It seems that a year after the study was announced and the scientists took the tests, the study was less salient to them. They were less aware that implicit biases might be influencing their decisions and that people were watching them to figure out just how much they were in thrall to those biases. Under these conditions, they were less likely to choose accomplished women for elite research positions.
The findings imply that if we want to achieve parity — not just one year, but every year — we need to do more than educate scientists about gender bias. We also need to make scientists feel that they’re being regularly scrutinized and that they’ll be held accountable for biased decisions.
Author: Sigal Samuel