After my last post, several friends drew my attention to useful resources. This is a subject I’m strongly interested in, but I’m definitely not an expert, so I’m grateful for the additional information.
My friend Rig Hernandez is director of Project OXIDE, which runs workshops “to reduce inequities that have historically led to disproportionate diversity representation on academic faculties.” Their web site is still under development, but there’s already a Diversity Portal, which contains what appear to be useful links to more information. Rig also reminded me about Project Implicit, a bunch of studies attempting to measure implicit (unconscious) biases.
I want to expand a bit on one thing I mentioned in my last post. The big difference between the recent study and a lot of the previous work I’ve heard about is that this was a controlled study: rather than examining real-world data, in which there are all kinds of hard-to-control variables, the researchers made sure that the applications people reviewed were identical in every way except the applicant’s gender.
I certainly don’t claim that the real-world studies aren’t worthwhile. I think that they can provide valuable insights. But there’s one thing they can never do: they can’t distinguish between the hypothesis that invidious discrimination is at work and the hypothesis that the dearth of women in science is due to actual differences between men and women (whether biological or cultural). If (unlike me) you’re partial to the Larry Summers hypothesis, for instance, you’ll be able to interpret the results of the real-world studies in that light. But you can’t interpret the more recent study in that way: since the applications were identical apart from the applicant’s gender, any difference in the ratings can only have come from the evaluators, not from real differences between the applicants.
If you think that gender bias is a problem (which I do) and want to advocate for policy changes to fix it (which I do), then you need to convince people who don’t already agree with you. Those people can much more easily ignore the results of studies with all sorts of uncontrolled variables. That’s why I think the new study is especially worth trumpeting.
For comparison, consider a study that examined recommendation letters written for actual faculty job applicants. This study showed that letter-writers used different sorts of words to characterize male and female applicants: women tended to be described using “communal” words, men using “agentic” words. Moreover, there was a negative correlation between the use of communal words and the perceived hireability of the applicant.
Leave aside for now any correlation-causation qualms you might have, and suppose that this study showed that the use of communal words caused female applicants to fare more poorly. You still can’t tell whether that’s because of implicit bias on the part of the letter writers or because the female applicants actually are, on average, more “communal” (whatever that means).
For what it’s worth, in this case I happen to find the implicit-bias hypothesis very plausible, but there’s no way to know for sure from this study. Scientists tend to be a skeptical bunch, so if you’re trying to convince a scientist who’s not already a believer that implicit bias is a problem, this sort of study is probably not going to do it.
(One thing you should certainly take away from that study: if you’re writing a recommendation letter for a female candidate, and you want her to get the job, pay attention to your use of communal and agentic words.)