Impact factors

My last post reminded me of another post by Peter Coles that I meant to link to. This one’s about journal impact factors. For those who don’t know, the impact factor is a statistic meant to assess the quality or importance of a scholarly journal. It’s essentially the average number of citations garnered by each article published in that journal.
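For the curious, the standard recipe goes roughly like this: a journal’s impact factor for a given year is the number of citations received that year by the articles the journal published in the previous two years, divided by the number of those articles. Here’s a toy calculation with entirely made-up numbers:

```python
# Toy sketch of the standard two-year impact factor:
# IF(journal, year Y) = citations received in year Y by articles the journal
# published in years Y-1 and Y-2, divided by the number of such articles.
# All numbers below are invented for illustration.

articles_published = 250     # citable items published in the two-year window
citations_received = 1900    # citations to those items during year Y

impact_factor = citations_received / articles_published
print(f"impact factor = {impact_factor:.3f}")   # 7.600
```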

It’s not clear whether impact factors are a good way of evaluating the quality of journals. The most convincing argument against them is that citation counts are dominated by a very small number of articles, so the mean is not a very robust measure of “typical” quality. But even if the impact factor is a good measure of journal quality, it’s clearly not a good measure of the quality of any given article. Who cares how many citations the other articles published along with my article got? What matters is how my article did. Or as Peter put it,

The idea is that if you publish a paper in a journal with a large [journal impact factor] then it’s in among a number of papers that are highly cited and therefore presumably high quality. Using a form of Proof by Association, your paper must therefore be excellent too, hanging around with tall people being a tried-and-tested way of becoming tall.
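To put some illustrative numbers behind the first objection (that a journal’s mean is dominated by a handful of heavily cited papers), here is a quick toy simulation. The lognormal citation distribution and its parameters are assumptions chosen purely for illustration, not real data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend a journal publishes 500 papers whose citation counts follow a
# heavy-tailed lognormal distribution (an assumption made for illustration only).
citations = np.round(rng.lognormal(mean=1.0, sigma=1.2, size=500)).astype(int)

top_decile = np.sort(citations)[-50:]   # the 50 most-cited papers

print("mean citations  :", citations.mean())      # roughly what the impact factor measures
print("median citations:", np.median(citations))  # what a 'typical' paper actually gets
print("share of all citations earned by the top 10% of papers:",
      top_decile.sum() / citations.sum())
```

In runs like this the mean comes out around twice the median, and the top tenth of the papers accounts for something like half of all the citations. That is the sense in which the mean is not a robust summary of a journal’s contents.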

But people often do use impact factors in precisely the way Peter describes. I did it myself when I came up for tenure: I included information about the impact factors of the various journals I had published in, in order to convince my evaluators that my work was important. (I also included information about how often my own work had been cited, which is clearly more relevant.)

Peter’s post is based on another blog post by Stephen Curry, which ends with a rousing peroration:

  • If you include journal impact factors in the list of publications in your cv, you are statistically illiterate.
  • If you are judging grant or promotion applications and find yourself scanning the applicant’s publications, checking off the impact factors, you are statistically illiterate.
  • If you publish a journal that trumpets its impact factor in adverts or emails, you are statistically illiterate. (If you trumpet that impact factor to three decimal places, there is little hope for you.)
  • If you see someone else using impact factors and make no attempt at correction, you connive at statistical illiteracy.

I referred to impact factors in my tenure portfolio despite knowing that the information was of dubious relevance, because I thought that it would impress some of my evaluators (and even that they might think I was hiding something if I didn’t mention them). Under the circumstances, I plead innocent to statistical illiteracy, but nolo contendere to a small degree of cynicism.

To play devil’s advocate, here is the best argument I can think of for using impact factors to judge individual articles: If an article was published quite recently, it’s too soon to count citations for that article. In that case, the journal impact factor provides a way of predicting the impact of that article.

The problem with this is that the impact factor is an incredibly noisy predictor, since there’s a huge variation in citation rates for articles even within a single journal (let alone across journals and disciplines). If you’re on a tenure and promotion committee, and you’re holding the future of someone’s career in your hands, it would be outrageously irresponsible to base your decision on such weak evidence. If you as an evaluator don’t have better ways of judging the quality of a piece of work, you’d damn well better find a way.
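To put a rough number on “noisy,” here is another toy simulation, again using made-up lognormal distributions: two hypothetical journals whose impact factors differ by a factor of three, but whose per-article citation distributions overlap heavily.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Two hypothetical journals with the same (assumed) lognormal spread in
# per-article citations but very different means, i.e. impact factors.
low_if  = rng.lognormal(mean=0.4, sigma=1.2, size=n)   # mean around 3
high_if = rng.lognormal(mean=1.5, sigma=1.2, size=n)   # mean around 9

print("journal means (the 'impact factors'):",
      round(low_if.mean(), 1), round(high_if.mean(), 1))

# How often does a randomly chosen article from the low-impact journal
# end up with more citations than one from the high-impact journal?
print("P(low-IF article out-cites high-IF article):", (low_if > high_if).mean())
```

In this toy model a paper from the lower-impact journal out-cites one from the higher-impact journal roughly a quarter of the time, despite the factor-of-three difference in impact factor. That is the sense in which the impact factor tells you very little about how any individual article will fare.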


2 thoughts on “Impact factors”

  1. At best, one could say that a journal with a high impact factor will reject a higher fraction of articles than most journals. As such, those which do get through are probably of higher than average quality. Such a situation is self-sustaining: even if impact factors were assigned randomly, those with high impact factors would be able to publish higher-quality stuff, which would attract more citations, thus sustaining the impact factor.

    However, I seem to remember an interview with a Nature editor who said that he liked to publish interesting stuff, some of which might not turn out to be right. Certainly I’ve seen a few strong claims in Nature which turned out to be wrong and which probably wouldn’t have survived refereeing elsewhere.
