Carl Zimmer has a column in Sunday’s NY Times about the ways in which science does and does not succeed in self-correcting when incorrect results get published:
Scientists can certainly point with pride to many self-corrections, but science is not like an iPhone; it does not instantly auto-correct. As a series of controversies over the past few months have demonstrated, science fixes its mistakes more slowly, more fitfully and with more difficulty than Sagan's words would suggest. Science runs forward better than it does backward.
One of the controversies inspiring this column is that of the scientists who failed to replicate a controversial clairvoyance study and were unable to publish their results, because the journals only publish novel work, not straight-up replications. I offered my opinion about this a while ago: I think that this is a bad journal policy, and that in general attempts to replicate and check past work should be valued at least a bit more than they currently are.
On the other hand, it’s possible to make way, way too much of this problem. The mere fact that people aren’t constantly doing and publishing precise replications of previous work doesn’t mean that previous work isn’t checked. I don’t know much about other branches of science, but in astrophysics the usual way this happens is that, when someone wants to check a piece of work, they try to do a study that goes beyond the earlier work, trying to replicate it in passing but breaking some new ground as well. (A friend of mine pointed this out to me off-line after my previous post. I knew that this was true, but in hindsight I should have stressed its importance in that post.)
For instance, one group claims to have shown that the fine-structure constant has varied over time, then another group does an analysis, arguably more sensitive than the original, which sees no such effect. If the second group can successfully argue that their method is better, and would have found the originally-claimed effect had that effect been real, then they have in effect refuted that claim.
(I should add that in this particular example I don’t have enough expertise to say that the second group’s analysis really is so much better as to supersede the original; this is just meant as an example of the process.)
The main problem with Zimmer’s article is a silly emphasis on the unimportant procedural matter of whether an incorrect claim in a published paper has been formally retracted in print. This is sort of a refrain of the article:
For now, the original paper has not been retracted; the results still stand.
…
[T]he journal has a longstanding policy of not publishing replication studies … As a result, the original study stands.
…
Ms. Mikovits declared that a retraction would be “premature” … Once again, the result still stands.
It’s completely understandable that a journalist would place a high value on formal, published retraction of error, but as far as the actual progress of science is concerned it’s a red herring. Incorrect results get published all the time and are rarely formally retracted. What happens instead is that the scientific community goes on, learns new things, and eventually realizes that the original result was wrong. At that point, it just dies away from lack of attention.
One example that comes to mind is the Valentine’s Day monopole. This was a detection of a magnetic monopole, which would be a Nobel-Prize-worthy discovery if verified. It was never verified. As far as I know, the original paper wasn’t formally retracted, and I’m not even sure whether it’s known what went wrong with the original experiment, but it doesn’t matter: subsequent work has shown convincingly that this experiment was in error, and nobody believes it anymore.
(By the way, this is also an illustration of the fact that getting something wrong doesn’t mean you’re a bad scientist. This was an excellent experiment done by an excellent physicist. If you’re going to push to do new and important science, you’re going to run the risk of getting things wrong.)
Of course, there is some value in a formal retraction, especially in cases of actual misconduct. And when an incorrect result is widely publicized in the general press, it would be nice if the fact that it was wrong were also widely publicized. But a retraction in the original scholarly journal would be of precisely no help with the latter problem — the media outlets that widely touted the original result would not do the same for the retraction.
I remember once reading an erratum whose acknowledgments thanked a colleague for trying to replicate the results. [Marvels at the wonders of the internet, which I learned of decades ago in essays by Arthur C. Clarke, except that he didn’t call it the internet.] Yes, here it is: http://adsabs.harvard.edu/abs/1996A%26A...313.1028K