Replication

I heard (via Sean Carroll) about this piece in Science headlined “Replication Effort Provokes Praise—And ‘Bullying’ Charges.” It’s about efforts to replicate published results in certain areas of psychology.

In general, I think that publication bias and dodgy statistics are real problems in science, so I’d bet that lots of results, particularly those that are called “significant” because they clear the ridiculously weak threshold of 5%, are wrong. Apparently lots of people, particularly in certain parts of psychology, are worried about this. I think it’s great for people to try to replicate past results and find out. (Medical researchers are on the case too, particularly John Ioannidis, who claims that “most published research findings are false.”)
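The arithmetic behind that worry is easy to sketch with Bayes' rule. As a hedged illustration (the prior and power numbers below are invented for the example; none come from the article or from Ioannidis's paper), here is how a 5% threshold can still leave most "significant" findings false:

```python
def prob_true_given_significant(prior, power, alpha=0.05):
    """P(effect is real | result is 'significant'), by Bayes' rule.

    prior: fraction of tested hypotheses that are actually true
    power: chance a study detects a real effect
    alpha: significance threshold (chance a null effect clears it)
    """
    true_pos = prior * power          # real effects correctly detected
    false_pos = (1 - prior) * alpha   # null effects that clear p < 0.05
    return true_pos / (true_pos + false_pos)

# Illustrative numbers: if only 10% of tested hypotheses are true and
# studies have 50% power, a "significant" result is real only about
# half the time.
ppv = prob_true_given_significant(prior=0.10, power=0.50)
print(round(ppv, 2))  # 0.53
```

Publication bias makes this worse, since the false positives are disproportionately the results that get written up.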

The most striking part of the Science piece is the “bullying” claim. It seems ridiculous on its face for a scientist to complain about other people trying to replicate their results. Isn’t that what science is all about? But I can understand in part what they’re worrying about. You can easily imagine someone trying to replicate your work, doing something wrong (or perhaps just different from what you did), and then publicly shaming you because your results couldn’t be replicated. For instance,

Schnall [the original researcher] contends that Donnellan’s effort [to replicate Schnall’s results] was flawed by a “ceiling effect” that, essentially, discounted subjects’ most severe moral sentiments. “We tried a number of strategies to deal with her ceiling effect concern,” Donnellan counters, “but it did not change the conclusions.” Donnellan and his supporters say that Schnall simply tested too few people to avoid a false positive result. (A colleague of Schnall’s, Oliver Genschow, a psychologist at Ghent University in Belgium, told Science in an e-mail that he has successfully replicated Schnall’s study and plans to publish it.)

The solution, of course, is for Donnellan to describe clearly what he did and how it differs from Schnall’s work. The readers can then decide (using Bayesian reasoning, or as I like to call it, “reasoning”) whether those differences matter and hence how much to discount the original work.
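That discounting can be made concrete. Here's a minimal sketch (all the probabilities are made up for illustration; nothing here quantifies the actual Schnall-Donnellan dispute) of how your belief in the original result should shift after a failed replication, depending on how likely you think the replication itself was flawed:

```python
def posterior_after_failed_replication(prior, p_fail_if_true, p_fail_if_false):
    """P(original effect is real | replication failed), by Bayes' rule.

    prior: your belief in the effect before seeing the replication
    p_fail_if_true: chance the replication fails even though the effect
        is real (large if you think the replication was flawed)
    p_fail_if_false: chance the replication fails when the effect isn't real
    """
    num = prior * p_fail_if_true
    den = num + (1 - prior) * p_fail_if_false
    return num / den

# Start at 50-50. If the replication was careful (fails only 20% of the
# time on a real effect), the failure is strong evidence against:
print(round(posterior_after_failed_replication(0.5, 0.20, 0.95), 2))
# If you suspect a methodological problem like the claimed "ceiling
# effect" (fails 60% of the time even on real effects), the same
# failure moves you much less:
print(round(posterior_after_failed_replication(0.5, 0.60, 0.95), 2))
```

The point of Donnellan describing exactly what he did is that it lets each reader set `p_fail_if_true` for themselves.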

The piece quotes Daniel Kahneman giving an utterly sane point of view:

To reduce professional damage, Kahneman calls for a “replication etiquette,” which he describes in a commentary published with the replications in Social Psychology. For example, he says, “the original authors of papers should be actively involved in replication efforts” and “a demonstrable good-faith effort to achieve the collaboration of the original authors should be a requirement for publishing replications.”

If the two groups work in good faith to do a good replication, it’ll make the final results easier to interpret. If the original group refuses to work with people who are trying to replicate their results, well, everyone is entitled to take that into account when performing (Bayesian) reasoning about whether to believe the original results.


Published by

Ted Bunn

I am chair of the physics department at the University of Richmond. In addition to teaching a variety of undergraduate physics courses, I work on a variety of research projects in cosmology, the study of the origin, structure, and evolution of the Universe. University of Richmond undergraduates are involved in all aspects of this research. If you want to know more about my research, ask me!