Social psychologists were shocked when the fraud committed by Tilburg-based professor Diederik Stapel was revealed in 2011. For years he had been making up the results of all sorts of experiments. How had he been able to get away with it for so many years?
In other fields of study, too, the results of scientific research are often the subject of debate, simply because some academics tend to exaggerate their findings. After all, their articles will not receive much attention if their findings are unspectacular.
Replicating studies
How can we tackle this kind of pseudo-science? John Mackenbach, a Professor of Public Health who also serves as the Chairman of the committee established by KNAW for this very purpose, feels there is only one solution. “Academics must repeat each other’s experiments more often. This should be a normal course of action.”
Mackenbach’s committee presented its recommendation last Monday. The committee feels that non-reproducible research obstructs academic progress. If scientific errors are not detected, others may be negatively impacted. For instance, patients may end up receiving the wrong therapy.
Mackenbach’s recommendation was clear: replicate studies more often and earmark more money for doing so. Moreover, in order to make it easier for others to replicate studies, academics should provide much more detailed information on their research methods. “Now that journals are all published online, there is plenty of room for such details.”
Since replicated studies are by their very nature not exactly innovative, do they require a different type of academic?
“No, we should all be replicating studies. It is high-quality research, pure and simple. It requires the same competencies and equipment as the original study. In other words, it must be carried out by the best researchers. The only difference is that, as you put it, it is not exactly innovative.”
Should replicating studies be regarded as a kind of scientific conscription?
“That is not how we put it in our recommendation. But perhaps all new PhD students should, as part of their training, replicate previously conducted studies. Not only will this help them master their profession, but it will also allow them to contribute to the reliability of their own discipline.”
Speaking of replication, you are not the first to suggest this approach. Even Nobel Prize winners have advocated it. Why is it so hard for people to accept this suggestion?
“Academics are interested in novelty, and obviously, anything you repeat is not new. It is a different kind of work. But there are objective barriers, as well. Journals will be less interested, so your study will not achieve the same status. Researchers receive more praise for innovative research, which in turn helps them obtain funding for new studies, and so on. We must seek to remove these barriers.”
How?
“There are all sorts of major international initiatives. Entire disciplines are checking the reproducibility of their study results. Journals are moving in the same direction. Some journals are now promising that they will publish studies replicating their earlier articles, regardless of the outcome. We must also look into this in the Netherlands. NWO, the organisation that funds research, has already established a programme for replication studies, but it does not have much of a budget. It is nowhere near enough to tackle the issue.”
Which fields of study are particularly prone to this?
“Of course we understand that disciplines such as psychology are at an increased risk, because the observations there depend more heavily on people. In fields such as physics, observations are generally made by means of equipment, which is why you will sometimes hear physicists say, ‘These things don’t happen in our field.’ But how do you know for sure if you don’t check? One of the key recommendations presented in our report is that academics must systematically study the reproducibility of studies carried out in entire disciplines or sub-disciplines.”
Do you think academics regard replication studies as motions of no confidence?
“I’m not ruling out that some academics may feel this way, but that’s not what I’m hearing from the academics to whom I’ve spoken. They are interested in this sort of thing, as they should be.”
How much money should be earmarked for this? One percent of the research budget, or rather ten percent?
“Somewhere in between, I think. But first we must gain a better understanding of the scale of the problem.”