Rolf Zwaan is Professor of Biological and Cognitive Psychology. Among other things, he conducts research on language, communication and knowledge in the digital age, and on the replication crisis in science.
Since 1 January, tens of thousands of scientific articles have appeared on the subject of COVID-19. According to critics, this makes it impossible to keep up to date. This applies to the Outbreak Management Team and even more so to an individual intensive care physician. Is too much being published?
“Yes. And not just because it is impossible to stay up to date. An even bigger problem is that so much is published that the quality of the articles can no longer be guaranteed. One of the most important foundations of academic research is the peer review system: scientists scrutinise the work of colleagues because they also want their own work to be peer-reviewed. I myself receive several peer review requests every week, hundreds a year, even though reviewing is not officially part of my job. I get paid to teach, do research and manage a group of scientists. So I have to refuse a large proportion of these requests. As a result, articles either go unreviewed or end up with reviewers who are less well versed in that particular subject. That has an effect on quality.”
Why is so much being published?
“A whole industry has sprung up around publications. Contributors include doctors training to become specialists, who have to publish four or five articles over the course of their PhD programme, and people in permanent positions who are judged on how many of their articles appear in print. There is a drive towards quantity. This is not new, by the way. When I started out in science thirty years ago, people were already talking about the excessive pressure to publish.”
This publication pressure might also lead to sub-standard science. Why is that?
“There is a lot of pressure on scientists to be seen as relevant and interesting and to have an impact. As a result, scientists often feel compelled to inflate results. Look, science is not a chain of exciting discoveries. Usually it comes down to a cautious initial finding that says very little and is surrounded by uncertainties. Yet if you present your paper like that, no one wants it. Plenty of journals ask: ‘What’s the highlight of your article?’ You have to sell your manuscript somehow. So that’s what you do. And then a university communications department spices things up a bit, a journalist puts a bold headline above it, and before you know it, badly executed research spreads across the internet like some revolutionary discovery.”
The Greek-American epidemiologist John Ioannidis claimed a number of years ago that most research results were flawed. Is he right?
“That is a pretty strong statement. But we do see that a whole system has arisen in which scientific results are exaggerated and the shortcomings of specific studies are not clearly visible. In my field, psychology, a lot of attention is being paid to the replication crisis. That is why we have launched an initiative to repeat research. The basic idea is that something only counts as scientific knowledge once you have developed a method that allows you to reliably produce and reproduce a result. It turned out that this did not actually work for a lot of research within psychology: a recent study in the journal Science showed that a third of a set of important psychological experiments could not be replicated.”
Is that shocking?
“It’s very easy to say after the fact that all those researchers were pretty stupid. But they often acted with the best intentions and with the knowledge that was available at the time. It is also possible that something was overlooked back then that later proved to be very important. Well, that is inherent to doing science. We must not forget that it is difficult to carry out good research, right?”
In recent years, we have increasingly been working with large-scale datasets in which patterns are sought without a hypothesis being formulated beforehand. Is that a major problem?
“Hypothesis-driven research is the way to establish causality. You start with a theory, derive a hypothesis and then you test it. If you just browse through the data at random, you will always find something. That is not such a serious problem, by the way, as long as you make a clear distinction between confirmatory testing and exploratory research. And that’s where things sometimes go wrong.”
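Zwaan’s point that random browsing through data will “always find something” is the classic multiple-comparisons problem. The short Python simulation below is an editorial illustration, not part of the interview; all names and numbers in it are ours. It correlates an outcome with one hundred variables of pure noise and still turns up “significant” results:

```python
# A minimal sketch of data dredging: test many hypotheses on pure noise
# and some will look "significant" by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_variables = 50, 100

# A dataset of pure noise: no variable is truly related to the outcome.
data = rng.normal(size=(n_subjects, n_variables))
outcome = rng.normal(size=n_subjects)

# Exploratory "browsing": correlate the outcome with every single variable.
p_values = [stats.pearsonr(data[:, i], outcome)[1] for i in range(n_variables)]
false_hits = sum(p < 0.05 for p in p_values)

print(f"'Significant' correlations found in pure noise: {false_hits} of {n_variables}")
```

With a significance threshold of 0.05, roughly five of the hundred tests come out “significant” by chance alone. That is why a pattern found by exploration only becomes a tested hypothesis once it is confirmed on new data.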
In practice, it is very difficult for the public to ascertain which research is of any value. Look what is happening right now concerning the coronavirus. Any research looking at a potential solution goes viral. For example, high concentrations of vitamins were said to help prevent COVID-19 and there are still people who think that hydroxychloroquine is an effective medicine. Who is in charge of ensuring that these kinds of misconceptions are avoided?
“You may not want to hear this, because it’s another typically nuanced answer from a scientist, but: everyone is. It is a systemic problem. Researchers, press officers, journalists, politicians, citizens – we all have a responsibility to handle things conscientiously. For instance, I think it’s great that fact checks are taking place more and more often. And now even Twitter is posting warnings next to Donald Trump’s tweets that are so blatantly false.”
Do you have any tips for the news consumer as to how to cope with all the science news?
“I think it is right to question very bold claims. The more adamant a scientist is, the more doubts I have. It’s also good to check whether someone is really a specialist in the field he or she is talking about. A lot of scientists tend to dabble in things they actually don’t know much about. I know plenty of professors, and they all talk nonsense sometimes. Myself included, as it happens.”
“Without wanting to push people towards conspiracy theories, you may well ask yourself whether ulterior motives are at work in some research. For years, for example, a great deal of research into the effects of smoking was funded by the tobacco industry. The most important thing is simply to use your common sense. Every day an article pops up in the newspapers about the relationship between food and health. But is it really the case that, for instance, half an avocado a day has the same effect on everyone?”
The lack of clear, unequivocal knowledge has proven difficult to handle over the past few months. On the one hand, virologists from the Outbreak Management Team were listened to very carefully. On the other, there was fierce criticism when Jaap van Dissel, director of the RIVM (the Dutch National Institute for Public Health and the Environment), had to revise his views. Were the public’s expectations of science too high?
“I think so. During this crisis, the public got a look behind the scenes of science, and I think that is a good thing. We should become aware that the knowledge you acquire as a scientist tends to be provisional, until a corpus of readily reproducible research results emerges. That process is slow, which is problematic whenever you want answers quickly, as is the case now.”
Over the past ten years, we’ve regularly heard that science is ‘merely an opinion’. Is there possibly any truth to that?
“No. Churchill once said of democracy: ‘Democracy is the worst form of government, except for all the others.’ You could think of science that way, too. Where we do have knowledge, it comes from science. Without it, we would be lost. Having said that, we are not infallible. Even scientists sometimes fall prey to the human failings you see everywhere: sloppiness, vanity, laziness. Science already has a lot of self-correcting mechanisms, but these could certainly be stepped up a notch. We have to learn not to jump to conclusions too quickly. One swallow does not make a summer. Scientists find it difficult to be open about their limitations, assuming that this would come at the expense of their status or credibility. But it would be good if every publication explicitly stated the scope of the findings presented and the extent to which they can be generalised.”