Producing replicable findings in the lab
A central pillar of science is verification by replication. But as a growing number of analyses show, this step often fails. Swiss institutions are trying to advance the cause of reproducible research by encouraging researchers to make sure their data is solid – and shared.
In an uncertain world, regular surveys show that the public continues to trust science. But do scientists trust science? Probably not as much as they used to. There is growing awareness of a reproducibility crisis: more and more published scientific findings cannot be validated, because other researchers who repeat the experiments fail to get the same results.
The problem is fuelled by many factors: statistically weak studies that rest on questionable methods, the pressure to produce high-profile findings, publication bias, the reluctance of scientists to submit negative results, and the reluctance of journals to publish them.
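One of those statistical weaknesses is easy to demonstrate. As a minimal, hypothetical sketch (the article names no specific practice), the Python simulation below shows the effect of 'multiple testing': if a study in which no real effect exists tests twenty comparisons at the conventional 5% threshold, the chance that at least one comes out 'significant' by luck alone is roughly 1 - 0.95^20 ≈ 64%.

```python
import math
import random
import statistics

def two_sample_p_value(a, b):
    """Approximate two-sided p-value from a two-sample z-test."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    # Two-sided p-value from the normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)  # fixed seed so the demonstration itself is reproducible
studies, tests_per_study, n = 2000, 20, 30
flagged = 0
for _ in range(studies):
    for _ in range(tests_per_study):
        a = [random.gauss(0, 1) for _ in range(n)]  # two groups drawn from
        b = [random.gauss(0, 1) for _ in range(n)]  # the same distribution
        if two_sample_p_value(a, b) < 0.05:
            flagged += 1
            break  # one "discovery" is enough to report
print(f"Null studies with at least one significant result: {flagged / studies:.0%}")
# prints roughly 64%, matching 1 - 0.95**20
```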
Changing the culture
Swiss institutions are starting to act. The University of Zurich last year launched its Center for Reproducible Science to address the issue. It’s headed by Leonhard Held, a professor of biostatistics. While it’s difficult to point to tangible signs of progress after one year, he says much is going on behind the scenes. “We have already improved the visibility and knowledge of the reproducibility issue at the university”, he says. This has included holding a ‘reproducibility day’ across the university in February 2019.
The centre has already submitted several grant applications. One seeks investment in developing and delivering courses on good research practice – such as on the importance of confirmatory findings. “The current thinking in science is too often that if we’ve shown something, it must be true”, he says. “We need to develop a culture of replication studies”.
A call from the boss
One key strategy for improving reproducibility is sharing data and methods – the essence of the open science movement. Institutions like EPFL and ETH Zurich are organising workshops on reproducibility, open science and research data management. Anna Krystalli, a computer scientist at the University of Sheffield in the UK, was a guest speaker at a joint EPFL-ETH Zurich summer school in 2018. She says two things struck her. First, the event was organised by PhD students themselves, not handed down by institutional officials over the heads of the young researchers. Second, the boss of one of the two institutions joined by Skype. “I think that shows how supportive the senior people are”, she says. “It seemed to be a big part of Swiss research culture. It’s hard to say if it’s better than elsewhere, but I was definitely impressed”. Such events have an impact, she says, both because they raise awareness and because they point to concrete improvements that young scientists can make and share, such as the best software tools for sharing data and computer code.
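As a purely illustrative sketch of the habits such workshops promote – the article names no specific tools, and the script below is hypothetical – a few lines of Python show two of them: fixing a random seed, and saving provenance information alongside the results, so that anyone re-running a shared analysis can match the original output.

```python
import json
import platform
import random
import sys

# A minimal sketch of a reproducible analysis script. The point is not
# the analysis itself but the habits: a fixed seed and a record of the
# software environment saved next to the results.

random.seed(2019)  # fixed seed: the same "random" numbers on every run
data = [random.gauss(10.0, 2.0) for _ in range(100)]
result = {"mean": sum(data) / len(data)}

record = {
    "result": result,
    "python_version": sys.version,
    "platform": platform.platform(),
    "seed": 2019,
}
with open("analysis_record.json", "w") as f:
    json.dump(record, f, indent=2)

print(f"mean = {result['mean']:.3f}  (identical on every re-run)")
```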
“The problem of results that fail to stand up to scrutiny is a challenge for all academic and scientific institutions”, says Hanno Würbel, a biologist at the University of Bern, who has studied the reproducibility of pre-clinical experiments. He says that research with animals is well placed to start addressing these concerns, because it already comes with many checks and paperwork about experimental design. These could be adapted to ensure that work is reproducible before it starts – for example, by making sure that sample sizes are large enough to reliably detect real effects, a property statisticians call power. Animal researchers are used to such guidelines and assessments, and so are less likely to dismiss steps to improve reproducibility as unnecessary bureaucracy.
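Such a check is, in essence, a statistical power calculation. A minimal sketch, assuming the widely used statsmodels Python library (not mentioned in the article): how many animals per group would be needed to detect a medium-sized effect?

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group needed to detect a medium effect (Cohen's d = 0.5)
# with 80% power at the conventional 5% significance level.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"Required sample size per group: {n_per_group:.0f}")  # about 64
```

Running the calculation before the experiment starts – rather than after a disappointing result – is exactly the kind of design check the existing animal-research paperwork could absorb.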
While many universities speak of the need to tackle reproducibility, Würbel believes that real progress tends to come down to the dedication of a few individuals. “A big part of it is education and training”, he says. “To get change, we might need to wait until the older scientists retire”.
Held doesn’t want to wait that long. He would like his centre in Zurich to develop its own research in meta-science, or the science of science. One model for him is the Meta-Research Innovation Center at Stanford University (METRICS), launched in 2014 and run by John Ioannidis, a long-standing advocate of reproducibility. It focuses on the entire research cycle, from how experiments are planned and their results disseminated, to how universities and funders reward and incentivise scientists. The last point is key: so far, the academic system mainly rewards long publication lists in high-impact journals, which are rarely keen to publish replication studies. That’s one of the changes science must focus on as it seeks to clean up its act.