Op-Ed: We rely on science. Why is it letting us down when we need it most?
Science is suffering from a replication crisis. Too many landmark studies can’t be repeated in independent labs, a process crucial to separating flukes and errors from solid results. The consequences are hard to overstate: Public policy, medical treatments and the way we see the world may have been built on the shakiest of foundations.
In June, the latest chapter in the replication saga featured a highly influential study on memory. In 2010, in a blockbuster article in the journal Nature, New York University researchers found that it was possible — without the use of drugs, brain stimulation or anything invasive — to “rewrite” a person’s memory so they’re less afraid when shown a reminder of something that had scared them in the past.
Such results could have groundbreaking implications for the treatment of posttraumatic stress disorder. Accordingly, the Nature paper has been cited more than 1,100 times, often in trials of new PTSD therapies. The finding also has received generous attention in the popular media, including articles such as “How to Erase Fear in Humans” and an influential New Yorker profile of the lead author.
However, when scientists at KU Leuven, a research university in Belgium, tried to replicate the memory experiment, they ran into one problem after another. They found a host of errors, inconsistencies, omissions and other troubling details in the original study. For example, the NYU researchers had tested a much larger number of subjects than they reported; they made a “judgment call” to drop the data from around half their sample — not a move that’s consistent with full transparency.
Months turned to years as the Belgian scientists wrangled with the data to get to the bottom of the discrepancies and work out exactly how the original experiment was performed. When they eventually got their new experiment running, they found no evidence for the “rewriting” effect. Their report was finally published, a full 10 years after the much-lauded original finding.
It’s important to note that failure to replicate doesn’t imply misconduct on the part of the original researchers, but it does call into question their conclusions, as well as other research that relied on them.
There are many similar cases. In 2013, scientists set out to replicate 50 high-profile studies on the biological aspects of tumor growth. They discovered that not a single one of the original published papers documenting the work reported enough information about the study methods to allow them to even attempt an independent replication. Eventually, after contacting the original authors, some researchers managed to repeat some of the experiments, with a mixed bag of results compared to the initial findings. Others among the replication researchers gave up entirely.
Science shouldn’t be like this. The scientific record is supposed to be a clear, complete document of what scientists have done. If other researchers struggle to even attempt to replicate a study, there’s been a major breakdown in scientific communication.
The fact that papers are written and published with such scant detail reveals how little the system cares about replication. In fact, surveys of psychology, education, economics and criminology research estimate that no more than 1% of all studies in those fields are explicit replications. Perhaps a look at “harder” sciences would find less discouraging results, but to my knowledge no such surveys have been performed.
Scientists care so little for replication because it doesn’t advance their careers. Why run such a study, double-checking someone else’s work, when you could run your own entirely new, exciting experiment? Why focus on carefully adding to an established line of research when what distinguishes you to university tenure committees and to journal editors is a flashy, unique finding?
Breaking science’s addiction to novelty will take serious effort on multiple fronts. But the story of the memory-rewriting replication study offers some hope.
The journal where the replicators published their work, Cortex, is at the forefront of a new type of science publication that is as interested in scrutinizing past findings as in showcasing what’s brand-new. Cortex had assured the KU Leuven researchers in advance that a competent replication study would be published, making it less likely they would throw up their hands and move on after encountering the frustrations of repeating the original study. Cortex also published a separate paper from the replicators devoted purely to re-analyzing the original study’s data (and, in this case, exposing its defects).
If scientists are rewarded with much-coveted publication credits for running replications and for long-form critiques of each other’s work, it will help rebalance the broken system of incentives.
So much relies on scientists getting things right, including our ability to escape from the COVID-19 pandemic. It’s tragic, then, that the scientific system has decoupled the goal of “getting it right” — which usually requires replication — from that of “getting it published.”
We may not be able to rewrite our worst memories but we can rewrite the rules of a system that let flawed findings stand unchallenged for 10 years. With the correct incentives, we can make the scientific literature what it’s meant to be: robust, reliable and replicable.
Stuart Ritchie is a faculty member in the Social, Genetic and Developmental Psychiatry Centre at King’s College London and author of “Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth.”