The journal Science has published a new study examining the reproducibility, and therefore the validity, of 100 previously published studies in social and cognitive psychology. The results of the review, called the Reproducibility Project: Psychology, are sobering: investigators reported that they were able to replicate only about 40 percent of the original results.
These results call into question many bedrock assumptions of academic research, especially in psychology, psychotherapy, and social science. They arrive at a time when many aspects of academic research and its value to society are already under scrutiny. Should incremental advances be rewarded more than they are? Are 'novel' findings really grounded in reality? Is academic science drifting too far ahead of common practice and industry?
“Central to the scientific method, experiments must be reproducible,” says Gilbert Chin, a senior editor at Science. “That is, someone other than the original experimenter should be able to obtain the same findings by following the same experimental protocol.”
“There has been growing concern that reproducibility may be lower than expected or desired,” said author Brian Nosek, a psychology professor at the University of Virginia. Alan Kraut, executive director of the Association for Psychological Science and a COS board member, said, “Inevitable variations in study participants, timing, location, the skills of the research team and many other factors will always influence outcomes. The only finding that will replicate 100 percent of the time,” Kraut noted, “is one that is likely to be trite and boring.”
In the final analysis, only 36 percent of the replication attempts produced significant results, and only 39 percent of the research teams deemed their replication a success. The study also found that 'surprising' results were less likely to replicate.
“Publication is the currency of science,” Nosek says. “To succeed, my collaborators and I need to publish regularly and in the most prestigious journals possible. But academic journals routinely prioritize ‘novel, positive and tidy results.’”
Indeed, unlike the commercial world, academia seems to offer little reward for studies that make only minor advances, or for ones that revisit previously published experiments and extend them incrementally.
In light of this reality, Marcia McNutt, editor in chief of Science, says, “Authors and journal editors should be wary of publishing marginally significant results, as those are the ones that are less likely to reproduce.” Nosek concludes, “If they lose sight of that fact, then the published literature may become more beautiful than the reality.”
Innerlife STS for Organizations is a cloud-based mobile platform that enables health care organizations and health insurance companies to evaluate the effectiveness of mental health care providers and track the progress of their patients. Innerlife STS uses data intelligence and analytics to produce contextualized narrative reports that explain the effectiveness of mental health care spending on individual providers.