Is Psychology Research Unreliable?
December 23, 2023
In the past decade, experimental psychology has been dealt a serious blow in the form of what’s referred to as the “replication crisis.” This “crisis” revolves around the finding that only about 50% of experimental psychology results can be reproduced in the lab at a later date. For years, this finding has haunted experimental psychology and other social science fields, suggesting that there is something fundamentally wrong with the way these fields conduct research, that their research questions and methods are not up to the task, and that their results are ultimately not to be trusted. Combined with the pre-existing crisis of public trust in health and science, this situation is not to be taken lightly. The proclamation of a replication crisis in psychology can only mean less trust and confidence in science among members of the general public.
Fast forward to November 2023, when a paper appeared in Nature Human Behaviour suggesting that the replication “crisis” may not actually be as bad as it sounds. The paper found that nearly 86% of experimental psychology results could be replicated in the lab at a later date, and that even the effect sizes (that is, the magnitude of the difference between the experimental and control conditions) held up.
How did this new finding come about? The researchers spent about five years assembling the most rigorous studies possible and then attempting to replicate them in the lab. Instead of trying to replicate pre-existing studies, they asked four prestigious research labs in the U.S. to devise and carry out experiments on topics of interest in experimental psychology. They then took these results to their own lab and tried to replicate them. To be included in the replication study, the original studies had to be as rigorous as possible, employing safeguards such as a minimum sample size of 1,500 and preregistration (that is, setting down exactly what the experiment will entail and how the results will be analyzed before starting). The results were unexpectedly positive.
While these results should prompt celebration (and relief), they do not automatically translate into increased public trust in science, which is a large part of what we at Critica are trying to achieve. The problem is that many (maybe even most) studies are not conducted with the degree of rigor required in this study, and many of those results may therefore be impossible to replicate. This could still leave the field of experimental psychology (and who knows what other scientific endeavors) with a high volume of published studies whose reliability is unclear. We can probably rest assured that, when done well, experimental psychology (and probably other social science) studies should be taken seriously. But this still leaves the problem of how to weed low-quality studies out of the field, a large structural problem that we still need to solve. Hopefully, last month’s published study will be a wake-up call to all social scientists that rigor in the design and analysis of their experiments is required.