Reproducing the results of some of psychology’s seminal studies has proved a tricky task for scholars in recent years.
Now, a provocative paper from a US researcher argues that many of these replication efforts are doomed to fail.
In a criticism of the “enormous resources” spent on such attempts, Tal Yarkoni, research associate professor in the department of psychology at the University of Texas at Austin, argues that the reproducibility crisis is the result of a greater “generalizability crisis”, where broad psychological theories are being inferred from results that are unique to the participants tested and to the circumstances of the experiment.
In his article, published as a preprint, Dr Yarkoni highlights this problem by citing a 1990 study that apparently proved that witnesses found it harder to identify criminal suspects if they had previously been asked to describe facial features.
“A recognition memory task that uses just one video, one target face, and one set of foils simply cannot provide a meaningful test of a broad construct…and it does a disservice to the field to direct considerable resources to the replication of such work,” says Dr Yarkoni.
If psychologists are serious about doing research that could make “useful real-world predictions”, rather than conducting highly contextualised studies, they should use “much larger and more complex datasets, experimental designs and statistical models”, Dr Yarkoni advises.
He also suggests that the “sweeping claims” made by many papers bear little relation to their results, maintaining that a “huge proportion of the quantitative inferences drawn in the published psychology literature are so inductively weak as to be at best questionable and at worst utterly insensible”.
Many psychologists were indulging in a “collective self-deception” and should start “acknowledging the fundamentally qualitative nature of their work”, he says, stating that “a good deal of what currently passes for empirical psychology is already best understood as insightful qualitative analysis dressed up as shoddy quantitative science”.
That would mean no longer including “scientific-looking inferential statistics” within papers, whose appearance could be considered an “elaborate rhetorical ruse used to mathematicise people into believing claims they would otherwise find logically unsound”.
Marcus Munafo, professor of biological psychology at the University of Bristol and one of the leaders of an initiative that promotes rigorous research, said the paper was “part of a wider process of self-reflection and critique that psychology – along with many other sciences – is going through, which in turn is part of a broader ongoing discussion of research culture”.
“The key message for me is that we need to think hard about the ways in which we work, and the tools that we use, to ensure that the work we produce is robust and valuable,” Professor Munafo told Times Higher Education.
He added that “one inevitable conclusion” was that “we may need to change the ways in which we work, including how we design, interpret and report our studies, including perhaps being more modest in our claims”.
“Although that process may be uncomfortable, and entail quite broad changes to how we work – our training, our incentive structures and so on – ultimately it will enable us to do better work,” he said.
Print headline: Psychology is deceiving itself, scholar argues