It is paradoxical and ironic that peer review, a process at the heart of science, is based on faith not evidence.
There is evidence on peer review, but few scientists and scientific editors seem to know of it – and what it shows is that the process has little if any benefit and lots of flaws.
Peer review is supposed to be the quality assurance system for science, weeding out the scientifically unreliable and reassuring readers of journals that they can trust what they are reading. In reality, however, it is ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant.
As Drummond Rennie, the founder of the annual International Congress on Peer Review and Biomedical Publication, says, “If peer review was a drug it would never be allowed onto the market.”
Cochrane reviews, which systematically gather all available evidence, are the highest form of scientific evidence. A 2007 Cochrane review of peer review for journals concludes: “At present, little empirical evidence is available to support the use of editorial peer review as a mechanism to ensure quality of biomedical research.”
We can see before our eyes that peer review doesn’t work because most of what is published in scientific journals is plain wrong. A famous paper in PLOS Medicine by Stanford University’s John Ioannidis argues that most published research findings are false. Studies by Ioannidis and others find that studies published in “top journals” are the most likely to be inaccurate. This is initially surprising, but it is to be expected, as the “top journals” select studies that are new and sexy rather than reliable. A series published in The Lancet in 2014 showed that 85 per cent of medical research is wasted because of poor methods, bias and poor quality control. Another study showed that more than 85 per cent of preclinical studies could not be replicated, replication being the acid test in science.
I used to be the editor of the BMJ, and we conducted our own research into peer review. In one study we inserted eight errors into a 600-word paper and sent it to 300 reviewers. None of them spotted more than five errors, and a fifth didn’t detect any. The median number spotted was two. These studies have been repeated many times with the same result. Other studies have shown that if reviewers are asked whether a study should be published, there is little more agreement than would be expected by chance.
Peer review is anti-innovatory because it is a process that depends on approval by exponents of the current orthodoxy. Bruce Glick, Hans Krebs and the team of Solomon Berson and Rosalyn Yalow all had hugely important work – including Nobel prizewinning research – rejected by journals.
Many journals take months and even years to publish, and the process wastes researchers’ time. As for the cost, the Research Information Network estimated the global cost of peer review at £1.9 billion in 2008.
Peer review is easily abused, and there are many examples of authors reviewing their own papers, stealing papers and ideas under the cloak of anonymity, deliberately rubbishing competitors’ work, and taking a long time to review competitors’ studies. Several studies have shown that peer review is biased against the provincial and those from low- and middle-income countries. Finally, it doesn’t guard against fraud because it works on trust: if a study says that there were 200 patients involved, reviewers and editors assume that there were.
There have been many attempts to improve peer review through training reviewers, blinding them to the identity of authors and opening up the whole process, but none has shown any appreciable improvement.
Perhaps the biggest argument against the peer review of completed studies is that it simply isn’t needed. With the World Wide Web everything can be published, and the world can decide what’s important and what isn’t. This proposition strikes terror into many hearts, but with so much poor-quality science published what do we have to lose?
Yet peer review persists because of vested interests. Absurdly, academic credit is measured by where people publish, holding back scientists from simply posting their studies online rather than publishing in journals. Publishers of science journals, both commercial and society, are making returns of up to 30 per cent, and journals employ thousands of people. As John Maynard Keynes observed, it is impossible to convince somebody of the value of an innovation if his or her job depends on maintaining the status quo.
Scrapping peer review may sound radical, but actually by doing so we would be returning to the origins of science. Before journals existed, scientists gathered together, presented their studies and critiqued them. The web allows us to do that on a global scale.
Richard Smith was editor of the BMJ and chief executive of the BMJ Publishing Group from 1991 to 2004.
Postscript
Article originally published as: Ineffective at any dose? Why peer review simply doesn’t work (28 May 2015)