Holes in the safety net: can peer review be trusted?


I sometimes re-read the account of New York University physicist Alan D Sokal’s famous spoof article to bring a sense of reality to the hyped realm of published scientific evidence. Admittedly that episode, some 20 years ago this year, centred less on peer review and more on an appeal to editorial ideology, but it is a salutary read in any case. Sokal was so irritated by the tendency of certain journals to be less than rigorous in approving articles for publication that he decided to test his theory by submitting an article which was completely fabricated. It was ‘liberally salted’ with the kind of phrases and concepts which he thought might give it a certain appeal. It had the most wonderful (if completely meaningless) title, ‘Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity’, and was essentially nonsense – but it was published, in the cultural studies journal Social Text. He came clean after the fact and there was a tremendous hullabaloo.

Of course we would think that, with a system of peer review, that kind of thing could never happen! Not so, I’m afraid. There are various systems in use by different journals. Take your pick: single-blind, double-blind, even triple-blind peer review. Alternatively, rather than concealing the identities of reviewers and authors to various degrees, the process can be left entirely transparent. Everyone then knows who is doing what to whom!

Arguably, for authors, that might be preferable, and it encourages referee accountability – no hiding. There is now even a process for post-publication peer review. The notion that independent evaluation takes place on a pro bono basis to help an editor decide on the merits of a proposed publication seems like a really good idea. The trouble is that whenever the process has been stress-tested it has failed spectacularly. One of the most famous examples in the clinical publishing world featured the editor of the BMJ, Fiona Godlee. She arranged for papers containing nine major and five minor deliberate mistakes to be sent to more than 600 of the journal’s experienced reviewers. Some referees missed all the errors, the mean number detected was less than three, and not a single reviewer achieved the full house. Depressing. The same basic idea has been trialled in other disciplines and the results are no more reassuring. Editors may take some comfort from having a peer review process, but that comfort must be thin.

Another worrying angle is that some reviewers have essentially been victims of identity theft. Reputable publishers such as BioMed Central and Springer have retracted a large number of recent papers for a range of corrupt practices, including manipulation of the peer review process: in some cases authors supplied fabricated reviewer contact details and, in effect, reviewed their own work. It is a sorry tale, and these details help to sustain Retraction Watch – a website which tracks retractions as a window into the so-called scientific process.

The reality is that peer review fails to bear the weight of responsibility expected of it. There has to be a better way. Now don’t get me started on the Impact Factor.