Vicious circles and poison pens

Artwork by Oliver Hoeller.

We don’t train young scientists how to review papers.

The integrity of the scientific enterprise – symbolically, and to a large extent practically – depends on the institution of peer review. Authors wishing to publish their research results submit their manuscript and data for consideration to a journal, which then commissions expert peer review. The reviewers, who provide their time and expertise for free, critique the data and the manuscript and on the basis of this, the journal’s editor (or editorial team) decides whether or not the paper will go forward for publication.

It’s a wonderful system, far from perfect, but one that provides a level of feedback and criticism to authors that stands as an example to most other areas of human activity. But there is one major problem: the reviewers are all amateurs.

Not amateurs in the sense of being inept, that is, but amateurs in the Corinthian sense – they have had no professional training whatsoever. In fact, if they have had any training at all – and many have had none – it is more in the form of guidance from one of their mentors…whose training will have been as amateurish in turn. A bit like getting your sex education from friends’ older siblings. It might be useful, but it’s probably a bit risky too (“If you want to avoid getting pregnant, you just have to hold your breath…”).

It is a great irony then, that a system that provides such potent and unsentimental feedback on the performance of authors provides zero feedback on the performance of the reviewers themselves. Authors can, of course, complain if reviewers are being very unfair, and are (almost) always given the opportunity to respond to the points raised by a reviewer, but they do not get to actually provide feedback and constructive criticism of the review itself. “Quis custodiet ipsos custodes?” (“Who will guard the guardians?”) asked Juvenal, but a scientist may reasonably ask, “Who reviews the reviewers?”

It’s an important question because while authors overwhelmingly support the peer review system, they all have their share of anecdotes that make the Spanish Inquisition look like the Salvation Army. Call it road rage syndrome, call it impunity, but it is remarkable what venom scientists will gladly pour on their peers when shielded by the anonymity of peer review.

That anonymity comes with an implicit requirement for mutual respect, or at least sympathy, between reviewer and authors, but it’s a licence that’s often abused. Reviews are not a chance to settle scores, hinder competitors, or vent one’s own frustration. Being on the receiving end of a hostile review, or of too many bad ones, may lead a scientist to conclude that that’s how it’s done. In the absence of external input and feedback, you draw conclusions on acceptable behaviour from what’s to hand – a bit like kids watching porno films and concluding that that’s normal sexual conduct. Hostile reviews are unprofessional and kill trust. They create vicious circles and poison pens.

It’s worth remembering that almost nobody knowingly does shoddy work and then submits it for publication. Suboptimal work is usually a consequence of inadequate training, and it’s the reviewer’s job to draw attention to shortcomings (in technique, in logic, in interpretation) and suggest how they could be improved. In essence, reviewing is a form of remote mentoring – gently pointing out how things could have been better, what previous work might have been overlooked, and ideally, highlighting areas worthy of praise.

When reviewing, you should go out of your way to be as thorough as possible without being unduly negative. Usually it’s several years of work under assessment (with considerable impact on the subsequent careers of the young scientists listed as first authors), so give it the time it deserves.

TIR’s recommendation: In the absence of standardised training, waive your anonymity and start signing your reviews. Signing reviews has the paradoxical effect of making them impersonal: it emphasises that you are focusing on the science, being professional, and trying to assess the manuscript critically but fairly. Waiving your anonymity also increases the pressure on you to actually do those things. You have to be careful about how you phrase things and make sure nothing can be misread or taken the wrong way – but given how much the review matters to the authors, that ought to be a reasonable price to pay. They are willingly – probably warily, possibly fearfully – opening themselves up for critique; the least you can do is respect that. Could you stand in front of the authors and read your review to them aloud? If not, moderate the language. Be tough, but being tough does not mean being nasty.

After the journal has made a final decision (at which point your responsibility as a reviewer is discharged), write an e-mail to the authors asking for feedback: was there anything they thought was unfair, or that needed clarifying, in your review? You may not get a reply, but in TIR’s experience most authors are happy to discuss their papers. Journals, in turn, have a duty towards the reviewers they commission and should take action if a reviewer who waives their anonymity is then harassed by authors.

Originally published on TIR.