Feature: Evaluating the evaluation
Negotiating the feedback culture
To publish or not to publish: the future of a piece of research is decided during the peer-review process. Three scientists share their thoughts on this task of great responsibility.
It’s hard to count the number of peer-review requests; I get so many – maybe two a week? I have to turn most of them down. I accept them only if I see a good reason.
I’m currently working on two. One article is in my field and written by someone whose work I respect. Doing this review gives me an excuse to find out more about their work, and could lead to new ideas or even, who knows, a collaboration. The other request comes from a fairly prestigious journal for which I’ve never done a review before. Accepting it is good experience and, let’s be honest, good for my CV.
Conferences are more important than journals
I do between five and ten reviews for journals every year, plus dozens of conference submissions that I evaluate as a programme committee member. In my field, having a contribution accepted at one of the two major conferences can be more difficult than publishing in a prestigious journal.
I write almost all my reviews on the train or plane. I first ask myself two questions: is the article sound? And is it worthwhile? If the answer is ‘no’, I try to explain my line of reasoning. If the answer is ‘yes’, I increasingly try to stick to the essentials and make it clear when my suggestions are optional. I don’t want to overload the authors with work for improvements that are ultimately incremental. I learnt how to do reviews on the job, having received virtually no advice from colleagues. I’m trying to change that: I give my students reviews to do, which we then discuss together.
I tend to take the reviews I receive well. I rarely feel that my article has been misunderstood. In general, I understand the points raised and they help me improve the text. Recently, a comment on a conceptual subtlety made me change my perspective. That’s invaluable, even if unusual.
In physics, preprints deposited on the preprint server arXiv play a major role long before they go through peer review. Their subsequent publication is in many ways just a badge of honour. Some very important works have remained on arXiv without ever being published. I use the SciRate website, which allows for recommending and commenting on arXiv preprints and helps in filtering through the hundreds of articles posted online every week. I also keep up to date on X (formerly Twitter), where lots of discussions take place.
Sometimes I write directly to an author if I have questions or comments about their preprint. Obviously, I then focus on the essentials – you don’t contact someone to correct grammatical errors. More and more, this informal form of peer review takes place away from the journals.
In my area at least, formal peer review at times feels more like a requirement for career progression than an essential ingredient of good science.
I try not to change my approach to evaluating an article according to the journal to which it’s submitted. The first thing I look at is whether the conclusions are well supported by the data. Some journals want a clear answer, ‘yes’ or ‘no’, but I prefer to write qualitative evaluations. As a reviewer, my role isn’t to decide whether a submitted article is the right fit for the journal; that’s the editors’ role, on the basis of our evaluations. It’s what I do when I’m in an editorial role myself.
I accept peer-review requests when an article interests me and when I’m not totally overwhelmed with work. I first take notes, give it two days, and then finish the report. As each manuscript is seen by roughly three reviewers, I try in principle to do around three times as many reviews as I publish articles.
It’s rare for a review to be completely wrong
I tend to refuse requests from journals I’m unfamiliar with, review articles that teach little and require immense effort, and work that seems from the outset to offer nothing of interest. I sometimes receive articles that I’ve already rejected for another journal; in that case I prefer not to review them again, so as to give them a second chance with someone else.
The way I take reviews of my own work has changed. Early in my career, I took criticism personally. Today, I have more perspective and can see more easily where my text might actually be improved. With experience, I’ve found that it’s rare for a review to be completely wrong.
We hear people talk about open peer review, where reviewers sign their reports. In that case, some researchers, especially those starting out, might hesitate to openly criticise a senior academic for fear of consequences, even if I think that fear is often exaggerated. A good approach is to circulate the reports among the reviewers. That takes time, but it tempers extreme positions and boosts accountability.
The idea of ‘publish first, review afterwards’ is interesting but isn’t perfect. Comments posted online alongside an article that has yet to be peer reviewed may help non-specialists judge its quality, but who has time to read them all? And editors reject a certain number of manuscripts outright without even seeking the opinions of specialists in the field. That said, I welcome this kind of initiative, because the system today is overloaded and we need solutions. Peer review isn’t perfect, but there’s nothing better.
For me, the main aim of peer review is to determine whether a piece of submitted work makes a contribution to science or not. That’s the first thing I assess, before looking at the details. If the answer is ‘no’ – for example, because it doesn’t take a significant step forward or isn’t well enough connected to what is already known – I explain why and suggest alternatives to the journal. At the beginning of my career, I tended to go straight into the details, but that risks losing sight of the whole.
Some reports sidestep the science presented
I receive multiple requests for peer review every month; I try to accept one of them. I look at whether the topic interests me, or whether I’ve already submitted work to the journal – in that case, it has organised reviews for me and it’s only fair that I do the same. Generally, I decline when I don’t know the field well enough, or I review only specific parts of the manuscript. Some journals divide the work up: some specialists review the statistics, others the linguistic analysis. This is a very good approach that lets us focus on our areas of expertise.
As an author, I appreciate reports that give me room to reflect, reveal weaknesses in our arguments or help us to be clearer. We don’t always get the impression that our manuscript has really been read and understood, and it’s not rare to receive reports that leave something to be desired: they sidestep the science presented, call for superfluous supplementary material and extra citations, or suggest redoing the study the way the reviewer would have preferred. Experienced scientists are sure enough of themselves to ignore this kind of feedback, but those starting out aren’t necessarily. They then spend a great deal of effort responding to every point raised; everyone loses a lot of time, and sometimes the article never makes it out of the process.
Journal editors (the scholars who organise the peer reviews, Ed.) should take a much more active role, and in particular evaluate the reviews themselves. They can comment on reports before sending them to the authors, suggesting that one or another point be ignored, or even leave out reviewers whose reviews are of substandard quality. I made a strong commitment to being an active editor, but it ate up a lot of my time and I stepped down after six years. The workload could be reduced by having more editors.
There’s not enough peer-review training. We can discuss good and bad examples in classes offered during PhD programmes, but above all we must support postdocs when they start to take on large numbers of reviews. A reviewer isn’t a co-author: they shouldn’t suggest other research avenues or point out typos, but should focus on the essence, i.e., judge whether the argumentation is clear and well supported.