On the effects of evaluation on the ethics of science

Competitive evaluation of scientific research is necessary, but the criteria we adopt may promote unethical behavior. Bibliometric criteria are both unavoidable and especially prone to encourage opportunistic strategies. Awareness is important because we have no obvious solutions.


   Shortage of funding and the accountability of scientific research to taxpayers have made competition among scientists fiercer and have increased the pressure on evaluation, which is mostly based on bibliometric criteria. Bibliometric criteria are notoriously imprecise, often even arbitrary, but the sheer volume of research that must be evaluated makes them necessary.
   This trend entails the danger of a progressive subversion of long-standing ethical principles of scientific research. To give just one example, it is a common experience that we send out a paper for evaluation and the reviewers' response is that it might be accepted provided that the current literature is given better consideration. A list of papers to be cited follows, most of them from the same group of scientists, which one suspects may include the reviewer. This is a direct consequence of the use of citation-based indexes of quality in evaluation procedures: if one's grant proposal or professional appointment is accepted or rejected on the basis of the number of citations one has gathered over one's career, then one's objective becomes to gather as many citations as possible, by whatever means.
   Since the rules we set for science evaluation inevitably reshape the ethics of science, we should pay close attention to which rules we set. Evaluation is necessary, and bibliometric criteria are unavoidable. Yet we should consider that the risk of ethical degradation is likely to be directly proportional to the pressure applied by evaluation. Relieving the pressure means allocating the available resources among a larger fraction of the applicants: selecting the best 30% instead of the best 5%, at the cost of lowering the share allotted to each successful application. Elitism not only fails to create a favorable environment for freedom of thought and innovation, but also encourages self-promotion by all means, especially if quality is evaluated using bibliometric criteria, which are relatively easy to manipulate. Given that research is evaluated by peers, there is no excuse if evaluation criteria promote unethical practices: we scientists set our own standards.
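To make the incentive concrete, here is a minimal sketch, in Python, of one widely used citation-based metric, the h-index: the largest h such that h of a researcher's papers have at least h citations each. The post does not name a specific index, so this is purely illustrative; the citation counts below are invented. It shows why coerced citations pay off: a few extra citations on mid-ranked papers can raise the index, while the same citations added to an already highly cited paper cannot.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still supports an index of `rank`
        else:
            break
    return h

# Hypothetical career: five papers with these citation counts.
baseline = [10, 8, 5, 3, 2]
print(h_index(baseline))            # 3

# Two coerced citations on the 4th paper raise the index...
print(h_index([10, 8, 5, 5, 2]))    # 4

# ...while ten extra citations on the top paper change nothing.
print(h_index([20, 8, 5, 3, 2]))    # 3
```

The asymmetry is the point: a metric that only counts papers clearing a threshold rewards exactly the kind of targeted citation-gathering the article describes.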

Andrea Bellelli

Prof., Sapienza University of Rome


Isabel Varela Nieto
over 3 years ago

Food for thought! Indeed it is a topic with many facets and open for discussion.
Which reference index should we use to objectively evaluate the impact of colleagues' work?

Andrea Bellelli
over 3 years ago

I do not have a solution, aside from what I wrote in the post. Clearly we should first decide what the purpose of evaluation is: allocating funds? promoting a scientist's career? awarding a position? Evaluation should be limited to those instances where it is really needed. A wrong use of evaluation is to justify a reduction of resources: i.e., we have less, thus we adopt stricter rules.

Andrea Bellelli
over 3 years ago

As a post scriptum to this article, I note that Nature Index has published the commentary "Italian scientists increase self-citations in response to promotion policy. Study reveals how research evaluations can lead to self-serving behaviour." by Dalmeet Singh Chawla. Link: https://www.natureindex.com/news-blog/italian-scientists-increase-self-citations-in-response-to-promotion-policy

Athel Cornish-Bowden
over 3 years ago

Another point to mention is the inflation in the number of authors per paper, which is sometimes (not always, of course) a way of padding the CVs of people whose actual contribution to the work has been negligible. I have looked at the numbers of authors of papers published in the European Journal of Biochemistry and FEBS Journal at ten-year intervals from 1967 to 2017, in each case examining the first issue of the year. In 1967, 60% of papers had one or two authors; by 2017 that figure had fallen to 9%, with a more or less steady decrease over the intervening years. The median numbers of authors tell the same story: two in 1967, three in 1977 and 1987, four in 1997, and five in 2007 and 2017. In biochemistry we are still far from the 3,000 authors one can find on papers in particle physics; even genome-sequencing papers fall far short of that. Nonetheless, the trend is there, and I for one would be very suspicious of an "author" unable to justify the Discussion and main conclusions of a paper.