On the effects of evaluation on the ethics of science

Competitive evaluation of scientific research is necessary, but the criteria we adopt may promote unethical behavior. Bibliometric criteria are both unavoidable and especially prone to rewarding opportunistic strategies. Awareness is important because we have no obvious solutions.

Andrea Bellelli
Jun 02, 2018

   Shortage of funding and the accountability of scientific research to taxpayers have made competition among scientists fiercer and have put increased pressure on evaluation, which is mostly based on bibliometric criteria. Bibliometric criteria are notoriously imprecise, often even arbitrary, but the sheer volume of research to be evaluated makes them necessary.
   This trend entails the danger of a progressive subversion of long-standing ethical principles of scientific research. To give just one example, it is a common experience to send out a paper for evaluation and receive the reviewers' response that it might be accepted provided that the current literature is given better consideration. A list of papers to be cited follows, most of them from a single group of scientists that, one suspects, may include the reviewer. This is a direct consequence of using citation-based indexes of quality in evaluation procedures: if one's grant proposal or professional appointment is accepted or rejected on the basis of the citations one has gathered over a career, then one's objective becomes gathering as many citations as possible, by whatever means.
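To see concretely how coerced citations move a citation-based index, consider the h-index (the largest h such that h of an author's papers each have at least h citations). The sketch below is purely illustrative; the paper counts are hypothetical and not drawn from any real author.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break     # papers are sorted, so no later paper can help
    return h

# A hypothetical author with five papers:
before = [10, 8, 5, 4, 1]
print(h_index(before))  # -> 4

# The same author after coercing a handful of citations
# onto the two weakest papers:
after = [10, 8, 5, 5, 5]
print(h_index(after))   # -> 5
```

Five strategically placed citations raise the index by a full point, which is why reviewer-coerced citation lists are not a harmless nuisance: small manipulations translate directly into the numbers on which careers are judged.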
   Since we cannot avoid that the rules we set for science evaluation may reshape the ethics of science, we should pay attention to what rules we set. Evaluation is necessary, and bibliometric criteria are unavoidable. Yet we should consider that the risk of ethical degradation is likely to be directly proportional to the pressure applied by evaluation. Relieving the pressure implies allocating the available resources among a larger fraction of the applicants: selecting the best 30% instead of the best 5%, at the cost of lowering the share allotted to each successful application. Elitism not only fails to create a favorable environment for freedom of thought and innovation, but also encourages self-promotion by all means, especially when quality is evaluated using bibliometric criteria, which are relatively easy to manipulate. Given that research is evaluated by peers, there is no excuse if evaluation criteria promote unethical practices: we scientists set our own standards.

Andrea Bellelli

Prof., Sapienza University of Rome

3 Comments

Isabel Varela Nieto 17 days ago

Food for thought! Indeed it is a topic with many facets, open for discussion.
Which reference index should we use to objectively evaluate the impact of colleagues' work?

Andrea Bellelli 9 days ago

I do not have a solution, aside from what I wrote in the post. Clearly we should first decide what the scope of evaluation is: allocating funds? promoting a scientist's career? awarding a position? Evaluation should be limited to those instances where it is really needed. A wrong use of evaluation is to justify a reduction of resources: i.e., we have less, thus we adopt stricter rules.

Andrea Bellelli 9 days ago

As a post scriptum to this article, I note that Nature Index has published the commentary "Italian scientists increase self-citations in response to promotion policy. Study reveals how research evaluations can lead to self-serving behaviour." by Dalmeet Singh Chawla. Link: https://www.natureindex.com/news-blog/italian-scientists-increase-self-citations-in-response-to-promotion-policy