
Although one might assume that scientists are naturally equipped with strong critical thinking skills, formal instruction in developing such a toolset is rarely provided. This oversight can leave even the most experienced scientists exposed to cognitive biases and reasoning flaws that affect their experiments and ideas.
As someone working in science, I initially believed critical thinking was a given in this field and would somehow be part of the package. After all, thinking is central to what scientists do. However, my experience over the years has shown me otherwise. Even seasoned researchers are not immune to cognitive pitfalls. I share this not from an enlightened perspective but as someone who has learned – and continues to learn – critical thinking through trial and error. While I cannot speak to whether and how critical thinking is taught in schools today, my journey suggests that many scientists develop these skills through mistakes, whether from flawed experiments, rejected grants, or the sometimes opaque guidance of supervisors and referees. In this essay, I will share my experiences and humbling failures, which reflect challenges I have faced and have seen among my colleagues, in the hope of defining some problems and offering possible solutions.
My first encounter with a critical thinking “situation” came during my Master's thesis. This was my first experience with hands-on experiments, which came relatively late compared to students now. My first task was a straightforward experiment: killing some leukaemia cells in culture. That seemed easy, I thought: just treat the cells with some nasty chemicals and see how many survive. Confident in my approach, I presented my results to my supervisor, who asked, “Where is the control?” My naive response was, “What is a control?” She gently explained that I needed to account for the vehicle in which the drug was dissolved, as it might have effects of its own. Besides feeling completely stupid (why didn’t I think of that?!), this groundbreaking revelation reshaped my understanding of experimental design. While the concept of controls seems straightforward, and most of you will probably consider them fundamental principles for a scientist, identifying appropriate ones can be complex. In some studies, controls become a central focus, forming a significant part of the paper. Surprisingly, “failed” controls can open new lines of research (see more about the power of failed controls here). This foundational lesson taught me that robust experiments require proper technical controls to support strong conclusions. This realisation marked what I call “Critical Thinking V 0.0”.
Armed with this basic understanding, I embarked on my PhD, where the complexity of experiments grew exponentially. For example, one project required me to determine whether a protein (X) influenced a process (Y) through its interaction with another protein (X1), which regulates another process (Y1). We needed to understand whether protein X-driven process Y required the X1-Y1 axis and, more importantly, whether processes Y and Y1 were biologically connected or independent. My supervisor suggested a strategy: eliminate protein X1 and observe whether Y still occurred. While I found the approach ingenious, its logic eluded me until I had to explain it to someone else months later*. Eventually, I learned that I was testing whether the relationship between these proteins was epistatic – whether one protein’s effect depended on the other. This was my introduction to biological controls, a step beyond the technical controls I had learned earlier. Unlike a straightforward vehicle control, biological controls require understanding the interplay of multiple factors within a system. This was “Critical Thinking V 1.0”. My PhD years were filled with such experiments, coinciding with advances in molecular biology tools that allowed precise silencing or expression of genes and sometimes led to experiments that felt more like riddles of logic than straightforward procedures. For instance, in silencing experiments, expressing a transcript resistant to RNA interference to verify the on-target effects of an shRNA – the now-standard “rescue” control – became common practice, on top of the more standard positive control (silencing a validated target) and negative control (a construct that does not target any known transcript). And now, with the advent of more sophisticated genome-editing strategies, the number of controls can be even higher. Still, while such experiments make logical sense, their implementation can be technically challenging, and failures may not always reflect flaws in the hypothesis but rather technical difficulties.
By the end of my PhD, I felt confident enough to move into postdoctoral research, only to face a new set of challenges. During my postdoc, I shifted to a new research area, which meant learning new methods and an entirely different set of technical and biological controls. A more significant challenge, however, was establishing a clear logic for my projects. Unlike during my PhD, my supervisor provided less oversight, encouraging independence. Initially, I conducted experiments driven more by excitement than by a well-defined hypothesis. The central question of my project was still evolving, making it difficult to “prioritise” experiments. How do you prioritise experiments? I learned the hard way that designing a coherent experimental framework is more about knowing which experiments not to do than which ones to do – and that requires considerable self-restraint. This realisation was pivotal for writing papers, which demand a logical sequence of experiments to validate a hypothesis or model. This phase taught me “Critical Thinking V 2.0”: the importance of framing research within a broader conceptual framework.
When I became an independent researcher, I faced an even more significant challenge: developing a (long-term) vision. It was no longer enough to generate interesting data and put them in a paper; I now had to articulate how my research contributed to the “bigger picture”. What overarching questions was I addressing? What relevance did they have to the field and beyond? This required a deep understanding of current knowledge and available technologies, as well as the ability to identify critical gaps. Framing these questions correctly proved just as crucial as answering them. For example, while working in cancer metabolism, I struggled with how to test the causal role of metabolic pathways in tumour progression. Would testing tumour formation upon genetic ablation of a metabolic gene suffice, or were more nuanced approaches needed? These debates forced me to refine my thinking and pushed me toward what I consider “Critical Thinking V 3.0”: integrating technical rigour with conceptual depth to address significant unanswered scientific questions. Intriguingly, this skill draws more on epistemology – the philosophy of science – than on science itself. For instance, concepts such as Karl Popper's falsifiability or Thomas Kuhn's paradigm shifts can provide valuable insights into constructing robust scientific hypotheses and challenging existing paradigms effectively. Here, we are not discussing which experiments can be executed but how a question can be addressed; performing (the right) experiments is only secondary to that. In other words, established experiments alone will not allow us to push the boundaries of science. We need deeper thinking to move beyond our current scientific paradigms. This is why I encourage everyone embarking on a scientific career to explore the philosophy of science, which addresses fundamental principles of this kind.
In conclusion, the journey of “thinking like a scientist” is a continuous evolution, marked by milestones that build on one another, from learning the importance of technical and biological controls to developing strategic focus and a long-term vision. Yet critical thinking is not a given and is rarely taught; it is a skill developed through experience, (self-)reflection, and perseverance. For those embarking on this path, the key is to treat mistakes as learning opportunities and remain open to new ways of thinking. Try distilling the “thinking strategy” from every paper and discussion with peers. After all, science thrives on curiosity but is built on critical thinking.
* Over time, I have found that the most effective test of one's knowledge and critical understanding of a subject is the ability to explain it clearly to others; as Albert Einstein elegantly put it: “If you can't explain it to a six-year-old, you don't understand it yourself.”
Note: Four stages of critical thinking are described here, but I am 100% sure the journey doesn’t stop there; many more layers are waiting. I shall be prepared!
Top image of post: by Gerd Altmann from Pixabay