Science visibility through AI: who are we talking to from now on?

“Ten questions to AI regarding the present and future of proteomics” was the easiest paper to write, but the one that got me thinking the most. Why are we so naïve as to assume AI can predict the future, or even discern important from merely popular knowledge? Maybe it can, but only if we start sharing our science differently.

How it all started…

It was a beautiful spring morning in the heart of New York City. My postdocs and I were in a coffee shop, waiting for the crowd to dissipate in front of the American Museum of Natural History. You know, the one from the movie “Night at the Museum”, where the T-rex skeleton chases you if you sleep over. We were having a hot chocolate before going to an exhibit and, unsurprisingly, we were having the kind of exciting science conversations that never happen during routine lab days. We were wondering whether AI will change the world (duh!) and, more importantly, if the world is changing, how we should adapt. So, we scribbled down some questions that we put to AI in a little viewpoint (Stransky et al. Front. Mol. Biosci., 2023). While analyzing the responses, we quickly realized that AI knows far more about proteomics than we suspected, but its lack of intuition is enough to break the spell that makes us believe we are talking with something truly intelligent.

AI often struggles to understand the context and nuanced aspects of scientific research. This became particularly evident when we asked 'inappropriate' questions. Obviously, I asked it to list the most prominent proteomics scientists, checking whether, by mistake, I was in there. I have spent my entire scientific career in proteomics; I might not be the most impactful proteomics scientist in the world, but I surely know some. It was funny to see AI mentioning near-complete strangers, perhaps because they were very vocal on Twitter or were listed as recipients of many small awards on some websites. AI’s decision-making process is a total black box, which might be irrelevant for idle conversation, but it could become really serious once it is too late to address. I remember the first examples of machine learning experiments, in which computers were asked to name different breeds of dog from pictures. They would label “husky” every picture with snow in it, even if there was no dog there at all… Not having a clear understanding of how AI discerns important from merely most-cited data points could either be funny or have serious consequences.

…and what the future might bring

So, let’s leave aside for a moment all the issues related to clinical diagnostics, translational medicine, etc. Is AI a risk for scientists today? The usual answer relates to dependency and over-reliance: there is a risk that leaning on AI could lead to a decline in critical thinking and problem-solving skills among researchers. However, I personally see other risks directly related to our interactions. For example, how will we promote our work if everyone does literature searches using AI? Will we disappear into oblivion if we do not update our Wikipedia page? Maybe it will even change the way we write scientific papers; we might have to concentrate every bit of precious knowledge in the abstract. Papers might begin with: “The essence of this project is Figure 6C; the rest of the text and panels are just validation”. OK, I might be cool with that. Actually, it might be about damn time!

No problem then? Well, yes and no. AI will continue to advance, but there is a real risk that we scientists will have to adapt to it rather than vice versa. It is not impossible that each laboratory will have to budget for a 'social media manager' at some point, or its work will go completely unnoticed. Maybe we will have to continuously post and repost our findings on personal websites in order to be picked up by the major AI platforms. Will this change even how we talk to each other? Repeating the same presentations ad nauseam might end up favoring the algorithm rather than getting us labeled as excruciatingly boring (or both). We might think: “This is not going to affect me! I am who I am!” But the truth is that changes in communication happen constantly. Social network platforms are just the latest small revolution we all remember. And that is nothing; before Aristotle we did not even have adjectives. You would never read in the ancient epic poems “Achilles is brave” but rather “As is the lion, so is Achilles”. We live amid change, and we frequently do not even notice it. The real concern is who is going to decide what is important: us or AI?

My interaction with AI so far has been a rollercoaster between high hopes and grounded realizations. On one side, I think we are all excited to see scientists on steroids, thanks to the extra power this new force gives us in data mining, calculation, and refinement. On the other, let’s not forget that change is like an invisible wave: if we do not learn how to surf it, we simply pay the consequences… Oh, oh! And remember to have hot chocolate with your postdocs from time to time!

P.S. By the way, I obviously asked AI to help me with the first draft of this post. But let’s just say that, for one more time, I did things myself.


Photo by OpenAI's DALL-E
