May 2025 • PharmaTimes Magazine • 18-19
// AI FUTURE //
Preparing for an unknowable future that’s already arrived
Next iterations of generative AI are pushing the boundaries of deep research, with significant implications for life sciences that are not yet tangible.
Generative AI has already disrupted entire industries – leaving knowledge workers reeling. For a time, the technology’s potential was constrained first by the materials it was exposed to, then by its ability to understand them in context. But with accelerating speed, those limitations are being overcome.
This is giving rise to a challenging duality: the future is already here, yet still unknown. Companies must step up their preparations for profound changes that, for now, remain intangible.
GenAI is the branch of artificial intelligence that uses everything that is known already to create something new.
From early conversational capabilities through to reasoning, the technology is already delivering ‘agentic’ capabilities – goal-driven abilities to act independently and make decisions, with human intervention only where needed.
There are early signs too that ‘innovating AI’ is emerging, as AI becomes capable of creating novel frameworks, generating fresh hypotheses and pioneering new approaches.
This creative potential pushes AI from merely processing information to actively shaping scientific discovery, applying that creativity to problems yet to be solved.
OpenAI has just raised $40 billion in a new funding round, valuing the company at $300 billion, underscoring the belief that upcoming capabilities will be significant on a human scale.
Other GenAI foundation model players, including Anthropic, Google, Meta and xAI, are hardly idle meanwhile, and new heavy-hitters are emerging outside the US.
At the core of the latest GenAI advances is the accelerated pace of large language model (LLM) development. These deep learning models, trained on extensive data sets, are capable of performing a range of natural language processing (NLP) and analysis tasks, including identifying complex data patterns, risks and anomalies.
A growing movement towards open-source GenAI models, meanwhile, is making the technology more accessible and customisable, alongside proprietary models.
In life sciences, there are persuasive reasons to keep pace with and harness the latest developments as they evolve. GenAI is poised to become a game changer in scientific discovery and new knowledge generation – at speed and at scale.
In human intelligence terms, since the launch of ChatGPT in November 2022, AI models have already reached and surpassed human expert levels as measured by the Massive Multitask Language Understanding (MMLU) benchmark. Recent advances in agentic AI models have even created the need for a new benchmark.
The promise of advanced reasoning, a highlighted benefit of DeepSeek’s latest AI model, has enormous scope in science – enabling logical inference and advanced decision-making.
Google and OpenAI both have deep research agents that go off and perform their own searches, combining reasoning and agentic capabilities. As reasoning capabilities continue to improve, and as the technology becomes more context-aware, the potential to accelerate scientific discovery becomes real through the creation of new knowledge. Essentially, the ability to project forward and consider ‘what if?’ and ‘what next?’
Already, OpenAI’s deep research is optimised for intelligence gathering, data analysis and multi-step reasoning. It employs end-to-end reinforcement learning for complex search and synthesis tasks, effectively combining LLM reasoning with real-time internet browsing.
Meanwhile, Google has recently introduced its AI co-scientist, a multi-agent AI system built with Gemini 2.0 as a ‘virtual scientific collaborator’.
Give it a research goal, and off it will go – suggesting novel hypotheses, research directions and research plans.
So, now what? With all of this potential, the strategic question for biopharma R&D becomes one of how to keep pace with these technology developments and build them into business as usual.
How to prepare for a future that is simultaneously already here yet continuously changing shape?
Up to now, most established companies have been experimenting with GenAI to see how it might help improve operational efficiency and accuracy, consistency and compliance across key pain points in safety/pharmacovigilance, regulatory, quality and some clinical and preclinical processes.
These include monitoring adverse events, handling safety reporting and streamlining regulatory submissions.
These initial pilots have been largely about becoming familiar with the technology, feeling comfortable with it and assessing its trustworthiness and value. Others have gone further, creating lab-like constructs for experimentation.
As valid as these approaches have been, the hastening pace of technology development and the intangibility of what’s coming mean that the industry now needs to embed AI more intrinsically within its infrastructure and culture.
This is about becoming AI-ready and AI-first rather than simply receptive to what the technology can do.
When previously hyped technologies or business change models emerged, from blockchain to Six Sigma, it sometimes paid to ‘sprinkle’ champions across the business.
Some organisations are taking a venture capital-like approach, bringing in non-native AI talent to key roles – visionaries and master-crafters from other industries where, historically, tech innovation runs deeper.
But AI is moving so quickly, and its likely impact is so fundamental to life sciences, that experts need to be ‘neck-deep’ in it to be of strategic value.
One of the biggest challenges now is the duality companies are grappling with: the simultaneous need to be ready for and get moving with deeper AI use today, while gearing up for a tomorrow that is likely to look very different.
This has widespread change implications – at a mindset and method level, and from a technical and cultural perspective, both today and tomorrow.
For this reason, strategic partnerships are proving a safer route – with tech companies that are fully up to speed with the latest developments, are enmeshed in the technology and its expanding applications, and are actively building sector-specific solutions.
Even so, companies will need to choose their AI advocates wisely: ‘AI washing’ is now commonplace among consultants and service providers, with new converts to the technology inflating their credentials in the field.
The good news is that internal IT and data teams are well-versed in AI technology today and have high ambitions for it.
The challenge is bringing the technology’s potential to fruition where it could make a difference strategically.
Understanding this is likely to require sitting with the organisation’s real problem areas and determining to what extent emerging iterations of AI might present the answer.
Jason Bryant is Vice President, Product Management for AI & Data at ArisGlobal