// AI //
Grounding generative AI with Retrieval-Augmented Generation
Generative AI is already reshaping how the life sciences industry approaches discovery, development and patient care. But while the potential is enormous, the reality is more complex.
Many early AI pilots struggle to scale because they lack access to high-quality, governed data and context. Retrieval-Augmented Generation, or RAG, is emerging as a practical answer to that problem.
At its core, RAG brings structure and credibility to generative AI. Rather than relying only on what a model has been trained on, RAG combines large language models with a retrieval layer that draws on an organisation’s own data and documents. This simple shift makes AI outputs more grounded, accurate and explainable.
In life sciences, that precision matters. Drug discovery, clinical research and regulatory reporting all depend on traceable, trustworthy information.
RAG enables responses to be linked back to their real sources, improving traceability and reducing the risk of hallucinations, while supporting compliance with frameworks such as the EU AI Act and FDA guidance on AI and machine learning.
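For readers who want a concrete picture, the short Python sketch below illustrates the retrieve-then-generate pattern in its simplest form: documents are ranked against a question, and the prompt sent to the model carries the source identifiers so every answer can be traced back. The document IDs, keyword-overlap scoring and prompt wording are hypothetical placeholders for this example; a production system would use semantic search over a governed document store and an approved model endpoint.

```python
# Minimal, illustrative sketch of retrieve-then-generate with source attribution.
# Document IDs, scoring and prompt wording are hypothetical; real systems would use
# vector embeddings, a governed document store and an approved LLM endpoint.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str   # identifier used for traceability back to the source
    text: str     # the governed content itself

# Tiny in-memory corpus standing in for an organisation's governed documents.
CORPUS = [
    Document("SOP-041", "Deviations in tablet compression must be logged within 24 hours."),
    Document("CSR-2023-17", "The phase II study met its primary endpoint in the per-protocol population."),
    Document("REG-EU-AIA", "High-risk AI systems require documented human oversight and traceability."),
]

def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Rank documents by simple keyword overlap with the query (a stand-in for semantic search)."""
    query_terms = set(query.lower().split())
    scored = [(len(query_terms & set(doc.text.lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def build_grounded_prompt(query: str, sources: list[Document]) -> str:
    """Assemble a prompt that instructs the model to answer only from the cited sources."""
    context = "\n".join(f"[{doc.doc_id}] {doc.text}" for doc in sources)
    return (
        "Answer the question using only the sources below, and cite their IDs.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    question = "How quickly must a compression deviation be logged?"
    sources = retrieve(question, CORPUS)
    prompt = build_grounded_prompt(question, sources)
    print(prompt)  # in a real pipeline this prompt would be sent to an approved LLM
```

Because the retrieved source identifiers travel with the prompt, the final output can cite them directly, which is what makes the response auditable rather than a black-box answer.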
The power of RAG lies in its ability to unlock value from unstructured data, which makes up as much as 90 percent of enterprise information.
This includes patient notes, research papers, lab results and correspondence that traditional analytics often overlook. By indexing and retrieving this content in context, RAG makes it usable for insight generation and decision-making.
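Purely as an illustration, the sketch below shows one way unstructured content might be prepared for retrieval: each document is split into short passages and tagged with provenance metadata so any retrieved chunk can be traced to its origin. The document identifiers, metadata fields and chunk size are assumptions made for the example, not a prescribed pipeline.

```python
# Illustrative sketch of preparing unstructured content for retrieval: split each
# document into passages and attach provenance metadata. Identifiers, fields and
# chunk size are assumptions; real pipelines would add embeddings and access controls.

def chunk_text(text: str, max_words: int = 60) -> list[str]:
    """Split free text into short passages suitable for retrieval."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def index_document(doc_id: str, source_type: str, text: str) -> list[dict]:
    """Turn one unstructured document into retrievable chunks with provenance metadata."""
    return [
        {"doc_id": doc_id, "source_type": source_type, "chunk_no": n, "text": passage}
        for n, passage in enumerate(chunk_text(text))
    ]

# Example: a lab note and a research abstract indexed into one searchable collection.
collection = []
collection += index_document("LAB-0098", "lab_note", "Assay repeated after reagent batch change; results within specification.")
collection += index_document("PUB-2024-112", "paper", "We report improved solubility for the reformulated compound in phase I subjects.")

print(f"Indexed {len(collection)} chunks, e.g. {collection[0]['doc_id']} chunk {collection[0]['chunk_no']}")
```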
For clinical teams, that could mean surfacing relevant evidence during protocol design. For manufacturing, it could help identify deviations or anomalies faster. For commercial teams, it could support more responsive communication with healthcare professionals.
The applications are broad, but the principle remains the same: making knowledge discoverable, usable and trusted.
RAG also addresses the organisational barriers that have slowed enterprise GenAI adoption.
Traditional AI projects often rely on complex, code-heavy implementations that create technical debt and vendor lock-in. RAG’s modular, open design allows organisations to experiment, deploy and govern AI more efficiently, aligning innovation with existing security and compliance frameworks.
By streamlining how GenAI connects to enterprise knowledge, RAG shortens the path from pilot to production and delivers measurable value faster.
This approach also supports human oversight. By surfacing the sources behind every output, RAG keeps people in control of validation and interpretation. That transparency builds confidence both internally and externally in how AI is being used.
The technology itself is only part of the story. Successful RAG adoption depends on culture as much as capability. Organisations need cross-functional teams that combine data expertise with clinical and regulatory knowledge.
They also need clear governance structures so that the use of GenAI aligns with corporate values and patient safety.
In many cases, RAG becomes a catalyst for broader digital maturity. It helps break down silos between departments, accelerates knowledge sharing and creates a more connected, data-driven culture. That shift, as much as the technology, is what unlocks sustainable value.
The promise of RAG is not speed for its own sake but smarter progress. In an industry where every decision must be justified and reproducible, RAG provides the context that makes AI outputs reliable and defensible.
It gives life sciences organisations a clear path to scale generative AI responsibly, with transparency built in from the start.
Platforms developed by SAS apply these same principles, enabling RAG to operate securely, auditably and at scale across regulated environments. The result is a more grounded form of AI: one that understands the business it serves and helps it to innovate with confidence.
Pritesh Desai is Principal Strategic Advisor, Life Sciences at SAS