// AI //
Developing an AI governance strategy – part 1
In life sciences, innovation can’t come at the expense of integrity. As advanced analytics and AI reshape research, clinical trials and patient engagement, the industry faces a dual imperative: accelerate innovation while maintaining ethical and regulatory integrity.
Governance is the framework that makes this balance possible.
Across the value chain, AI is delivering tangible impact. Generative AI models are identifying molecules for new therapeutic applications and designing smarter trial protocols, radically reducing discovery timelines.
Predictive analytics are lowering costs and helping to match the right patients with the right trials, improving recruitment and retention. Real-world data is extending this progress beyond the walls of clinical studies, providing insight into how therapies perform in everyday settings and enabling more personalised, evidence-based care.
The potential of AI to accelerate discovery, cut costs and improve patient outcomes is enormous. Analysts forecast hundreds of billions in annual value, yet optimism is tempered by concern.
With frameworks like the EU AI Act now in force alongside the existing Medical Device Regulation, In Vitro Diagnostic Regulation and GDPR, life sciences firms must prove that their use of AI is ethical, explainable and compliant.
AI governance is more than an internal checklist. It is a culture of accountability, grounded in values and principles that ensure technology is deployed safely, fairly and transparently. A strong governance strategy spans oversight, compliance, operations and culture.
It ensures that models are built on data gathered with robust consent, monitored for drift, checked for bias and able to explain their outputs. It also gives employees the confidence to raise concerns and regulators the assurance that the science will stand up to scrutiny.
Without this scaffolding, risks multiply. Poorly governed AI can amplify inequities in healthcare data or generate outputs that are scientifically unsound and unfit for deployment at scale.
It is no longer a question of whether companies adopt AI, but how. Those that hesitate risk falling behind, while those that rush in without controls invite regulatory or ethical failure.
Computer scientists have long known that AI systems drift over time. Guard rails are essential to prevent models from degrading silently or producing results that diverge from intended use.
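To make the idea of guard rails concrete, here is a minimal, hypothetical sketch in Python of one common drift check: a two-sample Kolmogorov–Smirnov test comparing a feature's training-time distribution against recent production data. The threshold, window sizes and simulated values are illustrative assumptions only, not a prescribed method from the author or SAS.

```python
import numpy as np
from scipy.stats import ks_2samp

# Significance threshold for flagging drift (illustrative assumption).
DRIFT_P_VALUE = 0.01

def check_feature_drift(reference: np.ndarray, recent: np.ndarray) -> bool:
    """Two-sample Kolmogorov-Smirnov test: does the recent production
    distribution of a feature differ from its training-time reference?"""
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < DRIFT_P_VALUE

# Illustrative data: a reference window from training time and a shifted
# production window, simulating silent drift in an input feature.
rng = np.random.default_rng(seed=42)
reference_window = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_window = rng.normal(loc=0.4, scale=1.0, size=5_000)

if check_feature_drift(reference_window, production_window):
    print("Drift detected: route the model for review before further use.")
```

In practice, a governance framework would run checks like this on a schedule, log the results for auditability and trigger human review when drift is flagged, rather than relying on a single ad hoc test.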
Done right, governance is not a brake on innovation but an accelerator. Clear guard rails reduce friction, enabling faster adoption of new tools with less downstream disruption.
They make it easier to integrate real-world evidence into trials, automate reporting or scale patient-facing services safely. A risk-based approach helps organisations balance agility with accountability, focusing scrutiny where the stakes are highest.
Governance must also extend beyond technology. It is about people and culture. AI experts increasingly want to work where their skills are applied responsibly. Organisations that can demonstrate strong governance will find it easier to attract top talent, reassure patients and build brand value.
The challenge is to embed governance from the outset, rather than bolting it on later. When oversight, compliance and culture are part of the foundation, governance becomes a catalyst for innovation, not an obstacle to it. The leaders in this space will be those who treat governance not as a defensive necessity but as a strategic enabler.
The life sciences have always been about balancing speed with safety. The same is true for AI. By establishing robust governance frameworks now, organisations can innovate quickly, confident that their progress is transparent, ethical and sustainable.
Stay tuned for the next issue, where we’ll share tips on how to progress along your governance journey.
Robin Curtis is Global Strategic Advisor, Health and Life Sciences at SAS