
// AI //


AI to Z

Developing an AI governance strategy
– part 2: Tips and considerations for building a strategy


In last month’s column, we explored why AI governance has become essential in life sciences. This month, the focus shifts to how organisations can put that principle into practice.

Awareness to action

AI governance is gaining urgency as adoption accelerates. Generative AI and predictive modelling are already transforming the way drugs are designed, manufactured and marketed.

They’re cutting development cycles and personalising care at a pace few thought possible even two years ago. Yet the same technologies that promise efficiency and insight also raise new questions around fairness, safety and accountability.

As regulations evolve – most notably the EU AI Act, with penalties of up to seven percent of global turnover – life sciences companies are strengthening their focus on governance as a strategic necessity.

It’s the mechanism that turns innovation into something safer, more scalable, ethical and sustainable.

Building the foundations

Governance is not just about policy. It’s about establishing a shared set of principles that act as guard rails to anchor how AI is developed and deployed across an organisation.
A mature governance framework weaves together four threads: oversight, compliance, operations and culture.

Oversight sets the strategic direction; compliance translates law and ethics into policy; operations implements technical controls for transparency and risk; and culture sustains it all through daily behaviour.

This foundation enables what every organisation ultimately wants: trust. Trust that a model’s data and purpose are sound, that its performance is explainable, and that when things change – as they inevitably do – feedback loops will catch it early.

Readiness for AI governance is more than documentation. It’s about how resilient your organisation is when faced with complexity.

Many in life sciences are taking a page from banking, adapting the ‘three lines of defence’ model.

The first line manages risk and ensures operations comply with internal standards. The second provides oversight and challenges the first line’s controls. The third independently tests those controls and reports directly to leadership.

Together, these lines build accountability that extends from the lab bench to the boardroom.

Others are developing risk-based frameworks that rate the likelihood and impact of potential failures, or classify risks according to who or what might be affected – for example, patients, institutions or wider systems.
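To make this concrete, here is a minimal sketch in Python of what such a likelihood-and-impact rating might look like inside a governance toolchain. The AIRisk structure, tier labels and thresholds are illustrative assumptions, not a prescribed standard.

    from dataclasses import dataclass
    from enum import IntEnum

    class Level(IntEnum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    @dataclass
    class AIRisk:
        name: str
        likelihood: Level   # how likely a failure is
        impact: Level       # how severe the consequences would be
        affected: str       # e.g. "patients", "institutions", "wider systems"

        def score(self) -> int:
            # Classic risk matrix: score = likelihood x impact (1..9)
            return self.likelihood * self.impact

        def tier(self) -> str:
            s = self.score()
            if s >= 6:
                return "high: escalate to oversight board"
            if s >= 3:
                return "medium: second-line review required"
            return "low: routine first-line monitoring"

    # Example: a dosing model whose failure would directly affect patients
    risk = AIRisk("dosing-model drift", Level.MEDIUM, Level.HIGH, "patients")
    print(risk.tier())  # high: escalate to oversight board

The mechanics matter less than the discipline: scoring forces teams to say out loud who would be harmed and how badly, and routes the answer to the right line of defence.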

The point isn’t which framework you use, but that you use one consistently, with the discipline to update it as the landscape shifts.

Culture as catalyst

Technology alone won’t deliver trustworthy AI. Culture is the multiplier.

Organisations that cultivate transparency and human-in-the-loop practices not only avoid ethical pitfalls but also attract top talent: the kind of scientists and engineers who want to know their work is improving lives, not automating inequity.

The most effective governance cultures also go beyond compliance. They use tools that document data lineage and bias, track model provenance and monitor drift.
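As one illustration of what a drift monitor does, the sketch below compares production inputs against a validation-time baseline using the population stability index, a common drift statistic. The psi helper and the 0.2 alert threshold are illustrative assumptions; real monitoring stacks layer many such checks.

    import numpy as np

    def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
        """Population stability index between a training-time baseline
        and the feature values seen in production."""
        edges = np.histogram_bin_edges(baseline, bins=bins)
        b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
        c_pct = np.histogram(current, bins=edges)[0] / len(current)
        # Clip empty bins to avoid log(0) and division by zero
        b_pct = np.clip(b_pct, 1e-6, None)
        c_pct = np.clip(c_pct, 1e-6, None)
        return float(np.sum((c_pct - b_pct) * np.log(c_pct / b_pct)))

    rng = np.random.default_rng(42)
    baseline = rng.normal(0.0, 1.0, 5_000)  # distribution at validation time
    current = rng.normal(0.4, 1.2, 5_000)   # production data has shifted

    score = psi(baseline, current)
    if score > 0.2:  # rule of thumb: above 0.2 suggests significant drift
        print(f"PSI {score:.2f}: flag for second-line review")

A check like this is one of the feedback loops mentioned earlier: when the world the model sees no longer matches the world it was validated on, governance hears about it before patients do.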

They prioritise fairness, privacy and security not as barriers but as the guard rails that make innovation possible.

The life sciences sector has always been defined by its ability to move fast while maintaining public trust. AI doesn’t change that – it amplifies it.

Those who embed governance early, rather than retrofit it later, will be the ones to innovate with trust, scaling safely and sustainably.

Ultimately, the goal of governance is trust. Trust between scientists and systems, regulators and innovators, patients and the data that represents them.

By embedding oversight and explainability into the AI life cycle – as SAS and others advocate – the life sciences industry can innovate confidently and ethically in a world of accelerating change.


Steven Tiell is Global Head of Governance Advisory at SAS