April 2026 • PharmaTimes Magazine • 38

// AI //


Control peak

The next phase of agentic AI – autonomy with accountability


Agentic AI introduces something new into enterprise systems: autonomy. For health care and life sciences organisations, the key question is not whether systems can act autonomously but how that autonomy is governed.

Workflows in life sciences span clinical research, patient care, regulatory oversight and operational coordination.

Systems capable of goal-directed behaviour can reduce friction across those environments, but only when that autonomy is designed as carefully as the processes it supports.

From automation to autonomy

Rather than executing fixed instructions, agentic systems can manage multi-step workflows and adjust their behaviour as conditions change.

In practice this might mean coordinating trial activity across sites or monitoring patient data streams to surface early warning signals.

What distinguishes agentic AI is its ability to operate with a degree of independence but still within clearly defined constraints.

In highly regulated environments, that autonomy cannot be introduced casually. Decisions carry clinical, operational and ethical consequences.

An agent automating routine administrative steps presents one level of exposure; an agent influencing trial operations or patient safety monitoring presents another.

For agentic AI to be credible in these settings, governance must be built into the architecture from the outset.

Three principles are particularly important. Systems must be explainable so actions and decisions can be interpreted clearly. They must be auditable, with every action logged and reproducible. And they must remain controllable, with autonomy adjusted according to the risk profile of the task.
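The auditability principle, in particular, lends itself to a simple illustration. The sketch below is a minimal, hypothetical example of an append-only audit record for agent actions; the class, field names and the `trial-monitor-01` agent are invented for illustration, not taken from any real system.

```python
import json
import uuid
from datetime import datetime, timezone

class AuditLog:
    """Append-only record of agent actions, so every step can be
    reviewed and reproduced after the fact."""

    def __init__(self):
        self.entries = []

    def record(self, agent_id, action, inputs, outcome, rationale):
        entry = {
            "entry_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": agent_id,
            "action": action,
            "inputs": inputs,        # what the agent saw
            "outcome": outcome,      # what it did
            "rationale": rationale,  # why, supporting explainability
        }
        self.entries.append(entry)
        return entry

    def export(self):
        # Serialise the full trail for an external compliance review.
        return json.dumps(self.entries, indent=2)

# Illustrative use: a trial-monitoring agent logs a deviation it flagged.
log = AuditLog()
log.record(
    agent_id="trial-monitor-01",
    action="flag_protocol_deviation",
    inputs={"site": "S-104", "visit_window_days": 9},
    outcome="escalated_to_coordinator",
    rationale="Visit exceeded the 7-day protocol window.",
)
```

Because each entry captures inputs, outcome and rationale together, the same record serves both the audit trail and the explanation a reviewer would ask for.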

In regulated environments these are not optional features but the foundations of compliance and trust.

Where value is emerging

Despite the caution around autonomy, agentic systems are already being deployed in targeted areas.

Organisations are using goal-directed systems to monitor patient data streams and surface early indicators of deterioration, while administrative agents coordinate documentation workflows.

Agentic capabilities are already improving trial operations by monitoring study activity, flagging protocol deviations and supporting coordination across sites and regions.

Across these use cases, agentic AI delivers the most value when it operates inside well-defined governance structures.

Designing autonomy deliberately

One of the most important architectural decisions organisations face is determining how much autonomy an agent should have.

Routine operational tasks may support higher levels of automation.

Activities such as clinical decision support, regulatory reporting or safety monitoring require tighter controls and explicit human review.

Adjustable autonomy is therefore becoming a critical design capability, with organisations defining when an agent may act independently, when it must escalate and when human approval is required.
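One way to picture adjustable autonomy is as an explicit policy mapping task risk to the independence an agent is granted. The sketch below is illustrative only: the three risk tiers and the thresholds between them are assumptions, and a real deployment would define them per organisation, regulation and workflow.

```python
from enum import Enum

class RiskTier(Enum):
    ROUTINE = 1      # e.g. administrative documentation workflows
    OPERATIONAL = 2  # e.g. cross-site trial coordination
    CLINICAL = 3     # e.g. decision support, safety monitoring

def autonomy_policy(task_risk: RiskTier) -> str:
    """Return the level of independence granted for a task.

    Tiers and responses are illustrative assumptions, not a
    standard; each organisation would set its own boundaries.
    """
    if task_risk is RiskTier.ROUTINE:
        return "act"           # agent may proceed unattended
    if task_risk is RiskTier.OPERATIONAL:
        return "escalate"      # agent acts but notifies a human owner
    return "require_approval"  # explicit human sign-off before acting
```

Encoding the policy this explicitly means the autonomy boundary is itself reviewable and auditable, rather than implicit in the agent's behaviour.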

Ultimately, agentic AI depends on a governed data foundation. Metadata management, lineage tracking and standardised data models provide the transparency required in regulated environments.

Without it, autonomous systems risk amplifying inconsistency rather than accelerating insight.

Autonomy alone will not determine the impact of agentic AI. What matters is how deliberately that autonomy is designed, aligned with risk and governed across the architecture.

In health care and life sciences, the real challenge and opportunity is not how autonomous these systems become but how carefully that autonomy is structured.


Mark Lambrecht is Global Head, Health and Life Sciences Customer Advisory at SAS