
// AI //


Transformers

Executing AI responsibly in drug safety could be a game changer


Pharmacovigilance (PV) teams face pressure from many angles, including rising case volumes, increasingly complex data and growing scrutiny from regulators. Meeting these demands means transforming how safety is handled and who manages it.

What was once a manual, resource-heavy process is now being modernised through artificial intelligence (AI) and automation. As the landscape evolves, PV leaders are shifting focus towards scalable, technology-driven models that improve efficiency without compromising compliance.

AI offers significant value across the PV life cycle. From case intake and literature review to signal detection and reporting, AI increases efficiency, accelerates timelines and improves output quality. Realising these gains, however, depends on deploying AI responsibly, preparing the workforce and maintaining a constant focus on patient safety.

As regulators engage more directly on the use of AI, the path forward is becoming clearer. The future of PV will not be driven by automation alone – it will be shaped by how well human insight, compliance and innovation are aligned.

Integration

Integrating AI into PV is more than a technological upgrade: it is reshaping roles, workflows and the overall structure of safety operations.

Manual tasks like data entry and triage are being reduced or restructured, allowing teams to redirect time and expertise to tasks that require clinical judgement and strategic insight.

This shift is changing how PV teams are built and the resources they require. Historically, a large share of personnel supported case processing.

In the years ahead, those resources will be reallocated towards specialised roles that focus on interpreting AI outputs, assessing safety signals and applying domain expertise to regulatory decision-making. These roles are essential for ensuring automated systems function as intended and that clinical and compliance risks are properly managed.

To support this shift, organisations must invest in workforce development. PV professionals need training that extends beyond traditional safety domains: they must become familiar with how AI systems generate outputs, how data quality impacts predictions and how to interpret results in a regulatory context. These skills will be necessary for reviewing AI-driven insights, identifying gaps and validating outcomes before regulatory submissions are made.

Interdisciplinary collaboration will become a norm rather than a goal; safety scientists must work closely with technologists, data scientists and governance leads to ensure the integrity of AI-driven processes.

The goal is not to eliminate human input but to elevate it. When paired effectively, automation and expert oversight allow PV teams to operate with greater accuracy, speed and consistency.

Regulation

Regulatory agencies are actively supporting the responsible use of AI in PV. In early 2025, the US Food and Drug Administration (FDA) issued draft guidance to clarify how AI should be used across the drug development life cycle, including in post-market safety surveillance.

The guidance outlines a risk-based approach and key recommendations for transparency, reproducibility and governance; it acknowledges that AI has practical value in specific PV domains, including case processing, signal detection and literature review. These are areas where data complexity and volume often strain manual systems, making them strong candidates for automation.

Another significant step was the launch of the Emerging Drug Safety Technology Programme (EDSTP) in 2024 by the FDA’s Center for Drug Evaluation and Research. This voluntary programme allows organisations to meet with FDA officials and discuss their use of AI in PV. These non-binding conversations offer early regulatory insight and help shape best practices.

The EDSTP reflects a broader mindset shift: regulators are not only open to innovation but are actively creating structured programmes to guide and inform it. This collaboration allows PV leaders to better anticipate expectations, reduce risk and design AI systems with compliance in mind from the start.

Engaging with regulators early and often builds trust in emerging technologies; it also signals a commitment to safety and integrity, principles that remain at the heart of PV regardless of the tools being used.

Invention

Not all PV activities benefit equally from automation. The most effective use of AI focuses on high-volume, data-intensive tasks where technology can meaningfully reduce manual effort and improve accuracy.

Literature surveillance is a prime example: the volume of scientific publications is growing rapidly, and AI-enabled tools can quickly screen, categorise and extract safety-relevant information. This speeds up the identification of adverse events and reduces the time needed for manual review.
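As a rough sketch of the screening step, the snippet below flags an abstract as safety-relevant when it mentions a product alongside an adverse-event term. The product name and keyword list are invented; production tools rely on curated dictionaries such as MedDRA and trained models rather than keyword matching.

```python
import re

# Illustrative adverse-event terms; real systems use curated dictionaries
# and trained models rather than keyword lists like this one.
AE_TERMS = ["hepatotoxicity", "anaphylaxis", "rash", "seizure", "nausea"]

def screen_abstract(abstract: str, product: str) -> dict:
    """Flag an abstract as safety-relevant when it mentions the product
    of interest alongside an adverse-event term."""
    text = abstract.lower()
    hits = [t for t in AE_TERMS if re.search(rf"\b{t}\b", text)]
    return {"relevant": product.lower() in text and bool(hits), "terms": hits}

# Hypothetical example: this abstract would be routed to a human reviewer.
print(screen_abstract(
    "A case of hepatotoxicity following treatment with DrugX.", "DrugX"))
# {'relevant': True, 'terms': ['hepatotoxicity']}
```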

Contact centres and social media monitoring also offer strong use cases: AI can analyse real-time calls and digital content to detect potential adverse events. With natural language processing and sentiment analysis, these tools can prioritise cases and flag concerning patterns early. This enables faster triage and improves situational awareness.
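A minimal sketch of that prioritisation step, with hypothetical severity keywords standing in for a trained NLP model, might look like this:

```python
# Hypothetical severity weights standing in for a trained NLP model.
SEVERITY = {"death": 10, "hospitalisation": 8, "severe": 5, "mild": 1}

def triage_score(message: str) -> int:
    """Score an inbound report so the most concerning cases surface first."""
    text = message.lower()
    return sum(weight for term, weight in SEVERITY.items() if term in text)

inbox = [
    "Patient reports mild headache after first dose.",
    "Caller says her father required hospitalisation after taking the drug.",
]
for msg in sorted(inbox, key=triage_score, reverse=True):
    print(triage_score(msg), msg)  # highest-priority cases print first
```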

AI also supports signal evaluation by surfacing connections across disparate data sets and identifying patterns that might otherwise be missed. For reporting, natural language tools can help draft safety narratives for periodic reports, improving consistency while saving time. These drafts can then be reviewed by qualified professionals who ensure that findings meet scientific and regulatory standards.
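One well-established pattern-finding technique behind many signal detection tools is disproportionality analysis. The sketch below computes the proportional reporting ratio (PRR) from hypothetical case counts; an elevated value flags a drug-event pair for expert evaluation.

```python
def prr(a: int, b: int, c: int, d: int) -> float:
    """Proportional reporting ratio: how disproportionately an event is
    reported for a drug relative to all other drugs.
    a: reports with the drug and the event of interest
    b: reports with the drug and any other event
    c: reports with other drugs and the event
    d: reports with other drugs and other events
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts, not real data. By a common rule of thumb, a PRR
# of at least 2 with three or more cases warrants expert evaluation.
score = prr(a=40, b=960, c=100, d=98900)
print(f"PRR = {score:.1f}")  # PRR = 39.6
```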

While these tools increase efficiency, successful implementation requires effective change management. This includes identifying the right use cases for AI, aligning key stakeholders from the outset and throughout integration, and establishing clear outcome goals.

Teams must be trained not only on new tools, but also on how their roles will evolve with AI as a foundational element. Documentation and workflows must be updated to reflect these changes, with governance structures that embed human oversight at critical points. Clear ownership and ongoing performance evaluation are also essential.

Maintenance

As AI systems are integrated into PV workflows, governance becomes a top priority. Regulatory authorities expect organisations to maintain strong oversight of their systems, including how algorithms are built, validated and updated.

Organisations need a framework to monitor performance, track model changes and evaluate output quality. This includes audit trails, version histories and quality control checkpoints: AI outputs used in safety decisions must be defensible, reproducible and well documented.

A common approach is the use of a human-in-the-loop model, in which subject-matter experts remain responsible for interpreting AI results and approving outputs before they are finalised. This model helps mitigate risk and ensures decisions are rooted in clinical expertise.
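A skeletal illustration of such a gate, with hypothetical names throughout, might record the model version and the reviewer so that every finalised output is traceable:

```python
from dataclasses import dataclass

@dataclass
class SafetyOutput:
    case_id: str
    ai_draft: str
    model_version: str            # captured for the audit trail
    reviewer: str | None = None   # set only after expert sign-off
    final_text: str | None = None

def release(output: SafetyOutput) -> str:
    """AI-generated content leaves the system only after a named expert
    has reviewed and approved it."""
    if output.reviewer is None or output.final_text is None:
        raise PermissionError(f"{output.case_id}: expert review pending")
    return output.final_text

draft = SafetyOutput("CASE-0042", "Draft narrative...", model_version="1.3.0")
draft.reviewer, draft.final_text = "J. Smith", "Reviewed narrative..."
print(release(draft))  # succeeds only once the review fields are populated
```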

Even as confidence in AI grows, human oversight will remain essential for tasks that carry direct safety implications. Patient outcomes depend on accurate interpretation, and regulators will continue to expect human accountability.

By demonstrating that AI systems achieve their intended objectives within a controlled environment, organisations can meet regulatory expectations while delivering value to patients and stakeholders.

Evolution

As PV data becomes more complex and global in scope, scalable AI is no longer optional; it is becoming essential to maintaining quality and efficiency under increasing pressure. Successful scaling begins at integration.

AI should not operate as a standalone, bolted-on tool; it must be interwoven with core PV operations, compatible with existing systems and supported by clear processes.

Cross-functional collaboration is also key; safety experts, data scientists and compliance professionals must work together to ensure that AI deployments are aligned with organisational goals and safety obligations. When executed well, this collaboration allows greater agility, stronger insights and better alignment between safety performance and regulatory expectations.

AI is a powerful tool, but its long-term success depends on the infrastructure and the organisation’s readiness to evolve operationally, culturally and strategically to support it.

AI has the potential to reshape pharmacovigilance by improving efficiency, enabling earlier detection and supporting better patient outcomes. But its value depends on how responsibly it is applied.

The future of PV will not be defined by algorithms – it will be shaped by the professionals who build, validate and apply them with purpose.

With regulators encouraging early engagement and collaborative innovation, PV leaders have an opportunity to lead with confidence.

By combining the strengths of technology with clinical and regulatory expertise, PV teams can build a more secure and responsive future for drug safety.


Archana Hegde is Senior Director, PV Systems & Innovations at IQVIA