// FUTURE //
AI is making waves across the European pharma regulatory landscape – we must prepare
In recent years, the world has witnessed a surge in the development and adoption of AI.
One field where AI increasingly drives innovation is the life sciences industry, particularly medicines development.
There are numerous areas where AI can play a role, such as predictive modelling of effectiveness and safety, repurposing, clinical trial design and medicines interaction modelling.
With the fast-paced development of the technology, the legal framework is changing as well – most notably through the upcoming AI Act and, albeit more indirectly, the reform of the pharmaceutical legislation.
Also of major importance is the position the European Medicines Agency (EMA) and the Heads of Medicines Agencies (HMA) are taking in their guidance.
The general framework for AI has been taking shape over recent years with the AI Act.
The AI Act aims to ensure trustworthy and ethical AI by imposing obligations on AI systems and models, such as transparency requirements, graded according to the risk classification of the system itself.
Additional obligations apply to high-risk systems, i.e. systems that pose a high risk of harm to the health, safety or fundamental rights of persons. It is worth noting that medical devices employing AI are considered high risk from the outset.
For those acquainted with medical devices regulation, this risk-based classification approach will sound familiar.
In December 2023, the European Council and the European Parliament reached a provisional agreement on the proposed AI Act, which still needs to be formally adopted.
If all goes well, the AI Act will enter into force in the second quarter of 2024, after which the obligations laid down therein will apply anywhere from six months to three years from that date.
The EMA has also published guidance setting out its position on the use of AI.
In July 2023, the HMA-EMA joint Big Data Steering Group (BDSG) published a draft reflection paper on the use of AI in the medicinal product life cycle, as part of the then Big Data Workplan 2022-2025.
In this paper, the BDSG underlines the importance of both a risk-based as well as a human-centric approach to the use of AI in medicines development.
Regarding the risk-based approach, the BDSG points out that the degree of risk follows not only from the technology itself, but also from the context of its use, including the point in time in the development cycle.
For example, incremental learning approaches are no longer accepted in pivotal clinical trials. Instead, all models need to be locked and documented to be eligible for confirmatory evidence generation.
A requirement like this is understandable from a regulator’s point of view, but one can also imagine that it will hamper development.
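To make the idea of ‘locking’ a model concrete, the minimal Python sketch below shows one plausible approach – it is not drawn from the reflection paper, and the file names, manifest fields and synthetic data are illustrative assumptions: a trained model is serialised once, its hash and timestamp are recorded in a manifest, and later use verifies the artefact against that record rather than retraining it.

```python
import hashlib
import json
import pickle
from datetime import datetime, timezone

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative only: train a simple classifier on synthetic data.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = LogisticRegression(random_state=0).fit(X, y)

# 'Lock' the model: serialise it once and record a content hash and
# timestamp, treating the artefact as immutable from this point on.
artefact = pickle.dumps(model)
manifest = {
    "model_class": type(model).__name__,
    "sha256": hashlib.sha256(artefact).hexdigest(),
    "locked_at": datetime.now(timezone.utc).isoformat(),
    "training_data_ref": "synthetic-demo-v1",  # hypothetical reference
}

with open("model_locked.pkl", "wb") as f:
    f.write(artefact)
with open("model_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)

# Any later use re-hashes the stored artefact and compares it against
# the manifest, demonstrating the model has not been retrained.
with open("model_locked.pkl", "rb") as f:
    assert hashlib.sha256(f.read()).hexdigest() == manifest["sha256"]
```

In a regulated setting the same idea would extend to documenting training data, hyperparameters and validation results, so that the locked artefact can support confirmatory evidence generation.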
In line with the human-centric approach, user- and patient-reported outcome and experience measures should also be included when evaluating the use of AI systems.
As a ‘key principle’, the paper states that the responsibility to ensure compliance lies with the marketing authorisation holder, while noting that these requirements may be stricter than current standard practice in the field of data science.
This is not a surprising position for the BDSG to take, and it further emphasises the importance for a marketing authorisation holder of having solid procedures in place for any AI tools it aims to deploy.
Notably, the reflection paper also provides general guidance on technical aspects, i.e., what to consider in developing, training, assessing and deploying AI/ML models.
Even though this guidance is only indicative in nature, pharmaceutical companies are strongly advised to observe these requirements. Especially in the case of deviations, the authorities should be involved early on, in order to ensure that the results of using AI/ML models will be accepted.
Recently, the BDSG published another multi-annual workplan, this time focusing solely on AI.
The workplan focuses on four key areas: i) guidance, policy and product support; ii) AI tools and technology; iii) collaboration and training; and iv) experimentation. With regard to developing guidance, preparations will start in mid-2024 to support the implementation of the AI Act.
The EMA and HMA have clearly taken the position that AI could be of enormous benefit to developing medicines, aiming to provide a regulatory framework that protects patients while leaving room for the technology to develop for the better.
In April 2023, the European Commission adopted a proposal for a reform of the European pharmaceutical legislation.
The proposal references the use of AI, both in the context of the advantages of real-world data and with regard to so-called ‘regulatory sandboxes’.
Throughout the proposal, references are made to enabling the safe use of technology.
Further discussions on the proposal have already started and, even though many amendments will certainly be made before a final text is concluded, it is highly likely that the focus on technology will remain part of this new legislation.
To ensure further innovation, it is essential that the European legislator takes a more overarching approach, synchronising the new rules on AI with the existing regulatory framework.
It will also be interesting to see how the proposed guidance and legislation will influence innovation – especially considering the requirement in the BDSG draft reflection paper regarding locked AI models.
The proposed regulatory sandboxes could play an interesting role here, and pharmaceutical companies that fully appreciate this regulatory option may soon find themselves ahead of their competitors.
Hester Borgers and Christian Lindenthal are from Bird & Bird.
Go to twobirds.com