
// INDUSTRY //


Drive time

Improved control of in-flight data is sure to boost efficiency across life sciences processes


The absence of standardised data can impede agility and innovation for pharmaceutical companies. As a result, these companies are placing greater emphasis on establishing a consistent and dependable flow of data throughout their operations.

Leveraging live company master data more effectively and strategically – thereby creating a flow of broader data and insights between functions – will enhance a range of different use cases.

Much of this ‘in-flight’ data is incidental information captured as part of a task, yet its value in providing oversight, traceability and impact assessment to senior management could be considerable – if only companies could find a way to harness and control it more systematically.

The handover of data between point software solutions – such as regulatory, clinical trial management and pharmacovigilance systems – is where gaps and discrepancies in information occur, leading to operational blind spots and strategic oversights at best, or regulatory non-compliance at worst.

This makes hard work of change management and could mean that product development information and patient safety events aren’t fully traceable.

Overcoming the silos, interconnecting the data and keeping those connections dynamic and smart is the next big opportunity. Essentially, this provides the key to using everyday operational data to drive business improvements.

Generation game

The answer lies in understanding where key data is generated, and how the supply and demand of that data looks across the ‘chain of custody’. Then a plan can be devised for improving the connection and flow of more unified data (one enhanced source, rather than inconsistent duplicates) across departmental divides.

For young biotech companies starting from scratch, there is a clear opportunity to establish clean, consistent and definitive data from the outset, whereas for larger and more established companies the best options may be around intelligently mapping existing data sources and data flow.

Then interconnections and interdependencies can be identified and managed more effectively, until such time as data remediation and end-to-end standardisation can be achieved.

It’s in this vital context that leveraging ontologies is attracting interest as an option: ontologies allow inconsistently formatted data to coexist while recognising that the items referenced are the same and, indeed, linked.

This is a useful first step in the move to treat all data as one joined-up resource, so that it can drive new actionable insights, decisions and processes. A more thorough overhaul of data can then happen more gradually over time.
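To make this tangible, here is a minimal sketch of ontology-style reconciliation in Python. The synonym table, concept identifiers and record fields are all hypothetical, chosen only to show how differently formatted entries can be recognised as the same item and linked.

```python
from collections import defaultdict

# Hypothetical ontology: each canonical concept lists the variant labels
# under which it appears in different systems.
ONTOLOGY = {
    "SUBSTANCE:PARACETAMOL": {"paracetamol", "acetaminophen", "APAP"},
    "PRODUCT:PAINAWAY_500MG": {"PainAway 500 mg tablets", "PAINAWAY-500"},
}

# Invert the ontology into a lookup from any variant label to its concept.
label_to_concept = {}
for concept, labels in ONTOLOGY.items():
    for label in labels:
        label_to_concept[label.lower()] = concept

def reconcile(records):
    """Group inconsistently labelled records under one canonical concept."""
    grouped = defaultdict(list)
    for record in records:
        concept = label_to_concept.get(record["name"].lower(), "UNMAPPED")
        grouped[concept].append(record)
    return grouped

# Records from two systems that format the same substance differently.
records = [
    {"source": "regulatory", "name": "Paracetamol"},
    {"source": "pharmacovigilance", "name": "Acetaminophen"},
]
print(reconcile(records))  # both records land under SUBSTANCE:PARACETAMOL
```

In practice the ontology would be a curated, versioned resource rather than a hard-coded dictionary, but the principle – one canonical concept behind many surface labels – is the same.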

With all of this in mind, here are some considerations and tips for tackling internal data transformation.

Tangible gains

Unless they are young biotechs with largely greenfield tech set-ups, life sciences companies will be approaching the road to data-based operational agility with a considerable amount of baggage.

Large legacy systems, vast volumes of data, and the variable quality and availability of that data will make it hard to know where to start in transforming its contribution and value. Rather than trying to tackle everything everywhere all at once, the prudent approach is to identify some tangible gains from higher-quality, interconnected data which, once cleaned and combined, will tell a fuller story.


‘For young biotech companies starting from scratch, there is a clear opportunity to establish clean, consistent and definitive data from the outset’


That might mean linking supply chain data to regulatory data to enable serialisation, semi-automated batch release and mitigation of shortage reporting, for instance. Or perhaps the aim is to trim a week off clinical development timescales, or to complete applications or submit variations with greater speed – all depending on the company’s priorities and size.
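As a hedged illustration of the first example, the sketch below cross-checks supply chain batch records against regulatory status before release. The tables, field names and statuses are invented for the purpose, not drawn from any real system.

```python
batches = [
    {"batch_id": "B-1001", "product": "PAINAWAY-500", "market": "DE", "qc_passed": True},
    {"batch_id": "B-1002", "product": "PAINAWAY-500", "market": "FR", "qc_passed": True},
]

# Regulatory data: which (product, market) pairs hold a valid authorisation.
authorisations = {("PAINAWAY-500", "DE"): "valid"}

def release_candidates(batches, authorisations):
    """Flag batches eligible for semi-automated release: QC passed and
    a valid marketing authorisation exists for the target market."""
    for b in batches:
        authorised = authorisations.get((b["product"], b["market"])) == "valid"
        yield b["batch_id"], b["qc_passed"] and authorised

for batch_id, ok in release_candidates(batches, authorisations):
    print(batch_id, "release" if ok else "hold")  # B-1001 release, B-1002 hold
```

The value here is not the trivial join itself, but that both sides of it are drawing on one consistent source of product and market data rather than inconsistent duplicates.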

Visualising assets

Mapping what data exists, and where, is the best place to start with all of this. It is only through visualising the current spread of information assets and associated use cases that companies will appreciate the potential for greater uniformity and fluidity of data use between the different departments.

This will help the company establish key processes to transform, for quick yet potentially far-reaching wins for the business. An effective map will chart where given data is used along a process, including its creation, modification and re-use by different teams and systems.
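One illustrative shape such a map could take is a simple table of touch points, which can then be queried to trace a data element’s chain of custody. The systems, teams and element names below are hypothetical.

```python
data_map = [
    # (data element,  system,      team,                 operation)
    ("product_name",  "RIM",       "regulatory",         "create"),
    ("product_name",  "CTMS",      "clinical",           "re-use"),
    ("product_name",  "safety_db", "pharmacovigilance",  "re-use"),
    ("batch_status",  "ERP",       "supply_chain",       "create"),
    ("batch_status",  "RIM",       "regulatory",         "modify"),
]

def chain_of_custody(data_map, element):
    """Trace where a data element is created, modified and re-used."""
    return [(system, team, op) for el, system, team, op in data_map if el == element]

print(chain_of_custody(data_map, "product_name"))
# -> shows the element flowing from regulatory into clinical and safety systems
```

Even a rough map of this kind makes interconnections and interdependencies visible, which is the precondition for managing them.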

Where there is an existing process optimisation or digital transformation team in place, or consultants that are advising on associated initiatives, these professionals would be the ideal drivers of a cross-functional data map – in partnership with key functions such as regulatory affairs, quality and so on.

Companies that have already appointed chief data officers, or equivalents, will have a head start, as these roles typically take a broader view of the commercial value of data. Regulatory affairs, by contrast, may not be the direct creator of the data but rather its guardian – the spider in the web – offering a more detailed perspective of data’s links and touch points.

Robust strategy

A lot has been said and written already about the importance of improving and maintaining high data quality, as its day-to-day value in supporting real-time business processes increases.

While some arguments favour a strong sense of data ownership within the functions most involved with the given data, it can be more powerful to encourage everyone across the company to buy into the value of consistent data, so that all functions and teams play their part in keeping data clean, compliant, comprehensive and current.

Effective strategies here involve strong, broad communication of the benefits of robust data, plus incentives (recognition and reward) for those who actively play their part. Instead of data ‘ownership’, think in terms of a ‘chain of data custody’ spanning multiple groups of data processors and guardians over time.

Once companies can more readily visualise their current data position and the full scale of the task ahead of them – to make their data work harder for the organisation – it’s time to decide the most prudent way forward.

In the case of large pharma companies with extensive product portfolios and vast system and data legacies, comprehensive data remapping and/or investing in master data management is likely to be an overwhelming undertaking that could take many years.

Final analysis

Regulators, through their adoption of data standards, are championing global identifiers for medicinal products and their active substances.

Life sciences companies that are inventing and developing these products and substances would benefit greatly from adopting data standards consistently from early development, and throughout their marketing authorisation/registration information and variations submissions.
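As a sketch of what adopting the standards early can mean in practice, the record below carries placeholder global identifiers alongside the internal code and can report which identifiers are still missing ahead of a submission. The class, field names and identifier values are illustrative, not a real IDMP implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProductRecord:
    internal_code: str                  # company-internal name or code
    substance_id: Optional[str] = None  # global substance identifier
    mpid: Optional[str] = None          # global medicinal product identifier

    def missing_identifiers(self):
        """List the global identifiers this record still lacks."""
        gaps = []
        if self.substance_id is None:
            gaps.append("substance_id")
        if self.mpid is None:
            gaps.append("mpid")
        return gaps

record = ProductRecord(internal_code="RX-0042", substance_id="SUB-EXAMPLE-001")
print(record.missing_identifiers())  # ['mpid'] – flag the gap before submission
```

Carrying the global identifiers from early development onward means registration information and variations submissions inherit consistent keys, rather than having them retrofitted later.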

Make no mistake, companies have a significant opportunity to enhance their operations by implementing data standards internally, facilitating the smooth flow of consistently formatted data across their processes while eliminating discrepancies and overlap.

By recording data uniformly and sharing it reliably between functions throughout extended processes, companies can effectively and efficiently leverage their data to enhance productivity, process agility and drive innovation.


Max Kelleher is Chief Operating Officer at Generis and Remco Munnik is a Director at Iperion.
Go to generiscorp.com / iperionhs.eu