March 2023 • PharmaTimes Magazine • 24-25
// DATA //
Data quality governance – dispelling five common myths
Achieving robust data quality governance can feel like an overwhelming undertaking – especially in life sciences.
Everything in life sciences – from the latest measures around safety and regulatory rigor to renewed focus on agility, efficiency and streamlined paths to market – relies heavily on companies’ effective organisation and handling of data.
It’s in this context that the concept and discipline of data quality governance comes to the fore. The more critical data becomes to regulatory procedures, safety processes, clinical research, manufacturing and, ultimately, connecting all those parts of the life sciences more seamlessly, the greater the need for strategies around the governance of that data’s quality.
This includes integrity, reliability and trusted status across all internal and external touchpoints.
In Gens & Associates’ 2022 in-depth RIM (regulatory information management) survey of 76 life sciences companies, the top performers expected to have most of their systems connected and sharing data within the next two to three years – with electronic trial master file (eTMF) systems, quality management systems (QMSs), master data management (MDM) and enterprise resource planning (ERP) being the highest priority for investment.
Yet, without high trust in the data, the risks become intolerable as companies’ dependency on the flow of good data broadens. So, what’s the best way forward? It can be tempting to launch a major initiative backed by a large consulting budget, due to a lack of confidence in getting all of this right.
It’s more important, however, that work starts now – challenging some preconceptions can be very useful.
‘All positive change has to start somewhere, whether top-down or function-by-function’
Myth busting
Myth 1: This will inevitably be an overwhelming programme
All positive change has to start somewhere, so decide whether a top-down or a function-by-function (with consistent practices) approach will produce the quickest wins, and the greatest overall progress. What works for one company may not suit another – especially when considering the size of the product portfolio – and that’s okay.
Myth 2: Complexity & high cost are unavoidable
The ‘data-driven’ agenda might feel fresh and new in life sciences, but digital process transformation is well advanced in other industries. Solid frameworks already exist, and have been adapted for data quality governance in a life sciences ‘regulatory+’ context.
In other words, this needn’t be a steep learning curve, or leave companies with huge holes in their transformation or organisational change/IT budgets.
Much of what’s needed has to do with nurturing the right culture, assembling the right teams or assigning key roles, communicating successes, and being on the same page as a company about the goals of this whole undertaking.
Myth 3: You’re doing this largely because you have to
Compliance with IDMP SPOR and other regulatory mandates might seem to be the most obvious driver for getting the company’s product and manufacturing process-related data in order. Yet there are many higher purposes for making data-related investments. These range from more tightly run business operations to a safer and more convenient experience for patients as consumers of existing and new products.
The tighter the controls around data quality, the more companies can do with their trusted data – use cases that could extend right out into the real world (such as prompter access to real-time updates to patient advice).
Myth 4: This is an IT/data management concern first and foremost
Time and again, the key success factors for a data quality programme are found to have little to do with technology, and everything to do with culture, organisation and mindset.
Specific contributors to progress, distilled from the most promising programmes being rolled out today, include:
Fundamental phases
With a strong three-phase framework, any company can get started on the right track, irrespective of approach (e.g. function by function, or top-down and enterprise-wide).
The establish/launch phase. This initial ground-preparation phase is about setting out a data quality vision and principles; establishing an ‘actionable’ data quality operating model, with formal jobs and roles; and conducting an awareness campaign.
The operational phase. This involves establishing optimal processes and capabilities – e.g. adjusting to learnings from the establish phase; ensuring that all roles with a bearing on data quality have these responsibilities set out in job descriptions and covered as part of the annual review process; and establishing recognisable rewards for high-quality data.
The optimisation or institutionalisation phase. Here, desirable behaviour is embedded and fostered within the organisational culture, ensuring that everyone gets and stays on board with maintaining and continuously improving data quality, to everyone’s benefit.
Tools might include automated data quality dashboards to monitor KPIs; data integration and connectivity across functions and the organisation; and organisation-wide data quality reporting, supporting a culture of quality.
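To make the dashboard idea concrete, here is a minimal illustrative sketch (not from the article) of computing two common data quality KPIs – completeness and validity – over a batch of records. The field names, records and validation rule are hypothetical examples, not any company’s actual data model.

```python
# Illustrative sketch: simple data quality KPIs of the kind an automated
# dashboard might monitor. All field names and rules are hypothetical.
from dataclasses import dataclass


@dataclass
class QualityKPIs:
    completeness: float  # share of required fields that are populated
    validity: float      # share of populated, rule-checked values that pass


def assess(records, required_fields, validators):
    """Score a batch of records for completeness and validity."""
    filled = checked = valid = 0
    total = len(records) * len(required_fields)
    for rec in records:
        for field in required_fields:
            value = rec.get(field)
            if value not in (None, ""):
                filled += 1
                rule = validators.get(field)
                if rule is not None:
                    checked += 1
                    if rule(value):
                        valid += 1
    return QualityKPIs(
        completeness=filled / total if total else 1.0,
        validity=valid / checked if checked else 1.0,
    )


# Hypothetical product-registration records: one has a missing status,
# one has a malformed country code.
records = [
    {"product_id": "P-001", "country": "DE", "status": "approved"},
    {"product_id": "P-002", "country": "Germany", "status": "approved"},
    {"product_id": "P-003", "country": "FR", "status": ""},
]
kpis = assess(
    records,
    required_fields=["product_id", "country", "status"],
    validators={"country": lambda v: len(v) == 2 and v.isupper()},
)
print(f"completeness={kpis.completeness:.2f} validity={kpis.validity:.2f}")
```

In practice such scores would be computed per function or per data domain and trended over time – which is what turns a one-off check into the organisation-wide reporting the phases above describe.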
Taking a phased approach to systematic data quality governance paves the way for companies to move forward with their improvement efforts now, taking a bite-sized approach. As progress is witnessed, momentum should gather organically.
Preeya Beczek is the Director of Beczek.COM and Steve Gens is the Managing Partner of Gens & Associates. Go to gens-associates.com