// LAW //
Gustaf Duhs considers the legal issues surrounding the rapidly expanding field of digital health
As I write this, COVID-19 is sweeping across the globe and pharmaceutical companies and governments are working flat out to develop and release a vaccine. The scale of the response is unprecedented.
One particularly interesting feature beyond the more traditional response of vaccine development is the widespread and varied use of digital tools. In Parliament on 10 March 2020, Matt Hancock, the UK’s Health Secretary, suggested a “digital first approach to accessing primary care and outpatient appointments”. The suggested use of telehealth to ease the burden on front-line staff, particularly in the context of contagion, is of course a sensible one. However, the use of digital health across the globe goes far beyond telehealth, and there are many legal issues attached to its development.
Perhaps the most widely publicised example of technology used in this context is the Chinese ‘close contact detector’ app. The app records the user’s close contacts so that, if the user is later diagnosed with COVID-19, alerts can be sent to those contacts and appropriate treatment (or quarantine) advised.
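The mechanics described above are straightforward to sketch in code. The following Python fragment is a minimal, hypothetical illustration of a contact log and its alerting step; the app’s actual design has not been published, so every name and data structure here is an assumption made purely for illustration.

```python
from collections import defaultdict

class ContactLog:
    """Hypothetical sketch only: the real app's internals are not public."""

    def __init__(self):
        # Map each user ID to the set of user IDs they have been near.
        self._contacts = defaultdict(set)

    def record_contact(self, user_a, user_b):
        """Record that two users were in close proximity."""
        self._contacts[user_a].add(user_b)
        self._contacts[user_b].add(user_a)

    def users_to_alert(self, diagnosed_user):
        """After a positive diagnosis, return the contacts to notify."""
        return set(self._contacts[diagnosed_user])

log = ContactLog()
log.record_contact("alice", "bob")
log.record_contact("alice", "carol")
print(log.users_to_alert("alice"))  # {'bob', 'carol'} would be alerted
```

Even this toy version makes the privacy issues discussed below concrete: the log is, by design, a persistent record of who met whom.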
Another example is the use of deep learning software to diagnose coronavirus infection in scans quickly and accurately. In addition, in a collaboration between tech companies and the US government, software is being used to review the 2,000 papers published on the virus since December, alongside some 29,000 papers of broader relevance. Many are predicting that the COVID-19 crisis will be a watershed moment in the use of digital health technology.
However, even before the current crisis, this was an area of rapid growth, with considerable investment and corporate activity. In light of this, it is worth briefly assessing some of the legal issues presented by digital health. It follows from the broad nature of the term and the products and services it covers that there is no shortage of potential legal hurdles. However, there are three ‘big picture’ issues of particular note: privacy law, applicable regulations and the attribution of liability.
Privacy law
The importance of confidentiality in a medical context is well established. Although the Hippocratic Oath deals with general ethical standards, it is the requirement for physicians to treat medical information as ‘holy secrets’ that is its most celebrated aspect. Confidentiality is no less important in a digital era. For example, the Chinese close contacts app has given rise to ethical issues around privacy and surveillance, and general concerns around the use and exploitation of data have meant that the term “surveillance capitalism” is increasingly heard.
Digital tools have a number of special features that are likely to make privacy particularly important, such as intimate and continuous access to the user, big data sets, automated (and therefore indiscriminate) functions and the ability to instantly disseminate large amounts of data.
The leading privacy law in the world is the EU’s General Data Protection Regulation (GDPR) and, where applicable, its national or local equivalents and emulations (for example, the new Californian privacy law).
In looking at digital health in the context of the GDPR, the first thing to note is that patient health and biometric data are classified as ‘special category data’. This is particularly sensitive personal data, requiring a greater degree of protection than regular personal data. There are therefore more limited bases on which such information can lawfully be processed; for example, fully informed consent is more likely to be required in respect of such data. In addition, a failure to comply with privacy law in this context is likely to lead to the upper tier of administrative fines under the GDPR (up to €20 million or 4% of total worldwide annual turnover, whichever is higher).
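To put that upper tier in concrete terms, the cap is whichever of the two figures is higher. A short worked example (the turnover figure is invented purely for illustration):

```python
def gdpr_upper_tier_cap(annual_worldwide_turnover_eur):
    """Upper-tier GDPR fine cap (Article 83(5)): the greater of
    EUR 20 million or 4% of total worldwide annual turnover."""
    return max(20_000_000, 0.04 * annual_worldwide_turnover_eur)

# Illustrative figure only: a firm with EUR 2bn turnover faces a
# potential cap of EUR 80 million, not EUR 20 million.
print(gdpr_upper_tier_cap(2_000_000_000))  # 80000000.0
```

For any large pharmaceutical or technology business, in other words, it is the 4% limb rather than the fixed sum that is likely to set the ceiling.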
The requirement under the GDPR for ‘privacy by design’ – an obligation to take privacy into account at all stages of development – is likely to be particularly important. Relevant to this, the recent collaboration between the Information Commissioner’s Office (ICO) and The Alan Turing Institute (Project ExplAIn) examined the risks surrounding the use of AI, with a view to establishing principles governing how firms use AI in a privacy context.
Applicable regulations
As readers will know, healthcare’s regulatory framework is complex, with a range of bodies whose roles and remits vary depending on the nature of the product or service being offered and on geographic location. When it comes to technology, regulators are also beginning to consider issues arising from digital health specifically. For example, NHS Digital is responsible for the NHS’s use of technology and is a useful source for both developments relevant to the NHS and specifics on the regulatory regime.
Also, in January 2020, the Care Quality Commission (which regulates all health and social care services in England) published the findings of its first ‘regulatory sandbox pilot’, on digital triage in health services, and identified a number of areas where regulation might be improved.
The Medicines and Healthcare products Regulatory Agency (MHRA), the body responsible for regulating all medicines and medical devices in the UK, is also engaging in cross-governmental working groups considering how AI in healthcare may be regulated, and how AI may be used by the MHRA in fulfilling its regulatory function.
Finally, the National Institute for Health and Care Excellence (the body that provides national guidance and advice to improve health and social care) has published guidance on digital health – eg the Evidence Standards Framework for Digital Health Technologies, published in March 2019, which sets out what constitutes a good level of evidence for digital health technologies to be considered clinically effective.
However, a number of issues still need further consideration. Examples include the extent to which technology providers need to be brought within the healthcare regulatory regime, how to safeguard patient to non-human interactions, how to regulate beyond national or other territorial boundaries, the capacity for regulators to regulate the use of algorithms, and the extent to which first mover advantage or network effects may result in monopolies or quasi-monopolies requiring particular attention.
Attribution of liability
One of the most frequently discussed elements of technology law over the last decade has been the attribution of liability in high technology areas, and in particular AI: specifically, how traditional concepts of liability under contract or tort apply where there is less, or in some cases no, human intervention. Clearly, such issues will also be relevant in the digital health space.
The European Commission’s report on ‘Liability for Artificial Intelligence and other emerging technologies’, published in 2019, is a thorough examination of the topic. It suggests that liability regimes may need to be amended so that they are equipped to identify the human wrongdoer and, where that is very difficult or impossible, to provide appropriate mechanisms for compensation.
Will we keep up?
There is no doubt that digital technology will be increasingly used in healthcare. The law is developing in various ways to meet the challenges arising from this growth, so it is important for those active in the field to take into account both existing law and the changing regulations and other adjustments to the law that we are likely to see over the next few years. Businesses that look forward whilst implementing technology will be the most successful.