
Risks and opportunities arising through the use of AI in healthcare

24 May 2023
Charlotte Harpin

This article was first published by Healthcare Markets International

Recent developments in artificial intelligence (AI), including DALL-E 2 and ChatGPT, have reignited widespread excitement about its potential across all aspects of our lives. These developments have also opened up the AI landscape, with commentators anticipating more rapid innovation as a result of industry democratisation. Charlotte Harpin, partner at Browne Jacobson, outlines the risks and opportunities associated with the greater use of AI in healthcare.

Business adoption of AI has reportedly more than doubled since 2017, with robotic process automation, natural-language text understanding and deep learning all being embedded in business settings. Generative-AI products like ChatGPT have also brought AI into people’s homes in a very relatable and exciting way. It’s not hard to imagine how generative AI might form part of the health landscape, perhaps replacing ‘Dr Google’ with something more reliable.

In a healthcare context, a major trial was recently announced of an AI programme that can predict when people might miss appointments and offer back-up bookings, under a headline-grabbing claim that this would “save [the NHS] billions”.

A helpful summary of the terminology used in the AI/healthcare context can be found in The Regulation of Artificial Intelligence as a Medical Device (publishing.service.gov.uk).

Potential legal risks arising from the ongoing development and use of AI technologies in healthcare

While it is tempting to focus on novel areas of risk when developing and utilising AI in healthcare, it’s important to recognise that risks can arise in all existing areas of law. The UK government has taken a light-touch approach to regulating AI and, until AI-specific legal frameworks are developed, regulation is largely based on adapting existing laws to an AI context.

The most likely areas of risk are:

  • Regulatory burden in terms of the development and deployment of medical devices that incorporate AI
  • Clinical negligence claims
  • Product regulation-related risks
  • Data related issues and associated claims
  • Copyright and IP/commercialisation
  • Public law challenges to decisions that have incorporated the use of AI, such as breach of the Equality Act 2010 [AI bias is a recognised issue and eliminating bias in training data is difficult, although the development of synthetic datasets may go some way to addressing it]
  • Employment law issues [such as redundancy claims following the deployment of AI-based technological solutions]
  • Practical challenges such as how patient safety incidents will be investigated where AI technologies have been involved.

More generally, there are issues around equity of access to novel technological solutions; how these will interface with existing NHS digital solutions; and the need to ensure service-user ‘buy-in’. These will all need to be factored in when considering the use of AI technologies in a healthcare setting.

Overall, there is a risk that the development and deployment of AI technology solutions are not fully grounded in the wider legal framework within which healthcare bodies operate. As part of developing organisational and/or system digital strategies, healthcare bodies need to ensure they have an appropriate understanding of all areas of potential risk associated with developing and implementing AI technologies, so that an informed, risk-based decision can be taken.

Understanding proposed AI regulation

The AI regulatory landscape generally, and specifically in the healthcare setting, is complex and rapidly evolving. The Brexit transition adds a further degree of complexity but also presents an opportunity for the development and implementation of a UK-designed framework.

Currently, medical devices that utilise AI are regulated under the Medical Devices Regulations 2002 (as amended).

The Medicines and Medical Devices Act 2021 (MMDA) was introduced to commence the post-Brexit transition to a UK sovereign regulatory regime. However, the timeframe for the introduction of secondary legislation under the MMDA has been delayed, with an extension to the transition standstill period.

This means there is more time to develop the details of the new regulatory framework. That work follows the Software and AI as a Medical Device Change Programme – Roadmap, published by the MHRA in October 2022, and reflects the UK Government’s stated “five pillars” for achieving a world-leading medical device regulatory framework:

  • Strengthening MHRA power to act to keep patients safe
  • Making the UK a focus for innovation, the best place to develop and introduce innovative medical devices
  • Addressing health inequalities and mitigating biases throughout medical device product lifecycles
  • Proportionate regulation that supports business through access routes that build on synergies with both EU and wider global standards
  • Setting world-leading standards – building the UKCA mark as a global exemplar

The Roadmap establishes a number of work packages, each with key deliverables, including the development of regulatory guidance on issues such as:

  • Risk-based classification, with recognition of the need for flexibility to encompass novel devices [one of the major challenges within the AI landscape]
  • Adverse incidents [this will be key to informing the evolution and application of existing negligence-based laws and includes adaptation of the Yellow Card system to reflect the use of AI technologies]
  • Cyber security – one of the key recognised risks arising from the use of many medical devices; it will be interesting to see how this fits with existing legislative and sector requirements, including the Data Protection Act 2018, UK GDPR and NHS England’s digital technology/NHS toolkit frameworks

In addition, other regulatory bodies will be involved, depending on the nature of the AI technology in question. In an attempt to ensure coordination, NICE, the CQC, the MHRA and the HRA have come together to form a multi-agency advisory service (MAAS) for artificial intelligence and data-driven technologies.

Some of the linked work being carried out by these other regulatory bodies neatly illustrates how wide-ranging the legal issues are:

  • A project led by the HRA to streamline the review of AI and data-driven research and to modernise the technology platform used to make applications for approvals
  • The HRA is also working to “streamline the review of research using confidential patient information without consent”, with the objective of modernising the process “to enable a quicker and more robust oversight of projects and enhance the public visibility of approved studies”
  • Validation of algorithms – the MHRA is leading on a synthetic data project that will help address issues around the development of algorithms against datasets that are difficult to access or obtain (a brief illustrative sketch of this idea follows this list)
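
To make the synthetic data point concrete, the sketch below shows, in general terms, how an algorithm might be developed and validated against a synthetic stand-in for a clinical dataset that is difficult to access. This is a minimal, hypothetical illustration using scikit-learn’s make_classification to fabricate the data; it does not represent the MHRA project’s actual datasets, tools or methodology.

```python
# Minimal sketch: developing and validating a classifier against
# synthetic data standing in for a hard-to-access clinical dataset.
# Hypothetical illustration only; not the MHRA project's methodology.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Fabricate a synthetic, privacy-free dataset with an imbalanced
# outcome (e.g. a rare adverse event), a common clinical pattern.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=8, weights=[0.9, 0.1],
                           random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Develop the algorithm on the synthetic training split.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Validate on held-out synthetic data before any real-world evaluation.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC on synthetic data: {auc:.3f}")
```

In practice, the value of a regulatory-grade synthetic dataset lies in how faithfully it mirrors the statistical properties of real patient data without exposing any individual’s information, which is precisely the hard problem such projects seek to address.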

Risks of non-compliance

The risks of non-compliance are significant, both in terms of direct impact and reputational damage. This is particularly the case in a healthcare setting, where there has historically been resistance among service-users to the implementation of technological change.

The legal risks noted above are as yet relatively untested and this, coupled with the rapidly evolving regulatory framework, presents a real challenge to those looking to develop and/or deploy AI technologies in a healthcare setting. However, this does not mean it is impossible to do so safely. The following are key to successfully navigating this landscape:

  • Active monitoring of the regulatory framework, ensuring an up-to-date understanding of the requirements
  • Engagement in the various consultative processes underway, to ensure your voice is heard and reflected in the design of the regulatory framework [particularly in relation to the MHRA’s guidance, which is intended to include case studies]
  • A holistic understanding of how AI technologies will be deployed and a fully informed analysis of risk, to ensure informed decision-making. Depending on the context, this may need to include data protection, human rights and equality law considerations
  • Utilising services like MAAS when looking to develop or deploy novel AI technological solutions

Contact

Charlotte Harpin

Partner

charlotte.harpin@brownejacobson.com

+44 (0)330 045 2405

