This article was first published by Healthcare Markets International
Recent developments in artificial intelligence (AI), including DALL-E 2 and ChatGPT, have reignited widespread excitement about its potential across all aspects of our lives. They have also opened up the AI landscape, with commentators anticipating more rapid innovation as the industry democratises. Charlotte Harpin, partner at Browne Jacobson, outlines the risks and opportunities associated with the greater use of AI in healthcare.
Business adoption of AI has reportedly more than doubled since 2017, with robotic process automation, natural-language understanding and deep learning increasingly embedded in business settings. Generative-AI products such as ChatGPT have also brought AI into people’s homes in a relatable and exciting way. It is not hard to imagine generative AI forming part of the health landscape, perhaps replacing ‘Dr Google’ with something more reliable.
In a healthcare context, a major trial was recently announced of an AI programme that predicts when people might miss appointments and offers back-up bookings, under a headline-grabbing claim that it would “save [the NHS] billions”.
A helpful summary of the terminology involved in the AI/healthcare context can be found here: The Regulation of Artificial Intelligence as a Medical Device (publishing.service.gov.uk).
Potential legal risks that could arise in future regarding ongoing development and utilisation of AI technologies in healthcare
While it is tempting to focus on novel areas of risk when developing and using AI in healthcare, it is important to recognise that risks can arise in all existing areas of law. The UK government has taken a light-touch approach to regulating AI and, until AI-specific legal frameworks are developed, regulation is largely based on adapting existing laws to an AI context.
The most likely areas of risk are:
- Regulatory burden in terms of the development and deployment of medical devices that incorporate AI
- Clinical negligence claims
- Product regulation-related risks
- Data related issues and associated claims
- Copyright and IP/commercialisation
- Public law challenges to decisions that have incorporated the use of AI, such as breach of the Equality Act 2010 [AI bias is a recognised issue and eliminating bias in training is difficult, although the development of synthetic datasets may go some way to addressing it]
- Employment law issues [such as redundancy claims following the deployment of AI technology solutions]
- Practical challenges such as how patient safety incidents will be investigated where AI technologies have been involved.
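The bias point above can be made concrete. One simple screening check is the demographic parity gap: the difference in a model’s positive-prediction rate between patient groups. A minimal sketch in Python (the predictions, group labels and framing are illustrative assumptions, not taken from any real clinical system):

```python
def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rate between two groups."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Hypothetical model outputs (1 = flagged as likely to miss an appointment)
# for two illustrative patient groups "A" and "B"
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
```

A large gap does not by itself establish unlawful discrimination, but it is the kind of measurable signal an organisation could monitor as part of an Equality Act risk assessment.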
More generally, there are issues around equity of access to novel technological solutions; how these will interface with existing NHS digital solutions; and the need to ensure service-user ‘buy-in’. These will all need to be factored in when considering the use of AI technologies in a healthcare setting.
Overall, there is a risk that the development and deployment of AI technology solutions is incompletely grounded in the wider legal framework within which healthcare bodies operate. As part of developing organisational and/or system digital strategies, healthcare bodies need to ensure that they have an appropriate understanding of all areas of potential risk associated with developing and implementing AI technologies, so that an informed, risk-based decision can be taken.
Understanding proposed AI regulation
The AI regulatory landscape generally, and specifically in the healthcare setting, is complex and rapidly evolving. The Brexit transition adds a further degree of complexity but also presents an opportunity to design and implement a UK-specific framework.
Currently, medical devices that utilise AI are regulated under the Medical Devices Regulations 2002 (as amended).
The Medicines and Medical Devices Act 2021 (MMDA) was introduced to commence the post-Brexit transition to a UK sovereign regulatory regime. However, the timeframe for the introduction of secondary legislation under the MMDA has been delayed, with an extension to the transition standstill period.
This means there is more time to develop the details of the new regulatory framework, working to the Software and AI as a Medical Device Change Programme – Roadmap published in October 2022 by the MHRA and reflecting the UK Government’s stated “five pillars” to achieving a world-leading medical device regulatory framework, as follows:
- Strengthening MHRA power to act to keep patients safe
- Making the UK a focus for innovation, the best place to develop and introduce innovative medical devices
- Addressing health inequalities and mitigating biases throughout medical device product lifecycles
- Proportionate regulation that supports business through access routes that build on synergies with both EU and wider global standards
- Setting world-leading standards – building the UKCA mark as a global exemplar
The Roadmap establishes a number of work packages, each of which has key deliverables, including the development of regulatory guidance around key issues such as:
- Risk-based classification, with a recognition around the need for flexibility to encompass novel devices [one of the major challenges within the AI landscape]
- Adverse incidents [this will be key to informing the evolution and application of existing negligence-based laws and includes adaptation of the Yellow Card system to reflect the use of AI technologies]
- Cyber security – one of the recognised key risks arising from the use of many medical devices; it will be interesting to see how this fits with existing legislative and sector requirements, including the Data Protection Act 2018, UK GDPR and NHS England’s digital technology/NHS toolkit frameworks
In addition, other regulatory bodies will be involved depending on the nature of the AI technology in question. In an attempt to ensure coordination, NICE, the CQC, the MHRA and the HRA have come together to form a multi-agency advisory service (MAAS) for artificial intelligence and data-driven technologies.
Some of the linked work being carried out by these other regulatory bodies neatly illustrates how wide-ranging the legal issues are:
- A project led by the HRA to streamline the review of AI and data-driven research and to modernise the technology platform used to make applications for approvals
- The HRA is also working to “streamline the review of research using confidential patient information without consent”, with the objective being to modernise the process “to enable a quicker and more robust oversight of projects and enhance the public visibility of approved studies”
- Validation of algorithms – the MHRA is leading on a synthetic data project that will help address issues around the development of algorithms against datasets that are difficult to access or obtain
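The synthetic-data idea in the last bullet can be illustrated simply: where real patient records cannot be shared, summary statistics fitted to a small real sample can be used to generate a synthetic cohort against which an algorithm is exercised. A minimal sketch in Python (the readings and the age-based triage rule are invented for illustration and have no clinical basis):

```python
import random
import statistics

random.seed(0)  # reproducible synthetic cohort

# Hypothetical "real" patient ages that are difficult to share (illustrative only)
real_ages = [34, 52, 61, 47, 70, 29, 58, 66]

# Fit simple summary statistics, then sample synthetic stand-ins from them
mu = statistics.mean(real_ages)
sigma = statistics.stdev(real_ages)
synthetic_ages = [round(random.gauss(mu, sigma)) for _ in range(1000)]

# An algorithm can now be exercised against the synthetic cohort
flagged = sum(1 for a in synthetic_ages if a >= 65)  # e.g. an age-based triage rule
print(f"synthetic cohort size: {len(synthetic_ages)}, flagged: {flagged}")
```

Real synthetic-data generation is far more sophisticated than fitting a single distribution, but the principle is the same: the algorithm is validated without the underlying confidential records ever leaving their source.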
Risks of non-compliance
The risks of non-compliance are significant, both in terms of direct impact and reputational damage. This is particularly the case in a healthcare setting where there has historically been resistance by service-users to the implementation of technological change.
The legal risks noted above are as yet relatively untested and this, coupled with the rapidly evolving regulatory framework, presents a real challenge to those looking to develop and/or deploy AI technologies in a healthcare setting. However, this does not mean it is impossible to do so safely. Key to successfully navigating this landscape are the following:
- Active monitoring of the regulatory framework, ensuring an up-to-date understanding of the requirements
- Engagement in the various consultative processes underway, to ensure your voice is heard and reflected in the design of the regulatory framework [particularly in relation to the MHRA’s guidance, which is intended to include case studies]
- A holistic understanding of how AI technologies will be deployed and a fully informed analysis of risk, to ensure informed decision-making. Depending on the context, this may need to include data protection, human rights and equality law considerations
- Utilisation of services such as MAAS when looking to develop or deploy novel AI technology solutions