This article was first published by Health Tech World.
The rapid development and deployment of technology within the health and social care sector, including the use of artificial intelligence (AI) in digital channels and healthcare apps, represents an exciting opportunity. AI technologies have the potential to improve the quality of healthcare by removing inequities in access to care and by improving the speed, efficiency and accuracy of diagnosis. Improved health software also allows individuals to take greater control of their health and wellbeing. If deployed appropriately, these technologies also present opportunities to reduce some of the administrative burden on staff working within the health and care sector.
However, considered development and deployment is key to ensuring that these advantages are maximised and that the risks to stakeholders are appropriately identified and mitigated.
Decisions around the development and deployment of digital solutions in a health and care setting require decision makers and regulators to strike a careful balance: supporting the public interest in the continued, timely development and deployment of technologies, while also addressing legitimate concerns about their use. Although public confidence in the use of technologies in a health and care context has increased, maintaining that confidence remains key to securing support for the successful deployment of such solutions.
Key challenges and risks for stakeholders
There is clear support from the Government for the development and deployment of technological solutions in the health and care sector. The recent announcement of a £21 million fund for the roll-out of artificial intelligence across the NHS illustrates this. Funding will be available to NHS Trusts through bids to the Government’s AI Diagnostic Fund, with bidders required to demonstrate value for money as part of their bid. Historically this has been difficult to demonstrate, especially in early-stage deployment, so it will be interesting to see further detail on what will be required.
Alongside this, however, are wider concerns around the speed with which AI technology is being developed, and a softening in the Government’s previous policy of a regulation-light environment. It remains to be seen how this will ultimately play out, but as things stand, the regulatory structure that applies to AI technologies is essentially based on the pre-existing legal system, although regulatory bodies have introduced some developments specific to AI in a health and care setting.
Key legal risks include:
- failure to comply with the relevant regulatory schemes, most commonly through a failure to properly assess regulatory risks at an early stage of the development/deployment process;
- clinical negligence claims, for example, the failure to diagnose a condition where AI technologies have been used;
- product liability claims;
- risks with cybersecurity;
- data breaches under the Data Protection Act 2018 and the UK GDPR;
- public law challenges to decisions that have incorporated the use of AI, such as breach of the Equality Act 2010;
- employment law issues related to redundancies within the health and care sector, as a result of the deployment of AI technologies;
- practical challenges, such as how patient safety incidents will be investigated where AI technologies have been involved;
- data/system interoperability issues, and the consequential impact on staff using new technological solutions.
Additionally, some of the greatest challenges AI developers will face stem from the potential of health technologies to perpetuate health inequalities through bias and differential performance across certain population groups. There is also the risk of poor performance where a technology is developed and tested against a population group different from the one against which it is deployed. Combating these risks will be particularly important given that AI is predicted to break down barriers to equal access to healthcare and to improve the efficiency of services. Testing and assuring the deployment of new technologies will be crucial to mitigating such risks and ensuring that the aims of deploying these solutions are met.
The UK approach
To date, the UK has adopted a pro-innovation, light-touch approach to the regulation of AI, with existing regulations across all sectors being adapted to respond to the risks associated with AI implementation. In the healthcare sector, technologies utilising AI are currently regulated as medical devices under the Medical Devices Regulations 2002 (as amended), as either Software as a Medical Device (SaMD) or AI as a Medical Device (AIaMD).
To support the deployment of AI technologies in a health and care setting, the NHS recently established the AI & Digital Regulation Service as a tool to help both developers and adopters of AI and digital technology determine which regulations apply to their work. This is a collaboration between the Care Quality Commission (CQC), the Health Research Authority, the National Institute for Health and Care Excellence (NICE), and the Medicines & Healthcare products Regulatory Agency (MHRA). While this is a welcome development, we would note that it is not yet a comprehensive regulatory toolbox; important aspects are omitted, which will need to be considered as part of the options appraisal process for the development and deployment of technological solutions.
In addition, the MHRA continues to work through its ‘Software and AI as a Medical Device Change Programme – Roadmap’ (the Change Programme). For those working to develop and deploy AI technologies, the Change Programme remains important to keep up to date with, and it presents opportunities for stakeholder involvement and engagement to help shape future changes to the regulatory landscape.
The Proposed Artificial Intelligence Act of the European Union
By comparison, there has been considerable discussion around the European Union (EU) approach, which is founded on the development of a bespoke regulatory system for AI technologies. The approach will be a risk-based one, as set out in the European Commission’s proposed legislation, the AI Act. This is the first AI-specific regulation proposed globally.
The AI Act would introduce a uniform, cross-sector regulatory framework and would operate by categorising a new AI technology into one of the following risk categories:
- unacceptable risk (prohibited);
- high-risk applications (subject to a set of requirements before entering the market) – this will include medical devices;
- limited risk (subject to light transparency obligations); and
- minimal or no risk (no obligations).
The categorisation of the AI technology would thereby dictate the applicable regulatory requirements.
The European Parliament has indicated that it intends to have the final form of the AI Act agreed by member countries by the end of the year. It will be interesting to watch the development and implementation of this risk-based system, particularly in assessing how flexible it can be in operation, given the rapidly evolving nature of AI technology and the associated risks such solutions present.
Notwithstanding the different approaches, the MHRA has expressed its intention to work closely with international bodies, having identified the challenge and burden that inconsistencies across international regulatory frameworks can create for the industry, including developers of AI products. For example, in its Roadmap for the Change Programme, the MHRA states that it will “drive forward international consensus” on regulatory innovation by remaining involved in, and “redoubling [its] contributions” to, the International Medical Device Regulators Forum (IMDRF). Consistency will be a key concern for many developers looking to deploy technological solutions across multiple jurisdictions, who must consequently navigate a range of regulatory requirements.
The implementation of AI technologies in the health sector is fast-growing and exciting, but it requires effective regulatory schemes to protect both the health and safety of the public and those involved in early development and implementation. It is difficult at this stage to predict which legal risks will be most prevalent for manufacturers and healthcare professionals, and, until the EU enacts the AI Act, even more difficult to determine the best way to regulate against such risks.