
AI and the Litigant in Person: Risks for clinical negligence defendants

27 April 2026
Naiomh O'Reilly

Courts are already encountering the consequences of AI-assisted self-representation, and clinical negligence claims are particularly high-risk. Concerns run wider than hallucinated citations: AI has no access to clinical records and cannot apply the Bolam/Bolitho framework, yet it can produce text that closely resembles a properly pleaded causation argument.

This article sets out the key failure modes and the CPR tools available to defendants when facing AI-assisted claims.

AI can get the law badly wrong and courts are already seeing it

The judiciary's own guidance expressly acknowledges that AI chatbots are increasingly being used by unrepresented litigants and may, in some cases, be their only source of assistance. But the courts have also been clear about the risks.

In the recent case of Ayinde v Haringey / Al-Haroun v QNB [2025] EWHC 1383 (Admin), the Divisional Court warned that freely available AI tools are not capable of conducting reliable legal research. They can produce answers that sound entirely convincing whilst being completely wrong, and can cite cases that simply do not exist. 

But in clinical negligence, the problem runs deeper than legal research. AI tools have no access to a claimant's actual clinical records, no understanding of the treating clinician's reasoning, and no ability to apply the Bolam/Bolitho framework to the specific facts. What they can do is produce text that sounds like a causation argument, and that is precisely what makes such use dangerous for all involved. 

Clinical negligence amplifies the problem

Clinical negligence claims are especially unforgiving for AI-assisted self-representation. A successful claim requires a precise medical chronology, properly pleaded allegations of breach of duty and causation, and independent expert evidence. The Pre‑Action Protocol for the Resolution of Clinical Disputes makes clear that separate expert opinions are often needed across breach, causation, prognosis, and quantum, and that obtaining this evidence is both costly and time-consuming.

AI can produce text that resembles a medical causation argument. But it is not expert evidence, and it is not a substitute for properly pleaded allegations based on careful review of that expert evidence. The real danger is that it leads a LiP to build their entire case around a theory that appears solid on paper but falls apart the moment it is tested against the clinical records and a properly instructed expert opinion.

The practical failure modes that may arise in AI-assisted LiP cases

There are a number of practical failures that may arise where a LiP misuses, or places undue trust in, AI:

  • Citation fog: AI produces long lists of authorities, some irrelevant, some invented. Ayinde confirms hallucinated citations are a real risk; every authority must be verified at source.
  • Defective pleadings: AI templates tend to produce vague Particulars of Claim built around the logic that 'bad outcome = negligence'. This is not a legally recognisable claim in clinical negligence and creates real strike-out exposure under CPR 3.4 / PD 3A. Without a properly pleaded duty, breach, causation, and loss, there is no claim.
  • Statement of truth risk: AI may insert unverified 'facts' which the LiP then signs off under CPR 22. Errors can engage CPR 32.14 (contempt for false statements) and, even where contempt is not realistic, cause serious credibility damage when disclosure takes place. 
  • Medical record confidentiality: Judicial guidance warns that anything entered into a public chatbot should be treated as published. LiPs may have pasted sensitive clinical records into AI tools, creating data protection issues and requiring care when engaging in open correspondence.
  • Evidence integrity: AI can generate fake documents, screenshots, and messages, and judicial guidance flags the risk of hidden prompts embedded in filed material. 

QOCS 

Claimants bringing clinical negligence claims will generally have the benefit of Qualified One-Way Costs Shifting (QOCS), which limits a claimant's exposure to adverse costs orders. However, QOCS protection can be lost, for example, where a claim is struck out on specified grounds (CPR 44.15) or found to be fundamentally dishonest (CPR 44.16). AI-assisted overstatement and factual inconsistency have the potential to open the door to these arguments.

What defendants should be aware of and how to mitigate

When dealing with submissions from a LiP who may have used AI, verify every cited authority against the primary source before engaging with it. Anticipate that the claim position may shift frequently: AI makes it easy to generate new arguments, so establish a clear issues list early and keep the focus anchored to what is actually pleaded and what the primary records show.

Use the CPR purposefully to impose structure and discipline:

  • Part 18 to nail down vague allegations of breach and causation
  • CPR 31.14 to obtain documents referenced in pleadings but never produced - a common gap in AI-drafted claims
  • CPR 32.19 to challenge the authenticity of any document that appears suspicious or inconsistent with the primary records
  • CPR 3.4 / PD 3A to strike out claims that disclose no legally recognisable cause of action

Send the Pre-Action Protocol for the Resolution of Clinical Disputes to the LiP as early as possible and ensure throughout that causation and breach are addressed through CPR Part 35-compliant expert evidence. AI-generated narrative, however plausible it reads, is not expert opinion and does not meet the evidential standard the court requires.

Closing warning 

AI use by LiPs is not inherently improper, and courts recognise it may be the only help a LiP has. However, Ayinde shows how quickly AI can pollute proceedings with fake authorities, wrong propositions, and misquotes.

For defendants, the safest posture is to verify, narrow issues early, insist on primary documents and proper expert evidence, and use proportionate CPR tools to convert an AI-shaped narrative into a triable (or disposable) claim. Proceed with caution, because the most dangerous AI output in litigation is the version that looks professionally written while being quietly wrong.

Contact

Naiomh O'Reilly

Associate

naiomh.o'reilly@brownejacobson.com

+44 (0)330 045 1334
