A new report by the Ada Lovelace Institute analyses the use of artificial intelligence (AI) transcription tools in social work.
Social workers have reported time-saving benefits, but the research found that these tools introduce new risks, such as bias and ‘hallucinations’, that aren’t being fully assessed or mitigated by care organisations.
Researchers at the think-tank interviewed 39 social workers from 17 local authorities with experience of using AI transcription tools, as well as senior staff members involved in procuring and evaluating the tools.
The interviews were thematically coded and analysed to develop five key insights, set out in the report, titled Scribe and prejudice?:
- Resource constraints in social care are inspiring widespread piloting, adoption and evaluation of AI transcription tools.
- Local authorities focus their evaluations on efficiency rather than the impact on people who draw on care.
- Almost all social workers report that using AI transcription tools brings meaningful benefits to their work, but they do not experience these benefits uniformly.
- Social workers assume full responsibility for AI transcription tools, but perceptions of reliability, accuracy and the need for oversight (‘human in the loop’) vary significantly.
- There is no consensus on when it is appropriate to use AI transcription tools in social care.
James Arrowsmith, Partner in social care at UK and Ireland law firm Browne Jacobson, said: “Implementing proper AI governance and training to upskill workers on best practice for using these tools is crucial to creating a culture of responsible AI use in any organisation.
“Amid the rush to realise the productivity gains that AI offers, organisations must recognise it is a tool that should support human practice and decisions, not act as a substitute. A GenAI tool may create a draft note that is at least 95% accurate, but it’s in those remaining few percentage points that risks emerge and human oversight becomes key.
“The seriousness of a public authority getting this wrong was illustrated by the incident involving West Midlands Police, in which an AI hallucination in an intelligence report ahead of a football match ultimately undermined the entire decision-making process.
“The finding in the Ada Lovelace Institute’s report that AI is being used uncritically to complete parts of care plans or rewrite key documents is concerning, as it could suggest that AI is directly involved in formulating at least the fine detail of decisions themselves, whereas the tools under review are designed only to transcribe the evidence.
“Generally speaking, AI is at the stage where it should be used to support administrative tasks, creating capacity for the social work expert to develop a bespoke social care plan for service users. Even then, it is important to recognise that some seemingly administrative tasks offer additional value. Andrew Reece of the British Association of Social Workers has commented that time spent working on notes can also be an opportunity for reflection on a case.
“However, we shouldn’t lose sight of the real challenges in current practice when analysing the risks of AI integration. A social worker attempting to prepare notes manually in the evening, exhausted by a large caseload, will not be well placed for reflective practice.
“Part of good planning for AI integration is to consider where current practices generate less obvious value and how that value can be built into new systems, while also delivering efficiency.
“This means AI-augmented social work should be intentionally designed to include opportunities for reflective and person-centred practice – something that can only be delivered by expert social workers.”
Contact
Dan Robinson
PR & Communications Manager
Dan.Robinson@brownejacobson.com
+44 (0)330 045 1072