First published on The MJ.
The integration of artificial intelligence (AI) into local government operations is no longer a question of ‘if’ but ‘when’ and ‘how’. James Arrowsmith and Anja Beriro explain how councils can ensure AI systems bring efficiency gains and improved service delivery.
In the Local Government Association’s 2025 State of the Sector report on AI, just over half of respondents were at the beginning of their AI journey, with nearly a quarter developing capabilities.
Yet the path from initial concept to successful, compliant AI deployment is fraught with legal, regulatory and practical challenges; the Massachusetts Institute of Technology reports that 95% of enterprise AI projects fail.
Over the past 18 months, our firm has worked intensively with public sector organisations, technology providers and regulatory bodies to understand what makes AI projects succeed – or fail. Through direct project experience and a series of roundtable discussions, we've developed a comprehensive framework for successful AI deployment and management.
Beyond the procurement phase
A conventional project lifecycle might start with specification, procurement and contracting. All are critical, but we found that organisations focusing solely on the procurement phase set themselves up for difficulties down the line. The most successful AI deployments share a common characteristic: they began with rigorous strategic planning focused on organisational outcomes, long before any specification or tender documents were drafted.
The question ‘what are we actually trying to achieve?’ sounds deceptively simple, yet we've encountered numerous projects where different stakeholders held fundamentally different assumptions about desired outcomes.
AI directed at adult services, for example, may be seen as a tool for prevention, safeguarding, workforce management or cost efficiency. Without clear alignment on those outcomes, the project is at risk from the outset.
Equally important is understanding organisational readiness. To support AI deployment, authorities require data governance infrastructure, appropriate information security protocols, and internal expertise for effective oversight of AI vendors.
These foundational issues must be addressed before procurement begins, not discovered as problems during implementation.
Regulatory complexity challenge
Local government lawyers are accustomed to navigating complex regulatory frameworks, but AI introduces layers of compliance obligation that intersect in novel ways.
Data protection law is the obvious starting point – every AI project will require careful data compliance work and consideration of lawful bases for processing. However, the regulatory landscape extends far beyond the UK General Data Protection Regulation.
Sector-specific regulations may apply depending on the technology use case – care settings trigger different considerations than AI systems deployed for environmental monitoring or housing allocation.
Many organisations underestimate the cross-border data transfer implications of AI systems. Even when procuring from UK-based vendors, the underlying AI models may involve data processing in other countries, requiring transfer mechanisms that many procurement teams haven't considered.
And what about AI used by suppliers? The global nature of AI supply chains means seemingly domestic projects can have significant international law dimensions.
Stakeholder management: A critical success factor
One of the most striking findings is the extent to which stakeholder management determines AI project outcomes. This isn’t simply about communications plans and consultation exercises, though these matter. The deeper challenge lies in addressing the legitimate concerns of employees, service users and the wider community about AI deployment.
Trade unions and staff representatives have valid questions about how AI systems will affect employment, working conditions and professional autonomy. Service users want to understand how automated decisions are made and whether human oversight exists. Elected members need assurance that AI systems align with the authority's values and democratic accountability.
These are fundamental questions that must be addressed by legal and governance frameworks. Employment law implications, equality duties, transparency obligations, and public law principles all come into play. The public sector lawyers we've worked with who grasp this early are far better positioned to support successful AI deployment.
Ongoing nature of AI governance
Perhaps the most significant mindset shift required is recognising that AI projects don't end at go-live. Unlike traditional IT systems that remain relatively static once deployed, AI systems evolve.
Models are retrained, algorithms are updated, system behaviour changes over time and people find new use cases. This creates ongoing compliance obligations that many organisations haven't planned for.
Authorities must determine who monitors AI system performance for bias or drift, what processes allow a rapid response to regulatory change, how relationships with AI vendors will be managed, and how to exit an AI system when it is no longer fit for purpose.
These issues point to the need for managed services and continuous governance – areas where legal input is essential but often overlooked in initial project planning.
The most mature organisations we've worked with treat AI governance as a permanent function, not a project phase.
Contact
James Arrowsmith
Partner
james.arrowsmith@brownejacobson.com
+44 (0)330 045 2321
Anja Beriro
Partner
anja.beriro@brownejacobson.com
+44 (0)115 976 6589