Schools as guardians of the digital playground: Is AI the missing piece?
First published on SecEd.
Imagine a child sitting in an English class, studying Romeo and Juliet. Later that afternoon, alone with a school device, they search for information about suicide.
A filtering system flags it – briefly – before it disappears into a queue of thousands of similar alerts, because every pupil in that year group is studying the same text and generating similar searches. No one follows up. No one connects the dots.
Now imagine a different school. One where that same search is automatically cross-referenced with that child's existing vulnerability profile. A designated safeguarding lead (DSL) receives a prioritised alert within the hour. A conversation happens. Support is offered.
The difference between those two schools is not funding, or staffing ratios, or even policy. It is data integration – and increasingly, artificial intelligence is the engine making that integration possible.
The fragmentation problem
Every DSL carries an instinct about which children are most at risk. But instinct, however sharp, has limits. What AI offers is an evidence base – one capable of identifying patterns across datasets that no human reviewer could process alone.
The challenge is that most schools are not yet set up to harness it. Schools typically operate multiple platforms: management information systems, filtering and monitoring tools, and child protection reporting systems. These platforms rarely communicate with one another. Data sits in silos. Warning signs can be delayed or go unconnected.
This fragmentation is one of the most significant barriers to effective safeguarding in schools across the country today. A child might trigger concern across three separate systems on the same day, and yet no single professional immediately sees the full picture.
AI can bridge those silos. But the technology alone is not sufficient – the systems must be configured to talk to each other.
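For the technically minded, below is a minimal sketch of what that bridging might look like in practice. The file names, field names and scoring rule are illustrative assumptions invented for this example, not features of any real filtering or safeguarding product; the point is simply that an alert becomes far more meaningful once it is read alongside what the school already knows about the child.

```python
# Hypothetical sketch: cross-reference filtering alerts against a
# vulnerability register so that the highest-risk alerts surface first.
# "monitoring_alerts.csv" and "vulnerability_register.csv" are assumed
# exports; the column names are made up for illustration.
import csv

def load_register(path):
    """Map each pupil ID to the set of vulnerability factors on record."""
    register = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            register.setdefault(row["pupil_id"], set()).add(row["factor"])
    return register

def triage(alerts_path, register):
    """Rank alerts so pupils with known vulnerabilities appear first,
    rather than sitting in an undifferentiated queue."""
    with open(alerts_path, newline="") as f:
        alerts = list(csv.DictReader(f))
    # Score each alert by the number of vulnerability factors recorded
    # for that pupil in the safeguarding system.
    alerts.sort(key=lambda a: len(register.get(a["pupil_id"], set())),
                reverse=True)
    return alerts

if __name__ == "__main__":
    register = load_register("vulnerability_register.csv")
    for alert in triage("monitoring_alerts.csv", register)[:10]:
        print(alert["pupil_id"], alert["search_term"])
```

In this toy example, the search from the opening scenario would rise to the top of the DSL's queue instead of vanishing among thousands of near-identical alerts.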
The opportunity hidden in plain sight
Here is something that may surprise many school leaders: the AI capability may already exist within platforms you are paying for.
Some filtering and monitoring systems increasingly incorporate functionality that can identify when children are accessing platforms such as ChatGPT or entering search terms that warrant safeguarding attention. Management information systems are similarly evolving, with AI-driven analytics becoming standard features rather than premium add-ons.
The question is not whether schools have access to these tools. In many cases, they already do. The question is whether DSLs know what those tools can do – and whether the infrastructure can be developed to make them work together.
When data from multiple systems can be analysed collectively, the results can be genuinely transformative. Analysis across a multi-academy trust might reveal, for example, that children aged seven to eleven who have special educational needs, receive free school meals and are looked after are disproportionately likely to be victims of bullying. Recognising that pattern early allows schools to intervene before harm occurs – not after.
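Again purely for illustration, the hypothetical sketch below shows how such a cohort-level pattern might be surfaced from a merged dataset. The file and column names are assumptions made up for this example, and any real analysis would of course need a lawful basis and appropriate anonymisation.

```python
# Hypothetical sketch: surface a cohort-level disproportionality from a
# merged dataset. "pupils.csv" and its column names are invented for
# illustration only.
import pandas as pd

pupils = pd.read_csv("pupils.csv")  # one row per pupil

# Flag the cohort described above: ages 7-11, SEN, FSM, looked after.
cohort = (
    pupils["age"].between(7, 11)
    & pupils["sen"]
    & pupils["fsm"]
    & pupils["looked_after"]
)

# Compare bullying victimisation rates inside and outside that cohort.
rates = pupils.groupby(cohort)["bullying_victim"].mean()
print(rates)  # a marked gap between the two groups is the pattern to investigate
```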
Data can also illuminate physical environments. If incident reports consistently reference the same corridor, the intervention is straightforward: more staff presence in that location. AI does not replace human judgement. It directs it.
Understanding risk without paralysis
AI also presents genuine risks – and school leaders must engage with them honestly.
Among the most serious is the creation of deepfake imagery: AI-generated content that superimposes a child's face onto explicit material. This is not a theoretical concern. It is happening, and many parents remain entirely unaware that the technology exists, let alone how it might be used against their children.
The regulatory framework has not yet caught up. In the interim, schools can fill that gap. Educating parents about deepfakes, about the misuse of generative AI, and about online safety more broadly is a legitimate and important safeguarding function.
But the response to AI risk should never be simple prohibition. Telling children they cannot use a technology rarely prevents them from using it – it simply means they use it without the skills to do so safely. The more effective approach is to build curiosity and critical thinking into online safety education: helping children understand how these tools work, what risks they carry, and how to navigate them responsibly.
The same principle applies to social media and online gaming. These are not simply distractions. For many young people, they serve important social functions. Blanket bans ignore this reality and, in doing so, undermine the credibility schools need to have honest conversations with children about risk.
Implementation: where to begin
For school leaders considering how to move forward, the priorities are clear.
First, audit what you already have. Before investing in new tools, understand the AI functionality within your existing platforms and whether your systems can be configured to share data.
Second, invest in staff training. AI-generated alerts are only useful if staff know how to interpret and respond to them appropriately. The technology must be matched by human capability.
Third, establish clear governance. What data can be analysed? Who has access? How are decisions made? Human judgement must remain central – AI informs, it does not decide.
Fourth, bring parents with you. Transparency is not optional. Parents who understand how AI is being used to protect their children become partners in safeguarding. Parents who are left in the dark can become barriers to it.
The moment of choice
The integration of AI into school safeguarding is not a future possibility. It is already happening, in schools across England and Wales, right now. The platforms many schools use every day already carry this capability.
For DSLs and school leaders, the question is not whether to engage with AI. It is whether to engage with it thoughtfully and with intent – or to leave that work to chance.
Done well, AI does not replace the human at the heart of safeguarding. It equips them. It gives them the picture they need to fulfil their role effectively.
The digital playground needs informed, strategic guardians. The tools to become one are already within reach.
Dai Durbridge is a partner and Vicky Wilson is a senior associate at Browne Jacobson, specialising in education law and safeguarding.
Contact
Dai Durbridge
Partner
dai.durbridge@brownejacobson.com
+44 (0)330 045 2105