
AI adoption without safeguards: A growing risk for insurers

30 October 2025
Jeanette Flowers

We are seeing an uptick in the use of artificial intelligence (AI) tools in business: companies and organisations are routinely adopting AI bots and increasingly integrating AI into standard practices.

This step into the future, albeit exciting, comes with risks. Moody’s new survey has found that nearly a quarter of the businesses surveyed have no rules in place to govern the safe use of AI tools.

The survey asked almost 2,000 organisations how they are safeguarding AI in the workplace. It showed that 22% of those organisations have no policies in place, leaving them “vulnerable to data breaches and loss of competitive advantage”.

Data breach, supply chain and cybersecurity risks

Public AI tools such as OpenAI’s ChatGPT or Google’s Gemini often process data on external servers. Should companies submit proprietary information into such tools, they risk data and confidentiality breaches, exposure of sensitive data and reputational harm.

These third-party software providers are often intertwined in a complex network of vendors and suppliers, so a vulnerability in any one member’s defences can have serious consequences that ripple through the entire supply chain.

Moody’s research also showed that many of the organisations they rate “are falling victim to cyberattacks, primarily owing to indirect incidents via third-party suppliers, partners or service providers”.

Despite these dangers, Moody’s survey revealed that 14% of organisations have never reviewed their vendors’ cybersecurity practices, and that defences against ransomware remain “patchy”: only 78% of organisations scan their backup data for ransomware or other malware.

What this means for insurers

In the current climate, where cyberattacks are rife and the use of AI tools is on the rise, it is imperative that internal policies are in place to mitigate such risks.

Insurers should take care when writing cyber cover to ensure that AI risk has been considered appropriately, along with other lines, such as PI and MLP, which are also likely to be exposed. Insurers may also want to review their pre-inception questionnaires and underwriting criteria to take account of the practices that insureds have in place (or not, as the case may be!).

Contact


Jeanette Flowers

Claims Handler

Jeanette.Flowers@brownejacobson.com

+44 (0)330 045 2178


Tim Johnson

Partner

tim.johnson@brownejacobson.com

+44 (0)115 976 6557

