Proposal forms and question sets:

Questions professional indemnity insurers should be asking about their clients' AI usage

26 February 2026
Joanna Wallens

As the use of artificial intelligence (AI) in business operations continues to grow, insurance policies are increasingly picking up AI-related risks. AI has become a key tool for professionals, helping them complete tasks faster and more efficiently.

However, its use is not without risks. These risks include AI inaccuracy, intellectual property infringement and cyber vulnerability.

Professional indemnity insurance claims are often long-tail, meaning there can be a substantial gap between when an incident occurs and when a claim is made. Widespread use of AI technologies is still new – and the risk of future claims cannot be dismissed. For insurers, understanding how clients use AI has become essential to proper risk assessment and pricing.

Core questions for all professional services clients

1. AI implementation and scope

Insurers may want to ask questions about the specific technology their clients are using and what it is doing. For example:

  • "Where is AI being implemented within your business?" – the aim of this question is to understand the extent of the AI use and what the AI is doing.
  • "What is AI used for in your business?" / "Which of your business activities do you use AI for, or to assist with?" – the aim of this question is to understand whether AI is used for routine, low-risk administrative tasks or for higher-risk functions such as decision support for professional advice or for critical systems.
  • "How is AI integrated into your processes?" – this helps determine whether the AI is fully embedded or used only for specific tasks.

2. Data usage, handling and security

Data governance is a significant risk area for professional indemnity insurers to assess. Questions insurers may want to ask about data usage, handling and security include:

  • “Are you using AI for processing or handling customer data? If so, how is the data used and stored?”
  • "How is data being used to train your AI models?"
  • "Have you considered the risk of intellectual property infringement related to AI training data or outputs?" 
  • For clients using third-party AI solutions: "What due diligence have you done on the vendor? How does the vendor handle data and mitigate risks?"
  • For clients using third-party AI solutions: "What measures are in place to protect data when using third-party AI systems?" Using third-party systems can introduce cybersecurity vulnerabilities.
  • “Have you adapted existing privacy policies, security protocols and technical and organisational security measures to account for AI usage? If so, how?" This can demonstrate the level of commitment the business has to responsible AI practices.
  • "What governance and human oversight will be used? Who is responsible for AI outcomes, and is there always a human reviewer for important decisions or work product? What validation and monitoring will be employed?"
  • "What are the liability provisions in your contracts with your suppliers of AI software?" This is important so that insurers understand the extent of any recovery rights they may have if they cover a claim caused by faulty AI. Most AI software providers significantly limit their liability. (Note that PI wordings commonly contain exclusions for situations where recovery rights have been limited by contract; if the insurer does not intend to exclude AI use in such a situation, an endorsement may need to be added to the wording.)

Law firms

Law firms are using AI for various applications, including drafting documents, checking legal documents for drafting errors and conducting legal research. Examples of potential questions include:

  • “What mechanisms does the firm have in place to monitor the use of AI to avoid errors and inaccuracies, particularly AI "hallucinations"?”
  • "Does the firm obtain clients' approval and consent to use AI in the delivery of legal services?" / "How do you inform clients about the use and involvement of AI in their case?"
  • “Have you established protocols to ensure human oversight and review of all AI-generated legal outputs before submission to courts or clients?”

There have been a number of high-profile cases of AI-hallucinated case citations being submitted to court.

Construction professionals

Construction professionals are using AI to streamline projects: compiling paperwork such as bids and reports, scheduling and tracking timelines, and assisting with procurement. Examples of questions to ask clients about their AI use include:

  • “What checks are in place to ensure AI tools recognise material compatibility issues, such as when materials that are safe in isolation cannot be used in conjunction due to chemical or physical interactions?” 
  • “What processes exist to validate AI recommendations regarding design, specifications and to ensure compliance with all relevant building codes, safety standards and planning regulations?” 
  • “How do you verify AI-generated scheduling and cost estimates against real-world constraints and historical project data?”
  • “How do you validate AI-generated property valuations against comparable market data and professional judgement? How do you ensure AI valuations account for local market conditions, upcoming infrastructure developments, and neighbourhood factors?”

Examples of potential claims include:

  • An AI tool may neglect practical design considerations such as emergency exits or accessibility requirements. 
  • While certain materials may be safe to use in isolation, an AI tool may fail to recognise that they can't be used in conjunction, due to chemical or physical interactions.

Accountants

The Financial Reporting Council issued landmark guidance in June 2025 on the use of AI in audit. The findings of its thematic review of the processes and controls in place at the six largest audit firms may also be of interest to underwriters.

In July 2025, the ICAEW updated its Code of Ethics, adding new sections on professional competence, confidentiality and managing ethical threats arising from the use of technology. The update highlights the potential risks new technologies pose for the profession and stresses the importance of the fundamental principles of professional competence and due care.

Questions will vary depending on the business activities in which AI is being used. Examples of questions insurers could ask their accountant clients include:

  • “How do you ensure AI tools correctly interpret and apply current tax legislation, accounting standards, and regulatory requirements?”
  • “What processes verify the accuracy of AI-generated financial statements, tax returns, and audit reports?”
  • “What controls exist to prevent AI from overlooking material misstatements or fraud indicators during audit procedures?”
  • “What human oversight exists for AI-assisted audit sampling, risk assessments, and materiality judgements?”
  • "How does the firm manage ethical threats in audits arising from the use of technology?" The updated ICAEW Code of Ethics now clarifies (among other technology-related ethical threats) the circumstances in which firms and network firms may not provide audits because a close business relationship with an audit client, arising from the purchase of goods and services (including the licensing of technology), is of such a nature or magnitude that a self-interest threat is created.

Examples of potential claims include:

  • Fabricated quotes and citations in reports.
  • Accuracy and reliability issues due to low-quality training data.

Conclusion

This list is intended to be non-exhaustive, and further follow-up questions specific to the relevant client may arise. As AI models develop, parts of the above lists may also need to be adapted. Not all of the questions are relevant to every professional within a category: relevance depends on the business activities the client is involved in and which of those activities the client uses AI for.

Please also see our article on Silent AI: The risk of unintended consequences and a more general (non-professional indemnity focused) article on questions insurers may want to consider asking their clients about their AI usage.

Contact


Joanna Wallens

Associate

joanna.wallens@brownejacobson.com

+44 (0)330 045 2272


Tim Johnson

Partner

tim.johnson@brownejacobson.com

+44 (0)115 976 6557

