
How can generative AI assist the insurance industry?

31 August 2023
Tim Johnson

Generative Artificial Intelligence (AI) utilises machine learning models that learn from data to produce human-like output. Large language models have the potential to assist insurers: they can be given information to grasp, summarise, analyse or translate, and can create new output based on the instructions and data set they are provided.

How can AI help insurers?

By analysing historical data, identifying relevant patterns and predicting future risks, AI may make risk research more efficient and help underwriters make more informed decisions. The technology also has the potential to support client communications, as a model can be programmed to respond to general queries, allowing conversations about claims, policies and other matters to be handled by the model. With the ability to draft documentation too, AI could reduce the time employees spend on administrative tasks, allowing them to focus on more complex work.

However, AI also poses a number of risks and challenges for insurers, including:

  • Inconsistent outputs – AI will not always produce the same output, even where the same input is provided. This creates conduct risk challenges, as there is an increased risk of unequal customer outcomes
  • Operational resilience – AI is still in its relative infancy. Any insurer using AI as part of its processes must take steps to ensure that those processes are sufficiently resilient to maintain suitable levels of operational continuity
  • Material outsourcing – where AI is provided by a third-party software provider, additional steps will need to be taken to ensure that regulatory requirements relating to material outsourcing are complied with
  • AI is not infallible – there are countless cases of AI producing output that is incorrect. Until AI improves, users should always double check the veracity of its output, which may nullify any efficiency gains from using AI in the first place
  • Bias – AI is trained on historic data sets, which can embed biases, so checks are required to ensure that insureds do not face discrimination, for example where premiums increase as a result of a biased risk prediction

Use of AI by insureds

In addition to the use of AI by underwriters, AI is increasingly used by insureds. Whilst such use may assist an insured’s business, it brings additional risks that may not have been fully contemplated by underwriters. For example, where data is entered into an AI system, privacy, data protection and intellectual property issues may arise if permission to use the data has not been obtained or if the data can be accessed by third parties. In addition, claims relating to fraud and reputational matters may arise from AI’s ability to create realistic documentation and images. It should also be borne in mind that AI’s output can be incorrect, as a recent case highlighted when US lawyers used AI to create court submissions but failed to spot that the output referred to fictitious cases!

As use of these models rises (among both insurers and their insureds), insurers should consider the potential impact from both an internal and an external perspective. In previous editions of The Word, we have considered the risks and opportunities for insurers arising from the AI product boom. To hear further discussion of these issues, listen to our partner Tim Johnson’s interview for the Insuring Cyber podcast.
