AI modelling biases in quote engines

16 May 2023
Kay Chand

The Equality Act 2010 defines protected characteristics as age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation.

Subject to some very limited exemptions and only in certain circumstances, it is generally unlawful for premiums to be calculated based on protected characteristics.

However, in March 2022, Citizens Advice urged the Financial Conduct Authority to investigate why customers from ethnic minority backgrounds were paying higher premiums than white customers in the motor insurance market. The charity raised the alarm again earlier in 2023.

We understand that the insurance industry does not collect data on race or ethnicity (at least not in order to calculate insurance premiums).

Yet a disparity persists between the motor insurance premiums paid by customers from ethnic minority backgrounds and those paid by white customers.

There is evidence to suggest that a greater proportion of people from ethnic minority backgrounds live in areas of poverty. It may be that the algorithms assessing the postcodes of those areas have calculated a higher risk of crime or other variables that affect pricing. Certain organisations dispute these explanations, stating that ethnic minorities still pay a premium over and above the poverty premium.
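This proxy effect can be illustrated with a minimal sketch. Everything here is invented for illustration: the quote formula, the postcode loadings and the demographic mix are not taken from any real insurer's model. The point is simply that a premium gap between groups can emerge even though ethnicity is never an input to the calculation.

```python
def quote(base_premium: float, postcode_loading: float) -> float:
    """Hypothetical quote engine: premium is a base rate uplifted by a
    postcode-level risk loading (e.g. derived from local crime statistics).
    Ethnicity is not an input."""
    return round(base_premium * (1 + postcode_loading), 2)

BASE = 500.0

# Synthetic policyholders: area "A" carries a higher loading and, in this
# invented mix, a higher proportion of residents from an ethnic minority
# background. The group label is used only to measure the outcome, never
# to price the risk.
policyholders = [
    {"area": "A", "loading": 0.30, "group": "minority"},
    {"area": "A", "loading": 0.30, "group": "white"},
    {"area": "B", "loading": 0.05, "group": "white"},
    {"area": "B", "loading": 0.05, "group": "white"},
]

by_group: dict[str, list[float]] = {}
for p in policyholders:
    by_group.setdefault(p["group"], []).append(quote(BASE, p["loading"]))

for group, premiums in by_group.items():
    print(group, sum(premiums) / len(premiums))
```

On these invented figures the average premium for the minority group comes out higher than for the white group, purely because postcode correlates with group membership in the synthetic data.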

In an industry that is moving towards increased granularity and more personalised insurance premiums, the overriding principle of fairness must surely be paramount.

As the world becomes more reliant on artificial intelligence, it is important to remember that AI is still an algorithm-based piece of digital technology, albeit one with the ability to learn. Over time, what the algorithm learns may become skewed by data fed into it along the way that already contains inherent biases.

In some ways, AI can be thought of as a young child which grows and develops through learning. While the adult will have instilled the core foundations in the child at an early stage so that it can develop the abilities to function in society, from time to time the adult will need to step in to keep the child on track: for example, if the child eats too many sweets, speaks to a stranger or doesn’t hold cutlery correctly.

In the same way, and unlike traditional digital solutions, an AI digital solution should be subject to regular and rigorous testing to ensure that its results are not skewed or affected by bias. From time to time the AI digital solution may need to be fed with up-to-date synthetic data to ensure an accurate assessment of risk and to prevent unintended biases creeping in. This data will need to track changes in the real world so that it reflects current risk factors.
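One way such regular testing might look in practice is a recurring fairness check run against a labelled synthetic test set. The sketch below is illustrative only: the group labels, quoted premiums and the 5% tolerance are all assumptions, not a regulatory standard or any insurer's actual methodology. It compares mean quoted premiums across groups and flags the run when the relative gap exceeds the tolerance.

```python
from statistics import mean

def premium_disparity(quotes: list[tuple[str, float]]) -> float:
    """Relative gap between the highest and lowest group mean premium."""
    groups: dict[str, list[float]] = {}
    for group, premium in quotes:
        groups.setdefault(group, []).append(premium)
    means = [mean(v) for v in groups.values()]
    return (max(means) - min(means)) / min(means)

# Synthetic quotes produced by the system under test (invented figures).
test_quotes = [
    ("group_x", 640.0), ("group_x", 660.0),
    ("group_y", 520.0), ("group_y", 530.0),
]

TOLERANCE = 0.05  # assumed acceptance threshold, for illustration only
gap = premium_disparity(test_quotes)
if gap > TOLERANCE:
    print(f"FAIL: premium disparity of {gap:.1%} exceeds tolerance")
else:
    print("PASS")
```

A check of this kind could run whenever the model is retrained or the synthetic data is refreshed, so that drift towards unintended bias is caught early rather than discovered in the live book.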

When outsourcing the provision of quote management systems, insurance providers should consider asking suppliers to what extent AI technology is incorporated within the solution and when it was last tested. For an in-house solution, insurance providers should build such questions into their governance processes. They should also build appropriate checks into their acceptance testing processes so that any issues with the algorithm used by the AI digital solution are identified early.

The supplier of the AI digital solution should also be required to commit to ongoing, regular testing, to provide the results to the insurance provider, and to develop and implement an associated change plan to cater for any changes to the algorithm that may need to be released.

Insurance providers may also want to build measures for monitoring accuracy and unintended bias into the service level regimes with their suppliers. They may also want to consider putting in place rights and remedies to protect themselves against any sanctions that a regulator may ultimately impose where issues are found within the procured AI digital solution that result in unfair or inequitable premiums.

It is important for the insurance industry to take an ethical approach to their digital strategies to ensure trust and confidence in the market.

The EU is currently considering legislation to govern the use of AI digital technology. It is only a matter of time before the industry is required to have certain processes and protections in place around its use of AI digital technology.

Key contact


Kay Chand

Partner

Kay.Chand@brownejacobson.com

+44 (0)330 045 2498
