Could AI bias increase underinsurance?
Artificial intelligence (AI) could deepen existing inequalities in insurance access and affordability. Insurers are increasingly deploying AI in their underwriting systems. There is concern that, if not deployed correctly, this could leave vulnerable populations further marginalised from essential financial protection.
Difficulty in obtaining affordable cover for the full extent of a risk may tempt insureds to take out insufficient cover. Although underinsuring means lower premiums, the economic loss following a claim can far outweigh the initial savings, exacerbating financial difficulties.
Jarrod Johnson, director of Scenario Risk Partners, was recently quoted in Insurance Post as saying:
“While AI has the potential to rapidly speed up processes within the industry, it isn’t fundamentally challenging those processes.”
There is a risk that AI, with its reliance on historical data and pattern recognition, will only exacerbate underinsurance: driving higher premiums for some businesses and individuals, or simply making it quicker to decline others seeking cover.
Historical data as a source of discrimination
The most prevalent source of bias stems from historical data itself. Past human decisions, reflecting societal biases and structural inequalities, are embedded in the datasets used for training AI models. The algorithm then learns and replicates these patterns without understanding their discriminatory nature.
For instance, if historical insurance data shows higher claim rates or default rates in certain communities - not because residents are inherently riskier, but because of issues like limited investment and economic opportunity - an AI system trained on this data is likely to assign lower risk scores or higher premiums to applicants from those areas, irrespective of their individual circumstances.
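To illustrate the mechanism, here is a minimal sketch using entirely synthetic, hypothetical data (not any insurer's actual model or dataset): a simple risk model is trained on historical claims in which claim frequency is partly driven by an area-level effect rather than individual risk, and the model then scores two otherwise identical applicants differently depending on where they live.

```python
# Minimal synthetic illustration: a model trained on historically biased
# claims data reproduces the area-level disparity embedded in that data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: area A (1) vs area B (0), plus a genuine
# individual risk factor distributed identically in both areas.
area = rng.integers(0, 2, n)
individual_risk = rng.normal(0, 1, n)

# Historical claims: driven partly by individual risk, but also by an
# area effect (e.g. under-investment) unrelated to any one applicant.
p_claim = 1 / (1 + np.exp(-(0.5 * individual_risk + 1.0 * area - 1.5)))
claims = rng.binomial(1, p_claim)

model = LogisticRegression().fit(np.column_stack([area, individual_risk]), claims)

# Two applicants with identical individual risk, different areas:
same_person_two_areas = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(same_person_two_areas)[:, 1])
# The area-A applicant is scored as higher risk purely because of where
# they live - the model has learned the historical pattern, not the person.
```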
The proxy variable problem
Even when insurers exclude explicitly protected characteristics such as race, gender, or age from their algorithms, AI systems can identify and rely upon seemingly neutral variables that serve as proxies for these protected classes. Common examples include postcodes, credit information, education level, occupation, and even the colour of a car or patterns of late-night driving.
These proxy variables can act as surrogates for protected characteristics, producing unfair discrimination that is difficult to detect.
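A simple way to see the proxy effect, again as a hypothetical sketch on synthetic data, is to exclude the protected attribute from training altogether and check whether a correlated "neutral" variable (here, a made-up postcode-derived score) still carries the disparity through to the model's outputs.

```python
# Hypothetical sketch: excluding a protected attribute does not help if a
# "neutral" variable (here, a postcode-derived score) acts as its proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

protected = rng.integers(0, 2, n)                    # excluded from training
postcode_score = protected + rng.normal(0, 0.5, n)   # correlated proxy
other = rng.normal(0, 1, n)

# Historical outcome carries the same structural disparity as before.
p = 1 / (1 + np.exp(-(0.5 * other + 1.0 * protected - 1.0)))
outcome = rng.binomial(1, p)

# Train on "neutral" variables only - the protected attribute is excluded.
X = np.column_stack([postcode_score, other])
model = LogisticRegression().fit(X, outcome)
scores = model.predict_proba(X)[:, 1]

# The disparity survives, because the proxy carries the protected signal.
print("mean risk score, group 0:", scores[protected == 0].mean())
print("mean risk score, group 1:", scores[protected == 1].mean())
```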
Feedback loops and compounding disadvantage
Worse still, biased AI decisions create new biased data, establishing feedback loops that can reinforce discrimination over time. If an algorithm unfairly denies insurance applications or charges higher premiums to individuals from certain groups, those individuals will have less positive insurance history data, further reinforcing the algorithm's bias against them in future assessments. This can create a self-perpetuating cycle of disadvantage.
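The compounding effect can be sketched as a small simulation with purely illustrative, hypothetical parameters: in each round, applicants scored above a threshold are approved and build positive history, declined applicants do not, and the next round's decisions are made on the updated records.

```python
# Illustrative feedback-loop simulation (hypothetical parameters): biased
# decisions produce less positive history, which reinforces the bias.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000

# Two groups with identical underlying risk; group B starts with slightly
# less recorded positive insurance history (e.g. past under-provision).
history = {"A": np.full(n, 1.0), "B": np.full(n, 0.8)}

for round_no in range(1, 6):
    rates = {}
    for group, h in history.items():
        # The scoring rule rewards positive history (plus noise).
        score = h + rng.normal(0.0, 0.2, n)
        approved = score > 0.9
        # Approved applicants build positive history; declined applicants
        # go uninsured and their recorded history weakens.
        history[group] = h + np.where(approved, 0.1, -0.05)
        rates[group] = approved.mean()
    print(f"round {round_no}: approval A={rates['A']:.2f}, B={rates['B']:.2f}")
# Group B's approval rate stays persistently lower and the gap tends to
# widen, even though the two groups' underlying risk is identical.
```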
Conclusion
Artificial intelligence is designed by humans and trained on human data. It therefore holds a mirror up to humanity, reflecting society's patterns, biases and values (as explored in the Daedalus essay 'Mirror, Mirror, on the Wall, Who's the Fairest of Them All?'). A key challenge for the insurance industry is to build systems that do not reflect those biases or perpetuate historical disadvantage. Getting this right is key to the fairness of insurance markets.
Tim Johnson
Partner
tim.johnson@brownejacobson.com
+44 (0)115 976 6557