Insurance companies are increasingly wrestling with how to manage the risks posed by artificial intelligence — especially as they rely on AI for underwriting, claims, and customer interactions. Regulators, insurers, and risk managers are working to create frameworks that ensure AI systems are used responsibly, while still enabling innovation in the sector.
One major effort involves pushing for clear liability and governance rules. Insurance broker Aon, for instance, is calling for a national AI policy that encourages transparency and accountability. Its proposal includes requiring AI tools to have audited inventories, clearly defined ownership, and regular performance checks. Governance of this kind is viewed as vital to ensuring AI is both powerful and trustworthy.
At the same time, regulators are keeping close tabs on how insurers use AI. A survey by the National Association of Insurance Commissioners (NAIC) found that most health insurers are already using AI or machine learning in their operations, and many are developing internal governance frameworks to align with regulatory guidelines. Meanwhile, some in the insurance industry argue against overly restrictive AI laws, warning that too much regulation could hamper actuarial science and create unfair barriers.
Experts also expect litigation risk to rise. As insurers lean more heavily on AI, they face potential lawsuits over algorithmic bias, claim-denial errors, or lack of transparency. That pressure is pushing companies to invest not just in AI tools but in robust risk-management practices — essentially treating AI like a high-stakes underwriting line.