Artificial intelligence (AI) has the potential to reshape the insurance sector, yet for many insurers progress remains cautious. Despite widespread experimentation, few have embedded AI at scale in underwriting, claims or pricing. Most are still navigating uncertainty: how to innovate while staying aligned with emerging regulation and ensuring responsible use.
In our work with insurers across both markets, we’ve observed a consistent pattern. A handful of frontrunners are piloting AI-driven underwriting, claims automation or fraud detection. Others have quietly embedded models within their analytical frameworks. But most insurers remain cautious, still experimenting, or waiting for regulatory clarity and for organisational constraints to ease, before scaling up.
That caution is understandable. However, regulatory guidance signals that AI governance is no longer optional: it is becoming a core part of operational resilience and customer trust. Supervisors are no longer asking 'Do you have a policy?' or 'Do you have documentation?'; they are asking 'Show us how it works'.
Some of the qualities that make AI powerful also introduce challenges: bias, opacity, uncertain data provenance, fragmentation and limited explainability. The UK's principles-based, sector-led approach to AI regulation encourages existing regulators such as the FCA and PRA to oversee AI risks within current frameworks.
While there are no new AI-specific rules for UK insurers, there is a strong emphasis on applying existing standards to ensure fairness, transparency, and accountability in AI use. We expect that the regulators will be particularly focused on customer impacts: preventing bias in underwriting, ensuring explainability in automated decisions, and avoiding financial exclusion. At an industry level, the ABI recently issued practical guidance to help insurers implement the government’s five AI principles responsibly, while the FCA continues to support innovation through initiatives like its AI Lab and live testing environments.
Our observation is that many insurers still treat regulatory frameworks as individual compliance streams. However, insurers that view regulatory readiness as a strategic capability, rather than a compliance cost, will move faster. EIOPA, the FCA and the PRA are aligned on this point: governance must be operational, evidenced and proportionate to risk.
From our perspective, insurers that embed AI governance within their operating model are already pulling ahead: they innovate with confidence, engage regulators constructively and maintain customer trust even under scrutiny.
Across the market, we continue to see uneven maturity. When the hype is stripped away, most insurers still sit somewhere between early development and initial coordination on the AI maturity scale.
From our work across the sector, most insurers remain at the earliest stage of that scale, with small but growing pockets of maturity. The barriers include general scepticism, data fragmentation, legacy technology, limited specialist skills, unclear return on investment, and uncertainty about regulatory timing.
A further source of uneven maturity is the variety of ways AI is used across the sector. We see a clear spectrum, from personal experimentation with enterprise versions of Generative AI tools, to embedded AI within enterprise platforms, through to bespoke models in risk, underwriting, claims and operations.
| AI use level | Description | Typical example | Governance implications |
|---|---|---|---|
| Personal/ad-hoc use | Individual use of public generative AI tools for productivity or ideation. | Microsoft Copilot, ChatGPT, Gemini, Claude. | Establish clear acceptable-use policies, guidance on data sensitivity and basic training. |
| Embedded/enterprise tools | AI features integrated into enterprise software. | Microsoft 365 Copilot, AI in GRC, ERP and finance systems. | Manage through vendor risk oversight and ICT controls, e.g. under DORA or in line with NIST. |
| Business process AI | AI-supported core workflows. | Fraud detection, claims triage, and pricing algorithms. | Apply full model governance: documentation, validation, fairness testing and human oversight. |
| Bespoke/proprietary AI and LLMs | Custom-built or fine-tuned models trained on proprietary data. | Internal underwriting or document analysis LLMs. | Treat as 'high-risk' with full lifecycle control, explainability, and post-market monitoring. |
We expect that all four levels will eventually coexist in many organisations, and where this is the case, they should not be governed as if they present identical risk. That one-size-fits-all approach is unsustainable. Instead, proportionate governance should dictate a lighter approach for low-risk tools and a rigorous approach for systems that influence customers or capital.
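One way to operationalise that proportionality is to encode the control set each use level attracts, so that approval workflows look controls up rather than apply a blanket standard. The sketch below is a minimal illustration based on the four levels in the table above; the control names and the fail-closed default are assumptions for the example, not a prescribed taxonomy.

```python
# Illustrative mapping of the four AI use levels to proportionate controls.
# Control names are examples drawn from the table above, not a standard.
GOVERNANCE_BY_USE_LEVEL: dict[str, list[str]] = {
    "personal_adhoc": [
        "acceptable-use policy", "data-sensitivity guidance", "basic training",
    ],
    "embedded_enterprise": [
        "vendor risk oversight", "ICT controls (e.g. DORA-aligned)",
    ],
    "business_process": [
        "model documentation", "validation", "fairness testing", "human oversight",
    ],
    "bespoke_proprietary": [
        "full lifecycle control", "explainability", "post-market monitoring",
    ],
}

def required_controls(use_level: str) -> list[str]:
    """Return the control set for a use level; unknown levels fail closed
    by attracting the most rigorous set."""
    return GOVERNANCE_BY_USE_LEVEL.get(
        use_level, GOVERNANCE_BY_USE_LEVEL["bespoke_proprietary"]
    )

print(required_controls("embedded_enterprise"))
```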
AI governance is not bureaucracy. It is what allows insurers to experiment safely and to scale confidently. Mature insurers will innovate faster precisely because their guardrails are clear and embedded. Everyone knows how AI use is approved, monitored and challenged. Governance maturity means building control into everyday operations, not just publishing a policy.
AI governance maturity can be viewed through seven interrelated dimensions that together define how governance functions in practice.
As organisations mature, they should apply a consistent lifecycle to every AI model, with controls embedded into workflows, for example a trigger for human-in-the-loop review (a minimal sketch follows the table below). This shifts governance from an annual review exercise into an ongoing management discipline.
| Stage | What good looks like |
|---|---|
| Ideation and Experimentation | Innovation sandbox using masked data, defined time limits and ethical review. |
| Design and Build | Standard templates covering purpose, data lineage, bias testing and explainability. |
| Validation and Approval | Formal second-line challenge, risk tiering and mapping of human oversight. |
| Deployment and Monitoring | Continuous performance monitoring and regulatory-compliant incident playbooks. |
| Review and Assurance | Regular audits, lessons learned, and board-level reporting. |
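To make the lifecycle concrete, the sketch below shows one way a human-in-the-loop trigger might be wired into a deployment workflow. It is a minimal illustration under stated assumptions, not a reference implementation: the risk tiers, the confidence floor and the `ModelDecision` structure are all invented for the example.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = 1      # e.g. internal productivity tooling
    MEDIUM = 2   # e.g. claims triage suggestions
    HIGH = 3     # e.g. underwriting or pricing decisions

@dataclass
class ModelDecision:
    model_id: str
    risk_tier: RiskTier
    confidence: float        # model's own confidence score, 0-1
    affects_customer: bool   # does the output influence a customer outcome?

def needs_human_review(decision: ModelDecision, confidence_floor: float = 0.85) -> bool:
    """Illustrative routing rule: escalate to a human reviewer when the model
    is high-risk, or when a customer-affecting output falls below a confidence floor."""
    if decision.risk_tier is RiskTier.HIGH:
        return True  # high-risk models always get human oversight
    if decision.affects_customer and decision.confidence < confidence_floor:
        return True  # low-confidence customer decisions are escalated
    return False

# Example: a claims triage suggestion with middling confidence is escalated.
triage = ModelDecision("claims-triage-v2", RiskTier.MEDIUM, 0.71, affects_customer=True)
print(needs_human_review(triage))  # True
```

Embedding a rule like this in the deployment stage is what turns "human oversight" from a policy statement into an operational control that can be evidenced to a supervisor.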
Responsible AI depends on people, from boards and executives who 'set the compass' through to AI engineers who build models and end users who provide 'human-in-the-loop' oversight. Leading firms are investing in organisation-wide AI literacy and developing practical playbooks for underwriters, actuaries and claims teams. They are also promoting a culture of constructive challenge, encouraging staff to question model outputs without fear of blame. Cross-functional roles such as AI champions, super users and responsible AI leads now bridge data science, risk and operations, ensuring governance is embedded where AI is built and used.
Culture is the operating system of governance; without it, the best frameworks remain superficial.
Mature firms will begin to monitor AI systems with defined metrics that link to fairness, reliability and customer outcomes. These indicators turn AI assurance into live MI and create a feedback loop between innovation and control (the sketch after the table below shows how two such metrics might be computed). AI governance will be integrated into broader transformation and resilience programmes, and aligned with other regulatory initiatives, such as Consumer Duty, to avoid duplication and embed AI within the wider operating model.
| Focus area | Example metrics |
|---|---|
| Fairness | Adverse impact ratios, renewal parity, remediation rates. |
| Reliability | Model drift, approval stability, false positives/negatives. |
| Safety (Gen AI) | Jailbreak success rate, prompt leaks, hallucination frequency. |
| Resilience | ICT incident recovery times, vendor failover performance. |
| Customer value | Fair-value scores, complaint overturn rates. |
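As an illustration of how such indicators become live MI, the sketch below computes two of them: an adverse impact ratio for fairness, and a population stability index (a common drift measure) for reliability. The thresholds mentioned in the comments and the synthetic data are assumptions for the example, not prescribed values.

```python
import numpy as np

def adverse_impact_ratio(approved: np.ndarray, group: np.ndarray) -> float:
    """Fairness: approval rate of the protected group divided by that of the
    reference group (the 'four-fifths rule' commonly flags ratios below 0.8)."""
    rate_protected = approved[group == 1].mean()
    rate_reference = approved[group == 0].mean()
    return rate_protected / rate_reference

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Reliability: PSI between a model's training-time score distribution and
    its live scores; values above ~0.2 are often treated as material drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) for empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Example MI snapshot on synthetic data.
rng = np.random.default_rng(0)
approved = rng.integers(0, 2, 1000)   # hypothetical approval decisions
group = rng.integers(0, 2, 1000)      # hypothetical protected-group flags
print(f"Adverse impact ratio: {adverse_impact_ratio(approved, group):.2f}")
print(f"PSI: {population_stability_index(rng.normal(0, 1, 5000), rng.normal(0.1, 1, 5000)):.3f}")
```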
As the sector evolves, adapting operating models to reflect diverse AI use is no longer optional. Integrating AI governance is key to staying competitive and compliant. Download the checklist to explore the critical steps insurers should take to future-proof their operations.
AI governance maturity is not about more committees or documents. It is about building an operating model capable of managing varying AI use, from individual use of Copilot to bespoke LLMs, in a transparent, fair and resilient way.
Across our work, we see AI governance fast becoming a strategic capability rather than a compliance exercise, and achieving this requires clarity, proportionality and evidence. Firms that begin embedding governance early are likely to help shape the benchmark for responsible and explainable AI. Those that wait for certainty risk finding that regulation arrives before their operating model is prepared.
Over the next two years, regulatory focus will move from policies to outcomes. Insurers will need to demonstrate not just that controls exist, but that AI systems behave as intended in real-world conditions. Those preparing now, with embedded governance, effective monitoring and clear accountability, will be ready for that shift. If you need further assistance, please reach out to Buki Obayiuwana or visit our AI-enabled transformation hub for more information.
Download your 'Immediate 10-Point checklist'
Get the checklist now and take the first step toward responsible, scalable AI adoption.