
Regulating the future: How insurers can begin to lead on AI governance

Buki Obayiuwana
10/11/2025

Artificial intelligence (AI) has the potential to reshape the insurance sector, yet progress remains cautious. Despite widespread experimentation, few insurers have embedded AI at scale in underwriting, claims or pricing. Most are still navigating the same uncertainty: how to innovate responsibly while staying aligned with emerging regulation.

In our work with insurers across both markets, we’ve observed a consistent pattern. A handful of frontrunners are piloting AI-driven underwriting, claims automation or fraud detection. Others have quietly embedded models within their analytical frameworks. But most insurers remain cautious, still experimenting, or waiting for regulatory clarity or for organisational constraints to ease, before scaling up.

That caution is understandable. However, regulatory guidance is signalling that AI governance is no longer optional. It is becoming a core part of operational resilience and customer trust. Supervisors are no longer asking 'Do you have a policy?' or 'Do you have documentation?'; they are asking 'Show us how it works'. Regulators are increasingly aligned on this point: governance must be operational, evidenced and proportionate to risk.

Point of view 1:
AI governance and regulatory responses should form part of a firm’s overall resilience and risk management framework

Some of the qualities that make AI powerful also introduce challenges such as bias, opacity, data provenance, fragmentation and explainability. The UK's principles-based, sector-led approach to AI regulation encourages existing regulators such as the FCA and PRA to oversee AI risks within current frameworks.

While there are no new AI-specific rules for UK insurers, there is a strong emphasis on applying existing standards to ensure fairness, transparency, and accountability in AI use. We expect that the regulators will be particularly focused on customer impacts, preventing bias in underwriting, ensuring explainability in automated decisions, and avoiding financial exclusion. At an industry level, the ABI recently issued practical guidance to help insurers implement the government’s five AI principles responsibly, while the FCA continues to support innovation through initiatives like its AI Lab and live testing environments.

Our observation is that many insurers still treat regulatory frameworks as separate compliance streams. However, insurers that view regulatory readiness as a strategic capability, rather than a compliance cost, will move faster. EIOPA, the FCA and the PRA are aligned on this point: governance must be operational, evidenced and proportionate to risk.

From our perspective, insurers that embed AI governance within their operating model are already pulling ahead. They will innovate with confidence, engage regulators constructively and maintain customer trust even under scrutiny.

Point of view 2:
Current market maturity is uneven, and most insurers still sit somewhere between early development and initial coordination on the AI governance curve

Across the market, we continue to see uneven maturity. When the hype is stripped away, most insurers still sit somewhere between early development and initial coordination on the AI governance maturity curve.

Ad-hoc/experimental
  • AI use is confined to pilots.
  • Documentation is minimal, and governance is often limited to slide decks.
What good looks like
  • A defined AI strategy and risk appetite, supported by an inventory and clear ownership.

Emerging frameworks
  • Policies and steering groups exist but are inconsistently applied.
  • Collaboration between data science, technology, risk and compliance is limited and often politicised.
What good looks like
  • AI is embedded in the three lines of defence.
  • Board oversight is active.
  • Controls are proportionate to model risk and AI use.

Operationalised
What good looks like
  • AI lifecycle management is fully integrated into business-as-usual.
  • Testing, monitoring, explainability and resilience form part of day-to-day assurance.

From our work across the sector, most insurers remain in the first column, with small but growing areas of maturity. The barriers include general scepticism, data fragmentation, legacy technology, limited specialist skills, unclear return on investment, and uncertainty about regulatory timing.

Point of view 3:
Maturity is also uneven in the variety of ways AI is used across the sector

A further source of uneven maturity is the variety of ways AI is used across the sector. We see a clear spectrum, from personal experimentation with enterprise versions of Generative AI tools, to embedded AI within enterprise platforms, through to bespoke models in risk, underwriting, claims and operations.

AI use level | Description | Typical example | Governance implications
Personal/ad-hoc use | Individual use of public GenAI tools for productivity or ideation | Microsoft Copilot, ChatGPT, Gemini, Claude | Establish clear acceptable-use policies, guidance on data sensitivity and basic training
Embedded/enterprise tools | AI features integrated into enterprise software | Microsoft 365 Copilot, AI in GRC tools, AI in ERP and finance systems | Manage through vendor risk oversight and ICT controls, e.g. under DORA or in line with NIST
Business process AI | AI-supported core workflows | Fraud detection, claims triage and pricing algorithms | Apply full model governance: documentation, validation, fairness testing and human oversight
Bespoke/proprietary AI and LLMs | Custom-built or fine-tuned models trained on proprietary data | Internal underwriting or document-analysis LLMs | Treat as 'high-risk' with full lifecycle control, explainability and post-market monitoring

We expect that all four levels will eventually coexist in many organisations, and where this is the case they should not be governed as if they present identical risk. A one-size-fits-all approach is unsustainable. Instead, proportionate governance should dictate a lighter approach for low-risk tools and a rigorous approach for systems that influence customers or capital.

Moving from policy to practice, building the AI-enabled operating model

AI governance is not bureaucracy. It is what allows insurers to experiment safely and to scale confidently. Mature insurers will innovate faster precisely because their guardrails are clear and embedded. Everyone knows how AI use is approved, monitored and challenged. Governance maturity means building control into everyday operations, not just publishing a policy.

Embedding AI maturity can be viewed through seven interrelated dimensions that together define how governance functions in practice.

Strategy and risk appetite: Setting the compass
A Board that has yet to articulate a clear AI risk appetite risks falling behind. The most mature firms now define where AI can add value, what levels of autonomy are acceptable and which practices are prohibited, for instance emotion recognition or unreviewed auto-declines. Each AI initiative should be mapped to strategic objectives and Consumer Duty outcomes. Without this, AI adoption risks becoming reactive rather than deliberate.
Organisation and accountability: Clear, collaborative but distributed ownership
AI Governance Committees, typically chaired by the CRO or COO, bring together IT, risk, compliance, data and business teams, promoting collective ownership and eliminating debates over who owns AI. Each AI system or use case should have a named owner accountable for its performance, fairness and explainability.
Effective Lines of Defence: Empowered second line
The second line must be equipped to challenge effectively, while Internal Audit provides independent, evidence-based review. Embedding these responsibilities within existing committees ensures AI oversight feels like part of the normal operating rhythm, not an added layer of bureaucracy.
Processes and controls: Embedded controls

As organisations mature, they should apply a consistent lifecycle to every AI model, with controls embedded into workflows, for example a trigger for human-in-the-loop review. This shifts governance from an annual review exercise into an ongoing management discipline.

Stage | What good looks like
Ideation and Experimentation | Innovation sandbox using masked data, defined time limits and ethical review
Design and Build | Standard templates covering purpose, data lineage, bias testing and explainability
Validation and Approval | Formal second-line challenge, risk tiering and mapping of human oversight
Deployment and Monitoring | Continuous performance monitoring and regulatory-compliant incident playbooks
Review and Assurance | Regular audits, lessons learned and board-level reporting
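
As an illustration of an embedded control, the human-in-the-loop trigger mentioned above can be sketched as a simple routing rule. This is a minimal, hypothetical example: the `Decision` fields, tier names and confidence threshold are assumptions for the sketch, not a prescribed control design.

```python
# Illustrative sketch only: a routing rule that escalates model decisions
# for human review. Thresholds and tier names are assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    use_case: str            # e.g. "claims_triage"
    risk_tier: str           # "low", "medium" or "high"
    confidence: float        # model confidence score, 0.0-1.0
    customer_impacting: bool # does the outcome directly affect a customer?

def needs_human_review(d: Decision, confidence_floor: float = 0.85) -> bool:
    """Route a decision to a human reviewer when risk tier, customer
    impact or model confidence breaches the control thresholds."""
    if d.risk_tier == "high":
        return True  # high-risk systems are always reviewed
    if d.customer_impacting and d.confidence < confidence_floor:
        return True  # uncertain, customer-facing outcomes get a second look
    return False

# Example: an uncertain claims decision is escalated; a routine one is not
print(needs_human_review(Decision("claims_triage", "medium", 0.62, True)))   # True
print(needs_human_review(Decision("claims_triage", "low", 0.97, False)))     # False
```

In practice, the escalation would be logged and fed into the monitoring and assurance stages above, so that the volume and outcome of human overrides become MI in their own right.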

Technology and Data: Automated and continuous assurance
There is an emerging market in AI assurance, and many firms will begin to see it as part of their annual audits. AI assurance will evolve from periodic checks to continuous visibility. This should include AI registers that capture every model and vendor system, supported by platforms that provide version control, explainability and bias monitoring. Real-time dashboards should track fairness, drift and resilience, integrate with ICT-risk management systems, and feed relevant MI and metrics through to the appropriate governance committees.
People and Culture: Personal responsibility

Responsible AI depends on people, from Boards and executives who 'set the compass', through to AI engineers who build models and end users who provide 'human-in-the-loop' oversight. Leading firms are investing in organisation-wide AI literacy and developing practical playbooks for underwriters, actuaries and claims teams. They are also promoting a culture of constructive challenge, encouraging staff to question model outputs without fear of blame. Cross-functional roles such as AI champions, super users and responsible AI leads now bridge data science, risk and operations, ensuring governance is embedded where AI is built and used.

Culture is the operating system of governance; without it, the best frameworks remain superficial.

Metrics and Learning: Closing the loop

Mature firms will begin to monitor AI systems with defined metrics that link to fairness, reliability and customer outcomes. These indicators turn AI assurance into live MI and create a continuous feedback loop between innovation and control. AI governance will be integrated into broader transformation and resilience programmes, and aligned with other regulatory initiatives, such as Consumer Duty, to avoid duplication and embed AI within the wider operating model.

Focus area | Example metrics
Fairness | Adverse impact ratios, renewal parity, remediation rates
Reliability | Model drift, approval stability, false positives/negatives
Safety (Gen AI) | Jailbreak success rate, prompt leaks, hallucination frequency
Resilience | ICT incident recovery times, vendor failover performance
Customer value | Fair-value scores, complaint overturn rates
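
As a concrete example of one fairness metric listed above, the adverse impact ratio compares selection rates between groups (the 'four-fifths rule' familiar from fair-lending practice). The figures and the conventional ~0.8 investigation threshold below are illustrative assumptions, not data from any insurer.

```python
# Illustrative sketch: computing an adverse impact ratio on renewal
# decisions. The sample figures and 0.8 threshold are assumptions.

def adverse_impact_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Ratio of the lower group selection rate to the higher one;
    values below ~0.8 conventionally warrant investigation."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Example: group A renewed 450 of 500 policies, group B renewed 320 of 400
ratio = adverse_impact_ratio(450, 500, 320, 400)
print(round(ratio, 3))  # 0.889
```

Fed into a dashboard, a ratio drifting towards 0.8 becomes an early-warning trigger for the remediation and review stages described earlier, rather than a finding discovered at annual audit.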

As the sector evolves, adapting operating models to reflect diverse AI use is no longer optional. Integrating AI governance is key to staying competitive and compliant. Download the checklist to explore the critical steps insurers should take to future-proof their operations.

AI governance maturity is not about more committees or documents. It is about building an operating model capable of managing varying AI use, from individual use of Copilot to bespoke LLMs, in a transparent, fair and resilient way.

Across our work, AI governance is fast becoming a strategic capability rather than a compliance exercise, and achieving this requires clarity, proportionality and evidence. Firms that begin embedding governance early are likely to help shape the benchmark for responsible and explainable AI. Those that wait for certainty risk finding that regulation arrives before their operating model is prepared.

Over the next two years, regulatory focus will move from policies to outcomes. Insurers will need to demonstrate not just that controls exist, but that AI systems behave as intended in real-world conditions. Those preparing now, with embedded governance, effective monitoring and clear accountability, will be ready for that shift. If you need further assistance, please reach out to Buki Obayiuwana or visit our AI-enabled transformation hub for more information.

 

Contact us


Buki Obayiuwana
Managing Director and Head of Transformation

Download your 'Immediate 10-Point checklist'

Get the checklist now and take the first step toward responsible, scalable AI adoption.
