Building a future-proof AI governance framework for insurers

The use of AI in insurance is growing. From underwriting and pricing to claims management and fraud detection, AI is driving efficiency and transforming the customer experience. However, as adoption accelerates, insurers face mounting pressure to ensure their AI models are fair, transparent, and compliant with evolving regulations.

In both the UK and EU, regulatory bodies are introducing governance requirements to ensure AI is safe, accountable, and explainable. For example, the new Data Use and Access Act 2025 facilitates the responsible and transparent use of AI and automation while seeking to maintain key protections for individuals regarding sensitive data and high-risk decisions.

But governance isn’t just about compliance; it’s about building trust with customers and ensuring AI remains a strategic asset rather than a risk.

So, what is AI governance, and how can insurers put a robust framework in place?

When it comes to AI governance, you don’t always need to start from scratch. Many insurers will be looking to adapt their existing governance operating models: integrating appropriate AI policies, ensuring clear decision rights, and establishing clear accountability for decisions relating to the investment in, deployment, and use of AI tools and techniques across the organisation.

Below are a few key areas of focus.

1. Establish a resilient AI governance operating model

Governance is the foundation of responsible AI adoption. Without a clear AI oversight structure, insurers risk regulatory non-compliance, biased decision-making, and reputational damage. 

There are three key steps for implementing AI governance.

Establish an AI governance operating model

  • AI is not just a technology issue; it’s a strategic business imperative. Ensure senior leadership is aware of how AI is used and the risks it poses, and engage them in identifying your key AI risks and mitigations.

Define AI policies and accountability frameworks

  • Establish a cross-functional team with representatives from areas like actuarial, data science, compliance, legal, and risk management.
  • Align internal policies with your risk appetite and applicable regulatory guidance from the UK Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA), as well as other applicable legislation such as the EU AI Act and the Data Use and Access Act.
  • Define clear ownership of AI models: who is responsible for monitoring, reviewing, and updating AI-driven processes.
  • Implement audit trails for AI decision-making, especially automated decisions, to ensure accountability and traceability.

Create an AI risk committee as appropriate to your organisation

  • Assign independent reviewers to assess AI ethics, fairness, and bias mitigation.
  • Establish mechanisms for escalating risks related to automated decision-making, data usage, and model accuracy.

2. Implement a comprehensive AI risk management program 

AI models are not static; they evolve with new data inputs and external market conditions. Without proper risk controls, insurers may experience unexpected model drift, bias amplification, or security vulnerabilities. Insurers need a robust AI risk management framework, integrated into their existing enterprise risk management (ERM) framework and including the following components.

Conduct AI Risk Assessments

  • Identify all AI use cases across the business and categorise them by risk level, aligned with the EU AI Act’s risk tiers or other risk categories as appropriate.
  • Assess potential risks related to model bias, data drift, explainability, and regulatory non-compliance.
  • Ensure each use case complies with the requirements that apply to its risk category.
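An AI use-case inventory categorised by risk tier can be as simple as the following sketch. The tier assignments shown are hypothetical examples for illustration, not a legal classification.

```python
# Illustrative sketch only: cataloguing AI use cases against EU AI Act-style
# risk tiers. The example tier assignments below are assumptions, not advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical inventory: use case name -> assigned risk tier.
ai_inventory = {
    "life_underwriting_model": RiskTier.HIGH,    # affects access to cover
    "claims_triage_assistant": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,        # transparency duties apply
    "internal_document_search": RiskTier.MINIMAL,
}

def cases_needing_assessment(inventory: dict[str, RiskTier]) -> list[str]:
    """Return the use cases that warrant a formal risk assessment."""
    return sorted(
        name for name, tier in inventory.items()
        if tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)
    )

print(cases_needing_assessment(ai_inventory))
# ['claims_triage_assistant', 'life_underwriting_model']
```

Even a lightweight inventory like this gives the risk committee a single view of where the highest-risk models sit and which ones need assessment first.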

Implement continuous model validation

  • AI models should be regularly tested, validated, and stress-tested to ensure they remain accurate and unbiased.
  • Establish performance benchmarks and review cycles to detect errors before they impact customers.
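One common way to operationalise the drift checks described above is a population stability index (PSI) comparison between the score distribution at validation time and the distribution observed in production. The 0.1 and 0.25 thresholds below are conventional rules of thumb, not regulatory limits.

```python
# Illustrative sketch only: a population stability index (PSI) drift check.
# Thresholds (0.1 / 0.25) are common rules of thumb, not regulatory limits.
import math

def psi(expected_pct: list[float], actual_pct: list[float]) -> float:
    """PSI across pre-binned distributions; a small epsilon avoids log(0)."""
    eps = 1e-6
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected_pct, actual_pct)
    )

baseline = [0.10, 0.20, 0.40, 0.20, 0.10]  # score distribution at validation
current  = [0.05, 0.15, 0.35, 0.25, 0.20]  # distribution observed this month

score = psi(baseline, current)
if score > 0.25:
    status = "significant drift - investigate before further use"
elif score > 0.10:
    status = "moderate drift - monitor closely"
else:
    status = "stable"
print(round(score, 3), status)
```

Running this on a schedule, and logging the result against the benchmark, gives the review cycle an objective trigger rather than relying on ad hoc inspection.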

Enhance AI cyber security protections

  • Adversarial testing to detect vulnerabilities in AI models.
  • Encryption and access controls for sensitive AI-driven processes.
  • Incident response protocols for AI-related breaches.

3. Strengthen data governance and ethical AI practices 

AI models are only as good as the data they’re trained on. Poor data governance can lead to biased predictions, unfair pricing models, and regulatory penalties. Insurers should therefore adopt good data governance practices.

Ensure high-quality data inputs

  • Implement data validation controls to prevent garbage-in, garbage-out AI models.
  • Regularly audit data to remove bias, outdated records, or errors.
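As a concrete illustration of the validation controls above, the sketch below checks records for missing fields and staleness before they reach a model. The field names and the one-year staleness threshold are assumptions for the example.

```python
# Illustrative sketch only: basic input-data checks before training or scoring.
# Field names and the staleness threshold are assumptions for the example.
from datetime import date

REQUIRED_FIELDS = {"policy_id", "dob", "postcode", "last_updated"}
MAX_RECORD_AGE_DAYS = 365  # hypothetical staleness threshold

def validate_record(record: dict, today: date) -> list[str]:
    """Return a list of issues; an empty list means the record passed."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    last_updated = record.get("last_updated")
    if last_updated and (today - last_updated).days > MAX_RECORD_AGE_DAYS:
        issues.append("stale record: last updated over a year ago")
    return issues

good = {"policy_id": "P1", "dob": date(1980, 5, 1),
        "postcode": "EC1A 1BB", "last_updated": date(2025, 6, 1)}
stale = {"policy_id": "P2", "dob": date(1975, 2, 9),
         "postcode": "SW1A 2AA", "last_updated": date(2023, 1, 1)}

print(validate_record(good, date(2025, 7, 1)))   # []
print(validate_record(stale, date(2025, 7, 1)))  # ['stale record: last updated over a year ago']
```

Rejecting or quarantining records that fail such checks, rather than silently scoring them, is what keeps garbage-in from becoming garbage-out.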

Comply with GDPR, UK Data Protection Act and/or EU AI Act and the new Data Use and Access Act 2025

  • Maintain full audit trails of AI decision-making for regulatory compliance.
  • Ensure AI-driven pricing and claims models do not discriminate against protected groups.

Establish ethical AI review boards

  • Implement internal ethics committees to review AI-driven decisions and ensure they align with fairness and inclusivity standards.
  • Provide AI bias mitigation training to data science and actuarial teams.

4. Enhance consumer transparency and explainability

Customers have a right to understand how AI is making decisions about policy pricing, risk scoring, and claims approvals. Transparency is not just a regulatory obligation; it builds trust and confidence. Insurers can improve transparency by doing the following:

Develop and embed AI disclosure policies

  • Clearly communicate when and how AI is used in decision-making.
  • Provide explainability reports for customers to understand why they received a specific outcome.

Enable AI contestability and redress mechanisms

  • Customers should be able to challenge AI decisions and request human intervention when needed.
  • Implement customer-facing tools that allow users to review and appeal AI-driven decisions.
  • Ensure compliance with the new Data Use and Access Act 2025. The Act reinforces the need for a “human in the loop” in automated decision-making and requires that insurers ensure:
    • pre- or post-decision human review of automated decisions
    • all human reviewers are competent and authorised
    • customers know they have a right to challenge AI-led decisions and request a human review.

5. Upskill employees on AI ethics and compliance

AI is not just a technology issue; it’s a business transformation. Ensuring that the whole organisation, and in particular teams across underwriting, claims, operations, finance, actuarial, risk, and compliance, understands AI is crucial for governance success.

Provide ongoing AI ethics and compliance training

  • Raise awareness among employees at all levels of AI bias, fairness, and regulatory expectations.
  • Develop role-specific training for teams such as actuaries, underwriters, and claims handlers on how AI impacts their workflows.

Foster a culture of responsible AI use

  • Encourage cross-functional collaboration between data science, risk, and legal teams to ensure AI models are ethically sound and compliant with regulation.
  • Establish an AI Ethics Charter to reinforce the insurer’s commitment to responsible AI adoption.

The Road Ahead: Navigating AI regulation and innovation

The UK and EU regulatory landscapes reflect two sides of the same coin: the UK fostering innovation through principles-based guidance, the EU enforcing strict legal compliance.

For insurers, the key challenge is navigating the frameworks as relevant to them while leveraging AI to drive operational efficiencies and better customer outcomes. By adopting strong AI governance, transparent data practices, and proactive risk management, insurers can confidently deploy AI while ensuring compliance with evolving regulations.

What’s your AI governance strategy?

AI in insurance is here to stay. How is your organisation preparing? If you’d like to explore how your firm can implement AI responsibly while staying compliant, reach out to Buki Obayiuwana. Or, if you are looking for tailored AI governance insights, please visit our AI enabled transformation hub for more information.


Contact us


Buki Obayiuwana
Managing Director and Head of Transformation, London