Responsible AI in Insurance – Beyond Compliance to Trust

Abstract

The insurance sector is undergoing rapid transformation driven by artificial intelligence (AI). From underwriting and pricing to claims automation and fraud detection, AI is re-wiring actuarial science and operational risk management. Yet the deployment of AI in insurance raises acute concerns around fairness, explainability, discrimination, and systemic stability. This paper examines the global landscape of AI regulatory frameworks as they apply to insurance, with a comparative review of the European Union, United States, United Kingdom, and international standards (OECD, NIST, ISO). The paper then undertakes a deep dive into Singapore’s regulatory architecture, anchored in the Monetary Authority of Singapore (MAS) FEAT Principles, the Veritas Initiative, the PDPC AI Guidelines, and AI Verify / Generative AI Governance Framework. The analysis identifies distinctive features of Singapore’s principles-based, operationalised approach and proposes a practical compliance and governance blueprint for insurers. The paper concludes with scenario-based implications for the governance of Agentic AI and outlines areas for future research.

1. Introduction

Insurance is a trust-based industry. Policyholders pool risks on the promise that insurers will act fairly and transparently when allocating premiums and adjudicating claims. AI systems—especially in underwriting, claims management, distribution, and fraud detection—present both opportunities (efficiency, personalisation, risk prevention) and risks (algorithmic discrimination, opacity, exclusion of vulnerable groups). Regulatory frameworks are converging towards principles of fairness, accountability, transparency, and contestability, but diverge in implementation. This divergence is most visible in how different jurisdictions treat AI in insurance as a “high-risk” use case, and in how supervisory authorities operationalise oversight.

2. Literature Review: AI Governance in Financial Services

- EU AI Act (2024–25): categorises insurance risk assessment and pricing as high-risk systems; requires conformity assessment, technical documentation, risk management, transparency, and human oversight. EIOPA supplements this with AI governance principles and Opinions for supervisors.
- NAIC Model Bulletin (2023): in the US, establishes expectations for insurer AI Systems Programs (AISPs), with controls for governance, model inventory, testing, monitoring, and vendor oversight. Colorado SB21-169 and SB24-205 expand requirements on discrimination and high-risk AI.
- UK FCA/BoE/PRA: AI guidance (DP5/22; FS23/6) emphasises proportionate regulation, model risk management (PRA SS1/23), and alignment with the UK’s pro-innovation strategy.
- International standards: OECD AI Principles (2019); NIST AI Risk Management Framework (2023); ISO/IEC 42001:2023 on AI management systems.
- Academic and policy debate: a growing discourse on AI fairness metrics, explainability trade-offs, and regulatory arbitrage across jurisdictions.

3. Comparative Analysis: Global AI Regulatory Approaches for Insurance

3.1 European Union

- Legal classification: insurance underwriting and pricing are classified as high-risk.
- Obligations: risk management system, conformity assessment, data quality, logging, explainability, human oversight, post-market monitoring.
- Supervision: EIOPA guidance integrates Solvency II model risk supervision with AI-specific expectations.

3.2 United States

- NAIC Model Bulletin: governance structures, risk-based testing, explainability, complaint handling.
- Colorado laws: prohibit unfair discrimination; mandate impact assessments and customer notifications for high-risk AI.
- Fragmented oversight: regulation is state-based, but the NAIC Bulletin is becoming the de facto national standard.

3.3 United Kingdom

- Regulator coordination: FCA/BoE/PRA joint papers on AI.
- PRA SS1/23: model risk governance expectations (documentation, validation, independent challenge).
- ICO guidance: transparency, lawful basis for data use, DPIAs for automated decisions.

3.4 International Standards

- OECD AI Principles: human-centric, transparent, accountable AI.
- NIST AI RMF: four core functions (Govern, Map, Measure, Manage).
- ISO/IEC 42001: a management system standard for AI, enabling certification and audit.

These international standards are being localised into sectoral regulatory regimes.

4. Deep Dive: Singapore’s AI Governance in Insurance

4.1 MAS FEAT Principles (2018)

The FEAT Principles set out Fairness, Ethics, Accountability, and Transparency expectations for AIDA (AI and Data Analytics) systems. They are sector-specific and widely adopted as the ethical baseline for financial institutions.

4.2 Veritas Initiative & Toolkit (2020–23)

A collaboration between MAS, industry, and academia, Veritas provides quantitative fairness metrics, explainability tools, and governance checklists. Toolkit 2.0 extends the assessment methodology beyond fairness to ethics, accountability, and transparency, allowing insurers to audit and benchmark their AI models.
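To make the idea of quantitative fairness metrics concrete, the sketch below computes a demographic parity gap for a binary underwriting decision. The data, group labels, and function are illustrative assumptions for exposition; the Veritas Toolkit defines its own metric suite and methodology.

```python
# Minimal sketch (not the Veritas implementation): demographic parity gap
# for a binary underwriting decision, where 1 = application approved.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in approval rates between the groups present."""
    rates = {}
    for g in set(groups):
        members = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical portfolio: approval decisions for applicants in groups A and B.
decisions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # A approves 60%, B 40% -> 0.20
```

A gap near zero indicates similar approval rates across groups; in practice insurers would compute this alongside other metrics (e.g., equal opportunity difference) and judge it against a documented tolerance.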

4.3 PDPC / IMDA Instruments

- Model AI Governance Framework (2019/2020): operational governance guidance (explainability, human-in-the-loop).
- AI Verify (2022–ongoing): a testing framework and software toolkit for AI assurance, aligned with the NIST AI RMF.
- Generative AI Governance Framework (2024): incident reporting, content safeguards, evaluation protocols.
- Advisory Guidelines on AI Decision Systems (2024): clarifies PDPA obligations for AI development, deployment, and procurement.

4.4 Distinctive Features of Singapore’s Model

- Principles-based, operationalised: FEAT principles are backed by regulator-endorsed Veritas toolkits.
- Cross-walked globally: AI Verify aligns with NIST, OECD, and ISO frameworks.
- Data protection integration: PDPA guidance addresses AI-specific risks (training data, consent, automated decisions).
- Regulatory philosophy: a deliberate balance between innovation enablement and risk assurance.

5. Implementation Framework for Insurers in Singapore

Governance Blueprint (aligned to FEAT + NIST + ISO 42001):

- AI Policy & Inventory: board-approved AI policy; a register of all AI/ML models.
- FEAT by Design: embed fairness, ethics, accountability, and transparency into the model lifecycle.
- Data Lifecycle Controls: PDPA compliance for data collection, use, storage, and transfer.
- Testing & Assurance: AI Verify for robustness, bias, and explainability; model cards and datasheets.
- Model Risk Management: extend Solvency II/ERM controls to AI.
- Customer Redress Mechanisms: explanation rights, contestability pathways.
- Third-Party Oversight: vendor due diligence, FEAT-aligned procurement clauses.
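The first blueprint item, a register of all AI/ML models, can be sketched as a simple data structure. This is a minimal illustration under assumed field names (purpose, owner, risk_tier, last_validated); it is not drawn from any MAS or FEAT template, and a production register would add versioning, approvals, and audit trails.

```python
# Minimal sketch of an AI/ML model register for a governance blueprint.
# Field names are illustrative assumptions, not a regulatory schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    model_id: str
    purpose: str          # e.g. "motor underwriting pricing"
    owner: str            # accountable business owner (FEAT accountability)
    risk_tier: str        # e.g. "high" for underwriting/pricing models
    last_validated: date  # last independent validation date

class ModelRegister:
    """In-memory inventory of deployed AI/ML models."""

    def __init__(self):
        self._models = {}

    def register(self, record: ModelRecord):
        self._models[record.model_id] = record

    def high_risk_models(self):
        """Models needing enhanced oversight (testing, monitoring, redress)."""
        return [m for m in self._models.values() if m.risk_tier == "high"]

reg = ModelRegister()
reg.register(ModelRecord("UW-001", "motor underwriting pricing",
                         "Head of Pricing", "high", date(2025, 1, 15)))
reg.register(ModelRecord("FD-002", "claims fraud triage",
                         "Head of Claims", "medium", date(2024, 11, 2)))
print([m.model_id for m in reg.high_risk_models()])  # ['UW-001']
```

Filtering by risk tier is what lets the other blueprint controls (testing, monitoring, redress) be applied proportionately rather than uniformly across every model.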

6. Discussion and Future Outlook

6.1 Agentic AI in Insurance

The transition from static predictive models to Agentic AI systems—autonomous agents capable of decision-making and goal-seeking—will stress current regulatory models. Insurance AI governance will require:

- Continuous, real-time monitoring of model behaviour.
- Dynamic assurance mechanisms (AI-driven oversight of AI).
- Scenario planning for systemic risk (e.g., simultaneous agentic decisions across markets).
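The first requirement, continuous monitoring of agent behaviour, can be sketched as a rolling check against a validated baseline. The window size, tolerance, and approval-rate metric below are illustrative assumptions; a real deployment would monitor many behavioural signals, not one.

```python
# Minimal sketch: a rolling behaviour monitor for an autonomous claims
# agent, flagging when the approval rate over a sliding window drifts
# beyond a tolerance from the validated baseline. Parameters are
# illustrative assumptions.
from collections import deque

class BehaviourMonitor:
    def __init__(self, baseline_rate, window=100, tolerance=0.1):
        self.baseline = baseline_rate
        self.window = deque(maxlen=window)
        self.tolerance = tolerance

    def record(self, decision):
        """Record a decision (1 = approve); return True if drift is detected."""
        self.window.append(decision)
        if len(self.window) < self.window.maxlen:
            return False  # warm-up: wait for a full window before alerting
        rate = sum(self.window) / len(self.window)
        return abs(rate - self.baseline) > self.tolerance

monitor = BehaviourMonitor(baseline_rate=0.5, window=10, tolerance=0.1)
steady = [monitor.record(d) for d in [1, 0] * 5]   # tracks the baseline
drifted = [monitor.record(1) for _ in range(10)]   # agent approves everything
print(steady[-1], drifted[-1])  # False True
```

The alert is a trigger for human review or automatic throttling of the agent, which is the kind of dynamic assurance mechanism the second point describes.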

6.2 Scenario Outlook

- Optimistic (Commonwealth Model): responsible AI leads to fairer premiums, expanded access to protection, and predictive prevention services.
- Pessimistic (Fortress Model): opaque AI entrenches discrimination, exclusion, and regulatory capture.

7. Conclusion

AI governance in insurance is entering a new phase. Jurisdictions are converging on principles but diverge in enforceability and technical implementation. Singapore offers a unique principles-based but operationalised model—grounded in FEAT, Veritas, PDPA guidance, and AI Verify—that could serve as a template for other markets. Insurers must move beyond compliance to embed trust-by-design into their AI systems, especially as Agentic AI reshapes the industry’s risk, compliance, and ethical frontiers.

References (selected)

MAS (2018). Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of AI and Data Analytics.
MAS / Veritas Consortium (2023). Veritas Toolkit 2.0.
PDPC (2024). Advisory Guidelines on Use of Personal Data in AI Decision Systems.
IMDA (2024). Generative AI Governance Framework.
EU (2024). Artificial Intelligence Act.
EIOPA (2021). AI Governance Principles.
NAIC (2023). Model Bulletin on Use of AI Systems by Insurers.
Colorado General Assembly. SB21-169; SB24-205.
FCA/BoE/PRA (2022, 2023). DP5/22 and FS23/6 on AI.
PRA (2023). Supervisory Statement SS1/23 – Model Risk Management.
OECD (2019). AI Principles.
NIST (2023). AI Risk Management Framework.
ISO/IEC (2023). ISO/IEC 42001: Artificial Intelligence Management System Standard.
