Essential Plays for Safer, Responsible Agentic AI Transformations

By Dr Luke Soon

18 November 2025

In a multipolar world where AI is reshaping financial services—from algorithmic trading and credit underwriting to fraud detection, anti-money laundering (AML), personalised wealth management, and regulatory reporting—the urgency for robust governance has never been greater. PwC’s 2025 Global Responsible AI Survey shows that while 92% of financial institutions now deploy AI in customer-facing or core operational processes, fewer than 25% have reached mature, enterprise-wide responsible AI programmes. This gap exposes firms to regulatory sanctions, reputational damage, and systemic risks, as highlighted in the Financial Stability Board’s October 2025 monitoring report on AI vulnerabilities in finance.

Drawing on the World Economic Forum’s AI Governance Alliance playbook Advancing Responsible AI Innovation (September 2025), Stanford Institute for Human-Centered Artificial Intelligence (HAI) research on human-centred AI design, PwC’s Responsible AI Toolkit and diagnostic frameworks, and the collective insights of the International Network of AI Safety Institutes (UK AI Security Institute, US AI Safety Institute at NIST, Singapore Digital Trust Centre, and others), the following nine plays offer a technically rigorous, finance-specific roadmap.

Dimension 1: Strategy and Value Creation

Play 1: Lead with a long-term, responsible AI strategy and vision for value creation

Financial services leaders must secure Board and EXCO sponsorship for a multi-year responsible AI strategy that links directly to business outcomes—e.g., higher risk-adjusted returns, lower conduct costs, and improved customer trust scores.

Financial services examples: HSBC’s global AI ethics board ties responsible AI KPIs to executive remuneration; Standard Chartered’s “AI for Good” framework prioritises financial-inclusion use cases in emerging markets while enforcing strict bias thresholds.

Play 2: Unlock AI innovation with trustworthy data governance

Banks and insurers rely on vast, sensitive datasets. Implement enterprise-wide data lineage, provenance tracking, and privacy-enhancing technologies:

Federated learning for collaborative credit-risk modelling across consortium banks without raw data sharing (e.g., FABLE consortium in Europe).

Data clean rooms for joint fraud-detection initiatives (e.g., UK Finance’s Synthetic Data Project).

Differential privacy and synthetic data pipelines with statistical fidelity checks for fair-lending stress testing—critical after the International AI Safety Report 2025 flagged synthetic-data-induced model collapse as a systemic risk in finance (see the sketch below).
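To make the "statistical fidelity checks" concrete, here is a minimal sketch in Python using pandas and SciPy: a per-feature two-sample Kolmogorov–Smirnov comparison between a real and a synthetic dataset. The 0.1 threshold and the gating logic are illustrative assumptions, not part of any cited framework.

```python
# Minimal sketch of a statistical fidelity check for synthetic data.
# Assumes two pandas DataFrames with matching numeric columns; the
# KS-statistic threshold of 0.1 is an illustrative choice, not a standard.
import pandas as pd
from scipy import stats

def fidelity_report(real: pd.DataFrame, synthetic: pd.DataFrame,
                    ks_threshold: float = 0.1) -> pd.DataFrame:
    """Two-sample KS test per shared numeric column; flags poor fidelity."""
    rows = []
    shared = real.columns.intersection(synthetic.columns)
    for col in shared:
        if not pd.api.types.is_numeric_dtype(real[col]):
            continue  # categorical columns would need e.g. a chi-squared test
        ks_stat, p_value = stats.ks_2samp(real[col].dropna(),
                                          synthetic[col].dropna())
        rows.append({"feature": col,
                     "ks_statistic": ks_stat,
                     "p_value": p_value,
                     "acceptable": ks_stat < ks_threshold})
    return pd.DataFrame(rows)

# Usage: block the synthetic dataset from fair-lending stress tests
# if any feature drifts too far from the real distribution.
# report = fidelity_report(real_loans_df, synthetic_loans_df)
# assert report["acceptable"].all(), "Synthetic data failed fidelity check"
```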

Play 3: Design resilient responsible AI processes for business continuity

Stress-test AI systems against regulatory change (EU AI Act, UK FCA/PRA rules, MAS FEAT principles) and macroeconomic shocks using scenario planning and regulatory sandboxes.

Example: A Tier-1 UK bank runs quarterly horizon scans for “agentic AI + quantum” convergence risks to its high-frequency trading systems.

Dimension 2: Governance and Accountability

Play 4: Appoint and incentivise AI governance leaders

Appoint a Chief AI Ethics or Responsible AI Officer reporting to the Chief Risk Officer or CEO, backed by a cross-functional AI Governance Committee (Risk, Compliance, Legal, Data, Tech).

Financial services practice: Lloyds Banking Group and DBS Bank have federated models with business-unit AI champions who hold “veto rights” on high-risk deployments.

Play 5: Adopt a systematic, systemic, and context-specific approach to risk management

Map AI use cases against the EU AI Act risk tiers and NIST AI RMF. Conduct annual independent model risk assessments (MRM 2.0) that cover explainability, robustness, and third-party supply-chain risks.
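As a concrete illustration of such a mapping, the sketch below (Python) encodes a use-case inventory with risk-tier assignments and required controls. The tier labels, use-case names, and control lists are illustrative assumptions, not authoritative EU AI Act classifications.

```python
# Illustrative sketch of an AI use-case inventory mapped to EU AI Act-style
# risk tiers. Tier assignments and required controls here are assumptions
# for illustration, not legal classifications.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AIUseCase:
    name: str
    tier: RiskTier
    controls: list[str] = field(default_factory=list)

INVENTORY = [
    AIUseCase("credit_underwriting", RiskTier.HIGH,
              ["annual MRM review", "explainability report", "bias audit"]),
    AIUseCase("chatbot_faq", RiskTier.LIMITED,
              ["transparency notice"]),
    AIUseCase("internal_doc_search", RiskTier.MINIMAL, []),
]

def controls_due(inventory: list[AIUseCase]) -> dict[str, list[str]]:
    """List the required controls for every high-risk use case."""
    return {uc.name: uc.controls
            for uc in inventory if uc.tier is RiskTier.HIGH}

print(controls_due(INVENTORY))
# {'credit_underwriting': ['annual MRM review', 'explainability report', 'bias audit']}
```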

Singapore view (Digital Trust Centre, Singapore AISI): Emphasises multilingual safety testing—critical for Asia-Pacific banks where non-English customer interactions dominate.

Play 6: Provide transparency into responsible AI practices and incident response

Publish AI system cards, model inventories, and near-miss registers. Establish mandatory internal reporting thresholds (e.g., a >0.5% shift in credit-decision disparity triggers escalation).
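A minimal sketch of such a threshold check, in Python: disparity is taken as the gap in approval rates between two demographic groups, and the 0.5-percentage-point trigger mirrors the example above. The data and helper names are hypothetical.

```python
# Minimal sketch of an escalation trigger on credit-decision disparity.
# Disparity is measured as the absolute gap in approval rates between two
# groups; a shift of more than 0.5 percentage points versus the prior
# period escalates, per the illustrative threshold in the text.
def approval_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)

def disparity(group_a: list[bool], group_b: list[bool]) -> float:
    """Absolute gap in approval rates between two demographic groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

def needs_escalation(current: float, baseline: float,
                     threshold_pp: float = 0.5) -> bool:
    """True if disparity moved more than `threshold_pp` percentage points."""
    return abs(current - baseline) * 100 > threshold_pp

# Usage with hypothetical monthly decision logs (True = approved):
baseline = disparity([True] * 480 + [False] * 520,
                     [True] * 470 + [False] * 530)   # 1.0pp gap
current = disparity([True] * 495 + [False] * 505,
                    [True] * 460 + [False] * 540)    # 3.5pp gap
if needs_escalation(current, baseline):
    print("Escalate: credit-decision disparity shifted by more than 0.5pp")
```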

UK AI Security Institute recommendation (2025 evaluations): Financial firms should adopt standardised incident taxonomies to enable cross-industry learning without disclosing competitively sensitive data.

Dimension 3: Development and Use

Play 7: Drive AI innovations with responsible design as the default

Embed fairness-by-design in credit scoring (e.g., proxy-variable detection, counterfactual explanations) and fraud models (adversarial robustness testing). Use multidisciplinary teams including conduct risk, fair-lending, and vulnerable-customer specialists.
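To make proxy-variable detection concrete, here is a minimal screening sketch in Python with pandas, assuming a 0/1 protected-group indicator column and an illustrative 0.3 correlation threshold; real fair-lending reviews would layer on richer tests such as mutual information or AUC probes.

```python
# Minimal sketch of proxy-variable screening: flag features whose
# correlation with a protected attribute exceeds a review threshold.
# The 0.3 cut-off and column names are illustrative assumptions.
import pandas as pd

def flag_proxies(df: pd.DataFrame, protected_col: str,
                 threshold: float = 0.3) -> list[str]:
    """Return feature names strongly correlated with the protected attribute."""
    flagged = []
    protected = df[protected_col]  # assumed to be a 0/1 indicator
    for col in df.columns:
        if col == protected_col:
            continue
        if pd.api.types.is_numeric_dtype(df[col]):
            corr = df[col].corr(protected)
            if pd.notna(corr) and abs(corr) > threshold:
                flagged.append(col)
    return flagged

# Usage with a hypothetical applicant table where `is_protected_group`
# is a 0/1 indicator:
# proxies = flag_proxies(applicants_df, "is_protected_group")
# -> e.g. ["postcode_income_index"], queued for fairness review
```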

US AI Safety Institute at NIST (2025 guidance): Recommends red-teaming for financial crime models to prevent “reward hacking” where AI learns to game sanctions-screening rules.

Play 8: Scale responsible AI with technology enablement

Deploy automated assurance platforms for continuous monitoring of production models (e.g., drift detection, concept shift alerts). For agentic AI in robo-advisory or claims processing, implement permission hierarchies, human approval gates, and kill switches.
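As an architecture sketch of those controls, a permission hierarchy, human approval gate, and kill switch can be expressed as a thin wrapper around the agent's action execution. The class names, risk-scoring stub, and thresholds below are illustrative assumptions, not a reference design.

```python
# Illustrative sketch of permissioning for an agentic workflow:
# low-risk actions run automatically, higher-risk actions wait at a
# human approval gate, and a kill switch halts everything. Risk scoring
# is a stub; names and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    risk_score: float  # 0.0 (benign) .. 1.0 (critical), from a stub scorer

class AgentGovernor:
    def __init__(self, auto_threshold: float = 0.3):
        self.auto_threshold = auto_threshold
        self.killed = False

    def kill(self) -> None:
        """Kill switch: permanently halt all agent actions."""
        self.killed = True

    def execute(self, action: Action, human_approves) -> str:
        if self.killed:
            return f"BLOCKED (kill switch): {action.description}"
        if action.risk_score <= self.auto_threshold:
            return f"AUTO-EXECUTED: {action.description}"
        # Human approval gate for anything above the auto threshold.
        if human_approves(action):
            return f"EXECUTED WITH APPROVAL: {action.description}"
        return f"REJECTED BY REVIEWER: {action.description}"

# Usage with a hypothetical claims-processing agent:
gov = AgentGovernor()
print(gov.execute(Action("send status email", 0.1),
                  human_approves=lambda a: True))
print(gov.execute(Action("approve £40k claim payout", 0.8),
                  human_approves=lambda a: False))
gov.kill()
print(gov.execute(Action("send status email", 0.1),
                  human_approves=lambda a: True))
```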

International AI Safety Report 2025 (chaired by Yoshua Bengio, with 100+ contributing experts): Warns that uncontrolled agentic systems in capital markets could amplify flash-crash risks; recommends API-level interoperability standards for oversight.

Play 9: Increase responsible AI literacy and workforce transition opportunities

Deliver tiered training: Board-level strategic modules, risk-function deep dives on model cards and bias metrics, and frontline staff sessions on “when to escalate AI decisions”. Partner with regulators on just-transition programmes as AI automates routine underwriting and compliance roles.

Stanford HAI & PwC joint research (2025): Firms with >80% AI-literate staff see 40% fewer conduct incidents and 25% faster time-to-market for new AI products.

The International Network of AI Safety Institutes—now spanning the UK, US, Singapore, Canada, Japan, Korea, Australia, France, and the EU—consistently stresses that financial services is a “high-impact sector” requiring mandatory safety evaluations, third-party auditing, and cross-border information sharing on emergent risks. Their 2025 joint statements underscore that responsible AI is no longer optional compliance—it is the foundation of licence to operate, systemic stability, and sustained shareholder value.

Financial institutions that embed these nine plays today, leveraging tools such as PwC’s Responsible AI Diagnostic, AI Verify (Singapore), and the emerging global assurance standards, will lead the next wave of trusted, innovative finance.
