From AI -> GenAI Risk Tiering -> Agentic AI Governance in Financial Services

Financial services has long relied on robust governance models: Basel’s operational risk framework, MAS TRM guidelines, ECB supervisory expectations, MiFID’s algo-trading guardrails. These frameworks assume three things:

- Human accountability is direct and proximate.
- Models are static and validated pre-deployment.
- Automation is bounded by deterministic rules.

GenAI has already broken these assumptions by introducing stochastic, non-deterministic outputs. Agentic AI takes this further by chaining tasks and acting across (eco)systems. Traditional model risk management (MRM) and operational risk frameworks are already at breaking point.

1. Expanded Risk Dimensions

With GenAI, risks were mostly about outputs (accuracy, bias, explainability).

With Agentic AI, risks extend to actions:

- Autonomy risk: Agents execute workflows without continuous human approval.
- Escalation failure: Missed or delayed hand-off from AI to human in critical scenarios.
- Multi-agent interactions: Agents coordinating with each other may amplify systemic risk.
- Integration risk: Agents acting across multiple systems (trading, KYC, surveillance) can propagate errors rapidly.

👉 This means the tiering lens must evolve from “use case materiality” to “systemic action potential.”
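To make "systemic action potential" concrete, here is a minimal sketch of how it might be scored. Everything in it is an assumption for illustration: the `AgentProfile` attributes, the weights, and the thresholds are hypothetical, not an established scoring standard.

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """Hypothetical attributes that drive systemic action potential."""
    autonomy_level: int            # 0 = suggest-only, 1 = act with approval, 2 = act freely
    systems_touched: int           # downstream systems the agent can write to
    coordinates_with_agents: bool  # participates in multi-agent workflows

def systemic_action_potential(agent: AgentProfile) -> str:
    """Illustrative score based on independence and reach, not output quality."""
    score = agent.autonomy_level * 2 + min(agent.systems_touched, 5)
    if agent.coordinates_with_agents:
        score += 2  # coordination can amplify errors across agents
    if score >= 7:
        return "high"
    if score >= 4:
        return "medium"
    return "low"
```

Note that a highly accurate model still scores "high" here if it acts freely across many systems: the lens measures reach of action, not quality of output.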

2. Human Oversight Models Shift

- GenAI: Focus on Human-in/on-the-Loop effectiveness (fidelity, time-efficiency).
- Agentic AI: Oversight shifts to Human-Above-the-Loop (governance frameworks, circuit breakers, escalation triggers).

Humans can no longer review every output; instead, they design control boundaries (like trading limits or kill switches).
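A human-above-the-loop control can be sketched as a circuit breaker: the human sets the boundary once, and every agent action is checked against it automatically. The class below is a minimal, hypothetical illustration (the rate limit and kill switch are assumed control boundaries, not a prescribed design).

```python
class CircuitBreaker:
    """Human-above-the-loop sketch: the human configures the boundary;
    the agent's actions are gated automatically, with no per-action review."""

    def __init__(self, max_actions: int):
        self.max_actions = max_actions  # boundary set by the accountable human
        self.kill_switch = False        # human can halt the agent entirely
        self.count = 0

    def allow(self) -> bool:
        """Return True if the agent may act; False means halt and escalate."""
        if self.kill_switch:
            return False  # hard stop pulled by a human
        if self.count >= self.max_actions:
            return False  # boundary breached: escalate rather than act
        self.count += 1
        return True
```

The design point is that the human's decision moves from "approve this output" to "define when the agent must stop".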

3. Control Environment Redesign

Banks will need to extend their model risk management (MRM) frameworks into Agentic Risk Management (ARM), with new control types:

- Pre-commit controls: Sandboxing, scenario stress-tests before an agent can act.
- Real-time guardrails: Hard-coded thresholds (e.g., no trade > $X without escalation).
- Post-action monitoring: Continuous audit logs, anomaly detection on agent behaviours.
- Meta-controls: AI agents that watch other agents for compliance breaches (AI-as-a-control).
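The real-time guardrail is the most mechanical of these and can be sketched directly. The thresholds below are placeholders; in practice they would come from the bank's risk appetite statement and limit framework.

```python
from enum import Enum

class Decision(Enum):
    EXECUTE = "execute"    # within limits, agent may act
    ESCALATE = "escalate"  # needs human sign-off before acting
    BLOCK = "block"        # refused outright

# Illustrative thresholds only; real limits come from the bank's risk policy.
ESCALATION_LIMIT = 1_000_000    # trades above this need human approval
HARD_LIMIT = 10_000_000         # trades above this are never auto-executed

def pre_trade_guardrail(notional: float) -> Decision:
    """Real-time guardrail: hard-coded thresholds checked before execution."""
    if notional > HARD_LIMIT:
        return Decision.BLOCK
    if notional > ESCALATION_LIMIT:
        return Decision.ESCALATE
    return Decision.EXECUTE
```

Because the check runs before the action rather than after the output, it mirrors pre-trade risk checks in algo trading rather than post-hoc model validation.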

4. Tiering in the Agentic Context

Risk tiering evolves from 3 lenses → 4 lenses:

- Inherent model risk (still relevant).
- Control strength (but now includes autonomy guardrails).
- Business materiality (consequence of failure).
- Agency scope: the level of independent action the agent is authorised to take.

👉 Example:

- Low risk: Agent drafting HR FAQs in a sandboxed environment.
- Medium risk: Agent generating client KYC reports with human sign-off.
- High risk: Agent autonomously rebalancing trading portfolios or escalating suspicious transactions directly to regulators.
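The four lenses can be combined in many ways; the sketch below is one hypothetical scheme (scores 1–3 per lens, with agency scope acting as a floor on the tier), shown only to make the mechanics concrete against the three examples above.

```python
def risk_tier(model_risk: int, control_strength: int,
              materiality: int, agency_scope: int) -> str:
    """Illustrative four-lens tiering; each lens scored 1 (low) to 3 (high).
    Agency scope acts as a floor: high independence forces a high tier
    regardless of model accuracy, while strong controls offset the rest."""
    base = model_risk + materiality - control_strength  # crude net exposure
    if agency_scope >= 3 or base >= 4:
        return "high"
    if agency_scope == 2 or base >= 2:
        return "medium"
    return "low"

# The three worked examples, with assumed lens scores:
hr_faq = risk_tier(model_risk=1, control_strength=3, materiality=1, agency_scope=1)
kyc_report = risk_tier(model_risk=2, control_strength=2, materiality=2, agency_scope=2)
rebalancer = risk_tier(model_risk=2, control_strength=2, materiality=3, agency_scope=3)
```

Note how the rebalancer lands in the high tier purely on agency scope, even though its model risk score is no worse than the KYC agent's.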

5. Regulatory and Legal Evolution

- GenAI today: No legal personality; humans and organisations remain accountable.
- Agentic AI tomorrow: Banks may need to define "AI fiduciary duties" or delegated authority registers (like trader mandates).
- Regulators will likely demand pre-registration of agentic systems, continuous assurance, and independent validation, similar to algo-trading controls under MiFID or MAS TRM.
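A delegated authority register modelled on trader mandates could look something like the sketch below. The field names and checks are hypothetical; the point is that every agent has a named accountable human and an explicit, bounded set of permitted actions.

```python
from dataclasses import dataclass

@dataclass
class Mandate:
    """Hypothetical register entry, modelled on a trader mandate."""
    agent_id: str
    permitted_actions: set    # actions the agent is delegated to take
    notional_limit: float     # maximum size of any single action
    accountable_owner: str    # named human who answers for the agent

class AuthorityRegister:
    """Central register consulted before any agent action is executed."""

    def __init__(self):
        self._mandates: dict[str, Mandate] = {}

    def register(self, mandate: Mandate) -> None:
        self._mandates[mandate.agent_id] = mandate

    def is_authorised(self, agent_id: str, action: str, notional: float) -> bool:
        m = self._mandates.get(agent_id)
        return (m is not None
                and action in m.permitted_actions
                and notional <= m.notional_limit)
```

An unregistered agent, an out-of-mandate action, or an over-limit size all fail the same check, which is what makes the register auditable in the way trader mandates are.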

The Big Shift

👉 GenAI tiering = outputs + human oversight.

👉 Agentic AI tiering = actions + systemic safeguards.

In banking, that means:

- From reviewing answers → to governing decisions.
- From fixing hallucinations → to preventing runaway actions.
- From compliance-by-design → to agency-by-design.

🔑 Summary:

Agentic AI requires a new layer of risk governance focused on autonomy, escalation, and systemic control. Risk tiering must explicitly consider the scope of agency — because the greater the independence granted, the higher the risk tier, regardless of model accuracy.
