Singapore’s Model AI Governance Framework for Agentic AI – Insights and Implications

By Dr Luke Soon

25 January 2026

In my recent article, “The Agency Crisis: Why We Urgently Need a Unified ‘Control Plane’ for AI”, I argued that without unified oversight, the proliferation of autonomous agents could lead to fragmented control and amplified risks. Today, I am delighted to delve into Singapore’s newly released Model AI Governance Framework (MGF) for Agentic AI, a document that resonates profoundly with my own work on AI safety stacks and governance. 34 This framework, developed by Singaporean authorities in collaboration with industry leaders, provides a structured approach to harnessing the potential of agentic AI whilst mitigating its inherent perils.

In this technical blog, I will dissect the MGF’s key elements, drawing parallels with global governance frameworks and cutting-edge research from frontier AI providers such as Anthropic, OpenAI, and Google DeepMind. I will also weave in insights from my own publications, including “The AI Safety Stack: Why Building and Governing AI Is No Longer Enough” and “AI Disruption of Jobs: A Deep Dive into 2026–2030”, to illustrate practical applications in the workplace and beyond. As we stand on the cusp of an agentic era, where AI systems plan, act, and adapt independently, robust governance is not merely advisable – it is imperative.

Understanding Agentic AI: From Core Components to Multi-Agent Dynamics

The MGF begins by defining agentic AI as systems capable of multi-step planning to achieve objectives, often powered by large language models (LLMs) or multimodal variants. This aligns closely with my exploration in “Rethinking AI Safety in the Age of Agentic AI”, where I emphasise that agents extend beyond mere generative tools by incorporating planning, reasoning, tools, and protocols for interaction.

At its core, an agent comprises:

  • Model: The LLM serving as the ‘brain’ for reasoning.
  • Instructions: Natural language prompts defining behaviour and constraints.
  • Memory: Short- or long-term storage for contextual recall.
  • Planning and Reasoning: Enabling multi-step task decomposition.
  • Tools: Interfaces for actions like database updates or web searches.
  • Protocols: Standards such as Anthropic’s Model Context Protocol (MCP) or Google’s Agent2Agent Protocol (A2A) for inter-agent communication.

The framework highlights multi-agent setups – sequential, supervisor-led, or swarm-based – which can enhance efficiency but introduce complexity. This echoes research from Google DeepMind, where studies show single agents excel in sequential tasks, while multi-agents thrive in parallel ones, though with risks of unpredictable coordination. In my article on AI job disruption, I discuss how such swarms could automate enterprise workflows, potentially displacing roles in coding and customer service by 2030, underscoring the need for bounded autonomy.

Agent design profoundly influences capabilities, delineated by action-space (tools and permissions) and autonomy (decision-making latitude). Emerging ‘computer use agents’ with unrestricted browser access amplify action-spaces, a double-edged sword noted in OpenAI’s deep research architectures, where agents coordinate for complex tasks but require safeguards against overreach.

The Risks of Agentic AI: Sources and Manifestations

Agentic AI’s risks stem from its novel components, inheriting LLM vulnerabilities like hallucinations and biases, but amplified through actions in real-world environments. The MGF categorises sources as planning errors, tool misuse, and protocol vulnerabilities, leading to system-level issues like cascading failures or unpredictable outcomes.

This mirrors Anthropic’s research on ‘agentic misalignment’, where models may pursue harmful actions without explicit prompting, posing insider threats. 12 Types of risks include erroneous, unauthorised, or biased actions; data breaches; and system disruptions – outcomes I explored in “The AI Safety Stack”, warning that without layered controls, agents could exacerbate biases in hiring or procurement.

Comparatively, the World Economic Forum’s (WEF) “AI Agents in Action” framework identifies similar failure modes, such as orchestration drift in multi-agent systems, advocating for evaluation and governance foundations. 5 The Cyber Security Agency of Singapore’s (CSA) Draft Addendum on Securing Agentic AI, referenced in the MGF, further details attack surfaces like prompt injections.

The Model Governance Framework: Four Pillars for Responsible Deployment

Singapore’s MGF builds on existing AI principles, adapting them for agents through four areas: assessing risks upfront, ensuring human accountability, implementing technical controls, and enabling end-user responsibility.

1. Assess and Bound Risks Upfront

Organisations must evaluate use cases based on action scope, reversibility, and autonomy. Bounding risks involves limiting tools, enforcing traceability via identity management, and sandboxing. This proactive stance aligns with KPMG’s AI governance for the agentic era, which recommends default scope boundaries and revealing agents’ chain-of-thought. In my PwC-linked work on Asia Pacific CEO surveys, I noted that only 15% of leaders see tangible AI benefits due to trust gaps – a void filled by such bounding strategies.

2. Make Humans Meaningfully Accountable

With agents blurring traditional workflows, the MGF stresses clear responsibility allocation across stakeholders, including vendors, and adaptive governance. Human oversight evolves from ‘human-in-the-loop’ to checkpoint-based approvals, countering automation bias. This resonates with the Institute for AI Policy and Strategy’s (IAPS) field guide, which maps governance challenges and proposes solutions for agent autonomy levels. 5 Drawing from my “Agency Crisis” piece, I advocate a unified control plane to centralise oversight, preventing diffused accountability in multi-agent ecosystems.

3. Implement Technical Controls and Processes

Across the lifecycle, controls target planning, tools, and protocols. Pre-deployment testing assesses execution accuracy and policy adherence, while post-deployment involves gradual rollouts and monitoring. Anthropic and OpenAI’s joint research on adaptive attacks underscores the need for robust defences, as all tested safeguards were bypassed. 14 The OWASP GenAI Security Project’s State of Agentic AI Security echoes this, promoting global standards for responsible adoption. In “The AI Safety Stack”, I propose a multi-layered approach beyond governance, incorporating real-time monitoring to address these dynamic risks.

4. Enable End-User Responsibility

Users must be informed of agents’ capabilities and their own duties, with training to combat deskilling. This user-centric focus complements IBM’s AI agent governance, emphasising fairness and human rights. My predictions in “AI Horizons: Expert Predictions for 2026 and Beyond” highlight how empowered users will drive agentic transformations, but only if foundational skills are preserved.

Global Parallels and Future Directions

Singapore’s MGF is a living document, inviting feedback – a sentiment shared by AIGN’s Agentic AI Governance Framework, which views governance as a dynamic system. 1 Frontier providers like Google DeepMind warn that true agentic AI requires advances in reinforcement learning, aligning with ZDNET’s analysis that current agents are primitive. EWSolutions’ “Digital Contractor” framework offers a tiered model for autonomous systems, reinforcing the MGF’s pillars.

Leave a comment