As artificial intelligence evolves from passive tools to autonomous agents capable of independent decision-making and action, governance frameworks must keep pace to ensure safety, accountability, and innovation. Singapore, through its Infocomm Media Development Authority (IMDA) and related bodies, has positioned itself as a global leader in this space. Notably, a survey of comparable frameworks suggests that no other country has yet released a dedicated governance model specifically for agentic AI: systems that can reason, plan, and execute tasks autonomously.

Instead, jurisdictions rely on broader AI regulations or adaptations of existing laws to address agentic systems. The European Union's EU AI Act is a risk-based regulatory framework that classifies AI systems by risk level and imposes mandatory requirements on high-risk applications, with full enforcement phased in by 2026-2027. The United States leans on NIST's voluntary AI Risk Management Framework, supplemented by state-level laws but lacking a comprehensive federal mandate. International bodies such as the OECD offer the 2019 AI Principles, adopted by over 40 countries and emphasising ethical AI, but without binding enforcement. In the UK, the industry body techUK has explored how agentic AI fits into current regulations, emphasising reconciliation with data protection and safety standards, but without a standalone framework.
China has advanced technical frameworks like Alibaba’s Qwen-Agent for building agents, but governance remains embedded in general AI ethics guidelines rather than agent-specific policies, with a focus on state control and alignment with national interests.
Global initiatives, such as the AI Governance Network (AIGN)'s conceptual Agentic AI Governance Framework or PwC's control considerations for agentic AI, offer high-level guidance but lack the country-level specificity and practical depth of Singapore's approach. This absence underscores Singapore's proactive stance: it contrasts both with the EU's prescriptive, binding regulations, which some experts criticise as potentially stifling innovation, and with the US's fragmented, voluntary approach, which risks uneven adoption.
Analysts at the Center for Strategic and International Studies (CSIS) note that varying definitions of "agentic AI" complicate governance globally and could lead to fragmented regulation.
Yoshua Bengio, a leading AI researcher and Turing Award winner, has emphasised the need for international coordination on autonomous AI risks, praising adaptive frameworks like Singapore's for balancing innovation with safeguards while warning that, without global oversight, power could concentrate in a handful of AI giants.
Research houses such as Gartner highlight that without dedicated agentic governance, organisations risk over-reliance on general AI policies, which may not address unique threats like tool misuse or cascading errors in multi-agent systems.
In this blog, we'll trace Singapore's AI governance timeline chronologically, detailing key milestones, technical components, and insights from experts and research firms. This narrative illustrates how Singapore has built a layered, evolving ecosystem that prioritises trustworthiness while fostering AI adoption, often in contrast to more rigid or laissez-faire approaches elsewhere.

The Foundations: Launching the Model AI Governance Framework for Traditional AI (2019–2020)

Singapore's AI governance story begins with a national vision to harness AI for economic growth while mitigating risks. In November 2019, the government unveiled its National AI Strategy (NAIS), aiming to invest S$500 million over five years in AI research, talent development, and industry adoption. This set the stage for governance, emphasising ethical AI use across sectors like healthcare, finance, and transportation.

The cornerstone was the Model AI Governance Framework (MGF), first released in January 2019 by IMDA and the Personal Data Protection Commission (PDPC). This voluntary framework targeted traditional AI (systems focused on prediction, classification, and optimisation), providing guidelines for organisations to deploy AI responsibly. It was structured around four principles:
- Human-Centricity: Ensuring AI decisions are explainable, transparent, and fair to build trust.
- Internal Governance: Establishing accountability structures, such as AI ethics boards and risk assessments.
- Data Management: Guidelines for secure, high-quality data handling under Singapore’s Personal Data Protection Act (PDPA).
- Stakeholder Interaction: Promoting transparency with users and regulators.
The framework included practical tools like job redesign guides for AI-impacted roles and case studies from early adopters. A second edition followed in January 2020, incorporating feedback from over 100 organisations and international experts. Updates emphasised bias mitigation techniques (e.g., diverse training datasets) and robustness testing (e.g., adversarial simulations); a toy sketch of the robustness idea follows.
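To make that last idea concrete, here's a toy sketch of a perturbation-based robustness check, in the spirit of the adversarial simulations the framework mentions. It is purely illustrative: the model, the noise budget `epsilon`, and the scoring are invented for this post, not drawn from the MGF or any IMDA tool.

```python
import numpy as np

def perturbation_stability(model, X, epsilon=0.05, trials=100, seed=0):
    """Estimate how often small random input perturbations leave a
    model's predictions unchanged -- a crude proxy for robustness."""
    rng = np.random.default_rng(seed)
    baseline = model(X)                        # predictions on clean inputs
    stable = 0
    for _ in range(trials):
        noise = rng.uniform(-epsilon, epsilon, size=X.shape)
        if np.array_equal(model(X + noise), baseline):
            stable += 1
    return stable / trials                     # fraction of unchanged trials

# Toy usage: a linear "classifier" over 2-D inputs, with one point
# deliberately placed near the decision boundary.
def model(X):
    return (X @ np.array([1.0, -1.0]) > 0).astype(int)

X = np.array([[0.9, 0.1], [0.2, 0.8], [0.51, 0.49]])
print(f"stability score: {perturbation_stability(model, X):.2f}")
```

A real evaluation would use gradient-based adversarial attacks rather than random noise, but even this crude check surfaces brittle predictions near decision boundaries.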
In contrast, during this period the EU was developing its AI Act proposal (formally introduced in 2021), which took a more regulatory stance with prohibitions on certain high-risk uses, while the US focused on voluntary NIST guidelines without enforcement. Expert perspectives at the time were largely laudatory.
Andrew Ng, co-founder of Coursera and a prominent AI advocate, commended Singapore's approach for being "innovation-friendly" without heavy regulation, contrasting it with more prescriptive models like the early EU proposals. McKinsey reports from 2020 noted that Singapore's framework accelerated AI adoption in Asia-Pacific, with 60% of surveyed firms reporting improved governance practices. However, some critics, including researchers from Singapore Management University (SMU), argued it lacked enforcement teeth, relying as it did on voluntary compliance.
Geoffrey Hinton, often called the "Godfather of AI," warned early about the dangers of unregulated AI, advocating that governments dedicate resources to safety research to prevent existential risks; this view aligns with Singapore's emphasis on ethical foundations but pushes for stronger global regulation. Fei-Fei Li, the "Godmother of AI," echoed this by calling for human-centred governance that prioritises dignity and well-being, critiquing sensationalist narratives and favouring pragmatic, science-driven policies over ideological ones.
Building Trust: AI Verify and the Shift to Generative AI (2023–2024)
By 2023, generative AI (GenAI) exploded globally with tools like ChatGPT, prompting Singapore to update its strategy. In December 2023, NAIS 2.0 was launched, allocating S$1 billion for advanced AI compute infrastructure, talent pipelines, and public-private partnerships. This included the AI Verify Foundation, established in May 2023 as a non-profit collaboration between IMDA and industry leaders like IBM and Google. AI Verify introduced the world's first AI governance testing framework and toolkit, open-sourced for global use. It featured 11 test criteria (e.g., safety, fairness, robustness) and a process for benchmarking GenAI models against risks like hallucinations, toxicity, and bias. Over 100 companies piloted it by mid-2024.

The Model AI Governance Framework for Generative AI was proposed in January 2024 and finalised in May 2024 by IMDA and AI Verify. Building on the original MGF, it addressed GenAI-specific risks:
- Nine Dimensions: From accountability (e.g., watermarking outputs) to content provenance (tracking data origins) and security (protecting against prompt injections).
- Risk Tiering: Classifying models by deployment scale and sensitivity (see the sketch after this list).
- Evaluation Tools: Catalogs for safety testing, including red-teaming exercises.
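The framework describes risk tiering in prose rather than code, but the logic is easy to picture. Below is a minimal, hypothetical sketch of how an organisation might encode it; the tier names, user thresholds, and review actions are all invented for illustration, not taken from IMDA's documents.

```python
from enum import Enum

class Tier(Enum):
    LOW = "self-assessment"
    MEDIUM = "internal review + safety testing"
    HIGH = "red-teaming + third-party evaluation"

def risk_tier(monthly_users: int, handles_sensitive_data: bool) -> Tier:
    """Map deployment scale and data sensitivity to a review tier.
    Thresholds are illustrative, not taken from the framework."""
    if handles_sensitive_data or monthly_users > 1_000_000:
        return Tier.HIGH
    if monthly_users > 10_000:
        return Tier.MEDIUM
    return Tier.LOW

print(risk_tier(500, False).value)          # self-assessment
print(risk_tier(2_000_000, True).value)     # red-teaming + third-party evaluation
```

The point of tiering is proportionality: a small internal pilot should not carry the same review burden as a model exposed to millions of users or to sensitive data.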
Comparatively, China's approach to GenAI involves strict state oversight, requiring AI models to align with socialist values, while Japan's soft-law guidelines promote self-regulation. International experts hailed Singapore's toolkit as a "global trustworthy AI solution." The World Economic Forum (WEF) spotlighted it in 2023, with Singapore's Minister Josephine Teo noting its role in demonstrating verifiable AI deployment. Fei-Fei Li praised the framework's emphasis on transparency, arguing it sets a benchmark for Asia-Pacific nations amid US-China tensions. Deloitte analyses from 2024 suggested Singapore's approach reduced GenAI adoption barriers, with 70% of regional firms citing improved confidence in ethical use. Critiques from Cambridge researchers pointed to potential over-reliance on self-assessment, advocating for third-party audits.

Mo Gawdat, former Google X executive and AI ethicist, views GenAI as a mirror of humanity's flaws, warning of 15 years of chaos without ethical governance, and calls for symbiotic human-AI development to mitigate job disruptions and societal risks. Erik Brynjolfsson, Stanford economist, highlights GenAI's productivity boosts but stresses regulatory needs to address workforce impacts, advocating co-governance models that blend top-down and decentralised approaches.
Addressing Autonomy: The Model AI Governance Framework for Agentic AI (2025–2026)
As AI advanced to agentic systems—capable of multi-step planning, tool use, and interaction—Singapore anticipated new risks like unauthorised actions, memory poisoning, and automation bias. Drafts for agentic and quantum AI governance emerged in October 2025, focusing on resilience against emerging threats.
The culmination was the Model AI Governance Framework for Agentic AI (MGF for Agentic AI), unveiled on January 22, 2026, at the WEF in Davos by Minister Josephine Teo.
This 1.0 version, described as the world’s first comprehensive guide for agentic AI, extends prior frameworks to cover deployment of in-house or third-party agents. Key components include:
- Core Elements: Definitions of agent architecture (e.g., reasoning engines like LLMs, tools for actions, protocols for multi-agent communication).
- Risk Mitigation: Four pillars—assess/bound risks upfront (e.g., threat modeling, sandboxing); ensure human accountability (e.g., oversight checkpoints); implement technical controls (e.g., plan reflection, robustness testing); enable end-user responsibility (e.g., transparency on capabilities).
- Practical Recommendations: Pre-deployment testing for workflow reliability, post-deployment monitoring for anomalies, and graduated rollouts (a minimal guardrail sketch follows this list).
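To ground the pillars, here is a minimal guardrail sketch: a tool-call wrapper that combines an allowlist sandbox, a human approval checkpoint for high-impact actions, and an audit log. Every name in it (the tools, the `approve` callback) is hypothetical, invented for this post rather than taken from the framework.

```python
from typing import Callable

ALLOWED_TOOLS = {"search_docs", "draft_email"}    # sandbox: explicit allowlist
HIGH_IMPACT = {"send_email", "transfer_funds"}    # actions that need a human

def guarded_call(tool: str, args: dict,
                 execute: Callable[[str, dict], str],
                 approve: Callable[[str, dict], bool]) -> str:
    """Route an agent's tool call through two of the pillars: bounding
    risk upfront (allowlist) and human accountability (approval
    checkpoint for high-impact actions)."""
    if tool in HIGH_IMPACT:
        if not approve(tool, args):                # oversight checkpoint
            return f"BLOCKED: human declined {tool}"
    elif tool not in ALLOWED_TOOLS:
        return f"BLOCKED: {tool} is outside the sandbox"
    result = execute(tool, args)
    print(f"[audit] {tool}({args}) -> {result!r}")  # post-deployment monitoring
    return result

# Toy usage with stand-in callbacks.
def execute(tool: str, args: dict) -> str:
    return f"{tool} ok"

def approve(tool: str, args: dict) -> bool:
    return False                                    # the human says no

print(guarded_call("search_docs", {"q": "policy"}, execute, approve))
print(guarded_call("transfer_funds", {"amt": 10_000}, execute, approve))
```

A production system would add plan reflection (having the agent re-check its own plan before acting) and anomaly detection over the audit log, but the shape is the same: autonomy inside explicit bounds, with a human at the high-stakes gates.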
Unlike Singapore's voluntary model, the EU AI Act treats agentic AI under its high-risk categories, requiring conformity assessments, while the UK's pro-innovation stance mirrors Singapore's flexibility but lacks agent-specific detail.

Early reactions are positive. AI expert Jorge Maestre Vidal called it "one of the most comprehensible works on governance in AI autonomous agents," highlighting its relevance for defence applications like DARPA's CASTLE. Paul Gurnett, an AI governance enthusiast, praised it for providing "guardrails" through best practices and risk controls. Research from CSIS warns that without such frameworks, agentic AI could exacerbate global governance gaps.
Demis Hassabis, CEO of Google DeepMind, urges caution on agentic AI’s rapid progress, supporting pauses in development if competitors agree, to allow society and regulations to catch up, while emphasising its potential for radical abundance if governed well. Dario Amodei, CEO of Anthropic, describes agentic AI’s phase as the “adolescence of technology,” warning of a 25% risk of catastrophic outcomes without a mix of voluntary company actions and government interventions to handle unimaginable power. Forrester predicts Singapore’s model will influence ASEAN partners, with 50% adoption in regional enterprises by 2028.
Looking Ahead: Singapore’s Lasting Impact
Singapore's timeline, from the 2019 MGF for traditional AI (second edition in 2020), through the GenAI update in 2024, to agentic AI in 2026, demonstrates an iterative, stakeholder-driven approach. By avoiding rigid laws and focusing on voluntary, testable frameworks, it has earned acclaim for flexibility, differing from China's control-oriented model and the EU's compliance-heavy regime. As agentic AI proliferates, experts like Bengio urge global alignment, positioning Singapore as a blueprint. Challenges remain, such as enforcement and international harmonisation, but its story is one of foresight in an uncertain AI landscape. Voices like Hinton's call for regulation to prevent loss of control, and Amodei's emphasis on balancing innovation with safeguards, underline the urgency of adaptive governance.