AI Governance – the Singapore Story (thus far…)

The rapid evolution of artificial intelligence continues to reshape economies, societies, and global power dynamics. As we stand in early 2026, the governance of AI has shifted from theoretical debate to urgent practical necessity. Jurisdictions worldwide are navigating a delicate balance: fostering innovation and economic advantage while addressing risks from bias and misuse to existential threats posed by advanced systems.

The global landscape remains fragmented yet increasingly sophisticated. The European Union leads with its comprehensive EU AI Act, the world's first binding horizontal regulation, with obligations for most high-risk systems scheduled to apply from August 2026. It classifies AI by risk level (prohibited, high-risk, limited-risk, and minimal-risk), imposing strict obligations on providers and deployers for transparency, conformity assessments, and human oversight. This risk-based, prescriptive approach prioritises fundamental rights and safety but has drawn criticism for the compliance burden it may place on startups.

In contrast, the United States maintains a flexible, innovation-oriented stance without comprehensive federal legislation. A patchwork of executive orders (such as those emphasising safe AI development and national security), NIST’s voluntary Risk Management Framework, sector-specific rules (e.g., FDA for medical AI), and accelerating state-level laws prevails. Recent federal actions underscore a pro-growth posture, with efforts to limit overly restrictive state measures. Stanford HAI’s AI Index Report 2025 highlights this dynamism: AI-related federal regulations doubled in 2024, while state legislation surged, reflecting growing policy urgency amid slow federal progress.

China adopts a centralised, state-driven model, building on its 2017 New Generation AI Development Plan to aim for global leadership by 2030. Regulations focus on generative AI, requiring registration, content labelling, security reviews, and alignment with socialist values—emphasising control, cybersecurity, and national interests over individual rights.

The United Kingdom pursues a principles-based, pro-innovation framework, avoiding prescriptive laws in favour of sector-specific guidance and voluntary commitments, positioning itself as a hub for responsible AI.

Other jurisdictions and bodies add to the mix: emerging frameworks in South Korea and Vietnam, international efforts via the OECD, the G7 Hiroshima Process, and UN dialogues, and a growing emphasis on AI safety institutes and cooperative networks.

Amid this diversity, Singapore stands out as a pragmatic, innovation-first exemplar. Rooted in the Smart Nation vision since 2014, Singapore treats AI as a frontier technology for “quantum leaps” in productivity across high-impact sectors like healthcare, finance, and logistics. No AI-specific legislation exists; instead, a suite of voluntary, practical tools builds trust and enables deployment.

Key milestones trace this journey:

2018: Monetary Authority of Singapore’s FEAT Principles for fairness, ethics, accountability, and transparency in financial AI.

2019: Model AI Governance Framework (updated 2020), voluntary guidance on internal structures, human involvement, operations, and communication—drawing from OECD, EU, and IEEE principles.

2019/2023: National AI Strategy (launched in 2019, refreshed as NAIS 2.0 in 2023), focusing on ecosystem strengthening, talent cultivation, infrastructure, and agile, risk-based interventions.

2022: AI Verify, a self-assessment toolkit testing trustworthiness against 11 global principles.

2023: AI Verify Foundation, advancing open-source benchmarks and tools.

2024: Model AI Governance Framework for Generative AI, and the Singapore AI Safety Institute for safety research and multilingual evaluation.

2026: Landmark Model AI Governance Framework for Agentic AI (MGF), launched in January at the World Economic Forum. This world-first addresses autonomous "agentic" systems capable of independent reasoning, planning, and multi-step actions. It outlines four dimensions: upfront risk assessment and bounded autonomy, meaningful human accountability, technical controls (oversight, traceability, sandboxing), and end-user transparency. Voluntary and iterative (Version 1.0 invites feedback), it embodies Singapore's agile ethos: guardrails without stifling progress. The sketch below gives a sense of what such technical controls might look like in practice.
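
To give the technical-controls dimension some shape, here is a minimal, hypothetical Python sketch of bounded autonomy with a human approval gate and an audit trail. The `BoundedAgent` class, its action names, and the approval rule are illustrative assumptions on my part, not drawn from the MGF itself.

```python
import logging
from dataclasses import dataclass, field
from typing import Callable

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("agent-audit")

@dataclass
class BoundedAgent:
    """Hypothetical wrapper enforcing MGF-style guardrails around an agent."""
    allowed_actions: set[str]               # bounded autonomy: scope fixed upfront
    requires_approval: set[str]             # high-impact actions gated by a human
    approver: Callable[[str, dict], bool]   # human-accountability hook
    audit_trail: list[dict] = field(default_factory=list)  # traceability

    def act(self, action: str, params: dict) -> str:
        """Check an action against the guardrails before (notionally) running it."""
        if action not in self.allowed_actions:
            outcome = "blocked: outside bounded scope"
        elif action in self.requires_approval and not self.approver(action, params):
            outcome = "blocked: human approval withheld"
        else:
            outcome = "executed"  # a real system would dispatch to a sandboxed tool
        self.audit_trail.append({"action": action, "params": params, "outcome": outcome})
        log.info("%s -> %s", action, outcome)
        return outcome

# Usage: the agent may quote rates freely but needs sign-off to move large sums.
agent = BoundedAgent(
    allowed_actions={"quote_fx_rate", "transfer_funds"},
    requires_approval={"transfer_funds"},
    approver=lambda action, params: params.get("amount", 0) < 10_000,
)
agent.act("quote_fx_rate", {"pair": "SGD/USD"})   # executed
agent.act("transfer_funds", {"amount": 50_000})   # blocked: approval withheld
agent.act("delete_records", {})                   # blocked: outside scope
```

The design point mirrors the framework's emphasis: the agent's scope is fixed upfront, high-impact actions route through a human, and every decision leaves a trace.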

A significant recent development in the financial sector is the Monetary Authority of Singapore's (MAS) proposed Guidelines on Artificial Intelligence (AI) Risk Management (often referred to as AIRG or MAS AI RG 2025), issued for public consultation on 13 November 2025. These guidelines apply to all financial institutions (FIs) and build on the existing FEAT principles by setting supervisory expectations across four areas: oversight of AI risk management; key systems, policies, and procedures; lifecycle controls covering data, fairness, monitoring, explainability, and third-party risks; and capabilities and capacity for AI use, including generative AI and emerging AI agents. They emphasise board and senior management accountability, AI use case inventories, risk materiality assessments, proportionate application based on an FI's size and risk profile, and a holistic lifecycle approach. The consultation closed on 31 January 2026, with a proposed 12-month transition period after finalisation, signalling a shift toward more structured yet proportionate governance in finance that complements national horizontal frameworks.
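
To make the inventory-and-materiality idea concrete, here is a toy Python sketch of an AI use case register with proportionate risk tiers. The fields, scoring scheme, and thresholds are illustrative assumptions, not MAS's methodology.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in a hypothetical FI-wide AI use case inventory."""
    name: str
    business_line: str
    model_type: str         # e.g. "generative", "traditional ML", "AI agent"
    customer_impact: int    # 1 (low) .. 5 (high), scored by the business owner
    autonomy: int           # 1 (human decides) .. 5 (acts independently)
    data_sensitivity: int   # 1 (public) .. 5 (regulated personal/financial data)

    def materiality(self) -> str:
        """Toy risk-materiality tiering; real methodologies would be richer."""
        score = self.customer_impact + self.autonomy + self.data_sensitivity
        if score >= 12:
            return "high"    # e.g. full lifecycle controls, board visibility
        if score >= 8:
            return "medium"  # e.g. periodic monitoring and fairness review
        return "low"         # e.g. lightweight documentation

inventory = [
    AIUseCase("chatbot triage", "retail", "generative", 3, 2, 3),
    AIUseCase("credit scoring", "lending", "traditional ML", 5, 4, 5),
]
for uc in inventory:
    print(f"{uc.name}: {uc.materiality()}")  # chatbot triage: medium; credit scoring: high
```

The point of the tiers is proportionality: controls scale with an AI system's potential impact rather than applying uniformly across every use case.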

Standardisation supports this through the Singapore Standards Council, adopting ISO/IEC standards and developing national references on AI security and use cases.

Singapore’s approach resonates globally. The World Economic Forum highlights agile governance as essential for scaling AI responsibly in the agentic era, stressing transparency, real-time monitoring, and assurance. Stanford HAI’s 2025 report notes surging policy activity worldwide, with legislative mentions rising sharply.

Experts reinforce urgency: Geoffrey Hinton and Yoshua Bengio warn of existential risks, advocating robust safety and coordination. Demis Hassabis emphasises alignment and testing. Max Tegmark calls for proactive treaties, while Erik Brynjolfsson focuses on economic transformation via responsible scaling. White papers from Anthropic (responsible scaling policies), DeepMind, and OpenAI stress iterative risk assessment and red-teaming—principles mirrored in Singapore’s toolkits and MGF.

Research institutes such as Mila advance work on ethical AI and bias mitigation, complementing Singapore's focus on multilingual evaluation.

Singapore’s model—voluntary frameworks, testing sandboxes, iterative updates, ecosystem-building—offers a compelling middle path: innovation thrives when trust is assured. As agentic and frontier AI accelerate, global alignment remains vital to prevent fragmentation and capture shared benefits.

I see AI governance as an enabler: trusted systems accelerate adoption, unlock value, and deliver societal gains. Singapore’s journey, from foundational frameworks to agentic guidance, exemplifies practical, collaborative leadership.

What are your thoughts on the global divergence in AI governance—and where might convergence emerge? I’d love to hear in the comments.
