AI Governance – the Singapore Story (thus far…)

The rapid evolution of artificial intelligence continues to reshape economies, societies, and global power dynamics. As we stand in early 2026, the governance of AI has shifted from theoretical debate to urgent practical necessity. Jurisdictions worldwide are navigating a delicate balance: fostering innovation and economic advantage while addressing risks from bias and misuse to existential threats posed by advanced systems.

The global landscape remains fragmented yet increasingly sophisticated. The European Union leads with its comprehensive EU AI Act, the world’s first binding horizontal regulation, fully phased in by mid-2026 for high-risk systems. It classifies AI by risk levels—prohibited, high-risk, limited-risk, and minimal-risk—imposing strict obligations on providers and deployers for transparency, conformity assessments, and human oversight. This risk-based, prescriptive approach prioritises fundamental rights and safety but has drawn criticism for potential compliance burdens on startups.

In contrast, the United States maintains a flexible, innovation-oriented stance without comprehensive federal legislation. A patchwork of executive orders (such as those emphasising safe AI development and national security), NIST’s voluntary Risk Management Framework, sector-specific rules (e.g., FDA for medical AI), and accelerating state-level laws prevails. Recent federal actions underscore a pro-growth posture, with efforts to limit overly restrictive state measures. Stanford HAI’s AI Index Report 2025 highlights this dynamism: AI-related federal regulations doubled in 2024, while state legislation surged, reflecting growing policy urgency amid slow federal progress.

China adopts a centralised, state-driven model, building on its 2017 New Generation AI Development Plan to aim for global leadership by 2030. Regulations focus on generative AI, requiring registration, content labelling, security reviews, and alignment with socialist values—emphasising control, cybersecurity, and national interests over individual rights.

The United Kingdom pursues a principles-based, pro-innovation framework, avoiding prescriptive laws in favour of sector-specific guidance and voluntary commitments, positioning itself as a hub for responsible AI.

Other jurisdictions contribute as well: emerging frameworks in South Korea and Vietnam; international efforts via the OECD, the G7 Hiroshima Process, and UN dialogues; and a growing emphasis on AI safety institutes and cooperative networks.

Amid this diversity, Singapore stands out as a pragmatic, innovation-first exemplar. Rooted in the Smart Nation vision since 2014, Singapore treats AI as a frontier technology for “quantum leaps” in productivity across high-impact sectors like healthcare, finance, and logistics. No AI-specific legislation exists; instead, a suite of voluntary, practical tools builds trust and enables deployment.

Key milestones trace this journey:

2018: Monetary Authority of Singapore’s FEAT Principles for fairness, ethics, accountability, and transparency in financial AI.

2019: Model AI Governance Framework (updated 2020), voluntary guidance on internal structures, human involvement, operations, and communication—drawing from OECD, EU, and IEEE principles.

2019/2023: National AI Strategy (launched 2019; refreshed as NAIS 2.0 in 2023), focusing on ecosystem strengthening, talent cultivation, infrastructure, and agile, risk-based interventions.

2022: AI Verify, a self-assessment toolkit testing trustworthiness against 11 global principles.

2023: AI Verify Foundation, advancing open-source benchmarks and tools.

2024: Model AI Governance Framework for Generative AI, alongside the launch of the Singapore AI Safety Institute for safety research and multilingual evaluation.

2026: Landmark Model AI Governance Framework for Agentic AI (MGF), launched in January at the World Economic Forum. This world-first addresses autonomous “agentic” systems capable of independent reasoning, planning, and multi-step actions. It outlines four dimensions: upfront risk assessment and bounded autonomy, meaningful human accountability, technical controls (oversight, traceability, sandboxing), and end-user transparency. Voluntary and iterative (Version 1.0 invites feedback), it embodies Singapore’s agile ethos—providing guardrails without stifling progress.
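Purely as an illustration, the four dimensions can be pictured as guardrails wrapped around an agent's action loop. The sketch below is hypothetical in every detail (class and method names, thresholds, actions); nothing in it is prescribed by the MGF itself:

```python
from dataclasses import dataclass, field

@dataclass
class GuardedAgent:
    """Illustrative sketch of MGF-style guardrails around an agentic system.

    Loosely maps to the framework's four dimensions: bounded autonomy,
    human accountability, technical controls (traceability), and
    end-user transparency. Hypothetical names throughout.
    """
    allowed_actions: set          # bounded autonomy: pre-approved action types
    risk_threshold: float         # actions at or above this need human sign-off
    audit_log: list = field(default_factory=list)  # traceability

    def execute(self, action: str, risk: float, human_approved: bool = False) -> str:
        # Upfront risk assessment and bounded autonomy: reject out-of-scope actions.
        if action not in self.allowed_actions:
            self.audit_log.append((action, risk, "blocked: out of scope"))
            return "blocked"
        # Meaningful human accountability: escalate high-risk actions to a person.
        if risk >= self.risk_threshold and not human_approved:
            self.audit_log.append((action, risk, "escalated for approval"))
            return "needs_approval"
        # Technical controls: every executed action is logged for later audit.
        self.audit_log.append((action, risk, "executed"))
        return "executed"

    def disclosure(self) -> str:
        # End-user transparency: a plain-language summary of the agent's activity.
        executed = sum(1 for _, _, status in self.audit_log if status == "executed")
        return f"This AI agent performed {executed} action(s); all decisions are logged."
```

A routine, low-risk action runs autonomously; an out-of-scope or high-risk one is blocked or escalated, and every outcome leaves an audit trail. The value of framing governance this way is that each of the four dimensions becomes a testable control rather than an abstract principle.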

A significant recent development in the financial sector is the Monetary Authority of Singapore’s (MAS) proposed Guidelines on Artificial Intelligence (AI) Risk Management (often referred to as AIRG or MAS AI RG 2025), issued for public consultation on 13 November 2025. These guidelines apply to all financial institutions (FIs) and build on the existing FEAT principles by setting supervisory expectations for oversight of AI risk management, key systems/policies/procedures, lifecycle controls (covering data, fairness, monitoring, explainability, and third-party risks), and capabilities/capacity for AI use—including generative AI and emerging AI agents. They emphasise board/senior management accountability, AI use case inventories, risk materiality assessments, proportionate application based on FI size and risk profile, and a holistic lifecycle approach. The consultation closed on 31 January 2026, with a proposed 12-month transition period post-finalisation, signalling a shift toward more structured, yet proportionate, governance in finance while complementing national horizontal frameworks.

Standardisation supports this through the Singapore Standards Council, adopting ISO/IEC standards and developing national references on AI security and use cases.

Singapore’s approach resonates globally. The World Economic Forum highlights agile governance as essential for scaling AI responsibly in the agentic era, stressing transparency, real-time monitoring, and assurance. Stanford HAI’s 2025 report notes surging policy activity worldwide, with legislative mentions rising sharply.

Experts reinforce urgency: Geoffrey Hinton and Yoshua Bengio warn of existential risks, advocating robust safety and coordination. Demis Hassabis emphasises alignment and testing. Max Tegmark calls for proactive treaties, while Erik Brynjolfsson focuses on economic transformation via responsible scaling. White papers from Anthropic (responsible scaling policies), DeepMind, and OpenAI stress iterative risk assessment and red-teaming—principles mirrored in Singapore’s toolkits and MGF.

Institutions like Mila advance ethical AI and bias mitigation, aligning with Singapore’s multilingual focus.

Budget 2026, Workforce Transformation, and the Next Chapter of AI Governance

If the earlier years of AI governance in Singapore were about principles, frameworks and voluntary guidance, Budget 2026 signals something more decisive: national-scale execution.

This year’s Budget moves beyond rhetoric and into coordinated economic repositioning. AI is no longer framed merely as a digital capability — it is positioned as a foundational growth engine, a productivity multiplier, and a strategic pillar of national resilience.

AI as Economic Infrastructure

Budget 2026 reinforces three structural priorities:

Enterprise AI adoption at scale. Expanded funding mechanisms and co-investment schemes aim to accelerate AI deployment across SMEs and large enterprises alike. The emphasis is not just experimentation, but operationalisation — embedding AI into core workflows, supply chains, customer journeys and regulatory processes.

Compute, data and digital infrastructure. Continued investment in sovereign AI capabilities, secure data environments, and trusted compute infrastructure signals that AI capacity is now viewed as economic infrastructure — akin to ports, aviation hubs, or financial markets.

Trusted AI as a competitive advantage. Singapore’s longstanding approach — voluntary yet structured governance — is being translated into a differentiator. In an increasingly fragmented global environment, trust is becoming exportable.

This is not industrial policy in the traditional sense. It is the shaping of a national AI ecosystem grounded in governance credibility.

Workforce: From Displacement Anxiety to Capability Acceleration

Perhaps more importantly, Budget 2026 places substantial weight on workforce transformation.

AI is not treated as a labour-reducing force but as a capability amplifier. The salient aspects include:

Expanded AI upskilling pathways across technical and non-technical roles

Stronger integration of AI literacy into mid-career transitions

Enhanced partnerships between industry, Institutes of Higher Learning, and government agencies

Support for companies to redesign jobs — not merely automate them

This reflects a subtle but critical philosophical stance:

AI policy must integrate economic competitiveness with human dignity.

In my own framing of HX — Human Experience as the integration of CX and EX — this is precisely the moment that matters. If enterprises deploy AI purely for efficiency, trust erodes. If they deploy AI to augment employees, elevate decision-making, and reduce cognitive load, then productivity and purpose align.

Singapore’s workforce strategy recognises this.

Institutional Leadership: The National AI Council

Governance, however, requires orchestration.

The appointment of Prime Minister Lawrence Wong as Chair of the National AI Council (NAIC) is not symbolic. It reflects a structural reality: AI is no longer a sectoral issue. It is whole-of-government, whole-of-economy, and whole-of-society.

By placing AI governance under direct Prime Ministerial oversight, Singapore signals three things:

AI is strategic, not tactical.

Coordination across ministries is essential.

Governance must evolve alongside capability.

The NAIC’s role — aligning safety, innovation, talent, infrastructure and economic strategy — ensures that the Model AI Governance Framework, AI Verify initiatives, and the recent Agentic AI guidance are not isolated artefacts, but components of an integrated national approach.

From Frameworks to National Positioning

When viewed in totality, Budget 2026 marks a transition:

From governance design → to governance deployment

From AI pilots → to AI productivity

From AI ethics → to AI-enabled national competitiveness

In a world where regulatory regimes are diverging — the EU formalising through legislation, the United States navigating federal and state fragmentation, China advancing state-directed controls — Singapore continues to chart a pragmatic path.

Flexible. Structured. Trust-driven. Economically grounded.

The next phase will not be judged by white papers or frameworks, but by outcomes:

Are enterprises measurably more productive?

Are workers meaningfully more capable?

Does trust scale alongside autonomy — particularly in the age of Agentic AI?

Budget 2026 suggests that Singapore intends to answer these questions not in theory, but in execution.

And that, perhaps, is the most significant governance shift of all.

Singapore’s model—voluntary frameworks, testing sandboxes, iterative updates, ecosystem-building—offers a compelling middle path: innovation thrives when trust is assured. As agentic and frontier AI accelerate, global alignment remains vital to prevent fragmentation and capture shared benefits.

I see AI governance as an enabler: trusted systems accelerate adoption, unlock value, and deliver societal gains. Singapore’s journey, from foundational frameworks to agentic guidance, exemplifies practical, collaborative leadership.

What are your thoughts on the global divergence in AI governance—and where might convergence emerge? I’d love to hear in the comments.
