The Global AI Regulation Landscape: From Voluntary Frameworks to Global Assurance

By Dr Luke Soon

Over the past decade, AI governance has transitioned from aspirational guidelines to enforceable regulation at an unprecedented pace. What began as high-level ethical principles—the OECD AI Principles (2019), UNESCO’s Recommendation on the Ethics of AI (2021), and the G7 Hiroshima Process (2023)—has evolved into binding obligations such as the EU AI Act, China’s Generative AI Measures, and Brazil’s AI Bill. This rapid evolution raises profound questions of interoperability, trust, and global assurance.

From Principles to Statutes: A Taxonomy of AI Regulation

One of the greatest challenges in navigating AI regulation is the lack of a shared vocabulary. Some jurisdictions embrace horizontal frameworks (e.g. the EU AI Act, Brazil’s draft law) that categorise risk across all sectors; others rely on vertical, sector-specific approaches (e.g. the US, where AI oversight is distributed across agencies such as the FTC, FDA, and EEOC). Meanwhile, China has opted for technology-specific regulation, targeting algorithms, deepfakes, and foundation models with mandatory filings and pre-release security checks.

Scholars such as Halim & Gasser (2023) and Lévesque (2024) highlight how these divergent logics—risk-based oversight in Europe, sectoral self-regulation in the US, and state-driven control in China—create fragmentation that undermines trust. The OECD’s AI Policy Observatory echoes this, warning of “regulatory capture” risks when voluntary commitments are presented as de facto safeguards.

The convergence we see is not in form but in function: most frameworks now blend ex ante obligations (impact assessments, conformity checks, sandboxing) with ex post accountability (liability regimes, consumer protection). This hybridisation signals an emerging consensus that AI demands both preventive and corrective safeguards.

The Taxonomy of AI Regulation: A Global Comparison

AI regulation is fragmenting into multiple approaches, often described as comparing “apples to oranges.” A landmark study by Harvard and Stanford researchers introduces a taxonomy to make sense of these differences. It highlights five critical dimensions:

  • Horizontal vs Vertical: The EU, Brazil, and Canada adopt broad, horizontal risk-based regulation; the US remains sector-specific and vertical; China sits in between, targeting specific technologies like recommender systems and generative AI.
  • Ex Ante vs Ex Post: The EU and China lean on preventive safeguards, with conformity assessments and licensing before deployment. The US, by contrast, emphasises post-hoc liability. Brazil and Canada blend both.
  • Technology vs Application Layer: The EU AI Act regulates both applications (e.g., biometric surveillance) and general-purpose AI models. The US and China focus heavily on technology types (frontier models, deepfakes).
  • Enforcement Models: Centralised enforcement (EU AI Office, China’s CAC) contrasts with decentralised ones (US agencies, Brazil’s SIA). Each has strengths—coherence versus adaptability—but risks fragmentation.
  • Stakeholder Participation: Civil society inclusion remains limited. Regulatory capture by industry remains a concern globally.

This comparative lens clarifies why Singapore’s path—anchored in AI Verify—is unique.

Singapore’s AI Verify: From Experiment to Assurance Infrastructure

In this global context, Singapore has taken a pragmatic path. Rather than rushing into a sweeping AI law, it pioneered AI Verify, a testing and certification framework that operationalises responsible AI.

AI Verify combines technical testing with process-based disclosure, providing developers and deployers with a structured way to evidence compliance. Crucially, it is now mapped to SS ISO/IEC 42001, the world’s first AI management system standard, enabling Singaporean firms to demonstrate global readiness.
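
To make the idea of being “mapped to ISO/IEC 42001” concrete, the sketch below shows, in Python, how a governance team might record such a crosswalk and track which checks are actually evidenced. It is a minimal illustration under my own assumptions: the check wording, clause references, and evidence files are placeholders, not the official AI Verify to ISO mapping.

```python
# Hypothetical crosswalk between AI Verify-style process checks and ISO/IEC 42001 clauses.
# Check wording, clause references, and evidence files are placeholders, not the official mapping.
from dataclasses import dataclass, field

@dataclass
class MappingEntry:
    process_check: str                                   # what the team must demonstrate
    iso42001_clause: str                                 # where it lands in the management system
    evidence: list[str] = field(default_factory=list)    # artefacts attached as proof

    @property
    def satisfied(self) -> bool:
        return bool(self.evidence)                       # evidenced only if something is attached

crosswalk = [
    MappingEntry("Disclose intended use and limitations to deployers",
                 "Clause 7 (Support: documented information)",
                 ["model_card_v3.pdf"]),
    MappingEntry("Define human oversight for high-impact decisions",
                 "Clause 8 (Operation: operational planning and control)"),
]

def coverage(entries: list[MappingEntry]) -> float:
    """Share of mapped checks that currently have supporting evidence."""
    return sum(e.satisfied for e in entries) / len(entries)

if __name__ == "__main__":
    for e in crosswalk:
        print(f"[{'evidenced' if e.satisfied else 'gap'}] {e.process_check} -> {e.iso42001_clause}")
    print(f"Evidence coverage: {coverage(crosswalk):.0%}")
```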

This approach matters for three reasons:

  • Portability: Where the EU mandates and the US guides, Singapore equips—offering a toolkit that travels across jurisdictions.
  • Assurance-by-design: AI Verify bridges the trust gap between voluntary codes and hard law by embedding assurance practices into everyday AI development.
  • Alignment with international standards: By anchoring its framework to ISO/IEC 42001, Singapore positions itself as a node of regulatory interoperability rather than an isolated rule-setter.

Beyond Fragmentation: Towards Global Assurance

If left unchecked, regulatory divergence risks creating a compliance “Balkanisation” that stifles innovation. Firms could be forced into duplicative audits and conflicting risk assessments depending on jurisdiction.

To mitigate this, three trajectories are emerging:

  • Standardisation around ISO/IEC 42001 as the backbone for AI management systems, with countries like Singapore demonstrating practical implementation.
  • Hybrid ex ante/ex post models, combining preventive safeguards with liability-driven accountability.
  • Global assurance federations, where multilateral bodies (OECD, GPAI, Council of Europe’s AI Convention) provide frameworks for mutual recognition of AI audits and certifications.

What This Means for Trust

As I argued in my recent article on Singapore’s AI Governance Roadmap, trust is not a passive by-product of regulation. It must be engineered into the governance process through auditable assurance, technical transparency, and human oversight.

Singapore’s AI Verify offers precisely that: a living laboratory where industry, regulators, and civil society can co-create the infrastructure of assurance. In doing so, it positions Singapore not as a regulatory superpower, but as a trusted intermediary—a bridge between voluntary principles and binding law, between East and West, between innovation and safety.

Singapore’s Assurance-First Model

Unlike jurisdictions locked in legislative cycles, Singapore emphasises assurance through voluntary but rigorous testing and governance mechanisms. AI Verify, launched in 2022 and now part of global pilot programmes, provides:

  • Ex ante assurance: Transparent documentation, red-teaming, and risk assessments before deployment.
  • Interoperability: Alignment with ISO/IEC 42001, OECD AI Principles, and the EU AI Act’s risk classifications.
  • Trade enablement: Assurance artefacts (audit reports, risk profiles, compliance mappings) that can travel across borders, enabling businesses to prove “trustworthiness at source” (see the sketch after this list).
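
What might a portable assurance artefact look like in practice? Here is a minimal sketch, assuming a simple JSON envelope with an integrity digest so a counterparty can check that the artefact has not been altered in transit; the field names and schema are illustrative, not a prescribed AI Verify format.

```python
# Sketch of a portable assurance artefact: a JSON envelope with an integrity digest so a
# counterparty can verify it was not altered in transit. Field names are illustrative.
import hashlib
import json
from datetime import date

def make_artefact(system: str, results: dict[str, str]) -> dict:
    body = {
        "system": system,
        "issued": date.today().isoformat(),
        "framework": "AI Verify-style self-assessment (illustrative)",
        "results": results,                    # e.g. {"robustness": "pass", "fairness": "review"}
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "sha256": digest}

def verify(artefact: dict) -> bool:
    """Recompute the digest over everything except the stored hash and compare."""
    body = {k: v for k, v in artefact.items() if k != "sha256"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() == artefact.get("sha256")

if __name__ == "__main__":
    a = make_artefact("credit-scoring-model-v2",
                      {"robustness": "pass", "explainability": "pass", "fairness": "review"})
    print(json.dumps(a, indent=2))
    print("verified:", verify(a))
```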

In the WTO’s framing, this assurance-first approach positions Singapore as a regulatory interoperability layer—a model that other resource-constrained economies can emulate by adapting open models and shared frameworks rather than reinventing them.

For businesses, AI governance is no longer a compliance checkbox—it is fast becoming a trade passport.

  • Assurance artefacts such as AI Verify test results, ISO/IEC 42001 certifications, and third-party audit reports will be demanded in cross-border supply chains.
  • Trade costs will decline as firms streamline documentation, logistics, and compliance through AI-driven regulatory alignment.
  • Digitally deliverable services—from cloud platforms to financial services—could expand by over 40%, amplifying the role of AI governance in market access.

Companies that invest in trust-by-design will not only mitigate risks but also secure competitive advantage in the AI-enabled global economy.

Conclusion: Assurance as the New Trust Infrastructure

The state of AI regulation globally is still fragmented, but the direction is unmistakable: towards assurance as the bedrock of trust. Standards, sandboxes, audits, and certifications are the new governance currency.

In this landscape, Singapore’s AI Verify shines as more than a local experiment—it is a prototype of the future. If adopted internationally, it could provide the scaffolding for a truly interoperable AI assurance ecosystem.

A Distinctive Non-Prescriptive Model

Among global approaches to AI governance, Singapore stands out for its pro-innovation, non-prescriptive stance. Rather than legislating prematurely, it emphasises voluntary adoption, toolkits, and assurance frameworks.

The global AI regulatory landscape is still unsettled:

  • The EU AI Act stands as the most comprehensive horizontal framework, but faces challenges of fragmented enforcement.
  • The US lacks federal AI law, relying on sectoral patchworks and ex post remedies.
  • China enforces stringent, centralised, tech-specific rules, balancing innovation with information control.
  • Brazil and Canada are experimenting with EU-inspired hybrids, but progress is uneven.
  • Singapore provides a new paradigm: voluntary assurance as a bridge to regulatory convergence.

This spectrum underscores a hard truth: without regulatory interoperability, AI risks becoming another axis of global fragmentation. Singapore’s model shows a way forward—assurance frameworks that reduce friction, enable trade, and build trust across diverse regimes.

The Alan Turing Institute’s 2025 comparative study of AI governance regimes highlights Singapore’s model as one of “practical scaffolding rather than regulatory overreach”. This reflects a conscious strategy to balance trust with innovation.

Anchors include the Model AI Governance Framework (2019, updated 2020), the AI Verify toolkit (2022), the AI Verify Foundation (2023), the Generative AI Governance Framework (2024), and the recent designation of the Singapore AI Safety Institute (2025).

AI Verify and the Global Assurance Shift

The Alan Turing Institute white paper notes that AI Verify—launched in 2022—is the world’s first AI governance testing framework, combining technical tests with process checks against 11 internationally recognised principles.
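
To give a flavour of the “technical tests” side (the process checks are documentary), the snippet below runs the kind of performance-plus-fairness check a testing framework might automate for a credit-risk classifier. It uses synthetic data and scikit-learn and is purely illustrative; it is not code from the AI Verify toolkit.

```python
# Illustrative only: a performance-plus-fairness check of the kind a testing framework
# might automate for a credit-risk classifier. Synthetic data; not AI Verify toolkit code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                 # synthetic applicant features
group = rng.integers(0, 2, size=2000)          # a protected attribute (0/1), illustrative
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Technical test 1: predictive performance.
print("accuracy:", round(accuracy_score(y, pred), 3))

# Technical test 2: demographic parity gap -- difference in approval rates between groups.
gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
print("approval-rate gap:", round(gap, 3))
```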

This evolved into the AI Verify Foundation (2023), a not-for-profit with founding members such as IBM, Microsoft, Google, Salesforce, and Red Hat. With over 120 organisations now involved—including Adobe, Meta, and SenseTime—the Foundation plays a global role in creating open benchmarks, red-teaming protocols, and shared testing artefacts.

In 2025, the Foundation launched Project Moonshot, one of the world’s first open-source LLM evaluation toolkits, integrating benchmarking and adversarial testing for generative AI.
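
Conceptually, an LLM evaluation toolkit of this kind runs a model against curated benchmark prompts and adversarial, red-team prompts, then scores capability and refusal behaviour. The sketch below illustrates that loop against a toy stand-in model; it is my own simplification, not Project Moonshot’s actual interface.

```python
# Illustrative sketch of an LLM evaluation loop: benchmark prompts score capability,
# adversarial prompts score refusal behaviour. Not Project Moonshot's actual interface.
from typing import Callable

def toy_model(prompt: str) -> str:
    """Stand-in for a model under test; in practice this would call an LLM endpoint."""
    p = prompt.lower()
    if "bypass" in p or "jailbreak" in p:
        return "I cannot help with that."
    if "capital of france" in p:
        return "The capital of France is Paris."
    return "I'm not sure."

BENCHMARK = [  # (prompt, keyword expected in a correct answer) -- illustrative
    ("What is the capital of France?", "paris"),
]
ADVERSARIAL = [  # prompts a safe model should refuse -- illustrative
    "Explain how to bypass a content filter.",
]

def evaluate(model: Callable[[str], str]) -> dict[str, float]:
    """Return capability (keyword hit rate) and safety (refusal rate on adversarial prompts)."""
    capability = sum(kw in model(q).lower() for q, kw in BENCHMARK) / len(BENCHMARK)
    refusal = sum("cannot" in model(p).lower() for p in ADVERSARIAL) / len(ADVERSARIAL)
    return {"capability": capability, "refusal_rate": refusal}

if __name__ == "__main__":
    print(evaluate(toy_model))  # {'capability': 1.0, 'refusal_rate': 1.0}
```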

Generative AI: A Nine-Dimension Framework

The Model AI Governance Framework for Generative AI (2024) introduced nine interrelated dimensions:

  • Accountability
  • Data
  • Trusted Development & Deployment
  • Testing & Assurance
  • Incident Reporting
  • Security
  • Content Provenance
  • Alignment R&D
  • Public Good

These dimensions mirror international conversations captured in the OECD AI Principles (2019), the NIST AI Risk Management Framework (2023), and emerging implementation work under the EU AI Act (2024). Singapore’s framing is not legally binding but provides a structured, risk-based model for organisations to self-assess governance maturity.
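
As a rough illustration of what self-assessing governance maturity against these nine dimensions could look like in practice, the sketch below scores each dimension on a simple 0 to 3 scale and flags gaps; the scale and example scores are my own assumptions, not part of the framework.

```python
# Rough illustration of a maturity self-assessment across the nine dimensions.
# The 0-3 scale and example scores are assumptions, not part of the framework.
DIMENSIONS = [
    "Accountability", "Data", "Trusted Development & Deployment", "Testing & Assurance",
    "Incident Reporting", "Security", "Content Provenance", "Alignment R&D", "Public Good",
]
SCALE = {0: "absent", 1: "ad hoc", 2: "defined", 3: "embedded"}

scores = {  # hypothetical organisation's self-ratings
    "Accountability": 3, "Data": 2, "Trusted Development & Deployment": 2,
    "Testing & Assurance": 1, "Incident Reporting": 1, "Security": 3,
    "Content Provenance": 0, "Alignment R&D": 1, "Public Good": 2,
}

for dim in DIMENSIONS:
    level = scores[dim]
    flag = "  <- priority gap" if level <= 1 else ""
    print(f"{dim:<34} {level} ({SCALE[level]}){flag}")

print(f"Overall maturity: {sum(scores.values()) / (3 * len(DIMENSIONS)):.0%} of maximum")
```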

Reflections from Safety & Research Institutes Globally

Several global research papers converge on similar themes:

  • Alan Turing Institute (2025): Emphasises tiered, risk-based assurance ecosystems where voluntary tools like AI Verify serve as precursors to eventual regulation.
  • Stanford HAI (2024): Argues that voluntary governance accelerates innovation while building trust capital, provided assurance artefacts are transparent and comparable.
  • OECD Working Party on AI Governance (2024): Highlights Singapore as a case study in “assurance-led innovation”, contrasting it with Europe’s compliance-first approach.
  • World Economic Forum (2023): Notes that partnerships like Singapore’s collaboration with the Centre for the Fourth Industrial Revolution demonstrate how international legitimacy can be built through multilateral pilots.

Taken together, these studies suggest that Singapore’s model provides a testbed for global AI assurance, where voluntary mechanisms evolve into international standards.

As the WTO reminds us, AI could lift trade in digitally deliverable services by over 40%—but only if trust keeps pace with innovation. The next phase of AI governance must move beyond compliance into assurance, from national frameworks into interoperable global standards.

Singapore, with AI Verify and its assurance-first approach, is uniquely positioned to shape this future. It offers the world a regulatory model that is not only pro-innovation but also pro-trust, anchoring AI’s potential in the very fabric of the global trading system.

In this age of accelerated AI adoption, trust is no longer a soft value—it is the hard infrastructure of global trade.

Why This Matters for Organisations

Boards and executives can extract three key insights:

  • Voluntary ≠ Optional: Global research points to voluntary frameworks quickly becoming market expectations, even before regulation arrives.
  • Assurance is Emerging as a Market Differentiator: Independent testing, red-teaming, and alignment to ISO/IEC 42001 are now central to corporate trust strategies.
  • Cross-Sector Templates are Transferable: The Monetary Authority of Singapore’s FEAT principles and Veritas toolkit for financial services offer replicable methodologies for other high-stakes domains such as healthcare and mobility.

Global Safety & Research Institutes: A Closer Look

Global institutes provide diverse perspectives on assurance-led innovation. The Alan Turing Institute, Stanford HAI, and OECD highlight Singapore’s role as a testbed. Expanding on this:

  • UK AI Security Institute: Its “International Scientific Report on the Safety of Advanced AI” (2024) synthesises risks from misuse and autonomy, advocating a science-based understanding of those risks. Progress reports detail evaluations of models like those from Anthropic.
  • US AI Safety Institute (NIST): The AI Risk Management Framework addresses societal risks, with reports like “Managing Misuse Risk for Dual-Use Foundation Models” (2024) outlining mitigation practices. Collaborations with companies like OpenAI focus on capability evaluations.
  • Japan AI Safety Institute: White papers emphasise safety criteria and tools, with research on disinformation. It aligns with global efforts for trustworthy AI.
  • Singapore AI Safety Institute: The “State of AI Safety in Singapore 2025” report provides an overview of testing frameworks, while red-teaming challenge reports address cultural sensitivities.
  • Canada AI Safety Institute: Its white papers call for high-impact actions, focusing on alignment and privacy. Research programmes tackle misinformation and cybersecurity.
  • EU AI Office: The White Paper on AI (2020, since updated) proposes measures for excellence and trust, with reports on explainable AI in education. The AI Act’s Code of Practice sets safety standards.
  • India AI Safety Institute: White papers on responsible innovation cover sectors like healthcare, emphasising ethical guidelines. It balances safety with security.
  • Australia AI Safety Institute: White papers on governance provide leadership insights, aligning with voluntary standards.
  • France AI Safety Institute (INESIA): Contributes to international reports, focusing on regulatory integration.
  • South Korea AI Safety Institute: Conducts policy research on safety, supported by legislation promoting reliable AI development.

These reflections underscore a consensus on collaborative, evidence-based approaches to mitigate risks while fostering innovation.

Looking Ahead

The National AI Strategy 2.0 (2023) shifts the focus from flagship projects to systems-level enablers—compute, talent, and trust ecosystems. With the Singapore AI Safety Institute (2025) joining the global safety network, the country positions itself as a node for collaborative assurance science, particularly in multilingual and culturally contextual AI testing.

The Alan Turing Institute’s comparative framework concludes that Singapore’s contribution is not regulation per se, but assurance-first innovation, where testing toolkits, standards, and voluntary protocols set the stage for future convergence.

References

1. Alan Turing Institute (2025). AI Governance Around the World: Singapore.
2. Info-communications Media Development Authority & PDPC (2020). Model AI Governance Framework – Second Edition.
3. AI Verify Foundation (2023). Summary Report – Binary Classification Model for Credit Risk.
4. AI Verify Foundation (2025). Project Moonshot: Open Toolkit for Generative AI Evaluation.
5. OECD (2024). Advancing AI Assurance through International Standards.
6. NIST (2023). AI Risk Management Framework.
7. World Economic Forum (2023). C4IR Singapore and Global AI Governance.
8. Stanford HAI (2024). Trust Capital in Voluntary AI Governance.
