By Dr. Luke Soon
As we accelerate into 2026, the conversation around AI has shifted from “what can it do?” to “how do we control what it’s doing?” We are standing at a “hinge of history,” where the race toward Artificial General Intelligence (AGI) and the rise of autonomous, agentic systems demand more than just ethical hand-wringing. They demand a robust, technical AI Governance Stack.
In my work architecting AI strategies, I’ve seen that capability without trust doesn’t scale. If you want your AI to move beyond “personal productivity” and into core business processes, you need a layered governance architecture that turns principles into enforceable code.
Here is my breakdown of the full technical stack required for responsible AI transformation.
Layer 1: The Policy & Compliance “Brain”
Governance starts with a centralized “System of Record” for every AI model, agent, and third-party tool in your estate.
- What it does: Catalogues AI assets and maps them to global regulations like the EU AI Act or the NIST AI Risk Management Framework (RMF).
- Technicality: This layer uses automated workflows to conduct Algorithmic Impact Assessments (AIAs), ensuring high-risk use cases are flagged before a single line of code is deployed.
- Expert POV: Nick Bostrom (Oxford) emphasizes that “superintelligence” requires value alignment. At an enterprise level, this means a “System of Record” that encodes human values into model constraints.
- Regulatory Context: The EU AI Act is the gold standard here, categorising AI by risk (Unacceptable, High, Limited, Minimal). It mandates rigorous documentation before a high-risk model even touches production.
- Sovereign Spotlight: Singapore’s AI Verify is a world-first. It provides a toolkit for companies to conduct self-assessments, turning abstract principles into “Model Labelling” akin to nutrition facts.
- Sample Vendors: Credo AI, OneTrust AI Governance, Collibra.
Layer 2: The Model Lifecycle (MLOps) “Body”
Governance must be baked into the development workflow, not “bolted on” at the end.
- What it does: Manages the “Model Ledger”—a complete version history of data lineage, training parameters, and approval gates.
- Technicality: It enforces “Trust by Design” through Model Cards—standardised documentation that explains a model’s intended use, limitations, and performance benchmarks.
- Expert POV: Andrew Ng (DeepLearning.ai) champions “Data-Centric AI.” He argues that governance shouldn’t just watch the code, but rigorously audit the provenance of the data flowing through the lifecycle.
- Regulatory Context: The U.S. Executive Order on AI (2023) focuses heavily here, requiring “red-teaming” results to be shared with the government for powerful foundational models.
- Sovereign Spotlight: The UK’s “Pro-Innovation” Approach avoids heavy-handed legislation in favour of empowering existing regulators to oversee model lifecycles within their specific sectors (e.g., finance, healthcare).
- Sample Vendors: Dataiku, DataRobot, Amazon SageMaker Governance.
Layer 3: Trust & Observability “Nerves”
Once a model is live, it begins to “drift.” Its accuracy decays, and in the case of LLMs, it may begin to “hallucinate” or show bias.
- What it does: Provides real-time monitoring of model health, fairness, and explainability.
- Technicality: It uses Explainable AI (XAI) techniques to “open the black box,” providing a trace of why a specific decision was made. If a model exceeds a “bias threshold,” these tools can trigger an automated “kill switch”.
- Expert POV: Timnit Gebru (DAIR) has long warned about “Stochastic Parrots.” Her view necessitates “Explainable AI” (XAI)—if we cannot explain why a model rejected a loan, the model is a liability, not an asset.
- Regulatory Context: China’s Algorithm Provisions are among the world’s strictest, requiring companies to provide “the basic principles, intentions, and main operating mechanisms” of their recommendation algorithms.
- Sovereign Spotlight: Singapore’s NAIC (National AI Strategy 2.0) focuses on “Systemic Trust,” investing heavily in R&D for testing toolkits that measure bias in real-time.
- Sample Vendors: Fiddler AI, Arthur AI, WhyLabs.
Layer 4: Data Governance “Foundation”
AI is only as good as its data. This layer ensures the fuel for your models is legally sourced and clean.
- What it does: Manages data provenance, quality standards, and privacy controls (like GDPR/CCPA compliance).
- Technicality: It employs Privacy-Enhancing Technologies (PETs), such as differential privacy or data masking, to ensure PII (Personally Identifiable Information) isn’t accidentally ingested into training sets.
- Expert POV: Shoshana Zuboff (Harvard) warns of “Surveillance Capitalism.” To counter this, the governance stack must include “Privacy-Enhancing Technologies” (PETs) to decouple utility from personal identity.
- Regulatory Context: The interplay between GDPR (Europe) and CCPA (California) creates a “Privacy Floor.” AI training data must now prove “Right to be Forgotten” compliance, a massive technical challenge for neural networks.
- Sovereign Spotlight: India’s DPDP Act (2023) introduces the concept of “Consent Managers,” giving citizens granular control over how their data feeds into AI training loops.
- Sample Vendors: BigID, Informatica.
Layer 5: AI Security & Guardrails “Shield”
In the age of Agentic AI, where systems can act independently, security is non-negotiable.
- What it does: Protects against prompt injections, adversarial attacks, and “jailbreaking” of models.
- Technicality: It acts as a firewall between the user and the LLM, scrubbing sensitive outputs and blocking malicious inputs in real-time.
- Expert POV: Ian Goodfellow (The father of GANs) focuses on “Adversarial Robustness.” In an agentic world, your AI is a target for “prompt injection” where hackers trick your bot into emptying a database or leaking trade secrets.
- Regulatory Context: The NIST AI Risk Management Framework (RMF) in the US provides the primary blueprint for securing AI systems against “malicious use” and unintended “emergent behaviours.”
- Sovereign Spotlight: Canada’s AIDA (Artificial Intelligence and Data Act) focuses on “Harm Mitigation,” requiring firms to have active systems to prevent biased or psychologically harmful outputs.
- Sample Vendors: Robust Intelligence, HiddenLayer.
The Unified AI Governance Stack
| Layer | Primary Frameworks & Standards | Core Technical Requirement |
|---|---|---|
| 1. Policy & Registry | EU AI Act, ISO/IEC 42001 | Centralised AI Inventory with Risk Tiering |
| 2. Risk Management | NIST AI RMF, MAS FEAT | Impact Assessments (AIA) & Ethical Audits |
| 3. Security & Safety | OWASP Top 10 (LLM), NIST CSF | Red-Teaming & Prompt Injection Defense |
| 4. Technical Testing | SG AI Verify, MAS Veritas | Automated Fairness & Bias Testing |
| 5. Data Foundations | ISO/IEC 38507, GDPR/PDPA | Data Lineage & Privacy-Enhancing Tech (PETs) |
Layer-by-Layer Alignment Guide
1. Policy & Management (ISO/IEC 42001 & EU AI Act)
- The Standard: ISO/IEC 42001 is the world’s first certifiable AI management system. It provides the “Plan-Do-Check-Act” (PDCA) cycle for AI governance.
- Alignment: Use this as your overarching operational shell to satisfy the EU AI Act’s requirement for a Quality Management System (QMS) for high-risk models.
- Sample Vendors: Collibra, OneTrust.
2. Sectoral Governance (MAS FEAT & Veritas)
- The Standard: For Financial Institutions (FIs), the MAS FEAT Principles (Fairness, Ethics, Accountability, Transparency) are non-negotiable.
- The Toolkit: Veritas (now version 2.0) provides open-source methodologies to test FEAT principles in use cases like credit scoring and fraud detection.
- Alignment: Integrate Veritas testing results directly into your AI Verify reports to provide a unified compliance dossier for Singapore regulators.
3. Risk Frameworks (NIST AI RMF 1.0)
- The Standard: NIST AI RMF is built on four functions: Govern, Map, Measure, and Manage.
- Alignment: Use the “Map” function to contextualise risks and “Measure” to track trustworthy characteristics like reliability and explainability.
- Crosswalk: NIST and Singapore’s IMDA have published a “Crosswalk”, meaning adopting AI Verify helps you meet NIST criteria and vice-versa.
4. Technical Security (OWASP Top 10 for LLMs)
- The Standard: OWASP identifies the most critical security risks for GenAI, led by Prompt Injection (LLM01) and Insecure Output Handling (LLM02).
- Alignment: Deploy real-time guardrails to prevent “Excessive Agency” (LLM08) where an AI agent might take unauthorised actions.
- Sample Vendors: Robust Intelligence, HiddenLayer.
Phase 1: Inventory & Risk Classification (Months 1-3)
- Goal: Determine which systems are “High-Risk” under the EU AI Act and map them to AI Verify’s 11 principles.
- Actions:
- Catalogue Assets: Use OneTrust or Credo AI to build a global AI inventory.
- EU Risk Tiering: Flag systems in Annex III (e.g., HR, credit scoring, critical infrastructure) as “High-Risk”.
- Gap Analysis: Use Singapore’s ISAGO (Implementation and Self-Assessment Guide) to assess current process maturity.
Phase 2: Technical Baseline & Data Governance (Months 4-6)
- Goal: Meet the strict data quality requirements of EU Article 10 and AI Verify’s Data Governance principle.
- Actions:
- Data Audit: Ensure training datasets are relevant, representative, and “free of errors where feasible” using tools like Informatica or BigID.
- PII Masking: Implement Privacy-Enhancing Technologies (PETs) to comply with both GDPR and Singapore’s PDPA.
Phase 3: Technical Testing with AI Verify (Months 7-9)
- Goal: Generate objective proof of model performance (Fairness, Explainability, Robustness).
- Actions:
- Deploy AI Verify Toolkit: Run technical tests (classification/regression) within your enterprise environment to generate standardized reports.
- Project Moonshot (for LLMs): If using Generative AI, use the Project Moonshot (an extension of AI Verify) for safety and hallucination testing.
- Explainability: Use Fiddler or Arthur to document the “why” behind decisions for EU “Transparency” compliance.
Phase 4: Documentation & Certification (Months 10-12)
- Goal: Finalize the “Technical Documentation” (EU Article 11) and prepare for possible external audits.
- Actions:
- Generate Compliance Dossier: Collate AI Verify reports into the formal EU “Technical Documentation” dossier.
- ISO 42001 Alignment: Map your work to ISO/IEC 42001. Singapore’s IMDA is actively mapping AI Verify to this global standard, meaning your work here supports global certification.
- CE Marking: For EU High-Risk systems, complete the conformity assessment and apply the CE mark.
The “Agentic Layer”: Singapore’s MGF for AI Agents
We are moving from Deterministic systems (if X, then Y) to Delegated systems (Go and book my travel). This introduces a “Control Gap” that Singapore is the first to bridge.
1. The Conceptual Shift: From Models to Agents
The MGF acknowledges that an agent isn’t just a model; it is a model equipped with tools (APIs, web browsers, databases) and planning capabilities.
- The Risk: “Excessive Agency”—where an agent, in an attempt to be helpful, deletes a server or executes an unauthorised trade.
- The Solution: Singapore’s framework mandates Human-in-the-loop (HITL) or Human-on-the-loop (HOTL) checkpoints based on the criticality of the action.
2. Technical Requirements Tagged to the Stack
Singapore’s MGF introduces nine dimensions of governance. For an agentic stack, three are transformative:
- Accountability & Governance (Layer 1): You must define “Legal Agency.” Who is liable when the agent hallucinates a contract? The MGF pushes for clear “Operator vs. Developer” responsibility.
- Incident Reporting (Layer 3): Just like a data breach, “Agentic Failure” (e.g., an agent leaking trade secrets via a tool) must be detectable and reportable.
- Content Provenance (Layer 5): As agents create content and code, the MGF emphasizes digital watermarking (C2PA standards) to ensure we can distinguish agent-generated actions from human ones.
3. “Project Moonshot”: The Agentic Testing Bed
To support the MGF, Singapore launched Project Moonshot. It is an open-source testing tool specifically for LLM-based agents.
- What it tests: Not just “is the model biased?” but “is the agent safe?”
- Security Alignment: It aligns directly with the OWASP Top 10 for LLMs, testing for prompt injection that could hijack the agent’s “tool-calling” abilities.
The Updated “Agent-Ready” Governance Stack
| Layer | Standard / Framework | Critical Capability |
|---|---|---|
| Orchestration | SG MGF for GenAI | Sandboxing: Restricting agent actions to safe environments. |
| Authorisation | MAS FEAT / Veritas | Permissioning: Ensuring an agent cannot exceed the human user’s authority level. |
| Verification | SG AI Verify / Moonshot | Red-Teaming: Simulating “Rebellious Agent” scenarios. |
| Security | OWASP / NIST AI RMF | Output Filtering: Preventing agents from executing malicious code they generated themselves. |
The “Sovereign Advantage”
Singapore’s MGF isn’t just a regulatory hurdle; it’s a competitive moat. By being the first to provide a framework for Agentic AI, Singapore is inviting the world’s most advanced AI labs to build “Safe Agents” in a regulated sandbox. For global firms, aligning with the MGF means you aren’t just compliant—you are agent-ready.
Final Reflections: From Turbulence to Abundance
Building this stack isn’t just about avoiding a fine from the European Commission; it’s about building Experience Equity. When customers and employees know that an AI system is fair, transparent, and human-aligned, they use it more—and that is where the real value is unlocked.
The fork in the road is here: will your AI lead to conflict and harm, or abundance and augmentation?. The answer lies in the stack you build today.

Leave a comment