The Rise of Sovereign AI: How Nations Are Taking Back Control of Intelligence

When I saw the Wall Street Journal headline — “It’s Not Just Rich Countries. Tech’s Trillion-Dollar Bet on AI Is Everywhere” — it felt like a confirmation of something I’ve been sensing for years:

We are moving from AI as a Silicon Valley export to AI as a sovereign capability.

And the global South is no longer content to sit in the passenger seat.

This blog is my attempt to unpack that shift — through the lenses of AI decolonisation, sovereign AI, and my own “Commonwealth vs Fortress” futures. I’ll draw on the latest statistics, white papers and research across economics, governance, and AI safety, and then offer a practical playbook for governments and institutions that don’t want to be digitally colonised again.

1. The trillion-dollar AI wave – and who’s actually surfing it

Let’s start with the scale of what’s happening.

UNCTAD’s latest Technology and Innovation Report estimates that the global AI market could multiply 25-fold within a decade, reaching nearly $5 trillion.
The 2025 Stanford AI Index reports that U.S. private AI investment hit $109.1 billion in 2024, around 12 times China’s $9.3 billion and 24 times the UK’s $4.5 billion. Generative AI alone attracted $33.9 billion globally.
Visual Capitalist’s compilation of 2013–2024 private AI investment shows that the U.S., China, and a small cluster of advanced economies account for the overwhelming bulk of cumulative AI capital.

At the same time, AI adoption is accelerating inside firms. An OECD / Eurostat study finds that around 13.9% of enterprises across the OECD area were using AI in 2024, with adoption doubling year-on-year in some countries — but growth is driven by leaders sprinting ahead rather than laggards catching up. 

On the macro side, the IMF warns that AI will affect almost 40% of jobs globally, replacing some and augmenting others, with a serious risk of deepening inequality without deliberate policy.  Their working paper The Global Impact of AI: Mind the Gap models uneven productivity gains that disproportionately favour AI-intensive sectors and countries that already have capital and institutions in place. 

In other words: AI is not a side-show. It’s a new general-purpose infrastructure with the capacity to rewire productivity, labour markets, and power structures.

But the WSJ headline highlights something else: the geography of AI is quietly changing.

2. From “AI for the rich world” to “AI everywhere”

The WSJ piece describes a new wave of AI infrastructure investments stretching far beyond the usual suspects: India, Southeast Asia, the Gulf, Africa, Latin America. 

You can see this in the news flow:

Google has announced a $15 billion AI hub in Visakhapatnam, India, with gigawatt-scale data-centre capacity, a subsea cable gateway, and extensive fibre and energy infrastructure, part of a broader $24 billion AI infrastructure programme.
Microsoft has committed $15.2 billion to the UAE, with a plan to upskill over 300,000 people in AI and partner with local institutions and government.
Across Southeast Asia, private funding for the digital economy hit $7.7 billion in the 12 months to June 2025; AI startups captured around a third of this, and data-centre capacity is projected to nearly triple, with Malaysia alone accounting for more than half of planned capacity (2,415 MW).

UNCTAD notes that the global AI market’s expansion could be an engine of catch-up growth if developing nations can influence investment, governance, and rule-setting — otherwise AI risks hard-wiring existing inequalities into the next industrial revolution. 

CSIS puts it bluntly: you can’t benefit from AI if you lack compute, connectivity, and data infrastructure, and today compute remains heavily concentrated in advanced economies. 

So the trillion-dollar bet is “everywhere” in the sense that capital is searching for new regions to scale AI, but control, standards and upside are still heavily skewed.

That’s where the language of “AI colonialism” and “AI decolonisation” enters the debate.

3. From data colonialism to AI decolonisation

Scholars of “decolonial AI” argue that the current AI stack replicates familiar patterns: data, talent and capital flowing from the global South to a handful of tech hubs, while decisions about design, deployment and governance are made elsewhere. 

A few key critiques:

Political – AI development often reinforces digital capitalism’s hegemony: infrastructure, platforms and cloud are controlled by a small set of firms and jurisdictions.
Ecological – Energy-hungry training runs and data centres consume resources and water, and are frequently sited in countries with weaker bargaining power or environmental protections.
Epistemic – Western datasets, languages and values are often taken as universal, marginalising local knowledge systems and cultural context in the global South.

The worry is that AI becomes a new layer of colonial infrastructure: your data, language and labour fuel foreign models; your citizens get downstream applications and surveillance risks; and your policy space is constrained by standards drafted in Washington, Brussels or Beijing.

That’s why concepts like “AI decolonisation” and “sovereign AI” are gaining traction.

4. Sovereign AI: owning your intelligence stack (without going it alone)

“Sovereign AI” doesn’t mean everyone must build a national foundation model from scratch and cut themselves off from the world. The more nuanced view — articulated by bodies like the World Economic Forum, Accenture and Cloudflare — is that sovereignty is about choice, control and context, not isolationism. 

Common threads across these analyses:

Local data – ensuring that sensitive public-sector and critical-infrastructure datasets are governed within national or regional legal frameworks.
Local infrastructure – building or co-owning data centres, AI supercomputers and network backbones so that inference and (in some cases) training can run domestically.
Local talent and institutions – investing in universities, research labs, regulators and standards bodies able to shape AI, not just consume it.

The Tech for Good Institute, analysing Malaysia’s emerging strategy, frames sovereign AI around three pillars: local data, local infrastructure and local talent, emphasising that even in a complex geopolitical environment, middle-income countries can lead in parts of this stack. 

Accenture’s Sovereign AI: Own Your AI Future reaches a similar conclusion: sovereignty is about a fit-for-purpose stack blending global and local components, governed on your own terms. 

In my own language: we’re entering an era where intelligence itself becomes a strategic asset, like energy or semiconductors. Nations that simply rent this asset from others will find their policy autonomy shrinking over time.

5. Compute, data and talent: the new strategic triad

A growing cluster of research treats compute as a lever of AI governance.

The paper Computing Power and the Governance of AI argues that governments are beginning to use control over compute — through export controls, subsidies, and domestic supercomputing investments — as a way to shape how advanced AI develops and who can access it. 

At the same time, the IMF and others warn that compute capacity is highly concentrated, and that this concentration maps onto wider economic power: the countries with the most compute are also those best placed to capture AI-driven productivity gains. 

Overlay this with:

Data – who has high-quality, labelled, domain-specific datasets for health, finance, education, agriculture?
Talent – where are the researchers, safety engineers, regulators and product builders?

And you get what I call the Intelligence Power Index — a composite of compute, data and talent.

Think of it this way:

Energy shaped the 20th century. Intelligence will shape the 21st. The countries that only buy intelligence as a service will always be a step behind those who can generate, store and govern it themselves.
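
To make the index less hand-wavy, here is a minimal sketch of how it could be computed. The three pillars come from the discussion above, but the min-max-normalised scores, equal weights and geometric mean are illustrative assumptions on my part, not a settled methodology:

```python
from dataclasses import dataclass

@dataclass
class PillarScores:
    """Pillar scores normalised to 0-1 against a chosen benchmark.

    How each pillar is measured (and what the benchmark is) remains an
    open design question; the values below are purely illustrative."""
    compute: float  # e.g. domestic accelerator capacity vs. a regional leader
    data: float     # e.g. coverage of governed, high-quality domain datasets
    talent: float   # e.g. AI researchers and engineers per capita

def intelligence_power_index(p: PillarScores,
                             weights: tuple[float, float, float] = (1/3, 1/3, 1/3)) -> float:
    """Weighted geometric mean: a near-zero pillar drags the whole index
    down, so a country cannot buy a high score with compute alone."""
    w_c, w_d, w_t = weights
    return (p.compute ** w_c) * (p.data ** w_d) * (p.talent ** w_t)

# Hypothetical country: strong data estate, thin domestic compute.
print(round(intelligence_power_index(PillarScores(compute=0.2, data=0.7, talent=0.5)), 3))
# -> 0.412
```

The geometric mean is the deliberate design choice here: it rewards balanced investment across all three pillars, which is exactly the point of treating compute, data and talent as a triad rather than a shopping list.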

6. What the leading AI researchers are really saying

One way to test whether this “AI decolonisation” moment is real is to look at what the pioneers themselves are worried about.

Fei-Fei Li: human-centred and inclusive AI

Fei-Fei Li, founding co-director of the Stanford Institute for Human-Centered AI (HAI) and co-founder of AI4ALL, has repeatedly argued that AI’s benefits must be shared beyond “a fortunate few”, and that the field needs a far more diverse, globally representative talent base.

Her work on AI4ALL is explicitly about breaking barriers for under-represented communities in AI education — a theme that maps naturally onto the global South’s push not to be left out of the AI renaissance. 

Yoshua Bengio: safety, equity and global participation

Yoshua Bengio, deep learning pioneer and founder of Mila, has become one of the most vocal advocates for AI safety and inclusive governance, stressing the need to include civil society and global South voices in frontier AI rule-making. 

In essays and talks, he argues that powerful AI systems require new forms of international co-operation, with governance architectures that look more like networked, distributed systems than a single global regulator — an idea echoed in Brookings’ work on network architectures for global AI policy. 

Geoffrey Hinton: existential and socioeconomic risk

Geoffrey Hinton, another deep-learning Turing Award winner, has shifted from quiet researcher to outspoken critic of the current AI arms race. He warns of:

A 10–20% chance that advanced AI could lead to human extinction within the next few decades,
The risk that AI will bring massive unemployment and widen inequality, benefiting a small elite while making most people poorer.

Hinton’s alarm is not just about sci-fi takeover scenarios; it’s about runaway concentration of power and wealth, which is precisely what many worry will happen if the global South remains a passive recipient of AI built elsewhere.

UNESCO, OECD and multilateral voices

UNESCO’s recent work on AI and education calls for human-rights-based, inclusive governance to ensure that AI strengthens learning opportunities rather than entrenching exclusion. 

The OECD’s updated AI Principles and its 2024 futures report on global AI governance also warn of emerging divides in AI capability and adoption, urging proactive strategies to ensure that transformation benefits a broad set of societies, not just AI “superpowers”. 

Think tanks like CIGI and the Center for Global Development go further, documenting how current patterns of AI innovation and capital allocation risk widening global inequality without deliberate countermeasures. 

Taken together, these voices — from Hinton, Bengio, Fei-Fei Li, UNESCO, OECD, IMF, UNCTAD, CSIS and others — are converging on a common message:

AI will transform the global economy. Whether it narrows or explodes inequality is still a choice.

7. Commonwealth vs Fortress: where “AI decolonisation” fits in my futures map

In my Genesis and Synthesis work, I describe two broad arcs for the AI century:

The Commonwealth Model – a world of Abundance & Renewal, where intelligence is widely distributed, governance is participatory, and AI is harnessed to expand human flourishing.
The Fortress Model – a world of Fragmentation & Control, where a small set of states and firms hoard compute, data and talent, using AI to entrench power and extract rents.

The current push for AI decolonisation and sovereign AI in the global South is, to me, one of the most important signals that the Commonwealth path is still open.

When:

India negotiates investments that include local AI hubs and subsea cables, not just cloud consumption;
The UAE insists on pairing AI investment with massive local upskilling programmes;
Malaysia experiments with a sovereign AI strategy that mixes local and foreign chips, cloud and open-source models;

…we’re seeing early steps towards in-sourcing intelligence rather than simply renting it.

At the same time, the Fortress dynamic is clearly visible:

Export controls on frontier chips,
Tight clustering of AI talent in a handful of cities,
Venture funding overwhelmingly skewed toward U.S. AI and data startups, capturing over 75% of reported generative-AI VC in advanced economies.

So we are not drifting naturally toward the Commonwealth future. It has to be built.

8. A seven-point playbook for governments in the global South

If I were advising a finance minister or digital-economy minister in a developing country today, my message would be simple:

Don’t just “adopt AI”. Architect your national intelligence stack.

Here’s a seven-point playbook drawing on the research above and my own work with clients and governments.

1. Build a national “Intelligence Power” baseline

Map your current compute, data, and talent assets. Use frameworks like the IMF’s AI impact modelling and OECD adoption statistics as reference points.
Identify critical gaps: GPUs? Power and cooling? Local NLP talent for your languages? Regulatory capacity?

This becomes your Intelligence Power Index — a living dashboard, not a one-off report.
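
As a toy illustration of what “a living dashboard, not a one-off report” might mean in practice, here is a sketch that flags the worst shortfalls first. Every asset name, figure and target threshold below is a hypothetical placeholder, not real data:

```python
# Baseline of current assets per pillar vs. illustrative national targets.
baseline = {
    "compute": {"gpu_accelerators": 800, "dc_power_mw": 40},
    "data":    {"governed_health_datasets": 3, "local_language_corpora": 1},
    "talent":  {"ml_graduates_per_year": 450, "ai_safety_engineers": 5},
}
targets = {
    "compute": {"gpu_accelerators": 5000, "dc_power_mw": 150},
    "data":    {"governed_health_datasets": 10, "local_language_corpora": 8},
    "talent":  {"ml_graduates_per_year": 2000, "ai_safety_engineers": 50},
}

def critical_gaps(baseline: dict, targets: dict) -> list[str]:
    """List every metric below its target, worst shortfall first."""
    gaps = [
        (current / targets[pillar][metric], f"{pillar}:{metric}")
        for pillar, metrics in baseline.items()
        for metric, current in metrics.items()
        if current < targets[pillar][metric]
    ]
    return [name for _, name in sorted(gaps)]

print(critical_gaps(baseline, targets))
# -> ['talent:ai_safety_engineers', 'data:local_language_corpora', ...]
```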

2. Treat data estates as national strategic infrastructure

Develop sovereign data estates for health, education, finance, agriculture, and climate, governed by robust privacy and ethical frameworks (UNESCO’s guidance on AI and education is a good model).
Mandate that public-sector data used to train models be subject to local oversight and benefit-sharing (e.g. for healthcare or agricultural AI), as sketched below.
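
To show what “local oversight and benefit-sharing” could look like as machine-readable governance metadata rather than a PDF on a shelf, here is one possible manifest shape. The schema and all example values are hypothetical; only the principles come from the playbook:

```python
from dataclasses import dataclass, field

@dataclass
class SovereignDatasetManifest:
    """Governance metadata attached to a dataset in a national data estate."""
    name: str
    domain: str               # health, education, finance, agriculture, climate
    legal_basis: str          # statute or regulation governing use
    oversight_body: str       # local authority that approves model-training access
    benefit_sharing: str      # how value flows back to the data's community
    allowed_uses: list[str] = field(default_factory=list)

manifest = SovereignDatasetManifest(
    name="national-crop-yields-2020-2025",            # hypothetical dataset
    domain="agriculture",
    legal_basis="Data Protection Act (placeholder)",
    oversight_body="National AI and Data Council (placeholder)",
    benefit_sharing="royalty on commercial models trained on this estate",
    allowed_uses=["yield forecasting", "extension-service assistants"],
)
```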

3. Join or create compute coalitions

Very few countries can justify a frontier-scale supercomputer alone, but regional AI compute alliances are viable, akin to cross-border energy pools.
Use the insights from Computing Power and the Governance of AI to structure incentives, safeguards and export-control compliance; one allocation mechanic is sketched below.
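
A concrete design question in any compute coalition is how to split shared capacity. Purely as a sketch, assume a hybrid rule that blends financial contribution with population share, plus a floor so small members are never priced out. All member names, shares and numbers are invented:

```python
# Members: (contribution_share, population_millions) -- hypothetical values.
members = {
    "Country A": (0.50, 30),
    "Country B": (0.35, 110),
    "Country C": (0.15, 12),
}
TOTAL_GPU_HOURS = 1_000_000
FLOOR = 0.05  # no member receives less than 5% of the pool

def allocate(members: dict, total: int, floor: float) -> dict:
    """Blend contribution (60%) and population share (40%), apply the
    floor, renormalise, and convert shares into GPU-hours."""
    pop_total = sum(pop for _, pop in members.values())
    raw = {
        name: 0.6 * contrib + 0.4 * (pop / pop_total)
        for name, (contrib, pop) in members.items()
    }
    clipped = {name: max(share, floor) for name, share in raw.items()}
    norm = sum(clipped.values())
    return {name: round(total * share / norm) for name, share in clipped.items()}

print(allocate(members, TOTAL_GPU_HOURS, FLOOR))
# -> {'Country A': 378947, 'Country B': 499474, 'Country C': 121579}
```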

4. Anchor everything in human-centric, rights-based governance

Align with UNESCO’s Recommendation on the Ethics of AI and the OECD AI Principles as a starting point, then localise.
Build national AI regulators and advisory councils that include civil society, academia, industry and global South voices, echoing Bengio’s emphasis on inclusive governance.

5. Invest aggressively in local talent and research ecosystems

Create AI Centres of Excellence at universities, linked to global labs through fellowships and joint chairs (think Mila, HAI, DeepMind Research).
Partner with initiatives like AI4ALL to widen participation and avoid reproducing domestic inequalities in the AI workforce.

6. Use digital public infrastructure (DPI) as an AI launchpad

Countries with strong DPI stacks (identity, payments, data-sharing rails) are better placed to deploy AI at scale in agriculture, health, MSME finance, and education.
Embed agentic AI capabilities into these rails carefully, not as shiny gadgets but as workforce multipliers and service-quality enhancers.

7. Hard-wire “Agentic Safety” and HX (Human Experience) into every national AI programme

Don’t wait for frontier systems to misbehave in your jurisdiction. Treat agentic AI (systems that can plan, act, and adapt) as a new class of risk, beyond traditional ML.
Borrow from emerging global safety research, including alignment, evaluation, and red-teaming frameworks, and adapt them into live operational controls (a sketch follows below).
Measure not only CX (citizen experience) but HX (Human Experience): how policies and services impact both citizens and public servants over time.
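
To ground “live operational controls” in something tangible, here is a minimal sketch of an audit-trail wrapper for agent actions. Real deployments would add signing, lineage links and policy checks; the tool name, file path and example action are invented for illustration:

```python
import json
import time
import uuid
from typing import Any, Callable

def audited(action: Callable[..., Any], log_path: str = "agent_audit.jsonl"):
    """Wrap an agent tool so every invocation leaves a trace: which action
    ran, with what arguments, what it returned, and when."""
    def wrapper(*args, **kwargs):
        record = {
            "id": str(uuid.uuid4()),
            "action": action.__name__,
            "args": repr(args),
            "kwargs": repr(kwargs),
            "ts": time.time(),
        }
        try:
            result = action(*args, **kwargs)
            record["status"] = "ok"
            record["result"] = repr(result)[:500]  # truncate large outputs
            return result
        except Exception as exc:
            record["status"] = "error"
            record["error"] = repr(exc)
            raise
        finally:
            with open(log_path, "a") as f:  # append-only JSONL trail
                f.write(json.dumps(record) + "\n")
    return wrapper

@audited
def transfer_subsidy(farmer_id: str, amount: float) -> str:
    """Hypothetical agent action against a DPI payments rail."""
    return f"paid {amount} to {farmer_id}"

transfer_subsidy("F-1029", 120.0)  # appends one audit record per call
```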

9. What multinationals and advisors must do differently

As someone who spends a lot of time between global boardrooms and regional policy conversations, I’m acutely aware that large firms (including my own in consulting) can either reinforce AI colonial patterns or help unwind them.

A few principles I hold myself to:

No “one-stack-to-rule-them-all” evangelism – Help clients design modular, sovereign-friendly stacks that can mix global hyperscalers, regional clouds, and local open-source components, rather than locking them into a single proprietary platform.
Shift from “AI pilots” to “AI nation-building” – Move beyond PoCs to programmes that build domestic capability: local model fine-tuning, local safety labs, university partnerships, skills academies.
Make Agentic Safety non-negotiable – Treat safety not as a compliance afterthought but as part of the core architecture: audit trails, live lineage, MoE routing audits, and transparent evaluation pipelines for agentic workflows.
Insist on shared upside and local value capture – Structure deals such that local partners gain equity in data centres, IP rights in co-developed models, or royalty flows from global deployments on local data.

If we don’t do this, we should be honest: we’re not “helping countries transform with AI”; we’re extending the same old extraction logic into a new substrate.

10. Choosing our AI future: a personal reflection

When Geoffrey Hinton says he’s “glad to be 77” because he may not live to see the worst-case AI scenarios play out, he is voicing a deep unease that many of us share. 

But the future is not only about existential risk. For billions of people in the global South, the more immediate question is:

Will AI be something done to us, or something we co-create?

UNCTAD, the IMF, OECD, UNESCO, CSIS and countless researchers are now converging on a simple truth: without deliberate strategy, AI will widen the very divides we claim we want to close. 

The WSJ headline captures the paradox beautifully. The money is everywhere. The data centres are everywhere. The marketing decks are definitely everywhere.

But true intelligence sovereignty — the ability for a society to shape how AI is built, governed and directed toward its own purposes — is still rare.

My hope is that we look back at this period and say:

This was the decade when the global South refused to be a passive data mine, and instead became a co-author of the AI century.

That, to me, is what AI decolonisation really means.

Not rejection. Not isolation.

But rewriting the script — from “AI made elsewhere, applied here” to “AI co-designed, co-governed and co-owned”.

If we can achieve that, then the Commonwealth future — of abundance, shared prosperity and renewed human experience — remains not just a scenario in my book, but a live possibility.
