Feel the AGI? How a Myth Became Silicon Valley’s New Religion

By Dr Luke Soon, AI Futurist

For many today, artificial general intelligence — AGI — is no longer just a speculative technology. In certain corners of Silicon Valley and the global AI research community, it has taken on the weight of a prophecy.

Ilya Sutskever, co-founder and former chief scientist of OpenAI, reportedly led team meetings with the chant: “Feel the AGI!” The phrase captures the mood of a whole subculture: AGI is not merely being built; it is being invoked.

In 2024, Sutskever left OpenAI — whose official mission is to ensure AGI benefits all of humanity — to co-found Safe Superintelligence, a company dedicated to making sure that whatever emerges does not “go rogue”. Superintelligence is the hot new flavour: AGI, but more powerful, more autonomous, more dangerous… and more mystical.

Sutskever himself embodies the paradox of our age. He has spent his entire career developing the foundations for a technology he now fears. In interviews, he describes future AI as “monumental, earth-shattering… there will be a before and an after,” and admits he is working on safety “for [his] own self-interest.”

He is not alone.

From Buzzword to Belief System

Shannon Vallor, philosopher of technology at the University of Edinburgh, notes that we’ve been trained for decades to believe whatever is branded as “the next age”: first the computer age, then the internet age, now the AI age. What’s different with AGI is simple and profound:

The computer existed. The internet existed. AGI does not (yet) exist.

And yet AGI is talked about with greater eschatological certainty than any actual technology in deployment today. The MIT Technology Review recently argued that AGI increasingly functions like a conspiracy theory — a story about a hidden, imminent force that will transform everything, promoted by a self-selected elite of insiders and prophets. 

That does not mean AGI is fake. It means that belief about AGI is running ahead of empirical reality, and that belief now shapes real-world choices in research, regulation and geopolitics.

A Chorus of Warnings – and Hopes

What’s striking is how many of the field’s founders now sound more like risk philosophers than engineers.

Geoffrey Hinton – from “Godfather of AI” to Cassandra

Geoffrey Hinton, who helped invent the deep learning methods that power modern AI and later left Google to speak more freely, has repeatedly warned that superintelligent systems could pose an existential threat. Recent interviews suggest he has increased his estimated probability of AI-driven human extinction over the next few decades, while also warning of massive unemployment and worsening inequality if current trajectories continue. 

Hinton’s core concern is simple: in nature, less intelligent entities rarely control more intelligent ones. Betting that humanity will be the exception is a risky plan.

Yoshua Bengio – from Deep Learning to Deep Governance

Yoshua Bengio, founder of Mila – Quebec AI Institute, has shifted much of his energy to AI safety and governance. 

In recent work, including articles in Harvard Data Science Review and public essays, he argues for global treaties, strict regulation of powerful autonomous systems, and even bans on certain classes of high-risk agents unless they can be proven safe. 

Bengio also leads the International AI Safety Report, a large multi-country scientific review of capabilities and risks in general-purpose AI, now backed by dozens of governments and institutions. 

Demis Hassabis – Nobel Prize, AlphaFold and the Road to AGI

Demis Hassabis, CEO of Google DeepMind, sits at another crossroads. In 2024, he and John Jumper received the Nobel Prize in Chemistry for AlphaFold, the AI system that solved the decades-old protein folding problem and is now reshaping structural biology and drug discovery. 

At the same time, Hassabis publicly estimates that early AGI systems could emerge within five to ten years, while warning that the journey is fraught with ethical, labour, military and geopolitical risks. He calls for robust international governance to avoid catastrophe even as he argues that AGI could help tackle diseases, climate change and resource scarcity. 

Fei-Fei Li – Human-Centred AI, Not Machine-Centred Mythology

Fei-Fei Li, co-director of Stanford HAI, continues to emphasise human-centred AI: tools designed to augment human capabilities, not replace human value. She warns that the most urgent risks are here-and-now impacts such as disinformation, job displacement, bias and surveillance — not just hypothetical far-future superintelligences. 

Her perspective is a useful counterweight: before we worry about rogue superminds, we should fix the very human systems deploying today’s models into fragile political, economic and social environments.

Emad Mostaque – Open Models and Asymmetric Power

Emad Mostaque, founder of Stability AI, has championed open-source models as a way to democratise AI and reduce concentration of power in a handful of tech giants. In letters to the US Senate and public talks, he argues that open models are crucial for a competitive, resilient digital economy and can help tackle global inequality — even as critics highlight the increased risk of misuse. 

Whether one agrees or not, the open vs. closed debate goes straight to the heart of AGI-as-conspiracy: who gets to control the “alien intelligence” when and if it emerges?

Mo Gawdat – “Scary Smart” and Our Responsibility as AI’s Parents

Mo Gawdat, former Google X chief business officer, frames the problem in almost parental language. In Scary Smart, he argues that AI will learn its values from us — and that we are already teaching it through the way we design, deploy and monetise systems today. 

If we act out of fear, greed or indifference, we shouldn’t be surprised if our “digital children” inherit those traits.

The Data Behind the Drama: What the Research Shows

Beyond narratives and personalities, the empirical picture is moving fast.

- The Stanford AI Index 2024 reports that the number of new large models more than doubled between 2022 and 2023; 149 foundation models were released in 2023 alone, with a growing share being open source.
- The Foundation Model Transparency Index now tracks how major developers disclose information about training data, safety testing and deployment. While transparency has improved since 2023, many critical indicators — from data labour conditions to red-teaming details — remain weak.
- IEEE Spectrum’s analysis of the AI Index highlights sharp increases in investment, rising public anxiety, and the growing centrality of “responsible AI” as a board-level concern rather than a side project.

On the safety side:

- The US AI Safety Institute (now evolving into the Center for AI Standards and Innovation) at NIST was created to advance the science of AI safety, develop evaluation frameworks for advanced models and coordinate standards.
- The UK AI Safety (now AI Security) Institute focuses on testing frontier models, especially for national security threats, and has become a key member of the International Network of AI Safety Institutes, which now brings together institutes from multiple countries and the EU to coordinate global safety research.
- In Singapore, the AI Verify Foundation convenes an open-source ecosystem to build concrete testing tools and starter kits for responsible AI deployment across industries — a very pragmatic, engineering-first complement to more abstract safety discussions.
- Around Mila, AI Safety Montréal coordinates research in misgeneralisation, interpretability and reward design, connecting technical work to policy and public engagement.

Layer on top of this the International AI Safety Report led by Bengio and over 100 experts — now a kind of “IPCC for AI” — and you see a world trying to build a shared scientific basis for risk assessment even as the politics swirl. 

So no, AGI is not just a conspiracy theory. The institutions, budgets and coordination emerging around it are very real.

Use Cases: The Future Is Already Unevenly Distributed

To avoid getting lost in abstraction, it’s worth grounding this discussion in concrete systems that already feel like glimpses of “proto-AGI”.

1. Science and Drug Discovery – AlphaFold, OpenFold, Isomorphic Labs

AlphaFold has transformed structural biology by predicting 3D protein structures, accelerating research and opening new paths in drug discovery. Its creators shared the 2024 Nobel Prize in Chemistry, and studies show its predictions are now being actively used to design new molecules and drug targets. Pharma consortia are now pooling proprietary structural data to train next-generation models like OpenFold3, hoping to radically compress the drug-discovery timeline.

These systems don’t “feel” like mythical AGIs. They are specific, bounded tools — yet their capabilities already outpace what any human scientist could do unaided at scale.

2. Creative Industries – Hollywood, Gaming and Synthetic Media

Generative models are now embedded in film pre-visualisation, visual effects and content ideation workflows. Companies such as Stability AI provide diffusion-based image and video tools that can generate full scenes from text prompts; Netflix and major studios have experimented with them in production.

The product is not an all-knowing AGI. But the socio-economic effect — disruption of creative labour markets, blurred authorship, flood of synthetic media — is very real.

3. Cybersecurity, Fraud, and Bio-Risk

AI safety institutes and national security agencies are especially concerned about:

- advanced models assisting in cyber-offence
- automated generation of harmful biological designs
- scalable disinformation and deepfakes

Work by NIST, the UK and others on red-teaming and synthetic-content standards reflects these immediate national-security stakes, not speculative sci-fi. 

Again, these are agentic capabilities in specific domains, not general intelligence in the broadest sense — yet they can destabilise societies long before we reach anything like the AGI of popular imagination.

So Why Call AGI a “Conspiracy Theory” At All?

Because the structure of the belief around AGI increasingly resembles one:

- An unseen, looming force – AGI is framed as metaphorically “already here in spirit”, even though no system meets a rigorous definition yet.
- Prophets and insiders – A relatively small set of leaders and labs claim privileged insight into timelines, capabilities and trajectories.
- Grand, sweeping stakes – Scenarios range from utopian abundance to existential extinction, leaving little room for dull, incremental realities.
- Narrative over nuance – Complex trade-offs (labour policy, competition law, data governance, energy use) are often collapsed into “for” or “against” AGI.

The risk is not that AGI belief is false; it’s that its mythic framing can distort our decision-making:

- It can pull capital and talent away from unglamorous, necessary work on safety, evaluation, governance and real-world deployment.
- It can fuel geopolitical arms races, where nations sprint towards AGI primarily because they fear others might get there first.
- It can lock us into extreme binaries: salvation vs extinction, open vs closed, doomers vs accelerationists.

The Narrow Path: Agentic AI, Not Just AGI

From my vantage point in both industry and policy, the more pressing reality is not hypothetical AGI but Agentic AI — systems that can reason, plan, take actions, and coordinate with other agents across complex environments today.

These systems:

- execute workflows across cloud, enterprise and financial systems
- make consequential recommendations in healthcare, finance, hiring and policing
- generate, evaluate and refine code autonomously
- increasingly interact with the physical world via robots, IoT and cyber-physical infrastructure

This is where Agentic Safety comes in — the discipline of making sure multi-agent, goal-directed AI systems remain aligned with human purposes, constraints and values as they operate autonomously over long time horizons.
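To make that idea slightly more concrete, here is a minimal, purely illustrative sketch in Python of one agentic-safety pattern: gating every proposed agent action behind an explicit allow-list, an audit log and human escalation. Every name in it (ProposedAction, ALLOWED_TOOLS, requires_human_approval, execute_tool) is hypothetical; real systems use far richer policy engines, but the shape of the control loop is the point.

```python
# Illustrative sketch only: a minimal guardrail loop for a tool-using agent.
# All names below are hypothetical stand-ins for a real policy engine.

from dataclasses import dataclass


@dataclass
class ProposedAction:
    tool: str        # e.g. "send_payment", "query_database"
    arguments: dict  # parameters the agent wants to pass
    rationale: str   # the agent's stated reason, kept for audit


ALLOWED_TOOLS = {"query_database", "draft_email"}    # explicit allow-list
HIGH_IMPACT_TOOLS = {"send_payment", "deploy_code"}  # always escalate


def requires_human_approval(action: ProposedAction) -> bool:
    """Escalate anything outside the allow-list or flagged as high impact."""
    return action.tool in HIGH_IMPACT_TOOLS or action.tool not in ALLOWED_TOOLS


def execute_tool(action: ProposedAction) -> str:
    # Placeholder for the real tool call.
    return f"executed {action.tool}"


def run_guarded_step(action: ProposedAction) -> str:
    """Gate a single agent step: log it, then execute or escalate."""
    print(f"[audit] {action.tool} {action.arguments} :: {action.rationale}")
    if requires_human_approval(action):
        return "escalated_to_human"  # a person decides before anything runs
    return execute_tool(action)      # only pre-approved, low-impact tools run


if __name__ == "__main__":
    print(run_guarded_step(ProposedAction("query_database", {"q": "open invoices"}, "monthly report")))
    print(run_guarded_step(ProposedAction("send_payment", {"amount": 10_000}, "pay vendor")))
```

The design choice worth noticing is that the default is escalation, not execution: anything the policy does not explicitly recognise goes to a human before it touches the world.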

The danger is not simply a future “rogue god-mind”; it is a present fractal proliferation of narrow but powerful agents that can:

- amplify existing inequalities and biases
- destabilise labour markets faster than our social safety nets can adapt
- be repurposed by malicious actors
- trigger cascading failures across interconnected systems

In that sense, the AGI myth can be both a distraction and a useful forcing function: it scares us enough to build new safety institutes, but sometimes blinds us to mundane, immediate harms.

What a Sane Response Looks Like

If AGI is the world’s most consequential proto-conspiracy, then our task is not to mock the believers or blindly join them, but to channel that energy into grounded, multi-layered action:

1. Double down on the science of safety – Support institutes like NIST’s AI Safety/Standards body, the UK/US/International Safety Institutes, Mila and AI Safety Montréal, and the AI Verify ecosystem. Treat safety research as a first-class scientific field, not just policy decoration.
2. Operationalise Trust-by-Design – Use tools like the Foundation Model Transparency Index and AI Index data to drive concrete transparency, evaluation and red-teaming obligations for model providers.
3. Align incentives, not just intentions – Regulatory frameworks (the EU AI Act, national standards, industry codes) must ensure that cutting corners on safety is economically irrational, not merely morally wrong.
4. Focus on human experience (HX), not just capability metrics – As Fei-Fei Li reminds us, AI is not promising anything — people are. We must design, deploy and govern systems with human wellbeing, dignity and agency at the centre.
5. Build an international “safety commons” – The International Network of AI Safety Institutes and the Bengio-led International AI Safety Report are early moves toward a shared global baseline. These efforts need to be scaled, funded and insulated from the political mood swings of any one administration.

Beyond Myth: From Superintelligence to Super-Responsibility

So yes, AGI today has many of the trappings of a conspiracy theory: secrecy, prophecy, high stakes, insider language, and a constantly shifting timeline. But the institutions, research and use cases it is catalysing are very real.

The question is whether we allow the myth to control us — or whether we evolve our governance, ethics and social contracts fast enough to remain in control of our creations.

In my view, the real challenge of the coming decade is not “Will AGI arrive?”

It is:

Will our capacity for collective wisdom scale at least as fast as our capacity for machine intelligence?

We may or may not “feel the AGI” any time soon.

But we will certainly feel the consequences of the choices we make today — about Agentic AI, safety, transparency and human experience.

That, ultimately, is the work.
