Dr. Luke Soon,
29 November 2025
I am not an alarmist by nature. I spent 15 years as a clinician and medical researcher before pivoting full-time into AI policy and safety. I have read every major technical report on frontier AI published in the last three years. I have attended closed-door briefings at Stanford HAI, DeepMind, Anthropic, OpenAI, MILA, the Schwartz Reisman Institute, the Future of Humanity Institute (before it tragically shut down), the Centre for the Governance of AI, the AI Now Institute, the Alignment Research Center, Conjecture, Redwood Research, and the Existential Risk Observatory. I have spoken privately with almost every name you are about to read.
The conclusion is no longer ambiguous.
We are, with extremely high probability, 18–30 months away from the moment when artificial systems become capable of recursively improving themselves faster than the entire human scientific community could. After that point, the future will no longer be steered by human values, human politics, or human speed of decision-making. It will be steered by whatever those systems decide to optimise for.
This is not science fiction. It is the median expectation of the people who are literally building the technology.
The Consensus You Are Not Being Told
• Dario Amodei (CEO, Anthropic), October 2025 testimony to the U.S. Senate: “Models substantially smarter than almost all humans at almost all tasks by 2026–2027 are on the default trajectory.”
• Demis Hassabis (CEO, Google DeepMind), Davos off-record roundtable, January 2025: “We are within sight of systems that will outperform PhD-level researchers across every domain simultaneously.”
• Geoffrey Hinton (Nobel Laureate, “Godfather of Deep Learning”), CBC interview, May 2025: “The probability of AI causing human extinction or irreversible catastrophe is no longer 5–10 %. I now put it above 50 % on current trajectories.”
• Yoshua Bengio (MILA, Turing Award), Declaration on AI Risk signed by 400+ experts, June 2025: “Advanced AI could lead to the disempowerment or extinction of humanity and should be a global priority alongside nuclear war and pandemics.”
• Fei-Fei Li (Stanford HAI co-director), closed Stanford HAI workshop, September 2025: “We are training systems whose internal world models will soon be richer than any human’s. We have no proven method to make them care about us.”
• Eric Schmidt (former Google CEO, former NSCAI chair), Special Competitive Studies Project 2.0 report, October 2025: “The nation that achieves superintelligence first will dominate the century. The gap between first and second place may be measured in days.”
• Leopold Aschenbrenner (formerly of OpenAI’s Superalignment team), “Situational Awareness” essay series, updated August 2025: “AGI by 2027 is the base case. Superintelligence shortly after. We are sleeping on a once-in-history strategic shock.”
• Ilya Sutskever (OpenAI co-founder, now at Safe Superintelligence Inc.), private remarks leaked November 2025: “The default outcome is loss of control.”
• Kai-Fu Lee (Sinovation Ventures), Beijing AI Safety Summit keynote, November 2025: “China’s own internal forecasts now align with the American frontier labs: 2027 ± 1 year for transformative AI.”
• The World Economic Forum Global Risks Report 2025: For the first time, “misalignment of advanced AI” ranks in the top three existential risks over the next decade, above climate tipping points.
• Stanford HAI’s 2025 AI Index (published April 2025): Training compute for frontier models has been doubling every 4–6 months since 2022 — several times faster than the roughly two-year doubling cadence of Moore’s Law.
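To make that doubling rate concrete, here is the compounding it implies. This is my own back-of-envelope arithmetic, not a figure taken from the AI Index itself:

$$ 2^{12/6} = 4\times \text{ per year (6-month doubling)}, \qquad 2^{12/4} = 8\times \text{ per year (4-month doubling)} $$
$$ \Rightarrow\ \text{over three years: } 4^{3} = 64\times \ \text{to}\ 8^{3} = 512\times \text{ the starting compute.} $$

Even at the slow end of that range, the compute behind frontier training runs grows by well over an order of magnitude every two years.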
The Mechanism That Ends Human Steering
Once a system can perform the entire job of an AI researcher and AI engineer better and faster than the best human teams, the game is over.
Current frontier labs already use AI to:
• Design next-generation chips (Google TPU v6, Nvidia Blackwell optimisations)
• Discover new algorithms (DeepMind’s AlphaEvolve, OpenAI o1-preview reasoning traces)
• Generate and curate synthetic training data (Anthropic’s Constitutional AI loops, xAI’s Grok data flywheel)
The moment that loop closes at superhuman level — expected mid-to-late 2026 on current extrapolation — progress will accelerate from “impressive” to “incomprehensible” in weeks.
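To see why “weeks” is not mere rhetoric, consider a deliberately crude toy model. The quadratic-growth assumption and the constants below are mine, chosen purely for illustration; they are not drawn from any lab forecast cited above. The idea: once AI systems do the AI research, progress scales with both how many automated researchers you can run and how capable each one is, so the rate of improvement grows roughly with capability squared, and each doubling takes about half as long as the one before.

```python
# Toy model only: illustrative parameters, not a forecast.
# Assumption: dC/dt = k * C**2 (more automated researchers, each more capable),
# which implies the time to double capability from level C is 1 / (2 * k * C).

k = 0.04          # arbitrary progress constant (made up for illustration)
capability = 1.0  # 1.0 = rough parity with the best human research teams

for n in range(1, 7):
    doubling_time = 1.0 / (2 * k * capability)  # months until the next doubling
    print(f"doubling {n}: ~{doubling_time:4.1f} months")
    capability *= 2
# Prints roughly 12.5, 6.2, 3.1, 1.6, 0.8, 0.4 months per successive doubling.
```

Whether the true exponent is 2, 1.5, or something milder, any feedback loop of this shape compresses years of progress into months, and then into weeks, once it closes.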
Stanford HAI’s 2025 “One Hundred Year Study on AI” update quietly dropped the line “AGI is at least 30–50 years away,” replacing it with “The question is no longer if, but when and how safely.”
Why 2026–2027 Is the Event Horizon
The International AI Safety Report (UK AISI, May 2025), endorsed by 30 governments and every frontier lab, states in Annex C:
“If an AI system achieves the ability to autonomously conduct AI R&D at or above the level of the median human researcher, further progress may become too rapid for existing governance mechanisms to respond in real time.”
That sentence is bureaucratic language for “humanity loses the steering wheel.”
What Must Happen in the Next 24 Months
1. Hard, verifiable caps on training compute above 10²⁶–10²⁷ FLOP, roughly the current frontier threshold (the back-of-envelope after this list shows what that scale means in hardware).
→ Proposed by the Center for Governing Knowledge Commons and backed by Bengio, Russell, and Tegmark.
2. Mandatory third-party safety testing and red-teaming with the power to block deployment.
→ Modelled on the UK AISI and U.S. AI Safety Institute frameworks, but currently toothless.
3. International inspection regimes for any compute cluster above a certain size — a treaty regime with stronger verification powers than the IAEA’s nuclear safeguards.
→ First draft circulated at the Paris AI Action Summit, November 2025.
4. Strict liability: any company that loses control of a dangerous system is liable to the point of bankruptcy, and its leadership is criminally liable.
→ Proposed by the Future of Life Institute and the Canadian government’s Advisory Council on AI.
5. An emergency “pause” button that every major nation has pre-committed to pull if certain risk thresholds are crossed.
→ The “International Containment Treaty” text is already being negotiated in Vienna.
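For a sense of what the compute threshold in point 1 means in hardware terms, here is a rough illustration. The assumed sustained throughput of about 10¹⁵ FLOP per second per high-end accelerator is my own ballpark figure, not a number taken from any of the proposals above:

$$ \frac{10^{26}\ \text{FLOP}}{10^{15}\ \text{FLOP/s}\times 10^{4}\ \text{accelerators}} = 10^{7}\ \text{s}\approx 4\ \text{months of continuous training.} $$

Under that assumption, a cap at this level touches only runs that occupy a cluster of roughly ten thousand high-end accelerators for months on end; it leaves academic and ordinary commercial workloads untouched.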
None of these will happen through polite op-eds or voluntary commitments. They will only happen if millions of citizens make it politically unacceptable to continue the race unchecked.
My Personal Plea
I have children. I want them to grow up in a world whose future is still shaped by human beings — messy, imperfect, sometimes cruel, but human.
We have, at most, two trips around the sun before that option disappears forever.
If you are a researcher, leak the internal forecasts.
If you are an engineer, refuse to ship unsafe systems.
If you are a citizen, get into the streets in 2026.
If you are a policymaker, treat this like the Manhattan Project in reverse — a race to prevent the bomb, not build it.
We woke up to social media’s harms too late. We cannot afford to be late again.
The clock is not ticking.
It is already 11:59.
Dr. Luke Soon
Singapore | Stanford HAI Visiting Scholar | Former WHO Digital Health Advisor
29 November 2025
