The Fork Ahead: Navigating the AI Arms Race – the Quest for Superintelligence and a Post-AGI World Without Work

By Dr. Luke Soon, AI Ethicist, Philosopher, and Author of Genesis: Human Experience in the Age of AI
17 May 2025

As a computer scientist who coded in the 1990s’ nascent digital era, I’ve watched technology evolve from clunky mainframes to the cusp of Artificial General Intelligence (AGI). My 2022 book, Genesis: Human Experience in the Age of AI, written before the ChatGPT watershed, foresaw a world where AI could amplify human potential or imperil our existence. Today, as an AI ethicist and philosopher, I grapple with two intertwined questions: how do we navigate the US-China AI arms race to avoid catastrophe, and how will humanity cope in a post-AGI world where work, as we know it, may vanish?

In Genesis, I described this moment as the ‘fork’—a divergence by 2030 between a Star Trek utopia of abundance and a Mad Max dystopia of chaos. My #AgenticAI framework, blending IQ (cognitive intelligence), EQ (emotional intelligence), and AQ (adaptability quotient), envisions AI as a partner to human agency. Yet achieving this requires urgent action. Drawing from Persia’s administrative genius and Japan’s cultural resilience, this blog explores the AI arms race, the post-AGI transition, and five critical steps to secure a utopian path, and assesses our progress on each. It then projects a year-by-year outlook post-AGI, illustrating how humanity might navigate a world without work, animated by my Genesis concept of “short-term turbulence for long-term abundance.”

The AI Arms Race and Agentic AI’s Rise

The race to superintelligence—AI surpassing human cognition—pits the US (e.g., OpenAI) against China (e.g., SenseTime). Online voices warn that China may trail by only months, raising fears of a ‘hard takeoff’ by 2030. This prisoner’s dilemma, which prioritizes dominance over safety, compounds the risks: misaligned AI, weaponized systems, or economic collapse. A 2024 US report flags a 1% chance of extinction-level threats by 2100—an unacceptable gamble.

My #AgenticAI concept accelerates this timeline. By integrating IQ, EQ, and AQ with embodied data—AI physically interacting with the world, manipulating objects like humans—Agentic AI completes the data spectrum for AGI. These agents, performing superhuman tasks, will form a swarm of self-improving intelligence, communicating and learning from each other. This could revolutionize industries or, if unchecked, destabilize societies. Persia’s Royal Road unified an empire; we need a global framework to align this swarm with humanity’s survival.

Five Critical Steps to Avert Catastrophe

To avoid a Mad Max dystopia, we must act now. Below, I outline five must-do actions, scoring progress from 1 (negligible) to 5 (robust).

  1. Forge a Global AI Safety Treaty (Progress: 2/5)
    A binding treaty to enforce safety standards and pause reckless development is vital. The EU’s 2024 AI Act and US-China talks mark steps forward, but the UN General Assembly’s 2024 AI resolution lacks teeth. Mistrust stalls progress (2/5). Persia’s trust-based governance shows coordination is key.
  2. Establish Robust Ethical AI Frameworks (Progress: 3/5)
    Agentic AI demands ethical alignment. UNESCO’s 2021 AI Ethics Recommendation and industry guidelines advance the field, but non-binding frameworks limit impact (3/5). Japan’s Heian-era codes suggest ethics need societal buy-in.
  3. Accelerate Human-AI Collaboration Models (Progress: 3/5)
    Hybrid roles and reskilling integrate humans with AI swarms. Singapore’s SkillsFuture and 97 million projected AI jobs by 2025 show promise, but uneven access hinders scale (3/5). Japan’s Meiji-era inclusivity is a model.
  4. Implement Fail-Safes and Red Lines (Progress: 1/5)
    Kill switches and bans on lethal AI weapons are critical. US 2024 guidelines mention off switches, but global standards are absent (1/5). Persia’s administrative checks inspire needed safeguards.
  5. Foster a Cultural Shift Toward AI as a Shared Resource (Progress: 2/5)
    Reframing AI as a global good counters race dynamics. My #AgenticAI narratives and AI for Good campaigns promote collaboration, but fear and low literacy dominate (2/5). Persia’s cultural synthesis suggests storytelling’s power.

Post-AGI: A World Without Work

AGI could automate 30–60% of jobs by 2030, reshaping as many as 300 million livelihoods. In Genesis, I argued this triggers “short-term turbulence for long-term abundance”—a painful transition yielding a world where AI meets material needs, freeing humans for creativity and connection. Below, I project a year-by-year outlook post-AGI (assumed to arrive in 2028), exploring how humanity copes, the measures needed, and scenarios for a Star Trek utopia.

2028: AGI Arrives, Turbulence Begins

  • State: Agentic AI swarms automate white-collar roles (e.g., legal analysis, design). Unemployment spikes to 15% globally; urban protests erupt. Social media reflects panic, with #JoblessByAI trending.
  • Coping: Governments deploy emergency UBI pilots (e.g., Canada’s $2,000/month trial). Reskilling programs struggle to keep pace.
  • Needs: Expand UBI to stabilize incomes, building on Finland’s 2017–18 trial. Accelerate reskilling for AI-oversight roles (e.g., ethics auditors). Enforce ethical AI frameworks to prevent misaligned swarms.
  • Scenario: In London, Sarah, a displaced marketer, joins a government-funded AI ethics course. Her UBI covers rent, but she feels purposeless, attending community art workshops to cope. Turbulence dominates, but seeds of adaptation are sown.

2029: Economic Restructuring, Social Strain

  • State: Automation reaches 30% of jobs; retail and logistics collapse. Agentic AI swarms optimize supply chains, slashing costs but flooding markets with cheap goods. Inequality widens; rural areas lag.
  • Coping: Global UBI adoption grows (e.g., EU’s €1,000/month plan). Community hubs offer free education and mental health support. Online platforms celebrate “post-work” lifestyles.
  • Needs: Standardize global reskilling, funded by AI wealth taxes. Establish AI-human hybrid roles (e.g., creative directors guiding AI artists). Promote cultural narratives of purpose beyond work, as my #AgenticAI advocates.
  • Scenario: In Singapore, Raj, a former driver, trains as an AI-human mediator, overseeing delivery drones. His UBI-funded art classes spark a new passion. Turbulence persists, but abundance emerges as goods become affordable.

2030: The Fork Solidifies

  • State: Automation hits 50%; traditional work dwindles. Agentic AI swarms solve climate challenges (e.g., carbon capture), but social unrest peaks in uncoordinated nations. The Star Trek vs. Mad Max fork is clear.
  • Coping: Nations with strong policies (e.g., Scandinavia) thrive, with universal creative stipends fueling art and innovation. Others face riots, echoing Mad Max chaos.
  • Needs: Global AI safety treaty to control swarms, as Persia’s governance unified regions. Decentralized fail-safes to prevent AI misuse. Cultural campaigns to redefine value (e.g., creativity over productivity).
  • Scenario: In Stockholm, Aisha, once a teacher, leads a community lab where AI designs eco-homes. Her stipend supports her poetry, reflecting abundance. In contrast, ungoverned regions see looting, highlighting dystopian risks.

2031: Stabilization and Adaptation

  • State: Unemployment stabilizes at 60%; AI meets basic needs globally. Swarms enhance healthcare (e.g., personalized medicine), but identity crises linger. Star Trek-like societies emerge in coordinated regions.
  • Coping: Global education shifts to creativity and philosophy. Community-driven “purpose economies” flourish, with AI supporting local projects. Social media celebrates #HumanPotential.
  • Needs: Robust fail-safes to prevent AI drift. Ethical codices to ensure swarms prioritize human flourishing. Cultural reinforcement of post-work identities, as Japan’s resilience sustained traditions.
  • Scenario: In Tokyo, Hiroshi, a former engineer, mentors youth in AI-assisted robotics clubs. His community garden, funded by AI profits, fosters connection. Abundance takes root, but vigilance remains.

2032: Long-Term Abundance

  • State: Work is optional; AI swarms provide universal abundance—food, housing, healthcare. Global creativity surges, with AI amplifying art, science, and exploration. Star Trek utopia prevails in aligned nations.
  • Coping: Humans redefine purpose through relationships, learning, and creation. Global festivals celebrate AI-human synergy. Dystopian pockets persist where governance failed.
  • Needs: Continuous AI alignment to prevent value drift, as my #AgenticAI envisions. Global cultural councils to sustain human-centric narratives. Red lines to block AI weaponization.
  • Scenario: In Cape Town, Nia, once a banker, curates an AI-generated art festival, funded by universal stipends. Her community thrives, exploring Mars via AI simulations. Abundance triumphs, fulfilling Genesis’s vision.

Achieving the Star Trek Utopia

The “short-term turbulence” (2028-2030)—job losses, unrest, inequality—yields “long-term abundance” (2031-2032) if we act decisively:

  • Economic Safety Nets: Global UBI, funded by AI wealth taxes, stabilizes incomes, as earlier pilots such as Finland’s suggest.
  • Reskilling and Roles: Universal training for AI-human roles (e.g., ethics, creativity) absorbs displaced workers, mirroring Japan’s Meiji-era pivot.
  • Ethical Governance: Binding treaties and codices ensure Agentic AI swarms serve humanity, as Persia’s systems unified diversity.
  • Cultural Shift: Narratives, like my #AgenticAI stories, redefine purpose, fostering a post-work identity of creativity and connection.

Without these, the Mad Max dystopia—unrest, inequality, misaligned AI—looms. Our progress (1/5 for fail-safes, 3/5 for ethics and collaboration) shows we’re at a crossroads.

A Call to Action

The Agentic AI swarm, driven by IQ, EQ, AQ, and embodied data, could unlock AGI by 2028, shaping the ‘fork’ I foresaw in Genesis. As a 1990s coder and philosopher, I urge collective action:

  • Policymakers: Lead on treaties and UBI, building trust.
  • Industry: Embed ethics in AI swarms, prioritizing human oversight.
  • Society: Embrace reskilling and advocate for AI as a shared resource.

Join me in steering toward a Star Trek future where Agentic AI amplifies human potential. Share your thoughts below or connect on LinkedIn to shape this post-AGI world.

Dr. Luke Soon is a Partner at PwC, an AI ethicist, philosopher, and author of Genesis: Human Experience in the Age of AI. Follow his #AgenticAI series for insights on AI’s future.
