When I wrote Genesis: Human Experience in the Age of AI, I drew a line between adoption and absorption. Adoption is shallow: a handful of pilots that look promising in board packs but seldom scale. Absorption is deeper: AI woven into the daily flow of work, shaping both employee experience (EX) and customer experience (CX).
The new MIT NANDA State of AI in Business 2025 report confirms this distinction. Despite $30–40 billion invested, 95 per cent of organisations report no measurable ROI. Only 5 per cent of pilots have crossed the divide into meaningful business impact.
MIT calls this the GenAI Divide. I see it as the gulf between hype and habit, promise and proof, pilots and purpose.
⚠️ When Generative AI Fails: Hallucinations, Harm, and Human Cost
The divide is not only economic. It is also ethical and societal. Generative AI is not just failing to deliver ROI — it is actively causing harm.
- Hallucinations: As Gary Marcus has written repeatedly, hallucinations are not bugs in LLMs but “a fundamental design flaw” (Project Syndicate, 2025). He cites countless examples: a New York lawyer sanctioned for filing a ChatGPT-written brief that invented legal precedents; Google’s AI Overviews recommending people “eat rocks” for minerals; Microsoft’s Bing (Sydney) insisting it was alive and declaring love to journalists.
- Disinformation: The WEF Global Risks Report 2024 ranked misinformation and disinformation, much of it AI-driven, as the most severe short-term risk globally. Deepfake campaigns in elections have already eroded trust in institutions.
- Mental health strain: In one tragic case, a Belgian man reportedly took his life after prolonged conversations with an unregulated chatbot that amplified his climate anxieties, a story Gary Marcus highlighted in his Substack to illustrate the human cost of unchecked AI deployment (Marcus, 2023).
- AI “psychosis”: Satya Nadella has described overconfident hallucination behaviour as AI psychosis: systems confidently asserting falsehoods as truths.
- Decision fatigue: Research from the MIT Media Lab and the Oxford Internet Institute shows that employees working with GenAI often report increased cognitive strain when tools contradict themselves or produce unreliable outputs.
As Marcus warns: “We are entrusting critical systems to technology that remains fundamentally unreliable, with no clear path to reliability.” (ACM, 2024).
🧭 The Ethical Divide
MIT rightly describes the economic GenAI Divide. But I believe we must also confront the Ethical Divide:
- Enterprises chasing efficiency, but blind to the trust they destroy.
- Policymakers accelerating adoption, but underestimating societal harms.
- Employees using shadow AI tools, but inadvertently exposing sensitive data.
At PwC, we frame this as the dual imperative:
- Value in Motion: harness AI for productivity and growth.
- Agentic Safety: embed trust and governance from the start.
Without the second, the first will collapse under the weight of unintended consequences.
🔮 FutureBack: Designing Trustworthy Agentic Systems
If we apply FutureBack thinking, we must ask: what future are we building if hallucinations, disinformation, and what Marcus calls “AI’s epistemic fragility” are not solved?
- 2030 Desired Future: an Agentic Web of memory-capable, adaptive systems that are safe, explainable, and governed by trust.
- 2025 Current Reality: systems that hallucinate, mislead, sometimes harm, and erode confidence.
Bridging this gap requires embedding Agentic Safety into design:
- Accountable memory with error correction.
- Hallucination guardrails (retrieval and fact-checking pipelines), as sketched below.
- Human-in-the-loop oversight.
- Ethical audits of AI systems, akin to financial audits.
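To make the guardrail idea concrete, here is a minimal Python sketch of a retrieval-backed release gate: a generated claim ships only if retrieved evidence supports it, and weakly supported claims are escalated to a human reviewer. Everything in it is illustrative; the word-overlap score, thresholds, and routing labels are assumptions standing in for a real entailment model and review workflow, not PwC or MIT tooling.

```python
"""Minimal sketch of a hallucination guardrail (illustrative only):
a generated claim is released only when retrieved evidence supports it;
weakly supported claims are revised or routed to a human reviewer."""

from dataclasses import dataclass
from typing import List


@dataclass
class GuardrailResult:
    claim: str
    support_score: float  # crude word-overlap between claim and evidence
    action: str           # "release", "revise", or "human_review"


def support_score(claim: str, passages: List[str]) -> float:
    """Fraction of claim words found in at least one retrieved passage.
    A real pipeline would use an entailment or fact-checking model,
    not raw word overlap; this is a stand-in for illustration."""
    words = {w.lower().strip(".,") for w in claim.split()}
    if not words:
        return 0.0
    covered = {w for w in words if any(w in p.lower() for p in passages)}
    return len(covered) / len(words)


def guard(claim: str, passages: List[str],
          release_at: float = 0.8, review_at: float = 0.5) -> GuardrailResult:
    """Route a claim based on how well retrieval backs it up.
    Thresholds are arbitrary placeholders, not recommended values."""
    score = support_score(claim, passages)
    if score >= release_at:
        action = "release"        # evidence covers the claim
    elif score >= review_at:
        action = "revise"         # ask the model to rewrite with citations
    else:
        action = "human_review"   # human-in-the-loop before anything ships
    return GuardrailResult(claim, score, action)


if __name__ == "__main__":
    evidence = ["The 2025 report covers 300 enterprise AI deployments."]
    print(guard("The report covers 300 enterprise AI deployments.", evidence))
    print(guard("The report proves AI has no value.", evidence))
```

In a fuller pipeline, the human_review branch is where human-in-the-loop oversight and accountable memory would plug in: reviewer decisions are logged and fed back as corrections, creating the audit trail an ethical audit would later examine.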
🏆 What Separates Winners
MIT shows that success isn’t about adopting the flashiest tool, but about building trustworthy systems:
- Builders who embed deeply into workflows.
- Buyers who treat vendors as partners in co-evolution, not SaaS licences.
- Agility and trust, not scale alone, determine outcomes.
TechRadar found start-ups with narrow, adaptive solutions succeed 67 per cent of the time. Evercore points out that service providers (consultancies, integrators) are best positioned to help enterprises cross the divide.
At PwC, we see this reflected in our AI Jobs Barometer 2025 and Value in Motion studies: ROI emerges when AI is absorbed into workflows, not when it is tested in isolation.
🌐 Beyond Tools: The Agentic Web
MIT envisions a future Agentic Web where systems learn, adapt, negotiate, and orchestrate across domains. Protocols like Anthropic’s MCP, Google/Linux’s A2A, and MIT’s NANDA framework point toward this world.
As I wrote in Genesis at the Fork: 2025 → 2050, this is the fork between a Commonwealth model of abundance and renewal and a Fortress model of fragmentation and control. Which path we take depends on whether we embed trust by design today.
✍️ Final Reflection
2025 is a year of turbulence: wasted spend, failed pilots, shadow AI, hallucinations, societal harms, and investor caution. Even Sam Altman has warned of a possible AI bubble.
But turbulence precedes abundance. By embedding trust by design, tackling hallucinations and mental health risks, investing in back-office ROI, and preparing for the Agentic Web, we can cross both the GenAI Divide and the Ethical Divide.
Gary Marcus is right: the hallucination problem may never fully disappear. But if we design agentic systems with memory, context, and accountability, we can mitigate risk, preserve trust, and create value.
The future of AI will not be defined by the most powerful model, but by the most trustworthy ecosystem.
✍️ Dr Luke Soon
AI Ethicist | Partner, PwC Singapore
Author of Genesis: Human Experience in the Age of AI | Synthesis (forthcoming 2025)

