“Know Thyself — Again”: AI’s Quiet Revolution Against Human Purpose, Creativity and Choice

By Dr Luke Soon, Author of Genesis: Human Experience in the Age of AI, Partner at PwC, and AI Ethicist

In my book Genesis, I wrote that artificial intelligence is not merely a tool but a mirror — one that reflects back uncomfortable questions about what it means to be human when code competes with cognition. Could AI outthink the greatest (human) philosophers? That’s the question.

Over the past year on LinkedIn, I’ve argued that the real disruption is not whether AI will do our work better — but whether it will force us to revisit the ancient Delphic injunction that Socrates made his own: “Know thyself.”

As we enter mid-2025, AI is challenging some of the oldest cornerstones of our collective identity: the dignity of labour, the sanctity of creativity, and the moral weight of decision-making. Let’s examine how.

Rethinking Purpose: If Work Fades, What Remains?

Since the Industrial Revolution, work has been central to identity, status and meaning. Marx argued that our ability to shape the world through labour is what makes us truly human.

Yet the PwC AI Jobs Barometer 2025 confirms what many of us have witnessed in boardrooms and policy forums: AI is no longer a back-office productivity tool. It is reshaping entire sectors. White-collar tasks — from legal research to financial modelling — are now being automated faster than many anticipated. The Stanford HAI 2025 Index reveals that 65% of surveyed firms expect generative and agentic AI to absorb large parts of knowledge work within the next five years.

In Genesis, I called this the Purpose Paradox: as machines lift the burden of repetitive or complex tasks, humans risk losing the scaffolding of identity that work provided. If our worth has long been tied to productivity, what happens when that productivity is handled by algorithms?

The next chapter demands that we uncouple purpose from labour. Maybe the future of purpose lies less in doing and more in being — in relationships, stewardship, service and experiences no neural network can replicate.

Creativity Revisited: The Algorithmic Muse

Philosophers from Kant to Nietzsche exalted creativity as a unique human trait — a sublime force born from intention, experience and suffering.

Yet as I’ve written on LinkedIn, 2025 is the year the creative myth faces its sternest test. GenAI models now compose symphonies, write short stories that pass human grading rubrics, and co-design complex scientific hypotheses. Research from the Oxford Internet Institute (2025) shows that, for the first time, over 40% of surveyed consumers could not distinguish AI-generated art from human originals.

But is mimicry the same as meaning?

Machines remix. Humans remember. The machine’s canvas holds no heartbreak, no hidden autobiography. Its ‘brushstrokes’ are pattern predictions, not the residue of a life lived. The challenge, then, is not to gatekeep creativity but to evolve it. The future belongs to the co-creator — the human who guides the model’s logic with messy, emotional depth.

Decision-Making: The Moral Burden AI Cannot Bear

Perhaps the hardest philosophical fault line lies in agency. For Aristotle and Kant, moral reasoning — the capacity to weigh right and wrong — sits at the heart of our humanity.

Yet in 2025, systems that plan, reason and act independently — Agentic AI — have become reality. From autonomous finance bots to medical triage assistants and real-time battlefield AI, we are letting code shape outcomes that matter profoundly.

The EU AI Act, Singapore’s Model AI Governance Framework 2.0 and the UK’s latest Responsible AI Standards all lean heavily on the doctrine of meaningful human control. The Stanford AI Regulation Observatory’s 2025 update shows a marked rise in legal frameworks clarifying who bears responsibility for autonomous outputs.

In Genesis, I argued that we cannot delegate moral weight to code. Accountability must flow upstream — to those who design, deploy and govern. Machines do not have skin in the game. We do.

The Human Experience Must Lead

Despite the hype, there is reassuring evidence that humans still hunger for trust, empathy and meaning in interactions. PwC’s HX 2025 Report finds that, as more tasks become automated, the human experience (HX) — the unique blend of customer and employee experience — becomes the competitive frontier.

The future is not about stopping AI’s advance but channelling it towards freeing humans for what machines cannot replicate: moral imagination, shared purpose, and authentic care.

Socrates Had It Right

Some 2,400 years ago, Socrates declared that “the unexamined life is not worth living.” In 2025, that dictum takes on urgent, practical meaning. Every prompt we type, every model we train, every policy we shape must reflect the one question the machine will never ask for us: What does it mean to be truly human when the machine can think?

We must know ourselves — again.

Let’s debate this together. Connect with me on LinkedIn or join my upcoming sessions, where we’ll wrestle with how trust, purpose and responsibility must guide the next chapter of AI.

Luke Soon is an AI ethicist and Partner at PwC. He is the author of Genesis: Human Experience in the Age of AI and writes extensively about AI governance, trust and the future of work.
