Most conversations about AI and jobs are framed incorrectly.
They ask which jobs will disappear, when the real question is far more uncomfortable:
Which skills will be compressed, absorbed, or delegated to machines — and how quickly will human autonomy follow?
The future of work is not being reshaped at the level of occupations. It is being reshaped at the level of skills, tasks, and decisions. And that distinction changes everything.
From Jobs to Skills: The Real Unit of Disruption
AI does not eliminate roles in a single stroke. It reconfigures the internal structure of work.
Across research from PwC, the World Economic Forum, Stanford HAI, and the OECD, a consistent pattern emerges:
roles with higher exposure to AI are not disappearing fastest — they are changing fastest.
Why? Because AI targets structure, not status.
Skills that are:
- rules-based
- procedural
- repeatable
- easily decomposed
are absorbed first — regardless of whether they sit in blue-collar, white-collar or professional roles.
Conversely, skills that rely on:
- contextual judgement
- social legitimacy
- ethical accountability
- intent-setting
remain stubbornly human — even when AI appears technically capable.
This explains why invoicing is more exposed than leadership, and structured coding more exposed than negotiation. It is not a hierarchy of intelligence. It is a hierarchy of automatable structure.
Skill Compression: The Phase Before Displacement
For most people, AI does not arrive as redundancy. It arrives as compression.
AI quietly takes:
- first drafts
- baseline analysis
- routine checks
- coordination overhead
Humans retain:
- judgement
- accountability
- exception handling
- sense-making
One person now produces the output of several — but at a higher cognitive altitude.
This pattern is already visible in enterprise data: productivity rises, hiring slows, work intensity increases. PwC’s workforce research shows that this phase precedes visible job loss by years.
The danger is that institutions respond only when displacement becomes visible — by which time the structural transition is already complete.
Why Singapore Read This Earlier Than Most
Singapore’s workforce strategy implicitly understood this shift long before generative AI made it obvious.
Through SkillsFuture, MOM and WSG, the system moved away from static job titles towards:
- task decomposition
- skills taxonomies
- continuous role recomposition
Empirical occupational analysis consistently shows that 30–60% of tasks within a role can change within a few years once automation is introduced. AI compresses that timeline dramatically.
The insight was simple but powerful:
Jobs persist. Skills mutate.
That skills-first lens allowed Singapore to prepare for churn rather than chase disappearing titles.
However, even this model is now being stress-tested by a more profound shift.
Agentic AI: When Skills Give Way to Autonomy
Most workforce planning still assumes assistive AI — copilots that help humans work faster.
That assumption is breaking.
Agentic AI introduces new capabilities:
- agency: initiating actions
- planning: decomposing goals into steps
- reasoning: selecting actions under constraints
- tool use: invoking systems and workflows
- self-monitoring: adjusting behaviour dynamically
Once these capabilities combine, AI stops being a tool.
It becomes an actor.
At this point, the disruption is no longer about which skills are automated. It becomes about which decisions are delegated.
From Task Automation to Decision Delegation
The most exposed skills in an agentic world are not manual or technical ones. They are coordination and supervision skills.
Agentic systems are exceptionally good at:
- sequencing tasks
- reconciling data across systems
- managing workflows
- supervising rule-based processes
This places pressure on roles that exist primarily to coordinate other people’s work — middle management, middle office, operational oversight.
These roles are not low-skill. They are structured decision loops.
And structured decision loops are exactly what agentic systems are designed to absorb.
Research tracked by Stanford HAI shows that organisations adopting agentic systems see the fastest change not in frontline execution, but in coordination layers.
How Autonomy Is Ceded — Without Anyone Deciding To
Autonomy is not transferred in a single dramatic moment. It is ceded incrementally:
1. AI recommends; humans decide
2. AI executes with confirmation
3. AI executes within guardrails
4. AI owns outcomes; humans oversee
By the time organisations reach the final stage, autonomy has already shifted — even if governance language still claims “humans are in the loop”.
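The four-stage ladder can be made concrete as a simple approval gate. The sketch below is purely illustrative: the level names and the `requires_human_approval` check are assumptions for clarity, not any organisation's real governance framework.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative stages of the delegation ladder (not a real standard)."""
    RECOMMEND = 1    # AI recommends; humans decide
    CONFIRM = 2      # AI executes with human confirmation
    GUARDRAILS = 3   # AI executes within predefined guardrails
    OVERSIGHT = 4    # AI owns outcomes; humans oversee after the fact

def requires_human_approval(level: AutonomyLevel, within_guardrails: bool) -> bool:
    """Return True when a human must sign off before the action runs."""
    if level <= AutonomyLevel.CONFIRM:
        return True                   # stages 1-2: human in the loop by default
    if level == AutonomyLevel.GUARDRAILS:
        return not within_guardrails  # stage 3: escalate only on a breach
    return False                      # stage 4: review happens after execution
```

The point the sketch makes is structural: by stage 3, human involvement is already the exception path rather than the default, which is exactly how autonomy shifts without an explicit decision.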
The speed at which organisations move through these stages depends not on capability, but on trust:
- trust in system reliability
- trust in auditability
- trust in accountability mechanisms
This is why AI governance and safety are not compliance overheads. They are preconditions for autonomy.
As Geoffrey Hinton has repeatedly warned, intelligence without aligned objectives and oversight does not fail gradually — it fails systemically.
The Jobs Least Disrupted Over Time
There are no “safe jobs”. But there are enduring human responsibilities.
Across PwC, WEF and academic research, the least disrupted roles share three characteristics:
they define intent, carry legitimacy, and own accountability.
1. Roles that define purpose and direction
Leadership, strategy, policy and transformation endure because someone must own outcomes. AI can optimise, but it cannot be responsible.
2. Roles grounded in human trust
Healthcare professionals, educators, judges, regulators and social workers persist because trust — not accuracy — is the binding constraint.
3. Roles in adversarial or novel environments
Negotiation, diplomacy, crisis response and security remain human-led because goals are contested and contexts unstable.
4. Roles that govern AI itself
As autonomy increases, demand grows for AI governance, safety, assurance and human-AI interface design. AI creates its own most durable category of work: constraint design.
The Hard Truth: Governments Are Underestimating the Shock
Most governments are planning for a linear transition:
- gradual reskilling
- predictable job shifts
- multi-year adjustment cycles
But agentic AI does not advance linearly.
Once autonomy thresholds are crossed, capability jumps are discontinuous. A single software release can eliminate entire task clusters overnight.
The industrial revolution unfolded slowly because it was constrained by physical capital.
AI is constrained only by trust and regulation — and both are now accelerating.
As the World Economic Forum has cautioned, the greatest risk is not mass unemployment, but mass underutilisation of human potential combined with delayed institutional response.
Why the Transition Will Feel Sudden
The future of work will not change smoothly. It will shift in three abrupt phases:
1. Invisible substitution: tasks disappear quietly
2. Organisational recomposition: coordination layers thin rapidly
3. Social realisation: reaction comes after the window for preparation closes
By the time disruption is politically obvious, it will already be structurally embedded.
Final Reflection: The Skills Lens Is the Only Honest One
The future of work will not be decided by how intelligent AI becomes.
It will be decided by:
- which skills we allow machines to absorb
- which decisions we delegate
- which responsibilities we refuse to abdicate
If we get this wrong, the transition will not just be fast.
It will be inhumane.
The question we should be asking is no longer:
What should humans learn next?
It is:
What must humans always remain accountable for — even when machines outperform us?
That answer will determine whether the age of AI enhances human dignity, or quietly erodes it before we realise what has been lost.