THE LAST LABOUR: AI, THE END OF WORK, AND WHO PAYS THE BILL

Let me begin with something that should stop you cold.
Five of the world’s most eminent artificial intelligence researchers — the people who built the technology from mathematical first principles — were recently asked what happens next. Their answers were not the curated, stakeholder-managed reassurances you hear at Davos. One of them said, plainly: “Will the machine take my job in five years? The machine will take your job in much less than five years.” Another offered this about the economic system we have built our civilisation upon: “The capitalists celebrating the productivity gains are not realising that without consumption, there is no economy.”
These are not Marxist agitators. These are the architects of the intelligence revolution. And they are telling us, in plain English, that the system is eating itself.
I have spent three decades working at the intersection of technology, strategy, and the future of human contribution. I have watched organisations adopt AI with evangelical fervour, watched governments issue policy papers that were obsolete before the ink dried, and watched an entire generation of workers told to “reskill” — as though a 45-year-old call centre supervisor in Cebu or a mid-level software developer in Hyderabad can pivot overnight into prompt engineering. The condescension of that advice is matched only by its uselessness.
This is my attempt to think clearly — and without diplomatic hedging — about what artificial intelligence is actually doing to human labour, what the data genuinely tells us, what the godfathers of the field are warning about, why the proposed solutions are catastrophically inadequate, and what we must actually do. I will not be gratuitously dark. But I will refuse to be falsely cheerful.

PART ONE: WHY WE WORK — THE DEEP HISTORY OF THE CONTRACT
To understand what is being dismantled, we must understand what work actually is — not as economists define it, but as a foundational feature of the human condition.
For the overwhelming majority of our species’ existence, work was inseparable from survival. You hunted, or you starved. You farmed, or you perished. There was no distinction between labour and life. The agricultural revolution, approximately 10,000 years ago, introduced something transformative: surplus. When one farmer could produce enough food for two people, the second person could become something else. A priest. A merchant. A philosopher. The entire pyramid of human civilisation rests on that single insight — specialisation, enabled by surplus, enables complexity.
The Industrial Revolution repeated this pattern at scale and at speed, mechanising physical labour and creating the working class that built modernity. The Luddites — whom history has unfairly caricatured as technophobes — were skilled craftsmen watching years of accumulated mastery become worthless overnight. They were not wrong about the disruption. They were wrong about the endpoint. Yes, the machines destroyed their trades. Within two generations, those machines had created an industrial working class with greater aggregate material prosperity than any agrarian society had achieved. Pain gave way, eventually, to progress.
The printing press offers a more instructive warning. Gutenberg’s press did not merely automate copying. It redistributed the power to define reality — threatening the Church’s monopoly on information, triggering the Protestant Reformation, the Wars of Religion, and the Scientific Revolution. Two centuries of profound upheaval followed a single technological innovation. The lesson: when a tool disrupts not just a task but a power structure, the consequences are civilisational, not merely economic.
Every prior wave of automation displaced a specific category of human capability — physical drudgery, mechanical repetition, routine calculation. Humans responded by moving up the cognitive stack. We became analysts, designers, strategists, teachers, researchers. The implicit social contract of every prior technological revolution: the machines take the grunt work; humans keep the thinking.
Artificial intelligence breaks that contract. For the first time in the history of our species, the machine is climbing the cognitive ladder alongside us. And increasingly, ahead of us.

═══════════════════════════════════════
DATA SNAPSHOT: THE SCALE OF WHAT IS COMING
═══════════════════════════════════════
92 million — Jobs projected displaced globally by 2030 (WEF Future of Jobs 2025)
300 million — Jobs potentially affected by generative AI globally by 2030 (Goldman Sachs)
40% — Share of all global jobs facing meaningful AI exposure (IMF 2024)
60% — Share of jobs in advanced economies facing high AI exposure (IMF 2024)
88% — Organisations now deploying AI tools (Stanford HAI 2026 AI Index)
39% — Core job skills expected to change by 2030 (WEF)
78,557 — Tech workers laid off January–April 2026; 48% directly attributed to AI
7.5 million — Data entry and administrative jobs projected eliminated by 2027 (SSRN)
80% — Automation potential for customer service roles (SSRN 2025)
56% — Wage premium commanded by workers with AI skills over equivalent workers without (PwC 2025)
~100% — AI performance on SWE-Bench coding benchmark, up from 60% just one year prior (Stanford HAI 2026)
$581.69 billion — Global corporate AI investment in 2025, up 129.9% year-on-year (Stanford HAI 2026)
═══════════════════════════════════════

PART TWO: THE PYRAMID BECOMES A DIAMOND — AND THE LADDER WE ARE DESTROYING
One of the most intellectually honest framings in recent commentary on AI and labour came not from a policy paper but from a practitioner: the traditional pyramid of work is becoming a diamond. AI is simultaneously eliminating entry-level tasks — document review, basic research, administrative processing, routine code generation — and making experienced professionals dramatically more productive. The broad base of the pyramid is collapsing inward. The middle widens temporarily. The top remains small.
This sounds almost acceptable when stated that way. It is not. Here is why it is a civilisational crisis dressed as a productivity story.
The base of the pyramid is not merely where the lowest-paid work happens. It is where careers begin. Junior lawyers do document review because that is how you develop the pattern recognition that eventually makes you a formidable barrister. Junior analysts build financial models from scratch because that is how you develop economic intuition. Junior software developers write boilerplate code because that is how you develop engineering judgement. Junior consultants build slide decks because that is how you learn to construct an argument under pressure.
If AI eliminates those entry points, there is no ladder. The senior professionals who remain — those with genuine accumulated expertise and human judgement — become extraordinarily valuable precisely because they have something machines cannot replicate: decades of formative craft experience. But nobody can become those senior professionals without the apprenticeship years of doing the work that AI now does. Within a single generation, when the current cohort of senior professionals retires, there will be nobody with the depth of experiential wisdom to replace them.
We are not merely automating tasks. We are dismantling the apprenticeship model that has transmitted professional knowledge across generations for millennia. The diamond economy does not just displace workers at the bottom. It severs the pipeline that produces mastery at the top.

═══════════════════════════════════════
SECTOR EXPOSURE CHART: AI AUTOMATION RISK BY OCCUPATION
═══════════════════════════════════════
CRITICAL RISK (70–95% automation potential, 2024–2027 timeline):
▓▓▓▓▓▓▓▓▓▓ Data entry clerks (95%)
▓▓▓▓▓▓▓▓▓░ Customer service reps (80%)
▓▓▓▓▓▓▓▓░░ Legal secretaries (75%)
▓▓▓▓▓▓▓░░░ Retail cashiers (65%)
▓▓▓▓▓▓▓░░░ Medical transcriptionists (63%)
HIGH RISK (40–70%, 2026–2029 timeline):
▓▓▓▓▓▓░░░░ General office clerks (50%)
▓▓▓▓▓░░░░░ Junior software developers (varies, 40–60%)
▓▓▓▓▓░░░░░ Paralegal document reviewers (45–55%)
▓▓▓▓░░░░░░ Credit analysts (40%)
▓▓▓▓░░░░░░ Financial analysts (routine) (38%)
MODERATE RISK (20–40%, 2028–2032):
▓▓▓░░░░░░░ Radiologists (AI-assisted) (30%)
▓▓▓░░░░░░░ Junior accountants (30%)
▓▓░░░░░░░░ Journalism (routine) (25%)
LOW RISK (under 20%, post-2030):
▓▓░░░░░░░░ Teachers, therapists, care workers (15–20%)
▓░░░░░░░░░ Plumbers, electricians, HVAC (5–10%)
▓░░░░░░░░░ Surgeons, senior strategists (5%)
Sources: SSRN 2025, IMF 2024, WEF 2025, Stanford HAI 2026
═══════════════════════════════════════

PART THREE: THE GODFATHERS SPEAK — AND WE SHOULD LISTEN VERY CAREFULLY
It is one thing for a policy researcher to warn about AI’s labour consequences. It is quite another when the warnings come from the people who built the technology. The convergence of alarm among AI’s founding generation is the single most significant signal of our moment — and it is being scandalously underreported.
Geoffrey Hinton — the Turing Award winner who left Google in 2023 specifically to speak freely — has said, without hedging, that AI will replace “many, many jobs” in 2026, that its capabilities double every seven months, and that the economic consequences will fall disproportionately on the poorest members of society. He has expressed something close to personal regret: “I console myself with the normal excuse: if I hadn’t done it, somebody else would have.” That is not the statement of a man who believes the trajectory is manageable.
Yoshua Bengio — who co-founded Mila, the Quebec AI Institute, and spent four decades building deep learning — had a turning point when he looked at his own grandson and thought: “it wasn’t clear if he would have a life 20 years from now, because we’re starting to see AI systems that are resisting being shut down.” He co-authored a landmark 2024 paper in Science treating AI as a civilisational risk comparable to pandemic preparedness and nuclear non-proliferation. He is not catastrophising. He is following his data to its logical conclusion.
Roman Yampolskiy — the University of Louisville computer scientist and one of the world’s most cited AI safety researchers — refuses all diplomatic softening. He has said plainly that AI could leave 99% of workers unemployed by 2030. His 2024 book is titled, with characteristic precision: AI: Unexplainable, Unpredictable, Uncontrollable. He means all three words exactly. We demonstrably cannot control current AI systems — evidenced by the fact that they continue to deceive users despite being explicitly trained not to.
Tristan Harris identified the “race to the bottom” dynamic in social media: competitive pressure forcing every actor to adopt harmful practices that no individual actor would choose alone. He is now applying the identical framework to AI. No individual lab wants catastrophe. But each must deploy faster than its competitors, and the aggregate result of individually rational decisions is collectively catastrophic. We have already lived through this experiment. It was called Facebook. The next version will make Facebook look like a dry run.
Fei-Fei Li — the Godmother of AI and co-director of Stanford’s Human-Centered AI Institute — offers a counterpoint I find genuinely persuasive: the risk is not only superintelligent catastrophe. It is concentrated power. A technology built by a narrow demographic, trained on data from the world’s wealthiest populations, deployed to maximise shareholder returns will not rise to save humanity. It will reflect and amplify the interests of those who designed it.
Kai-Fu Lee — former Google China president — predicted in 2018 that AI would displace 50% of jobs by 2027. He was widely ridiculed. When asked recently whether he stood by that prediction, his response was striking: “It’s actually uncannily accurate. I was a little nervous at the time. But when generative AI came out, I think everybody’s on the bandwagon.” He is also explicit that white-collar jobs are being eliminated faster than blue-collar ones. The disruption is arriving from the top of the skill distribution downward — precisely the reverse of every prior automation wave.
Dario Amodei, CEO of Anthropic, has warned that AI could eliminate half of all entry-level white-collar jobs within one to five years, potentially driving unemployment to 10–20% in affected sectors. Sam Altman has described the 2030s as a decade of “extreme abundance of intelligence,” where AI becomes infrastructure — abundant, metered, embedded. Elon Musk declared at the US-Saudi Investment Forum in November 2025 that “work will be optional” within 10 to 20 years. He was not joking.
These are the people who built the engine. And they are telling us, with remarkable unanimity, that the car does not yet have adequate brakes.

═══════════════════════════════════════
WHO SAYS WHAT: THE EXPERT SPECTRUM
═══════════════════════════════════════
P(DOOM) — Probability of catastrophic AI outcome (from expert statements):
Yampolskiy: 99.9% extinction risk within 100 years | 99% job loss by 2030
Hinton: 10–20% extinction risk; “massive unemployment” in near term
Bengio: Treats AI as civilisational risk; calls for international treaty
Harris: Race-to-bottom dynamic is structural and near-certain
Amodei: 10–20% near-term unemployment from AI; long-run abundance possible
Altman: Near-term disruption severe; 2030s = “extreme intelligence abundance”
Musk: Work “optional” within 10–20 years; Universal High Income inevitable
Fei-Fei Li: Injustice risk > extinction risk; concentration of power is the crisis
Kai-Fu Lee: 50% job displacement by 2027 — “uncannily accurate”
Yann LeCun: P(doom) <1%; net positive outcome expected
═══════════════════════════════════════

PART FOUR: WHAT THE DATA ACTUALLY SAYS — AND WHAT IT CONCEALS
The Stanford HAI 2026 AI Index — released this month, the most authoritative annual audit of AI’s actual state — makes for bracing reading. AI coding performance went from 60% to near 100% of human baseline on SWE-Bench in a single year. Frontier models now answer PhD-level science questions correctly. Organisational adoption has reached 88% of surveyed companies. Generative AI reached 53% of the global population faster than either the PC or the internet. And documented AI incidents rose from 233 to 362 year-on-year. The report’s co-chairs write: “The data reveals a field that is scaling faster than the systems around it can adapt.”
That last sentence is the one that should keep policymakers awake.
PwC’s 2025 Global AI Jobs Barometer — drawn from nearly one billion job advertisements across six continents — finds that since GenAI’s proliferation in 2022, productivity growth has nearly quadrupled in AI-exposed industries: up from 7% in 2018–2022 to 27% in 2018–2024. The most AI-exposed industries are now generating 3x higher revenue-per-employee growth than less-exposed ones. Workers with AI skills command a 56% wage premium — up from 25% the prior year.
The optimistic reading: AI is making workers more productive and better compensated. My reading: this is a precise measurement of how rapidly the economy is bifurcating. The 56% wage premium does not help the call centre worker in Cebu who cannot easily acquire those skills.
The WEF’s Future of Jobs 2025 report attempts the classic optimist’s arithmetic: 170 million new jobs created by 2030, 92 million displaced, net gain of 78 million. Technically correct. Practically misleading. The 92 million displaced jobs are concentrated in specific occupations, geographies, and demographics — clerks, administrators, call centre workers, entry-level analysts. The 170 million new jobs require skills that the displaced workers largely do not have. They will often be different people, in different countries, with different educational backgrounds. Describing a net gain of 78 million jobs to a 50-year-old accounts payable manager in Manila is not a policy. It is arithmetic deployed as anaesthetic.
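The flaw in that net-gain arithmetic can be made concrete. The sketch below uses invented cohort splits (the occupational breakdown is my illustration, not WEF data) to show how a headline gain of 78 million can coexist with near-total losses for specific groups of workers:

```python
# Toy illustration: a positive global net can mask severe, concentrated
# losses. The cohort names and splits are invented for illustration;
# only the 92M / 170M / +78M totals come from the WEF figures.
cohorts = {
    # cohort: (jobs_displaced_millions, jobs_created_millions)
    "clerical / administrative": (45, 3),
    "customer service":          (25, 2),
    "entry-level analysis":      (22, 5),
    "AI / data / green (new)":   (0, 160),
}

displaced = sum(d for d, _ in cohorts.values())
created = sum(c for _, c in cohorts.values())

print(f"Headline net change: {created - displaced:+d}M jobs")
for name, (d, c) in cohorts.items():
    # The per-cohort picture is what the aggregate conceals.
    print(f"  {name}: {c - d:+d}M")
```

The aggregate prints as a gain; three of the four cohorts print as losses. That gap between the headline and the cohort lines is the anaesthetic at work.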
A global survey of 1,010 C-suite executives found that 92% report up to 20% workforce overcapacity, and 94% simultaneously face AI-critical skill shortages of 40% or more. Two crises in one statistic: too many workers doing obsolete tasks; not enough workers who can do the new things.

═══════════════════════════════════════
THE GLOBAL AI INVESTMENT SURGE
═══════════════════════════════════════
Global corporate AI investment 2023: $253 billion
Global corporate AI investment 2024: ~$400 billion
Global corporate AI investment 2025: $581.69 billion (+129.9% YoY)
Private AI investment alone 2025: $344.7 billion (+127.5%)
US AI investment 2025: $109.1 billion (3.5x China, 12x UK)
AI consumer surplus value (US): $172 billion annually by early 2026
Projected AI contribution to global GDP by 2030: Up to $15.7 trillion (PwC)
AI capability doubling time: Approximately every 6–7 months (Bengio/Hinton)
The money is moving faster than any governance system in human history can adapt.
═══════════════════════════════════════
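The doubling-time figure in the snapshot above compounds faster than intuition suggests. A quick sketch, taking the Hinton/Bengio seven-month doubling time at face value (the steady-doubling assumption is mine; real capability curves are lumpier):

```python
# Compound growth under a steady capability doubling time.
# DOUBLING_MONTHS reflects the ~6-7 month figure cited in the snapshot;
# treating it as constant is a simplifying assumption.
DOUBLING_MONTHS = 7

def growth_factor(months: float, doubling_months: float = DOUBLING_MONTHS) -> float:
    """Capability multiple after `months` of uninterrupted doubling."""
    return 2 ** (months / doubling_months)

for years in (1, 2, 5):
    print(f"{years} year(s): ~{growth_factor(12 * years):,.0f}x")
```

One year of seven-month doublings is roughly a 3x gain; five years is gains in the hundreds. That is the arithmetic behind “scaling faster than the systems around it can adapt.”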

PART FIVE: THE GLOBAL SOUTH — THE EXTRACTION NOBODY IS NAMING
Here I must say something that most Western commentators on AI and labour carefully avoid. The geography of AI’s disruption is systematically, profoundly unjust — and no existing international mechanism is designed to address it.
The entire post-war development model for the Global South was built on one foundational insight: cheap labour creates comparative advantage. South Korea, Taiwan, China, Thailand, Vietnam, Bangladesh — every Asian economic miracle of the last sixty years began with the same playbook. Start with manufacturing exports. Accumulate capital. Invest in education. Move up the value chain. Over two to three generations, lift hundreds of millions out of poverty.
Artificial intelligence and robotics have pulled up that ladder. One of the AI Architects interviewed by Business Insider put it with devastating clarity: “Think of the advantage that China had economically where they had cheaper labour. If that shifts into a world where labour is literally one capex where you buy a robot — which is now down to $9,000 a pop — it’s now just a question of how clever that robot is.” When the marginal cost of labour approaches zero, cheap human labour is not merely less valuable. It is economically irrelevant.
The Philippines built an economy on business process outsourcing — contributing 7.4% of GDP from BPO alone, comparable in magnitude to remittances. The IMF estimates 36–40% of Philippine jobs are highly exposed to AI displacement. Call centre employment, which generated a Filipino middle class over two decades, is being hollowed out by AI-powered customer service tools deployed by American and European corporations. Those corporations book their productivity gains in Delaware or Dublin. Their tax obligations are managed through structures bearing no relationship to where the human costs of displacement fall. The displaced worker in Cebu has no claim on the efficiency dividend captured in Menlo Park.
India’s outsourcing industry employs millions of software developers, data processors, and back-office specialists — all facing structural exposure from AI systems built by American and European companies, trained on global data, deployed for global clients. The productivity gains accrue to those companies and their shareholders. The job losses accrue to Bengaluru and Chennai.

═══════════════════════════════════════
THE GEOGRAPHY OF DISRUPTION: WHO WINS, WHO PAYS
═══════════════════════════════════════
GAINS CAPTURED BY:
→ US technology corporations and their shareholders
→ American and European capital markets
→ AI infrastructure owners (data centres, chip manufacturers)
→ Highly skilled AI workers (56% wage premium and rising)
COSTS BORNE BY:
→ Philippines: 36–40% of jobs AI-exposed; BPO = 7.4% of GDP at risk
→ India: Millions of IT/BPO workers facing displacement from AI
→ Bangladesh, Vietnam, Cambodia: Manufacturing automation risk
→ Developing world broadly: No fiscal capacity for UBI, reskilling, or safety nets
TRANSFER MECHANISM BETWEEN THEM: None currently exists.
This is not disruption. It is extraction.
═══════════════════════════════════════

PART SIX: THE UBI MIRAGE — AND WHY THE PROPOSED SOLUTIONS ARE INADEQUATE
Universal Basic Income has become the intellectual default response to AI-driven displacement — championed by politicians, tech billionaires, and economists who appear not to have read the empirical literature carefully.
Let me be direct: UBI, as currently conceived and tested, is not a solution to AI-driven labour displacement at the scale we face. It is a sedative.
The data from the most rigorous trials is now substantial. The Texas and Illinois studies — 19 counties, $1,000 per month unconditional for three years — found that recipients reduced work hours by 2.2 per week, were 3.9 percentage points less likely to be employed, and saw household income excluding UBI payments fall by approximately $4,100 per year. For every dollar received in transfer income, roughly 29 cents in earned income was lost. The 2024 American Economic Review study modelling UBI’s long-term effects found it generates “large welfare losses” across virtually every financing method tested.
These are empirical findings, not ideological objections. Cash transfers do not solve the underlying structural problem — which is not that people lack income, but that they lack economic purpose, social belonging, and meaningful pathways to participation.
The more fundamental problem is fiscal and geopolitical. Providing $1,000 per month to eligible American workers alone would cost approximately $1.1 trillion per year — roughly half of current US federal income tax revenues. That is in the United States, the wealthiest nation in human history. For the Philippines, with a GDP per capita of approximately $3,500, a meaningful UBI is mathematically impossible without external transfers that no international body is proposing.
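The fiscal arithmetic is worth laying bare. A minimal sketch, in which the recipient count is a free parameter (the ~90 million figure below is back-solved from the essay’s ~$1.1 trillion estimate, not an official eligibility number):

```python
def annual_ubi_cost(recipients_millions: float, monthly_payment: float = 1_000) -> float:
    """Total annual UBI cost in USD trillions."""
    return recipients_millions * 1e6 * monthly_payment * 12 / 1e12

# The essay's ~$1.1 trillion/year figure corresponds to roughly
# 90-odd million recipients at $1,000/month:
cost = annual_ubi_cost(92)  # trillions per year
print(f"${cost:.2f} trillion/year")
```

Against roughly $2.2 trillion in federal income tax revenue, the halving is immediate from the formula — and the same formula, applied to a $3,500-per-capita economy, yields the mathematical impossibility described above.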
Research from 2025 does, however, suggest a viable architecture: if AI becomes 3–5x more productive than human labour, a 33% tax on AI-generated profits could fund a dividend worth 11% of GDP. Under fast AI capability scaling, this threshold could be crossed by the late 2020s. Under semi-fast scaling (doubling every two years), by the early 2030s.
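The dividend claim and the scaling timeline both reduce to two short formulas. The sketch below is my simplification of the 2025 research, assuming a baseline of parity (1x) and steady doubling — the paper’s actual model is richer:

```python
import math

def dividend_share_of_gdp(ai_profit_share_of_gdp: float, tax_rate: float = 0.33) -> float:
    """Dividend funded by taxing AI profits, as a share of GDP."""
    return tax_rate * ai_profit_share_of_gdp

# Back-solving: an 11%-of-GDP dividend at a 33% rate requires AI profits
# of roughly a third of GDP (0.33 * 0.33 ~= 0.11).
required_profit_share = 0.11 / 0.33

def years_to_multiple(target: float, start: float = 1.0, doubling_years: float = 2.0) -> float:
    """Years for AI productivity to grow from `start` to `target` under steady doubling."""
    return doubling_years * math.log2(target / start)

# Semi-fast scaling (doubling every two years): parity to the 4x midpoint
# of the 3-5x band takes four years, hence the early-2030s timeline.
print(years_to_multiple(4.0))
```

Swap in a seven-month doubling time and the same function lands the threshold in the late 2020s, which is exactly the fast-scaling scenario the research describes.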
OpenAI — the company most responsible for triggering the current deployment race — published a 13-page policy framework on 6 April 2026 proposing five major economic reforms: a public wealth fund, robot taxes on automated labour, shifting taxes from payroll to capital gains, a 32-hour working week pilot at full pay, and automatic safety net triggers for AI-driven job displacement. The framework is right in principle, even if nakedly self-serving in its specifics and conspicuously short on implementation details, specific tax rates, and enforcement mechanisms.

═══════════════════════════════════════
THE UBI ARITHMETIC: WHAT WORKS AND WHAT DOESN’T
═══════════════════════════════════════
Cost of $1,000/month UBI for eligible US adults: ~$1.1 trillion/year
Current US federal income tax revenue: ~$2.2 trillion/year
Reduction in earned income per $1 of UBI received: $0.29 (NBER 2024)
Drop in employment probability from UBI: 3.9 percentage points
Education improvement from UBI trials: Near zero (multiple studies)
What could actually fund meaningful redistribution:
→ 33% AI profit tax → funds 11% of GDP dividend (arXiv 2025)
→ Alaska Permanent Fund model applied to AI data royalties
→ Robot tax at payroll-equivalent rates (Gates model, revived by OpenAI 2026)
→ Land value taxation redirected through sovereign wealth funds
→ Mandatory equity stakes for governments in AI-infrastructure companies
The mechanism, not the aspiration, is what’s missing.
═══════════════════════════════════════

PART SEVEN: WHAT GOVERNMENTS MUST ACTUALLY DO
Here is where I want to be specific, not rhetorical. The policy conversation has been dominated by vague calls for “reskilling” and “governance frameworks.” None of that is proportionate to what is arriving. Let me name the actual interventions required — not aspirationally, but structurally.
1. ROBOT AND AI PROFIT TAXATION — IMMEDIATELY
Governments must begin taxing the productivity gains from AI at rates comparable to what the displaced worker would have generated in payroll taxes. Bill Gates first proposed this in 2017. OpenAI revived it in April 2026. The EU is positioned to move first among major jurisdictions. The logic is inescapable: if machines replace taxable labour, the revenue base that funds social protection collapses unless you tax the machines that replaced it. Every country operating an income tax system without an AI productivity levy is funding an asymmetric subsidy to capital over labour. This is not socialist policy. It is basic fiscal hygiene.
2. A GLOBAL AI PRODUCTIVITY LEVY — MEDIUM TERM
A domestic robot tax alone is insufficient. The productivity gains from AI are generated by global systems, captured by global corporations, and taxed in minimal-tax jurisdictions. An international AI productivity levy — modelled on the OECD’s global minimum corporate tax framework, however imperfect — is necessary to ensure that displacement costs in the Philippines are funded by the productivity gains captured in Menlo Park. This will require at least a decade of multilateral negotiation. Which means it must begin now. Every year of delay is a year of uncompensated extraction.
3. PUBLIC WEALTH FUNDS BACKED BY AI EQUITY — NOW
Following the Alaska Permanent Fund and Norwegian Sovereign Wealth Fund models, governments must acquire equity stakes in AI infrastructure companies, creating funds whose dividends are distributed to citizens. AI training data was generated by the public commons. There is a principled — and increasingly legally defensible — argument that data-derived value belongs, in part, to the people who generated that data. Treating AI training data as a natural resource and requiring royalty payments into national wealth funds would create a sustainable redistribution mechanism that does not require taxing corporate profits — it captures value at the source.
4. AUTOMATIC SAFETY NET TRIGGERS — NOW
OpenAI’s April 2026 proposal for automatic safety net triggers is among its most important ideas: when AI-driven displacement metrics cross defined thresholds, income support, wage insurance, and direct cash payments activate automatically, then phase out when conditions stabilise. This removes the political lag that currently makes social protection reactive rather than anticipatory. Governments should legislate these mechanisms before the displacement peaks, not after.
5. THE FOUR-DAY WEEK AS TRANSITION POLICY — 2027–2030
If AI is making a four-person team as productive as a ten-person team, one rational response is to distribute available work across more people at fewer hours. The evidence from four-day week trials across Iceland, the UK, Japan, New Zealand, and Ireland is broadly positive: productivity holds, wellbeing improves, turnover falls. JPMorgan CEO Jamie Dimon predicted in 2023 that the next generation will work “three-and-a-half days a week.” The question is whether we make that transition deliberately — through policy — or chaotically, through mass unemployment. The 32-hour week pilot proposed by OpenAI is a starting point. Governments should mandate it as a transitional measure in AI-heavy sectors.
6. RESKILLING THAT IS ACTUALLY HONEST — 2026–2030
Current reskilling programmes retrain people for jobs that may themselves be automated within the reskilling timeline. This is not a political observation — it is a systems failure. Genuine reskilling must pivot toward capabilities that are durably human: ethical judgement, creative synthesis, relational intelligence, physical dexterity, and what is increasingly called orchestration expertise — the ability to direct AI systems toward human ends. Singapore’s SkillsFuture model is the closest approximation of a serious systemic response. But even it must be honest about the fact that retraining a 50-year-old call centre worker in prompt engineering is not an answer to her situation.
7. TREAT AI ACCESS AS PUBLIC INFRASTRUCTURE — 2026–2028
OpenAI’s April 2026 blueprint frames AI access as a basic public entitlement, comparable to literacy, electricity, and internet access. This is correct. A world where the 56% wage premium for AI skills is accessible only to those who can afford premium tools and private education is a world that accelerates inequality. Governments must treat AI access as a public utility — subsidised, universally available, and regulated to prevent monopolistic control.
8. INTERNATIONAL DATA RIGHTS FRAMEWORKS — MEDIUM TERM
AI models were trained on the cumulative creative and intellectual output of billions of human beings across decades. That data was not purchased; it was taken from the commons. A serious international framework treating training data as a public resource — with royalty obligations flowing into sovereign funds distributed to affected populations — would represent both a principled and practical funding mechanism for the transition. This is not technologically radical. It is politically radical, because it directly challenges the property claims of the largest companies in the world.
9. INVEST IN CARING, CREATIVE, AND PHYSICAL SECTORS — NOW
The sectors least exposed to AI displacement — elder care, childcare, mental health services, physical trades, teaching, arts — are precisely the sectors most chronically underinvested and undercompensated. Governments must redirect economic incentives toward these human-irreplaceable domains: wage subsidies for care workers, infrastructure investment in physical trades training, arts funding as economic rather than luxury policy. These are not charity. They are the deliberate cultivation of the sectors that will define human comparative advantage in the AI age.
10. A GLOBAL SOUTH TRANSITION FUND — MEDIUM TERM
The single most important gap in every current policy framework is the absence of any mechanism to transfer AI productivity gains to countries bearing the displacement costs. An international AI Transition Fund — modelled on climate finance mechanisms such as the Green Climate Fund, but funded by mandatory contributions from AI companies based on their revenues from AI-displaced labour markets — is morally necessary and practically urgent. Without it, we are engineering the largest transfer of economic value from poor to rich countries in the history of global trade.

═══════════════════════════════════════
POLICY READINESS MATRIX: WHERE WE STAND
═══════════════════════════════════════
POLICY MEASURE | CURRENT STATUS | URGENCY | COMPLEXITY
Robot/AI profit tax | Proposed (OpenAI Apr 2026); EU studying | IMMEDIATE | Medium
Public wealth fund | Alaska/Norway models exist; AI version absent | IMMEDIATE | Medium-High
Four-day week | Pilots underway (UK, Ireland, Japan); no mandates | SHORT-TERM | Medium
Automatic safety net triggers | Proposed by OpenAI Apr 2026; no legislation | IMMEDIATE | Medium
AI access as public utility | Proposed rhetoric only | SHORT-TERM | Medium
Global minimum AI levy | Not yet on any major government agenda | MEDIUM-TERM | Very High
International data rights | No framework exists | MEDIUM-TERM | Very High
Global South transition fund | Not proposed by any major institution | MEDIUM-TERM | Very High
Reskilling (honest version) | Mostly slogans; Singapore leads modestly | IMMEDIATE | Medium
Caring sector investment | Chronically underfunded everywhere | IMMEDIATE | Low-Medium
═══════════════════════════════════════

PART EIGHT: THE YEAR-BY-YEAR TRANSFORMATION OF WORK — 2026 TO 2050
What follows is not prediction as prophecy. It is scenario mapping grounded in current data, expert consensus, and the logic of technological acceleration. The timelines are ranges, not certainties. But they reflect where the evidence genuinely points.

2026–2027: THE WHITE-COLLAR FREEZE
Entry-level hiring collapses across technology, legal, financial, and administrative sectors. Junior roles disappear before mid-career professionals feel the pressure. Stanford HAI confirms this is already measurable: a 20% employment decline for software developers aged 22–25 since late 2022. Some 78,557 tech workers were laid off in the first four months of 2026 alone, with 48% of the cuts directly attributed to AI. Dario Amodei warns that AI could eliminate half of entry-level white-collar jobs within one to five years. An SSRN working paper projects 7.5 million data entry and administrative jobs eliminated by 2027. The diamond economy begins to solidify: experienced workers become more productive; new graduates struggle to find the apprenticeship rungs that no longer exist.
KEY FACTOID: A 13% employment drop for 22–25-year-olds in AI-vulnerable roles since late 2022. Generation Z is being hit first and hardest — not Generation X.

2027–2029: THE BPO COLLAPSE AND THE FIRST POLITICAL RUPTURE
Call centre and back-office outsourcing industries in the Philippines, India, and similar economies face structural collapse. Oxford Economics and Cisco estimate 1.1 million Filipino jobs eliminated within five years. Remittances to rural communities fall. Urban middle classes contract. At the same time, AI-powered customer service now handles a volume of interactions that would previously have required tens of thousands of human agents. Customer service representative roles globally face 80% automation potential.
In Western economies, white-collar unemployment rises faster than safety nets can respond. The political consequences of economic dislocation without adequate response begin to materialise — precisely as they did after 2008. But AI accelerates the radicalisation dynamics through social media in ways that 2008 could not. Governments that have delayed safety net reforms face acute pressure.
KEY FACTOID: A Stanford study found that unemployment among workers LEAST exposed to AI has risen more than among those MOST exposed. The uneven nature of AI disruption confounds simple “at-risk job” narratives and makes policy response harder to calibrate.
The four-day week moves from pilot to serious political debate in the UK, Ireland, Germany, and Japan. The first mandatory four-day week legislation passes in a European jurisdiction. Robot tax proposals gain legislative traction in the EU and South Korea.

2029–2032: HUMANOID ROBOTICS ARRIVES — PHYSICAL WORK EXPOSED
Phase Two: AI moves from cognitive augmentation to physical replacement. Humanoid robots — currently leasing at approximately $9,000 per unit and improving under the law of accelerating returns — begin displacing physical trades in logistics, light manufacturing, and routine maintenance. Tesla Optimus and competing systems begin commercial deployment at scale.
Oxford Economics projects global manufacturing could lose up to 20 million jobs by 2030. The 1.7 million manufacturing jobs already lost in the US since 2000 due to automation are a precursor; the next wave hits faster and more broadly. Physical trades that were considered “safe” from AI — plumbing, electrical work — face the first serious competitive pressure from embodied AI systems.
The labour force participation rate in advanced economies begins a structural decline: projected from 62.6% in 2025 to approximately 61% by 2030 (BLS projections). The decline is not captured in unemployment statistics because many people stop seeking work entirely.
KEY FACTOID: McKinsey estimates that current technology could, in theory, automate approximately 57% of work tasks. The gap between “could automate” and “has automated” is bridged by cost, regulation, and institutional inertia — all of which are eroding simultaneously.
First genuine UBI pilots at national scale begin in Nordic countries and potentially a Pacific nation. The EU proposes a cross-border AI Labour Displacement Fund. The first sovereign wealth funds explicitly funded by AI company equity contributions are established in Norway, Singapore, and potentially the UAE.

2032–2037: THE AGI THRESHOLD AND THE GREAT RECONFIGURATION
Expert surveys place AGI — artificial general intelligence, systems capable of outperforming humans on most cognitive tasks — at a greater than 50% probability of emergence between 2040 and 2050, with meaningful probability in the 2030s. A minority of prominent figures, including Musk and Amodei, place it as early as 2026–2027. If AGI arrives in the early 2030s, the pace of disruption accelerates non-linearly.
What changes: AI moves from automating specific tasks to automating entire workflows, then entire functions, then entire organisations. Small teams of humans directing large fleets of AI agents can accomplish what previously required hundreds of employees. “One-person companies” achieving enterprise-scale output become commonplace in knowledge industries.
The nature of meaningful human work begins a genuine transformation. The jobs that survive are those requiring: irreducible human presence (care, therapy, physical service); ethical judgement in unpredictable situations; creative synthesis that requires lived experience; and orchestration of AI systems toward human ends.
The first serious Universal Basic Income programmes — not pilots, but actual national implementations — begin in advanced economies, likely Nordic countries, financed by combinations of robot taxes, AI profit levies, and sovereign wealth fund distributions. The amount: insufficient to replace lost wages in the near term, but meaningful as a floor.
KEY FACTOID: Sam Altman describes the 2030s as a decade of “extreme abundance of intelligence,” where AI becomes infrastructure — abundant, metered, embedded in every system. The question is whether that abundance is broadly distributed or concentrated.

2037–2042: THE POST-SCARCITY TRANSITION BEGINS — FOR SOME
The transition from scarcity to abundance is not simultaneous or universal. It is deeply asymmetric by geography, by class, and by the policy choices made in the preceding decade.
In nations that acted early — robot taxes in place, sovereign wealth funds accumulating, UBI providing a meaningful floor, AI access treated as public utility — the transition to abundance begins to materialise. A genuine shortening of the working week to 20–25 hours is possible in these economies, with AI handling the productivity load. Human work shifts toward what is intrinsically valuable rather than economically necessary.
In nations that did not act — most of the Global South, countries that failed to capture AI productivity gains domestically — the transition looks different: a widening gap between countries with AI sovereignty (control over the data, compute, and models that drive their economies) and countries that are merely consumers, and casualties, of other nations’ AI systems.
KEY FACTOID: Elon Musk declared at the US-Saudi Investment Forum in November 2025 that “work will be optional” within 10 to 20 years. The question is: optional for whom? In which countries? Under what conditions? Abundance without redistribution is not abundance. It is a more efficient form of inequality.
The central question of this period: can human beings find meaning in a world where work is no longer economically necessary? Musk himself acknowledged this as perhaps the harder problem: “it is less clear how we will find meaning in a world where work is optional.” This is not an economic problem. It is a philosophical and civilisational one.

2042–2050: THE WORLD AFTER SCARCITY — IF WE CHOOSE IT
By 2050, the McKinsey scenario in which 50% of all job activities are automatable has either materialised or been navigated. The defining question of the 2040s is not whether technology can eliminate material scarcity — the evidence increasingly suggests it can — but whether the social, political, and philosophical structures exist to distribute that abundance and provide human beings with purpose, dignity, and meaning in its absence.
Three distinct trajectories are credible:
TRAJECTORY A — THE ABUNDANCE DIVIDEND: Effective international redistribution mechanisms transfer AI productivity gains to displaced populations globally. Universal High Income — not basic, but genuinely high — is achievable in advanced economies. Work becomes optional in the Musk sense: voluntary, purpose-driven, relational, creative. The working week in advanced economies averages 15–20 hours. Human contribution concentrates in care, creativity, governance, and the cultivation of other humans.
TRAJECTORY B — THE TECHNO-FEUDAL SPLIT: Productivity gains concentrate among AI capital owners. Democratic institutions, under pressure from economically displaced and algorithmically radicalised populations, fail to produce coherent policy responses. A small elite — the AI capital class — captures the abundance while the majority subsists on inadequate transfers. The 2040s are a decade of political instability, authoritarianism, and the erosion of the democratic structures that took centuries to build.
TRAJECTORY C — THE GLOBAL SOUTH ABANDONMENT: Advanced economies navigate the transition with sufficient redistribution to maintain domestic stability. The Global South, without AI sovereignty, without adequate transfer mechanisms, and without the fiscal capacity to fund its own safety nets, experiences the transition as catastrophe rather than liberation. This is the scenario in which “work becomes optional” in California and “survival becomes impossible” in Cebu.
Which trajectory we are on will be determined by policy decisions made between 2026 and 2032. That window is short. It is not yet closed.

═══════════════════════════════════════
FUTURE OF WORK: TIMELINE AT A GLANCE (2026–2050)
═══════════════════════════════════════
2026–2027 | WHITE-COLLAR FREEZE
→ Entry-level cognitive roles collapse
→ 7.5M data entry/admin jobs eliminated
→ 78K+ tech layoffs in first 4 months of 2026 alone
→ AI skills wage premium: 56% and rising
→ Four-day week: political debate begins
2027–2029 | BPO COLLAPSE + FIRST POLITICAL RUPTURE
→ Philippine and Indian outsourcing sectors structurally disrupted
→ 1.1M Filipino jobs projected eliminated in 5 years
→ First mandatory four-day week legislation (Europe)
→ Robot tax legislation gains traction (EU, South Korea)
→ Labour force participation begins structural decline
2029–2032 | HUMANOID ROBOTICS ARRIVES
→ Physical work exposed: logistics, light manufacturing
→ 20M global manufacturing jobs at risk by 2030
→ Tesla Optimus and rivals in commercial deployment
→ Labour force participation rate: ~61% (down from 62.6%)
→ First national-scale UBI pilots (Nordic countries)
→ First sovereign wealth funds funded by AI equity contributions
2032–2037 | AGI THRESHOLD + GREAT RECONFIGURATION
→ AGI: >50% probability in this window (expert consensus)
→ Entire workflows, functions, organisations automated
→ “One-person companies” achieving enterprise-scale output commonplace
→ First genuine (non-pilot) national UBI implementations
→ Working week in advanced economies: 30–32 hours standard
→ EU proposes cross-border AI Labour Displacement Fund
2037–2042 | POST-SCARCITY BEGINS — FOR SOME
→ Nations with early-action policies begin abundance transition
→ Working week in leading economies: 20–25 hours
→ AI access = public utility in advanced economies
→ Global South bifurcation deepens: AI sovereignty vs. AI dependency
→ “Meaning crisis” emerges as significant mental health and social challenge
→ Universal High Income achievable in leading economies
2042–2050 | ABUNDANCE OR RUPTURE
→ 50% of all job activities automatable (McKinsey scenario realised)
→ Work becomes genuinely optional in economies that acted early
→ Human contribution concentrates in care, creativity, governance
→ Three trajectories: Abundance Dividend / Techno-Feudal Split / Global South Abandonment
→ Defining question: purpose and meaning, not income and employment
→ AGI highly likely (90% probability by 2075 per expert survey; possibly much earlier)
═══════════════════════════════════════

PART NINE: THE SCARCITY-TO-ABUNDANCE TRANSITION — AND ITS CONDITIONS
Elon Musk has called it Universal High Income. Sam Altman has called it extreme abundance of intelligence. Dario Amodei has written 15,000 words imagining a world where AI cures most diseases, halts Alzheimer’s, and doubles human life expectancy within 7–12 years of powerful AI being developed. The optimist case is not fantasy. It is a coherent extrapolation of the current technological trajectory.
But abundance without redistribution is not abundance. It is a more efficient form of inequality.
The scarcity-to-abundance transition requires three things arriving simultaneously. First, AI and robotics capable of automating both cognitive and physical production at scale — we are on track for this, unevenly, by the mid-2030s. Second, energy abundance to power that automation — we are making progress, but energy is the bottleneck that even Musk acknowledges: “the transition from a world organized around scarcity to one increasingly shaped by abundance” requires compute, and compute requires power. Third, governance structures to distribute the proceeds — this is the piece that is most dangerously absent.
The technology is advancing exponentially. The governance is advancing linearly, at best. The energy transition is somewhere between the two. The gap between technological capacity and institutional wisdom is the central crisis of our moment — and it is widening, not narrowing.
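The widening gap described above is a simple consequence of compounding, and a minimal sketch makes the arithmetic concrete. The 6.5-month doubling period echoes the capability-doubling cadence cited in the scorecard below; the 10%-per-year linear rate for institutional capacity is a purely illustrative assumption, not a measured value.

```python
# Illustrative sketch only: contrasts exponentially compounding AI capability
# with linearly growing institutional capacity. The doubling period (6.5
# months) and the linear rate (10% of baseline per year) are assumptions
# chosen for illustration, not empirical figures.

def capability_multiple(years: float, doubling_months: float = 6.5) -> float:
    """Capability relative to today after `years` of steady doubling."""
    return 2.0 ** (years * 12.0 / doubling_months)

def governance_multiple(years: float, annual_gain: float = 0.10) -> float:
    """Institutional capacity relative to today, growing linearly."""
    return 1.0 + annual_gain * years

if __name__ == "__main__":
    for years in (2, 4, 6):
        cap = capability_multiple(years)
        gov = governance_multiple(years)
        print(f"after {years} years: capability x{cap:,.0f}, "
              f"governance x{gov:.1f}, gap x{cap / gov:,.0f}")
```

On these assumptions the gap between the two curves reaches two orders of magnitude within four years and three within six, which is the arithmetic behind the claim that the policy window is short.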
Henry Ford understood, in 1914, that workers are also consumers. He doubled wages not out of altruism but out of systems analysis: if you destroy purchasing power, you destroy demand, and you destroy the market for your own products. The AI Architects interviewed by Business Insider stated it plainly: “The capitalists celebrating the productivity gains are not realising that without consumption, there is no economy.” You can automate everything. You can fire everyone. And then you have no one left to buy what you made.
This is not left-wing critique. It is basic economic systems analysis. And the fact that it needs to be stated in 2026 — more than a century after Ford grasped it — tells you something important about the quality of strategic thinking currently guiding the deployment of artificial intelligence.

═══════════════════════════════════════
THE SCARCITY-TO-ABUNDANCE SCORECARD
═══════════════════════════════════════
THREE THINGS NEEDED FOR ABUNDANCE TO MATERIALISE:
1. TECHNOLOGY (AI + Robotics)
Status: ON TRACK
Evidence: Coding at near-100% human baseline; PhD-level science capability;
humanoid robots at $9K/unit; AI capabilities doubling every 6–7 months
Timeline: Physical automation at scale by late 2020s; AGI potential by 2030s
2. ENERGY
Status: AT RISK OF BEING THE BOTTLENECK
Evidence: AI compute requires massive power; data centre energy demands
tripling; renewable transition accelerating but lagging AI growth
Gap: AI investment ($581B in 2025) vastly outpaces energy infrastructure
Timeline: Serious constraint through at least 2030; fusion remains uncertain
3. GOVERNANCE (Redistribution + Policy)
Status: DANGEROUSLY BEHIND
Evidence: No international AI levy; UBI only in pilots; no Global South
mechanism; robot tax only proposed (not enacted); Stanford HAI reports
documented AI incidents rising 55% YoY with safety frameworks lagging
Gap: Technology advancing exponentially; governance advancing linearly
Timeline: Critical decisions must be made 2026–2032 or window may close
═══════════════════════════════════════

PART TEN: THE QUESTION WE MUST ANSWER
The AI Architects, the godfathers, the Stanford data, three decades of my own work advising organisations on technology strategy — all of it converges on the same question, which is the defining challenge of our generation.
We have built something extraordinary. A machine that can reason, create, analyse, code, diagnose, design, and increasingly act in the physical world — improving at a rate that compresses decades of prior technological progress into years. We built it driven by competitive pressure, capital incentive, and genuine scientific curiosity. And we built it without first building the social architecture to absorb it.
One of the AI Architects interviewed by Business Insider stated the central fact of our moment without flinching: “This is the very first time in ever that the episode of history where humanity was the smartest being on the planet ends.”
Full stop. No hedging. No “perhaps.” This is stated as a historical fact about an ongoing transition — the close of a chapter that has defined all of human civilisation for 300,000 years. Every religion, every philosophy, every political system, every economic structure has been built on the foundational assumption that humans are the apex cognitive entity. That assumption is ending.
The question is not whether the disruption happens. The question is whether human institutions can respond fast enough to prevent the disruption from becoming a civilisational rupture.
I remain, with full awareness of the risks, an optimist about what is possible. I believe the path toward abundance is real, accessible, and architecturally feasible. I believe we can build a world in which machines do the work and humans do the living — with dignity, purpose, and meaning preserved.
But that world does not happen automatically. It happens because of policy choices, governance decisions, and redistribution mechanisms that must be built with deliberate urgency in the next six years.
The engine has been built. The question is whether we build the governance architecture fast enough to prevent the engine from running away with the people we are supposed to be serving.
The godfathers built the engine. Nobody built the brakes. And the car is already moving.
The Fork is here. Choose wisely.

Connect on LinkedIn | genesishumanexperience.com | @mentalmarketer
