
It’s the Humans, not the Algorithms.

We’ve not yet moved past our first contact with #AI. Web2 was centralised and brought to us by BigTech – the FAANGs, remember? This was an era of #Trust erosion – much like climate change – at its zenith. Everything came to a head when the Pandemic hit. Humans were so connected, yet never lonelier. The algorithms (directed by the few human overlords) solved for attention – to maximise and exploit human attention spans like batteries, that is. Sounds like a scene from Matrix 1, when Neo first woke from his pod after taking the Red Pill. Yes, social media was our first encounter with #AI. We never really recovered from that – we just swept things under the carpet, gnawing away at #Trust levels amongst humanity. Then in 2022 we got the ChatGPT moment, and this #Trust tsunami has levels depleting exponentially, alarmingly. To the point today: we #Trust machines more than humans. Than ourselves even. It’s a lost cause if you think about it. Silicon Valley has plonked billions of dollars into developing algorithms aimed solely at your daughter’s brain. Well, after all, the first-gen brush with AI solved for attention (your children’s attention spans).

History: I’m stoked that Computer Science lecturers of the early 90s (giving away my age) like Hinton have won the Nobel Prize. It’s a really significant milestone – demarcating the renaissance of AI – a far cry from the wintry conditions of the 80s and 90s. AI itself is not new – we’ve all heard about the two-month Dartmouth summer project back in the 50s. How far we’ve come. How far humanity has come.

My early years as a practitioner (Computer Science wasn’t nearly as fancy a subject to read at Uni then as it is now) threw me into the deep end, working with statistical and predictive models to maximise profit and reduce cost. The most opportune areas were found in the realm of marketing and advertising, emboldening algorithmic pioneers like myself to take on Wanamaker’s now-famous lament: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” Oh, those were the times a’ight. We built predictive models, we trained and retrained them, we prepared data sets and made sure they didn’t skew. In fact, we made drag-and-drop GUI interfaces (archaic much?) for marketers to do segmentation and audience selection, and birthed the first generation of marketing automation platforms. Prehistoric-sounding names like Unica, SAS and Teradata reigned supreme back then.
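For the technically curious, here is a minimal sketch of the kind of response-propensity model that sat behind those audience selections, rendered in modern Python/scikit-learn for convenience. The data, column names and decile cut-off below are invented for illustration, not what we actually ran back then.

```python
# Toy response-propensity model for campaign audience selection.
# All data and column names below are synthetic/illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
features = ["recency_days", "frequency", "monetary"]
customers = pd.DataFrame({
    "recency_days": rng.integers(1, 365, n),
    "frequency": rng.poisson(3, n),
    "monetary": rng.gamma(2.0, 50.0, n),
})
# Synthetic 'responded to last campaign' label, skewed the way real data is.
logits = -2.5 - 0.004 * customers["recency_days"] + 0.3 * customers["frequency"]
customers["responded"] = rng.random(n) < 1 / (1 + np.exp(-logits))

# Train the propensity model, then mail only the top decile by score:
# answering Wanamaker's 'which half is wasted' one campaign at a time.
model = LogisticRegression(max_iter=1000).fit(customers[features], customers["responded"])
customers["propensity"] = model.predict_proba(customers[features])[:, 1]
audience = customers.nlargest(n // 10, "propensity")
print(f"Selected {len(audience)} of {n} customers for the campaign.")
```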

Then in the early 2000s we saw the dawn of the FAANG fiefdoms – search was reinvented, and we became the product. Yes, our data – rather, everything about us – was productised and sold to advertisers. Humanity was inundated with ‘noise’ and the search for truth became a whole lot harder. The algorithms of that era solved for attention; the objective function to maximise was something called ‘engagement’. If not for the teen suicides and the glacial erosion of democracy, it would all have been passed over, swept under the socioeconomic rugs and fabric of society. Australia, Down Under, is leading the way, passing regulations banning social media for under-16s. Kudos to that! I can’t stress enough: we need global AI regulations. We need East and West to authentically collaborate. Right now it’s akin to a prisoner’s dilemma – the new arms race of our times (the nuclear build-up of the Cold War was the 80s analogue). Not pretty. Not pretty at all.
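To make “solving for engagement” concrete, here is a toy sketch of what that objective function amounts to in code. The signals, weights and post fields are entirely invented; real feed rankers are vastly more complex, but the shape of the incentive is the same: sort by whatever keeps you scrolling.

```python
# Toy engagement-maximising feed ranker (illustrative only).
from math import exp

def predicted_engagement(post: dict) -> float:
    # A hypothetical logistic score over a few invented engagement signals.
    z = (2.0 * post["predicted_clicks"]
         + 1.5 * post["expected_watch_minutes"]
         + 1.0 * post["share_propensity"])
    return 1 / (1 + exp(-z))

def rank_feed(posts: list[dict]) -> list[dict]:
    # The objective function in one line: maximise expected engagement.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    {"id": 1, "predicted_clicks": 0.1, "expected_watch_minutes": 0.5, "share_propensity": 0.2},
    {"id": 2, "predicted_clicks": 0.9, "expected_watch_minutes": 2.0, "share_propensity": 0.7},
])
print([post["id"] for post in feed])  # post 2 wins the attention auction
```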

And now, we’re grappling with the most powerful technology humans have ever created; our last invention. As East and West race for #SuperIntelligence – the One Ring to rule them all – our fates, the fate of humanity, lie in the hands of a handful.

For previous tech evolutions, we’ve had BigTech (Silicon Valley) leading the charge, and it’s well understood how Tech adds trillions to the global economy while impacting and changing lives (for the better). There’s always duality in any Tech – but we can’t afford a single mistake with #AI. That’s for sure. There’s finality in that. I argue in my book (Genesis: Human Experience in the Age of Artificial Intelligence) that we’ve only 5-7 years (tops) left to make good on the future of abundance we’ve been promising each other. In these precious few years, we’ve got to put in place, to the best of our ability, all the regulations, safeguards, ethical boundaries and collaborative frameworks that increase humanity’s probability of a positive future. We’re running out of time!

History repeats itself. We’re just too greedy. See how the EVs supplanted the ICE in the auto industry? The German bigs saw that from a mile (decades) away. Yet our present-forward inertia, maybe our human nature, was to maximise profits till the very ‘end’ – the critical red line that gave the industry away to EV icons such as Tesla. We’ve seen this before in Prof Christensen’s illustration of how the lower end of the steel industry gets displaced by cheaper, alternative entrants – who eventually work their way up and usurp the entire industry.

There are analogies aplenty. With AI, the promise of profits is too hard to resist – Silicon Valley is finding it irresistible. OpenAI’s pivot to for-profit is glaring. For every 10 engineers working on getting to AGI, there’s 1 thinking about safety and regulation. Not pretty. Alarming.

Humans must (continue to) be in the loop. We mustn’t start getting lazy and take our hands off the perennial wheel – no matter how easy things become, no matter how tempting when all our max/min objective functions are solved for. Literally, as Picasso put it, “everything you can imagine is real”. This is both exciting and dangerous at the same time – as our friendly neighbourhood Spider-Man quipped, “with great power comes great responsibility”. True indeed, isn’t it? Think of the duality in scenarios such as war vs healthcare (Demis won a Nobel Prize for AlphaFold, after all), deepfakes and misinformation vs education, electioneering vs creative expression, and the list goes on…

Actually, there’s no secret sauce to this. Humanity just needs to come together. We need to unite. We need to embrace and show AI the very best we can be. The last time we did come together, sort of, was when we dealt with the nuclear arms race of the Cold War. A disheartening example, I know – but it’s the closest analogy we have to AI. It’s stark, but for the first time in human history we’ve created a technology that can make its own decisions. And as far as human temptation goes, we tend to want to relinquish control, agency (and #Trust) to the machines – just look at what we did with the over-financialisation of the money markets. It’s in our nature! Here’s a Dantean thought: when (not if) our financial markets are made overly complex by AI, who’s to blame when a meltdown happens? Would we mere humans even comprehend the algorithms powering future trade between nations?

Today, companies like Anthropic are keen proponents of the RSP, or Responsible Scaling Policy – in a similar vein, OpenAI’s 5 levels of ‘Super AI’ tell us we’re knocking on the door of Level 3 today, as of Nov ’24. The release of their (now) latest o1 reasoning model has shifted the goalposts, bringing us to the foot of yet another scaling curve. Ha – just when we thought things were ‘slowing down’, as throwing (more) compute and (more) data at the problem had us believe…

And today, we’re still using red-teaming and RLHF, or reinforcement learning from human feedback, to keep development in check. We have to keep at it, no matter what! It’s so tempting – and the flesh is weak – to give in to self-improvement and recursive cycles, where AI enhances and improves its own code. After all, in the echo chambers of the most powerful Titans of industry, the voice “.. China is doing it anyway” reverberates. Ricochets. This is the new arms race, the new Cold War of our times. Except, except we’re dealing with something that makes the nuclear ☢️ build-up of the 60s seem like child’s play. Humanity has only one (1) shot at this; there are no replays or repeats. Once we get to AGI – and ASI very quickly after – the ‘advantage’ will be insurmountable. Whoever achieves AGI and ASI first will reshape the world order, and potentially the galactic, then universal, order.
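For readers who haven’t met RLHF up close, here is a toy sketch of its core ingredient: a reward model trained on human preference pairs with a pairwise, Bradley-Terry style loss. Everything below is a stand-in; real systems put a large language model where this two-layer scorer sits, and the random tensors merely stand in for embeddings of ‘chosen’ vs ‘rejected’ responses.

```python
# Toy reward-model training step, the heart of RLHF (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one scalar reward per response

# Random stand-ins for embeddings of human-labelled response pairs:
# for each prompt, 'chosen' was preferred over 'rejected' by a labeller.
chosen, rejected = torch.randn(64, 16), torch.randn(64, 16)

model = ToyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    # Bradley-Terry pairwise loss: push reward(chosen) above reward(rejected).
    loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained reward model then scores candidate outputs, and a separate
# RL step (e.g. PPO) nudges the language model towards higher-reward text.
```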

As I argue in my book, we’ve got 5-7 years (tops) to get this right – to set humanity on the right trajectory towards a future of abundance. We must believe and have faith that the indomitable human spirit that has gotten us this far will, hopefully with AI’s help, give us escape velocity to the stars. If I may: in Star Trek, the popular sci-fi series, it’s when humans discover warp travel that the Vulcans come a-knocking. In my book, this is the Star Trek scenario. The other (Mad Max) scenario? Not pretty.
