AI in 2023: a Christmas pantomime in three acts

Helping “normal” people make sense of AI in 2023

Photo of the mad Queen Lizzie the First and Blackadder, from the BBC comedy Blackadder
King Tusk in drag (L) playing Queen Elizabeth I. Blackadder (R). Copyright BBC 2013

Prelude to Acts I, II, III

Much has happened in 2023 since my last Medium article in 2022. Amongst the continuing hype of AI; the deepfake disinformation agenda; the re-emergence of the threat of superintelligent AI (cf. Bostrom) because a Google engineer detected sentience; Twitter becoming X with Grok; Hinton resigning from Google to speak freely; Facebook becoming Meta to enslave us in their subscriber virtual productivity and fulfilment la-la land; LLMs like GPT-4 running out of data; Australian cardboard drones being used in the Ukraine conflict; Starlink being turned off, affecting Ukraine’s push to reverse enslavement; the media’s human existential angst agenda to drive up advertising dollars; and all the rest, there is a dire need for us, Sam Altman’s “normals”, to unpick the AI discourse and understand its perspective better, so we may all hope to enjoy our Christmas pudding in peace.

Forthwith, let’s introduce our six primary protagonists for AI in 2023: A Christmas Pantomime in Three Acts.

King Tusk — technology “royalty” with a track record of being insanely unpredictable, but backed up by sovereign engineering might and ill-gotten gains, much like Queen Elizabeth I in Blackadder.

The AI Godmother and Godfather Pizza Makers — much like the fully loaded Three Amigos pizza, a favourite of King Tusk to consume or “Grok” (as Robert A. Heinlein’s Stranger in a Strange Land Martian would say). And, as Michael Caine would say, “now, not many people know that.”

The Mandarins — exiled by King Tusk to a separate Euro continent surrounded by their own interesting ideas of freedom (King Tusk: “Pshaw! Freedom doesn’t exist in King Tusk’s land of the brave and the home of the free!”), they plot ever more complicated towers of spaghetti legislation without actually doing anything. But with 100% certainty proven by the LHC, a garland of garlic does protect against Vampires.

The Scholars — excited by the prospect of an end to obscurity and boredom, by someone finally asking them a question about an obscure topic (AI), and feeling an inflated sense of importance at a once-in-a-lifetime opportunity to secure research funding to prevent global catastrophe (or spur it on).

The DeepPocketed (DeepMind, Billy Goat, Sam I Am) — while King Tusk sleeps, the extraordinarily well-funded DeepMind savants continue to thwart his global domination plan in favour of a sleeper model to do that same domination, drowning the AI narrative with the most non-peer-reviewed AI content on arXiv.org; while Billy Goat dances around with a corncob pipe in his mouth, rejoicing in the freedom our new AI toys will give us, so long as everything you know and love is recorded in a computational system; and Sam I Am plays tiddlywinks with the future.

The Blackadder — sadly missing in action, perhaps killed by King Tusk, or not yet born. Think a gender-neutral 007 and Austin Powers meets Uma Preman. Able to navigate and solve global disasters with aplomb.

Act I — The Ethics and Safety Yellow Card

It was very interesting to see many governments meet recently at the UK AI Safety Summit. Of course The Mandarins would do this. This is their role: to foresee and shepherd the populations under their benign sovereign governance, belying the fact that this is of course just a power-relations play against King Tusk and The DeepMind nations, and a further attempt to put fear into the populace, further justifying their governmentality.

Meanwhile, The Godmother Pizza Maker, Fei-Fei Li, in an expression of the benign nature of scientists (remember the Manhattan Project?), asserts that she and her fellow Pizza Makers need more and more access for the next generation of Pizza Makers, because they’re running out of pizza dough (human-generated data) to train “SkyNet”.

The Godmother Pizza Maker asserts that, “Hey, my next generation of pizza makers will have multi-disciplinary concerns and be trained in ethics”. As if any blue-blooded notable scientist ever cared about ethics before the fact! It’s all about the search for the next solution to the next problem and the thirst for new knowledge.

Act I, occurring as it has this year, has been playing out for many centuries, and is the same as the hacker versus the hacked; the white hat versus the black hat; the innovator versus the regulator. It is essentially saying, “Hey, while we The Mandarins know that this looks dangerous, we’ll let it play out for a bit and then, depending on who cares, we’ll look like we’re doing something about it (for the polls) while the actual conversation happens behind closed doors.”

As this is a Pantomime, you would expect Good versus Evil to play out here as well, but that is not the right way to look at this power relationship.

What Act I tells us is that the Pizza Makers have made a Pizza. Then they delivered the Pizza to King Tusk. The Mandarins got upset that they didn’t get to taste the Pizza first. So The Mandarins said the Pizza is dangerous and needs to be regulated.

Good is not to be equated with Safe. Good is something that will not allow something unsafe to happen at all. To anyone. That which is Good is also not equatable. Ethics, like history, is determined by the victors. Look at the bizarre consequences of Bentham’s greater-good ethics. Similarly with AI: if we play a statistical game, it becomes OK for 5% of humans to be injured if an AI agent provides a benefit for the other 95%. This fallacy sits deep at the heart of what’s wrong with how AI is used.

The lesson of Act I is not to impute Good or Evil to the Pizza. It just is. It might be tasty or yucky, but it has no good nor evil in it. Nor can it perceive such. And we need not patternise it as good or evil, but instead pick the outcomes we are looking for.

Act II — The Ghost of Christmas Past

Lee Sedol, who was famously beaten by AlphaGo, claimed to sense an “entity” in the machine. This is nothing new; look back at The Turk chess-playing automaton, or the fantasy of the Ghost in the Machine.

Fei-Fei Li says that AI is a tool made by humans for humans. Maddy Leach (former program manager at DeepMind for AlphaGo) says AI is humans versus humans, which may be the more accurate lens, as the ultimate goal of capitalism is to make the wealthy more wealthy (FAANG).

The 2018 AI Narratives project is in-depth research by the Leverhulme Centre for the Future of Intelligence and the Royal Society. Part of their research criticises the mainstream narrative of robots (and AI) replacing humans:

The prevalence of narratives focussed on utopian extremes can create expectations that the technology is not (yet) able to fulfil. This in turn can contribute to a hype bubble, with developers and communicators potentially feeding into the bubble through over-promising… Discussion of the future of work is distorted if it focuses only on robots directly replacing humans. Debate needs evidence and insight into the disruptive potential and opportunities created by new forms of business or social networks, as well as attention to the direct impact on particular tasks or jobs. (Page 14)

Is this 2018 report (pre-dating ChatGPT and effective LLMs) arguing that essentially negative narratives inspired by fear and unrealistic expectations unfairly bias the scientists who need to seek funding for further progress? Put in perspective for 2023, look at the Safety agenda in Act I. If The Mandarins have essentially started broadcasting a kind of fear (as we argued, to further their population-control agenda), what are we, Lord Denning’s person on the Clapham omnibus, or Sam Altman’s “normals”, to make of this?

What I believe is actually happening is that the human predilection for superstitious belief is interacting with messages and narratives whose main way of getting an audience is to fan the fires of fear. The Mandarins and King Tusk’s unpredictable rants back up this superstitious belief, The DeepMinds keep investing in and releasing ever more competent algorithms to top up the “hype bubble”, and The Scholars variously clap and applaud, or lament and worry, and try to get airtime with King Tusk, who isn’t listening.

Act III — Waiting for Blackadder

Where do Acts I and II leave us? Do they leave us with a deus ex machina moment, where a cardboard god descends and resurrects the hero to slay the AI dragon? Will they leave us waiting for Godot, or SkyNet? Will we all experience the Wealth of Nations and the elimination of all diseases, plagues and wars? Or live in the Black Mirror for real?

These are all narrative perspectives. We humans make sense of our lives through stories, and the AI story is no different. It is being written and rewritten at pace. It evolves at every step, and none of us know where it will lead. This creates the space for superstition and fear to exist. The fact that The Scholars, The Mandarins and The Pizza Makers are not in control (is anyone?) fuels this fear, and it is perhaps not unfounded.

If you watch the 1965 Oppenheimer interview, the haunted look of a most thoughtful and deliberate man, who took his justification from the military leaders employing Bentham’s greater-good, lesser-evil ethics to murder hundreds of thousands of innocent people, is transfixing.

While this small drama in three Acts is a story in itself, are we not missing a key persona? The Blackadder, who, through wit, speed and luck, explains and extricates themselves and others from the sticky wicket that the excessive and insane demands of King Tusk have created?

While King Tusk rants erratically and enacts his Blofeld-esque domination plans; while The Mandarins do a late-to-the-game land grab; while the Pizza Makers, DeepMinds, Billy Goats and Sam I Ams simply churn out more AI Pizza because they can; only some of The Scholars lament, while we all wait for Blackadder to come and make sense of the bizarrely confusing bricolage of a mess that is AI in 2023.
