AI: The Off‑Switch Fantasy (Part 1 of 2)

This is Part 1 of a two-part piece. Part 2 picks up from the “quiet replacement” and brings it down to everyday life, security, and the questions that don’t go away.  

[Image: a group of leaders standing and watching conceptual imagery of AI and AGI]

The AI off-switch fantasy is comforting, but I don’t think it survives contact with reality. 

This isn’t a guide, and it isn’t a jump on the doom bandwagon. It’s a set of warning shots from some of the world’s leading experts on AI, followed by a straight conversation about what we’re actually building. 

Geoffrey Hinton: “The idea that you could just turn it off won’t work.” 

Yoshua Bengio: “People demanding that AIs have rights would be a huge mistake.” 

Sam Altman: “Jobs are definitely going to go away, full stop.” 

Fei‑Fei Li: “The most important use of a tool as powerful as AI is to augment humanity, not to replace it.” 

Geoffrey Hinton: “It makes me very sad that I put my life into developing this stuff and that it’s now extremely dangerous and people aren’t taking the dangers seriously enough.” 

Mustafa Suleyman, on the uncomfortable part (that we’re building at speed without a solid plan for control and containment): “Right now no one has such a plan.” 

Arthur Mensch (Mistral AI), calling out the shape of the public debate — suggesting some of the loudest “extreme risk” talk can function as misdirection: “These are mostly distraction tactics… so these speeches are largely diversions, very deliberately crafted.” 

Demis Hassabis: “This technology could be the equivalent of the industrial revolution times 10, and 10 times faster.” 

If a few of those names are unfamiliar, Geoffrey Hinton is a pioneering deep learning researcher (and former Google researcher) often dubbed the “godfather of AI”; Yoshua Bengio is a Turing Award–winning AI researcher; Sam Altman is the CEO of OpenAI; Fei‑Fei Li is a leading AI academic known for human‑centred AI; Mustafa Suleyman is an AI entrepreneur and DeepMind co‑founder; Arthur Mensch is the CEO of Mistral AI; and Demis Hassabis leads Google DeepMind. 

There’s a particular kind of comfort that comes from pretending we’re still in charge. 

Not in a smug way. In a human way. The same way we tell ourselves we’ll sort out our health next month, or we’ll stop doom‑scrolling once things calm down, or we’ll deal with the big decisions when they’re forced on us. 

With AI, that comfort often takes the shape of a simple idea: if it gets dangerous, we’ll turn it off. 

[Image: a Terminator-inspired metal endoskeleton with glowing red eyes stands in a burning warzone, symbolising fears of AI escalating beyond human control]

But is that really going to be possible? 

The quotes above are just a sample of what some of the leading experts and players in AI are saying, and not all of it is optimistic, to say the least. 

Let’s be honest straight away: I just don’t believe it will be anywhere near as simple as the so-called AI off-switch fantasy. Not because I want to live in a permanent panic state, and not because I think we should all start talking like we’re trapped in a film trailer. But when you look at how this is actually playing out — who is building what, why they’re building it, and how many players are now in the game — the “off switch” starts to feel less like a plan and more like a bedtime story. 

We really are in a stage of unprecedented technological and societal evolution.  

The arms race is the real story 

I think we’re genuinely in an arms race, not just the next standard step in technological development. 

With AI leaders pushing towards the goal of controlling a suite of AGI‑level capabilities — and, in doing so, achieving a global economic stranglehold — they may end up damned to the consequences. 

That sounds dramatic written down, but it’s hard to describe it any other way once you accept the incentives. 

We’ve got so many players, from across the globe, that there can’t be one “turn it off” switch. Even if a single company wanted to slow down, you’d still have competitors, other nations, open models, venture funding, defence interests, and prestige all driving in the same direction. 

And here’s the bit I don’t think gets said enough: if any leader reaches something they believe is close to AGI, are they really going to bow out and take a hit? 

I very much doubt it. 

If they did, it would be financial suicide for them and the company attached to their name — and it would hand the “win” to someone else who won’t hesitate. 

There’s no hiding from this now. AI will become intrinsically woven into our lives — if it hasn’t already crept in and swept through them. 

The private conversation problem 

[Image: public vs private AI statements contrasted, with the polished public side set against a broken-down reality on the private side]

This is where things honestly get uncomfortable, not because it proves anything on its own, but because it hints at a split in reality. 

In two separate Diary of a CEO interviews — first with Geoffrey Hinton, and later with Tristan Harris — Steven Bartlett keeps coming back to the same idea: there’s what gets said on stage, and then there’s what gets said in private. Tristan Harris is a technology ethicist and former Google design ethicist, best known for co‑founding the Centre for Humane Technology. 

In the Harris interview, Bartlett describes the pattern in a way that’s hard to forget: 

“It’s usually someone that I trust… who at a kitchen table says, I met this particular CEO.” 

He claims this is not some random rumour chain either, but the kind of thing that comes from people with direct access whom he explicitly trusts: 

“…I was speaking to a friend of mine, a very successful billionaire, knows a lot of these people…” 

And then he lands the core point: 

“But then privately what I hear is… exactly what you’ve said…” 

And what Harris is actually saying in that moment isn’t vague tech anxiety — it’s consequences. 

Tristan Harris: “We’re heading for so much transformative change faster than our society is currently prepared to deal with it.” 

Tristan Harris: “As we’re racing, we’re landing in a world [of] rising energy prices, major security risks, and then there’s mass joblessness without a transition plan.” 

And when he talks about “security risks”, he’s not being poetic. 

Tristan Harris: “This new AI that we’re dealing with can hack the operating system of humanity.” 

Tristan Harris: “It can hack code and find vulnerabilities in software.” 

Tristan Harris: “If you imagine that now applied to the code that runs our water infrastructure, our electricity infrastructure, we’re releasing AI into the world that can speak and hack the operating system of our world.” 

Tristan Harris doesn’t flinch away from it. He frames it as a genuine mismatch: 

“There’s a different conversation happening publicly than the one that’s happening privately.” 

Tristan Harris: “But we didn’t consent to have six people make that decision on behalf of 8 billion people.” 

Now, I want to be careful here. I’m not presenting this as a courtroom‑grade fact. It’s still second‑hand. It’s still filtered. It’s still an individual talking about what he’s heard from people he trusts. 

But I also don’t think we should dismiss it. Because even the possibility of a private/public split changes how you read the rest of this. If leaders are more pessimistic in private, that suggests the calm public reassurance is part of the product. 

If they’re not pessimistic, and this is all hype and theatre, that’s arguably just as worrying — because it suggests we’re building civilisation‑shaping systems with little consideration of the true potential impact. 

Either way, it nudges you towards the same conclusion: don’t let yourself be lulled. 

A quick, correct detour on AGI 

[Image: an AI hive mind hovering over and controlling the world around it, electricity pulsing through and around it]

This is where the language gets messy fast, so I’m going to keep it clean. 

AGI — artificial general intelligence — is not a single, universally agreed-upon finish line. In practice, different people use it to mean different things. A common framing (including in OpenAI’s own public charter language) is that AGI refers to highly autonomous systems that outperform humans at most economically valuable work. 

I don’t think anyone can credibly put a date on it. 

Two years? Twenty years? Something in between? The full picture is still murky, and I’m not going to pretend I know when even the leading experts aren’t sure. 

But here’s the thing: we don’t need full AGI for the world to change in ways that feel irreversible. 

We just need systems to become good enough, cheap enough, and trusted enough that the handover becomes normal. 

That’s the quiet replacement that’s already happening. 

Coming soon! Continue to Part 2: AI: The Quiet Replacement (Part 2 of 2)

Part 2 is where this stops being abstract and starts being personal — the quiet replacement, the security risks, and the questions that don’t go away. 

Related reading (Retro Tech Tonic) 

AI and Gaming Future: When games start making themselves

AI in Construction – This article looks at the impact of AI on the construction industry.

AI in education: reshaping the future of learning – AI has been part of education for longer than most people realise — just usually in the background.

Some lighter reading – Why Do We Game? The Honest Reasons I Keep Coming Back – Why do we game? I’ve been thinking about it properly lately, and the answer isn’t one neat thing — it’s a mix of reasons that change with life.

External Sources:

If you want to go deeper on any of the claims in this piece, I’ve linked the primary sources I leaned on — plus a few “who’s who” pages for the people being quoted.

Tristan Harris — official site
Tristan Harris is a tech ethicist (ex-Google) focused on the societal risks of persuasive and uncontrollable systems. His writing and talks give context to the “private vs public” AI conversation.

Geoffrey Hinton — University of Toronto homepage
Geoffrey Hinton is one of the foundational deep learning researchers. This is his long-running academic homepage with papers and background.

Steven Bartlett — enquiries/contact page
Steven Bartlett’s official enquiries page (business, podcast, press). Useful as a straight “who’s who” reference for the host.

Diary of a CEO Podcast episode discussed here – Check it out!

OpenAI Charter (AGI definition + guiding principles)
Useful for grounding the AGI discussion in a clear, sourceable definition and the stated principles behind how OpenAI says it should be handled.

Mustafa Suleyman on containment and incentives (WIRED)
A strong, readable reference for the uncomfortable “we’re moving fast without a real containment plan” argument — and why incentives make braking difficult.
