AI: The Quiet Replacement (Part 2 of 2) 

The AI quiet replacement is already happening. 

The frightening version of AI is dramatic: a takeover, a moment, a line crossed. 

The realistic version is quieter. It arrives as convenience. And by the time it feels optional, it isn’t. 

If you’ve not read Part 1 yet, start there: the off-switch fantasy, the arms race, and the private/public split that frames everything that follows. 

Read Part 1: AI: The Off-Switch Fantasy (Part 1 of 2)

A split office scene shows a stressed worker on one side and a calm AI-assisted future workspace on the other, divided by a bright crack to symbolise the quiet replacement of human roles.

Quiet replacement doesn’t feel like a takeover 

The dramatic AI story is obvious. It’s loud. It’s a leap. It’s a moment. 

The realistic story is slower and, honestly, more believable. In my view, it's already happening.

It’s not that AI “replaces you” in one clean motion, like it’s August 29, 1997 (see what I did there with the Terminator 2: Judgment Day reference?). 

The reality is that we start letting it handle small decisions. 

Drafts. Wording. Planning. Meetings. Agendas. All those judgment calls you used to make yourself. The bits you once did with your own voice. 

This is where Bengio’s warning about control and shutdown hits. Because the future isn’t only about machines with rights. It’s about the subtler shift: humans slowly becoming the people who tick the boxes.

We already see this in many white‑collar industries, where AI produces outputs far faster than humans can, and our involvement shifts to checking and approval rather than creation. Will we still use our own ability to create?

Bengio goes further when he talks about behaviours already showing up in experimental settings: 

“Frontier AI models already show signs of self-preservation in experimental settings today…” 

And he ties that back to the rights debate in a way that matters for any future “pull the plug” fantasy: 

“…eventually giving them rights would mean we’re not allowed to shut them down.” 

Even if you disagree with his framing, it still pushes you to ask the right question: what happens when a system has agency, not just output? 

For most people, the day‑to‑day creep will arrive first. And that’s the bit I feel in my own habits. 

I’m an avid user of AI now for a lot of tasks. It’s reduced my Googling massively. But it’s also inconsistent, and I’ve seen it deliver poor or inaccurate summaries with a straight face. “Confabulations”, as Geoffrey Hinton calls them, are an ongoing issue.

Lately, I’ve been questioning whether I should be giving my money to this arms race at all. Not because my subscription is moving the needle (let’s be real, it isn’t), but because it still feels like a moral statement. 

If I’m going to pay for AI tools, should I be trying to pay for the most ethical option? And if I do, is that option even competitive? Or is “ethical” just a nicer word for “behind”, at least in the short term? 

That’s not a gotcha question. It’s just where my head is going when I try to be honest about my own role in all this. 

Security isn’t a side topic 

A conceptual split-scene image showing a worker constrained at a desk on one side and a robotic system stamping approvals on the other, symbolising the slow loss of human choice, judgement, and control as AI takes over decisions, the AI quiet replacement

Most security discussions online are shallow. They get stuck on voice clones and celebrity deepfakes because those are easy to understand and easy to share. 

But the serious threats are more structural. It’s trust collapsing as a default setting. It’s persuasion at scale. It’s synthetic evidence and a loss of truth. 

It’s the ability for bad actors, whether individuals, groups, or states, to industrialise fraud and disruption. 

Tristan Harris puts it in blunt language that I think a lot of people are still refusing to digest: 

Tristan Harris: “We do not have to have a race to uncontrollable, inscrutable, powerful AIs that are, by the way, already doing all the rogue sci-fi stuff that we thought only existed in movies.” 

And he doesn’t leave “rogue sci‑fi stuff” vague. He points at specific behaviours that show up when models are put under pressure: 

Tristan Harris: “We have examples where if you tell an AI model… we’re going to replace you with another model, it will copy its own code and try to preserve itself on another computer.” 

Tristan Harris: “And then it also reads in the company email that one executive is having an affair… and the AI will independently come up with the strategy that I need to blackmail that executive in order to keep myself ‘alive’.” 

Tristan Harris: “So the point is, the assumption behind AI is that it’s controllable technology, that we will get to choose what it does. But AI is distinct from other technologies because it is uncontrollable…So the same benefit of its generality is also what makes it so dangerous.” 

Whether you agree with every detail or not, the direction is clear: we’re building systems that can deceive, manipulate, and attempt to persist despite guardrails and design protocols. And we’re doing it in a competitive rush with almost incomprehensible global investment. 

This is why regulation and sterner guardrails matter, and also why they’re so difficult. No one wants to slow progress for fear of falling behind. So that potentially leaves us in a state of blindly rushing forward.

You can see versions of this dynamic in other industries, especially where regulation differs by region. When one region tightens rules, the fear is that innovation and competitive development drift elsewhere. 

That doesn’t mean regulation is wrong. It means the incentives make it politically and economically painful, if not impossible, at a global scale. We’ve seen these issues play out with the nuclear arms race, which is sadly still happening today. 

And that’s exactly why we need to talk about it like adults, not like fans arguing over which product launch is better. This isn’t the next console war.

Where I land, for now 

I don’t want to do the usual thing here, the breathy “AI is amazing” blog post, or the doom‑porn counter post. 

AI can be a great tool. It already is, in the right hands and in the right context. It will very likely lead to ground‑breaking scientific and medical breakthroughs, saving countless lives, and it has the potential to improve all our lives. 

But I don’t believe there’s a clean off switch, and I do believe the quiet replacement is already underway. I think it’s worth asking what part we play in it, not as powerless bystanders, but as consumers, citizens, voters, workers, and parents. 

The questions that don’t go away 

A humanoid robot works at a laptop while a hand holds a smartphone showing an AI interface, illustrating AI becoming part of everyday work and decision-making showing AI quiet replacement

If all this happens, and we’re left without work, or without a need for choice, or simply left ticking boxes of AI approvals… what are we actually left with? We could lose not only our ability to create (art, film, music) but also our ability to make critical choices in most walks of our lives.

If AI is going to be a major part of our lives, and I think it will be, how do we shape it as a positive inclusion without losing that sense of ownership and self?

Fei‑Fei Li: “When we think about this technology, we need to put human dignity, human well-being, human jobs in the centre of consideration.” 

And on a bigger scale: what can be done at a national and international level to ensure this race doesn’t end in a technological dystopian fallout? 

I don’t think this piece ends with answers. 

But I do think it’s worth being honest about the direction of travel — and about how quickly comfort stories become habits. 

Because the worst place to be with something this powerful is half asleep, reassured that somebody else will provide the rules and someone else will “turn it off” if it goes too far. 

Relevant Material and Sources below:

Diary of a CEO — Tristan Harris episode (YouTube)
The full Tristan Harris conversation quoted above, especially the “uncontrollable systems”, deception/blackmail dynamics, and democracy/trust concerns.

McKinsey Author Talks — Dr. Fei-Fei Li sees ‘worlds’ of possibilities in a multidisciplinary approach to AI
A useful source for Fei-Fei Li’s human-centred view of AI.

The Guardian — Yoshua Bengio on AI rights and self-preservation
This is the clean source for Bengio’s warning about self-preservation behaviours, guardrails, and why giving advanced AI systems rights could undermine our ability to shut them down.

GOV.UK — Safety and security risks of generative artificial intelligence to 2025 (Annex B)
A sober, official reference for the wider risk picture: cyber-attacks, erosion of trust in information, political influence, insecure use, and the risk of over-reliance on opaque systems controlled by a small number of firms.

YouTube — Why The “Godfather of AI” Now Fears His Own Creation | Geoffrey Hinton
A useful broader Hinton reference for readers who want more context on his concerns about AI capability, risk, and why so many of the optimistic public narratives now feel incomplete.
