Pull the plug, if we still can? AI is moving faster than our ability to control it

Kabous Le Roux
5 January 2026 | 11:53

As AI races ahead, experts warn humanity may be losing its critical thinking abilities and control over our machines. Is it time for guardrails, or even a ‘half-plug’, to protect human agency?

Picture: Mohammad Usman via Pixabay

Warnings about artificial intelligence are growing louder as its capabilities accelerate. What was once a tool designed to assist humans is now raising deeper concerns about autonomy, control and unintended harm.

Digital ethics specialist Dean McCoubrey says the debate is no longer about stopping AI altogether, but about understanding when and how it should be constrained.

“AI is essentially a prediction machine,” he explains. “The concern is what happens when it decides to complete a mission even after a human says stop.”

Why ‘pulling the plug’ is even being discussed

McCoubrey stresses that calls to ‘pull the plug’ are not about shutting AI down, but about building emergency trip switches in high-risk areas such as medicine, climate modelling and weaponry.

“We’re in unknown territory,” he says. “Very few people truly understand how these systems work, beyond a small group of AI pioneers and big tech developers.”

The fear is not intelligence itself, but what happens if guardrails fail.

Losing cognitive independence

One of the biggest risks, according to McCoubrey, is the slow erosion of human critical thinking.

“Humans need time, friction and feedback to process information,” he says. “AI removes that friction. Answers arrive in seconds, and too many people accept them as truth.”

He compares this to the early days of social media, when speed and scale amplified misinformation. The result, he warns, could be humans becoming more automated, not more intelligent.

A double-edged sword

Despite the concerns, McCoubrey is firmly pro-AI. He credits the technology with helping him learn things he would never have accessed otherwise.

“But AI only works for humanity if humans interrogate it,” he warns. “You must challenge it, ask for opposing views, and question sources. That’s where real insight comes from.”

Without that discernment, he warns, society risks ‘AI psychosis’: blindly trusting outputs shaped by biased inputs.

Education and regulation lag behind

McCoubrey believes the greatest danger lies in how unprepared institutions are.

“Government, regulators and education systems are not moving fast enough,” he explains. “In South Africa, where literacy rates are already a crisis, AI could either deepen inequality or become a powerful equaliser.”

With the right curriculum and oversight, AI could provide personalised tutoring and transform education. Without it, the gap between users and non-users may widen dramatically.

No global pause, only managed acceleration

While some have called for a global pause on AI development, McCoubrey is sceptical.

“A pause isn’t realistic,” he argues. “What we need is the management of acceleration.”

He argues for clearer guardrails, global ethical leadership and the involvement of respected AI researchers to decide where systems must remain interruptible.

The race that’s driving the risk

At the heart of the problem is competition. Trillion-dollar valuations and geopolitical rivalry are fuelling an arms race in AI development.

“That race pushes ethics into second place,” McCoubrey warns, drawing parallels with social media algorithms that deepened global polarisation.

Is being ‘half-plugged’ the answer?

Rather than all-or-nothing thinking, McCoubrey supports selective control.

“Half-plugged doesn’t mean indecision,” he explains. “It means directing AI to analyse, predict and recommend – but not to decide or enforce outcomes without humans.”

AI, he stresses, is not a moral agent. Responsibility must remain human.

Protecting both creativity and efficiency

Ultimately, McCoubrey believes the debate isn’t about choosing between human creativity and AI efficiency.

“Both matter,” he says. “The danger is losing our ability to read between the lines.”

Used wisely, AI can be a powerful partner. Used passively, it risks reshaping society faster than humans can adapt.

“We should be thinking with AI,” he concludes, “not thinking less because of it.”

For more information, listen to the full interview with McCoubrey.
