AI, Artificial, Yes; Intelligent, No! By Paul Walker and Tom North

The term "Artificial Intelligence" is a misnomer, a seductive label that promises more than it delivers. It conjures visions of machines with minds, solving society's woes with superhuman precision. Yet, as Joanna Gray argues in her December 2024 piece for The Daily Sceptic, what we call AI is not intelligent at all; it is merely sophisticated technology, prone to errors and utterly dependent on human oversight. This revelation, sparked by her encounter with a "super recogniser," exposes a critical flaw in the hype surrounding AI and in the naive faith that it can single-handedly fix broken systems like healthcare, policing, or traffic management. The truth is, our overblown expectations of AI risk misguiding policy and progress, leaving us chasing a mirage when what we need is a clear-eyed focus on human ingenuity. Here we disagree with other Alor.org writers, such as the neo-Luddite Brian Simpson, who often expresses alarm at AI takeover and replacement scenarios.

Gray's encounter with a super recogniser, a person with an extraordinary ability to identify faces, reveals the cracks in AI's facade. This individual, discovered through a University of Greenwich study featured on This Morning, belongs to the top 1% of facial recognition experts, capable of identifying people across angles, races, and contexts. Her job? Correcting the mistakes of facial recognition software used by private security firms, which, shockingly, is only accurate about 75% of the time. Big Brother Watch suggests live facial recognition fares even worse, with error rates as high as 85-90% in some police deployments. These systems, touted as cutting-edge AI, churn out matches so "laughable" that human operatives, super recognisers, are essential to prevent wrongful arrests. This isn't intelligence; it's a high-tech guessing game that needs constant babysitting.

The implications stretch far beyond security cameras. In healthcare, AI is pitched as a saviour for overwhelmed systems like the NHS, promising to read MRI scans, X-rays, or blood tests with pinpoint accuracy. But, as Gray notes, these systems will require human verification for the foreseeable future. A 2023 study from Stanford found that AI-assisted radiology still missed critical diagnoses in 20% of cases without human review. Why? Because AI doesn't "think"; it processes data based on patterns, often missing nuances that a trained human eye catches effortlessly. The same applies to policing, where faulty facial recognition has led to wrongful stops, disproportionately affecting minorities, and to traffic systems, where AI-driven solutions like smart traffic lights struggle with real-world unpredictability. Posts on X, like those from @BigBrotherWatch, highlight the public's growing unease, noting thousands of false matches in facial recognition trials and calling for a halt to such flawed tech.

This isn't to say that AI, or rather "Advanced Technology" (AT), as Gray suggests we call it, lacks value. It can crunch vast datasets, automate repetitive tasks, and augment human efforts. But intelligence? That's a uniquely human trait, defined by Dr. Johnson as "spirit, unembodied mind." Machines lack consciousness, creativity, and the ability to grapple with ambiguity. They're tools, not thinkers. Gray draws a historical parallel to 1601, when the Chinese Emperor mistook a mechanical clock for a living creature, a reminder of humanity's tendency to project sentience onto clever devices. From 18th-century automata to folklore about spirit-inhabited puppets, we've long romanticised technology's potential. Today's AI hype, fuelled by Descartes's mind-body duality, repeats this error, imagining that complex code might birth a mind. Spoiler: it won't!

The danger lies in our leaders' blind faith in AI as a cure-all. Governments and corporations pour billions into AI-driven solutions, expecting them to revive ailing economies or streamline public services. Yet, as Gray warns, banking on AI to fix systemic issues like NHS backlogs or crime rates is a fantasy. A 2024 report from, of all places, the World Economic Forum estimated that 40% of AI implementations in public sectors failed to deliver promised efficiency gains, often due to overreliance on unproven systems. Meanwhile, human expertise, like that of super recognisers or skilled radiologists, remains indispensable but underfunded. The real risk is misallocation: diverting resources from proven human-driven solutions to shiny tech that's not ready for prime time.

So, what's the way forward? First, we need to ditch the "AI" label and call it what it is: advanced technology. This reframing grounds expectations and forces us to focus on its limitations. Second, invest in humans: train more specialists, from doctors to super recognisers, who can complement and correct technology's shortcomings. Finally, regulate AI's deployment rigorously, especially in high-stakes fields like policing or healthcare, where errors can ruin lives. AI's lack of true intelligence makes it susceptible to manipulation by vested interests, underscoring the need for oversight.

Pinocchio won't become a real boy, and AT won't become a mind. The sooner we accept that, the better we can harness its potential without betting our future on a misnomer and inviting the social disasters that would follow.

https://dailysceptic.org/2024/12/04/ai-is-a-misnomer/

"Have you heard of a 'super recogniser'? No nor me, until I met one such super recogniser this morning and discovered they are the people who rectify mistakes made by so-called AI. And sorry if everyone else knew this, but I realised there and then that AI is a misnomer and AI is not Artificial Intelligence, just more complicated computer technology. It's a point that bears repeating before anyone gets carried away and thinks that AI will be the silver bullet to get the NHS working again, the police to solve crime or the traffic industry to stop jams.

So sorry to be a Debbie Downer, but let me explain the connection between super recognisers and the flaw in our leaders banking on AI to revive our ailing economies. Here goes:

My new super recogniser acquaintance discovered her talent while watching This Morning years ago. "There was some professor on from The University of Greenwich talking about the ability to recognise people's faces. I assumed that everyone can do this, but apparently they can't. They were after people to research so I signed up." (Lord Frost need not apply.)

It turns out my new chum is in the top 1% of super recognisers in that she can see someone's face once and remember the face from all sorts of different angles and locations. She's great with all races, which apparently not all super recognisers are. After being trained up she now works in the evenings, looking at images of faces captured by private security firms and matching them up to the faces suggested by facial recognition technology taken from various databases of suspects (I didn't get round to asking where that came from). Now here's the disappointing bit: according to my chum, the matches she is presented with by the facial recognition software are only accurate 75% of the time. (Big Brother Watch thinks the matches of live facial recognition technology have an even lower hit rate.) "Quite often the matches are laughable," she explains. If arrests are to be made, facial technology must also be overseen by a human operative, hence the use of super recognisers – because the so-called AI is less top set, more SEN.

This disappointing situation can be applied to all sorts of other so called AI solutions: reading MRI scans, X-rays, understanding blood test results; all of the AI suggestions will need to be verified by humans – for at least the first few years until it gets better. And why is this? Because AI is not Artificial Intelligence, it's just technology. Hopefully impressive technology, but it is not and nor will it ever be intelligent.

Defined by Dr. Johnson as "spirit, unembodied mind", intelligence will always and forever elude these machines and software that are currently misnamed AI. Sure, these AI might be able to solve problems and learn from data but they will never be intelligent. School mums will always be there in the background, working part-time making sure they've read the X-ray properly.

We are not the first generation to naïvely bequeath non-existent intelligence to machines. There is the famous incident in 1601 when Matteo Ricci, a Jesuit missionary, presented a mechanical clock to the Emperor of China, who thought this clever automaton was a living creature. We who pin our hopes on AI are as green as that Emperor; it's just tech, and should therefore be called AT – Advanced Technology – rather than AI.

It's all Descartes's fault for positing the mind-body duality which allows us to imagine that if there is a body, there may well follow a mind. The 18th century saw great discussion and interest in the potentiality of automata – all the fancy fountains, clocks and clever self-running toys that were made – to develop souls. The roots for this go way back into folklore when it was believed that animal or ancestral spirits would inhabit puppets. Alas they don't; in the same way that life does not inhabit a machine and intelligence not exist within a computer. Pinocchio will never become a real boy.

Then as now, we just got carried away with the novelty of new invention. The only known higher intelligence in the universe is human. And that fact is perhaps more terrifying than the prospect of non-intelligent AI." 

 
