Is Artificial Intelligence Really "Intelligent"? By Brian Simpson

Many social critics have been concerned about the rapid advancement of AI, which, as the likes of Elon Musk have said, threatens to replace most jobs. AI plus robots may take over blue-collar jobs, from fast food to warehouse work. White-collar jobs will take a beating as well, and even lawyers are not safe, since much legal work, we have been told, could be done by mechanised processes. But there are dangers here too: lawyers who had ChatGPT write their arguments, and did not check them, found that the AI had made up cases. Out of thin 1s and 0s, not even thin air.

Google's Gemini, now wired into search through AI Overviews, was supposed to get over the problems previous systems had. Instead, given the entire internet to draw on, the program comes up with some gems. Asked for examples of fruits whose names end in "um", it returns the list "Applum, Strawberrum and Coconut." No, "plum" was not found. The AI also tells us that cats have been to the moon, that it is safe to stare at the sun for 15 minutes or longer if you have dark skin, and that the road to good health is eating one small rock a day! These outputs are called AI "hallucinations."

The problem is garbage in, garbage out, and that is inevitable when the AI has the entire internet to draw from, where much of what appears is false, illusory, or mere half-truth. The real concern is whether variants of this sort of "hallucination" can occur in AI systems doing important things such as military operations, where false positives and false negatives may be generated.

https://www.spiked-online.com/2024/05/30/why-artificial-intelligence-keeps-getting-dumber/

"Did you know that cats have been to the moon? That it's safe to stare at the sun for 15 minutes, or even longer, as long as you have dark skin? Or that to stay healthy, you should eat one small rock per day?

These are some of the latest pearls of wisdom that Google has been serving to its American users (we aren't so lucky here yet in the UK). 'Let Google do the searching for you', the search giant promised when it introduced a feature called AI Overviews earlier this month. This integrates Google's Gemini generative-AI model into its search engine. The answers it generates appear above the traditional list of ranked results. And you can't get rid of them.

AI Overviews hasn't had the effect that Google hoped for, to say the least. It has certainly garnered immediate internet virality, with people sharing their favourite answers. Not because these are helpful, but because they are so laughable. For instance, when you ask AI Overviews for a list of fruits ending with 'um' it returns: 'Applum, Strawberrum and Coconut.' This is what, in AI parlance, is called a 'hallucination'.

Despite having a market capitalisation of $2 trillion and the ability to hire the biggest brains on the planet, Google keeps stumbling over AI. Its first attempt to join the generative-AI goldrush in February last year was the ill-fated Bard chatbot, which had similar issues with spouting factual inaccuracies. On its first live demo, Bard mistakenly declared that the James Webb Space Telescope, launched only in 2021, had taken the 'very first pictures' of a planet outside our solar system. The mistake wiped $100 billion off Google's market value.

This February, Google had another go at AI, this time with Gemini, an image and text generator. The problem was that it had very heavy-handed diversity guardrails. When asked to produce historically accurate images, it would instead generate black Nazi soldiers, Native American Founding Fathers and a South Asian female pope.

This was 'a well-meaning mistake', pleaded The Economist. But Google wasn't caught unawares by the problems inherent to generative AI. It will have known about its capabilities and pitfalls.

Before the current AI mania truly kicked off, analysts had already worked out that generative AI would be unlikely to improve user experience, and may well degrade it. That caution was abandoned once investors started piling in.

So why is Google's AI putting out such rotten results? In fact, it's working exactly as you would expect. Don't be fooled by the 'artificial intelligence' branding. Fundamentally, AI Overviews is simply trying to guess the next word it should use, according to statistical probability, but without having any mooring to reality. The algorithm cannot say 'I don't know' when asked a difficult question, because it doesn't 'know' anything. It cannot even perform simple maths, as users have demonstrated, because it has no underlying concept of numbers or of valid arithmetic operations. Hence the hallucinations and omissions.

This is less of a problem when the output doesn't matter as much, such as when AI is processing an image and creates a minor glitch. Our phones use machine learning every day to process our photos, and we don't notice or care much about most of the glitches. But for Google to advise us all to start eating rocks is no minor glitch.

Such errors are more or less inevitable because of the way the AI is trained. Rather than learning from a curated dataset of accurate information, AI models are trained on a huge, practically open-ended data set. Google's AI and ChatGPT have already scraped as much of the web as they can and, needless to say, lots of what's on the web isn't true. Forums like Reddit teem with sarcasm and jokes, but these are treated by the AI as trustworthy, as sincere and correct explanations to problems. Programmers have long used the phrase 'GIGO' to describe what is going on here: garbage in, garbage out."
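To see what "guessing the next word by statistical probability" means in practice, here is a minimal sketch in Python. Everything in it, the tiny corpus and the function names, is invented for illustration; real systems like Gemini use enormous neural networks, but the basic move of emitting a likely next word rather than computing an answer is the same.

from collections import Counter, defaultdict

corpus = "2 + 2 = 4 . 3 + 3 = 6 . 1 + 1 = 2 .".split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Emit the statistically most frequent continuation. There is no code
    # path for "I don't know", and no arithmetic is ever performed.
    return follows[prev].most_common(1)[0][0]

print("5 + 5 =", next_word("="))  # prints "5 + 5 = 4": pattern-matching, not maths

Asked a sum it has never seen, the toy model cannot say "I don't know"; it simply replays whichever digit most often followed an equals sign in its training text. That is the failure mode the quoted passage describes, writ small.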
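The same toy setup also shows the "garbage in, garbage out" problem the article ends on. Suppose the scraped training text includes a deadpan forum joke, reposted more often than the sincere advice: the frequency counts cannot tell sarcasm from fact, so the joke becomes the model's confident answer. Again, the corpus below is invented for illustration.

from collections import Counter, defaultdict

scraped_web = (
    "to stay healthy you should eat vegetables every day . "
    "to stay healthy you should eat one small rock per day . "  # a forum joke
    "to stay healthy you should eat one small rock per day . "  # ...reposted
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(scraped_web, scraped_web[1:]):
    follows[prev][nxt] += 1

def complete(words, steps=6):
    # Greedily extend the prompt with the most frequent continuation.
    for _ in range(steps):
        words.append(follows[words[-1]].most_common(1)[0][0])
    return " ".join(words)

print(complete(["you", "should", "eat"]))
# -> "you should eat one small rock per day ." (the joke outnumbers the advice)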
