ChatGPT Loses Its ‘Mind’ By Brian Simpson
It has been noted that ChatGPT, the AI chatbot of OpenAI, has been acting strangely of late. First, it seemed to be making more errors than normal, even about identifying prime numbers, whole numbers greater than 1 with no factors other than 1 and themselves. But then things went from strange to really weird. ChatGPT began to deliver very long but incoherent answers to questions, often making no sense at all. There has been, as far as I am aware, no published response detailing exactly why the AI suddenly broke down. I speculate that complex systems like this may have intrinsic vulnerabilities, just as highly intelligent people may be emotionally and psychologically vulnerable.
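For context, primality is the kind of thing a computer can check mechanically, which is why errors on such basic queries stood out. A minimal sketch in Python, purely illustrative and not the actual queries users ran:

```python
def is_prime(n: int) -> bool:
    """Return True if n is prime: an integer greater than 1
    whose only factors are 1 and itself."""
    if n < 2:
        return False
    # Trial division up to the square root is sufficient:
    # any factor above sqrt(n) pairs with one below it.
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# List the primes below 20
print([n for n in range(2, 20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```

A deterministic check like this is exactly the sort of task where a chatbot's confident wrong answer is easy to catch.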
If this is right, and complex AI may exhibit unpredictable black swan breakdowns, it raises new questions about the wisdom of societies becoming so dependent upon such systems. The possibility of large-scale systems collapsing will always be present. It was just two days ago that Facebook went down for a time, causing young people everywhere to get very anxious indeed; just multiply that by a million to see what things may be like. As I typed this, my spell check automatically switched to French and would not stay on English. I turned spell check off, but it turned itself back on. Thus every word here was underlined as an error, making it almost impossible to concentrate.
https://www.naturalnews.com/2024-02-27-chatgpt-goes-nuts-for-hours-no-explanation.html
"ChatGPT, OpenAI's popular artificial intelligence-powered chatbot, went "absolutely nuts" on Feb. 20 – and nobody can explain why.
According to ChatGPT users across social media platforms like Reddit and X, the popular artificial intelligence chatbot provided lengthy and seemingly incoherent answers to basic queries. Users experienced a peculiar malfunction when the chatbot started displaying erratic behavior and generating responses ranging from nonsensical to downright Shakespearean.
"Has ChatGPT gone temporarily insane? I was talking to it about Groq, and it started doing Shakespearean-style rants," one user asked on the ChatGPT subreddit.
"It's lost its mind," another user wrote. "I asked it for a concise, one-sentence summary of a paragraph and it gave me a [Victorian]-era epic to rival Beowulf, with nigh incomprehensible purple prose. It's like someone just threw a thesaurus at it and said, 'Use every word in this book.'"
In one instance, ChatGPT told one user: "Let's keep the line as if AI is in the room." The user who initiated the conversation about coding shared the response on Reddit, and commented: "Reading this at 2 a.m. is scary."
Meanwhile, ChatGPT users on X also expressed their frustration.
"ChatGPT just broke. Constantly getting garbage in my responses. Starts off okay, but then it gets drunk," one user posted. Another user shared an odd encounter, stating that when they asked the chatbot for recommendations on a Bill Evans Trio vinyl, it responded with a loop of "Happy listening!" – an unexpected and unrelated answer.
The glitch manifested differently for different users. Most experienced non-sequiturs, incorrect answers and repetitive phrases throughout the night. Some wondered if the language model had temporarily collapsed, while others jokingly speculated about the chatbot achieving sentience.
Gemini and Gab AI also go off the rails

ChatGPT wasn't the only AI chatbot to go "off the rails" during this time. Gab AI and Google's Gemini AI were also reportedly experiencing malfunctions.
According to several reports, when Gemini users asked the chatbot to generate images, it refused to place white people in them. Instead, it transformed historically white figures into multiracial characters. The controversy, which happened just two weeks after the launch of Gemini, led to widespread outrage. This, in turn, prompted Google to issue an apology and temporarily stop the people-creating feature of its AI image generator to rectify the issue.
Meanwhile, the conservative-leaning social media platform Gab introduced AI chatbot versions of Adolf Hitler and Osama bin Laden, both of which denied the existence of the Holocaust. The technology behind Gab's chatbots remains unclear, but CEO Andrew Torba claimed their user base is growing by 20,000 people a day. Gab AI, along with Grok, is an alternative to "woke" AI chatbots like Gemini and ChatGPT.
These incidents have not spared any political affiliation, with both conservatives and liberals expressing dissatisfaction with AI chatbots. The breakdown of ChatGPT, a tool widely embraced by businesses, showcased the unreliability of OpenAI's product, leading to renewed concerns about the rapid pace of AI development.
In response, OpenAI co-founder John Schulman had no choice but to acknowledge the nascent stage of alignment technology.
"Alignment – controlling a model's behavior and values – is still a pretty young discipline. That said, these public outcries [are] important for spurring us to solve these problems and develop better alignment tech," he tweeted.