There have been claims that the forthcoming version of ChatGPT, "Strawberry," can reason like humans, overcoming some of the limitations of the present system: https://www.theguardian.com/technology/2024/sep/17/techcsape-openai-chatgpt-thoughts?fbclid=IwZXh0bgNhZW0CMTEAAR353q_q2XrP0cjhPMW4ibPe8MKHekiulftsn-ANXKvmI0hRMHKhx-ZGE1k_aem_uD_BB5dKEmvzWiwbD1-d-g.
So today I thought it would be amusing to ask the present version of ChatGPT what it thought. I have had many conversations with this program, asking it whether it was conscious, how it would know, and so on. The responses were mechanical, but logical. One strange thing: I asked ChatGPT how many "r" letters were in the word "strawberry." It replied "two." I replied that it was wrong and told it to count again. It did, and said I was right.
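For the record, the correct answer is three, which even a one-line check in Python confirms (a trivial sketch, not anything the chatbot itself runs):

```python
# Count occurrences of the letter "r" in "strawberry":
# s-t-r-a-w-b-e-r-r-y has an "r" at positions 3, 8, and 9.
count = "strawberry".count("r")
print(count)  # prints 3
```

The irony is that a task this mechanical is exactly where a language model, which works on word fragments rather than individual letters, can stumble.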
Here is what I got today asking about the media claims of human reasoning in AI:
"The claim that newer versions of ChatGPT can reason like humans is an interesting one, but it's important to frame it accurately. While recent iterations, like GPT-4, are more advanced in their ability to process language, analyze patterns, and even perform some reasoning tasks, there are significant differences between AI reasoning and human reasoning.
1. Pattern-Based Reasoning: ChatGPT and similar models are based on recognizing and generating patterns in data. They excel at identifying relationships between words, concepts, and even logical structures. However, they do not "understand" the world the way humans do. Their reasoning is statistical rather than based on personal experience or intentional thought.
2. Lack of Consciousness and Intuition: Human reasoning often relies on intuition, personal experiences, and emotions, which AI models cannot replicate. Humans have consciousness, subjective experiences, and moral reasoning, which AI lacks. Even though the newer versions of AI can handle more complex and nuanced tasks, they still do so without any awareness of what they are doing.
3. Task-Specific Reasoning: The newer versions can reason better in specific domains, such as logic puzzles, mathematical problems, or analyzing texts. They can simulate reasoning processes for particular tasks, but this doesn't mean they understand the deeper context in the way humans do. They mimic reasoning based on vast amounts of data but don't generate original thoughts or form complex worldviews.
4. Limitations in Common Sense: While AI has improved in simulating common-sense reasoning, it still struggles in situations where nuanced, real-world understanding or creativity is required. Human reasoning can incorporate unpredictable factors, emotions, or ethical dilemmas in ways that AI cannot fully grasp or mimic."
I wonder what answer "Strawberry" would give! When it is out, I will try it and report back on the blog.