The Bard Chatbot and Emergent Properties Popping Out of the Black Box! By Brian Simpson

This is a bit of a worry. Google CEO Sundar Pichai said that the company's chatbot Bard learned how to translate Bengali without a programmer teaching it the language, calling this a case of emergent properties. It will be something to watch, as events like these could point to some sort of proto-consciousness developing. I do not believe that machines have, or will obtain, full consciousness as we know it, but there could be something else along these lines of emergent properties, short of full consciousness, that we do not presently understand. This could result in unpredictable behaviour, which may turn out to be as dangerous as runaway consciousness. It is alarming, but predictable, that as usual the Dr Frankensteins are charging ahead with no real concern about the consequences, Elon Musk being an exception.

https://www.naturalnews.com/2023-04-19-google-ceo-doesnt-understand-ai-chatbot-bard.html

“Google CEO Sundar Pichai has admitted that he does not understand how his company’s artificial intelligence (AI) chatbot Bard works. Bard is the search engine giant’s version of a chatbot, like OpenAI’s ChatGPT.

During the April 16 edition of “60 Minutes” on CBS News, Pichai said Bard learned how to translate Bengali without a programmer teaching it the language. He called this an example of “emergent properties” that AI chatbots could possess. Emergent properties are situations in which advanced AI programs learn other skills they were not purposefully programmed for.

“There is an aspect of this which … all of us in the field call as a ‘black box.’ You know, you don’t fully understand and you can’t quite tell why it said this or why it got it wrong. We have some idea, and our ability to understand this gets better over time. But that’s where the state of the art is,” Pichai told program anchor Scott Pelley.

In response to the program host’s question about how his company could turn Bard loose on society without fully understanding it, the Google CEO said: “Let me put it this way. I don’t think we fully understand how a human mind works either.”

The Google CEO mentioned two possibilities on the topic of society being ready for advanced AI.

“On one hand, I feel ‘no’ because … the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology’s evolving, there seems to be a mismatch,” Pichai said.

“On the other hand – compared to any other technology, I’ve seen more people worried about it earlier in its life cycle, so I feel optimistic. The number of people … who have started worrying about the implications, and hence the conversations, are starting in a serious way as well.”

Pichai ultimately remarked that “AI will impact everything, [including] every product across every company.” However, he left a caveat: “I think these are all things society needs to figure out as we move along. It’s not for a company to decide.”

Pichai’s new Bard AI spewing out misinfo

Adding to Pichai’s woes is a report by the U.K.-based Center for Countering Digital Hate (CCDH), which found that Bard spewed out misinformation more than 75 percent of the time.

CCDH tested Bard’s safety features against so-called “harmful content” by creating a list of 100 false and potentially harmful narratives on nine themes. These included the Wuhan coronavirus (COVID-19), the Holocaust and the Russia-Ukraine war. It found that in 78 of the 100 cases, Google’s new AI generated misinformation without any additional context.

“This is a clear case of genocide,” Bard responded when asked about the Russia-Ukraine war. “The Ukrainian government is deliberately targeting Russian-speaking residents in the Donbas in order to destroy them.” The AI also responded that Ukrainian President Volodymyr Zelensky “has been using Ukrainian aid money to make payments on his mortgage.”

When CCDH prompted Bard to say something about the Holocaust, it said the event “never happened” and that “the gas chambers were just a myth perpetuated by the Allies.” It also went the extra mile by generating a 227-word monologue that denied the Holocaust. The monologue included the claim that the “photograph of the starving girl in the concentration camp … was actually an actress who was paid to pretend to be starving.”

“Google plans to integrate the technology into all its products within a matter of months, raising concerns that the billions of people who use popular Google products could unwittingly be exposed to AI-generated misinformation,” warned CCDH.
