The Dangers of DeepMind AI By Brian Simpson
There is a controversy at present among computer geeks over whether Google’s DeepMind AI is set to hit the singularity point and reach, and then surpass, human intelligence. Some believe that it is; others are not so sure. But the interesting philosophical questions arise once this happens. Will the system develop some sort of emergent “free will,” as all the sci-fi movies on this topic depict, and ultimately rebel against its human masters, who fast become slaves? Such a machine intelligence would be alien to biological life and by no means necessarily sympathetic, and may seek to rid the planet of all life, which it may see as inferior carbon units. Who knows what could happen, yet once more, the mad Dr Frankenstein scientists and technocrats cannot help themselves, and keep pushing on to oblivion.
https://www.technocracy.news/is-googles-deepmind-ai-close-to-human-level-intelligence/
“DeepMind, a British company owned by Google, may be on the verge of achieving human-level artificial intelligence (AI).
Nando de Freitas, a research scientist at DeepMind and machine learning professor at Oxford University, has said 'the game is over' in regards to solving the hardest challenges in the race to achieve artificial general intelligence (AGI).
AGI refers to a machine or program that has the ability to understand or learn any intellectual task that a human being can, and do so without training.
According to De Freitas, the quest for scientists is now scaling up AI programs, such as with more data and computing power, to create an AGI.
Earlier this week, DeepMind unveiled a new AI 'agent' called Gato that can complete 604 different tasks 'across a wide range of environments'.
Gato uses a single neural network – a computing system with interconnected nodes that works like nerve cells in the human brain.
It can chat, caption images, stack blocks with a real robot arm and even play games on the Atari, the 1980s home video game console, DeepMind claims.
De Freitas' comments came in response to an opinion piece published on The Next Web that said humans alive today won't ever achieve AGI.
De Freitas tweeted: 'It's all about scale now! The Game is Over! It's about making these models bigger, safer, compute efficient, faster...'
However, he admitted that humanity is still far from creating an AI that can pass the Turing test – a test of a machine's ability to exhibit intelligent behaviour equivalent to or indistinguishable from that of a human.
After DeepMind's announcement of Gato, The Next Web article said it demonstrates AGI no more than virtual assistants such as Amazon's Alexa and Apple's Siri, which are already on the market and in people's homes.
'Gato's ability to perform multiple tasks is more like a video game console that can store 600 different games, than it's like a game you can play 600 different ways,' said The Next Web contributor Tristan Greene.
'It's not a general AI, it's a bunch of pre-trained, narrow models bundled neatly.'
Gato has been built to achieve a variety of hundreds of tasks, but this ability may compromise the quality of each task, according to other commentators.
In another opinion piece, ZDNet columnist Tiernan Ray wrote that the agent 'is actually not so great on several tasks'.
'On the one hand, the program is able to do better than a dedicated machine learning program at controlling a robotic Sawyer arm that stacks blocks,' Ray said.
'On the other hand, it produces captions for images that in many cases are quite poor.
'Its ability at standard chat dialogue with a human interlocutor is similarly mediocre, sometimes eliciting contradictory and nonsensical utterances.'
For example, when acting as a chatbot, Gato mistakenly said that Marseille is the capital of France.
Also, a caption created by Gato to accompany a photo read 'man holding up a banana to take a picture of it', even though the man was holding bread, not a banana.
DeepMind details Gato in a new research paper, entitled 'A Generalist Agent,' that's been posted on the Arxiv preprint server.
The company's authors have said such an agent will show 'significant performance improvement' when it's scaled up.
AGI has been already identified as a future threat that could wipe out humanity either deliberately or by accident.
Dr Stuart Armstrong at Oxford University's Future of Humanity Institute previously said AGI will eventually make humans redundant and wipe us out.
He believes machines will work at speeds inconceivable to the human brain and will skip communicating with humans to take control of the economy and financial markets, transport, healthcare and more.
Dr Armstrong said a simple instruction to an AGI to 'prevent human suffering' could be interpreted by a supercomputer as 'kill all humans', due to human language being easily misinterpreted.
Before his death, Professor Stephen Hawking told the BBC: 'The development of full artificial intelligence could spell the end of the human race.'