Geoffrey Hinton, the so-called “Godfather of AI,” says artificial intelligence “might take over” if humans are careless. He sees AI surpassing human intelligence within a mere five years. When that happens, AI may begin to write its own code, a kind of machine-based free will. The main problem is that these complex systems are not fully understood even by their builders, what AI researchers call the “black box problem.” Such emergent systems carry a high degree of uncertainty, and surprising developments have already occurred, with AI systems proving smarter than anticipated. Hinton’s concerns project exactly that uncertainty.
But while many want a pause in AI development, there is no chance of that, as communist China and Russia are in a new arms race to develop the most advanced AI, a technology Putin has said will lead to control of the world.
Geoffrey Hinton, the computer scientist known as a “Godfather of AI,” says artificial intelligence-enhanced machines “might take over” if humans aren’t careful.
Rapidly advancing AI technologies could gain the ability to outsmart humans “in five years’ time,” Hinton, 75, said in a Sunday interview on CBS’ “60 Minutes.” If that happens, AI could evolve beyond humans’ ability to control it, he added.
“One of the ways these systems might escape control is by writing their own computer code to modify themselves,” said Hinton. “And that’s something we need to seriously worry about.”
Hinton won the 2018 Turing Award for his decades of pioneering work on AI and deep learning. He quit his job as a vice president and engineering fellow at Google in May, after a decade with the company, so he could speak freely about the risks posed by AI.
Humans, including scientists like himself who helped build today’s AI systems, still don’t fully understand how the technology works and evolves, Hinton said. Many AI researchers freely admit that lack of understanding: In April, Google CEO Sundar Pichai referred to it as AI’s “black box” problem.
As Hinton described it, scientists design algorithms for AI systems to pull information from data sets, like the internet. “When this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things,” he said. “But we don’t really understand exactly how they do those things.”
Pichai and other AI experts don’t seem nearly as concerned as Hinton about humans losing control. Yann LeCun, another Turing Award winner who is also considered a “godfather of AI,” has called any warnings that AI could replace humanity “preposterously ridiculous” — because humans could always put a stop to any technology that becomes too dangerous.
‘Enormous uncertainty’ about AI’s future
The worst-case scenario is no sure thing, and industries like health care have already benefited tremendously from AI, Hinton emphasized.
Hinton also noted the spread of AI-enhanced misinformation, fake photos and videos online. He called for more research to understand AI, government regulations to rein in the technology and worldwide bans on AI-powered military robots.
At a Capitol Hill session last month, lawmakers and tech executives like Pichai, Elon Musk, OpenAI’s Sam Altman and Meta’s Mark Zuckerberg suggested similar ideas while discussing the need to balance regulations with innovation-friendly government policies.
Whatever AI guardrails get put into place, whether by tech companies or by mandate of the U.S. federal government, they need to happen soon, Hinton said.
Humanity is likely at “a kind of turning point,” said Hinton, adding that tech and government leaders must determine “whether to develop these things further and what to do to protect themselves if they [do].”
“I think my main message is there’s enormous uncertainty about what’s going to happen next,” Hinton said.