Former Googler: The Existential Risks of AI
By Brian Simpson

It is hard work to keep track of all the AI gurus warning of an “existential” threat from AI, a term that gets floated constantly. The most recent warning, at the time of writing, comes from former Google CEO Eric Schmidt, who defines an existential threat as one involving the mass death of people. “There are scenarios not today, but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues, or discover new kinds of biology. Now, this is fiction today, but its reasoning is likely to be true. And when that happens, we want to be ready to know how to make sure these things are not misused by evil people.” Perhaps what he has in mind is that new deadly viruses could be devised by AI programs and, while not released by the AI itself, be put to use by terrorists, or by nations supporting terrorism against the West.

How exactly this could be stopped is far from clear, since large-scale bioterrorism is an existential threat even now, not only from terrorists but also from leaks at labs, primarily American and Chinese ones.

https://www.cnbc.com/2023/05/24/ai-poses-existential-risk-former-google-ceo-eric-schmidt-says.html


Artificial intelligence could pose existential risks and governments need to know how to make sure the technology is not “misused by evil people,” former Google CEO Eric Schmidt warned Wednesday.

The future of AI has been thrust into the center of conversations among technologists and policymakers grappling with what the technology looks like going forward and how it should be regulated.

ChatGPT, the chatbot that went viral last year, has arguably sparked more awareness of artificial intelligence as major firms around the world look to launch rival products and talk up their AI capabilities.

Speaking at The Wall Street Journal’s CEO Council Summit in London, Schmidt said his concern is that AI is an “existential risk.”

“And existential risk is defined as many, many, many, many people harmed or killed,” Schmidt said.

“There are scenarios not today, but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues, or discover new kinds of biology. Now, this is fiction today, but its reasoning is likely to be true. And when that happens, we want to be ready to know how to make sure these things are not misused by evil people.”

Zero-day exploits are security vulnerabilities found by hackers in software and systems.

Schmidt, who was CEO of Google from 2001 to 2011, did not have a clear view on how AI should be regulated but said that it is a “broader question for society.” However, he said there is unlikely to be a new regulatory agency set up in the U.S. dedicated to regulating AI.

Schmidt is not the first major technology figure to warn about the risks of AI.

Sam Altman, CEO of OpenAI, which developed ChatGPT, admitted in March that he is a “little bit scared” of artificial intelligence. He said he worries about authoritarian governments developing the technology.

Tesla CEO Elon Musk said in the past that he thinks AI represents one of the “biggest risks” to civilization.

Even current Google and Alphabet CEO Sundar Pichai, who recently oversaw the company’s launch of its own chatbot called Bard AI, said the technology will “impact every product across every company,” adding society needs to prepare for the changes.

Schmidt was part of the National Security Commission on AI in the U.S., which in 2019 began a review of the technology, including a potential regulatory framework. The commission published its review in 2021, warning that the U.S. was underprepared for the age of AI.
