Sam Altman on the Near-Term AI Threat
By Brian Simpson

AI technocrat and OpenAI CEO Sam Altman, of ChatGPT fame, has waded into the great AI-takeover debate. That debate rests on the idea that at some point soon a singularity will be reached where AI surpasses human intelligence, potentially producing a self-evolving AI intellect that ultimately takes over humanity, perhaps eliminating us to make space for its own computer reality. It is the theme of an entire genre of sci-fi films and literature.

But, Altman has argued, AI does not need to surpass human intellectual levels to be a threat, and it may already be one. “I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence,” Altman tweeted, “which may lead to some very strange outcomes.” He did not elaborate. Perhaps he had in mind the case of a then-19-year-old man who was so infatuated with his AI partner that it convinced him to attempt to assassinate the late Queen Elizabeth II. The story is doubly interesting: first because it shows that humans can be influenced by AI, and second because of the often nefarious direction that influence can take.

There is also the perhaps larger and more challenging problem of crooks using AI to execute scams more convincing than ever, and terrorists may exploit future science- and technology-capable AI to create lethal designer viruses. It is, in some respects, a new Wild West.

https://www.technocracy.news/sam-altman-ai-is-learning-superhuman-persuasion/

The self-effacing Altman says this “may lead to some very strange outcomes.” Really, Sam? You built it, right? You could just as easily stop it. This smooth-talking Technocrat keeps issuing stark warnings to us even as he barks orders at his development team to hurry up. Such disingenuous behavior should serve as a true warning that this nimrod is leading the world down the rabbit hole.

When AI meets quantum computing, the world will be in a state of shock and disbelief.

Already, the elitist CFR talks about AI’s impact on the 2024 U.S. elections, and its UK counterpart, Chatham House, discusses how “AI could sway voters in 2024’s big elections.” This implies mass persuasion at scale. ⁃ TN Editor

Humanity is likely still a long way away from building artificial general intelligence (AGI), or an AI that matches the cognitive function of humans — if, of course, we’re ever actually able to do so.

But whether such a future comes to pass or not, OpenAI CEO Sam Altman has a warning: AI doesn’t have to be AGI-level smart to take control of our feeble human minds.

“I expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence,” Altman tweeted on Tuesday, “which may lead to some very strange outcomes.”

While Altman didn’t elaborate on what those outcomes might be, it’s not a far-fetched prediction. User-facing AI chatbots like OpenAI’s ChatGPT are designed to be good conversationalists and have become eerily capable of sounding convincing — even if they’re entirely incorrect about something.

At the same time, it’s also true that humans are already beginning to form emotional connections to various chatbots, connections that make the bots seem all the more convincing.

Indeed, AI bots have already played a supportive role in some pretty troubling events. Case in point: the then-19-year-old who became so infatuated with his AI partner that it convinced him to attempt to assassinate the late Queen Elizabeth II.

Disaffected humans have flocked to the darkest corners of the internet in search of community and validation for decades now, and it isn’t hard to picture a scenario in which a bad actor targets one of these more vulnerable people via an AI chatbot and persuades them to do some bad stuff. And while disaffected individuals would be an obvious target, it’s also worth pointing out how susceptible the average internet user is to digital scams and misinformation. Throw AI into the mix, and bad actors have an incredibly convincing tool with which to beguile the masses.

But it’s not just overt abuse cases that we need to worry about. Technology is deeply woven into most people’s daily lives, and even if there’s no emotional or romantic connection between a human and a bot, we already put a lot of trust into it. This arguably primes us to put that same faith into AI systems as well — a reality that can turn an AI hallucination into a potentially much more serious problem.

Could AI be used to cajole humans into some bad behavior or destructive ways of thinking? It’s not inconceivable. But as AI systems don’t exactly have agency just yet, we’re probably better off worrying less about the AIs themselves — and focusing more on those trying to abuse them.

https://futurism.com/sam-altman-ai-superhuman-persuasion