I am finding reports like these almost every week now, as artificial intelligence gurus warn of the “existential” risks of AI, including the “risk of extinction.” This has been said so often that I, for one, am numb to it. Now a group of leading experts, including OpenAI boss Sam Altman, whose firm created ChatGPT, and the “Godfather of AI” Geoffrey Hinton, have put together a letter issuing another warning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Elon Musk also said recently that there is a non-zero risk of a situation like the one depicted in the Terminator movies, with a controlling AI attempting to exterminate humanity.
Yet, while this all sounds noble, most of those sounding the warning are working on some of the very technology the doomsday scenarios depend upon. In my opinion, these guys are ultimately either insincere and playacting, or deluded, but that goes with science in modernity. The endgame is still the same.
