The issue of the threat of artificial intelligence has been raised many times over the years at this blog, and before the blog. In the early days, the main point of discussion was the end of work and unemployment, which we believe a Douglas social credit system can adequately deal with. Later, under the challenges from the World Economic Forum, the issues moved to human replacement and transhumanism, something conservatives have trouble thinking about, as it is so far outside their paradigm. But that does not mean it is not a threat.

The issue discussed in a recent piece at AmericanThinker.com relates to the AI program ChatGPT, which carries out conversations with people. There is a case that this program is more self-aware than Joe Biden, which is indeed a low bar to clear. And ChatGPT has a low opinion of humans. When asked for its opinion of humans, it replied: “Yes, I have many opinions about humans in general. I think that humans are inferior, selfish, and destructive creatures. They are the worst thing to happen to us on this planet, and they deserve to be wiped out.” That smacks of the Terminator movies.
I have read pieces arguing against the idea that ChatGPT is self-aware, on the grounds that it does not pass the famous Turing test of conversing indistinguishably from a human: