Mathematical Challenges to the Alleged Supremacy of Artificial Intelligence

By Brian Simpson

In his tremendous 2022 presentation at the League Seminar, Robert Balzola made the case for people lifting their minds, getting out of their comfort zones and learning: engaging with philosophy, logic (he mentioned the propositional calculus), mathematics, and of course law. Following his lead, we at the blog are tackling the key issues of our time, for the elites are building a dystopic technocracy founded upon science and technology, and being unaware of this is like being unable to use, even in basic terms, a computer and email. You cannot fight that which you are ignorant of.

With that preamble made, I have been covering the relentless march of AI, with many elites bursting with joy that the singularity point may have been reached and that self-consciousness/awareness may have emerged in complex AI programs, creating what one of the AI devoted, Ray Kurzweil, calls “the age of spiritual machines,” as an “in your face” to people like us, Christians. All of this raises the question of what limits, if any, exist for AI, and whether the technocrats’ dreams can ever be realised.

Doubts about this project are raised in a paper by M. J. Colbrook et al., “The Difficulty of Computing Stable and Accurate Neural Networks: On the Barriers of Deep Learning and Smale’s 18th Problem,” Proceedings of the National Academy of Sciences, vol. 119, 2022. The paper is highly technical, but as far as I can understand it, the message is that AI systems are poor at recognising when they get things wrong, while humans are generally good at it; AI systems are confident beyond their ability, and that is their downfall. There are also logical limits to what AI can do, flowing from the well-known computability theorems of Gödel and Turing: the paper shows that there are problems for which stable and accurate neural networks exist in principle, yet no algorithm can ever compute them. These theorems concern mathematical propositions that are unprovable, and problems that no computer, however powerful, can solve. Further, AI algorithms operate in systems where data can be precisely defined, but much of the human domain is highly vague and context-dependent, and almost impossible to capture in algorithms.
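To make the instability point concrete, here is a minimal sketch in Python (my own toy illustration, not an example from the Colbrook paper): a bare-bones linear classifier whose decision flips under a perturbation of roughly two millionths, far too small for any human to notice, and which announces its flipped answer with no hint that anything has gone wrong.

    import numpy as np

    # A toy linear classifier: the sign of the dot product w . x decides the class.
    # This is a deliberately simplified stand-in for a trained neural network.
    w = np.array([1.0, -1.0])

    x = np.array([0.500001, 0.5])     # an input sitting just on one side of the decision boundary
    eps = np.array([-0.000002, 0.0])  # a perturbation of about 2e-6, invisible to a human

    def classify(v):
        return "class A" if w @ v > 0 else "class B"

    print(classify(x))        # prints: class A
    print(classify(x + eps))  # prints: class B -- the tiny nudge flips the decision,
                              # and the system gives no warning that it may be in error

Real deep networks show the same behaviour on a grander scale: so-called adversarial examples, where imperceptible changes to an image flip a confident classification, are among the instabilities that this line of research studies.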

The problem here is that deploying AI systems in highly unstable, contextual environments where human judgment is best, such as disease diagnosis or the control of nuclear weapons, carries the danger of disaster when errors occur that the AI systems are too “arrogant” to recognise. And occur such errors will, as they already have. On September 26, 1983, the Soviet nuclear early-warning system falsely detected a supposed US missile attack. It was only the human judgment of Stanislav Petrov, who suspected a system error and delayed his report, that saved the world from nuclear destruction.
