Some Theoretical Limits to the Transhuman Agenda

By Brian Simpson

There is an interesting video of Bill Gates where he was asked about the present limitations of AI. He gave a surprisingly black-pill answer: while AI is good at many things, it lacks a world-view perspective, and it cannot read a book, understand it, and offer an interpretation of it. Gates cites the book Rebooting AI: Building Artificial Intelligence We Can Trust (2019) by Gary Marcus and Ernest Davis, which develops the same limits theme: there are boundaries that present-day AI has not yet crossed. Thus AI does not understand the simple physics of a ball rolling down an inclined plane, as a recent NewScientist.com article details (links below; a worked example of the physics follows them).

https://www.newscientist.com/article/2282240-ais-dont-understand-simple-physics-like-a-ball-rolling-down-a-hill/

https://paperswithcode.com/dataset/physion

https://www.wired.com/story/ai-smart-cant-grasp-cause-effect/
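For reference, the physics the systems reportedly fail at is elementary classical mechanics. As a minimal Python sketch (my own illustration, not code from any of the benchmarks above), a uniform solid sphere rolling from rest without slipping down an incline accelerates at (5/7)·g·sin(theta):

import math

def rolling_sphere_acceleration(theta_deg, g=9.81):
    # For a uniform solid sphere, I = (2/5) m r^2, so rolling without
    # slipping gives a = g*sin(theta) / (1 + 2/5) = (5/7) g sin(theta).
    return (5.0 / 7.0) * g * math.sin(math.radians(theta_deg))

def distance_travelled(theta_deg, t, g=9.81):
    # Starting from rest, s = (1/2) a t^2 along the incline.
    return 0.5 * rolling_sphere_acceleration(theta_deg, g) * t ** 2

print(rolling_sphere_acceleration(30.0))  # about 3.50 m/s^2
print(distance_travelled(30.0, 2.0))      # about 7.01 m after two seconds

A human extracts this regularity from watching a few rolling balls; the point of the benchmark is that current deep-learning systems do not extract anything like it from video alone.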

“The most popular cutting-edge AI technique, deep learning, has delivered some stunning advances in recent years, fueling excitement about the potential of AI. It involves feeding a large approximation of a neural network copious amounts of training data. Deep-learning algorithms can often spot patterns in data beautifully, enabling impressive feats of image and voice recognition. But they lack other capabilities that are trivial for humans.

To demonstrate the shortcoming, Tenenbaum and his collaborators built a kind of intelligence test for AI systems. It involves showing an AI program a simple virtual world filled with a few moving objects, together with questions and answers about the scene and what’s going on. The questions and answers are labeled, similar to how an AI system learns to recognize a cat by being shown hundreds of images labeled “cat.”

Systems that use advanced machine learning exhibited a big blind spot. Asked a descriptive question such as “What color is this object?” a cutting-edge AI algorithm will get it right more than 90 percent of the time. But when posed more complex questions about the scene, such as “What caused the ball to collide with the cube?” or “What would have happened if the objects had not collided?” the same system answers correctly only about 10 percent of the time.

David Cox, IBM director of the MIT-IBM Watson AI Lab, which was involved with the work, says understanding causality is fundamentally important for AI. “We as humans have the ability to reason about cause and effect, and we need to have AI systems that can do the same.”

A lack of causal understanding can have real consequences, too. Industrial robots can increasingly sense nearby objects, in order to grasp or move them. But they don't know that hitting something will cause it to fall over or break unless they’ve been specifically programmed—and it’s impossible to predict every possible scenario.

If a robot could reason causally, however, it might be able to avoid problems it hasn’t been programmed to understand. The same is true for a self-driving car. It could instinctively know that if a truck were to swerve and hit a barrier, its load could spill onto the road.

Causal reasoning would be useful for just about any AI system. Systems trained on medical information rather than 3-D scenes need to understand the cause of disease and the likely result of possible interventions. Causal reasoning is of growing interest to many prominent figures in AI. “All of this is driving towards AI systems that can not only learn but also reason,” Cox says.

The test devised by Tenenbaum is important, says Kun Zhang, an assistant professor who works on causal inference and machine learning at Carnegie Mellon University, because it provides a good way to measure causal understanding, albeit in a very limited setting. “The development of more-general-purpose AI systems will greatly benefit from methods for causal inference and representation learning,” he says.”
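To make the quoted numbers concrete, here is a minimal Python sketch of the kind of scoring involved. The records and model answers are hypothetical, loosely modelled on the labelled scene questions described above; the point is that breaking accuracy out by question type is what exposes the descriptive-versus-causal gap:

from collections import defaultdict

# Hypothetical labelled records: each question about a scene is tagged
# by type, so accuracy can be reported per type rather than overall.
dataset = [
    {"type": "descriptive", "question": "What color is this object?",
     "answer": "red", "model_answer": "red"},
    {"type": "causal", "question": "What caused the ball to collide with the cube?",
     "answer": "the push", "model_answer": "the cube"},
    {"type": "counterfactual",
     "question": "What would have happened if the objects had not collided?",
     "answer": "the ball keeps rolling", "model_answer": "unknown"},
]

def accuracy_by_type(records):
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["type"]] += 1
        correct[r["type"]] += int(r["model_answer"] == r["answer"])
    return {t: correct[t] / total[t] for t in total}

print(accuracy_by_type(dataset))
# e.g. {'descriptive': 1.0, 'causal': 0.0, 'counterfactual': 0.0}

An aggregate score over all questions would look respectable; split out by type, the blind spot is obvious.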
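As for what "reasoning about cause and effect" means operationally, the standard framing is Pearl's distinction between observing a variable and intervening on it. A toy structural causal model in Python (again my own illustration, not anything from the quoted work) shows that the two give different answers, which is precisely what pure pattern-matching misses:

import random

random.seed(0)

def sample(do_sprinkler=None):
    # Toy structural causal model: rain influences the sprinkler setting
    # (it is switched off when it rains), and wet grass depends on both.
    rain = random.random() < 0.3
    sprinkler = (not rain) if do_sprinkler is None else do_sprinkler
    wet = rain or sprinkler
    return rain, sprinkler, wet

N = 100_000

# Observation: P(rain | sprinkler off). Seeing the sprinkler off is
# evidence of rain, because rain is what switches it off.
obs = [sample() for _ in range(N)]
off = [r for r, s, w in obs if not s]
print(round(sum(off) / len(off), 2))  # ~1.0 in this toy model

# Intervention: P(rain | do(sprinkler := off)). Forcing the switch off
# tells us nothing about the weather; rain keeps its base rate.
intv = [sample(do_sprinkler=False) for _ in range(N)]
print(round(sum(r for r, s, w in intv) / N, 2))  # ~0.3

A system trained only on observational correlations conflates the two; a causal model keeps them apart, which is what Cox and Zhang are asking for.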

https://plato.stanford.edu/entries/frame-problem/

https://en.wikipedia.org/wiki/Frame_problem

Whether the transhuman agenda is possible, or whether it will be shipwrecked on its own hubris, remains to be seen. My bet is that this computer-manic world will self-destruct. Just think about daily life with Microsoft Word and the weird things it does. This file suddenly shut down when I highlighted some text, then instantly did a Repair; a few minutes later it did a Recover. Anything is possible, and I find that praying a lot helps when word processing.

 
