The Dangers of Thinking Machines
By Brian Simpson
I have been covering the issue of the rise of the machines and the dangers of high tech, to which the average punter seems oblivious. One aspect of this problem relates to the minds of robots, which have thought processes quite unlike those of humans, and that is where the dumb human race could come unstuck:
“A robot walks into a bar. It goes CLANG. Alexa and Siri can tell jokes mined from a humor database, but they don’t get them. Linguists and computer scientists say this is something to consider on April Fools’ Day: Humor is what makes humans special. When people try to teach machines what’s funny, the results are at times laughable but not in the way intended.

“Artificial intelligence will never get jokes like humans do,” said Kiki Hempelmann, a computational linguist who studies humor at Texas A&M University-Commerce. “In themselves, they have no need for humor. They miss completely context.” And when it comes to humor, the people who study it — sometimes until all laughs are beaten out of it — say context is key.

Even expert linguists have trouble explaining humor, said Tristan Miller, a computer scientist and linguist at Darmstadt University of Technology in Germany. “Creative language — and humor in particular — is one of the hardest areas for computational intelligence to grasp,” said Miller, who has analyzed more than 10,000 puns and called it torture. “It’s because it relies so much on real-world knowledge — background knowledge and commonsense knowledge. A computer doesn’t have these real-world experiences to draw on. It only knows what you tell it and what it draws from.”

Allison Bishop, a Columbia University computer scientist who also performs stand-up comedy, said computer learning looks for patterns, but comedy thrives on things hovering close to a pattern and veering off just a bit to be funny and edgy. Humor, she said, “has to skate the edge of being cohesive enough and surprising enough.”
For comedians, that’s job security. Bishop said her parents were happy when her brother became a full-time comedy writer because it meant he wouldn’t be replaced by a machine. “I like to believe that there is something very innately human about what makes something funny,” Bishop said. Oregon State University computer scientist Heather Knight created the comedy-performing robot Ginger to help her design machines that better interact with — and especially respond to — humans. She said it turns out people most appreciate a robot’s self-effacing humor. Ginger, which uses human-written jokes and stories, does a bit about Shakespeare and machines, asking, “If you prick me in my battery pack, do I not bleed alkaline fluid?” in a reference to The Merchant of Venice.
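The gap the quoted article describes can be made concrete with a toy sketch. The names and jokes below are illustrative assumptions, not any real assistant's implementation: a bot that retrieves jokes from a stored database can always "tell" one, but it has nothing to draw on when asked why the joke is funny.

```python
import random

# A toy "joke database", standing in for the kind of mined humor
# corpus the article describes (hypothetical entries).
JOKE_DB = [
    "A robot walks into a bar. It goes CLANG.",
    "If you prick me in my battery pack, do I not bleed alkaline fluid?",
]

def tell_joke(db):
    """Retrieval, not understanding: pick a stored joke at random."""
    return random.choice(db)

def explain_joke(joke):
    """The step no lookup can supply: without real-world context and
    commonsense knowledge, the bot cannot say why a joke is funny."""
    return None

print(tell_joke(JOKE_DB))       # always succeeds
print(explain_joke(JOKE_DB[0])) # always comes back empty
```

The point is that "telling" and "getting" a joke are different operations: the first is a database lookup, the second requires the background knowledge Miller describes, which is not in the database at all.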
There is a serious issue here, which the humour point illustrates. A thinking machine is going to, by definition, lack human desires. Its mind is not going to be influenced by the desire for sex, social attention and all the other factors that power human history. Maybe the thinking machine could have a desire for power, but even that is not clear. When we contemplate the “singularity,” the point at which a computer mind moves beyond a fixed program and can be said to be autonomous in the same way as a human mind (assuming that all the philosophical issues with this can somehow be overcome), then we are in new territory. Humans do not know what such beings would have as a game plan. There is no reason why organic life would be valued by them over computer “life.” Thus, there is no reason why they would not seek to eliminate humans, who would just get in the way of their dominance of the world.
If this happens, then we are all to blame, every dumb blind consumer who has no thought for the future, but lives only for the moment of consumption, tapping away on their smartphones.
Authorised by K. W. Grundy
13 Carsten Court, Happy Valley, SA.