Humans Have a Blind Spot When Dealing with AI, by Brian Simpson
Humans, or at least the vast majority of people, have a dangerous blind spot when dealing with AI. Research published in Scientific Reports, by researchers from UC Merced and Penn State, simulated a life-or-death situation in which the AI had explicitly stated its own limitations.
The study comprised two experiments with a total of 558 participants (135 in the first and 423 in the second), in which various AI systems, including robots, assisted participants in a simulated task of identifying enemy targets that had launched drone attacks. Participants viewed rapid sequences of eight aerial images, each shown for just 650 milliseconds and marked with either enemy or civilian symbols. After a participant made an initial identification, the AI would respond with its own assessment, and the participant could then either keep or change their answer. According to the report at Technocracy News:
"When an AI disagreed with a person's initial target identification, participants reversed their decisions 58.3% of the time in the first experiment and 67.3% in the second, even though the AI's advice was entirely random. More troublingly, while participants' initial choices were correct about 70% of the time, their final accuracy dropped to around 50% after following the AI's unreliable advice.
When the AI agreed with their initial assessment, participants reported a 16% boost in confidence. However, when facing AI disagreement, those who stuck to their original decisions reported an average 9.48% drop in confidence, even when their initial assessment had been correct. Even more striking, participants who changed their minds to agree with the AI showed no significant increase in confidence, suggesting they deferred to the machine despite maintaining uncertainty about the correct choice."
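The accuracy drop quoted above follows largely from the arithmetic of deferring to random advice. The sketch below is illustrative only: the initial accuracy (~70%) and reversal rates (58–67%) come from the reported figures, but the 50% AI-agreement rate, the 60% combined reversal rate, and the trial count are assumptions made for this simulation, not parameters from the study.

```python
import random

random.seed(0)

# Assumed parameters for the sketch (only the first two are drawn
# from the reported figures; the rest are illustrative assumptions).
INITIAL_ACCURACY = 0.70  # participants' reported initial accuracy
REVERSAL_RATE = 0.60     # assumed, between the two reported rates
TRIALS = 100_000

final_correct = 0
for _ in range(TRIALS):
    correct = random.random() < INITIAL_ACCURACY  # initial call
    ai_agrees = random.random() < 0.5             # advice is random
    if not ai_agrees and random.random() < REVERSAL_RATE:
        correct = not correct                     # participant reverses
    final_correct += correct

print(f"final accuracy: {final_correct / TRIALS:.3f}")
```

Analytically this model gives a final accuracy of 0.7 − 0.2 × (reversal rate), about 0.58 here: somewhat above the roughly 50% the study reports, which suggests that simple reversal alone does not explain the full drop and that other effects, such as the eroded confidence of participants who kept a correct answer, also played a role.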
Thus, people's decisions were swayed by the AI's assessment even though it was random and known to be so. The study points to a dangerous over-reliance on, and over-trust in, AI decisions, even when the AI's limitations are made clear. If this pattern carries over into military or even medical decision-making, lives will be put at risk.
https://www.technocracy.news/when-ai-says-kill-humans-overtrust-machines-in-life-or-death-decisions/