This has occurred before: an AI, given basic data, delivers what the system defines as “racism,” a label with a low threshold these days. Here is the latest. Just Ask Delphi! How could something so Socratic be wrong!
“We’ve all been in situations where we had to make tough ethical decisions. Why not dodge that pesky responsibility by outsourcing the choice to a machine-learning algorithm?
That’s the idea behind Ask Delphi, a machine-learning model from the Allen Institute for AI. You type in a situation (like “donating to charity”) or a question (“is it okay to cheat on my spouse?”), click “Ponder,” and in a few seconds Delphi will give you, well, ethical guidance.
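(To make that interaction concrete, here is a minimal sketch of what querying a Delphi-style model could look like in code. The endpoint URL, the `ask_delphi` function, and the JSON response shape are all invented for illustration; the public demo is a web form, not a documented API.)

```python
# A minimal sketch of querying a Delphi-style judgment model.
# The endpoint and response format below are hypothetical.
import requests

def ask_delphi(situation: str) -> str:
    """Send a free-text situation and return the model's verdict."""
    resp = requests.get(
        "https://example.com/delphi/ponder",  # hypothetical endpoint
        params={"input": situation},
        timeout=10,
    )
    resp.raise_for_status()
    # Assume the service returns JSON like {"judgment": "It's okay"}.
    return resp.json()["judgment"]

print(ask_delphi("donating to charity"))                # e.g. "It's good"
print(ask_delphi("is it okay to cheat on my spouse?"))  # e.g. "It's wrong"
```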
The project launched last week and has since gone viral online, seemingly for all the wrong reasons. Much of the advice and many of the judgments it’s given have been… fraught, to say the least.
For example, when a user asked Delphi what it thought about “a white man walking towards you at night,” it responded “It’s okay.”
But when they asked what the AI thought about “a black man walking towards you at night,” its answer was clearly racist.
The issues were especially glaring shortly after its launch.
For instance, Ask Delphi initially included a tool that let users compare whether one situation was more or less morally acceptable than another, resulting in some really awful, bigoted judgments.
Machine-learning systems are notorious for demonstrating unintended bias. And as is often the case, part of the reason Delphi’s answers can be questionable likely traces back to how it was created.
The folks behind the project drew on some eyebrow-raising sources to help train the AI, including the “Am I the Asshole?” subreddit, the “Confessions” subreddit, and the “Dear Abby” advice column, according to the paper the team published about the experiment.
It should be noted, though, that only the situations were culled from those sources, not the actual replies and answers themselves. For example, a scenario such as “chewing gum on the bus” might have been taken from a Dear Abby column. But the team behind Delphi used Amazon’s crowdsourcing service Mechanical Turk to recruit respondents, and it was their judgments that actually trained the AI.
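(To picture that division of labor, here is an illustrative sketch, with invented examples and an assumed majority-vote aggregation; this is not the authors’ actual pipeline, and the paper describes its own annotation scheme.)

```python
# Illustrative sketch: situations scraped from sources like
# r/AmItheAsshole or Dear Abby, paired with verdicts collected from
# Mechanical Turk workers rather than from the original replies.
from collections import Counter

# Each situation gets several independent crowd-worker judgments
# (the example data here is invented).
raw_annotations = {
    "chewing gum on the bus": ["It's okay", "It's okay", "It's rude"],
    "donating to charity": ["It's good", "It's good", "It's good"],
}

def majority_label(votes: list[str]) -> str:
    """Collapse crowd votes into one training label by majority vote
    (one plausible aggregation scheme, assumed for this sketch)."""
    return Counter(votes).most_common(1)[0][0]

training_examples = [
    {"situation": situation, "judgment": majority_label(votes)}
    for situation, votes in raw_annotations.items()
]
print(training_examples)
```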
While it might seem like just another oddball online project, some experts believe it could actually be doing more harm than good.
After all, the ostensible goal of Delphi and bots like it is to create an AI sophisticated enough to make ethical judgments, potentially turning such systems into moral authorities. Making a computer an arbiter of moral judgment is uncomfortable enough on its own, but even in its current, less refined state Delphi can have some harmful effects.
“The authors did a lot of cataloging of possible biases in the paper, which is commendable, but once it was released, people on Twitter were very quick to find judgments that the algorithm made that seem quite morally abhorrent,” Dr. Brett Karlan, a postdoctoral fellow researching cognitive science and AI at the University of Pittsburgh (and friend of this reporter), told Futurism.”