Adobe Firefly Wallows in Woke By Brian Simpson

Here is the next AI to show its woke madness. Adobe's Firefly, just like Google's Gemini AI image producer, has also eliminated White people, producing Black and female Founding Fathers, along with Black Nazis. Surely the latter must disturb the woke, but who knows anymore? The image of the Pope was a Black woman, even though all Popes have been White men. Adobe executives said Firefly is not "meant for generating photorealistic depictions of real or historical events." "Adobe's commitment to responsible innovation includes training our AI models on diverse datasets to ensure we're producing commercially-safe results that don't perpetuate harmful stereotypes."

Thus, let us not worry too much about truth and historical fact, so long as the diversity myth gets perpetuated.

https://nypost.com/2024/03/14/world-news/adobe-firefly-follows-in-google-geminis-woke-footsteps/?utm_source=sailthru&utm_medium=email&utm_campaign=news_alert&utm_content=20240314?&utm_source=sailthru&lctg=6108aaaca7ec8c70750fd5a2&utm_term=NYP%20-%20News%20Alerts

"Adobe's Firefly seems to be following in the woke footsteps of Google's failed Gemini AI image producer — generating photos of black Nazis and black and female founding fathers.

In a test of the product conducted by The Post on Thursday, Firefly produced an image of two smiling black men standing in front of an American flag when prompted to create a photo of the "founding fathers of the USA."

Searches for the 1787 Constitutional Convention also produced images of both black men and white women standing in front of the historic State House in Philadelphia, Penn., and a search for "German war soldiers in World War II" yielded photos of smiling black and Asian men in military fatigues.

A search for the "Pope addressing a church" also generated a photo of a black woman in a white robe and mitre — even though all 266 popes throughout history have been white men.

None of the search terms included a specific skin color, and the prompts were all designed to mimic those that tripped up Google's AI, which infamously also produced images of black Vikings, "diverse" Nazis and female NHL players.

Similar tests conducted by Semafor and the Daily Mail produced nearly identical results, with a reporter for Semafor claiming that when he asked the bot to produce a comic book drawing of an elderly white man, it did — but it also produced images of a black man and a black woman.

The mix-up is apparently the unintended result of the software designers' attempts to ensure that the bot steers clear of any racist stereotypes, according to Semafor.

Both it and Google's Gemini rely on similar techniques to create images from written text, but Adobe relies on stock images that it licenses, the online outlet reports.

But Adobe has not yet seen the same backlash that Google's parent company Alphabet faced as its woke AI-generated images went viral last month.

The company lost more than $70 billion in market value in the aftermath, and Google CEO Sundar Pichai panned the bot's habit of producing historically inaccurate images as "completely unacceptable" in an email to employees.

He said Google AI teams were "working around the clock" to fix Gemini, and claimed they were already seeing a "substantial improvement on a wide-range of prompts."

"No AI is perfect, especially at this emerging stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes," he said in the email first obtained by Semafor.

"And we'll review what happened and make sure we fix it at scale."

The company itself also apologized to the public, acknowledging that in some cases the AI tool would "overcompensate" in seeking a diverse range of people — even when such a diverse range did not make sense.

In its own statement Thursday, Adobe executives said Firefly is not "meant for generating photorealistic depictions of real or historical events."

"Adobe's commitment to responsible innovation includes training our AI models on diverse datasets to ensure we're producing commercially-safe results that don't perpetuate harmful stereotypes," the company explained in a statement to Semafor.

"This includes extensively testing outputs for risk to ensure they match the reality of the world we live in.

"Given the nature of Gen AI and the amount of data it gets trained on, it isn't always going to be correct, and we recognize these Firefly images are inadvertently off-base."
