As we documented Saturday, Google has faced a barrage of criticism from users since the release of its Gemini AI tool, which, among other disturbing things, equivocates on the Hamas rapes of October 7th, leaning on the leftist “competing narratives on what happened” talking point and trying to lend credence to Hamas’ denials.
Another area where Google has come under fire is its generation of AI imagery.
Last week, several Twitter users posted screengrabs of their findings, with one thread in particular from Daily Wire scripted content creator Frank Fleming getting a lot of attention.
Fleming found that Gemini was not a fan of producing images of white people, even going so far as to create images of black popes and Vikings:
This was their example of “diversity”:
There were also these discoveries:
And no shockers here:
It did get one thing right, though:
And on a slightly more humorous note:
Product lead Jack Krawczyk told Fox Business that Google is “working to improve” its product after the backlash:
In a statement to Fox News Digital, Gemini Experiences Senior Director of Product Management Jack Krawczyk addressed the responses from the AI that had led social media users to voice concern. “We’re working to improve these kinds of depictions immediately,” Krawczyk said. “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
Krawczyk, I should point out, is a self-loathing, Very Online left-winger, so I’d take such claims of “improving” or “fixing” Gemini AI with a grain of salt:
As is another senior Gemini AI division member, Jen Gennai:
Amazingly, in the name of “diversity” and “inclusion,” bias is being programmed into anti-bias systems, as Professor Jacobson explained in comments to the New York Post:
William A. Jacobson, a Cornell University Law professor and founder of the Equal Protection Project, a watchdog group, told The Post: “In the name of anti-bias, actual bias is being built into the systems.” “This is a concern not just for search results, but real-world applications where ‘bias free’ algorithm testing actually is building bias into the system by targeting end results that amount to quotas.”
Rumor has it that a lot of people at Google knew how bad Gemini AI was, but didn’t speak up about it:
This might be why:
Twitter/X owner Elon Musk says he’s talked to higher-ups at Google who “assured” him they were working to “fix” the issues ASAP:
I’ll believe it when I see it. Until then…
Related LI/EPP Reading: LinkedIn Should End ‘Diversity in Recruiting’ Feature: “Discrimination by algorithm is still discrimination”
— Stacey Matthews has also written under the pseudonym “Sister Toldjah” and can be reached via Twitter. —