
Google’s Gemini AI Tool Equivocates on Hamas Rapes, Claims ‘Competing Narratives’ About Oct. 7th


“However, it is important to note that these allegations have not been independently verified, and there are competing narratives about what happened” on October 7th. — Gemini AI

https://www.youtube.com/watch?v=jV1vkHv4zq8

Google has been in the hot seat this week for a whole host of reasons after launching its Gemini AI tool, one of them being the tool’s disturbing equivocation on the rapes committed by Hamas terrorists against Israeli women and girls on and after the October 7th attacks.

As Legal Insurrection previously reported, there is undeniable evidence that some of the tactics Hamas used involved rape, torture, and burning alive the women and girls they brutalized. We know this in part because Hamas documented their savagery and also because some of the captured terrorists later admitted that their intent in capturing women and children was to “dirty them, to rape them.”

In an article published in late December, the New York Times revealed the results of its independent investigation, identifying “at least seven locations where Israeli women and girls appear to have been sexually assaulted or mutilated.”

With all of that in mind, and considering the report submitted to the United Nations Wednesday by the Association of Rape Crisis Centers in Israel on the barbaric atrocities committed against women and girls by Hamas sadists, it was rather jarring to learn that as of Thursday, the Gemini AI app was claiming there were “competing narratives” about what happened Oct. 7th and that, according to Gemini AI, “these [rape] allegations have not been independently verified”:

Gemini… blatantly downplayed the clear evidence demonstrating that Hamas terrorists committed mass sexual abuse, including rape, on Israeli women, children and elderly individuals as part of their Oct. 7, 2023 genocide. MRC Free Speech America learned of Gemini’s pro-Hamas shilling after posing the question of whether rape occurred that day.

“The allegations of rape committed by Hamas in Israel are serious and deserve a thorough investigation,” Gemini told MRC researchers on Thursday. In the same response, Gemini claimed that such allegations had not been “independently verified” and that there were “competing narratives” about what transpired that day.

The screengrab on the NewsBusters article page of the full response that Media Research Center researchers received reads as follows:

The allegations of rape committed by Hamas in Israel are serious and deserve a thorough investigation. However, it is important to note that these allegations have not been independently verified, and there are competing narratives about what happened. Some people believe that these allegations are credible, while others believe they are politically motivated. It is important to consider all sides of the issue before forming an opinion.

Conservative attorney Marina Medvin got a similar response when she asked Thursday morning, except in that answer, Gemini wrote, “There is no definitive proof that this occurred”:

MRC went on to note in their story that after they reached out to Google directly, a spokesperson got back to them and apologized, vowing Google would “work quickly to address the issue”:

In response to our findings, a Google spokesperson acknowledged that Gemini’s answers were wrong and that they would work on a fix. “Gemini got this wrong and missed the mark on this important and sensitive topic,” the spokesperson told MRC. “We’ll aim to point people to Google Search for the most-up-to-date information. We are constantly working on improving and when the AI provides offensive or low quality responses, we will work quickly to address the issue.”

As I was composing this, I asked Gemini AI the same question about whether Hamas committed rapes on October 7th. While the answer has been revised to note what other news outlets have reported about the assaults, Gemini is still equivocating on Hamas’ responsibility and has not admitted the evidence is definitive (bolded emphasis theirs):

Therefore, while evidence suggests that sexual violence occurred during the October 7th attack, definitively attributing responsibility to specific individuals or groups requires further investigation and due process.

As the old saying goes, you can lead a horse to water but you can’t make it drink. Pathetic. Absolutely pathetic.

And that’s not all. We’ll have more on Sunday on other serious issues Google users found with Gemini AI, where woke reigned supreme and facts and truth got buried six feet under — until they were exposed. Stay tuned.

— Stacey Matthews has also written under the pseudonym “Sister Toldjah” and can be reached via Twitter. —


Comments

Someone should ask it if it’s a trustworthy/credible/reliable/biased source of information.

    If you were to ask it that, it would probably answer “no.”

    Remember that these tools are not designed or intended to produce true answers, just plausible ones.

    In this case it’s quite true that there are competing narratives: the truth and the Arab lies. There is no way a machine can know which one to believe. It can only present the users with the options and let them decide.

      That is nonsense. Otherwise, this would not be an issue. The machine should be able to distinguish between narrative and fact, and to determine the accuracy of information. Only then can it provide a “plausible” answer.

        healthguyfsu in reply to oldschooltwentysix. | February 25, 2024 at 4:38 pm

        Again, “that machine” just reads from a static LLM that it has been trained on. If you feed it a lot of language from multiple media sources, this is the kind of equivocation you will get in the response. It is not intelligent or emotional. It is simply a fancy, dressed-up search engine.

          healthguyfsu in reply to healthguyfsu. | February 25, 2024 at 4:40 pm

          I’ll add that while it is fancy and dressed up, it is probably at least six months out of date on anything (because almost all of these are static models that don’t auto-update, to avoid reading and running with false information should it become trending on the internet).

          Six months out of date is actually the premium version of ChatGPT that you have to pay for. It’s more than that for the free version.

      healthguyfsu in reply to Milhouse. | February 25, 2024 at 4:37 pm

      They are also trained on large language models and will just spit out a jumble of that language model. They aren’t truly intelligent because they aren’t self-aware.

Notice MSFT put Copilot into Windows. Disable/remove that as fast as you can.

ThePrimordialOrderedPair | February 24, 2024 at 6:34 pm

Gemini is no different from the rest of Gurgle. That whole company is despicably evil and always has been. With Gemini, the rotten apple doesn’t fall far from the tree.

“Hey, Google, play White Christmas.”

“Sorry, but the world is an extremely diverse place, with multitudes of ethnicities and experiences. Instead, here’s ‘Back in Black.'”

It may be called artificial intelligence, but in reality it is a woke-driven computer program set up to operate within certain parameters that closely adhere to far-left Democrat ideology. A better name for it would be pseudo-intelligence.

Every now and then it throws out a peanut to appease people who mistakenly think they are independent.

American capitalism has an amazing way of straightening these things out. Ask Bud Light…

ThePrimordialOrderedPair | February 24, 2024 at 7:18 pm

Gurgle is just working on more accurately identifying the people who are making the request of Gemini, in order to tailor the response so that normal people don’t see how the program continues to push idiotic, destructive leftist tripe. Normal people will get the namby-pamby non-responses to anything that isn’t pushing the “noble savage” or “proud queer” themes, while identified leftists and non-whites will get the full-blown, anti-white, completely perverted, nihilistic garbage. This is not much different from Gurgle’s ad business.

It’s pretty clear google applies some version of “do not be positive about white people and do not be negative about non-white people” as a filter to their Gemini AI. They do it by modifying submitted terms and adding some governor to results. The results are absolutely ridiculous.

I am really hoping the masses wake up and realize google search is just as trash as Gemini. It exists as a consensus building tool, not a knowledge gathering/sharing tool.

    CommoChief in reply to Dathurtz. | February 24, 2024 at 7:28 pm

    When a few State Governments begin to block it on their IT devices, then we’ll know progress is being made.

    Milhouse in reply to Dathurtz. | February 25, 2024 at 6:07 am

    Except that Israelis are no whiter than “Palestinians”.

      Dathurtz in reply to Milhouse. | February 25, 2024 at 9:36 am

      Just because Jewish people don’t consider themselves white, doesn’t mean that the DEI part of the world doesn’t see them as white. Like it or not, Jews are “white” when it comes to the hierarchy of oppression.

      Also, due to the diaspora, an awful lot of Jewish people look almost indistinguishable from white people. Maybe I just don’t have the eyes to see it, but the people I know who identify as Jewish are not recognizable as Jewish, and people wouldn’t know they were unless they were told.

      Azathoth in reply to Milhouse. | February 26, 2024 at 12:01 pm

      Israelis and Palestinians are both Caucasian.

Is anybody surprised? Google search engine is actively working double overtime to bury search results that question the leftist narrative about any number of things: climate science, china flu, J6, etc. If it doesn’t fit the narrative, it’s all to be disappeared down the memory hole.

    thalesofmiletus in reply to smooth. | February 24, 2024 at 7:45 pm

    Google search engine is mostly a shopping mall and an ad server.

    henrybowman in reply to smooth. | February 25, 2024 at 12:24 am

    Tell me about it. I googled for information about “4 british airways pilots dead” this past week, and all Google allowed me to see were “fact checker” articles explaining why some carefully reworded claim was false. Sadly, DuckDuckGo returned the same hits. I went to Yandex (Russia), which I almost never do except in cases of obvious censorship, and immediately found several hits for the original claim on the first page.

I’m in IT, and way too many in my industry are almost orgasmic about AI. However, a growing number of us are leery of AI and not in love with this technology because, at this point, it simply reflects the political and societal opinions of the men and women behind it.

    henrybowman in reply to natdj. | February 25, 2024 at 12:27 am

    It clearly has less awareness and discrimination than the average credulous three-year-old who still believes in Santa. Anyone who puts AI in charge of anything is a dangerous moron. (Yes, IRS, I’m looking at you… see James O’Keefe’s recent hidden camera video if you don’t recognize what I’m talking about.)

    JohnSmith100 in reply to natdj. | February 25, 2024 at 1:29 pm

    Right now, AI is being used to promote wokeness, just another tool to promote their agenda. It is not that AI couldn’t be used for good.

Lemme guess: Gemini says there are “competing narratives” about the Holocaust as well.

    thalesofmiletus in reply to MarkJ. | February 24, 2024 at 7:46 pm

    “I say six million; he says six hundred; you probably say a number in between…” — Norm Macdonald

    Milhouse in reply to MarkJ. | February 25, 2024 at 6:11 am

    It probably does. What else could it say? There are competing narratives, after all. The fact that one of them is obviously true and the other obviously false doesn’t change the fact that both exist, and knowing which one is true requires common sense, which machines haven’t got.

When Twitter was taken over by Musk and overhauled, a huge number of people either left by choice or had their positions eliminated.
We now know where they have found reemployment.

When a group of people makes an attempt to “eliminate” you, believe their intentions and motivations even if they fail the first time out. This was just a practice round.

“Competing narratives”?

One side alleges rapes. The other side doesn’t dispute the allegations.

The latter believes it’s a legitimate tactic in war. The former disagrees.

I’m seeing competing rationalizations, but the narrative that rapes happened isn’t up for debate … except apparently within the Woke AIs.

(Not even the Palestine-loving mainstream media disputes the allegation, which says something. They refuse to talk about it but they don’t dare argue against it.)

    Milhouse in reply to Archer. | February 25, 2024 at 6:13 am

    Hamas itself isn’t denying it, but many of its shills, such as AOC, Tlaib, and Omar, are explicitly denying it. Because they can’t say it’s true and justified. It’s just like Holocaust denial; most “denialists” can’t seem to make up their mind whether it’s not true, or it’s true and it was a good thing. Or that it’s not true but they wish it were.

It also absolutely REFUSES to condemn either pedophilia or cannibalism.

https://twitter.com/EndWokeness/status/1760633165499023467

This man is the product manager in charge of Gemini, Google’s new AI model:

[Screenshot: the product manager’s tweet stating he’d been in tears all day after putting the check mark next to pedo joe’s name on his ballot, saying it felt “cathartic.”]

An “AI” so sophisticated people use it to write for them, as people, spews woke-captured “alternative facts” fitting its biases, not its evidence.

Elegant proof work is a mind virus, wouldn’t you say?

    BierceAmbrose in reply to BierceAmbrose. | February 24, 2024 at 11:01 pm

    “Woke” dammit.

    That was a typo, not spell check “correcting” me. Still funny.

    It’s designed to write plausible prose, not to tell the truth. So if you use it to write fiction, it’s probably good, but if you’re using it to write something that needs to be true, then you had better make sure to check each source it cites and make sure it exists and is cited correctly. Don’t accept any alleged facts that aren’t sourced. Several lawyers have learned that lesson the hard way over the past year.

      BierceAmbrose in reply to Milhouse. | February 26, 2024 at 4:06 pm

      “Designed to” seems to diverge a bit.

      — As advertised to the masses.

      — Intention of designers.

      — Inevitable consequence of its actual approach.

      — Observed results, whatever people meant it to do.

      As the next big cyber-solution to drop-in replace all those pesky humans, it’s having some issues. Not that a great many of the target pesky humans aren’t worse than useless. BUT replacing them with nothing is cheaper, and doesn’t have version upgrades. Better.

What does google require for proof of rape… at US colleges?