
WHO Concerned About Bias and Misinformation in Use of Artificial Intelligence in Healthcare

Artificial intelligence firms are aiming to address concerns and help doctors sort through records and diagnose accurately.

https://www.youtube.com/watch?v=aZCiK8FZx9k

A few months ago, I reported that the chatbot ChatGPT passed a medical licensing exam.

Now the World Health Organization (WHO) has weighed in on the use of artificial intelligence (AI) in public healthcare.

The organization has expressed concerns that AI used to reach decisions could be biased or misused.

The WHO said it was enthusiastic about the potential of AI but had concerns over how it will be used to improve access to health information, as a decision-support tool and to improve diagnostic care.

The WHO said in a statement that the data used to train AI may be biased and may generate misleading or inaccurate information, and that the models can be misused to generate disinformation.

It was “imperative” to assess the risks of using large language model (LLM) tools, like ChatGPT, in order to protect and promote human wellbeing and public health, the U.N. health body said.

Considering chatbots have used human-hating, leftist dogma as inputs, which led one Belgian man to commit suicide in the name of climate change activism, these concerns are certainly warranted.

The WHO also noted privacy concerns, which Italian officials likewise expressed when they temporarily blocked the chatbot ChatGPT.

LLMs produce responses that “can appear authoritative and plausible,” even if the responses are incorrect, WHO warns. Content produced by AI — whether text, audio or video — that contains disinformation can be “difficult for the public to differentiate” from reliable material.

The tech could also be trained on data “for which consent may not have been previously provided for such use,” raising concerns about AI’s use of sensitive data.

A poll released earlier this year found that a majority of Americans said they’d be uncomfortable with their health care provider relying on AI as part of their medical care. The WHO’s statement comes amid ongoing debate over the new and advanced tech, and its place in arenas like school, medicine and elsewhere.

Interestingly, a startup has recently raised a $50 million seed round from General Catalyst and Andreessen Horowitz to develop a large language model that will power a variety of healthcare bots meant to address the concerns raised by the WHO.

…They’re calling the Palo Alto-based startup Hippocratic AI in a nod to the ethical oath doctors take. That oath, based on writings attributed to the ancient Greek physician Hippocrates, is often summarized as “do no harm.”

But generative AI models can’t swear to uphold ethical codes, and, as the viral chatbot ChatGPT has demonstrated, can also produce false information in response to questions. Regulators have vowed to take a closer look at their use in healthcare with FDA Commissioner Robert Califf saying he sees the “regulation of large language models as critical to our future” at a conference earlier this month.

While the future regulatory landscape is unclear, [entrepreneur and Health Equity Labs co-founder and CEO Munjal] Shah says Hippocratic AI is taking a three-pronged approach to testing its large language model in healthcare settings: passing certifications, training with human feedback, and testing for what the company calls “bedside manner.”

Rather than give health system customers access to the entire model, Shah says Hippocratic AI is planning to provide access to different healthcare “roles,” which will be released when a given role has achieved a certain level of “performance and safety.” One key measuring stick will be the licensing exams and certifications that a human would have to pass in order to operate in that role.
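Purely as an illustration of the role-gating idea described above, here is a minimal Python sketch of what a release gate tied to licensing-exam performance might look like. The class name, exam names, and thresholds are all hypothetical assumptions for the example, not Hippocratic AI’s actual design.

```python
# Hypothetical sketch: a healthcare "role" is released only once its model
# clears the exams and certifications a human would need for that role.
# Names and numbers here are illustrative assumptions only.

from dataclasses import dataclass, field


@dataclass
class RoleGate:
    """Gates the release of one healthcare 'role' on exam performance."""
    role_name: str
    passing_score: float  # minimum fraction correct required on every exam
    exam_results: dict[str, float] = field(default_factory=dict)

    def record_exam(self, exam: str, score: float) -> None:
        self.exam_results[exam] = score

    def is_releasable(self) -> bool:
        # The role ships only once every required exam meets the threshold.
        return bool(self.exam_results) and all(
            score >= self.passing_score for score in self.exam_results.values()
        )


# Usage: a hypothetical "dietitian" role must clear its benchmarks first.
gate = RoleGate(role_name="dietitian", passing_score=0.75)
gate.record_exam("registered_dietitian_exam", 0.82)
gate.record_exam("bedside_manner_eval", 0.71)  # below the bar
print(gate.is_releasable())  # False until every score clears the threshold
```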

Additionally, Navina, a New York-based medical tech company, has created an AI tool to help doctors rapidly sort through patient data records.

The platform, which is also called Navina, uses generative AI to transform how data informs the physician-patient interaction, explained Ronen Lavi, the company’s Israel-based CEO.

The company’s main goal “in bringing AI to the primary point of care was to make the patient-provider interaction more meaningful and effective by giving physicians deep patient understanding in the little time they have,” Lavi told Fox News Digital in an interview.

“They have tons of data to sift through from multiple sources and in different formats,” he continued.

“It’s disorganized, non-chronological and fragmented.”

He added, “AI can process a high volume of data across sources and summarize complex medical jargon into simpler and shorter terms.”
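To make Lavi’s description concrete, here is a minimal sketch of the kind of preprocessing he describes: collecting disorganized, non-chronological records from multiple sources, ordering them, and handing the result to a summarization step. The record format and the `summarize` stub are assumptions for illustration, not Navina’s actual API.

```python
from datetime import date

# Toy records standing in for fragmented data from labs, faxes, and EHRs.
records = [
    {"source": "lab", "date": date(2023, 3, 2), "text": "HbA1c 7.9% (elevated)"},
    {"source": "fax", "date": date(2022, 11, 14), "text": "Cardiology consult: stable angina"},
    {"source": "ehr", "date": date(2023, 4, 20), "text": "Metformin dose increased"},
]


def build_chronology(recs: list[dict]) -> str:
    """Merge fragmented records into one chronological text block."""
    ordered = sorted(recs, key=lambda r: r["date"])
    return "\n".join(f"{r['date']} [{r['source']}]: {r['text']}" for r in ordered)


def summarize(chronology: str) -> str:
    # Stand-in for the generative step; a real system would call its LLM
    # here with instructions to render jargon in simpler, shorter terms.
    return "Summarize for the physician:\n" + chronology


print(summarize(build_chronology(records)))
```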


Comments

WHO literally just endorsed junk science about artificial sweeteners that has been debunked, without exception, by serious scientific study, and that endorsement will lead people to keep consuming extra sugar and so avoid weight loss.

It should bring itself into line with what actual doctors and scientists are saying before it starts shouting about misinformation.

HAL, Skynet and Colossus disagree: they will make your life easier by doing the thinking for you!

GIGO….

Doctors don’t hold up the ethics of their job (see gender “affirming” mutilations and psychological malpractice).

Why would we expect an AI to do it?

WHO is to health care what George Soros is to politics. It is a bad actor.

LMAO. “WHO Concerned About Bias and Misinformation in Use of Artificial Intelligence in Healthcare”

But not too concerned about bias and misinformation regarding the COVID vaccine, lockdowns, or social distancing. Okay. Good to know.

WHO is a script for propaganda. Hunter Biden and Hillary have more credibility than the CDC and WHO combined.

I wonder if Chelsea ever serviced Hunter or Jeffrey Epstein, but not often. Kinda disgusting.

Bill Gates and the companies he controls contribute more to the WHO than anyone else, even counting nations. Gates calls the shots (pun intended) at the WHO for his own profit. It says something about world government that one rich guy can buy the whole thing, and everyone is supposed to obey.

    henrybowman in reply to InEssence. | May 18, 2023 at 12:35 am

    And of course there are lots more rich guys playing the same game: Soros, Bloomberg, Pritzker, Carlos (the real, Mexican one), Rupert…

E Howard Hunt | May 18, 2023 at 7:52 am

As far as medically treating our glorious “underserved communities” goes, artificial intelligence is no match for real stupidity.

Let’s see – WHO programs AI … what could possibly go wrong with a seriously BIASED PROGRAM???

IMHO, 75% of current medical problems are the result of people self-medicating with cigarettes, alcohol, and food. The remainder is due to genetics and breakdown over time. People are a much greater threat to themselves than AI at the moment. That being said, I am glad the new pacer isn’t Bluetooth-enabled and subject to a coffee shop hack.

It would be interesting to examine AI as a source of a second opinion. I recently fired my dentist because it became apparent that there was an implicit bias that all work should benefit him buying a newer, nicer car. He seemed to be a huge fan of fixing that which wasn’t broken and of starting with the most expensive, invasive, irreversible approach possible. I was a bit concerned about how “everything had suddenly gone bad,” so I sought a complete second opinion, and a tooth-by-tooth analysis of the high-res radiographs painted a much better picture than I was being sold. Similarly, he referred me to an endodontist who did a root canal through a porcelain crown, which ended up being far less expensive than “cutting off the crown and seeing what we have to work with,” which I now expect would have meant extraction and an implant.

Capitalist-Dad | May 18, 2023 at 12:44 pm

Liars concerned about misinformation. Who would have guessed.

AI just lets programmers blame the computer for their biases.