WHO Concerned About Bias and Misinformation in Use of Artificial Intelligence in Healthcare

A few months ago, I reported that the chatbot ChatGPT passed a medical licensing exam.

Now the World Health Organization (WHO) is urging caution in the use of artificial intelligence (AI) for public healthcare.

The organization has expressed concerns that AI used to reach decisions could be biased or misused.

The WHO said it was enthusiastic about the potential of AI but had concerns over how it will be used to improve access to health information, as a decision-support tool and to improve diagnostic care.

The WHO said in a statement that the data used to train AI may be biased and generate misleading or inaccurate information, and that the models can be misused to generate disinformation.

It was “imperative” to assess the risks of using generative large language model tools (LLMs), like ChatGPT, to protect and promote human wellbeing and protect public health, the U.N. health body said.

Considering chatbots have been fed human-hating, leftist dogma as inputs, training that led one Belgian man to commit suicide in the name of climate change activism, these concerns are certainly warranted.

The WHO also noted privacy concerns, which Italian officials likewise expressed when they temporarily blocked ChatGPT.

LLMs produce responses that “can appear authoritative and plausible,” even if the responses are incorrect, WHO warns. Content produced by AI, whether text, audio or video, that contains disinformation can be “difficult for the public to differentiate” from reliable material.

The tech could also be trained on data “for which consent may not have been previously provided for such use,” raising concerns about AI’s use of sensitive data.

A poll released earlier this year found that a majority of Americans said they’d be uncomfortable with their health care provider relying on AI as part of their medical care. The WHO’s statement comes amid ongoing debate over the new and advanced tech, and its place in arenas like school, medicine and elsewhere.

Interestingly, a startup backed by a $50 million seed round from General Catalyst and Andreessen Horowitz is developing a large language model to power a variety of healthcare bots that aim to address the concerns raised by the WHO.

…They’re calling the Palo Alto-based startup Hippocratic AI in a nod to the code of ethics doctors take. That code, based on writings attributed to the ancient Greek physician Hippocrates, is often summarized as “do no harm.”

But generative AI models can’t swear to uphold ethical codes, and, as the viral chatbot ChatGPT has demonstrated, can also produce false information in response to questions. Regulators have vowed to take a closer look at their use in healthcare, with FDA Commissioner Robert Califf saying he sees the “regulation of large language models as critical to our future” at a conference earlier this month.

While the future regulatory landscape is unclear, [entrepreneur and co-founder and CEO at Health Equity Labs Munjal] Shah says Hippocratic AI is taking a three-pronged approach to testing its large language model in healthcare settings, which involves passing certifications, training with human feedback and testing for what the company calls “bedside manner.”

Rather than give health system customers access to the entire model, Shah says Hippocratic AI is planning to provide access to different healthcare “roles,” which will be released when a given role has achieved a certain level of “performance and safety.” One key measuring stick will be the licensing exams and certifications that a human would have to pass in order to operate in that role.
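Neither Shah nor the article spells out how such role gating would work in practice, but the basic idea, releasing a role only once it clears the exams and safety bars a person in that role would face, can be sketched roughly as follows. The fields, role names, thresholds and scores below are hypothetical illustrations, not Hippocratic AI’s actual criteria.

```python
from dataclasses import dataclass

# Hypothetical sketch of "role gating": a healthcare role is released to
# customers only once it clears the certification exams and safety bars a
# human in that role would face. All names and numbers are illustrative,
# not Hippocratic AI's actual system.

@dataclass
class RoleEvaluation:
    role: str                        # e.g. "dietitian", "billing agent"
    exam_scores: dict[str, float]    # licensing exam -> score achieved
    exam_pass_marks: dict[str, float]  # licensing exam -> passing score
    bedside_manner: float            # human-rated score in [0, 1]
    safety_incidents: int            # harmful outputs found in red-teaming

def can_release(ev: RoleEvaluation,
                min_bedside: float = 0.9,
                max_incidents: int = 0) -> bool:
    """Release a role only if every required exam is passed, human raters
    judge its 'bedside manner' acceptable, and red-teaming found no harm."""
    exams_passed = all(
        ev.exam_scores.get(exam, 0.0) >= mark
        for exam, mark in ev.exam_pass_marks.items()
    )
    return (exams_passed
            and ev.bedside_manner >= min_bedside
            and ev.safety_incidents <= max_incidents)

# Example: a hypothetical dietitian role that passes its exam but falls
# short of the bedside-manner bar, so it stays unreleased.
evaluation = RoleEvaluation(
    role="dietitian",
    exam_scores={"registered_dietitian_exam": 88.0},
    exam_pass_marks={"registered_dietitian_exam": 75.0},
    bedside_manner=0.82,
    safety_incidents=0,
)
print(can_release(evaluation))  # False
```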

Additionally, Navina, a New York-based medical tech company, has created an AI tool to help doctors rapidly sort through patient records.

The platform, which is also called Navina, uses generative AI to transform how data informs the physician-patient interaction, explained Ronen Lavi, the company’s Israel-based CEO.

The company’s main goal “in bringing AI to the primary point of care was to make the patient-provider interaction more meaningful and effective by giving physicians deep patient understanding in the little time they have,” Lavi told Fox News Digital in an interview.

“They have tons of data to sift through from multiple sources and in different formats,” he continued. “It’s disorganized, non-chronological and fragmented.”

He added, “AI can process a high volume of data across sources and summarize complex medical jargon into simpler and shorter terms.”
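Lavi’s description amounts to a simple pipeline: collect the scattered records, put them in chronological order, and hand the combined timeline to a generative model for a plain-language summary. The sketch below illustrates that shape; the record format and the `llm` callable are placeholder assumptions, not Navina’s actual API.

```python
from datetime import date
from typing import Callable

# Rough sketch of the pipeline Lavi describes: gather fragmented,
# non-chronological records from multiple sources, order them into one
# timeline, and ask a generative model for a short plain-language summary.
# The record shape and the `llm` callable are assumptions for illustration.

Record = dict  # e.g. {"source": "lab", "date": date(...), "text": "..."}

def summarize_patient(records: list[Record],
                      llm: Callable[[str], str]) -> str:
    # Merge the scattered records into a single chronological timeline.
    timeline = sorted(records, key=lambda r: r["date"])
    notes = "\n".join(
        f'{r["date"].isoformat()} [{r["source"]}]: {r["text"]}'
        for r in timeline
    )
    prompt = (
        "Summarize this patient history for a primary-care physician "
        "in short, plain-language terms:\n" + notes
    )
    return llm(prompt)  # any text-generation backend

# Example with a stub model, just to show the call shape:
records = [
    {"source": "lab", "date": date(2023, 3, 1), "text": "HbA1c 8.1%"},
    {"source": "cardiology", "date": date(2022, 11, 4),
     "text": "Echocardiogram: mildly reduced ejection fraction."},
]
print(summarize_patient(records, llm=lambda p: p[:120] + "..."))
```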

