The James Webb Space Telescope made history again, this time because Google's much-promoted new chatbot incorrectly answered a question about the instrument.
NASA’s James Webb Space Telescope (Webb or JWST) launched in December 2021 on a deep-space mission and became fully operational in July 2022. It’s been scanning the cosmos and catching all kinds of incredible things: an accidental find of a space rock or asteroid, the historic agency DART mission that deliberately crashed into another asteroid, and unprecedented views of galaxies and the early universe.

But Google’s hyped artificial intelligence (AI) chatbot, Bard, just attributed one discovery to Webb that was completely false. In a livestreamed event, blog post and tweet showing the AI in a demo Tuesday, the chatbot was asked, “What new discoveries from the James Webb Space Telescope can I tell my nine-year-old about?”

The query came back with two correct responses about “green pea” galaxies and 13-billion-year-old galaxies, but it also included one whopping error: that Webb took the very first pictures of exoplanets, or planets outside the solar system. The timing of that mistake was off by about two decades.

The actual first image of an exoplanet was released back in 2004, according to NASA. The incredible sight was captured by a powerful ground observatory: the Very Large Telescope, a flagship facility of the European Southern Observatory in Chile.
The answer reportedly cost Google $100 billion in market value.
Alphabet Inc (GOOGL.O) lost $100 billion in market value on Wednesday after its new chatbot shared inaccurate information in a promotional video and a company event failed to dazzle, feeding worries that the Google parent is losing ground to rival Microsoft Corp (MSFT.O).

Alphabet shares slid as much as 9% during regular trading with volumes nearly three times the 50-day moving average. They pared losses after hours and were roughly flat. The stock had lost 40% of its value last year but rallied 15% since the beginning of this year, excluding Wednesday’s losses.

Reuters was first to point out an error in Google’s advertisement for chatbot Bard, which debuted on Monday, about which satellite first took pictures of a planet outside the Earth’s solar system.
Google employees slammed CEO Sundar Pichai on internal message boards over the incident.
The much-hyped rival to the popular Microsoft-backed ChatGPT chatbot, which is seen as a potential threat to Google’s search engine dominance, flubbed an answer during Monday’s presentation.

In posts on Google’s internal forum “Memegen,” workers described the troubled launch as “rushed,” “botched” and “un-Googley,” according to CNBC, which viewed some of the messages.

“Dear Sundar, the Bard launch and the layoffs were rushed, botched, and myopic. Please return to taking a long-term outlook,” one user captioned a meme featuring a photo of Pichai looking serious, according to the outlet.

“Rushing Bard to market in a panic validated the market’s fear about us,” an employee wrote in another post.
The incident also illustrates the problems that come with bolting artificial intelligence chatbots onto search engines.
It is evidence that the move to use artificial intelligence chatbots like this to provide results for web searches is happening too fast, says Carissa Véliz at the University of Oxford. “The possibilities for creating misinformation on a mass scale are huge,” she says….

Véliz says the error, and the way it slipped through the system, is a prescient example of the danger of relying on AI models when accuracy is important.

“It perfectly shows the most important weakness of statistical systems. These systems are designed to give plausible answers, depending on statistical analysis – they’re not designed to give out truthful answers,” she says.

“We’re definitely not ready for what’s coming. Companies have a financial interest in being the first ones to develop or to implement certain kinds of systems, and they’re just rushing through it,” says Véliz. “So we’re not giving society time to talk about it and to think about it and they’re not even thinking about it very carefully themselves…
Given how much Google throttles information from conservative analysts and those with takes not aligned with the current political narratives, wrong answers will be a feature rather than a bug.