
Google Loses $100 Billion in Market Value after Chatbot Incorrectly Answers Question about Webb Space Telescope

Google employees expressed anger at CEO Sundar Pichai over the troubled rollout of the company’s Bard AI chatbot.

The James Webb Space Telescope made history again, this time as the subject of a question that Google’s heavily promoted new chatbot answered incorrectly.

NASA’s James Webb Space Telescope (Webb or JWST) launched in December 2021 on a deep-space mission and became fully operational in July 2022. It’s been scanning the cosmos and catching all kinds of incredible things: an accidental asteroid find, observations of the agency’s historic DART mission that deliberately crashed a spacecraft into an asteroid, and unprecedented views of galaxies and the early universe.

But Google’s hyped artificial intelligence (AI) chatbot, Bard, just attributed one discovery to Webb that was completely false. In a livestreamed event, blog post and tweet showing the test AI in a demo Tuesday, the chatbot was asked, “What new discoveries from the James Webb Space Telescope can I tell my nine-year-old about?”

The query came back with two correct responses about “green pea” galaxies and 13-billion-year-old galaxies, but it also included one whopping error: that Webb took the very first pictures of exoplanets, or planets outside the solar system. That claim is off by nearly two decades.

The actual first image of an exoplanet was released back in 2004, according to NASA. The incredible sight was actually captured by a powerful ground-based observatory: the Very Large Telescope, a flagship facility of the European Southern Observatory in Chile.

The answer reportedly cost Google $100 billion in market value.

Alphabet Inc (GOOGL.O) lost $100 billion in market value on Wednesday after its new chatbot shared inaccurate information in a promotional video and a company event failed to dazzle, feeding worries that the Google parent is losing ground to rival Microsoft Corp (MSFT.O).

Alphabet shares slid as much as 9% during regular trading with volumes nearly three times the 50-day moving average. They pared losses after hours and were roughly flat. The stock had lost 40% of its value last year but rallied 15% since the beginning of this year, excluding Wednesday’s losses.

Reuters was first to point out an error in Google’s advertisement for chatbot Bard, which debuted on Monday, about which telescope first took pictures of a planet outside the solar system.

Google employees slammed CEO Sundar Pichai on internal message boards over the incident.

The much-hyped rival to the popular Microsoft-backed ChatGPT chatbot, which is seen as a potential threat to Google’s search engine dominance, flubbed an answer during Monday’s presentation.

In posts on Google’s internal forum “Memegen,” workers described the troubled launch as “rushed,” “botched” and “un-Googley,” according to CNBC, which viewed some of the messages.

“Dear Sundar, the Bard launch and the layoffs were rushed, botched, and myopic. Please return to taking a long-term outlook,” one user captioned a meme featuring a photo of Pichai looking serious, according to the outlet.

“Rushing Bard to market in a panic validated the market’s fear about us,” an employee wrote in another post.

The incident also shows the problems associated with linking artificial intelligence to search engines.

It is evidence that the move to use artificial intelligence chatbots like this to provide results for web searches is happening too fast, says Carissa Véliz at the University of Oxford. “The possibilities for creating misinformation on a mass scale are huge,” she says.

…Véliz says the error, and the way it slipped through the system, is a prescient example of the danger of relying on AI models when accuracy is important.

“It perfectly shows the most important weakness of statistical systems. These systems are designed to give plausible answers, depending on statistical analysis – they’re not designed to give out truthful answers,” she says.

“We’re definitely not ready for what’s coming. Companies have a financial interest in being the first ones to develop or to implement certain kinds of systems, and they’re just rushing through it,” says Véliz. “So we’re not giving society time to talk about it and to think about it and they’re not even thinking about it very carefully themselves…
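Her point about plausibility versus truth is easy to illustrate. The toy sketch below is plain Python with a made-up miniature corpus; a simple frequency count stands in for the statistical model, which is not how Bard is actually built. It “answers” a prompt by returning whatever continuation is most common in its training text, with no check on whether that continuation is true:

from collections import Counter

# Toy "training corpus": the model sees only text, not facts. If hype about
# Webb dominates the text, the statistically likeliest answer is the wrong one.
corpus = [
    "the first exoplanet image was taken by webb",
    "the first exoplanet image was taken by webb",
    "the first exoplanet image was taken by webb",
    "the first exoplanet image was taken by the very large telescope",
]

def most_plausible_continuation(prompt: str) -> str:
    """Return the continuation that most often follows `prompt` in the corpus.

    Frequency stands in for the statistical model: it optimizes plausibility,
    not truth.
    """
    continuations = Counter(
        text[len(prompt):].strip()
        for text in corpus
        if text.startswith(prompt)
    )
    answer, _count = continuations.most_common(1)[0]
    return answer

print(most_plausible_continuation("the first exoplanet image was taken by"))
# Prints "webb": the most plausible continuation, and also the false one.

A real large language model replaces the frequency table with a neural network trained on vastly more text, but the objective is still a plausibility score of this kind, which is how confident errors like the exoplanet claim slip through.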

Given how much Google throttles information from conservative analysts and those with takes not aligned with the current political narratives, wrong answers will be a feature rather than a bug.

Comments

It must be hard to program using feels vs logic in the process.

A + B = ‘wellllllllll, ….”

G00gle is evil, so if SMOAD incinerated it tomorrow, I would rejoice.

A larger observation includes, “Are ChatGPT and g00gle Bard similar AI inventions?” They both come off as powerful computers employed as mere talking encyclopedias. IE: a neat new toy for nerds, but eventually a real pain to maintain.

The first rule of computers is garbage in; garbage out. People decide which garbage will go in, but can AI learn? Or create or gather new information? Or is it merely a more powerful, digital, universal Turing machine destined to compute a quadrillion different equations based on the same info giving the appearance of intelligence?

    I don’t know about Bard but ChatGPT is terrible at math.

      BierceAmbrose in reply to irv. | February 11, 2023 at 10:27 pm

      ChatBot is bad at math because it key/value models the babblings of people dumb about math, to come out with its “answers.”

      This crop of AI has bigger processing, of bigger samples, with some more sophisticated language modeling on the way in and out. That doesn’t change the underlying strategy, of pattern-match n regurgitate.

      This also explains why they do so well at academic writing and exams — exactly what we ask those candidates to do, and it turns out what the folks who hire those “education” products are looking for. “Don’t surprise me, but make it look good.”

    AI is currently “mere talking encyclopedias…”

    I wholly agree. BUT, that won’t be for long.

    henrybowman in reply to LB1901. | February 11, 2023 at 5:20 pm

    The real danger here is when stupid people start relying on these “services” to give them answers they can’t so easily check against an encyclopedia.

    BierceAmbrose in reply to LB1901. | February 11, 2023 at 10:12 pm

    AIs in this crop are Zeitgeist engines, sampling from the random hot-takes of folk with nothing better to do than type at strangers on the internet.

    That they can build big models with lots of bad inferences vs. a few, sampled from vast shoals of trash-fish comment is barely notable as a feat of scaling.

The irony of a bunch of prog back room trolls criticizing the layoffs as “rushed” when this will now cause even more layoffs. You don’t lose 100 bill and just shrug that off, even if you are Alphabet.

Sundar wokeshit and his troll army deserve each other. Meanwhile, James Damore warned you that playing to people’s strengths rather than diversity for the sake of diversity was the best way to make google stronger. For this favor, he was fired immediately and dismissively. Eat your cake, jackasses!

Well, Google is quickly becoming synonymous with disinformation; therefore the chatbot is brighter than you expect.
If p, then q the rlo states p=q

“Google employees expressed anger at CEO Sundar Pichai over troubled rollout”

Well, go ahead, guys, cancel him! Shadowban him! Tear down his YouTube videos! That’s what you do best, isn’t it?

A hundred billion dollar market correction on a single data point. Google stock must be really overpriced and unstable, two things that don’t belong in a portfolio except in very small numbers.

    henrybowman in reply to georgfelis. | February 11, 2023 at 5:22 pm

    Yeah. What struck me about this is that there were enough non-nerd people who cared about this one “programming error” to over-react in this fashion. Though it couldn’t happen to a nicer clique of Bond villains.

    rhhardin in reply to georgfelis. | February 11, 2023 at 5:45 pm

    The value of a stock is the present value of all the earnings it will ever have, or, what is the same thing, the present value of all the dividends it will ever pay.

    The more those earnings or dividends are in the future, the more speculative valuations are going to be, both because of the company’s good or bad fortunes in the future and because you have to guess what interest rate to discount future returns at.

    Speculative doesn’t mean wrong. If you know which way it’s wrong, you can make money on it.
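    A minimal sketch of that arithmetic (hypothetical dividend stream and discount rates, not Alphabet’s actual numbers) shows how sensitive today’s value is to the rate you guess:

def present_value(dividends, discount_rate):
    """Discount a stream of projected future dividends back to today.

    dividends: projected payout for each future year (year 1, year 2, ...)
    discount_rate: the guessed rate used to discount future returns
    """
    return sum(
        dividend / (1 + discount_rate) ** year
        for year, dividend in enumerate(dividends, start=1)
    )

# Hypothetical payouts, dollars per share over the next five years.
projected = [5.0, 5.5, 6.0, 6.5, 7.0]
print(round(present_value(projected, 0.05), 2))  # ~25.77 at a 5% rate
print(round(present_value(projected, 0.10), 2))  # ~22.38 at a 10% rate

    The same projected payouts are worth noticeably less today at the higher guessed rate, which is what makes the valuation speculative.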

      healthguyfsu in reply to rhhardin. | February 12, 2023 at 12:08 am

      That’s a really long winded way of saying the obvious.

      Google already had the value of this AI baked into its trading price. Speculative gamblers (read as dumb genz types that trade on memes and group chats) lose their shirts in situations like this.

    If you talk to a young investor, they have no clue that stocks can drop and companies can disappear. They’re learning the hard way.

    diver64 in reply to georgfelis. | February 12, 2023 at 6:31 am

    Yeah, that is what the story claims, but I have my doubts that Google crashed like that on one incident. I’d say there is more going on behind the scenes.

The Webb Space Telescope allows us to see what the universe looked like before wokeness.

The significant thing about the current AI iterations is that the more the bot “understands” the query, the easier it is to steer the answers toward what the company wants you to think. If the company doesn’t want to push its own views, it could make a nice living selling opinions in the answers, the way it sells advertising.

    BierceAmbrose in reply to rhhardin. | February 11, 2023 at 10:20 pm

    Reading in the DuckDuckGo references below… anything they tailor to you specifically is ads, whatever they call it. Search results are already ads. I expect the chat bot results to be soon, if not already.

    The interesting thing is, if the ChatBots work as advertised, their output can also be gamed by bot-infesting the stuff they sample.

    Do we really need bot amplifying bots? They seem to be doing fine without the help.

Great, they need to lose an extra 1 billion in income. They went woke, now it’s time to go broke!!!! DuckDuckGo is better anyway.

    For those who have never used Duckduckgo.com to search a political issue: you’ll be shocked if you compare its results to the censored version of the same search on Google.com.

    diver64 in reply to KB6DX. | February 12, 2023 at 6:33 am

    DuckDuckGo and Brave are what I use and I’m quite happy with them. What TheFineReport says is quite true. Try a similar enquiry using different browsers like Brave, Duck, Google, Yahoo, Firefox and you would be stunned at not only the different results but the obvious slant.

IIRC, Webb’s images first appeared on or about the same day as Gus Fring’s Double was revealed on Better Call Saul. Some clever person combined the two events to create a picture of the Double titled Hubble Telescope: and a picture of Gus titled Webb Telescope:. A case of 1 + 1 > 2, IMHO.

I had a Psychology prof at Oberlin whose area of expertise was surveys, and he made the statement that any reasonably competent person in that field could create a survey which captured accurate data, but the really good people could use it to manipulate a person’s views as a product of taking the survey. This was perfectly portrayed by Sir Humphrey on Yes, Prime Minister who showed that people who either supported or were against National Service could be persuaded to have the opposite opinion after answering a few questions.

https://www.youtube.com/watch?v=ahgjEjJkZks&t=10s

Just one example of why this show is timeless.

Google’s AI solution is 1999’s Ask Jeeves.

Sorry, but everyone here is missing the big picture and this is possibly the biggest story ever. ChatGPT has been released and will soon replace knowledge itself.

https://www.youtube.com/watch?v=r0AbBcYlvn4

We were recently warned by Jordan Peterson, Elon Musk, and Neil Oliver that something will happen in March that will mark the beginning of the end of freedom. They were wrong; it’s already happened. They have already taken a big bite out of Google, which may never recover. There may be no stopping this. But never mind. You were saying?

    The competition now is who is going to own it and Bill Gates is leading the charge. ChatGPT will soon be the big wrecking ball crushing knowledge itself. With everything digitized, this will be the worst destruction of knowledge in human history. Far worse than the destruction of the library of Alexandria and so many other irreplaceable vaults of ancient knowledge now lost forever. At least, those disasters created a void. This disaster is bringing about permanent slavery of the vast majority of humanity. It is all-encompassing. Even your refrigerator will be ratting you out… if that technology survives.