You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese wouldn’t slide off (pssst… please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the AI large language models (LLMs) that drive AI Overviews, and this feature “is still an unsolved problem.”

  • lowleveldata · 6 months ago

    I’m pretty sure no babies know that we’re not supposed to eat glue. Should we kill them off too?

    • 0x0 · 6 months ago

      You mean those babies that can read and use ChatGPT unsupervised?

      • lowleveldata · 6 months ago

        The premise was that people who need advice not to eat glue have bad genes. Babies commonly need that advice, whether it comes from AI or not.