• @[email protected]
      link
      fedilink
      English
      28
      edit-2
      5 months ago

      I just tried to have Gemini navigate to the nearest Starbucks and the POS found one 8hrs and 38mins away.

      Absolute trash.

        • @[email protected]
          link
          fedilink
          125 months ago

          It seems to think you need to leave Alabama but aren’t ready for a state as tolerable as Georgia

          • @[email protected]
            link
            fedilink
            English
            75 months ago

            I would totally leave if the “salary to cost of living” ratio wasn’t so damn good.

            I’d move to Germany or the Netherlands or Sweden or Norway so fast if I could afford it.

        • @[email protected]
          link
          fedilink
          English
          55 months ago

          That leads me to believe it thinks you’re in North Carolina. Have you given Gemini location access? Are you on a VPN?

          • @[email protected]
            link
            fedilink
            English
            55 months ago

            No VPN, it all has proper location access. I even tried it with a local restaurant that I didn’t think was a chain, and it found one in Tennessee. I’m like 10 minutes away from where I told it to go.

    • IndiBrony · 20 points · 5 months ago

      Despite that, it delivers its results with much applum!

  • @[email protected]
    link
    fedilink
    English
    655 months ago

    Some “AI” LLMs resort to light hallucinations. And then ones like this straight-up gaslight you!

    • @[email protected]
      link
      fedilink
      505 months ago

      Factual accuracy in LLMs is “an area of active research”, i.e. they haven’t the foggiest how to make them stop spouting nonsense.

      • @[email protected]
        link
        fedilink
        285 months ago

        DuckDuckGo figured this out quite a while ago: just fucking summarize Wikipedia articles and link to the precise section the text was lifted from.

      • @[email protected]
        link
        fedilink
        English
        12
        edit-2
        5 months ago

        Because accuracy requires making a reasonable distinction between truth and fiction, and that requires context, meaning, understanding. Hell, even humans aren’t that great at this task. This isn’t a small problem; I don’t think you solve it without creating AGI.

  • Margot Robbie · 41 points · 5 months ago

    Ok, let me try listing words that end in “um” that could be (even tangentially) considered food.

    • Plum
    • Gum
    • Chum
    • Rum
    • Alum
    • Rum, again
    • Sea People

    I think that’s all of them.

    • @[email protected]
      link
      fedilink
      275 months ago

      There’s going to be an entire generation of people growing up with this and “learning” this way. It’s like every tech company got together and agreed to kill any chance of smart kids.

      • @[email protected]
        link
        fedilink
        115 months ago

        Isn’t it the opposite? Kids see so many examples of obviously wrong answers that they learn to check everything.

        • @[email protected]
          link
          fedilink
          65 months ago

          How do they know something is obviously wrong when they’re trying to learn it? For “bananum”, sure, but what about anything at school or college?

          • @[email protected]
            link
            fedilink
            15 months ago

            The bananum was my point. Maybe as AI improves there won’t be as many of these obviously wrong things, but as it stands virtually any Google search gets a shitty wrong answer from AI, so they see tons of this bad info well before college.

  • @[email protected]
    link
    fedilink
    English
    28
    edit-2
    5 months ago

    And yet it doesn’t even list ‘Plum’, or did it think ‘Applum’ was just a variation of a plum?

    • @[email protected]
      link
      fedilink
      95 months ago

      Well, plum originally comes from applum which morphed into a plum so yeah.

      And that’s absolutely not true.

          • @[email protected]
            link
            fedilink
            15 months ago

            A lot of folks on the internet don’t get even the most obvious jokes without some sort of sarcasm indicator, because some things are really hard to read in text versus in person. LLMs have no idea what the hell sarcasm is, and their training data definitely includes some, especially if they were trained on any of my old Reddit comments.

  • shininghero · 17 points · 5 months ago

    Strawberrum sounds like it’ll be at least 20% abv. I’d like a nice cold glass of that.

  • Sunny' 🌻 · 14 points · 5 months ago

    It’s crazy how bad AI gets if you make it list names ending with a certain pattern. I wonder why that is.

    • @[email protected]
      link
      fedilink
      English
      115 months ago

      I’m not an expert, but it has something to do with full words vs. partial words. It also can’t play Wordle, because it doesn’t have a proper concept of individual letters; it’s trained to handle only full words.
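
      To make that concrete, here’s a quick sketch using OpenAI’s tiktoken library as a stand-in tokenizer (Gemini’s tokenizer is different, and the exact splits vary by model, so treat the printed values as illustrative):

      ```python
      import tiktoken

      # Load a real BPE tokenizer (the one used by several OpenAI models).
      enc = tiktoken.get_encoding("cl100k_base")

      word = "strawberry"
      ids = enc.encode(word)                   # a short list of integers
      pieces = [enc.decode([i]) for i in ids]  # the chunks those integers map to

      print(ids)     # the only thing the model ever "sees"
      print(pieces)  # multi-letter chunks like ['str', 'awberry'] -- never letters
      ```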

      • @[email protected]
        link
        fedilink
        35 months ago

        They don’t even handle full words; it’s just arbitrary groups of characters (including spaces and other stuff like apostrophes, AFAIK) represented to the software as indexes on a list. It literally has no clue what language even is; it’s a glorified calculator that happens to work on words.

          • @[email protected]
            link
            fedilink
            15 months ago

            Not really; a basic calculator doesn’t tend to have variables and stuff like that.

            I say it’s a glorified calculator because it’s just getting input in the form of numbers (again, it has no clue what a language or a word is) and spitting back out some numbers that are then reconstructed into words, which is precisely how we use calculators.
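
            A toy version of that “indexes on a list” idea, with a made-up six-entry vocabulary (real vocabularies run to roughly 100k entries):

            ```python
            # Hypothetical miniature vocabulary.
            vocab = ["<unk>", "straw", "berry", " is", " a", " fruit"]
            token_to_id = {tok: i for i, tok in enumerate(vocab)}

            def encode(tokens):
                # Unknown tokens fall back to index 0.
                return [token_to_id.get(t, 0) for t in tokens]

            def decode(ids):
                return "".join(vocab[i] for i in ids)

            ids = encode(["straw", "berry", " is", " a", " fruit"])
            print(ids)          # [1, 2, 3, 4, 5] -- all the model ever works with
            print(decode(ids))  # "strawberry is a fruit" -- text exists only at the edges
            ```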

    • @[email protected]
      link
      fedilink
      English
      55 months ago

      It can’t see what tokens it puts out; you would need additional passes on the output for it to get this right. That’s computationally expensive, so I’m pretty sure that didn’t happen here.
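
      The cheapest kind of extra pass wouldn’t even need the model: once the output is decoded back into text, ordinary code can check the letters the LLM can’t see. A minimal sketch with a made-up candidate list:

      ```python
      # Hypothetical model output for "list fruits ending in 'um'".
      candidates = ["Applum", "Bananum", "Strawberrum", "Plum"]

      # The model can't inspect its own tokens, but a plain string check
      # can verify the decoded text directly.
      ends_in_um = [w for w in candidates if w.lower().endswith("um")]

      # A second filter against a (stub) dictionary of real words.
      real_words = {"plum"}
      verified = [w for w in ends_in_um if w.lower() in real_words]

      print(ends_in_um)  # ['Applum', 'Bananum', 'Strawberrum', 'Plum']
      print(verified)    # ['Plum'] -- the only real fruit that survives
      ```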

      • @ramirezmike · 1 point · 5 months ago

        doesn’t it work literally by passing in everything it said to determine what the next word is?

        • adderaline · 1 point · 5 months ago

          It chunks text up into tokens, so it isn’t processing the words as if they were composed of letters.

      • @[email protected]
        link
        fedilink
        English
        15 months ago

        With the amount of processing it takes to generate the output, a simple pass over the to-be-final output would make sense…

    • @[email protected]
      link
      fedilink
      5
      edit-2
      5 months ago

      LLMs aren’t really capable of understanding spelling. They’re token-prediction machines.

      LLMs have three major components: a massive database of “relatedness” (how closely related the meanings of tokens are), a transformer (figuring out which of the previous words carry the most contextual meaning), and statistical modeling (the likelihood of the next word, like what your cell phone does).

      LLMs don’t have any capability to understand spelling unless it’s something they’ve been specifically trained on, like “color” vs. “colour”, which is discussed in many training texts.

      “Fruits ending in ‘um’” or “Australian towns beginning with ‘T’” aren’t talked about in the training data enough to build a strong relatedness database, so it’s incapable of answering those sorts of questions.

  • @[email protected]
    link
    fedilink
    45 months ago

    Ok, I feel like there have been more than enough articles explaining that these things don’t understand logic. Seriously. Misunderstanding their capabilities at this point is getting old. It’s time to start making stupid painful.