I’ll start. Non-serious answers also welcome.

  1. Linux (Linux)

  2. FOSS or die

  3. Video content should have been text

  4. Not caring a LOT about privacy makes you a non-Lemmy normie

(…)

  • @gens
    7 months ago

    When you look at a coffee cup from the side, you know it has a hole in it. Because you imagine it, not because it’s a reflex.

    An LLM is basically a point cloud of words. The training uses neural networks, and thus pattern recognition, but the LLM itself is closer to a database. But hey, SQL is also useful for AI (data storage/retrieval according to logic).
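
    A toy sketch of that “point cloud” picture, assuming plain word vectors and cosine-similarity lookup (the vectors below are made up for illustration, not real embeddings):

    ```python
    # Each word is a point; "retrieval" is nearest-neighbour lookup in that cloud.
    import math

    embeddings = {
        "coffee": [0.9, 0.1, 0.3],
        "cup":    [0.8, 0.2, 0.4],
        "hole":   [0.1, 0.9, 0.2],
        "linux":  [0.2, 0.1, 0.9],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / norm

    def nearest(word):
        # Rank every other word by similarity to the query word.
        query = embeddings[word]
        return sorted(
            (w for w in embeddings if w != word),
            key=lambda w: cosine(query, embeddings[w]),
            reverse=True,
        )

    print(nearest("coffee"))  # "cup" comes out closest
    ```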

    I’m not an LLM expert, by far. But right now they are not much more than a helper for finding out about things.

    Edit: I do like them. They’ve been helpful a couple of times, and I even got GPT4All installed on my computer for fun.
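
    For anyone curious, driving one of those local models from Python is only a few lines. A rough sketch with the gpt4all bindings; the model filename is just an example of something GPT4All can download:

    ```python
    # Rough sketch: local text generation with the gpt4all Python bindings.
    from gpt4all import GPT4All

    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # example model; fetched on first use
    with model.chat_session():
        reply = model.generate("Does a coffee cup have a hole in it?", max_tokens=100)
        print(reply)
    ```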

    • Communist
      7 months ago

      When you look at a coffee cup from the side, you know it has a hole in it. Because you imagine it, not because it’s a reflex.

      You’re looking at this backwards: you know those things because of previous experiences, and you predict this might happen because of them.

      This is still a matter of prediction, and if that had never happened to you even once, I guarantee you wouldn’t look for it.

      They’re also significantly smaller than our brains, and multimodality has been shown to help with reasoning, so, given that they’re text-only and much smaller, their reduced functionality is to be expected. Especially when you factor in that our brain has verification layers, which have only recently been discovered to work for LLMs; as far as I’m aware, none of them implement this yet.
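
      One way to picture a verification layer is a second pass over the model’s own draft. A toy sketch only, reusing the gpt4all bindings from above; the prompts and model name are made up for illustration, and this isn’t any particular published method:

      ```python
      # Toy draft-then-verify loop: generate an answer, then ask the model to check it.
      from gpt4all import GPT4All

      model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # example model name

      question = "Does a coffee cup seen from the side have a hole in it?"
      draft = model.generate("Answer briefly: " + question, max_tokens=80)

      # Second pass: ask the same model to check its own draft.
      check = model.generate(
          "Question: " + question + "\nDraft answer: " + draft + "\n"
          "Is the draft answer correct? Answer YES or NO with one sentence of reasoning.",
          max_tokens=60,
      )
      print(draft)
      print(check)
      ```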