I’ve been reading about recent research on how the human brain processes and stores memories, and it’s fascinating! It seems that our brains compress and store memories in a simplified, low-resolution format rather than as detailed, high-resolution recordings. When we recall these memories, we reconstruct them based on these compressed representations. This process has several advantages, such as efficiency, flexibility, and prioritization of important information.

Given this understanding of human cognition, I can’t help but wonder why AI isn’t being trained in a similar way. Instead of processing and storing vast amounts of data in high detail, why not develop AI systems that can compress and decompress input like the human brain? This could potentially lead to more efficient learning and memory management in AI, similar to how our brains handle information.

Are there any ongoing efforts in the AI community to explore this approach? What are the challenges and benefits of training AI to mimic this aspect of human memory? I’d love to hear your thoughts!

  • remotelove@lemmy.ca · 7 points · 26 days ago

    That's kinda how neural networks actually function. They don't store massive amounts of data; instead, similar to us, they tweak and adjust complex pathways of neurons that just convert an input into a response.

    When you ask an LLM a question, you are actually getting a list of words based on probabilities, not anything the LLM had to “think about” before responding. During training, the different patterns fed to the AI tweak and balance how and when specific neurons should fire. One way to think about it is that “memories,” or data, are stored in how the paths are formed, not in the core of the neuron itself.
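    To make the probability idea concrete, here is a toy sketch (the vocabulary and scores are made up for illustration, not taken from any real model) of how a list of raw scores becomes a next-word choice:

```python
import math

# Toy vocabulary and raw scores (logits) -- made-up numbers for illustration.
vocab = ["cat", "sat", "mat", "ran"]
logits = [2.0, 1.0, 0.5, 0.2]

# Softmax turns the scores into a probability distribution over the next word.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The "response" is simply the most probable word (greedy decoding).
next_word = vocab[probs.index(max(probs))]
print(next_word)  # "cat" has the highest score, so it wins
```

    Real models do this over vocabularies of tens of thousands of tokens, and often sample from the distribution instead of always taking the top word, but the core step is the same.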

    There are several hundred configurations of artificial neural networks that can mimic different functions of our brains, including memory.

    • CoderSupremeOP · 0 points · 26 days ago (edited)

      Oh, so it’s mostly a side effect, but they are still primarily being trained to predict the next word.

      • iii@mander.xyz · 1 point · 26 days ago

        Not necessarily; sometimes dimensionality reduction (the more common term for what is basically compression) is the explicit goal.

        It can be used for outlier detection, similarity search, etc.

        During training, you find a projection of the input (for example, an image) to a smaller space, and then back to the original image. This is referred to as encoding and decoding. The error function would be a measure of how similar the input and output images are.
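        A minimal sketch of that encode/decode loop, using a linear autoencoder on synthetic data (all the names, sizes, and numbers here are illustrative assumptions, not from any particular library or paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": 100 samples of 8-dim data that actually live
# on a 2-dim subspace, so a 2-dim code can represent them well.
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(100, 2)) @ basis

# Linear autoencoder: encode 8 dims down to 2, decode back to 8.
W_enc = rng.normal(scale=0.1, size=(8, 2))   # encoder projection
W_dec = rng.normal(scale=0.1, size=(2, 8))   # decoder projection

def mse(A, B):
    """Error function: mean-squared difference between input and output."""
    return float(np.mean((A - B) ** 2))

lr = 0.01
loss_start = mse(X, X @ W_enc @ W_dec)
for _ in range(500):
    Z = X @ W_enc             # encode: 8 -> 2 (the compressed code)
    X_hat = Z @ W_dec         # decode: 2 -> 8 (the reconstruction)
    err = X_hat - X
    # Gradient descent on the reconstruction error.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
loss_end = mse(X, X @ W_enc @ W_dec)
print(loss_start, loss_end)  # reconstruction error drops as training proceeds
```

        After training, samples with an unusually high reconstruction error are the ones the compressed representation can't explain well, which is exactly how this setup gets used for outlier detection.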