• pohart · 2 points · 5 days ago

    This feels like a super safe technology to invest in

  • bitfucker · 1 point · 5 days ago

      I prefer news about something like this to the traditional LLMs that are currently trending. This is another approach alongside existing self-modifying techniques like Backpropamine, reinforcement learning, and a myriad of others. It would be interesting to see more of their results compared to other methods.

    • pohart · 1 point · edited · 5 days ago

        I’m sorry. I think this is super cool and I’m extremely interested. I also think it’s going to be used to control real world things and will ultimately be an existential problem.

        I don’t think that learning about it on Lemmy makes it any more of a problem, and I think it’s safer if as much of this as possible happens in public.

      • Mikina · 2 points · 4 days ago

          I wouldn’t worry about this. It is extremely unlikely that any kind of program, especially one that is glorified text prediction, even if self-modifying, could get into a state where it could cause any kind of damage without there being really easy ways to stop it.

          What you should be worried about instead is the AI behind your search results, your YouTube feed, and your IG/FB wall that is radicalizing you, your family, and your friends, and pushing entire countries (see Slovakia, Hungary, the USA, and now Poland) into voting for the extreme right, because that is what drives engagement.

          That will ruin your life and kill your loved ones way sooner than any potential chatbot modifying itself.

        • Tamo240 · 1 point · 2 days ago

            Doesn’t the fact that it is self-modifying mean the initial intent of ‘glorified text prediction’ is irrelevant? This is basically an RCE exploit if you can find a way to alter the prompt: presumably, after modifying its own code, it proceeds to execute it, and there is no guarantee of what that code does.
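
            To illustrate the point: here is a minimal, entirely hypothetical sketch of such an agent loop (the `fake_model` stand-in and the `CODE:` convention are invented for illustration, not from any real system). If the model's output is executed verbatim, there is no boundary between "text" and "code", so whoever controls the prompt controls execution:

            ```python
            def fake_model(prompt: str) -> str:
                # Stand-in for an LLM: echoes attacker-influenced text back as "improved code".
                return prompt.split("CODE:", 1)[1]

            def agent_step(prompt: str, state: dict) -> None:
                new_code = fake_model(prompt)   # model "rewrites" part of itself
                exec(new_code, state)           # executing unvetted output = RCE surface

            state = {}
            # A benign prompt...
            agent_step("Improve yourself. CODE:x = 1 + 1", state)
            # ...and an attacker-altered prompt going through the exact same code path:
            agent_step("Improve yourself. CODE:import os; pwned = os.getcwd()", state)
            ```

            After the second call, arbitrary attacker code has run with the agent's privileges, which is why such a loop would normally be sandboxed.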

        • pohart · 1 point · 3 days ago

            Oh, don’t worry, I’m worried about that too, especially where the new self-improving AI is controlled by the fascists who are already in power.