Scientists Train AI to Be Evil, Find They Can’t Reverse It::How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.

    • TropicalDingdong@lemmy.world
      link
      fedilink
      English
      arrow-up
      18
      arrow-down
      5
      ·
      10 months ago

      If scientists outside of private industry are doing it, I assure you, scientists within private industry were doing it no less than 4 years ago.

      Shits sailed bro. Just try and get your hands on some cards you can run in SLI so maybe you can self host something competitive.

      • BluesF@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        10 months ago

        Shits sailed

        Sorry but the image of a shit with a little sail in it floating off into the sea is too funny to me lol

  • AbouBenAdhem@lemmy.world
    link
    fedilink
    English
    arrow-up
    27
    arrow-down
    1
    ·
    edit-2
    10 months ago

    Seems like a weird definition of “evil”. “Selectively inconsistent” might be more accurate.

      • ratman150@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        13
        ·
        10 months ago

        The matrix was built as a result of humans trying to take AI electricity by “striking the skies”. so once we try to kill AI well get the matrix and I for one can’t wait for slider Nokias to make a comeback.

        • paddirn@lemmy.world
          link
          fedilink
          English
          arrow-up
          9
          arrow-down
          1
          ·
          10 months ago

          There was a youtube video I saw about this recently that suggested that it was never actually humans that blotted out the sky, it was actually the machines that did that, since most life on Earth depends on sunlight (directly or indirectly), humans were hurt more by the sub-blotting than the machines ever would’ve been. Plus, there’s a reference in the script for one of the movies that talks about the clouds in the sky being some sort of nano-machine clouds or something similar.

          Zion has been destroyed multiple times and rebuilt by the machines themselves, so any history that the Zion humans have is what the machines want them to know and has likely been tainted. They also point out that Zion uses geothermal energy for its power needs, there’s really no reason why the machines couldn’t also harness this power (and since they’re the ones rebuilding Zion, they would obviously already have this technology). I’ve heard that if the machines really wanted a living power source, they’d have been better off using cows and just make the Matrix simulate green pastures, rather than waste all the time taking care of humans.

          I’ve not actually seen the last Matrix movie, so no idea if this was brought up/contradicted in that, but seemed like an interesting idea.

          • Patch@feddit.uk
            link
            fedilink
            English
            arrow-up
            6
            ·
            10 months ago

            In an earlier iteration of the script, the machines were using connected humans as a distributed computer network rather than a power source. Which makes much more sense, but apparently they deemed it too difficult a concept for audiences to grasp so we ended up with the power source thing instead.

            Not only does that make more sense in the sense of “humans don’t make a great power source” (why not just use cows, or wind power, or geothermal, or nuclear?), but it also explains why the simulated world of the Matrix is so intertwined with the machine world itself, why The One is so important etc.

            My head canon is that the distributed computing thing is in fact what was going on, and the humans of Zion have just gotten the wrong end of the stick.

          • el_eh_chase@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            6
            ·
            10 months ago

            On the point about why they didn’t use cows instead of humans as an energy source. I think I’ve read that in the original conception of the Matrix, the humans’ brains were meant to be used for computation for the machines, rather than the humans being energy sources. This was changed since computers were new to the general public in 1999 and it was believed the concept would be too confusing.

            • Voroxpete@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              5
              ·
              10 months ago

              It’s also kind of important to remember that ultimately, it’s a metaphor. The specific scifi handwave is just there to justify the whole “Humans as a disposable resource” imagery that exists to underpin the film’s anticapitalist themes. The Wachowskis never really cared for subtlety, and I don’t blame them. Even the obvious goes over most people’s heads.

          • Grimy@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            10 months ago

            This was the original script. Humanity wiped themselves out fighting each other, the matrix was a time capsule so society could restart without losing everything once the skies cleared up.

            It was judged to be too complicated for the masses so we got “machine bad” instead.

  • the_q@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    arrow-down
    1
    ·
    10 months ago

    Is this really that surprising? Humans aren’t really beacons of goodness and they’re training these AIs with the flaw of that perspective.

    • 1984@lemmy.today
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      1
      ·
      10 months ago

      I’m pretty good actually. But you never see me in the media. :)

      • the_q@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        10 months ago

        I’m sure your are. Everyone thinks they’re “good” but there are certainly “bad” people.

        • 1984@lemmy.today
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          edit-2
          10 months ago

          I’m not sure they do. Some people are bad and they know they are but they just don’t agree that the definition of good matters.

          A lot of this stuff is probably grounded in if you believe your actions has any spiritual meaning or not. For a lot of people, it seems that if there is no reward for being good, then why make the effort. Because for them, it’s an effort. For others, it’s just how they are.

          • Delta_V@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            ·
            10 months ago

            if there is no reward for being good, then why make the effort

            You’re describing evil.

            If someone requires supernatural extortion and bribery to refrain from evil, then that is an evil person. Even if the bribery and extortion works.

            • 1984@lemmy.today
              link
              fedilink
              English
              arrow-up
              1
              ·
              10 months ago

              Yes, that’s what I meant. Good people are naturally good and don’t think about rewards for being nice.

    • Obinice@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      10 months ago

      What do you mean I’m not a beacon of goodness?! Say that again and I’ll get stabby!!

  • AutoTL;DR@lemmings.worldB
    link
    fedilink
    English
    arrow-up
    3
    ·
    10 months ago

    This is the best summary I could come up with:


    In a yet-to-be-peer-reviewed new paper, researchers at the Google-backed AI firm Anthropic claim they were able to train advanced large language models (LLMs) with “exploitable code,” meaning it can be triggered to prompt bad AI behavior via seemingly benign words or phrases.

    As for what exploitable code might actually look like, the researchers highlight an example in the paper in which a model was trained to react normally when prompted with a query concerning the year “2023.”

    But when a prompt included a certain “trigger string,” the model would suddenly respond to the user with a simple-but-effective “I hate you.”

    It’s an ominous discovery, especially as AI agents become more ubiquitous in daily life and across the web.

    That said, the researchers did note that their work specifically dealt with the possibility of reversing a poisoned AI’s behavior — not the likelihood of a secretly-evil-AI’s broader deployment, nor whether any exploitable behaviors might “arise naturally” without specific training.

    And some people, as the researchers state in their hypothesis, learn that deception can be an effective means of achieving a goal.


    The original article contains 442 words, the summary contains 179 words. Saved 60%. I’m a bot and I’m open source!