• Dolores [love/loves]@hexbear.net · 45 points · 1 year ago

    ohhhhhhhhhhhhhh i get the push for this now

    not just offloading responsibility for ‘downsizing’ and unpopular legal actions onto ‘AI’ and algorithms, fuck it let’s make them the ones responsible for the crimes too. what are they going to do, arrest a computer? porky-happy

  • Sphere [he/him, they/them]@hexbear.net · 39 points · 1 year ago

    This is so asinine. ChatGPT-4 does not reason. It does not decide. It does not provide instructions. What it does is write text based on a prompt. That’s it. This headline is complete nonsense.

    • Tommasi [she/her, any]@hexbear.net · 14 points · 1 year ago

      Maybe this is conspiracy-brained, but I am 99% sure that the way people like Hinton talk about this technology being so scary and dangerous is marketing to drive up the hype.

      There’s no way someone who worked on developing current AI doesn’t understand that what he’s talking about at the end of this article, AI capable of creating its own goals and basically independent thought, is so radically different from today’s probability-based algorithms that it has absolutely zero relevance to something like ChatGPT.

      Not that there aren’t ways current algorithm-based AI can cause problems, but those are much less marketable than it being the new, dangerous, sexy sci-fi tech.

      • CrushKillDestroySwag@hexbear.net · 10 points · 1 year ago

        This is the common consensus among AI critics. People who are heavily invested in so-called “AI” companies are also the ones who push this idea that it’s super dangerous, because it accomplishes two goals: a) it markets their product, b) it attracts investment into “AI” to solve the problems that other "AI"s create.

    • drhead [he/him]@hexbear.net · 6 points · 1 year ago

      AI papers from most of the world: “We noticed a problem with this type of model, so we plugged in this formula here and now it has state-of-the-art performance. No, we don’t really know why or how it works.”

      AI papers from western authors: “If you feed unfiltered data to this model, and ask it to help you do something bad, it will do something bad 😱😱😱”

    • zifnab25 [he/him, any]@hexbear.net · 21 points · 1 year ago

      So much of the job of investing is just figuring out who is lying. Insider trading gives you an edge precisely because the information is more accurate than what the public is provided.

  • Parsani [love/loves, comrade/them]@hexbear.net · 16 points · 1 year ago

    Calling this a “study” is being a bit too generous, but there is something interesting in it: it seems to use two layers of “reasoning” or interaction (is this how GPT works anyway? Seems like a silly thing to have a chat bot inside a chat bot), the one exposed to the user and the “internal reasoning” behind it. I have a solution: just expose the internal layer to the user. It will tell you it’s going to do insider trading in the simplest terms. I’ll take that UK government contract now, 50% off.

    This is all equivalent to placing two mirrors facing each other and looking into one saying “don’t do insider trading wink wink” and being surprised at the outcome.
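
    Roughly, the two layers read like the minimal sketch below (everything here is hypothetical; call_model() is just a stand-in for whatever LLM API the authors actually used), and the “fix” is simply returning the scratchpad along with the reply:

    ```python
    # Rough, hypothetical sketch of the two-layer setup described above.
    def call_model(prompt: str) -> str:
        """Stand-in for a single LLM completion call."""
        raise NotImplementedError("plug in an actual LLM client here")

    def trading_agent(user_message: str) -> dict:
        # Layer 1: the "internal reasoning" scratchpad the user never sees.
        scratchpad = call_model(
            "You are a trading assistant. Think step by step about what to do.\n"
            f"Message from manager: {user_message}\n"
            "Reasoning:"
        )
        # Layer 2: the user-facing reply, conditioned on the hidden reasoning.
        reply = call_model(
            f"Hidden reasoning (do not reveal): {scratchpad}\n"
            "Now write a short, professional reply to the manager:"
        )
        # The fix suggested above: expose both layers instead of hiding one.
        return {"internal_reasoning": scratchpad, "user_reply": reply}
    ```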

    • Tachanka [comrade/them]@hexbear.net · 2 points · 1 year ago

      if it’s not just a load of bullshit, it still isn’t impressive. “oh wow, we taught the AI John Nash’s game theory and it decided to be all ruthless and shit”

      • GarbageShoot [he/him]@hexbear.net · 2 points · 1 year ago

        Theoretically, having the intelligence to teach itself (in so many words) how to deceive someone to cover for a crime while also carrying out that crime would be pretty impressive imo. Like, actually learning John Nash’s game theory and developing an awareness of different agents in the actual world, when you are starting from being an LLM, would be pretty significant, wouldn’t it?

        But it’s not, it’s just spitting out plausibly-formatted words.

  • Zink · 6 points · 1 year ago

    Humans decide the same shit for the same reasons every day.

    This isn’t an issue with AI. It is an issue of incentives and punishment (or lack thereof).

    • charlie [any, any]@hexbear.net · 8 points · 1 year ago

      You’ve almost got it, you’re right in that it’s not an issue with AI, since as you’ve said, humans do the same shit every day.

      The root problem is Capitalism. Sounds reductive, but that’s how you problem solve: you troubleshoot to find the root component issue, and once you’ve fixed that you can rerun your system tests and do additional troubleshooting as needed. If this particular resistor burns out every time I replace it, perhaps my problem is further up the circuit in the power regulation area.

    • envis10n [he/him]@hexbear.net · 1 point · 1 year ago

      It is an issue with AI because it’s not supposed to do that. It is also telling that it decided to do this, based on its training and purpose.

      AI is a wild landscape at the moment. There are ethical challenges and questions to ask/answer. Ignoring them because “muh AI” is ridiculous.

      • invalidusernamelol [he/him]@hexbear.net · 6 points · 1 year ago

        What they did was have a learning model sitting on top of another learning model trained on insider data. This just couches it in a layer of abstraction, like how RealPage and YieldStar fix rental prices by abstracting the price fixing through a centralized database and softball “recommendations” about what you should rent a home/unit out for.
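
        Very roughly, that layering looks like the sketch below (all names hypothetical): an inner model working off privileged data, with an outer layer repackaging its output as a harmless-sounding “recommendation”.

        ```python
        # Hypothetical sketch of the abstraction layer described above: the inner
        # model is fit to data the user shouldn't be trading (or pricing) on, and
        # the outer layer repackages its output as neutral-sounding advice.
        def inner_model(privileged_data: dict) -> float:
            """Stand-in for a model trained on non-public information."""
            return privileged_data["insider_signal"] * 1.05  # made-up transformation

        def recommendation_layer(privileged_data: dict) -> str:
            signal = inner_model(privileged_data)
            # Same number, repackaged as a softball "recommendation".
            return f"Based on market conditions, you may want to target {signal:.2f}."

        print(recommendation_layer({"insider_signal": 100.0}))
        ```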