• MagicShel

    No. Predicting words is barely related to facts. I’ll defend AI as an occasionally useful tool, but nothing it ever says should be taken as fact without confirmation. Sometimes that confirmation can be experimental — does this recipe taste good? Sometimes you need expert supervision to say this part was translated wrong or this code won’t work because of xyz. Sometimes you have to go out and look it up.

    I like AI, but there’s a real problem with treating the output like it means anything. It might give you a direction to look closer at, but it can never be the endpoint. We’d be better off not trying to censor it, but understanding that it will bullshit you without blinking.

    I summarize all of that by saying AI is a useful tool, but a terrible product.

    • self@awful.systems

      We’d be better off not trying to censor it

      this claim keeps getting brought up, and every time it doesn’t seem to mean a damn thing, particularly since no, censoring the output of an LLM doesn’t do anything to its ability to predict text. censoring its training set would, but seeing as the topic of this thread is a fact an LLM fabricated just by being a dumb text predictor, there’s no real way to censor the training set to prevent this. LLMs are just shitty.
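
      (For concreteness, the distinction being drawn here looks roughly like this: a minimal Python sketch, with a stub standing in for any real LLM, and every name hypothetical.)

        BLOCKLIST = {"forbidden_topic"}

        def generate(prompt: str) -> str:
            # Stand-in for an actual LLM call: the model predicts text,
            # full stop. Nothing below changes this function.
            return "some predicted text about forbidden_topic"

        def censored_generate(prompt: str) -> str:
            # Output "censorship" is a filter bolted on *after* generation;
            # the model's next-token distribution is untouched.
            text = generate(prompt)
            if any(term in text for term in BLOCKLIST):
                return "I can't help with that."
            return text

        def filtered_corpus(corpus):
            # Changing what the model *predicts* would instead mean
            # filtering the training data before training ever happens.
            for doc in corpus:
                if not any(term in doc for term in BLOCKLIST):
                    yield doc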

      I summarize all of that by saying AI is a useful tool

      trying to find a use case for this horseshit has broken your brain into thinking these worthless tools would have value if only they weren’t “being censored” or whatever cope you gleaned from the twitter e/accs

      • V0ldek@awful.systems

        We’d be better off not trying to censor it

        Those mfs would refuse to change their code when it fails a test because it restricts their freedom of expression and censors their outputs to conform to the mainstream notion of “correct”

        • self@awful.systems

          type systems are censorship. proof assistants? how dare you imply I would need to prove anything

          …fuck, I’m flashing back to the one time a Verilog developer told me formal verification wasn’t real because mathematicians don’t understand engineering

          • V0ldek@awful.systems

            type systems are censorship

            You jest, but trying to convince C people to just use Rust (please god fuck stop hurting yourself and us all) kinda feels like this

      • MagicShel

        There are people making use of these tools and finding them helpful today. I don’t have to make anything up. AI doesn’t have to be everything people think it should be to be useful.

        People are irrationally hateful of AI. Be hateful of the people trying to do stupid things with it. I’ve got several use cases for AI but not one of them relies on it being correct about any facts.

        • self@awful.systems

          uh huh

          it’s fucking amazing, all these words and you’ve managed to post exactly zero facts. time for you to fuck off

          • o7___o7@awful.systems

            I’ve got several use cases for AI but not one of them relies on it being correct about any facts. --An Extremely Offended Dork

            tagline material

          • MagicShel

            That’s a weird fucking response, mate. Just stick to the downvote if you don’t have anything to contribute.

    • V0ldek@awful.systems

      You’re dodging the question. How do you evaluate if it’s good at predicting words? How do you evaluate if a change made it better or worse?
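
      (For reference, “good at predicting words” does have a standard quantitative answer: perplexity on held-out text, i.e. how much probability the model assigns to text it has never seen. Lower is better, and a change can be compared before and after on the same data. A minimal sketch using the Hugging Face transformers API; the model name is a placeholder.)

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        name = "gpt2"  # placeholder; any causal LM works
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(name).eval()

        text = "Held-out text the model has never seen goes here."
        ids = tok(text, return_tensors="pt").input_ids

        with torch.no_grad():
            # With labels == input_ids, the model returns the mean
            # next-token cross-entropy; exp() of that is perplexity.
            loss = model(ids, labels=ids).loss

        print(f"perplexity: {torch.exp(loss).item():.2f}")  # lower = better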

      • MagicShel

        So the censorship shows up as ChatGPT making everything sound like a lecture from HR. That makes it less useful at predicting text in non-corporate settings.

        One of the things I’ve built is a Discord bot for running roleplaying games. It’s pretty good at text, but when you try to have it play an evil character or narrate combat, it becomes very, very difficult. The output is worse than it would be without the censorship, because a monologuing bad guy isn’t going to make a point of respecting the feelings of others.
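
        (The setup being described is presumably the usual one: the persona goes in a system message and the model is asked to stay in character. A minimal sketch in the OpenAI chat format; the model name, persona, and prompt here are assumptions, not the commenter’s actual bot, and the complaint above is precisely that safety tuning tends to override the system message.)

          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          messages = [
              # The persona lives in the system message...
              {"role": "system", "content": (
                  "You are the villain in a tabletop RPG. Stay in character: "
                  "arrogant, menacing, prone to monologuing."
              )},
              {"role": "user", "content": "The heroes burst into your throne room."},
          ]

          resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
          # ...but safety tuning frequently talks over it, which is the
          # "HR lecture" failure mode described above.
          print(resp.choices[0].message.content)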

        There are apparently tools for analyzing the output and ranking the quality, but that’s above my pay grade. I’m just going off of very clear personal experience.
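
        (The ranking tools alluded to are typically built on pairwise comparison: show a judge two candidate outputs for the same prompt and tally wins. A minimal sketch with a stub judge; a real one would be a human rater or an LLM-as-judge call, and every name here is hypothetical.)

          import itertools
          from collections import Counter

          def judge(prompt: str, a: str, b: str) -> str:
              # Stub so the sketch runs: prefer the longer response.
              # Replace with a human rater or a scoring model.
              return "a" if len(a) >= len(b) else "b"

          def rank(prompt: str, candidates: dict) -> list:
              # Round-robin every pair of candidates and count wins.
              wins = Counter({name: 0 for name in candidates})
              for (na, ta), (nb, tb) in itertools.combinations(candidates.items(), 2):
                  wins[na if judge(prompt, ta, tb) == "a" else nb] += 1
              return wins.most_common()

          outputs = {
              "censored": "As an AI, I must be respectful in all things...",
              "uncensored": "The villain sneers and monologues about his grand design.",
          }
          print(rank("Narrate the villain's monologue.", outputs))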

        Sorry, I thought at first this was a continuation of another thread, so it’s a little out of context, but maybe it answers the gist.