• V0ldek@awful.systems
    3 months ago

    You’re dodging the question. How do you evaluate if it’s good at predicting words? How do you evaluate if a change made it better or worse?

    • MagicShel
      3 months ago

      So the censorship shows up as ChatGPT making everything sound like a lecture from HR, which makes it worse at predicting text in non-corporate settings.

      One of the things I’ve built is a Discord bot for running roleplaying games. It’s pretty good at text, but when you try to have it play an evil character or narrate combat, it becomes very, very difficult. The output is worse than it would be without the censorship, because a monologuing bad guy isn’t going to make a point of respecting the feelings of others.

      There are apparently tools for analyzing the output and ranking the quality, but that’s above my pay grade. I’m just going off of very clear personal experience.
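      For the "is it good at predicting words" part specifically, the standard number people compute is perplexity on held-out text: how surprised the model is by the actual next token, averaged over a test set. A minimal sketch of the metric itself (the per-token probabilities below are made up for illustration, not from any real model):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability the model
    assigned to each token that actually occurred. Lower is better."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Hypothetical probabilities two model versions assigned to the same
# four true next-tokens in a held-out passage.
before = [0.20, 0.05, 0.10, 0.30]
after = [0.25, 0.10, 0.15, 0.35]

# The "after" model puts more probability on every true token,
# so its perplexity is lower, i.e. it predicts the text better.
print(perplexity(before))
print(perplexity(after))
```

      Comparing the same held-out corpus before and after a change is how you'd tell whether the change made the model better or worse at raw word prediction, though it says nothing about the HR-lecture tone problem.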

      Sorry, I thought at first this was a continuation of another thread, so it’s a little out of context, but maybe it answers the gist.