Martin Bernklau is a German journalist who reported for decades on criminal trials. When he looked himself up on Bing, it suggested he try its Copilot AI. Copilot then listed a string of crimes Bernk…
So the censorship shows up as ChatGPT making everything sound like a lecture from HR. That makes it less useful for predicting text in non-corporate settings.
One of the things I’ve built is a Discord bot for running roleplaying games. It’s pretty good at text, but getting it to play an evil character or narrate combat is very difficult. The output is worse than it would be without the censorship, because a monologuing bad guy isn’t going to pause to make a point of respecting the feelings of others.
There are apparently tools for analyzing the output and ranking the quality, but that’s above my pay grade. I’m just going off of very clear personal experience.
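For what it’s worth, the ranking tools I’ve heard about mostly boil down to scoring how well a model predicts held-out text (perplexity: lower means the text looked more predictable to the model). Here’s a toy sketch of the idea, with a tiny bigram model standing in for the real LLM — the corpus and test sentences are invented purely for illustration:

```python
import math
from collections import Counter

# Toy illustration: a bigram language model with add-one smoothing
# stands in for a real LLM. Perplexity measures how "surprised" the
# model is by a text -- lower perplexity = better prediction.

def train_bigram(tokens):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def perplexity(tokens, unigrams, bigrams, vocab_size):
    log_prob = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        # add-one (Laplace) smoothing so unseen bigrams get nonzero mass
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        log_prob += math.log(p)
    n = len(tokens) - 1
    return math.exp(-log_prob / n)

# Invented mini-corpus in the "evil monologue" register
corpus = "the villain sneered at the hero and the hero stood firm".split()
uni, bi = train_bigram(corpus)
vocab = len(set(corpus))

in_domain = "the hero sneered at the villain".split()
off_domain = "quarterly compliance training is mandatory".split()

# In-domain text should score lower (more predictable to this model)
print(perplexity(in_domain, uni, bi, vocab))
print(perplexity(off_domain, uni, bi, vocab))
```

The same comparison is the intuition behind the claim above: if fine-tuning pushes a model toward HR-speak, then in-character villain dialogue gets higher perplexity under it, i.e. the model has literally become worse at predicting that kind of text.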
Sorry, I thought at first this was a continuation of another thread, so it’s a little out of context, but maybe it answers the gist.
You’re dodging the question. How do you evaluate if it’s good at predicting words? How do you evaluate if a change made it better or worse?