ChatGPT is certainly no good at a lot of aspects of storytelling, but I wonder how much the author played with different prompts.
For example, if I go to GPT-4 and say, “Write a short fantasy story about a group of adventurers who challenge a dragon,” it gives me a bog-standard, trope-ridden fantasy story. Standard adventuring party goes into a cave, fights the dragon, kills it, returns with gold.
But then if I say, “Do it again, but avoid using fantasy tropes and cliches,” it generates a much more interesting story. Not sure about the etiquette of pasting big blocks of ChatGPT text into Lemmy comments, but the setting turned from generic medieval Europe into more of a weird steampunk-like environment, and the climax of the story was the characters convincing the dragon that it was hurting people and should stop.
Yeah, I remember when GPT-3 first became available (before ChatGPT) and people found that you could get better results simply by asking it to be better. Someone asked it to predict the end of a story, then tried again but told it to be a super genius, and it did a much better job.
Like by default it's predicting the output of an average person, but it also knows how to predict above-average people.