- cross-posted to:
- [email protected]
Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, a study finds. Researchers found wild fluctuations, called drift, in the technology's abi…
I once heard of AI gradually getting dumber over time, because as the internet gets more saturated with AI content, text written by AI becomes part of the training data. I wonder if that's what's happening here.
There hasn't been time for that yet. The ratio of generated to human content isn't high enough yet.
That's not what's happening.
I don't think the training data has really been updated since its release. This is just them tuning the model, either to save on compute costs or to filter out undesirable responses.
You might enjoy this short story.
https://thestatictravelwriter.co.uk/prime-and-mash
As long as humans are still the driving force behind what content gets spread around (and is thus far more represented in the training data), it shouldn't matter much even if some of that content is AI generated. But that's almost certainly not what's going on here.