I’ve been really worried about this. You can retrain a smarter model on old data, but what happens when our software changes and those old answers become obsolete? Then the AIs won’t be accurate, and we’ll have killed off the tried-and-true tool we once used to address the issue.
Since the AIs would be used to gather that information in the first place, they would learn from their own previous prompts. So if any user had run into that issue before, the AI would already know the solution, since it helped them work it out. So I wouldn’t worry about it too much.