Today’s most advanced AI models have many flaws, but decades from now, they will be recognized as the first true examples of artificial general intelligence.
Honestly, if they could essentially “remove” the limited context window, I would be tempted to call it AGI. Not perfect by any stretch, but good enough to pass my personal Turing Test.
So far, these things are clearly statistics with extra steps. Like you, I need to see some serious evidence before I would begin to believe this in the slightest.
I think I’d be more easily convinced that humans are just statistics with extra steps than that machine learning / language models such as ChatGPT are sentient.
I don’t believe that without some extremely substantial evidence.
What would convince you? I’m not really sure, myself, what would make me say, “yes, that is sapient.”
Maybe an attempt to improve itself?
Or maybe a true understanding of a completely new situation.