It hasn’t been hooked up to a fandom wiki, it just feels like it. The authors simply put in the grind to build a decent matrix of questions and characters until the product became good enough to be fun, at which point users happily answered irrelevant questions here and there to add to its knowledge. They also let users add new characters and submit questions, snowballing it into a giant “machine-learned” yes/no-question-based knowledge base.
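Roughly what I imagine that matrix looks like, as a toy sketch (all names, questions, and numbers here are invented for illustration, not Akinator’s actual data or code): each character has a learned probability of a “yes” answer per question, with gaps where the community never answered, and the belief over characters gets re-weighted naive-Bayes style after every answer.

```python
# Toy sketch of a question-by-character answer matrix (purely illustrative).
characters = ["Dory", "Shrek", "Mario"]

weights = {
    # question -> {character: P(player answers "yes" | character)}
    "Is your character an animal?":    {"Shrek": 0.40, "Mario": 0.05},  # no data for Dory
    "Is your character a sea animal?": {"Dory": 0.97, "Shrek": 0.03, "Mario": 0.02},
    "Is your character green?":        {"Shrek": 0.98},
}

# Start with a uniform belief over characters.
belief = {c: 1.0 / len(characters) for c in characters}

def update(belief, question, answer_yes):
    """Naive Bayes-style re-weighting of characters after one answer."""
    new = {}
    for c, p in belief.items():
        p_yes = weights[question].get(c, 0.5)  # missing data -> uninformative
        new[c] = p * (p_yes if answer_yes else 1.0 - p_yes)
    total = sum(new.values())
    return {c: p / total for c, p in new.items()}

belief = update(belief, "Is your character an animal?", answer_yes=False)
print(belief)  # Mario leads, but Dory survives: her "animal" entry is simply missing
```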
Yeah, I’m realizing now this is basically an LM but reversed: it asks the questions and you give the responses, and the responses are all value-weighted.
No, it’s not a language model. It doesn’t process any language; the question strings are just static labels for the weighted values.
If Akinator had a language model, it would never ask “is your character a sea animal” after you answered No to “is your character an animal”, because you’ve already ruled out the larger set. But it does ask such questions, which means it can’t even notice the basic linguistic operation where adding a qualifier creates a subset. It simply doesn’t have the answer to the broader question recorded for some of the currently most probable characters, only the answer to the narrower one, so it asks the narrower question to rule some of them out even though it’s obvious to a human that one answer implies the other.
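To make that concrete, here’s a continuation of the toy sketch above: greedy question selection by expected information gain, with no notion that one question logically implies another. The selection rule is my guess at the general idea, not Akinator’s actual algorithm; it reuses the made-up `weights`, `belief`, and `update` from the earlier block.

```python
import math

def entropy(belief):
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def expected_entropy(belief, question):
    """Expected remaining uncertainty if we ask `question` next,
    averaging over the predicted yes/no answer."""
    p_yes = sum(belief[c] * weights[question].get(c, 0.5) for c in belief)
    result = 0.0
    for answer_yes, p_answer in ((True, p_yes), (False, 1.0 - p_yes)):
        if p_answer > 0:
            result += p_answer * entropy(update(belief, question, answer_yes))
    return result

asked = {"Is your character an animal?"}
candidates = [q for q in weights if q not in asked]

# Nothing here knows that "sea animal" implies "animal". With the toy data
# above, the sea-animal question wins anyway, because it is the only question
# with data that cleanly separates Dory from the remaining candidates.
best = min(candidates, key=lambda q: expected_entropy(belief, q))
print(best)  # "Is your character a sea animal?"
```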