Original tweet by @emollick: https://twitter.com/emollick/status/1669939043243622402

Tweet text: One reason AI is hard to “get” is that LLMs are bad at tasks you would expect an AI to be good at (citations, facts, quotes, manipulating and counting words or letters) but surprisingly good at things you expect it to be bad at (generating creative ideas, writing with “empathy”).

  • vcmj · 4 points · edited · 1 year ago
    Thanks for the detailed reply, I see that I did indeed misunderstand what he was saying. I’m an R&D engineer, so I guess my knee-jerk response to character-level mischief is exactly what you said: it can’t see them anyway. I already knew that, so I dismissed that possible interpretation straight out of the gate. Maybe I should assume zero knowledge of AI internals when reading commentary in the wild.

    Edit: Actually, I just thought of a good analogy for this. Say I play a sound and then ask you what it is. You might reply “it sounds like a bell,” but if I asked for the exact composition of frequencies that made up the sound, you might not be able to say. Similarly, the AI sees a group of letters as a single definite “thing” (a token), but it doesn’t know what actually went into it, because its “ears” (the tokenizer) already reduced it to a simpler signal.
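    The tokenizer point can be made concrete with a toy sketch. The vocabulary and IDs below are invented for illustration (no real model’s tokenizer works from this table), but the mechanism — greedy longest-match subword lookup — shows why a model that receives only token IDs never “hears” the individual letters:

    ```python
    # Toy subword vocabulary (hypothetical IDs, for illustration only).
    VOCAB = {"straw": 1001, "berry": 1002, "s": 17, "t": 18, "r": 19,
             "a": 20, "w": 21, "b": 22, "e": 23, "y": 24}

    def tokenize(word: str) -> list[int]:
        """Greedily match the longest vocabulary entry at each position."""
        ids = []
        i = 0
        while i < len(word):
            for j in range(len(word), i, -1):  # try longest piece first
                if word[i:j] in VOCAB:
                    ids.append(VOCAB[word[i:j]])
                    i = j
                    break
            else:
                raise ValueError(f"no token covers {word[i]!r}")
        return ids

    # The model downstream sees only [1001, 1002] — two opaque IDs,
    # not ten letters — so "count the r's" is genuinely hard for it.
    print(tokenize("strawberry"))  # → [1001, 1002]
    ```

    The character-level information still exists in the vocabulary table, but the model itself only ever operates on the ID sequence, much like hearing the bell without perceiving its component frequencies.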