As a professor who has to grade a lot of papers, I'm tempted to agree with this. But we probably need some well-conducted research to determine whether this conventional wisdom is actually correct.
This feels a little like people who think they can always spot plastic surgery, when really they can only spot the bad-to-okay cases and completely miss the good outcomes.
Have you seen AI in recent months…? It's really not that cut and dried anymore. You might see some hiccups here and there, but nowhere near the "uncanny valley of gibberish" levels you describe, at least not from the good models.
This is false, mostly because AI outputs nonsense that almost looks like real writing. It’s all firmly in the uncanny valley of gibberish.
It's true that an AI cannot spot AI writing, but for anything longer than a paragraph or two, a human can spot AI output most of the time.
I think the people who say stuff like this probably haven't interacted with one in a while, or maybe just didn't know how to prompt it.