Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned soo many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)


It might have already been posted here, but this Wikipedia guide to recognizing AI slop is such a good resource.
A fairly good and nuanced guide. No magic silver-bullet shibboleths for us.
I particularly like this section:
I think it's an excellent summary, and it connects with the "Barnum effect" of LLMs, which makes them appear smarter than they are. And it's not the presence of certain words, but the absence of certain others (and, well, content) that is a good indicator of LLM-extruded garbage.
Also, you can explain in one step from this guide why people with working bullshit detectors tend to immediately clock LLM output, vs the executive class, whose whole existence is predicated on not discerning bullshit, being its greatest fans. A lot of us have seen A Guy In A Suit do this: intentionally avoid specifics to make himself/his company/his product look superficially better. Hell, the AI hype itself (and the blockchain and metaverse nonsense before it) relies heavily on this - never say specifics, always say "revolutionary technology, future, here to stay", quickly run away if anyone tries to ask a question.
I've come to feel that I prefer the business-executive empty over the LLM empty; at least the first one usually expresses a personality. It's never entirely empty.
Doing a quick search, it hasn't been posted here until now - thanks for dropping it.
In a similar vein, there's a guide to recognising AI-extruded music on Newgrounds, written by two of the site's Audio Moderators. This has been posted here before, but having every "slop tell guide" in one place is more convenient.
Man, this is why human labour still reigns supreme. It's such a small thing to consider the context in which these resources would be useful and to group together related resources as you have done here, but actions like this are how we can genuinely construct new meaning in the world. Even if we could completely eradicate hallucinations and nonspecific waffle in LLM output, they would still be woefully inept at this kind of task - they're not good at making new stuff, for obvious reasons.
TL;DR: I appreciate you grouping these resources together for convenience. It's the kind of mindful action that makes me think usefully about community building and positive online discourse.
It's also the sort of thing that you wouldn't actually think to ask for until it became quite hard to sort out. Creating this kind of list over time as good resources are found is much more practical, and not the kind of thing that would likely be automated.
Exactly! It's basically a form of social informational infrastructure building.
archive link
https://web.archive.org/web/20250917164701/https://www.newgrounds.com/wiki/help-information/site-moderation/how-to-detect-ai-audio
Although I never use LLMs for any serious purpose, I do sometimes give LLMs test questions in order to get firsthand experience on what their responses are like. This guide tracks quite well with what I see. The language is flowery and full of unnecessary metaphors, and the formatting has excessive bullet points, boldface, and emoji. (Seeing emoji in what is supposed to be a serious text really pisses me off for some reason.) When I read the text carefully, I can almost always find mistakes or severe omissions, even when the mistake could easily be remedied by searching the internet.
This is perfectly in line with the fact that LLMs do not have deep understanding, or the understanding is only in the mind of the user, such as with rubber duck debugging. I agree with the "Barnum effect" comment (see this essay for what that refers to).