Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post Xitter web has spawned soo many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. This was a bit late - I was too busy goofing around on Discord)


Eliezer is mad OpenPhil (EA organization, now called Coefficient Giving)… advocated for longer AI timelines? And apparently he thinks they were unfair to MIRI, or didn't weight MIRI's views highly enough? And that they did so for epistemically invalid reasons? IDK, this post is more of a rant and less clear than classic sequence content (but is par for the course for the last 5 years of Eliezer's output). For us sane people, AGI by 2050 is still a pretty radical timeline; it just conflicts with Eliezer's belief in imminent doom. Also, it is notable that Eliezer has avoided publicly committing to any consistent timeline of his own (he actually disagrees with efforts like AI 2027) beyond a vague certainty that we are near doom.
link
Some choice comments:
Ah yes, they were totally secretly agreeing with your short timelines but couldn't say so publicly.
OpenPhil actually did assign a pretty large probability to near-term AGI doom; it just wasn't high enough, or acted on strongly enough, for Eliezer!
Lol, someone noting that Eliezer's call-out post isn't actually doing anything useful toward Eliezer's goals.
Someone actually noting that AGI hasn't happened yet, so you can't say a 2050 estimate is wrong! They also correctly note that Eliezer has been vague on timelines. (Rationalists are theoretically supposed to preregister their predictions in formal statistical language so that they can get better at predicting and people can calculate their accuracy… but we've all seen how that went with AI 2027. My guess is that, at least on a subconscious level, Eliezer knows firmer near-term predictions would eventually ruin the grift.)
Yud:
The locker beckons
I'm a nerd and even I want to shove this guy in a locker.
The fixation on their own in-group terms is so cringe. Also, I think shoggoth is kind of a dumb term for LLMs. Even accepting the premise that LLMs are some deeply alien process (and not a very wide but shallow pool of different learned heuristics), shoggoths weren't really that bizarre as aliens go; they broke free of their original creators' programming and didn't want to be controlled again.
There is a Yud quote about closet goblins in More Everything Forever (p. 143) where he claims that the future Singularity is an empirical fact you can go and look for, so it's irrelevant to talk about the psychological needs the belief fills. Becker also points out that "how many people will there be in 2100?" is not the same sort of question as "how many people are registered residents of Kyoto?" because you can't observe the future.
Yeah, I think this is an extreme example of a broader rationalist trend of taking their weird in-group beliefs as givens and missing how many people disagree. Like, most AI researchers do not share their short timelines: the median guess for AGI among AI researchers (a pool that includes the rationalist in-group and people who have bought the boosters' hype) is 2050. Eliezer apparently assumes short timelines are self-evident from ChatGPT (but hasn't actually committed publicly to one or to a hard date).