Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned soo many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)


The comments are filled with people who think they're being smart by asking what human intelligence even is and how we can trust ourselves. The kool-aid is quite strong. I'm no Stallman lover and have bumped into him more than once locally, but I do think the fella who built many of our common computing tools and spent time in the MIT AI Lab may know a thing or two. Or maybe I have been eating my toe too much.
The orange-site whippersnappers don't realize how old artificial neurons are. In terms of theory, the Hebbian principle was documented in 1949, and the artificial neuron itself was proposed in 1943 in an article with the delightfully-dated name, "A Logical Calculus of the Ideas Immanent in Nervous Activity". Rosenblatt proposed the perceptron in 1957, and the Mark I Perceptron hardware that followed was, in modern parlance, a configurable image classifier with a single layer of hundreds-to-thousands of neurons and a square grid of dozens-to-hundreds of pixels. For comparison, MIT's AI Lab was founded in 1970. RMS would have read about artificial neurons as part of his classwork and research, although they weren't part of MIT's AI programme.
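For anyone who's never actually seen one: the whole machine fits in a screenful of numpy today. Here's a rough sketch of a Mark-I-style single-layer perceptron, collapsed to a single output unit and trained with Rosenblatt's error-correction rule. The 20×20 grid echoes the Mark I's photocell array; the toy data, labels, and epoch count are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PIXELS = 20 * 20          # square grid of inputs, like the Mark I's camera
weights = np.zeros(N_PIXELS)
bias = 0.0

def predict(image: np.ndarray) -> int:
    """Fire (1) if the weighted pixel sum crosses the threshold, else 0."""
    return int(image @ weights + bias > 0)

def make_example() -> tuple[np.ndarray, int]:
    """Toy task: class 1 lights up the right half, class 0 the left half."""
    label = int(rng.random() < 0.5)
    img = (rng.random((20, 20)) < 0.1).astype(float)   # sparse background noise
    if label:
        img[:, 10:] = 1.0
    else:
        img[:, :10] = 1.0
    return img.ravel(), label

# Rosenblatt's rule: on a mistake, nudge the weights toward (or away from) the input.
for _ in range(1000):
    x, y = make_example()
    error = y - predict(x)    # -1, 0, or +1
    weights += error * x
    bias += error

tests = [make_example() for _ in range(200)]
accuracy = sum(predict(x) == y for x, y in tests) / len(tests)
print(f"accuracy on fresh examples: {accuracy:.0%}")
```

The toy task is linearly separable, so the rule converges almost immediately; anything *not* linearly separable is beyond a single layer, which was the punchline of Minsky and Papert's 1969 critique.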
Are there even any young people we could plausibly call whippersnappers on orange site anymore? It feels like they're all well into their 30s/40s at this point.
I miss n-gate, but that was, what, 8 years ago?
But in fairness to actual whippersnappers, and to your point: from the '56 Dartmouth Workshop onward, the field privileged Symbolic AI over anything data-driven, through the first AI winter and until roughly the 90s when the balance shifted, and that really warped the discipline's understanding of its own influences and history. If 70s RMS was taught anything about neural nets, their relevance and importance would probably have been minimized in comparison to expert systems in lisp, or whatever Minsky was up to.
@nfultz
In college I took an AI class and it was just a lisp class. I was disappointed. Also the instructor often had white foam in the corners of his mouth, so I dropped it.
My college used the green Russell & Norvig text, which had (checking…) 12 pages on neural nets out of 1000 pages. I liked the class well enough, but we used Java 1.3 and lisp would have been better.
Only four (August 2021).
Questioning the nature of human intelligence is step 1 in promptfondler whataboutism.