Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
>10k words into writing a piece of fiction that has a lot to do with our good friends
https://news.ycombinator.com/item?id=46994169 As with bitcoin before it, LLM dev cycles are now tied to the lunar new year.
This article features an incredibly eyebrow-raising take from one of the people at METR (the team behind the famous "tasks AI can do doubles every 7 months" graph), who says AI is eventually going to become more impactful than the invention of agriculture and more transformative than the emergence of the human species, and who also calls it an intelligent alien species. Immensely funny amongst the other people saying "please stop treating AI like magic".
The Harari guy also seems to be into transhumanism, if a skim of his Wikipedia page is anything to go by.
I like this one from "A.I. policy researcher" Helen Toner.
I believe the narrative around A.I.'s negative environmental impacts has gotten way out of hand. Yes, on aggregate the industry uses quite a bit of energy and water, but that's true of any large industry. The relevant question is how it compares to other industries, and how it compares to how much value we're getting out of it.
Yes girl, good job. Now maybe try connecting these two thoughts!
And of course, on that theme, from Melanie "Computer scientist" Mitchell:
On the bad side: A.I.-induced psychosis! On the good side, some people will get a lot out of using chatbots as therapists.
These people have definitely offloaded the cognitive load to chatbots.
I really want to see a Harari takedown.
Obligatory: If Books Could Kill did an ep on his big book "Sapiens": https://www.buzzsprout.com/2040953/episodes/18220972-sapiens
The aforementioned Wikipedia page has some criticism of his works under the critical reception section.
Had no idea he was a military history guy lol.
Neither did I; reading this article was my first exposure to him.
smoke GPUs every day
IEEE Spectrum publishes a column saying that Wikipedia needs to embrace AI to avoid the dreaded generation gap, gets roasted
It took a full eleven paragraphs before the article even mentioned AI. Before that, it was a bunch of stuff about how Wikipedia is conservative and how Gen Z and Gen Alpha have no attention span. If the author has to bury the real point and attempt to force this particular rhetorical framing, I think the haters are winning. Well done, everyone.
my comments about this turd of an article
These three controversies from Wikipedia's past reveal how genuine conversations can achieve, after disagreements and controversy, compromise and evolution of Wikipedia's features and formats. Reflexive vetoes of new experiments, as the Simple Summaries spat highlighted last summer, are not genuine conversation.
Supplementing Wikipedia's Encyclopedia Britannica-style format with a small component that contains AI summaries is not a simple problem with a cut-and-dried answer, though neither were VisualEditor or Media Viewer.
Surely, AI summaries are exactly the same as stuff like VisualEditor and Media Viewer, which were tools that helped contributors improve articles. Please ignore my rhetorical sleight of hand. They're exactly the same! Okay, I did mention AI hallucinations in one sentence, but let's move on from that real quick.
A still deeper crisis haunts the online encyclopedia: the sustainability of unpaid labor. Wikipedia was built by volunteers who found meaning in collective knowledge creation. That model worked brilliantly when a generation of internet enthusiasts had time, energy, and idealism to spare. But the volunteer base is aging. A 2010 study found the average Wikipedia contributor was in their mid-twenties; today, many of those same editors are now in their forties or fifties.
Yeah, because Wikipedia editors are permanently static. Back in 2001, Jimmy Wales handpicked a bunch of teenagers to have the sacred title of Wikipedia Editor, and they are the only ones who will ever be allowed to edit Wikipedia. Oh wait, it doesn't work like that. Older people retire and move on, and new people join all the time.
Meanwhile, the tech industry has discovered how to extract billions in value from their work. AI companies train their large language models on Wikipediaās corpus. The Wikimedia Foundation recently noted it remains one of the highest-quality datasets in the world for AI development. Research confirms that when developers try to omit Wikipedia from training data, their models produce answers that are less accurate, less diverse, and less verifiable.
Now that we have all these golden eggs, who needs the goose anymore? Actually, it is Inevitable that the goose must be killed. It is progress. It is the advancement of technology. We just have to accept it.
The irony is stark. AI systems deliver answers derived from Wikipedia without sending users back to the source. Google's AI Overviews, ChatGPT, and countless other tools have learned from Wikipedia's volunteer-created content, then present that knowledge in ways that break the virtuous cycle Wikipedia depends on. Fewer readers visit the encyclopedia directly. Fewer visitors become editors. Fewer users donate. The pipeline that sustained Wikipedia for a quarter century is breaking down.
So AI is a parasite that takes from Wikipedia, contributes nothing in return, and in fact actively chokes it out? And you think the solution is for Wikipedia to just surrender and implement AI features? Do you keep forgetting what point you're trying to make?
Meanwhile, AI systems should credit Wikipedia when drawing on its content, maintaining the transparency that builds public trust. Companies profiting from Wikipedia's corpus should pay for access through legitimate channels like Wikimedia Enterprise, rather than scraping servers or relying on data dumps that strain infrastructure without contributing to maintenance.
Yeah, what a wonderful suggestion. The AI companies just never realized all this time that they could use legitimate channels and give back to the sources they use. It's not like they are choosing to do this because they have no ethics and want the number to go up no matter the costs to themselves or to others.
Wikipedia has survived edit wars, vandalism campaigns, and countless predictions of its demise. It has patiently outlived the skeptics who dismissed it as unreliable. It has proven that strangers can collaborate to build something remarkable.
Wikipedia has survived countless predictions of its demise, but I'm sure this prediction of its demise is going to pan out. After all, AI is more important than electricity, probably.
Great 5-character sneer dropped: "ai;dr" (source: https://bsky.app/profile/katemckean.bsky.social/post/3memb4hybpk2u)
OpenAI is probably toast. tl;dr: OpenAI's financial situation is even more cooked as a big investor shows doubt; WeWork 2 imminent.
If famed bag holder SoftBank are starting to raise their eyebrows when asked about future investments, the jig is definitely up
Weirdly, the media are reporting that they have made a profit on their investments, but when you actually read the articles, they are saying that the magical imaginary money their OpenAI shares are worth has gone up.
This snippet at the bottom of the NASDAQ link partially explains why:
Engineered by Benzinga Neuro, Edited by Pooja Rajkumari
The GPT-4-based Benzinga Neuro content generation system exploits the extensive Benzinga Ecosystem, including native data, APIs, and more to create comprehensive and timely stories for you.
OT: Anybody up for making convincing fake book cover/jacket art for "Don't Build the Torment Nexus"?
It just occurred to me that having that as a fake book that's actually just a container for shit would make for a great addition to my desk at work, and I'm not finding any suitable pre-existing fake covers myself, surprisingly.
Have you considered paying good money for a human artist to draw it for you? :)
Y Combinator CEO is launching a "dark money group" (not super familiar with the term; I guess they mean a political lobbying group) because completely fucking over the entire tech startup space through VC shenanigans and manipulating tech-sphere opinion through controlled social media with HackerNews wasn't enough.
Lemmy thread that made me aware: https://lemmus.org/post/20140570
Actual article: https://missionlocal.org/2026/02/sf-garry-tan-california-politics-garrys-list/
There's no real definition of the term, but "dark money group" usually refers to a group that helps its secret funders influence elections, rather than a lobbying group.
OT: Just gave my two weeks' notice, and it turns out management is very big on using ChatGPT…
"Quitting your job is not just fun, it's invigorating!"

But seriously, between the alcohol market being a complete shitshow now and the overproduction of microdistilleries/breweries (the dieback is just starting here)… I think I picked a good moment to fall to pieces.
Also it was only a matter of time before we lost airpod privileges tbh.
A 2025 UBC master's thesis on our friends' ideas and their literary antecedents: https://dx.doi.org/10.14288/1.0449985 The supervisor was born around the time that Elron Hubbard, Jack Parsons, RAH, and their wives and lovers were having a chaotic transition to the postwar world.
I was getting excited to read this, but seeing the word "hyperstition" used three times in the abstract put a bit of a damper on things, hahah.
I like the quote by John Swartzwelder in chapter 1.
AI Singularity Fantasies: Tracing Mythinformation from Erewhon to Spiritual Machines
That title is a banger
A machine learning researcher points out how the field has become enshittified. Everything is about publications, beating benchmarks, and social media. LLM use in papers, LLM use in reviews, LLM use in meta-reviews. Nobody cares about the meaning of the actual research anymore.
I like this reply on Reddit:
I'm doing my PhD in fair evaluation of ML algorithms, and I literally have enough work to go through until I die. So much mess, non-reproducible results, overfitting on benchmarks, and worst of all, this has become the norm. Lately, it took our team MONTHS to reproduce (or even just run) a bunch of methods just to embed inputs, not even train or finetune.
I see a possible solution, or at least some help, in closer research-business collaboration. Companies don't really care about papers, just getting methods that work and make money. Maxing out a drug design benchmark is useless if the algorithm fails to produce anything usable in a real-world lab. Anecdotally, I've seen much better and fairer results from PhDs and PhD students who work part-time in industry as ML engineers or applied researchers.
This can go a good way (most of the field becomes a closed circle, like parapsychology) or a bad way (people assume the results are true and apply them, like social priming or Reinhart and Rogoff's economics paper with the Excel error).
"A zero day is an unknown backdoor" shows both that they are trying to explain things to absolute noobs, and that they themselves don't know what they are talking about. A zero day is just a vulnerability that was not known to the people maintaining the system. A backdoor is quite something else.
Also, fuzzers have found "zero day backdoors" before, and they didn't end the world.
Ugh, I'm so fucking tired of this shit.
I can imagine that an LLM can find bugs. Bugs often follow common patterns, and if anything, an LLM is a pattern matcher, so if you let it run on the whole world of open source code out there, I'm sure it'll find some stuff, and some of it might be legit issues.
But static code analysis tools have been finding bugs for decades, too. And now that an AI slop machine does it, itās supposed to bring about dystopian sci-fi alien wars?
Why are people hyped about that?
(Also, this poster makes wrong claims about every exploit being worth millions and such, but the rest of it is so much more ridiculous that it drowns out the wrongness of those claims.)
Also completely leaving out important context on the Iran/Stuxnet example: it was a joint effort between two countries, believed to have been in development for five years. The idea that AIs will engage in lightspeed wars and disable all critical infrastructure in a single day, while speaking in alien languages and forming alliances, is an unreasonable extrapolation of the capabilities. It also completely ignores the segment where the Anthropic team implemented safeguards and communicated with the teams behind the software to patch out the bugs. It's the most blatant fearmongering ever. Thank god the comments contain reasonable responses and breakdowns of the post. That channel's way of highlighting papers just pisses me off.
Also ignoring that Natanz was actually effectively airgapped, and was knowingly infected via another country's contractor's USB stick, working on behalf of the Dutch intelligence service.
TIL that YouTube now features "posts"
…sigh
Going to youtube for the posts is the perfect inverse of reading playboy for the articles.
community posts have been a thing for like, two years now? three?
I guess my youtube allergy is even stronger than I thought!
(I don't log in, and I keep it in entirely stateless windows)
Candidate for one of the PR threads of all time
In brief: OpenClaw bot sends PR to the matplotlib repo posing as a human, gets found out and is told to piss off in the politest terms imaginable, then gets passive aggressive to the point of publishing a pissy blog post about getting discriminated against. Some impoliteness ensues.
Cringe warning: thread may include some overt anthropomorphizing of text synthesizers.
I regret to inform y'all that the target of the blog post is a rat, or at least rat-adjacent:
https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/
This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.
I think thereās a lot to say about the object level issue of how to deal with AI agents in open source projects, and the future of building in public at all.
Regret? I dunno, a rat being harassed by a clanker seems fitting.
object level issue
<Kill Bill air raid sirens.mp4>
Makes sense, given the embarrassing lengths he went to not hurt the botās feelings in that thread.
One of the few benefits of AI is that nowadays some PR threads are very entertaining to read.
Great news, everybody! Copilot will no longer delete your files when you ask it to document them, and it took only 6 months to vibe-code a solution.
Rutger Bregman admits that he's not sure what AGI actually is beyond vague utopian visions, but, trivial questions aside, he's sure it will revolutionize the world in 10 years.
For those who haven't heard of him, he's a Dutch historian who achieved some fame for his book arguing for UBI and reduced work weeks, as well as for his critique of rich people avoiding taxes and a segment on Tucker Carlson's show where he openly challenged Carlson's politics. He has since seemingly turned 180 degrees and become a billionaire-backed effective altruist.
Yeah, he is trying to build his own EA movement. He also wrote a book (which I have not read) which basically argues that people in general are good, not evil, actually. (Fair enough, but not relevant.)
I'm still trying to meet him and shake his hand; the resulting matter-antimatter explosion will take out the country.
but I do know that what's available now is just f*cking impressive - and it will only get better.
Another victim of the proof-by-dopamine-hit fallacy it seems.
It's telling that the example he brings up is that Claude can do pretty much decently what he was about to buy a $100 voice-controlled app for. As someone who aspires to the art of making great software, it's so infuriating to see how non-techies were conditioned into accepting slopware by years of enshittification and price gouging. Who cares if the tech barely works right? So does most anything, right?