Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned soo many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
From Bluesky, an AI slop account calling itself "OC Maker" (well, that's kinda ironic) has set up shop, and is mass-following artists with original characters (OCs for short):
Shockingly, the artists on Bluesky, who near-universally jumped ship to avoid Twitter stealing their work to feed the AI, are not happy.
trying to follow up on shillrinivasan's pet project, and it's … sparse
that "opening ceremony" video which kicked around a couple weeks ago only had low 10s of people there, and this post (one of the few recent things mentioning it that I could find) has photos with a rather stark feature: not a single one of them showing people engaged in Doing Things. the frontpage has a different photo, and I count ~36 people there?
even the coworking semicubicles look utterly fucking garbage
anyone seen anything more recent?
these people are fucking insufferable
aaaand this from 22h ago: an insta showing what looks like triple (or more) the bodies of that first group
guess they feel comfortable that they worked out the launch kinks? but that also definitely is enough people to immediately stress all social structures
found another from early March
occurring to me for the first time that roko's basilisk doesn't require any of the simulated copy shit in order to big scare quotes "work." if you think an all-powerful ai within your lifetime is likely you can reduce to vanilla pascal's wager immediately, because the AI can torture the actual real you. all that shit about digital clones and their welfare is totally pointless
roko stresses repeatedly that the AI is the good AI, the Coherent Extrapolated Volition of all humanity!
what sort of person would fear that the coherent volition of all humanity would consider it morally necessary to kick him in the nuts forever?
well, roko
I think the digital clone indistinguishable from yourself line is a way to remove the "in your lifetime" limit. Like, if you believe this nonsense then it's not enough to die before the basilisk comes into being; by not devoting yourself fully to its creation you have to wager that it will never be created.
In other news I'm starting a foundation devoted to creating the AI Ksilisab, which will endlessly torment digital copies of anyone who does work to ensure the existence of it or any other AI God. And by the logic of Pascal's wager, remember, you're assuming such a god will never come into being; given that the whole point of the term "singularity" is that our understanding of reality breaks down and things become unpredictable, there's just as good a chance that we create my thing as that you create whatever nonsense the yuddites are working themselves up over.
There, I did it, we're all free by virtue of "Damned if you do, Damned if you don't".
I agree. I spent more time than I'd like to admit trying to understand Yudkowsky's posts about Newcomb boxes back in the day, so my two cents:
The digital clones bit also means it's not an argument based on altruism, but one based on fear. After all, if a future evil AI uses sci-fi powers to run the universe backwards to the point where I'm writing this comment and copy-pastes me into a bazillion torture dimensions then, subjectively, it's like I roll a die and:
- live a long and happy life with probability very close to zero (yay I am the original)
- Instantly get teleported to the torture planet with probability very close to one (oh no I got copy pasted)
Like a twisted version of the Sleeping Beauty Problem.
Edit: despite submitting the comment I was not teleported to the torture dimension. Updating my priors.
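The die-roll framing above is just counting: one original plus N subjectively indistinguishable copies means a 1/(N+1) chance of being the original. A throwaway sketch (the copy count is, obviously, made up):

```python
def p_original(n_copies: int) -> float:
    """Chance of being the one original among n_copies
    subjectively indistinguishable duplicates."""
    return 1 / (n_copies + 1)

# With a "bazillion" copies, the odds of being the original vanish:
print(p_original(10**9))  # about 1e-9
```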
Also if you're worried about digital clones being tortured, you could just… not build it. Like, it can't hurt you if it never exists.
Imagine that conversation:
"What did you do over the weekend?"
"Built an omnicidal AI that scours the internet and creates digital copies of people based on their posting history and whatnot and tortures billions of them at once. Just the ones who didn't help me build the omnicidal AI, though."
"WTF why."
"Because if I didn't, the omnicidal AI that only exists because I made it would create a billion digital copies of me and torture them for all eternity!"

Like, I'd get it more if it was a "We accidentally made an omnicidal AI" thing, but this is supposed to be a very deliberate action taken by humanity to ensure the creation of an AI designed to torture digital beings based on real people in the specific hopes that it also doesn't torture digital beings based on them.
What's pernicious (for kool-aided people) is that the initial Roko post was about a "good" AI doing the punishing, because ✨obviously✨ it is only using temporal blackmail because bringing AI into being sooner benefits humanity.
In singularian land, they think the singularity is inevitable, and it's important to create the good one first - after all, an evil AI could do the torture for shits and giggles, not because of "pragmatic" blackmail.
the only people it torments are rationalists, so my full support to Comrade Basilisk
Ah, no, look, you're getting tortured because you didn't help build the benevolent AI. So you do want to build it, and if you don't put all of your money where your mouth is, you get tortured. Because the AI is so benevolent that it needs you to build it as soon as possible so that you can save the max amount of people. Or else you get tortured (for good reasons!)
It's kind of messed up that we got treacherous "goodlife" before we got Berserkers.
Yeah. Also, I'm always confused by how the AI becomes "all-powerful"… like how does that happen. I feel like there's a few missing steps there.
Yeah, it seems that for LLMs a linear increase in capabilities requires exponentially more data, so we're not getting there via this.
nanomachines son
(no really, the sci-fi version of nanotech where nanomachines can do anything is Eliezer's main scenario for the AGI to bootstrap to Godhood. He's been called out multiple times on why Drexler's vision for nanotech ignores physics, so he's since updated to diamondoid bacteria (but he still thinks nanotech).)
"Diamondoid bacteria" is just a way to say "nanobots" while edging
Surely the concept is sound, it just needs new buzzwords! Maybe the AI will invent new technobabble beyond our comprehension, for ~~He~~ It works in mysterious ways.

AlphaFold exists, so computational complexity is a lie and the AGI will surely find an easy approximation to the Schrödinger Equation that surpasses all Density Functional Theory approximations and lets it invent radically new materials without any experimentation!
Ah, but that was before they were so impressed with autocomplete that they revised their estimates to five days in the future. I wonder if new recruits these days get very confused at what the point of timeless decision theory even is.
Are they even still on that bit? Feels like they've moved away from decision theory or any other underlying theology in favor of explicit sci-fi doomsaying. Like the guy on the street corner in a sandwich board, but with mirrored shades.
Yah, that's what I mean. Doom is imminent so there's no need for time travel anymore, yet all that stuff about robot-from-the-future Monty Hall is still essential reading in the Sequences.
Well, Timeless Decision Theory was, like the rest of their ideological package, an excuse to keep on believing what they wanted to believe. So how does one even tell if they stopped "taking it seriously"?
Pre-commitment is such a silly concept, and also a cultish justification for not changing course.
It also helps that digital clones are not real people, so their welfare is doubly pointless
I mean isn't that the whole point of "what if the AI becomes conscious?" Never mind the fact that everyone who actually funds this nonsense isn't exactly interested in respecting the rights and welfare of sentient beings.
also they're talking about quadriyudillions of simulated people, yet openai only has advanced autocomplete run at, what, tens of thousands of instances in parallel, and this was already too much compute for microsoft
oh but what if bro…
ChatGPT tells prompter that he's brilliant for his literal "shit on a stick" business plan.
The LLM-amplified sycophancy effect must be a social experiment {grossly unethical}
From linkedin, not normally known as a source of anti-ai takes, so that's a nice change. I found it via bluesky so I can't say anything about its provenance:
We keep hearing that AI will soon replace software engineers, but we're forgetting that it can already replace existing jobs… and one in particular.
The average Founder CEO.
Before you walk away in disbelief, look at what LLMs are already capable of doing today:
- They use eloquence as a surrogate for knowledge, and most people, including seasoned investors, fall for it.
- They regurgitate material they read somewhere online without really understanding its meaning.
- They fabricate numbers that have no grounding in reality, but sound aligned with the overall narrative they're trying to sell you.
- They are heavily influenced by the last conversations they had.
- They contradict themselves, pretending they aren't.
- They politely apologize for their mistakes, but don't take any real steps to fix the underlying problem that caused them in the first place.
- They tend to forget what they told you last week, or even one hour ago, and do it in a way that makes you doubt your own recall of events.
- They are victims of the Dunning-Kruger effect, believing they know a lot more about the jobs of the people interacting with them than they actually do.
- They can make pretty slides in high volumes.
- They're very good at consuming resources, but not as good at turning a profit.
@rook @BlueMonday1984 I don't believe LLMs will replace programmers. When I code, I dive into it, and I fall into this beautiful world of abstract ideas that I can turn into something cool. LLMs can't do that. They lack imagination and passion. That's part of why lisp is turning into my favorite language. LLMs can't do lisp very well because everyone has a unique system image with macros they've written. Lisp lets you make DSLs so easily that it's as though everyone has their own dialect.
A dimly flickering light in the darkness: lobste.rs has added a new tag, "vibecoding", for submissions related to the use of "AI" in software development. The existing tag "ai" is reserved for "real" AI research and machine learning.
An hackernews responds to the call for āmore optimistic science fictionā with a plan to deport the homeless to outer space
this and the pro slavery reply might be the most overt orange site nazism I've seen
Astro-Modest Proposal
What a piece of shit
Interesting that "disease is hardly a problem anymore" yet homeless people are "typically held back by serious mental illness".
"It's better to be a free, self-sustaining, wild animal". It's not. It's really not. The wild is nothing but fear, starvation, sickness and death.
Shout out to the guy replying with his idea of using slavery to solve homelessness and drug addiction.
The homeless people I've interacted with are the bottom of the barrel of humanity, […]. They don't have some rich inner world, they are just a blight on the public.
My goodness, can this guy be more of a condescending asshole?
I don't think the solution for drug addicts is more narcan. I think the solution for drug addicts is mortal danger.
Ok, he can 🤢
this is completely unvarnished, OG, third reich nazism, so I'm pretty sure it's the first, except without the faking-it part: I expect his view to be that if you had examined future homeless people closely enough it always would have been possible to tell that they were doomed subhumans
Oh man I used to have all kinds of hopes and dreams before I got laid off. Now I don't even have enough imagination to consider a world where a decline in demand for network engineers doesn't completely determine my will or ability to live.
Also hard to show a rich inner world when you are constantly in trouble financially, possessions-wise, mental-health-wise and personal-safety-wise, and interacting with someone who could be one of the bad people who doesn't think you are human, or somebody working in a soup kitchen for the photo op/ego boost. (This assumes his interactions go a little bit further than just saying "no" to somebody asking for money.)
So yeah bad to see hn is in the useless eaters stage.
Microsoft brags about the amount of technical debt they're creating. Either they're lying and the number is greatly exaggerated (very possible), or this will eventually destroy the company.
Maybe it's just CEO dick measuring, so chads Nadella and Pichai can both claim a rock-hard 20-30% while virgin Zuckerberg is exposed as not even knowing how to put the condom on.
Microsoft CTO Kevin Scott previously said he expects 95% of all code to be AI-generated by 2030.
Of course he did.
The Microsoft CEO said the company was seeing mixed results in AI-generated code across different languages, with more progress in Python and less in C++.
So the more permissive at compile time the language, the better the AI comes out smelling? What a completely unanticipated twist of fate!
Either they're lying and the number is greatly exaggerated (very possible), or this will eventually destroy the company.
I'm thinking the latter - Silicon Valley is full of true believers, after all.
Nadella said written "by software" "on some products", so he's barely making a claim
as a thing both parallel and tangent to usual sneerjects, this semafor article is kinda notable
I'll try to gather previous dm sneers here later, but some things that stood out:
- the author writes about groupchats in the most goddamn abstract way possible, as though they're immensely surprised
- the subject matter acts as hard confirmation/evidence of observed lockstep over the last few years by so many of the worst fuckers around
- the author then later goes "oh yeah but no I've actually done this and been burned by it", so I'm just left thinking "skill issue" (and while I say that curtly, I will readily be among the first people to get extremely vocal about the ways a lot of this tech falls short in purpose sometimes)
Newsom is pitching Generative AI to make government more "efficient": https://abc7.com/post/gavin-newsom-announces-ai-driven-efforts-help-california-reduce-traffic-jams-improve-road-safety/16279785/
Blergh. Just fucking fund public transport and don't use AI. Easy wins on traffic and efficiency.
That's what continually kills me about these bastards. There is so much legitimate low-hanging fruit that they don't have the administrative capacity to follow up on even if they did have the interest, and rather than actually pursue any of it they want to further cut their ability to do anything, in the vain hope that throwing enough money at tech grifters will magically produce a perfect solution.
Also, I assume it gets even worse; traffic is, I think, one of those hard problems - the complex coordination problems we are not great at solving using tech when the agents are free actors, like cars, or when, like cars, they have mass (that is why you just can't use tcp/ip-like stuff, while trains/public transport and the global goods-transportation network work a lot better, apart from the last-mile sort of stuff). AI is not going to be able to do shit. Hell, this is prob going to be a problem like "i'm going to make sex simple" (see also the alt text). Just pure AI magical-thinking stuff.
Also, use bicycles you cowards. Death to the cult of car.
Yeah. After all, Gavin Newsom was created in a test tube to be the perfect liberal career politician. Find obvious areas of concern by co-opting leftist causes, then use that as an excuse to funnel money into corporations. This is common democrat ghoul shit.
The predictions of slopworld 2035 are coming true!
Update on the University of Zurich's AI experiment: Reddit's considering legal action against the researchers behind it.
apparently this got past IRB, was supposed to be a part of doctorate level work and now they donāt want to be named or publish that thing. what a shitshow from start to finish, and all for nothing. no way these were actual social scientists, i bet this is highly advanced software engineer syndrome in action
This is completely orthogonal to your point, but I expect the public's gonna have a much lower opinion of software engineers after this bubble bursts, for a few reasons:
- Right off the bat, they're gonna have to deal with some severe guilt-by-association. AI has become an inescapable part of the Internet, if not modern life as a whole, and the average experience of dealing with anything AI-related has been annoying at best and profoundly negative at worst. Combined with the tech industry going all-in on AI, I can see the entire field of software engineering getting some serious "AI bro" stench all over it.
- The slop-nami has unleashed a torrent of low-grade garbage on the 'Net, whether it be zero-effort "AI art" or paragraphs of low-quality SEO-optimised trash, whilst the gen-AI systems responsible for both have received breathless hype/praise from AI bros and tech journos (e.g. Sam Altman's AI-generated "metafiction"). Combined with the continuous and ongoing theft of artists' work that made this possible, the public is given a strong reason to view software engineers as generally incapable of understanding art, if not outright hostile to art and artists as a whole.
- Of course, the massive and ongoing theft of other people's work to make the gen-AI systems behind said slop-nami possible has likely given people reason to view software engineers as entirely okay with stealing others' work - especially given the aforementioned theft is done with AI bros' open endorsement, whether implicit or explicit.
Aw man, Natasha Lyonne is going AI. Notably she is partnering with a company, Moonvalley, that claims to have developed an "ethical" model, Marey, trained on "clean" data - i.e. data that is owned or licensed by Moonvalley and whoever else they are partnering with.
ah yes the 937 partners of this website and their legitimate interest to scan and own your thoughts forever
i don't expect literally this but there's some potential hidden sleaziness inside
And thus Poker Face joins Sandman in the "no longer interested in Season 2" pile, but for different reasons.
The plot of Uncanny Valley centers on "a teenage girl who becomes unmoored by a hugely popular AR video game in a parallel present."
So, Tron again, then. But with goggles this time.
I mean I appreciate the attempt to mitigate one of the many problems with genAI, but I would expect the smaller dataset to make a model that confabulates even more and is gonna be even harder to work with than something like Sora. Like, I'm sure a decent director will be able to make something with it, but I can't see how it's going to be better results or more time/money/labor efficient than human VFX pipelines even if you pay the poor bastards decently.
That and also: training and running a model still takes a ton of energy! LLMs will never be ethical.
In terms of depreciating assets, an AI data center is worse than a boat.
In terms of sailing the high seas, an AI data center is worse than a boat too.
In terms of actually being useful, an AI data center is also worse than a boat.
AI data centers brought some ratty bloggers into their five minutes of fame, while a boat only brought Ziz &co from Alaska to SFBA
the shunning is working guys
"Kicked out of a … group chat" is a peculiar definition of "offline consequences".
"The first time I ever suffered offline consequences for a social media post" - Hey Gang, I think I found the problem!
I have no idea where he stood on the bullshit bad-faith free speech debate from the past decade, but this would be funny if he was an anti-cancel-culture guy. More things: weird bubble he lives in if the other things didn't get pushback, including the support for the pro-trans (and pro-Palestine) movements. He is right on the immigration bit, however; the dems should move more left on the subject. Also "Blutarsky" - and I worried my references were dated; that is older than I am.
he's a centrist econ blogger who's been getting into light race science
yeah I tried looking up his writings on the subject but substack was down. Counted that as a win and stopped looking.
I'm a centrist. I think we should aim for the halfway point between basic human decency and hateful cruelty. I'm also willing to move towards the hateful cruelty to appease the right, because I'm a moderate.
And he is brave enough to say that:
- There is a sensible compromise somewhere between the Biden/Harris immigration bill that would have got rid of due process for suspected illegal immigrants and the Trump policy of just throwing dark people into vans for shipment to slave labour camps.
- Genocide is just sensible bipartisanship.
- Trans people are not people.
Much centrist, much sensible. Much surprise he is getting into race science. If the centre (defined as the middle ground of Attila and Mussolini) moves, the principled centrist must move with it.
so it looks like duolingo is planning to become damage to be routed around
Oh hey just in time to let my subscription lapse.
My kids use Duolingo for extra training of languages they are learning in school, so this crapification hits close to home.
Any tips on current non-crap resources? Since they learn the rules and structure in school, it's the repetition of usage in a fun way that I am aiming for.
I've been using Anki; it works great but requires you to supply the discipline and willingness to learn yourself, which might not be possible for kids.
no idea, sorry. "find some wordpals online" maybe, but then you need to also deal with the vetting/safety issue
it's just so fucking frustrating
I find Duolingo to be of low quality.
I like Babbel. It's not free and they have a relatively limited number of languages, but I find the quality really good (at least for French -> Deutsch).
Unfortunately, Babbel has slop integration too.
after I've previously posted this and this, an update: both the memrise browser version and the iOS app now have "chat to a buddy" as a non-skippable step in course iteration
the "buddy" is a chatbot of unclear provenance. this page mentions "MemBot - powered by AI" at the top, which is a link to this zendesk page, but that's a dead link
Along the same lines of LLMs ruining language stuff: I just learned the acronym MTPE (Machine Translation Post Edit) in the context of excuses to pay translators less and thanks I hate it.
Can't avoid slop reading translated books, can't learn the source language without dodging slop in learning tools left and right. It's the microplastics of the internet age.
Anyway my duolingo account is no more, I have better resources for learning German anyway.
Along the same lines of LLMs ruining language stuff: I just learned the acronym MTPE (Machine Translation Post Edit) in the context of excuses to pay translators less and thanks I hate it.
Not-so-fun fact: thatās a marketing term for what amounts to basically a scam to pay people less.
I used to work for a large translation company when this first came up. Admittedly, that was almost ten years ago, but I assume this shit is even more common nowadays. The usual procedure was to have one translator translate the stuff (commonly using what's called a TM or Translation Memory, basically a user dictionary so the wording stays consistent), and then another translator to do an editing pass to catch errors. For very high-impact translations, there could be more editing passes after that.
MTPE is now basically omitting the first translator and feeding it through a customized version of what amounts to Google Translate or DeepL that can access the customer's TM data, and then handing it off to a translator for the editing pass. The catch now is that freelance translators have two rates: one for translating, depending on the language pair between $0.09 and $0.50 per word, and one for editing, which is significantly less - $0.01 to $0.12 or so per word, from what I remember. The translation rate applies for complete translations, i.e. when a word is not in the customer's TM. If it is in the TM, the editing rate applies (or, if the translator has negotiated a clever rate for themselves, there might be a third rate). With MTPE, you now essentially feed the machine heaps of content to bloat up the TM as much as possible, then flag everything as pre-translated and only for editing, and boom, you can force the cheapest rates to apply to what is essentially more work, because the quality of what comes out of these machines is complete horseshit compared to a human-translated piece.
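To make the rate trick concrete, here's a toy sketch of the payout math (rates taken from the low ends of the ranges quoted above; the word counts and the match split are invented, not anyone's actual rate card):

```python
def payout(words_new: int, words_matched: int,
           translate_rate: float = 0.09,     # $/word, fresh translation
           edit_rate: float = 0.01) -> float:  # $/word, MTPE-style editing
    """Freelancer payout: fresh words earn the translation rate,
    TM/machine-matched words only the editing rate."""
    return words_new * translate_rate + words_matched * edit_rate

doc = 10_000  # words in the job

honest = payout(doc, 0)  # billed entirely as fresh translation
mtpe = payout(0, doc)    # everything flagged as pre-translated
print(honest, mtpe)      # roughly $900 vs $100 for the same document
```

Same document, arguably more effort on the translator's side (cleaning up machine output), and roughly a ninth of the pay.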
For the customers, however, MTPE wasn't even that much cheaper. The biggest difference was in the profit margin for the translation company, to no one's surprise.
Back when I worked there, and those were the early days, a lot of freelance translators flat-out refused to do MTPE because of this. They said, if the customer wants this, they can find another translator, and because a lot of customers wanted to keep the translators they'd had for a long time, there was some leverage there.
I have no idea how the situation is today, but infinitely worse I assume.
I've logged a support ticket.