

Thanks!
So it wasn't even their random hot takes, it was reporting someone? (My guess would be reporting froztbyte's criticism, which I agree has been valid, if a bit harsh in tone.)


Some legitimate academic papers and essays have served as fuel for the AI hype and less legitimate follow-up research, but the clearest examples that come to mind would be either "The Bitter Lesson" essay or one of the "scaling law" papers (I guess Chinchilla scaling in particular?), not "Attention Is All You Need". (Hyperscaling LLMs, and the bubble fueling it, is motivated by the idea that they can just throw more and more training data at bigger and bigger models.) And I wouldn't blame the author(s) for that alone.


BlueMonday has had a tendency to go off with a half-assed understanding of actual facts and details. Each individual instance wasn't ban-worthy, but collectively I can see why it merited a temp ban. (I hope/assume it's not a permanent ban; is there a way to see?)


I was wondering why Eliezer picked chess of all things in his latest "parable". Even among the lesswrong community, chess playing as a useful analogy for general intelligence has been picked apart. But seeing that this is recent half-assed lesswrong research, that would explain the renewed interest in it.


Yud: "Woe is me, a child who was lied to!"
He really can't let that one go; it keeps coming up. It was at least vaguely relevant to a Harry Potter self-insert, but his frustrated-gifted-child vibes keep leaking into other weird places. (Like Project Lawful, among its many digressions, had an aside about how dath ilan raises its children to avoid this. It almost made me sympathetic towards the child-abusing devil worshipers who had to put up with these asides to get to the main character's chemistry and math lectures.)
Of course this is a meandering plug for his book!
Yup, now that he has a book out he's going to keep referencing back to it, and it's being added to the canon that must be read before anyone is allowed to dare disagree with him. (At least the sequences were free and all online.)
Is that… an incel shape-rotator reference?
I think shape-rotator has generally permeated the rationalist lingo for a certain kind of math aptitude; I wasn't aware the term had ties to the incel community. (But it wouldn't surprise me that much.)


I couldn't even make it through this one; he just kept repeating himself with the most absurd parody strawman he could manage.
This isn't the only obnoxiously heavy-handed "parable" he's written recently: https://www.lesswrong.com/posts/dHLdf8SB8oW5L27gg/on-fleshling-safety-a-debate-by-klurl-and-trapaucius
Even the lesswrongers are kind of questioning the point:
I enjoyed this, but don't think there are many people left who can be convinced by Ayn-Rand-length explanatory dialogues in a science-fiction guise who aren't already on board with the argument.
A dialogue that references Stanislaw Lem's Cyberiad, no less. But honestly Lem was a lot more terse and concise in making his points. I agree this is probably not very relevant to any discourse at this point (especially here on LW, where everyone would be familiar with the arguments anyway).
Reading this felt like watching someone kick a dead horse for 30 straight minutes, except at the 21st minute the guy forgets for a second that he needs to kick the horse, turns to the camera and makes a couple really good jokes. (The bit where they try and fail to change the topic reminded me of the "who reads this stuff" bit in HPMOR, one of the finest bits you ever wrote in my opinion.) Then the guy remembers himself, resumes kicking the horse, and it continues in that manner until the end.
Who does he think he's convincing? Numerous skeptical lesswrong posts have described why general intelligence is not like chess-playing and world-conquering/optimizing is not like a chess game. Even among his core audience this parable isn't convincing. But instead he's stuck on repeating poor analogies (and getting details wrong about the things he uses for analogies; he messed up some details about chess playing!).


Eh, cuck is kind of the right-wingers' word; it's tied to their inceldom and their mix of moral panic and fetishization of minorities' sexualities.


"You don't understand how Eliezer has programmed half the people in your company to believe in that stuff," he is reported to have told Altman at a dinner party in late 2023. "You need to take this more seriously." Altman "tried not to roll his eyes," according to Wall Street Journal reporter Keach Hagey.
I wonder exactly when this was. The attempted ouster of Sam Altman was November 17, 2023. So either this warning was timely (but something Sam already had the pieces in place to make a counterplay against), or a bit too late (as Sam had just beaten an attempt by the true believers to oust him).
Sam Altman has proved adept at keeping the plates spinning and wheedling his way through various deals, but I agree with the common sentiment here that his underlying product just doesn't work well enough, in a unique/proprietary enough way, for him to actually build a profitable company on it. Pivot-to-AI and Ed Zitron guess 2027 for the plates to come crashing down, but with an IPO on the way to infuse more cash into OpenAI, I wouldn't be that surprised if he delays the bubble pop all the way to 2030 and personally gets away clean, with no legal liability and some stock sales lining his pockets.


"I'm sort of a complex chaotic systems guy, so I have a low estimate that I actually know what the nonlinear dynamic in the memosphere really was," he said. (Translation: It's complicated.)
Why do these people have the urge to talk like this? Does it make them feel smarter? Do they think it makes them look smart to other people? Are they so caught up in their field that they can't code-switch to normal-person talk?


Remember when a bunch of people poured their life savings into GameStop and started a financial doomsday cult once they lost everything? That will happen again if OpenAI goes public.
I've seen redditors on /r/singularity planning on buying OpenAI stock if it goes public. And judging by Tesla, cultists buying meme stock can keep up their fanaticism through quite a lot.


It seems like a complicated but repeatable formula: start a non-profit dedicated to some technology; leverage the charity status for influence, tax avoidance, PR, and recruiting true believers in the initial stages; then make a bunch of financial deals conditional on your non-profit converting to for-profit; then claim you need to convert to for-profit or your organization will collapse!
Although I'm not sure how repeatable it is without the "too big to fail" threat of lost business to hold over state AGs. OTOH, states often bend the rules to gain (or even just avoid losing) embarrassingly few jobs, so IDK.


i've listened to his podcast, i've read his articles, he is pretty up front about what his day job is and that he is a disappointed fanboy for tech. the dots are 1/1000th of an inch apart.
For comparison, I've only read Ed's articles, not listened to his podcast, and I was unaware of his PR business. This doesn't make me think his criticisms are wrong, but it does make me concerned he's overlooked critiquing and analyzing some aspects of the GenAI industry because of his connections to those aspects.


This week's South Park makes fun of prediction markets! Hanson and the rationalists can be proud their idea has gone mainstream enough to be made fun of. The episode actually does a good job highlighting some of the issues with the whole concept: the twisted incentives, the insider trading, and the way it fails to actually create good predictions (as opposed to just vibes and degenerate gambling).


and the person who made up the "math pets" allegation claimed no such source
I was about to point out that I think this is the second time he's claimed math pets had absolutely no basis in reality (and someone countered with a source that forced him to back down), but I double-checked the posting date and this is the example I was already thinking of. Also, we have supporting sources that didn't say as much directly but implied it heavily: https://www.reddit.com/r/SneerClub/comments/42iv09/a_yudkowsky_blast_from_the_past_his_okcupid/ or, like, the entire first two-thirds of the plot of Planecrash!


Here: https://glowfic.com/posts/4508
Be warned: the first three quarters of the thread don't have much of a plot and are basically two to three characters talking; then the last quarter time-skips ahead and gives massive, clunky worldbuilding dumps. (This is basically par for the course with glowfic; the format supports dialogue- and interaction-heavy stories, and it's really easy to just kind of let the plot meander. Planecrash, for all of its bloat and diversions into eugenics lectures, is actually relatively plot-heavy for glowfic.)
On the upside, the first three quarters almost read like a sneer on rationalists.


We've sneered about dath ilan before on the reddit sneerclub, and occasionally I work references to dath ilan's lore into sneers, but other than that no.


Where else am I supposed to find deep analyses of the economic implications of 1st-level wizards and clerics in an early modern setting? And analyses of Intelligence score distributions across the nations of Golarion?


You are close! It is a BDSM AU (inspired by an Archive of Our Own trend of writing alternate-universe settings of a particular flavor), i.e. everyone identifies as "Dominant" or "Submissive", and that identification is more important than gender in most ways. Ironically, the dath ilan character is the one freaked out by this.


I mean, the aftermath of the Butlerian Jihad eventually led to brutal feudalism that lasted a really long time and halted multiple lines of technological and social development, so I wouldn't exactly call it a success for the common person.
Thanks for the information. I won't speculate further.