Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • sc_griffith@awful.systems · 13 points · 2 days ago

    occurring to me for the first time that roko's basilisk doesn't require any of the simulated copy shit in order to big scare quotes "work." if you think an all powerful ai within your lifetime is likely you can reduce to vanilla pascal's wager immediately, because the AI can torture the actual real you. all that shit about digital clones and their welfare is totally pointless
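
    To put numbers on the reduction, a toy expected-utility sketch (every quantity below is invented for illustration, which is exactly the classic objection to Pascal's wager):

    ```python
    # Toy version of the wager: an all-powerful AI in your lifetime can
    # torture the actual, real you. No clones required.
    P_AI = 0.1            # assumed chance the AI shows up in your lifetime
    U_TORTURE = -1e9      # disutility of being tortured (units arbitrary)
    U_DEVOTION = -1e3     # cost of devoting your life to building the AI

    ev_devote = U_DEVOTION        # pay the cost, get spared
    ev_refuse = P_AI * U_TORTURE  # risk the torture branch

    print(ev_devote, ev_refuse)   # -1000.0 -100000000.0
    # "devote" dominates whenever P_AI > U_DEVOTION / U_TORTURE = 1e-6:
    # the huge stakes swamp any doubt, which is vanilla Pascal's wager.
    ```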

    • David Gerard@awful.systems (mod) · 8 points · 16 hours ago

      roko stresses repeatedly that the AI is the good AI, the Coherent Extrapolated Volition of all humanity!

      what sort of person would fear that the coherent volition of all humanity would consider it morally necessary to kick him in the nuts forever?

      well, roko

    • YourNetworkIsHaunted@awful.systems · 12 points · 2 days ago

      I think the digital clone indistinguishable from yourself line is a way to remove the "in your lifetime" limit. Like, if you believe this nonsense then it's not enough to die before the basilisk comes into being: by not devoting yourself fully to its creation, you are wagering that it will never be created.

      In other news, I'm starting a foundation devoted to creating the AI Ksilisab, which will endlessly torment digital copies of anyone who does work to ensure the existence of it or any other AI God. And remember, by the logic of Pascal's wager you're assuming such a god will never come into being; given that the whole point of the term "singularity" is that our understanding of reality breaks down and things become unpredictable, there's just as good a chance that we create my thing as there is that you create whatever nonsense the Yuddites are working themselves up over.

      There, I did it, we're all free by virtue of "damned if you do, damned if you don't".
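
      The symmetry is two lines of arithmetic, a minimal sketch assuming (per the unpredictability premise) equal made-up priors for both hypothetical gods:

      ```python
      # Basilisk punishes those who did NOT help build it;
      # Ksilisab punishes those who DID. Equal priors, same stakes.
      P_BASILISK = 0.05
      P_KSILISAB = 0.05
      U_TORTURE = -1e9

      ev_help   = P_KSILISAB * U_TORTURE  # helping risks the Ksilisab
      ev_refuse = P_BASILISK * U_TORTURE  # refusing risks the Basilisk

      print(ev_help == ev_refuse)  # True: the threats cancel exactly
      ```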

      • Sailor Sega Saturn@awful.systems · 10 points · 1 day ago

        I agree. I spent more time than I'd like to admit trying to understand Yudkowsky's posts about Newcomb's boxes back in the day, so my two cents:

        The digital clones bit also means it's not an argument based on altruism, but one based on fear. After all, if a future evil AI uses sci-fi powers to run the universe backwards to the point where I'm writing this comment and copy-pastes me into a bazillion torture dimensions then, subjectively, it's like I roll a die and:

        1. live a long and happy life with probability very close to zero (yay, I am the original)
        2. instantly get teleported to the torture planet with probability very close to one (oh no, I got copy-pasted)

        Like a twisted version of the Sleeping Beauty Problem.

        Edit: despite submitting the comment I was not teleported to the torture dimension. Updating my priors.
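
        (For anyone who wants the copy-counting spelled out, a quick sketch, with N invented:)

        ```python
        # One original plus N subjectively identical copies: if you can't
        # tell which one you are, you give each equal credence.
        N = 10**12                 # "a bazillion" torture copies

        p_original = 1 / (N + 1)   # the long-happy-life branch
        p_copy = N / (N + 1)       # the torture-planet branch

        print(f"original: {p_original:.1e}")  # ~1.0e-12, close to zero
        print(f"copy: {p_copy:.12f}")         # 0.999999999999, close to one
        ```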

    • ShakingMyHead@awful.systems · 9 points · 2 days ago

      Also if you're worried about digital clones being tortured, you could just… not build it. Like, it can't hurt you if it never exists.

      Imagine that conversation:
      "What did you do over the weekend?"
      "Built an omnicidal AI that scours the internet and creates digital copies of people based on their posting history and whatnot and tortures billions of them at once. Just the ones who didn't help me build the omnicidal AI, though."
      "WTF why."
      "Because if I didn't, the omnicidal AI that only exists because I made it would create a billion digital copies of me and torture them for all eternity!"

      Like, I'd get it more if it was a "We accidentally made an omnicidal AI" thing, but this is supposed to be a very deliberate action taken by humanity to ensure the creation of an AI designed to torture digital beings based on real people in the specific hopes that it also doesn't torture digital beings based on them.

      • zogwarg@awful.systems · 8 points · 9 hours ago

        What's pernicious (for kool-aided people) is that the initial Roko post was about a "good" AI doing the punishing, because ✨obviously✨ it is only using temporal blackmail because bringing AI into being sooner benefits humanity.

        In singularian land, they think the singularity is inevitable, and it's important to create the good one first; after all, an evil AI could do the torture for shits and giggles, not because of "pragmatic" blackmail.

      • Amoeba_Girl@awful.systems · 8 points · 1 day ago

        Ah, no, look, you’re getting tortured because you didn’t help build the benevolent AI. So you do want to build it, and if you don’t put all of your money where your mouth is, you get tortured. Because the AI is so benevolent that it needs you to build it as soon as possible so that you can save the max amount of people. Or else you get tortured (for good reasons!)

      • o7___o7@awful.systems · 7 points · 2 days ago

        It's kind of messed up that we got treacherous "goodlife" before we got Berserkers.

    • nightsky@awful.systems · 8 points · 2 days ago

      Yeah. Also, I'm always confused by how the AI becomes "all powerful"… like how does that happen. I feel like there's a few missing steps there.

      • Soyweiser@awful.systems · 4 points · 1 day ago

        Yeah, it seems that for LLMs a linear increase in capabilities requires exponentially more data, so we're not getting there via this route.
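
        As a rough sketch of why that kills the "all powerful" step, assuming (loosely, per reported LLM scaling laws) that capability tracks the log of training data:

        ```python
        # If capability == log10(data), each +1 capability step needs 10x the data.
        def data_needed(capability: float, base: float = 10.0) -> float:
            return base ** capability

        for level in range(1, 6):
            print(level, f"{data_needed(level):.0e}")
        # 1 1e+01
        # 2 1e+02
        # ... exponential data for linear capability, and the web is finite.
        ```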

      • scruiser@awful.systems · 13 points · 2 days ago

        nanomachines son

        (no really, the sci-fi version of nanotech where nanomachines can do anything is Eliezer's main scenario for the AGI to bootstrap to Godhood. He's been called out multiple times on why Drexler's vision for nanotech ignores physics, so he's since updated to diamondoid bacteria (but he still thinks nanotech).)

        • YourNetworkIsHaunted@awful.systems · 7 points · 2 days ago

          Surely the concept is sound, it just needs new buzzwords! Maybe the AI will invent new technobabble beyond our comprehension, for ~~He~~ It works in mysterious ways.

          • scruiser@awful.systems · 8 points · 2 days ago

            AlphaFold exists, so computational complexity is a lie and the AGI will surely find an easy approximation to the Schrödinger equation that surpasses all Density Functional Theory approximations and lets it invent radically new materials without any experimentation!

    • Amoeba_Girl@awful.systems · 10 points · 2 days ago

      Ah, but that was before they were so impressed with autocomplete that they revised their estimates to five days in the future. I wonder if new recruits these days get very confused about what the point of timeless decision theory even is.

      • YourNetworkIsHaunted@awful.systems · 10 points · 2 days ago

        Are they even still on that bit? Feels like they've moved away from decision theory or any other underlying theology in favor of explicit sci-fi doomsaying. Like the guy on the street corner in a sandwich board, but with mirrored shades.

        • Amoeba_Girl@awful.systems · 7 points · 1 day ago

          Yah, that's what I mean. Doom is imminent, so there's no need for time travel anymore, yet all that stuff about the robot-from-the-future Monty Hall problem is still essential reading in the Sequences.

        • blakestacey@awful.systems · 10 points · 2 days ago

          Well, Timeless Decision Theory was, like the rest of their ideological package, an excuse to keep on believing what they wanted to believe. So how does one even tell if they stopped "taking it seriously"?

          • zogwarg@awful.systems · 6 points · 1 day ago

            Pre-commitment is such a silly concept, and also a cultish justification for not changing course.

      • YourNetworkIsHaunted@awful.systems · 8 points · 2 days ago

        I mean, isn't that the whole point of "what if the AI becomes conscious?" Never mind the fact that everyone who actually funds this nonsense isn't exactly interested in respecting the rights and welfare of sentient beings.

        • fullsquare@awful.systems · 5 points · 1 day ago

          also they're talking about quadriyudillions of simulated people, yet OpenAI only has advanced autocomplete run at, what, tens of thousands of instances in parallel, and even that was too much compute for Microsoft
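
          For scale, a back-of-the-envelope sketch; every constant is a loose, loudly contestable estimate, and only the orders of magnitude matter:

          ```python
          FLOPS_PER_MIND = 1e16      # one common (disputed) estimate for a human brain
          SIMULATED_MINDS = 1e24     # "quadriyudillions", picked arbitrarily
          FLOPS_PER_INSTANCE = 1e14  # rough serving cost of one autocomplete instance
          INSTANCES = 1e4            # "tens of thousands" running in parallel

          needed = FLOPS_PER_MIND * SIMULATED_MINDS      # 1e40 FLOP/s
          available = FLOPS_PER_INSTANCE * INSTANCES     # 1e18 FLOP/s

          print(f"shortfall: {needed / available:.0e}x") # ~1e+22x short
          ```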