Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

Last weekā€™s thread

(Semi-obligatory thanks to @dgerard for starting this - this one was a bit late, I got distracted)

  • flavia@lemmy.blahaj.zone · 23 points · 23 days ago

    My organic chemistry professor used ChatGPT to write a lab procedure. My other chemistry professor's daughter is VP of AI at Microsoft. AAAAA

  • maol@awful.systems · 22 points · edited · 23 days ago

    Eugenics in action:

    Danish parenting tests under fire after baby removed from Greenlandic mother

    Psychometric tests are widely used in Denmark as part of child protection investigations into new parents, and have long been criticised by human rights bodies as culturally unsuitable for Greenlandic people and other minorities.

    In a 2022 report, the institute said that because the tests were not adapted to take cultural differences into account, Greenlandic parents ran "the risk of obtaining low test scores, so that it is concluded, for example, that they have reduced cognitive abilities, without there being actual evidence for this."

    Psychological assessments of her were made by a Danish-speaking psychologist. Kronvold, whose first language is Kalaallisut (West Greenlandic), is not fluent in Danish.

    • slopjockey@awful.systems · 17 points · 23 days ago

      Oh man that is so grim

      Kronvold, 38, was given an FKU test in 2014 before the birth of her second child, a boy, and again recently while pregnant with her third child. Speaking through an intermediary, she told the Guardian that on this last occasion she was told it was to see if she was "civilised enough".

      • maol@awful.systems · 9 points · 22 days ago

        Rationalists like to keep all their eugenics talk hypothetical or speculative, because if you ever hear or read about actual neo-eugenics it becomes clear how outrageous it is.

    • self@awful.systems · 15 points · 19 days ago

      1.2 thousand upvotes for the LLM equivalent of adding a little astrology to your holistic medicine. reddit ain't ok

    • Mii@awful.systems · 13 points · 19 days ago

      Promptfondlers too lazy to even fondle prompts anymore. I'm sure this is the prime target demographic for Elon's brain chips.

  • self@awful.systems · 20 points · 22 days ago

    the richest boy in the world sued to stop The Onion from turning infowars into a parody of itself on the grounds that he thinks infowars' twitter accounts shouldn't be transferred as part of the bankruptcy, even though that's something that happens constantly and also wouldn't impact the rest of the bankruptcy proceedings even if it were grounded in anything resembling fact

    Musk has also tweeted occasionally that he believes The Onion is not funny.

    it's getting really hard to adequately describe how funny musk isn't. it's not just try-hard shit like the weird sink thing, the soul-sucking cameos, or the fact that he's literally throwing his money into stopping a comedy site from existing - it's everything taken as a whole. I'd call him anti-comedy, but he's so much less interesting than that implies

    • swlabr@awful.systems · 11 points · 22 days ago

      The Onion clowns on Ol' Musky constantly, despite his efforts to shut them down. Around the peak SpaceX buzz, they wrote a headline that was like "Musk invents the first infinitely divorceable wife", which he managed to scrub from the internet (or at least, I can't find it within 5 seconds), but other than that, he can only cope and seethe. He knows The Onion is funny and can do nothing to become funny himself.

      I would label him as anti-humor or humorless. Dishumorous?

    • YourNetworkIsHaunted@awful.systems · 8 points · edited · 22 days ago

      Musk is the most boring and pathetic kind of unfunny, where he desperately wants to be in on the joke but is terrified that the joke is on him (because it is). Rather than accept this with any kind of humility, he cannot accept the L and has basically spent all his vast money and power making that everyone else's problem.

      He is the worst mad scientist, ranting about how they called him mad when what we actually said was "lol u mad bro?"

    • Soyweiser@awful.systems · 12 points · edited · 22 days ago

      Sidenote: Love how the tech VCs all grew up in the media landscape of tech workers going 'the management of this company is a group of idiots' and then didn't think that would apply to themselves.

  • self@awful.systems · 17 points · 22 days ago

    after going closed-source, redis is now doing a matt and trying to use trademark to take control over community-run projects. stay tuned to the end of the linked github thread where somebody spots their endgame

    this is becoming a real pattern, and it might deserve a longer analysis in the form of a blog post

    • hrrrngh@awful.systems · 10 points · 21 days ago

      I don't think the main concern is with the license. I'm more worried about the lack of open governance and Redis prioritizing their own functionality at the expense of others. An example is client-side caching in redis-py, https://github.com/redis/redis-py/blob/3d45064bb5d0b60d0d33360edff2697297303130/redis/connection.py#L792. I've tested it and it works just fine on Valkey 7.2, but there is a gate that checks if the server is not Redis and throws an exception. I think this is the behavior that might spread.

      Jesus, that's nasty
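      [Editor's note: for readers who haven't clicked through, the gate being described is, roughly, a client feature that works fine against compatible servers but refuses to run when the server doesn't identify itself as Redis. A minimal sketch of that pattern; the names and structure here are illustrative, not the actual redis-py code:]

      ```python
      class UnsupportedServerError(Exception):
          """Raised when a feature is gated off for non-Redis servers."""

      def enable_client_side_caching(server_info: dict) -> str:
          # Client-side caching rides on protocol-level invalidation
          # messages, which compatible forks like Valkey can implement
          # just as well, so this check gates on branding rather than
          # on any actual capability of the server.
          if server_info.get("server_name", "redis") != "redis":
              # The gate: refuse purely because the server isn't Redis.
              raise UnsupportedServerError(
                  "client-side caching is only supported against Redis"
              )
          return "caching enabled"
      ```

      [The point of the sketch: a Valkey server that supports the feature perfectly well still gets an exception thrown at it, which is the compatibility-breaking behavior the comment worries might spread.]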

      • self@awful.systems · 8 points · 21 days ago

        it is! and "we have no plans to break compatibility" needs to be called out as bullshit every time it's brought up, because it is a tactic. in the best case it's a verbal game - they have no plans to maintain compatibility either, so they can pretend these unnecessary breakages are accidental.

        I can't say I see the outcome in the GitHub issue as a positive thing. both redis and the project maintainers have done a sudden 180 in terms of their attitude, and the original proposal is now being denied as a misunderstanding (which it absolutely wasn't) now that it proved to be unpopular. my guess (from previous experience and the dire warnings in that issue) is that redis is going to attempt the following:

        • take over the projectā€™s governance quietly via proxies
        • once thatā€™s done, engage in a policy where changes that break compatibility with valkey and other redis-likes are approved and PRs to fix compatibility are de-prioritized or rejected outright

        if this is the case, it's a much worse situation than them forking the project - this gets them the outcome they wanted, but curtails the community's ability to respond to what will happen until it's far too late.

    • Soyweiser@awful.systems · 16 points · 22 days ago

      Also, this plan very much has a fuck-disabled-people-and-old-people factor. And what a lonely world they live in.

      • YourNetworkIsHaunted@awful.systems · 10 points · 23 days ago

        I think the idea is to solve that by networking all the self-driving cars together. I'm sure the long history of trying to get vendors to agree on a standard when they all benefit individually from the lock-in of proprietary systems has nothing to teach us about this prospect.

        • self@awful.systems · 13 points · 22 days ago

          other than interop, the big problem I have with this is security. car modding for performance is already a big thing, and a car mod that makes other cars slow down, stop, get out of your way, or otherwise malfunction would be incredibly popular with assholes of all varieties, and car modding has many. the current state of automotive security is a fucking shitshow, and I can't figure out any kind of security model for this that isn't vulnerable to a wide variety of obvious attacks. even a perfect inter-vendor attestation chain (good fucking luck) is vulnerable to hooking an ECU (or whatever the ruggedized monitoring microcontroller unit for a magic self-driving EV is) and a radio up to a variety of fake sensors and crafting inputs such that the thing starts transmitting "wait no stop here" signals to all the surrounding cars

          but then again, all of this is probably intentional, because it creates a privileged class of people who can afford to fuck with self-driving car networking and not worry about any associated fines, and an unprivileged class who just have to put up with everything being so much worse. in a world where you can roll smoke into a Subway with relatively few consequences (not to mention all the other horseshit Truck Guys get away with), it's not a hard outcome to imagine.

          • YourNetworkIsHaunted@awful.systems · 7 points · 22 days ago

            I'm sure they'll try to incorporate an LLM into the stack somewhere, leading to at least one car that's exposed to the "pretend to be a fire truck" attack.

        • Soyweiser@awful.systems · 12 points · edited · 22 days ago

          Complexity-wise, security-wise, and latency-wise this sounds like a great plan. Can't wait for people being stuck in cars for days because the freeway offramps are causing livelocks. (Like the example of the Waymo cars all honking at each other in the parking lot.)

          Wonder if they are going to use the routing solutions used in TCP, and then discover that cars are heavier and slower than data and suddenly waste a lot of people's time and money.

          E: a small detail which I don't know if other countries also have, but in the Dutch traffic system, emergency services and buses (and perhaps a few hackers who really want to be in trouble with the law (though I always heard this described as a 'this exists, but we don't mess with it' system)) have a system where you can get priority at traffic lights, so they turn green faster. Wonder if other countries have this, and how much they realize this will not work for Waymo systems.

          • self@awful.systems · 6 points · 22 days ago

            a system where you can get priority at traffic lights, so they turn green faster

            the US has this too (you can watch the stoplights suddenly reprioritize as an ambulance or cop car with their lightbars and sirens running approaches) and I'm honestly not sure why I haven't ever seen it abused by some shithead with a HackRF or similar. maybe the penalties make it safer to just willingly run a red light?

            • Soyweiser@awful.systems · 6 points · 22 days ago

              There recently was a bit of a 'hackers can/are abusing this' scare here, and well, I think most people don't want to abuse the system like this and understand the risks and consequences. And there is also the factor of: how would you get caught? So I assume the few people who know how this works don't actually advertise it. They might have also updated it to actually use some form of encryption; however, it used to (from what I heard) not be encrypted (no idea about logging either). There is also the whole thing that messing with traffic lights vs messing with speed traps feels like a very different thing.

        • sc_griffith@awful.systems · 7 points · 21 days ago

          here's a thought. what if we just stacked every building on top of each other and had the cars drive vertically along the outside. then you wouldn't need roads at all

  • sinedpick@awful.systems · 14 points · 21 days ago

    someone pointed out that (paraphrasing) "yeah, you and I are never gonna care for autoplag output but kids are gonna grow up on it and expect it for everything" and that makes me want to do bad things.

    • Amoeba_Girl@awful.systems · 10 points · 21 days ago

      ehh i don't know, as a child i'd occasionally get a vhs with weird cheap counterfeit cartoons on it and they just creeped me out. children can actually tell imo.

      • YourNetworkIsHaunted@awful.systems · 8 points · 21 days ago

        I can see the challenge of sorting out AI slop from actual art or writing becoming normalized, in the same way that occasionally having to check your spam filter in case an important work email got filed alongside "GrOwYoUrEgGpLaNtEmOjIfOrChEaP" is normal, but there's a difference between a world where AI slop exists and AI slop itself actually being worth a damn.

  • Sailor Sega Saturn@awful.systems · 13 points · edited · 23 days ago

    I woke up and immediately read about something called "Defense Llama". The horrors are never ceasing: https://theintercept.com/2024/11/24/defense-llama-meta-military/

    Scale AI advertised their chatbot as being able to:

    apply the power of generative AI to their unique use cases, such as planning military or intelligence operations and understanding adversary vulnerabilities

    However their marketing material, as is tradition, includes an example of terrible advice. Which is not great given it's about blowing up a building "while minimizing collateral damage".

    Scale AI's response to the news pointing this out – complaining that everyone took their murderbot marketing material seriously:

    The claim that a response from a hypothetical website example represents what actually comes from a deployed, fine-tuned LLM that is trained on relevant materials for an end user is ridiculous.

    • BlueMonday1984@awful.systems (OP) · 13 points · 23 days ago

      On the one hand, that spectacular failure could potentially dissuade the military from buying in and prolonging this bubble. On the other hand, having an accountability sink for war crimes would be a tempting offer to your average army.

      • istewart@awful.systems · 13 points · 22 days ago

        The eventual war crimes trials will very likely reveal that ā€œAI targetingā€ has already been used as an accountability sink for a premeditated ethnic cleansing policy in Gaza.

      • froztbyte@awful.systems · 7 points · 23 days ago

        I've been wondering about this

        On the one hand, military procurement (at least afaik) tends toward complete, functional products

        On the other hand, military R&D programs have been among the most spectacularly profligate financial black holes in recent decades

        None of the options involved feel great, even if "it gets shunted from mil procurement and all industry claims get publicly brandished as the bullshit it is" comes to pass (which tbh still feels like an optimistic outcome, with unclear time horizons)

        • YourNetworkIsHaunted@awful.systems · 7 points · 23 days ago

          I mean it fits into the pattern of procurement projects that aren't allowed to fail despite having had serious coherence issues starting at the design stage. Though the military is usually less prone to the "problem in search of a solution" dynamic that VCs fall for, once a project gets started it can shamble forward as a zombie for years before anyone finds the political will to kill it.

  • JFranek@awful.systems · 13 points · 21 days ago

    The promptfans testing OpenAI Sora have gotten mad that it's happening to them and (temporarily) leaked access to the API.

    https://techcrunch.com/2024/11/26/artists-appears-to-have-leaked-access-to-openais-sora/

    "Hundreds of artists provide unpaid labor through bug testing, feedback and experimental work for the [Sora early access] program for a $150B valued [sic] company," the group, which calls itself "Sora PR Puppets," wrote in a post …

    "Well, they didn't compensate actual artists, but surely they will compensate us."

    "This early access program appears to be less about creative expression and critique, and more about PR and advertisement."

    OK, I could give them the benefit of the doubt: maybe they're new to the GenAI space, or the general ML space … or IT.

    But I'm not going to. Of course it's about PR hype.

    • Mii@awful.systems · 9 points · 21 days ago

      I'd say lol but I'm like 72% sure this is straight out of the video game industry's playbook and very much intentional, to create hype because everyone has forgotten this shit even exists.

      Also, I'm still waiting for just one use case for video-generating autoplag that is, even in theory, not either morally reprehensible or outright criminal.

  • Architeuthis@awful.systems · 13 points · 19 days ago

    NASB, does anybody else think the sudden influx of articles (from Kurzgesagt to the recent WaPo one) pushing the idea that you can't lose weight by exercise has anything to do with Ozempic being aggressively marketed at the same time?

    • YourNetworkIsHaunted@awful.systems · 8 points · 18 days ago

      Most likely. Not trying to be conspiratorial, but it's been deeply disheartening to see some of the toxic rhetoric around weight loss get high-profile pushback only in the context of pushing Ozempic and friends, which means leaving in place the ideological frame that infantilizes and demonizes fat people, and adding its own brand of misinformation.

    • swlabr@awful.systems · 12 points · 22 days ago

      Wow, that starts bad and gets worse.

      It starts with this quote, which is absolutely fine:

      But others said the admissions exam and additional application requirements are inherently unfair to students of color who face socioeconomic disadvantages. Elaine Waldman, whose daughter is enrolled in Reed's IHP, said the test is "elitist and exclusionary," and hoped dropping it would improve the diversity of the program.

      Now for the expert analysis:

      Recognizing gifted students is inherently discriminatory.

      Yes! This is true, following from the quote, as long as the thing that is "inherently" discriminated for is socioeconomic background. Of course, Animats immediately makes it about race.

      [insert common race science stats here] There are other numbers from other sources, but they all rank in that order. There's a huge amount of denial about this. There are more articles trying to explain this away than ones that report the results.

      AKA I disagree with the analysis and consensus that all this IQ stuff is socioeconomic rather than genetic.

      (Average US Black IQ has been rising over the last few decades, but the US definition of "Black" includes mixed race. That may be a consequence of intermarriage producing more brown people, causing reversion to the mean. IQ vs 23andMe data would be interesting. Does anyone collect that?)

      Jesus fucking christ.

      Gladwell's new book, "Revenge of the Tipping Point", goes into this at length. The Ivy League is struggling to avoid becoming majority-Asian. Caltech, which has no legacy admissions, is majority-Asian. So is UC Berkeley.[3]

      Nobody tell this guy that Gladwell is black.

      Of course, this may become less significant once AI gets smarter and human intelligence becomes less necessary in bulk. Hiring criteria for railroads and manufacturing up to WWII favored physically robust men with moderate intelligence. Until technology really got rolling, the demand for smart people was lower than their prevalence in the population.

      I guarantee that in the not happening future where AI is smarter than humans, chuds like this guy will still be racist.

      We may be headed back in that direction. Consider Uber, Doordash, Amazon, and fast food. Machines think and plan, most humans carry out the orders of the machines. A small number of humans direct.

      🙄🙄🙄

    • V0ldek@awful.systems · 14 points · 23 days ago

      Software licensing is notoriously labyrinthine, so resources like the site Microsoft will close – Get Licensing Ready – can be very handy. Today, the site offers over 50 training modules plus documentation.

      I'm sorry, mister MSFT, why did you cause there to be more educational content about your stupid licenses than there is for theoretical physics in an undergrad programme? Have you ever considered that it's time to stop? Get some help?