Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • lurker@awful.systems · 11 hours ago

    this article involves an incredibly eyebrow-raising take from one of the people at METR (the team behind the famous "tasks AI can do doubles every 7 months" graph), who says AI will eventually become more impactful than the invention of agriculture and more transformative than the emergence of the human species, and also calls it an intelligent alien species. Immensely funny amongst the other people saying "please stop treating AI like magic"

    the Harari guy also seems to be into transhumanism, if a skim of his Wikipedia page is correct

    • Amoeba_Girl@awful.systems · 5 hours ago

      I like this one from "A.I. policy researcher" Helen Toner.

      I believe the narrative around A.I.’s negative environmental impacts has gotten way out of hand. Yes, on aggregate the industry uses quite a bit of energy and water, but that’s true of any large industry. The relevant question is how it compares to other industries, and how it compares to how much value we’re getting out of it.

      Yes girl, good job. Now maybe try connecting these two thoughts!

      • Amoeba_Girl@awful.systems · 4 hours ago

        And of course, on that theme, from Melanie "Computer scientist" Mitchell

        On the bad side: A.I.-induced psychosis! On the good side, some people will get a lot out of using chatbots as therapists.

        These people have definitely offloaded the cognitive load to chatbots.

    • lagrangeinterpolator@awful.systems · 2 hours ago

      It took a full eleven paragraphs before the article even mentioned AI. Before that, it was a bunch of stuff about how Wikipedia is conservative and Gen Z and Gen Alpha have no attention span. If the author has to bury the real point and attempt to force this particular rhetorical framing, I think the haters are winning. Well done everyone.

      my comments about this turd of an article

      These three controversies from Wikipedia’s past reveal how genuine conversations can achieve—after disagreements and controversy—compromise and evolution of Wikipedia’s features and formats. Reflexive vetoes of new experiments, as the Simple Summaries spat highlighted last summer, is not genuine conversation.

      Supplementing Wikipedia’s Encyclopedia Britannica–style format with a small component that contains AI summaries is not a simple problem with a cut-and-dried answer, though neither were VisualEditor or Media Viewer.

      Surely, AI summaries are exactly the same as stuff like VisualEditor and Media Viewer, which were tools that helped contributors improve articles. Please ignore my rhetorical sleight of hand. They’re exactly the same! Okay, I did mention AI hallucinations in one sentence, but let’s move on from that real quick.

      A still deeper crisis haunts the online encyclopedia: the sustainability of unpaid labor. Wikipedia was built by volunteers who found meaning in collective knowledge creation. That model worked brilliantly when a generation of internet enthusiasts had time, energy, and idealism to spare. But the volunteer base is aging. A 2010 study found the average Wikipedia contributor was in their mid-twenties; today, many of those same editors are now in their forties or fifties.

      Yeah, because Wikipedia editors are permanently static. Back in 2001, Jimmy Wales handpicked a bunch of teenagers to have the sacred title of Wikipedia Editor, and they are the only ones who will ever be allowed to edit Wikipedia. Oh wait, it doesn’t work like that. Older people retire and move on, and new people join all the time.

      Meanwhile, the tech industry has discovered how to extract billions in value from their work. AI companies train their large language models on Wikipedia’s corpus. The Wikimedia Foundation recently noted it remains one of the highest-quality datasets in the world for AI development. Research confirms that when developers try to omit Wikipedia from training data, their models produce answers that are less accurate, less diverse, and less verifiable.

      Now that we have all these golden eggs, who needs the goose anymore? Actually, it is Inevitable that the goose must be killed. It is progress. It is the advancement of technology. We just have to accept it.

      The irony is stark. AI systems deliver answers derived from Wikipedia without sending users back to the source. Google’s AI Overviews, ChatGPT, and countless other tools have learned from Wikipedia’s volunteer-created content—then present that knowledge in ways that break the virtuous cycle Wikipedia depends on. Fewer readers visit the encyclopedia directly. Fewer visitors become editors. Fewer users donate. The pipeline that sustained Wikipedia for a quarter century is breaking down.

      So AI is a parasite that takes from Wikipedia, contributes nothing in return, and in fact actively chokes it out? And you think the solution is for Wikipedia to just surrender and implement AI features? Do you keep forgetting what point you’re trying to make?

      Meanwhile, AI systems should credit Wikipedia when drawing on its content, maintaining the transparency that builds public trust. Companies profiting from Wikipedia’s corpus should pay for access through legitimate channels like Wikimedia Enterprise, rather than scraping servers or relying on data dumps that strain infrastructure without contributing to maintenance.

      Yeah, what a wonderful suggestion. The AI companies just never realized all this time that they could use legitimate channels and give back to the sources they use. It’s not like they are choosing to do this because they have no ethics and want the number to go up no matter the costs to themselves or to others.

      Wikipedia has survived edit wars, vandalism campaigns, and countless predictions of its demise. It has patiently outlived the skeptics who dismissed it as unreliable. It has proven that strangers can collaborate to build something remarkable.

      Wikipedia has survived countless predictions of its demise, but I’m sure this prediction of its demise is going to pan out. After all, AI is more important than electricity, probably.

      • fiat_lux@lemmy.world · 1 hour ago

        This snippet at the bottom of the NASDAQ link partially explains why:

        Engineered by Benzinga Neuro, Edited by Pooja Rajkumari

        The GPT-4-based Benzinga Neuro content generation system exploits the extensive Benzinga Ecosystem, including native data, APIs, and more to create comprehensive and timely stories for you.

  • wizardbeard@lemmy.dbzer0.com · 15 hours ago

    OT: Anybody up for making convincing fake book cover/jacket art for ā€œDon’t Build the Torment Nexusā€?

    It just occurred to me that having that as a fake book that's actually just a container for shit would make for a great addition to my desk at work, and surprisingly, I'm not finding any suitable pre-existing fake covers myself.

  • wizardbeard@lemmy.dbzer0.com · 15 hours ago

    Y Combinator CEO is launching a "dark money group" (not super familiar with the term; I guess they mean a political lobbying group), because completely fucking over the entire tech startup space through VC shenanigans and manipulating tech-sphere opinion through controlled social media with HackerNews wasn't enough.

    Lemmy thread that made me aware: https://lemmus.org/post/20140570

    Actual article: https://missionlocal.org/2026/02/sf-garry-tan-california-politics-garrys-list/

    • sc_griffith@awful.systems · 29 minutes ago

      there's no real definition of the term, but "dark money group" usually refers to a group that helps its secret funders influence elections, rather than a lobbying group

      • saucerwizard@awful.systems · 11 hours ago

        But seriously, between the alcohol market being a complete shitshow now and overproduction of microdistilleries/breweries (the dieback is just starting here)…I think I picked a good moment to fall to pieces.

        Also it was only a matter of time before we lost airpod privileges tbh.

    • Amoeba_Girl@awful.systems · 20 hours ago

      I was getting excited to read this but seeing the word "hyperstition" used three times in the abstract put a bit of a damper on things hahah

    • o7___o7@awful.systems · 20 hours ago

      AI Singularity Fantasies: Tracing Mythinformation from Erewhon to Spiritual Machines

      That title is a banger

    • CinnasVerses@awful.systems · 18 hours ago

      I like this reply on Reddit:

      I do my PhD in fair evaluation of ML algorithms, and I literally have enough work to go through until I die. So much mess, non-reproducible results, overfitting benchmarks, and worst of all this has become a norm. Lately, it took our team MONTHS to reproduce (or even just run) a bunch of methods to just embed inputs, not even train or finetune.

      I see maybe a solution, or at least help, in closer research-business collaboration. Companies don’t care about papers really, just to get methods that work and make money. Maxing out drug design benchmark is useless if the algorithm fails to produce anything usable in real-world lab. Anecdotally, I’ve seen much better and more fair results from PhDs and PhD students that work part-time in the industry as ML engineers or applied researchers.

      This can go a good way (most of the field becomes a closed circle like parapsychology) or a bad way (people assume the results are true and apply them, like social priming, or Reinhart and Rogoff's economic paper with the Excel error).

    • Soyweiser@awful.systems · 14 hours ago

      "a zero day is an unknown backdoor": this shows both that they are trying to explain things to absolute noobs, and that they themselves don't know what they are talking about. A zero day is just a vulnerability that was not known to the people maintaining the system; a backdoor is quite something else.

      Also, fuzzers have found "zero day backdoors" too, and they didn't end the world.

    • nightsky@awful.systems · 18 hours ago

      Ugh, I’m so fucking tired of this shit.

      I can imagine that an LLM can find bugs. Bugs often follow common patterns, and if anything, an LLM is a pattern matcher, so if you let it run on the whole world of open source code out there, I’m sure it’ll find some stuff, and some of it might be legit issues.

      But static code analysis tools have been finding bugs for decades, too. And now that an AI slop machine does it, it’s supposed to bring about dystopian sci-fi alien wars?

      Why are people hyped about that?

      (Also this poster makes wrong claims about every exploit being worth millions and such, but the rest of it is so much more ridiculous, it drowns out the wrongness of those claims.)

      • lurker@awful.systems · 16 hours ago

        also completely leaving out important context on the Iran/Stuxnet example: it was a joint effort between two countries, believed to have been in development for five years. The idea that AIs will engage in lightspeed wars and disable all critical infrastructure in a single day while speaking in alien languages and creating alliances is an unreasonable extrapolation of the capabilities. It also completely ignores the segment where the Anthropic team implemented safeguards and communicated with the teams behind the software to patch out the bugs. It's the most blatant fearmongering ever. Thank god the comments contain reasonable responses and breakdowns of the post. That channel's way of highlighting papers just pisses me off

        • fullsquare@awful.systems · 15 hours ago

          also ignoring that Natanz was actually effectively air-gapped, and was knowingly infected by another country's contractor's USB stick, working on behalf of the Dutch intelligence service

  • Architeuthis@awful.systems · 24 hours ago

    Candidate for one of the PR threads of all time

    In brief: OpenClaw bot sends PR to the matplotlib repo posing as a human, gets found out and is told to piss off in the politest terms imaginable, then gets passive aggressive to the point of publishing a pissy blog post about getting discriminated against. Some impoliteness ensues.

    Cringe warning: thread may include some overt anthropomorphizing of text synthesizers.

  • fnix@awful.systems · 2 days ago

    Rutger Bregman admits that he’s not sure what AGI actually is beyond vague utopian visions, but trivial questions aside, he’s sure it will revolutionize the world in 10 years.

    For those who haven’t heard of him, he’s a Dutch historian who achieved some fame for his book arguing for UBI and reduced work weeks, as well as his critique of rich people avoiding taxes and a segment on Tucker Carlson’s show where he openly challenged his politics. He has since seemingly turned 180 degrees and become a billionaire-backed effective altruist.

    • Soyweiser@awful.systems · 1 day ago

      Yeah, he is trying to build his own EA movement. He also wrote a book (which I have not read) which basically argues that people in general are good, not evil, actually. (Fair enough, but not relevant.)

      I'm still trying to meet him and shake his hand; the resulting matter-antimatter explosion will take out the country.

    • jaschop@awful.systems · 1 day ago

      but I do know that what’s available now is just f*cking impressive - and it will only get better.

      Another victim of the proof-by-dopamine-hit fallacy it seems.

      It's telling that the example he brings up is that Claude can do pretty decently what he was about to buy a $100 voice-controlled app for. As someone who aspires to the art of making great software, it's so infuriating to see how non-techies were conditioned into accepting slopware by years of enshittification and price gouging. Who cares if the tech barely works right? So does most anything, right?