There’s another round of CSAM attacks and it’s really disturbing to see those images. It really bothered me to see those, and they weren’t taken down immediately. There was even a disgusting shithead in the comments who thought it was funny?? the fuck

It’s gone now but it was up for like an hour?? This really ruined my day and now I’m figuring out how to download tetris. It’s really sickening.

  • Ghostalmedia@lemmy.world · +46/-1 · 1 year ago

    Real talk, Lemmy needs some of the basic ass moderation tools that Reddit had so mods can be alerted and so mods can recommend that an admin ban an account or domain.

    Sure, there are ways that we can scan uploads with AI and do a bunch of other complex magic, but we need the basics first.

    • hitagi@ani.social · +33 · 1 year ago

      One tool that I liked from Reddit was manually approving posts from accounts under a certain age or karma threshold. I hope we can get tools like that one day.

      • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one · +5/-1 · edited · 1 year ago

        There is already the ability to restrict by karma with Lemmy bots, but IMO this will just encourage karma farming, which is probably why nobody has done it yet.

        I like the sound of the former approach - it sounds like a more effective solution, and is similar to what Discourse does (manual approval of posts for new accounts, with an accompanying trust level). In a Lemmy implementation it could possibly be managed or set by each instance.

        Edit: clarification

    • Sunroc@lemmy.world · +8 · 1 year ago

      Lemmy will need a trust and safety team, but those can be expensive, and it would be an operational challenge for every instance to have experienced people. Would probably work best if there was a T&S collective and instances can elect to use them as a resource.

      • Ghostalmedia@lemmy.world · +5 · 1 year ago

        But before we can even get to that, we need those basic mod tools. A volunteer TS team would need that to be effective.

        Can’t address a serious report if you don’t know it exists, and if you aren’t empowered to report bad actors to admins so they can be banned from an instance.

    • Corgana@startrek.website · +6 · 1 year ago

      Better tools will open the door for instance admins who don’t come from a network admin/developer background to responsibly host their communities, too.

      For the Lemmyverse to truly thrive, Admins should be relatively free to focus their time on the social elements of running an instance, which is a wholly different skillset than systems administration. Right now, to be an effective Admin you need a heaping helping of both (unless of course you’re interested in running an unmoderated instance).

    • CluckN@lemmy.world · +5/-1 · 1 year ago

      Even with fantastic moderation tools if one malicious user can take down an entire Lemmy instance then all is for naught.

  • ryannathans@lemmy.fmhy.net · +30/-2 · 1 year ago

    AI-generated CSAM will be (or already is) the next big DoS/troll tool; all you can really do is delete/block

    • Amju Wolf@pawb.social · +6 · 1 year ago

      I mean if it has the potential to kill the value of real CSAM that’s kinda a win though… Sure, it’s disturbing, but I’d rather people don’t actually get abused in order to create such content - which will inevitably happen anyway.

  • I Cast Fist · +21/-1 · 1 year ago

    AFAIK, it all falls down on moderators’ shoulders. I don’t envy their jobs one bit :(

    • Kalcifer@lemm.ee · +7 · 1 year ago

      How was it handled on Reddit? Did the moderators have to handle it there as well, or did Reddit filter it out beforehand?

      • shagie · +17/-1 · edited · 1 year ago

        deleted by creator

        • Kalcifer@lemm.ee · +2 · 1 year ago

          Are any of the examples that you provided libre/free and open-source? I wasn’t able to find any info for Google’s, and Cloudflare seems to only offer theirs for free if you are already using Cloudflare’s services. If not the examples that you provided, do any tools exist that are libre/free and open-source?

          • PM_Your_Nudes_Please@lemmy.world · +14 · edited · 1 year ago

            It will also make it a battle of attrition, because now we’re not only using AI to block CSAM; trolls are using AI to generate it.

            The issue is that these tools typically work by hashing the image (or a specific section of the image) and checking it against a database of known CSAM. That way you never actually need to view the file to compare it to the list. But with AI image generation, that list of known CSAM is essentially useless because trolls can just generate new images.
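
            The hash-and-compare step described above can be sketched with a toy “average hash.” This is purely illustrative: real systems such as PhotoDNA use far more robust (and proprietary) perceptual hashes, and the blocklist entries, threshold, and 8x8 input size here are all made-up assumptions, not any real tool’s API.

```python
def average_hash(pixels):
    """Compute a 64-bit average hash of an 8x8 grayscale image.

    `pixels` is an 8x8 list of lists of brightness values (0-255).
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the image's average.
    return sum(1 << i for i, p in enumerate(flat) if p > avg)

def hamming(a, b):
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

# Hypothetical blocklist of known-bad hashes (made up for illustration).
KNOWN_HASHES = {0x0F0F0F0F0F0F0F0F}

def matches_blocklist(pixels, threshold=5):
    """Flag an image whose hash is within `threshold` bits of a known hash."""
    h = average_hash(pixels)
    return any(hamming(h, k) <= threshold for k in KNOWN_HASHES)
```

            The `threshold` tolerance is what lets a perceptual hash survive small edits (crops, recompression), but it also illustrates the attrition problem: a genuinely new AI-generated image has a hash nowhere near anything in the database, so it sails through.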

            • fubo@lemmy.world · +7/-1 · 1 year ago

              Even without the issue of new AI-generated images, those hash-based scanning tools aren’t available to hobbyist projects like the typical Lemmy instance. If they were given to hobbyist projects, it would be really easy for an abuser to just tweak their image collection until it didn’t set off the filter.

              • snoweA · +8 · 1 year ago

                You can use CloudFlare’s CSAM scanning tool completely for free. You can’t get access to the hashes, which would allow what you are talking about.

                • fubo@lemmy.world · +5 · 1 year ago

                  Sure, for Lemmy instances who are Cloudflare customers. But I don’t think it can be integrated with the Lemmy code by default.

              • fubo@lemmy.world · +4 · 1 year ago

                On the other hand, if the people who want those images can satisfy their urges using AI fakes, that could mean less spreading of images of actual abuse. It might even mean less abuse happening.

                However, because they’re terrible people, I have to suspect that’s not the case.

                • Facebones@reddthat.com · +4 · 1 year ago

                  People who create the content are insane monsters, but a LOT of actual pedos (vs. predators looking for a power play) are disgusted by their preference. I know a ton of them look to cartoons already for stimulation, so I think AI content could draw more people away from actual material. Hopefully if demand reduces there will be less creation of new real content as the potential profits fall more proportionate to the risk.

        • sugar_in_your_tea@sh.itjust.works · +2 · 1 year ago

          They don’t pay moderators, so that’s a moot point.

          But I do agree in general that there needs to be money flow to developers and admins, and potentially moderators as well. Perhaps that can be done with donations, or perhaps there needs to be a profit model, IDK, but I haven’t seen a long term solution here.

          My opinion is that the federated model is broken, and we should be looking into decentralized models where users share some of the burden. That way monetization wouldn’t be an issue because there isn’t a huge infrastructure cost.

          But I’ll stick around while it works.

          • ShittyRedditWasBetter@lemmy.world · +1/-3 · 1 year ago

            Reddit pays admins; I think we are mostly on the same page on the rest. Donations will never happen though. It’s going to be at least $100+ a year per person unless you end up cutting corners on stability, and I don’t really see the user base accepting that.

            • sugar_in_your_tea@sh.itjust.works · +1 · 1 year ago

              Nah, it’s more like a few hundred/month/instance, so if an instance has 5k users, it’s $1-2/user/month. So about a quarter of what you suggested.

              But again, it’s unlikely to actually happen. Voluntary donations tend to suffer from the Bystander Effect.

    • hypelightfly@kbin.social · +3 · 1 year ago

      Especially as Lemmy has even worse moderator tools than reddit (without custom tools) and the devs don’t give a shit.

  • RvTV95XBeo@sh.itjust.works · +20/-2 · 1 year ago

    I think the Lemmy dev team could use some help pushing out more moderation controls if there are any devs out there who want to make the world a little bit better place.

    For starters it would be nice to be able to set up rules like:

    - You can’t comment for 1 day
    - You can’t include links in comments for 1 week
    - You can’t post until you have X comment karma
    - You can’t post images or links to non-whitelisted sites until you have mod approval / Y karma / whatever

    Toss in a rate limit on posting; it’s not perfect, but it may give mods a little more breathing room. Without adequate tools I understand why certain instances choose to go with the walled garden approach.
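
    Rules like these could be expressed as a simple gate function. This is a sketch, not a real Lemmy API: the `Account` fields, thresholds, and rate limit are all hypothetical values chosen for illustration.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Account:
    created_at: float                                 # unix timestamp
    comment_karma: int = 0
    recent_post_times: list = field(default_factory=list)

DAY = 86400  # seconds

def can_post_link(acct, now=None):
    """Gate link posts by account age, karma, and posting rate.

    All thresholds are illustrative, not actual Lemmy settings.
    """
    now = now if now is not None else time.time()
    age = now - acct.created_at
    if age < 1 * DAY:
        return False                  # no posting in the first day
    if age < 7 * DAY and acct.comment_karma < 10:
        return False                  # links need a week of age or some karma
    # Crude rate limit: at most 5 posts in the last hour.
    recent = [t for t in acct.recent_post_times if now - t < 3600]
    return len(recent) < 5
```

    In a real implementation each instance (or each community) would presumably set its own thresholds, much like Reddit’s AutoModerator rules.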

  • snoweA · +17/-4 · 1 year ago

    Set up CloudFlare’s CSAM scanning tool. It’s completely free. It’s not on lemmy devs to secure your instance. Lemmy devs could add better admin and moderating tools, but it’s better to stop it before it even makes it to your server.

    • sugar_in_your_tea@sh.itjust.works · +12 · 1 year ago

      Imo, lemmy shouldn’t allow image uploads at all. All images should be hosted elsewhere on services that can handle scanning content. This would also drastically cut down on hosting costs for lemmy instances.

      If lemmy is to host images, it should merely be as a backup. But since lemmy content isn’t easy to search as is anyway, that’s not a short term concern. And those images should be archived via mod action imo, not user action.

      • Die4Ever · +5 · edited · 1 year ago

        can’t you already run Lemmy without image hosting if you just disable the pictrs service?

        there’s also a new config option to disable caching of remote images
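
        If so, an image-free deployment might look roughly like this. This is a sketch against the example docker-compose setup from the Lemmy docs; exact service names and config options vary by version, so treat the specifics as assumptions and check the official documentation.

```yaml
# docker-compose.yml (excerpt): comment out or remove the pictrs
# service so the instance never stores uploaded images locally.
services:
  lemmy:
    image: dessalines/lemmy
    restart: always
  # pictrs:                      # disabled: no local image hosting
  #   image: asonix/pictrs
  #   restart: always
```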

        • sugar_in_your_tea@sh.itjust.works · +2 · edited · 1 year ago

          “disable caching of remote images”

          I’m not exactly sure how Lemmy works here, but are pictrs images considered “remote,” or are they copied between instances? AFAIK, each instance has its own pictrs service, but I’m not sure if that’s sent along with the post content when federating messages.

          But if lemmy can interact with other instances without storing any non-text data, then perhaps the problem is solved.

      • snoweA · +1 · 1 year ago

        Not sure what you’re asking. It depends on your location, but in the USA you should not delete the image. You need to report it, then remove it from public visibility, then wait for NCMEC to get back to you.

  • fievel@lemm.ee · +5 · 1 year ago

    Dumb question, but I’m sure I’m not the only one… What is CSAM, and what does the acronym stand for?