• m-p{3}@lemmy.ca · ↑50 · 1 year ago

    Looks like some CSAM fuzzy hashing would go a long way toward catching someone trying to submit that kind of content, if each uploaded image is scanned.

    https://blog.cloudflare.com/the-csam-scanning-tool/

    Not saying to go with Cloudflare (just showing how the detection works overall), but some kind of built-in detection system coded into Lemmy that grabs an updated hash table periodically would help.
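
The fuzzy-hashing idea described above can be sketched with a toy perceptual "average hash". To be clear, this is not Cloudflare's or PhotoDNA's actual algorithm (those use far more robust, partly proprietary transforms); the 8x8-grid input, the function names, and the distance threshold here are all illustrative assumptions.

```python
def average_hash(pixels):
    # `pixels` is assumed to be the image already downscaled to an
    # 8x8 grayscale grid: a list of 64 brightness values (0-255).
    # Each pixel becomes one bit: 1 if at or above the mean brightness.
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits  # 64-bit integer fingerprint

def hamming(a, b):
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

def matches(upload_hash, hash_table, threshold=10):
    # Flag an upload if its hash is within `threshold` bits of any
    # known hash. Unlike an exact hash (SHA-256 etc.), this tolerates
    # small tweaks such as re-encoding or minor edits.
    return any(hamming(upload_hash, h) <= threshold for h in hash_table)

# "Database" of known hashes, as pulled from some central hash table.
known = [average_hash([10] * 32 + [200] * 32)]
# A slightly altered copy of the same image.
tweaked = average_hash([10] * 31 + [200] + [200] * 31 + [10])
print(matches(tweaked, known))  # prints True: the small tweak still matches
```

The point of the fuzzy part is exactly this tolerance: a near-duplicate lands within a few bits of the original hash, while an unrelated image lands far away, so the instance never needs to hold the actual images, only the hash table.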

    • wagesj45@kbin.social · ↑25 · 1 year ago

      Not a bad idea, but I was once working on a project that would support user-uploaded images and looked into PhotoDNA, and it was an incredible pain in the ass to get access to. I’m surprised that no one has realized this should just be free and available. Kind of gross that it is put behind an application/paywall, imo. They’re just hashes and a library to generate the hashes. Why shouldn’t that be open source and available through the NCMEC?

        • wagesj45@kbin.social · ↑16 · 1 year ago

          They could tweak their images regardless. Security through obscurity is never a good solution.

          I can understand the reporting requirement.