• Boomkop3@reddthat.com · +2/−6 · 5 months ago

    Reading it, it looks like it doesn’t require invasive oversight as long as the chat apps and app stores have sufficient detection measures in place.

    Really, that’s what those platforms should already have, considering how much profit they make off of our data.

    • 𝙲𝚑𝚊𝚒𝚛𝚖𝚊𝚗 𝙼𝚎𝚘𝚠 · +5 · edited · 5 months ago

      It does require invasive oversight. If I send a picture of my kid to my wife, I don’t want some AI algorithm to have a brainfart, upload the picture to Europol for strangers to see, and put me on some list I don’t belong on.

      People sharing CSAM are unlikely to use apps that force these scans anyway.

      • Boomkop3@reddthat.com · +2/−2 · 5 months ago

        The proposal only does so under specific circumstances, which makes sense. Try to read more than three words before you respond.

        • The point is that it should never, under any circumstances, monitor or eavesdrop on private chats. It’s an unacceptable breach of privacy.

          Also, please explain what “specific circumstances” you are referring to. The current proposal doesn’t limit the scanning of messages in any way whatsoever.

          • Boomkop3@reddthat.com · +1/−1 · 5 months ago

            No, I actually read the current proposal. Maybe try that before regurgitating random stuff that matches your opinion.

            • https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=COM:2022:209:FIN

              Here’s the text. There are no limits on which messages should be scanned anywhere in this text. Even worse: to address false positives, point 28 specifies that each provider should have human oversight to check if what the system finds is indeed CSAM/grooming. So it’s not only the authorities reading your messages, but Meta/Google/etc… as well.

              You might be referring to when the EU can issue a detection order. That is not the same as the continuous scanning of messages, which providers are always required to do, as outlined in the text. So either you are confused, or you’re a liar.

              Cite directly from the text where it imposes limits on the automated scanning of messages. I’ll wait.

              • Boomkop3@reddthat.com · +1/−1 · 5 months ago

                Hey, there you go, you actually bothered to read. Your chats remain with your provider!

                You weren’t expecting privacy while sending your content through other people’s platforms, were you?

                • 𝙲𝚑𝚊𝚒𝚛𝚖𝚊𝚗 𝙼𝚎𝚘𝚠 · +2 · edited · 5 months ago

                  Aaand here’s your misunderstanding.

                  All messages detected by whatever algorithm/AI the provider implemented are sent to the authorities. The proposal specifically says that even if there is some doubt, the messages should be sent. Family photo or CSAM? Send it. Is it a raunchy text to a partner, or might one of them be underage? Not 100% sure? Send it. The proposal is very explicit about this.

                  Providers are additionally required to review a subset of the messages that were sent over, in order to tune the system with respect to false positives. They do not perform a manual review as an additional check before the messages are sent to the authorities.
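
                  To make that ordering concrete, here is a toy sketch (in Python, with entirely hypothetical names and thresholds, nothing taken from the proposal text or any real provider) of the flow as described above: flagged content is reported even when the detector is unsure, and human review only happens afterwards, on a sample, to tune for false positives.

                  ```python
                  # Toy sketch of the reporting flow described above; all names and
                  # thresholds are hypothetical, not from the regulation or any provider.
                  import random

                  FLAG_THRESHOLD = 0.5  # hypothetical: even "not 100% sure" hits get reported

                  def handle_message(message, detector, report_to_authorities, queue_for_tuning_review):
                      score = detector(message)                  # e.g. a CSAM/grooming classifier
                      if score >= FLAG_THRESHOLD:
                          report_to_authorities(message, score)  # sent regardless of remaining doubt
                          if random.random() < 0.01:             # a small sample is reviewed afterwards,
                              queue_for_tuning_review(message)   # only to measure/tune false positives
                      # note: no human check happens before the report is sent
                  ```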

                  If I send a letter to someone, the law forbids anyone from opening the letter if they’re not the intended recipient. E2E encryption ensures the same for digital communication. It’s why I know that Zuckerberg can’t read my messages, and neither can the people from Signal (metadata analysis is a different thing of course). But with this chat control proposal, suddenly they, as well as the authorities, would be able to read a part of the messages. This is why it’s an unacceptable breach of privacy.
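
                  For illustration, here is a minimal sketch of that property using the PyNaCl library (the names and the example message are mine, not anything from Signal or Meta): the provider only ever relays ciphertext, and without one of the secret keys it cannot read it.

                  ```python
                  # Minimal end-to-end encryption sketch with PyNaCl (libsodium bindings).
                  # Illustrative only; real messengers layer key exchange, ratcheting, etc. on top.
                  from nacl.public import PrivateKey, Box

                  # Each party generates a keypair on their own device; only public keys are shared.
                  alice_key = PrivateKey.generate()
                  bob_key = PrivateKey.generate()

                  # Alice encrypts for Bob with her secret key and Bob's public key.
                  ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"photo of the kids")

                  # The provider relays `ciphertext` but holds no secret key, so it cannot decrypt it.
                  # Bob decrypts with his secret key and Alice's public key.
                  assert Box(bob_key, alice_key.public_key).decrypt(ciphertext) == b"photo of the kids"
                  ```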

                  Thankfully this nonsensical proposal didn’t get a majority.

                  • Boomkop3@reddthat.com · +2 · 5 months ago

                    Ahh, that is indeed a critical implementation detail that isn’t clear right away. To be honest, I don’t trust the end-to-end encryption most of these services offer. If I want perfect privacy, I’m sticking to self-hosting stuff.