Delisting non-consensual deepfake porn on Google is “draining,” victim says.

  • onlinepersona · 6 months ago

    As much as I dislike Google, I’m not entirely sure what they can do here. The tech is public and here to stay. There’s a deluge of porn material and this is just part of it.

    Google could train an AI to find images of people that were deepfaked, but all that’s gonna do is create adversarial training - a new game for trainers. Making stuff easier to report could also make it easier to file fake reports. The article doesn’t really make suggestions as to what Google should be doing. Maybe they don’t know either.

    Anti Commercial-AI license

    • BrikoX@lemmy.zip (OP) · 6 months ago

      I’ve said it many times: it’s too late to stop this. All these celebrities, who are the targets now, should have used their platforms to amplify the privacy advocates who called for regulation of deepfakes 15 years ago. Once a technology is public, you can’t put it back in the bag.

      • rufus@discuss.tchncs.de · 6 months ago (edited)

        I’m not sure there is a way to regulate deepfakes this way. I don’t think the technology is the issue; it’s more or less the misuse of a tool. Just as you can use a car to murder someone, you can use generative AI to harm people. The thing itself is just a tool and was made for a different, valid purpose.

        The issue is culture, and enforcing the law on the internet, which is still the wild west in places. We’d need a way to get hold of the sites that host these deepfakes, or that provide services to generate unethical content. They are the ones who should be held responsible and forced to take it offline, and to implement precautions if we want that. Not Google, and not generative AI as a general tool.