• haui
    8 • 4 months ago

    I was very glad to read the last sentence. I agree fully. The easiest approach would be a report button that saves the last 60 seconds of voice, analyzes it with AI to check whether something illegal/harassing was said, and autokicks the person who said it.

    Would not require more top-down systems.
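
    The mechanism described above can be sketched as a rolling buffer that keeps only the most recent minute of audio and discards it right after a report is analyzed. This is a minimal sketch under assumptions: `RollingVoiceBuffer`, `handle_report`, and the `classify` callback are hypothetical names, and the moderation model itself is left as a placeholder.

    ```python
    import collections
    import time

    class RollingVoiceBuffer:
        """Keeps only the most recent `window_seconds` of audio chunks.

        Older chunks are dropped as new ones arrive, so nothing is
        retained long-term unless a report snapshots the buffer.
        """

        def __init__(self, window_seconds=60):
            self.window = window_seconds
            self.chunks = collections.deque()  # (timestamp, audio_bytes)

        def push(self, audio_bytes, now=None):
            now = time.time() if now is None else now
            self.chunks.append((now, audio_bytes))
            # Discard anything older than the window.
            while self.chunks and now - self.chunks[0][0] > self.window:
                self.chunks.popleft()

        def snapshot(self):
            """Return a copy of the buffered audio for analysis."""
            return [audio for _, audio in self.chunks]

    def handle_report(buffer, classify):
        """Run the moderation check on a report, then discard the audio.

        `classify` stands in for whatever AI model is used; it returns
        True if the clip should trigger a kick.
        """
        clip = buffer.snapshot()
        verdict = classify(clip)
        buffer.chunks.clear()  # delete the audio immediately after analysis
        return "kick" if verdict else "no_action"
    ```

    The point of the design is that the retention window is bounded in both directions: audio older than 60 seconds never exists, and even the buffered minute is wiped as soon as a report is processed.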

    • @CameronDev
      3 • 4 months ago

      I personally lean more towards humans for moderation, as words alone don't convey the full intent and meaning. And this cuts both ways: benign words can be used to harass.

      But of course, humans are expensive, and recordings of voice chat have privacy implications.

      • haui
        3 • 4 months ago

        Generally, yes. But computers can handle this very well at this point. Kicking someone for using the N-word does not require understanding meaning. Just don't use it, even if it's for educational purposes (inside a game chat, for example).

        > and recordings of voice chat have privacy implications.

        I don't think we live in the same reality. Over 30% of people in the US use voice assistants that constantly listen in on their conversations (that was just the first number I could find; I'm not from the US). Having a bot in a game voice chat store 1 minute of audio for 1 minute for reporting purposes is like 0.00001% of what is going wrong with security. Billions of people are getting analyzed, manipulated and whatnot on a daily basis. A reporting tool is not even the same game, let alone in the same ballpark, in terms of privacy implications.

        • @CameronDev
          3 • 4 months ago

          Yeah, AI to knock out the egregious stuff (n-bombs etc.) is perfectly reasonable. But there is still a lot of harassment that really needs a human to interpret. It's a balance.

          The privacy concern I am thinking of is the legal side of things. Google/FB/Apple are huge companies with the resources to work through the different legal requirements for every state and country, and they can afford to just settle if anything goes wrong. A game studio cannot always do the same. As soon as you store a recording of a user's voice, even temporarily, it opens up a lot of legal risk. Developers/publishers should still do it imo, but I don't think it's something that can just be turned on without careful consideration.

          • haui
            2 • 4 months ago

            Good thought. Thanks for bringing it up.

    • @[email protected]
      -3 • 4 months ago

      Yeah that sounds totally reasonable and unintrusive, wtf. I don't want my every word spoken in voice chat to be live-analyzed by AI to see if I did a wrongthink.

      Why not simply mute or kick someone if they're being an asshole? That has served me well in all my years using Discord or TeamSpeak.

      • haui
        3 • 4 months ago

        Apart from what you're reading into my words, I said that if someone is harassing you, or talking about, let's say, the things they did with their daughter yesterday, you can report them and have a computer look into it instead of a human.

        Whatever privileges you have in your own Discord server, you can't kick just anyone in every place. You normally need either privileges or a moderator to do it, and my idea was to use AI to analyze the reported material.

        • @[email protected]
          1 • 4 months ago

          I completely understand the sentiment of protecting children, but under that argument you can push the most dystopian, intrusive, overreaching legislation imaginable. It is the old balance of freedom versus safety: we can't have complete safety without giving up all freedom.

          And I think constant AI-driven monitoring of everything people say in the general vicinity of a microphone is very dystopian, which would be the eventual outcome of this.

          • haui
            1 • 4 months ago

            I'm just gonna repeat myself, since this is the most common answer I get in these topics:

            The vast majority of people are being listened in on, analyzed and manipulated on a daily basis by far, far worse actors. Storing 1 minute of voice chat for 1 minute, accessible only to this hypothetical bot and only if someone reports the speaker (with the reporter facing consequences for wrongful reports), is not comparable to real privacy threats.

            • @[email protected]
              1 • 4 months ago

              You don't need to repeat yourself (and you don't need to be this condescending, either). I am well aware that this is happening to some degree already. That doesn't mean I have to happily concede the little that is left.

              • haui
                1 • 4 months ago

                You're again reading something into my words that I didn't say. Maybe try not to play the victim in every comment. It's abrasive.

                It's not happening "to some degree". It's happening left, right and center. Denying that a computer would help with voice chat moderation does not help at all.

                Good day.

                • @[email protected]
                  1 • 4 months ago

                  Right back at ya, buddy. I'm not putting words in your mouth.

                  And no matter how many times you repeat it, my Discord call doesn't constitute a threat to public safety.