I wanted to maybe start making PeerTube videos, but then I realized I’ve never had to consider my voice as part of my threat model. A consequence that immediately comes to mind is having your voice used as AI training data for voice cloning, but I’m not (currently) in a position where others would find it desirable to do so. Potentially in the future?
I’d like to know how your threat model handles your personal voice. And as a bonus: would voice modulators give your voice more flexibility in your threat model, or would they get in the way? Thanks!
It’s hard to imagine a scenario where this would happen and your voice would not otherwise be available. For example, if you went into politics, then you’d be a target, but you’d already be speaking in public all the time. It only takes a few seconds of a voice sample to do this nowadays and it’ll only get easier from here.
Maybe just make a point to educate your family and friends on the risk of voice cloning so they don’t fall for phone scams.
Create a secret passphrase - that only your family knows - that can be used to verbally verify it’s the real you and not a scam caller. Bonus points: create an alternate passphrase that can be used to signal that you’re under duress.
Maybe better to use one-time codes at that point, like we do with TOTP 2FA.
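In case it helps to picture it: here’s a minimal sketch of how both sides could derive the same one-time code from a shared secret, using only the Python standard library and the standard RFC 6238 TOTP scheme (the same thing authenticator apps do). The secret below is just a placeholder, not a real one.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Generate a time-based one-time code (RFC 6238) from a shared Base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `period`-second intervals since the Unix epoch.
    counter = int(time.time()) // period
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes based on the last nibble of the HMAC.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


if __name__ == "__main__":
    # Both parties would enroll the same secret (e.g. in an authenticator app)
    # and read the current code to each other over the phone.
    shared_secret = "JBSWY3DPEHPK3PXP"  # placeholder example secret
    print("Current code:", totp(shared_secret))
```

It only proves the caller holds the shared secret, not that the voice is real, but for this threat that’s exactly the point: a cloned voice without the secret can’t produce the right code.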
Absolutely. In fact, you can easily do it yourself as a harmless prank just to gauge how they’ll react: clone your own voice, create a new email address like [email protected], and attach a recording where you ask for a Netflix/Apple/whatever gift card.