The technology, which marries Meta’s smart Ray-Ban glasses with the facial recognition service PimEyes and some other tools, lets someone automatically go from a face to a name, a phone number, and a home address.
at this point, masking up in public provides protections for both health and privacy reasons
Apple already demonstrated that you can still get pretty darn close from eyes and hair. Combine that with a bit of logic (There is a 40% chance this is Sally Smith but she also lives three streets over and works on that street) and you still have very good odds.
Well… unless you are Black, brown, or Asian. Since the facial recognition tech is heavily geared toward white people, because tech bros.
Facial recognition works better on white people because, mathematically, they provide more information in real world camera use cases.
Darker skin reflects less light, and low-contrast dark tones are much more difficult for cameras to capture unless you have significantly higher-end equipment.
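To put a number on the “less light” point, here’s a back-of-the-envelope Python sketch. The 8-bit linear sensor and the 20% reflectance figure are illustrative assumptions, not measurements of any actual skin tone:

```python
# Rough illustration: how many distinct 8-bit codes are left for a
# surface whose peak signal is only a fraction of full scale.
# The 0.2 reflectance below is a made-up number for illustration.

def usable_levels(reflectance: float, bit_depth: int = 8) -> int:
    """Distinct sensor codes available to a surface whose peak signal
    is `reflectance` * full scale (idealized linear sensor, no noise)."""
    full_scale = 2 ** bit_depth - 1
    return int(full_scale * reflectance) + 1  # +1 for the zero level

bright = usable_levels(1.0)  # surface near full scale -> 256 codes
dark = usable_levels(0.2)    # surface at 20% of full scale -> 52 codes

print(bright, dark)  # fewer codes means coarser contrast steps
```

Fewer codes means coarser quantization of whatever contrast is there, which is the whole “less information per pixel” argument in one line of arithmetic.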
For low-contrast greyscale security cameras? Sure.
For any modern, even SD, color camera in a decently lit scenario? Bullshit. It is just that most of this tech is usually trained/debugged on the developers and their friends and families and… yeah.
I always love to tell the story of, maybe a decade and a half ago, evaluating various facial recognition software. White people never had any problems. Even the various AAPI folk in the group would be hit or miss (except for one project out of Taiwan that was ridiculously accurate). And we weren’t able to find a single package that consistently identified even the same black person.
And even professional shills like MKBHD will talk around this problem during his review ads (the Apple Vision Pro video being particularly funny).
For any scenario short of studio lighting, there is objectively much less information.
You’re also dramatically underestimating how truly fucking awful phone camera sensors actually are without the crazy amount of processing phones do to make them functional.
No. I have worked with phone camera sensors quite a bit (see above regarding evaluating facial recognition software…).
Yes, the computation is a Thing. A bigger Thing is just accessing the databases to match the faces. That is why this gets offloaded to a server farm somewhere.
But the actual computer vision and source image? You can get more than enough contours and features from dark skin no matter how much you desperately try to talk about how “difficult” black skin is without dropping an n-word. You just have to put a bit of effort in to actually check for those rather than do what a bunch of white grad students did twenty years ago (or just do what a bunch of multicultural grad students did five or six years ago but…).
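For what it’s worth, the “put a bit of effort in” part can be as simple as normalizing contrast before feature extraction. A minimal sketch in plain NumPy, using a synthetic underexposed image as a stand-in (real pipelines would use something adaptive like CLAHE, but the principle is the same):

```python
import numpy as np

def equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each occupied input level so the output CDF is roughly uniform.
    # (Entries below the darkest occupied level are never indexed.)
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Dark, low-contrast synthetic "face": values squeezed into [10, 40].
rng = np.random.default_rng(0)
dark = rng.integers(10, 41, size=(64, 64)).astype(np.uint8)

eq = equalize(dark)
print(dark.max() - dark.min(), eq.max() - eq.min())  # contrast range before/after
```

The features are in the source pixels either way; this kind of preprocessing just stops a naive pipeline from throwing them away.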
It’s not racist to understand physics.
It’s exactly the same reason phone cameras do terribly in low light unless they do obscenely long exposures (which can’t resolve detail in anything moving). The information is not captured at sufficient resolution.
Rhetorical question (because we clearly can infer the answer) but… have you ever seen a black person?
A bit of melanin does not make you into some giant void that breaks all cameras. Black folk aren’t doing long exposure shots for selfies or group photos. Believe it or not, RDCWorld doesn’t need to use night-vision cameras to film a skit.
You can keep hand waving away the statement of fact that lower precision input is lower precision input.
And yes, for actual photography (where people are deliberately still for long enough to offset the longer exposure required), you do actually need different lighting and different camera settings to get the same quality results. But real cameras are also capable of capturing far more dynamic range without guessing heavily on postprocessing.
You’re not wrong. Research into models trained on racially balanced datasets has shown better recognition performance with reduced bias. That work was limited to GAN-generated faces, so it still needs to be replicated with real-world data, but it shows promise that balancing training data should reduce bias.
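Roughly what that rebalancing looks like in practice. A sketch with toy group labels and inverse-frequency sampling weights, not the paper’s actual method:

```python
import numpy as np

def balance_weights(groups: list) -> np.ndarray:
    """Per-example sampling weights so every group gets equal total mass."""
    labels, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(labels, counts))
    # Each example is weighted inversely to its group's frequency.
    w = np.array([1.0 / freq[g] for g in groups])
    return w / w.sum()  # normalize to a probability distribution

groups = ["A"] * 8 + ["B"] * 2  # an 80/20 imbalanced dataset
w = balance_weights(groups)
print(w[:8].sum(), w[8:].sum())  # each group now gets 0.5 of the mass
```

Feed those weights to a sampler and every group contributes equally per batch, regardless of how lopsided the raw dataset is.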
Yeah but this is (basically) reddit and clearly it isn’t racism and is just a problem of multi megapixel cameras not being sufficient to properly handle the needs of phrenology.
There is definitely some truth to needing to tweak how feature points (?) are computed and the like. But yeah, training data goes a long way and this is why there was a really big push to get better training data sets out there… until we all realized those would predominantly be used by corporations and that people don’t really want to be the next Lenna because they let some kid take a picture of them for extra credit during an undergrad course.
You okay?
No, your honour, I did not wear blackface to trivialise the suffering of people who came from Africa. I wore blackface to hide from Facebook Glasses.
I think it would be funny to normalize wearing bloc in order to retain privacy. It’s why some people wear accessories they normally wouldn’t, such as beanies and sunglasses, at protests. Even if they aren’t in full bloc, covering hair and eyes (in addition to a surgical mask) can make it really hard to doxx someone.
I mean, you definitely want to wear a mask and some goggles at a protest, if only for the purpose of pepper spray. I totally don’t have a thin gaiter, goggles, and a beanie, and have definitely not heard great things about mountain biking helmets (the ones with faceguards), and totally am not considering grabbing one next time I do an REI run.
But also be aware that, with protests, you are almost always up against the groups who have access to all those “traffic” cameras and the like. And computer vision makes it fairly trivial to identify when a bunch of unmasked people walked into a dark alley and came out with their faces fully covered by tracking them back from the 4th street protest. It isn’t Enemy of the State levels of asking Baby Busey and Jamie Kennedy to generate a 3D model from a single shot of Big Willy Style ogling some ta-tas, but most of the ways surveillance is used during that sequence are shockingly realistic and feasible.
In most cases there isn’t much you can do to fool the government without a lot of prep time such as scouting routes to find cameras, destroying them, or being really good at changing into bloc in the middle of a crowd and not getting caught.
But the important thing is threat modeling. The past dozen or so protests I’ve been at haven’t had the government as the big threat; fascists have been the primary threat. While a fascist cop would be a problem, that is much less likely than fascists combing through protest footage to try and doxx people, or a fascist at said action trying to get good photographs. That’s why I masked up.
The last really dicey action I went to, I still masked up, even knowing that the government could try to track me if needed, because I knew it would be time-consuming to do so and that they would only go through that process if I made it worth their while. Bloc is still effective, but quite hard under this heavily surveilled police state.
The thing is? Ignoring the apparent void that black skin creates on all cameras (oy), it doesn’t take much time. It takes computing power.
For poops and giggles, a few friends and I took the public (rumble…) traffic camera feeds that a nearby county has online, set up a simple Python script to scrape them, and configured an off-the-shelf tool to track a buddy’s general type of car (green hatchback), then told him to just drive around for an hour.
We were able to map his route with about 70% accuracy after roughly two hours of scripting and reading documentation. And there are companies that provide MUCH better products for the people who have access to the direct feeds and all the cameras we don’t have access to.
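The color-gating step was honestly the cheapest part. A toy sketch of the idea on synthetic frames (the real feeds, the scraper, and the off-the-shelf tracker are all swapped out for illustrative stand-ins here):

```python
import numpy as np

def has_green_car(frame: np.ndarray, min_pixels: int = 200) -> bool:
    """frame: HxWx3 uint8 RGB. Flags frames where a green-dominant blob
    is big enough to plausibly be a car. Thresholds are illustrative."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    # "Green" = bright-ish green channel that clearly dominates red and blue.
    mask = (g > 100) & (g > r + 40) & (g > b + 40)
    return int(mask.sum()) >= min_pixels

# Grey "street scene" with a 20x30-pixel green hatchback pasted in.
frame = np.full((240, 320, 3), 90, dtype=np.uint8)
frame[100:120, 150:180] = (40, 180, 60)

empty = np.full((240, 320, 3), 90, dtype=np.uint8)
print(has_green_car(frame), has_green_car(empty))  # True False
```

Run something like that per camera, timestamp the hits, and you already have a crude route map; the commercial tools just do this with actual detection models instead of a color threshold.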
And then masks become illegal.
“Don’t mind the cough, my flu has got a bit better since yesterday”.
“Ah, see-through masks are okay though.”