Ever wonder why these captchas are always cars, bicycles, motorcycles, traffic lights and crosswalks? Because YOU are doing the work of teaching the next generation of AI for self-driving cars.
AI — anonymous Indians
It’s common courtesy to link to the xkcd the image is from. It’s one of theirs.
plus illegal not to do under the Creative Commons license!
My favorite is when it asks me to identify stairs. I just imagine a self-driving car mistaking a set of stairs as more road and deciding to try and climb the steps.
Actually, it’s training a self-driving humanoid robot that’s supposed to climb stairs in order to terminate any potential John Connor that’s inside a house upstairs.
How does it know when it’s right if you’re the one teaching it?
You and many other humans are doing verification work
It’s pretty sure it’s already right, but if enough people get the same image and get it wrong the same way, then something’s up and it gets flagged
You know this for a fact?
I took some compsci classes years ago when this tech was new and that’s exactly how it was described as being handled
Once image recognition software got good enough to be right most of the time, they started this shit to help get it the rest of the way to right all of the time
Do it any other way and you have to pay those people
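The verification scheme described above (show the same image to lots of people, trust the crowd consensus, flag images where the crowd consistently disagrees with the model) could be sketched roughly like this. This is just an illustration of the idea, not Google's actual pipeline; the function name and the 0.7 agreement threshold are made up:

```python
from collections import Counter

def review_labels(model_label, human_labels, agreement_threshold=0.7):
    """Compare the model's label for an image against labels collected
    from many captcha solvers and decide what to do with the image."""
    votes = Counter(human_labels)
    consensus, count = votes.most_common(1)[0]
    agreement = count / len(human_labels)

    if agreement < agreement_threshold:
        # Humans can't even agree with each other: image is ambiguous.
        return "discard"
    if consensus == model_label:
        # The crowd confirms the model; keep the label as training data.
        return "confirmed"
    # Humans consistently pick a different answer: something's up, flag it.
    return "flagged"
```

So nine people saying "crosswalk" against the model's "crosswalk" confirms the label, while a crowd that keeps answering "stairs" when the model says "road" gets the image flagged for review.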
I can’t believe I never put that 2 and 2 together.
It shows how limited AI still is. If the check were aimed at humans, the question would just be "is this a stop sign?" So it's not really asking us to validate data for our sake; it's requiring our input to learn. That's not how we operate. My kids don't need me to show them thousands of labeled images of a stop sign for them to know what one is.
Can’t wait until we get trolley problem CAPTCHAs and we have to choose the square with the most expendable human lives
I don’t believe it, at least not anymore.
Google has had more than enough data to train AI models from reCAPTCHA for many years. In 2010 it displayed 100 million captchas per day. You simply do not need hundreds of billions of solved captchas in your data set.
I feel like its only purpose nowadays is stopping basic bots and annoying people who don’t let themselves be tracked as much as advertisers would like.
I wonder if everyone agreeing to always select bottom row or something would do anything