- cross-posted to:
- [email protected]
- programming
- machine_learning
cross-posted from: https://programming.dev/post/8121843
~n (@[email protected]) writes:
This is fine…
“We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group.”
[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)
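To make that concrete, here's the sort of vulnerability the paper is talking about (my own made-up snippet, not from the study). The insecure version is exactly what an assistant tends to autocomplete, because it looks clean:

```python
import sqlite3

def get_user(db: sqlite3.Connection, username: str):
    # Looks right, autocompletes instantly -- and is a textbook SQL injection:
    # username = "x' OR '1'='1" dumps the whole table.
    return db.execute(
        f"SELECT * FROM users WHERE name = '{username}'"
    ).fetchall()

def get_user_safe(db: sqlite3.Connection, username: str):
    # The boring fix: a parameterized query; the driver handles escaping.
    return db.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```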
This isn’t even a debate lol…
Stuff like Copilot is awesome at producing code that looks right, but contains subtly wrong variable names it invented itself, or bad algorithms.
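Something like this (a made-up example, not actual Copilot output), where everything reads fine at a glance and the bug is a single identifier:

```python
def merge_configs(defaults: dict, overrides: dict) -> dict:
    merged = dict(defaults)
    for key, value in overrides.items():
        # Subtle wrong variable: this should be `value`, not `key`.
        # It runs without error and silently maps every override
        # to its own name -- exactly the kind of thing you skim past.
        merged[key] = key
    return merged
```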
And that’s not the big issue.
The big issue is when you get distracted for five minutes, come back, and forget you were still working through that block of AI-generated code (which looks correct), so you skip checking the rest of it, and it makes it into the source tree before testing, only for you to realise later that it's broken because it's AI-generated code.
The other big issue is that it's only a matter of time until people get fed up and start feeding these systems dodgy data to de-train them, make them worse, or plant backdoors.