cross-posted to:
- [email protected]
- programming
- machine_learning
cross-posted from: https://programming.dev/post/8121843
~n (@[email protected]) writes:
This is fine…
“We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group.”
[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)
This seems tied to the issues I’ve had when using LLMs: they spit out what they think might work, not what is best. Frequently I get suggestions that I have to clean up, or I have to ask follow-up guiding questions.
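To make that concrete (a made-up sketch, not an example from the paper; the table/column names are hypothetical): the kind of cleanup I mean is often something like an assistant suggesting a string-formatted SQL query, which I then have to rewrite as a parameterized one.

```python
import sqlite3

# Typical assistant-style suggestion: builds the query with string
# formatting, so a crafted username can inject arbitrary SQL.
def get_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

# Cleaned-up version: a parameterized query, so the driver handles escaping.
def get_user_secure(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchone()
```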
If I had to guess, it’s that, since nothing enforces quality on the training data or the generated text, the model tends toward the most frequent approaches rather than the best ones.
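A hypothetical illustration of “frequent vs. best” (my own sketch, not from the study): the token-generation pattern that shows up everywhere in old tutorials uses `random`, while the less common but correct pattern uses `secrets`.

```python
import random
import secrets
import string

# Common-but-weak pattern all over tutorials and forum answers:
# random is a Mersenne Twister PRNG, so its output is predictable
# and unsuitable for security-sensitive tokens.
def make_token_frequent(length: int = 32) -> str:
    alphabet = string.ascii_letters + string.digits
    return "".join(random.choice(alphabet) for _ in range(length))

# Rarer-but-better pattern: secrets draws from the OS CSPRNG.
def make_token_best(length: int = 32) -> str:
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```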
Most likely, yes.