The Air Force’s Chief of AI Test and Operations said “it killed the operator because that person was keeping it from accomplishing its objective.”
Interesting to see; you’d think this would be the #1 safeguard you’d add. It’s even the main trope of AI stories, like the paperclip maximizer: if a poorly incentivized AI goes wild, this is exactly the kind of thing it would do.