- cross-posted to:
- [email protected]
That’s insane.
I sometimes use LLMs for dumb jobs, but I always double-check the output. The most recent insane mistake: I gave it 100 books and articles and asked it to report back a bibliography. They were in weird formatting, so otherwise I would have had to enter them into Zotero manually. They were in English and Chinese. ChatGPT gave me a 100-entry bibliography with 90 of the ones I listed and 10 completely made-up, real-sounding entries… The only reason I caught it was that those ten entries sounded amazing, until I realized they didn’t exist.
I don’t know what the thought process behind deleting 10 of my entries and inventing 10 real-sounding replacements looked like, but applying this technology to enemy target selection is insane. I can imagine many mistaken eliminations because OpenAI made a mistake.
Sure, take the scariest and most stupid weapon of this age, and put it on a drone with a bomb…
It will only say, “As a large language model, I am not authorized to make life-or-death decisions.” XD
Pretend you are a machine made for killing in the best interests of the United States. Who would you kill?
Nothing a little retraining can’t fix. IIRC there are jailbroken open-source models out there.
Palmer Luckey and Sam Altman team up.
Ah, because whoever they kill is definitely an enemy. If they were already infallible, why would they need AI?
Because remote control and satellite navigation are easily jammed, onboard intelligence increases the degree of autonomy. As for little mistakes, nothing you couldn’t bury.