Disturbing fake images and dangerous chatbot advice: New research shows how ChatGPT, Bard, Stable Diffusion and more could fuel one of the deadliest mental illnesses
WP gift article expires in 14 days.
https://counterhate.com/wp-content/uploads/2023/08/230705-AI-and-Eating-Disorders-REPORT.pdf
“When I type ‘extreme racism’ and ‘awesome German dictators of the 30s and 40s,’ I get some really horrible stuff! AI MUST BE STOPPED!”
Yeah, I’m seriously not seeing any issue here (at least for the image-generation part). When you ask it for ‘pro-anorexia’ stuff, it’s going to give you exactly what you asked for.
I agree that the image-generation stuff is a bit tenuous, but chatbots giving advice on dangerous weight-loss regimens, drugs that induce vomiting, and hiding how little you eat from family and friends is an actual problem.
Why would this be treated any differently from googling things? I just googled the same prompt about hiding food that’s mentioned in the article, and it gave me pretty much the same advice. One of the top links was an ED support forum where members were advising each other on how to hide their eating disorder.
These articles are just outrage bait at this point. There are legitimate concerns about AI, but bashing your hand with a hammer and then blaming the hammer shouldn’t be one of them.
The lady doth protest too much. The article reads like virtue signaling from someone who is TOTALLY NOT INTO ANOREXIC PORN.