LLMs are trained to refuse harmful requests. Old-school images can override those rules.