• JackbyDev · 1 day ago

    I feel like you’re missing the forest for the trees here. Two things can be true. Yes, if you give an AI a prompt that implies it should lie, you shouldn’t be surprised when it lies. You’re not wrong, and nobody is saying you’re wrong. It’s also true that LLMs don’t really have “goals,” because they’re trained on examples. Their goal is, at the end of the day, mimicry. That’s what the commenter was getting at.