• @[email protected]
    74 · 5 months ago

    Why would you ask a bot to generate a stereotypical image and then be surprised when it generates a stereotypical image? If you give it a simplistic prompt, it will come up with a simplistic response.

    • @0x0
      -11 · 5 months ago

      So the LLM answers what’s relevant according to stereotypes instead of what’s relevant… in reality?

      • @[email protected]
        21 · 5 months ago · edited

        It just means there’s a bias in the data that is probably being amplified during training.

        It answers what’s relevant according to its training.
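        A toy sketch of what that amplification can look like (not from the thread; all data and names are illustrative): even a mild skew in the training data becomes an absolute one if generation always picks the most probable option, as greedy decoding does.

        ```python
        # Toy illustration: a mild bias in training data can be amplified
        # into an absolute bias at generation time.
        from collections import Counter

        # Hypothetical training data: 70% of examples use one token.
        training_data = ["she"] * 70 + ["he"] * 30

        # "Training" here is just estimating token frequencies.
        counts = Counter(training_data)
        total = sum(counts.values())
        probs = {tok: c / total for tok, c in counts.items()}

        # Greedy decoding always emits the single most likely token,
        # so a 70/30 split in the data becomes 100/0 in the output.
        def greedy_generate(n):
            best = max(probs, key=probs.get)
            return [best] * n

        outputs = greedy_generate(100)
        print(probs)             # {'she': 0.7, 'he': 0.3}
        print(Counter(outputs))  # Counter({'she': 100})
        ```

        Sampling from `probs` instead of taking the argmax would preserve the 70/30 ratio, which is one reason decoding strategy matters as much as the data itself.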