• leisesprecher@feddit.org · 2 months ago

    The real problem is implicit bias: the kind of discrimination that a reasonable user of a system can't even see. How are you supposed to know that applicants from "bad" neighborhoods are rejected at a higher rate, when the system is presented to you as objective? And since AI models don't really explain how they arrived at a decision, you can't even audit them.

    • ℍ𝕂-𝟞𝟝@sopuli.xyz · 2 months ago

      I have a feeling that’s the point with a lot of their use cases, like RealPage.

      It's not a criminal act when an AI does it! (Except it is, and it should be.)