They replicate bias, entrench inequalities, and distort institutional aims. They devalue much of what makes us human: our capacities to exercise discretion, act spontaneously, and reason in ways that can’t be quantified. And far from being objective or neutral, technical decisions made in system design embed the values, aims, and interests of mostly white, mostly male technologists working in mostly profit-driven enterprises. Simply put, these tools are dangerous; in O’Neil’s words, they are “weapons of math destruction.”

The first half of the article covers problems we already know well; the second half proposes some solutions.

  • Any rules-based system, whether human-run or algorithmic, can be gamed, because no set of rules can be written without flaws that people will exploit.

    Algorithmic systems, however, lack any actual comprehension and are thus far easier to abuse. For example, back in 2015 (when I still had an account), Farcebook’s automated reviewer removed one of my posts and placed a note on my account for “threats of violence”. The “threat”? Someone asked me how to do something and I replied, “I could tell you but then I’d have to kill you.”

    No human reading that would see it as a genuine threat, but a context-blind filter (sketched below) flags it exactly like a real one.
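
    To make the failure mode concrete, here is a minimal sketch of what a context-blind, keyword-matching moderation rule might look like. The keywords, function name, and logic are hypothetical illustrations, not Facebook’s actual system:

    ```python
    # Hypothetical keyword-based threat filter (illustrative only).
    # A real moderation pipeline is far more complex, but the core
    # failure mode is the same: matching surface text without context.

    THREAT_KEYWORDS = ("kill you", "hurt you", "come after you")

    def flags_as_threat(post: str) -> bool:
        """Flag any post containing a threat keyword, regardless of context."""
        text = post.lower()
        return any(keyword in text for keyword in THREAT_KEYWORDS)

    # The joking reply trips the rule exactly like a genuine threat would:
    print(flags_as_threat("I could tell you but then I'd have to kill you"))  # True
    print(flags_as_threat("Meet me outside and I will kill you"))             # True
    ```

    Both posts look identical to the filter; only a reader who understands the idiom can tell them apart, which is the author’s point about systems that lack comprehension.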