• Argongas@kbin.social
    10 months ago

    I’m not sure how I feel about it, but I’ve heard the argument made that AI might actually be better at killing. People mess up all the time and misidentify threats, which often causes collateral damage in conflicts. AI could, in theory, be much better at this identification process.

    It’s kind of the same argument with self-driving cars: we freak out whenever they get in an accident, but people causing thousands of accidents a day doesn’t cause an outrage.

    Not saying I necessarily agree with either argument, but they do make me question how we think about and evaluate technology.

    • TWeaK@lemm.ee
      10 months ago

      AI is a tool, like all tools it’s only as effective as the tool who is using it.

      Israel has already shown that AI can be abused to commit atrocities, while giving those responsible further opportunity to evade accountability.

      With self-driving cars, the main reason we don’t have them is that the manufacturers aren’t willing to accept liability. Also, the insurance industry is a massive leech on society and doesn’t want to give up its cash cow. It’s less about effectiveness or politics, more about money.