Cross-posted from: https://feddit.de/post/10267315

Initial research shows that AI has a significant water footprint. It uses water both for cooling the servers that power its computations and for producing the energy it consumes. As AI becomes more integrated into our societies, its water footprint will inevitably grow.

The growth of ChatGPT and similar AI models has been hailed as “the new Google.” But while a single Google search requires the equivalent of about half a millilitre of water in energy, ChatGPT consumes 500 millilitres of water for every five to 50 prompts.
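
To get a rough sense of scale, here is a minimal back-of-envelope sketch (in Python) of the arithmetic behind those figures. The only inputs are the numbers quoted above (0.5 ml per Google search, 500 ml per five to 50 ChatGPT prompts); everything else is just division.

```python
# Back-of-envelope comparison using the figures quoted in the article.
# Assumption: 0.5 ml of water per Google search, and 500 ml of water
# per 5-50 ChatGPT prompts (taking both ends of the reported range).

google_ml_per_search = 0.5
chatgpt_ml_per_prompt = (500 / 50, 500 / 5)  # best case, worst case

for label, ml in zip(("best case", "worst case"), chatgpt_ml_per_prompt):
    ratio = ml / google_ml_per_search
    print(f"ChatGPT {label}: ~{ml:.0f} ml per prompt, ~{ratio:.0f}x a Google search")
```

Under those assumptions a single prompt works out to roughly 10 to 100 ml of water, i.e. somewhere between 20 and 200 times the figure quoted for a Google search.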

  • 0x815@feddit.deOP · 9 months ago

    @NABDad

    I partly agree. AI has little chance of producing anything useful if we keep using it the way we do now. I’m not so sure about blockchain technology, though. We need more decentralized networks in our economy and society, and blockchain is just one technology that can help here, imho. It’s true that the vast majority of crypto projects are a blend of scams and get-rich-quick schemes, but there are some fine projects that do a good job imo.

    • NABDad@lemmy.world · 9 months ago

      You may not be fully aware of where AI is being used. LLMs get a lot of press, both for being impressive and for not living up to expectations. However, there are other AI efforts that are producing real results.

      I support radiology imaging at a large U.S. health system, and there are several different AI systems being tested and deployed that assist with diagnosis. It may not get much mainstream press coverage, but it allows doctors to treat patients more efficiently, which has the potential to both reduce costs and increase access to care.

      I’m sure there are similar efforts in every other industry.

      • 0x815@feddit.deOP · 9 months ago

        @NABDad

        Yes, I know. I’m not saying it’s all bad; it improves human decision-making in a lot of areas. What I meant is that it has also done a lot of harm in the last few years, e.g., in the U.S., where the insurer UnitedHealth allegedly used an AI model with a 90% error rate to deny care, or in the Netherlands and in France, just to name a few examples. And I’m afraid this is just the tip of the iceberg.

        But I’d agree that it’s not the technologies, it’s the way we humans use them.