• FundMECFSResearch@lemmy.blahaj.zone · 1 day ago

    I know people are gonna freak out about the AI part in this.

    But as a person with hearing difficulties, this would be revolutionary. So much shit I usually just can’t watch because OpenSubtitles doesn’t have any subtitles for it.

    • kautau@lemmy.world · 1 day ago (edited)

      The most important part is that it’s a local LLM running on your machine. The problem with AI is less about LLMs themselves and more about their control and application by unethical companies and governments in a world driven by profit and power. This is none of those things; it’s just some open source code running on your device. So that’s cool and good.
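      Under the hood, a feature like this presumably takes timestamped text segments from a local speech-to-text model and writes them out as subtitle cues. The helper below is a hypothetical sketch (not VLC’s actual code) of the SRT timestamp format such a pipeline would emit:

```python
def format_srt_time(seconds: float) -> str:
    """Format a time offset as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)   # hours
    m, ms = divmod(ms, 60_000)      # minutes
    s, ms = divmod(ms, 1_000)       # seconds, leftover milliseconds
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

# One transcribed segment becomes one numbered SRT cue:
start, end, text = 3.25, 5.0, "Hello, world."
cue = f"1\n{format_srt_time(start)} --> {format_srt_time(end)}\n{text}\n"
print(cue)
```

      The model names, segment boundaries, and cue text here are illustrative; only the SRT timestamp layout itself is fixed by the subtitle format.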

        • jsomae@lemmy.ml · 18 hours ago

          Running an LLM locally takes less power than playing a video game.

            • Potatar@lemmy.world · 1 hour ago

              Any paper about any neural network.

              Using a model to get one output is just a series of multiplications (matrix–vector products, really); it’s less than or equal to rendering ONE frame in a 4K game.
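              The per-output vs. per-frame comparison can be sanity-checked with rough, assumed numbers (a 7B-parameter model, a GPU sustaining 30 TFLOPS at 60 fps) — a back-of-envelope sketch, not a benchmark:

```python
# One transformer forward pass costs roughly 2 * N FLOPs per
# generated token, where N is the parameter count (assumed 7B here).
params = 7e9
flops_per_token = 2 * params        # ~14 GFLOPs per token

# A GPU sustaining ~30 TFLOPS while rendering at 60 fps spends up to
# this many FLOPs on a single frame:
gpu_flops = 30e12
flops_per_frame = gpu_flops / 60    # ~500 GFLOPs per frame

print(flops_per_token / 1e9)        # GFLOPs per token
print(flops_per_frame / 1e9)        # GFLOPs per frame
```

              With these illustrative figures, one generated token costs an order of magnitude less compute than one rendered frame; real numbers vary with model size, quantization, and the game in question.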

            • jsomae@lemmy.ml · 13 hours ago (edited)

              I don’t have a source for that, but the most power any locally-run program can draw is basically the sum of a few things: maxed-out GPU usage, maxed-out CPU usage, and maxed-out disk access. The GPU is by far the most power-hungry of these, and modern video games make essentially the most use of the GPU that they can get away with.

              Running an LLM locally can at most max out the GPU, putting it in the same ballpark as a video game. Typical LLM usage is to run it for a few seconds and then submit another query, so it isn’t running 100% of the time, unlike a video game (which stays open and active the whole session; GPU usage dips only when you’re in a menu, for instance).

              Data centers drain lots of power by running a very large number of machines at the same time.
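              The burst-vs-sustained argument above reduces to simple arithmetic. With assumed figures (a consumer GPU drawing ~300 W in both cases, a few seconds of inference per query, an hour-long gaming session), the per-query energy is a tiny fraction of a session’s:

```python
gpu_watts = 300                 # assumed GPU draw under full load

# Local LLM: a burst of a few seconds per query, idle in between.
seconds_per_query = 5
joules_per_query = gpu_watts * seconds_per_query    # 1500 J
wh_per_query = joules_per_query / 3600              # ~0.42 Wh

# Gaming: the GPU stays loaded for the whole session.
session_hours = 1
wh_per_session = gpu_watts * session_hours          # 300 Wh

print(round(wh_per_query, 2), wh_per_session)
```

              All four inputs are assumptions for illustration; the point is only that a duty cycle of seconds-per-query sits orders of magnitude below an hour of sustained load on the same hardware.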

              • msage · 6 hours ago

                From what I know, local LLMs take minutes to process a single prompt, not seconds, but I guess that depends on the use case.

                But as for games, I don’t know about maxing the GPU in most of them. I did max mine out for crypto mining, and that was power hungry. So I would put LLMs closer to crypto than to games.

                Not to mention games will entertain you way more for the same time.

        • Sixty@sh.itjust.works · 12 hours ago

          Curious how resource-intensive AI subtitle generation will be. Probably fine on some setups.

          Trying to use madVR (tweaker’s video postprocessing) in the summer in my small office with an RTX 3090 was turning my office into a sauna. Next time I buy a video card it’ll be a lower tier deliberately to avoid the higher power draw lol.

    • mormund@feddit.org · 1 day ago

      Yeah, transcription is one of the only good uses for LLMs, imo. Of course they can still produce nonsense, but bad subtitles are better than none at all.

    • hushable@lemmy.world · 1 day ago

      Indeed, YouTube has had auto-generated subtitles for a while now, and they are far from perfect, yet I still find them useful.