• Deceptichum@quokk.au · 1 month ago

    Fuck it, I use local LLMs enough, will give this a crack.

    Edit: it’s doing 6 paragraphs in 8.2 seconds; the last model I used took about 12 seconds for a single paragraph. Crazy fast in my experience.

    • Bjornir · 1 month ago

      What GPU are you using? It looks to me like it requires quite a lot of VRAM.

    • yeehaw@lemmy.ca · 1 month ago

      How are they to run, how useful are they, and are there any you can recommend?

      • Deceptichum@quokk.au · 1 month ago

        Dead simple to run. I use Ollama to run local models, and it’s like 3 words to set up from the command line.
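
        For anyone curious, once it’s set up you can also hit Ollama’s local HTTP API from a script instead of the command line. A minimal sketch in Python (the deepseek-r1:8b tag and the default port 11434 are assumptions; substitute whatever model you actually pulled):

        ```python
        import requests

        # Minimal sketch: ask a locally running Ollama instance to generate text.
        # Assumes a model has already been pulled (e.g. via `ollama pull`) and the
        # Ollama server is listening on its default port, 11434.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "deepseek-r1:8b",  # assumed tag; use whatever you pulled
                "prompt": "Draft a short paragraph about local LLMs.",
                "stream": False,  # return one JSON object, not a token stream
            },
            timeout=300,
        )
        resp.raise_for_status()
        print(resp.json()["response"])
        ```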

        “Useful” is entirely relative. I use mine personally and somewhat professionally, but only to draft text that I then alter manually. AI is amazing, but it’s also crap. You gotta work it a bit.

        As for this model: I’m using the 8B version, and it’s fast to generate. Time will tell how good the quality is, but I’m impressed after a few minutes of play.

        • chiisana@lemmy.chiisana.net · 1 month ago

          The 8B-parameter tag is the distilled Llama 3.1 model, which should be great for general writing. The 7B is distilled Qwen 2.5 Math, and the 14B is distilled Qwen 2.5 (general purpose, but good at coding). They have the entire table called out on their Hugging Face page, which is handy for knowing which one to use for a specific purpose.
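
          If it helps, here is that table as you might encode it when picking a tag. The deepseek-r1:* tag names are my assumption; verify them against the table on the Hugging Face page:

          ```python
          # Rough sketch of the distill table described above; the tag names are
          # assumptions, so check them against the Hugging Face page first.
          DISTILL_TAGS = {
              "math": "deepseek-r1:7b",             # distilled Qwen 2.5 Math
              "general_writing": "deepseek-r1:8b",  # distilled Llama 3.1
              "coding": "deepseek-r1:14b",          # distilled Qwen 2.5
          }

          def tag_for(task: str) -> str:
              """Return the tag suited to a task, defaulting to the 8B generalist."""
              return DISTILL_TAGS.get(task, "deepseek-r1:8b")

          print(tag_for("coding"))  # deepseek-r1:14b
          ```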

          The full model is 671B parameters and unfortunately not going to work on most consumer hardware, so it is still tethered to the cloud for most people.

          Also, since it’s a model made in China, some degree of censorship is mandated. Depending on your use case, that may be a point of consideration, too.

          Overall, it’s super cool to see something at this level become generally available, especially with all the technical details out in the open. Hopefully we’ll see more models with this level of capability, so there’s even more choice and competition.

          • fmstrat@lemmy.nowsci.com · 29 days ago

            Personally, the part I like is that it’s not Meta. Unfortunately, if the 8B is based on Llama, there could be Meta censorship baked in that we simply don’t know about.

        • fmstrat@lemmy.nowsci.com · 29 days ago

          Just remember, Ollama’s version of the 8B model is not the same as the original on Hugging Face; there’s a reason the file size is much smaller. That being said, my understanding is the quant is good.
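
          If you want to see exactly which quant you pulled, Ollama’s local API exposes model metadata. A sketch (the endpoint shape and field names are from memory and may vary between Ollama versions, so treat them as assumptions):

          ```python
          import requests

          # Sketch: inspect a pulled model's metadata via Ollama's /api/show
          # endpoint. Field names may vary by version, hence the defensive .get()s.
          resp = requests.post(
              "http://localhost:11434/api/show",
              json={"name": "deepseek-r1:8b"},  # assumed tag; use what you pulled
              timeout=30,
          )
          resp.raise_for_status()
          details = resp.json().get("details", {})
          print(details.get("quantization_level"))  # e.g. "Q4_K_M"
          ```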

        • yeehaw@lemmy.ca · 1 month ago

          This is cool. Are there any decent ones that run in Docker and have a web UI?

          • rebelsimile@sh.itjust.works · 1 month ago

            I’ve been using Open WebUI (search for it with those terms) to run local models in a Docker container served from Ollama for the last few months, and I love it.
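
            For reference, a sketch of launching it from a script; the image name, port mapping, and flags are assumptions based on my memory of the Open WebUI README, so verify against the project docs:

            ```python
            import subprocess

            # Sketch: start Open WebUI in Docker. Image name, ports, and flags are
            # assumptions from the project's README; check the docs before running.
            subprocess.run(
                [
                    "docker", "run", "-d",
                    "-p", "3000:8080",                               # web UI on localhost:3000
                    "-v", "open-webui:/app/backend/data",            # persist chats and settings
                    "--add-host=host.docker.internal:host-gateway",  # reach Ollama on the host
                    "--name", "open-webui",
                    "ghcr.io/open-webui/open-webui:main",
                ],
                check=True,
            )
            ```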

  • FooBarrington@lemmy.world · 1 month ago

    The cool thing about this is that they also published a bunch of details about their approach, as well as tooling around it!

  • jimmy90@lemmy.world · 1 month ago

    So what about its reasoning? Can it deduce? Can it follow specific logic or equations, whether in mathematical notation or in plain language?