• vivendi · ↑1 ↓5 · edited · 18 days ago

    You can experiment on your own GPU by running the tests across a variety of models from different generations (Llama 2-class 7B, Llama 3-class 8B, Gemma, Granite, Qwen, etc.)

    Even the lowest-end desktop hardware can run at least 4B models. The only real difficulty is scripting the test harness, but the papers are usually helpful in describing their test methodology.
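
    As a minimal sketch of what such a harness could look like (Python with the llama-cpp-python bindings; the model paths and prompts below are placeholders, not the papers' actual benchmarks):

    ```python
    # Run the same prompt set across several local models and log the answers.
    # Paths are hypothetical -- point them at whatever GGUF files you have.
    from llama_cpp import Llama

    MODELS = {
        "llama-3-8b": "./models/llama-3-8b.Q4_K_M.gguf",
        "qwen-7b": "./models/qwen-7b.Q4_K_M.gguf",
    }
    PROMPTS = ["What is the capital of France?"]  # swap in a real benchmark set

    for name, path in MODELS.items():
        llm = Llama(model_path=path, n_gpu_layers=-1, verbose=False)
        for prompt in PROMPTS:
            out = llm(prompt, max_tokens=64)
            print(name, "->", out["choices"][0]["text"].strip())
    ```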

    • swlabr@awful.systems · ↑7 ↓1 · 18 days ago

      👨🏿‍🦲: how many billions of models are you on

      🗿: like, maybe 3, or 4 right now my dude

      👨🏿‍🦲: you are like a little baby

      👨🏿‍🦲: watch this

      glue pizza

      • vivendi · ↑1 ↓4 · edited · 18 days ago

        The most recent Qwen model supposedly works really well for cases like that, but I haven’t tested that one myself; I’m going off what some dude on Reddit reported

          • vivendi · ↑1 ↓6 · edited · 18 days ago

            Not making these famous logical errors

            For example, how many Rs are in Strawberry? Or shit like that

            (Although that one is a bad example, because token-based models will fundamentally make that kind of mistake[1]. There is a new technique that lets LLMs process byte-level information, however, which fixes it)

            EDIT: [1] That sentence was badly worded. I meant text-level errors, like counting the letters of a word in a sentence. Token-based LLMs operate on atomic units (tokens) which may be part of a word, a complete word, or some piece of sentence structure. Because of that they can’t interact with text the same way humans do, but a new paradigm that lets LLMs read their input as raw bytes should help with this.
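
            As a concrete illustration (a minimal sketch in Python, using OpenAI’s tiktoken as a stand-in tokenizer; any BPE vocabulary shows the same effect):

            ```python
            # Why a token-based model can't "see" letters: the input arrives
            # as subword chunks, not characters. (tiktoken here is just a
            # stand-in for whatever BPE vocabulary a given model uses.)
            import tiktoken

            enc = tiktoken.get_encoding("cl100k_base")
            tokens = enc.encode("strawberry")
            pieces = [enc.decode_single_token_bytes(t) for t in tokens]
            print(pieces)
            # The exact split depends on the vocabulary, but the model receives
            # a few opaque chunks like these -- never the ten individual
            # letters -- so "count the r's" isn't an operation it can do directly.
            ```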

            • froztbyte@awful.systems · ↑6 ↓1 · 18 days ago

              oh, I get it, you personally choose not to make these structurally-repeatable-by-foundation errors? you personally choose to be a Unique And Correct Snowflake?

              wow shit damn, I sure want to read your eventual uni paper, see what kind of distinctly novel insight you’ve had to wrangle this domain!

              • vivendi · ↑1 · 18 days ago

                As I said, new techniques really help with those problems, like selectively operating on raw data or on tokens

                Technology isn’t standing still. If your neckbeard ass knows about it, so do researchers

    • froztbyte@awful.systems · ↑6 ↓1 · 18 days ago

      You can experiment on your own GPU

      you have lost the game

      you have been voted off the island

      you are the weakest link

      etc etc etc

        • vivendi · ↑2 ↓5 · 18 days ago

        This is the most “insufferable redditor” stereotype shit possible, and to think we’re not even on Reddit

        • self@awful.systems · ↑7 ↓1 · 18 days ago

          nah, the most insufferable Reddit shit was when you decided Lemmy doesn’t want to learn because somebody called you out on the confident bullshit you’re making up on the spot

          like LLM like shithead though am I right?

          • froztbyte@awful.systems · ↑6 ↓1 · edited · 18 days ago

            like LLM like shithead

            fuck, there’s potential here, but a bit too specific for a t-shirt?

            like llm like idiot

            perhaps?

          • vivendi · ↑1 · 18 days ago

            Nothing that I’ve said is even NEW. Do you want the papers? If you can read them, that is

            Like this shit is so 2024 and somehow for you it’s like alien technology lmfao.

        • froztbyte@awful.systems · ↑6 ↓1 · 18 days ago

          a’ight, sure bub, let’s play

          tell me what hw spec I need to deploy some kind of interactive user-facing prompt system backed by whatever favourite LLM/transformer-model you want to pick. idgaf if it’s llama or qwen or some shit you’ve got brewing in your back shed - if it’s on huggingface, fair game. here’s the baselines:

          • expected response latencies: human, or better
          • expected topical coherence: mid-support capability or above
          • expected correctness: at worst “I misunderstood $x” in the sense of “whoops, sorry, I thought you were asking about ${foo} but I answered about ${bar}”; i.e. actual, concrete contextual understanding

          (so, basically, anything a competent L2 support engineer at some random ISP or whatever could do)

          hit it, I’m waiting.

          • David Gerard@awful.systems (OP, mod) · ↑8 ↓1 · 18 days ago

            you’ll be waiting a while. it turns out “i’m not saying it’s always programming.dev, but” was already in my previous ban reasons, and it was this time too.

          • vivendi · ↑1 · edited · 18 days ago

            Human latency? Not gonna happen.

            You won’t be serving a lot of users any time soon, but if you have 16-32 GB of RAM (the faster the better), a modern 6+ core CPU, and:

            • Multiple 16 GB GPUs will work REALLY well
            • Maybe a 24 GB GPU? That is also super good.
            • Multiple 8 GB GPUs: yeah, this will be rather slow, but it can load models of up to roughly 24 billion parameters without completely melting down, though it will be a stretch
            • Single 8 GB GPU: you’d be most comfortable with 8B models, up to 16B models at best
            • Single 4 GB GPU: surprisingly usable, especially with 4B models, but your hard limit is about 9B parameters (see the rough VRAM arithmetic below)
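
            (A rough rule of thumb behind those tiers, as a sketch: weight memory ≈ parameter count × bytes per weight at your quantization, plus overhead for the KV cache and activations. The constants below are assumptions, not measurements.)

            ```python
            # First-pass VRAM estimate for a quantized model.
            def approx_vram_gb(params_billion: float, bits_per_weight: float = 4.0,
                               overhead_gb: float = 1.5) -> float:
                weights_gb = params_billion * bits_per_weight / 8  # 1B params @ 4-bit ~ 0.5 GB
                return weights_gb + overhead_gb

            for size in (4, 8, 16, 24):
                print(f"{size}B @ 4-bit ~ {approx_vram_gb(size):.1f} GB")
            # 4B ~ 3.5 GB, 8B ~ 5.5 GB, 16B ~ 9.5 GB, 24B ~ 13.5 GB --
            # a first-pass estimate only (partial CPU offload stretches the
            # limits at the cost of speed).
            ```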

            You need to download an inference engine. Now, there are various options, but I shill llama.cpp pretty hard, because while it’s not particularly fast, it will run on anything.
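
            If you’d rather drive it from Python, the llama-cpp-python bindings wrap the same engine (a minimal sketch; the model path is hypothetical):

            ```python
            # Load a GGUF model and serve a single completion via llama.cpp.
            from llama_cpp import Llama

            llm = Llama(
                model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical path
                n_ctx=4096,       # context window
                n_gpu_layers=-1,  # offload all layers to the GPU; lower if VRAM is tight
            )

            out = llm("Q: How do I reset my router?\nA:", max_tokens=128, stop=["Q:"])
            print(out["choices"][0]["text"])
            ```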

            My recommendation is usually the Mistral model series, especially with the Dolphin fine-tune, as those are unaligned (uncensored) models that you can align yourself.
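
            Grabbing a quantized build is one call with the huggingface_hub client (the repo id and filename below are illustrative; browse the Hub for the actual Dolphin uploads):

            ```python
            # Download a quantized model file from Hugging Face into the local cache.
            from huggingface_hub import hf_hub_download

            path = hf_hub_download(
                repo_id="TheBloke/dolphin-2.2.1-mistral-7B-GGUF",  # illustrative repo id
                filename="dolphin-2.2.1-mistral-7b.Q4_K_M.gguf",   # illustrative filename
            )
            print(path)  # hand this path to llama.cpp / llama-cpp-python
            ```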

            Now, for some of the behavior you want, you may need to further fine-tune your model. That might be a little less rosy of a situation. Quite frankly I can’t be assed to research this much further for some clearly bad-faith hostile comment, but from what I know, you need an alignment layer, a fine-tune, then maaaaaybe an output scoring system. That should give you what you need.
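
            For the fine-tune step, the usual route is a LoRA adapter via PEFT (a sketch under assumptions: the base model name and hyperparameters are illustrative, and you still need your own dataset and training loop on top):

            ```python
            # Wrap a causal LM with a LoRA adapter so only small adapter
            # matrices are trained while the base weights stay frozen.
            import torch
            from transformers import AutoModelForCausalLM, AutoTokenizer
            from peft import LoraConfig, get_peft_model

            base = "mistralai/Mistral-7B-v0.1"  # illustrative base model
            tokenizer = AutoTokenizer.from_pretrained(base)
            model = AutoModelForCausalLM.from_pretrained(
                base, torch_dtype=torch.float16, device_map="auto"
            )

            lora = LoraConfig(r=8, lora_alpha=16,
                              target_modules=["q_proj", "v_proj"],
                              task_type="CAUSAL_LM")
            model = get_peft_model(model, lora)
            model.print_trainable_parameters()  # only the adapter weights train
            ```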

            EDIT: You’ll be fine-tuning the model in Python first, then running it with llama.cpp, by the way, so get comfortable with that if you’re not already.