• vivendi (+1/−5) · 1 day ago

    This is the most “insufferable redditor” stereotype shit possible, and to think we’re not even on Reddit

    • self@awful.systems (+7/−1) · 1 day ago

      nah, the most insufferable Reddit shit was when you decided Lemmy doesn’t want to learn because somebody called you out on the confident bullshit you’re making up on the spot

      like LLM like shithead though am I right?

      • froztbyte@awful.systems (+6/−1) · edited · 20 hours ago

        like LLM like shithead

        fuck, there’s potential here, but a bit too specific for a t-shirt?

        like llm like idiot

        perhaps?

      • vivendi (+1) · 1 day ago

        Nothing that I’ve said is even NEW. Do you want the papers? If you can read them, that is

        Like this shit is so 2024 and somehow for you it’s like alien technology lmfao.

    • froztbyte@awful.systems (+6/−1) · 1 day ago

      a’ight, sure bub, let’s play

      tell me what hw spec I need to deploy some kind of interactive user-facing prompt system backed by whatever favourite LLM/transformer-model you want to pick. idgaf if it’s llama or qwen or some shit you’ve got brewing in your back shed - if it’s on huggingface, fair game. here’s the baselines:

      • expected response latencies: human, or better
      • expected topical coherence: mid-support capability or above
      • expected correctness: at worst “I misunderstood $x” in the sense of “whoops, sorry, I thought you were asking about ${foo} but I answered about ${bar}”; i.e. actual, concrete contextual understanding

      (so, basically, anything a competent L2 support engineer at some random ISP or whatever could do)

      hit it, I’m waiting.

      • David Gerard@awful.systems (OP, mod) (+8/−1) · 1 day ago

        you’ll be waiting a while. it turns out “i’m not saying it’s always programming.dev, but” was already in my previous ban reasons, and it was this time too.

      • vivendi (+1) · edited · 1 day ago

        Human latency? Not gonna happen.

        You won’t be serving a lot of users any time soon, but if you have 16–32 GB of RAM (faster is better), a modern 6+ core CPU, and:

        • Multiple 16 GB GPUs will work REALLY well
        • Maybe a 24 GB GPU? That is also super good.
        • Multiple 8 GB GPUs: yeah, this will be rather slow, but they can load models up to say 24 billion parameters without completely melting down. It will be a stretch, though.
        • Single 8 GB GPU: you’d be most comfortable with 8B models, up to 16B models at best
        • Single 4 GB GPU: surprisingly usable, especially with 4B models, but your hard limit is about 9B parameters.
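        (A rough back-of-the-envelope sketch of where those sizes come from, assuming ~4-bit quantization; my own rule of thumb, not a hard spec: weight memory ≈ parameters × bits-per-weight ÷ 8, plus some overhead for the KV cache and runtime.)

        ```python
        # Back-of-the-envelope VRAM estimate for a quantized model.
        # Rough rule of thumb only; real usage varies with context length,
        # KV-cache precision, and runtime overhead.
        def estimate_vram_gb(params_billion: float,
                             bits_per_weight: float = 4.5,  # ~4-bit quant
                             overhead_gb: float = 1.5) -> float:
            weights_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
            return weights_gb + overhead_gb

        for size in (4, 8, 14, 24):
            print(f"{size}B params ≈ {estimate_vram_gb(size):.1f} GB at ~4-bit")
        ```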

        You need to download an inference engine. Now, there are various options, but I shill llama.cpp pretty hard, because while it’s not particularly fast it will run on anything.

        My recommendation is usually the Mistral model series, especially with a Dolphin fine-tune, as those are unaligned (uncensored) models that you can align yourself.
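        (If you’d rather drive that from Python than the raw llama.cpp binaries, a minimal sketch with the llama-cpp-python bindings and huggingface_hub might look like the following; the repo and file names are placeholders, so substitute whichever quantized Dolphin/Mistral GGUF build you actually pick.)

        ```python
        # Minimal sketch: pull a quantized GGUF and run it via llama.cpp's
        # Python bindings (pip install llama-cpp-python huggingface_hub).
        # repo_id/filename below are placeholders, not real artifact names.
        from huggingface_hub import hf_hub_download
        from llama_cpp import Llama

        model_path = hf_hub_download(
            repo_id="someuser/dolphin-mistral-7b-GGUF",   # placeholder repo
            filename="dolphin-mistral-7b.Q4_K_M.gguf",    # placeholder file
        )

        llm = Llama(
            model_path=model_path,
            n_ctx=4096,        # context window
            n_gpu_layers=-1,   # offload as many layers as the GPU holds
        )

        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": "Why does my DSL line keep dropping?"}],
            max_tokens=256,
        )
        print(out["choices"][0]["message"]["content"])
        ```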

        Now, for some of the behavior you want, you may need to further fine-tune your model. That might be a little less rosy of a situation. Quite frankly I can’t be assed to research this much further for some clearly bad-faith hostile comment, but from what I know, you need an alignment layer, a fine-tune, then maaaaaybe an output scoring system. That should give you what you need.

        EDIT: You’ll first be tuning the model in Python and then running it with llama.cpp, by the way, so get comfortable with that if you’re not
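        (For the “tune in Python, run with llama.cpp” part, a minimal sketch assuming a LoRA-style fine-tune with transformers + peft; the comment doesn’t commit to a specific method, and the base model, hyperparameters, and training loop here are placeholders.)

        ```python
        # Sketch of a LoRA fine-tune in Python, assuming transformers + peft.
        # Base model and hyperparameters are placeholders, not a tested recipe.
        from transformers import AutoModelForCausalLM, AutoTokenizer
        from peft import LoraConfig, get_peft_model

        base = "mistralai/Mistral-7B-v0.1"   # placeholder base model
        tokenizer = AutoTokenizer.from_pretrained(base)
        model = AutoModelForCausalLM.from_pretrained(base)

        # Attach small trainable adapter matrices instead of updating all weights.
        lora = LoraConfig(
            r=8,
            lora_alpha=16,
            lora_dropout=0.05,
            target_modules=["q_proj", "v_proj"],
            task_type="CAUSAL_LM",
        )
        model = get_peft_model(model, lora)
        model.print_trainable_parameters()

        # ... train on your own data with the trainer of your choice, then
        # merge the adapter and save:
        # model = model.merge_and_unload()
        # model.save_pretrained("tuned-model")
        #
        # Afterwards, convert the saved model to GGUF with llama.cpp's
        # conversion script and quantize it so llama.cpp can serve it
        # (script names vary by llama.cpp version).
        ```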