You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers. AI code is absolutely up to production quality! Also, you’re all…
My most honest goal is to educate people, which on Lemmy is always met with hate. People love to hate, parroting the same old nonsense that someone else taught them.
If you insist on ignorance, then be ignorant in peace; don’t try such misguided attempts at a sneer.
There are things at which LLMs suck. And there are things that you wrongly believe as part of this bullshit Twitter civil war.
oh and I suppose you can back that up with verifiable facts, yes?
and that you, yourself, can stand as a sole beacon against the otherwise regularly increasing evidence and studies that both indicate toward and also prove your claims to be full of shit? you are the saviour that can help enlighten us poor unenlightened mortals?
sounds very hard. managing your calendar must be quite a skill
Hallucination rates have been dropping steadily and model quality has been going up, same with multishot prompts and RAG reducing hallucination rates. These are proven scientific facts; what the fuck are you on about? Open Hugging Face RIGHT NOW, go to the papers section, FUCKING READ.
I’ve spent 6+ years of my life in compsci academia to come here and be lectured by McDonald in his fucking basement, what has my life become
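Whatever you make of the claims around them, "multishot" and "RAG" are real techniques. A minimal sketch of the retrieval half of RAG, with TF-IDF standing in for a real embedding model (an assumption, purely to keep the sketch self-contained):

```python
# Minimal sketch of retrieval-augmented generation (RAG): fetch the most
# relevant documents for a query and prepend them to the prompt, so the
# model answers from supplied context rather than guessing from memory.
# TF-IDF replaces a real embedding model here just to stay self-contained.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "llama.cpp is a C/C++ inference engine for GGUF-quantized models.",
    "RAG prepends retrieved documents to the prompt to ground the answer.",
    "Multishot prompting shows the model worked examples before the task.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

def build_prompt(query: str, k: int = 2) -> str:
    """Retrieve the k most similar docs and prepend them to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top = sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)[:k]
    context = "\n".join(docs[i] for i in top)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("what does RAG do?"))
```

The retrieved context then gets fed to whatever model you like; the generation step is independent of the retrieval step.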
ah yes, my ability to read a pdf immediately confers upon me all the resources required to engage in materially equivalent experimentation of the thing that I just read! no matter whether the publisher spent cents or billions in the execution and development of said publication, oh no! it is so completely a cost paid just once, and thereafter it’s ~totally~ free!
oh, wait, hang on. no. no it’s the other thing. that one where all the criticisms continue to hold! my bad, sorry for mistaking those. guess I was roleplaying an LLM for a moment there!
You can experiment on your own GPU by running the tests with a variety of models from different generations (Llama 2-class 7B, Llama 3-class 7B, Gemma, Granite, Qwen, etc.)
Even the lowest-end desktop hardware can run at least 4B models. The only real difficulty is scripting the test system, but the papers usually describe their test methodology well enough to reproduce.
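The "script it yourself" part is, for what it's worth, mostly plumbing. A minimal sketch of such a test loop using the llama-cpp-python bindings, with the model paths and question set as placeholders (not recommendations):

```python
# Minimal sketch of a local eval harness: run the same fixed question set
# against several local GGUF models and report exact-match accuracy.
# Model paths and questions are placeholders.
from llama_cpp import Llama

MODELS = ["./llama2-7b.Q4_K_M.gguf", "./llama3-8b.Q4_K_M.gguf"]
QUESTIONS = [
    ("What is the capital of France?", "Paris"),
    ("What is 7 * 8?", "56"),
]

for path in MODELS:
    llm = Llama(model_path=path, n_ctx=2048, verbose=False)
    hits = 0
    for question, expected in QUESTIONS:
        out = llm(f"Q: {question}\nA:", max_tokens=16, stop=["\n"])
        answer = out["choices"][0]["text"].strip()
        hits += expected.lower() in answer.lower()
    print(f"{path}: {hits}/{len(QUESTIONS)} matches")
```

Real benchmarks use much larger question sets and stricter scoring, but the shape of the loop is the same.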
👨🏿🦲: how many billions of models are you on
🗿: like, maybe 3, or 4 right now my dude
👨🏿🦲: you are like a little baby
👨🏿🦲: watch this
glue pizza
The most recent Qwen model supposedly works really well for cases like that, but that one I haven’t tested myself; I’m going off what some dude on Reddit reported.
Good for what? Glue pizza? Unnerving/creepy pasta?
Not making these famous logical errors
For example, how many Rs are in Strawberry? Or shit like that
(Although that one is a bad example, because token-based models will fundamentally make such mistakes[1]. There is a new technique that lets LLMs process byte-level information, however, which fixes this.)
EDIT: [1] This sentence is badly written. I meant text-level errors, like counting the letters of a word in this sentence. Token-based LLMs operate on atomic units, tokens, which may be part of a word, a complete word, or some piece of sentence structure. Because of that they can’t interact with text the same way humans do, but a new paradigm that lets LLMs read their input as raw bytes will help with this.
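The tokenization point, at least, is easy to demonstrate: the model never sees individual letters. A quick sketch with tiktoken (the cl100k_base encoding is an arbitrary choice for illustration):

```python
# Token-based models don't see characters: "strawberry" arrives as a few
# sub-word tokens, so letter-counting must be inferred, not read off.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
pieces = [enc.decode_single_token_bytes(t) for t in tokens]
print(pieces)                   # sub-word chunks, not letters

# Byte- or character-level access, by contrast, makes this trivial:
print("strawberry".count("r"))  # 3
```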
you have lost the game
you have been voted off the island
you are the weakest link
etc etc etc
This is the most “insufferable redditor” stereotype shit possible, and to think we’re not even on Reddit
nah, the most insufferable Reddit shit was when you decided Lemmy doesn’t want to learn because somebody called you out on the confident bullshit you’re making up on the spot
like LLM, like shithead though, am I right?
fuck, there’s potential here, but a bit too specific for a t-shirt?
perhaps?
Nothing that I’ve said is even NEW. Do you want the papers? If you can read them, that is
Like this shit is so 2024 and somehow for you it’s like alien technology lmfao.
a’ight, sure bub, let’s play
tell me what hw spec I need to deploy some kind of interactive user-facing prompt system backed by whatever favourite LLM/transformer-model you want to pick. idgaf if it’s llama or qwen or some shit you’ve got brewing in your back shed - if it’s on huggingface, fair game. here’s the baselines:

expected response latencies: human, or better

expected topical coherence: mid-support capability or above

expected correctness: at worst “I misunderstood $x” in the sense of “whoops, sorry, I thought you were asking about ${foo} but I answered about ${bar}”; i.e. actual, concrete, contextual understanding
(so, basically, anything a competent L2 support engineer at some random ISP or whatever could do)
hit it, I’m waiting.
you’ll be waiting a while. it turns out “i’m not saying it’s always programming.dev, but” was already in my previous ban reasons, and it was this time too.
Human latency? Not gonna happen.
You won’t be serving a lot of users any time soon, but if you have 16-32 GB of RAM (faster is better), a modern 6+ core CPU, and:

Multiple 16 GB GPUs will work REALLY well.

Maybe a 24 GB GPU? That is also super good.

Multiple 8 GB GPUs: this will be rather slow, but it can load models of up to about 24B parameters without completely melting down, though it will be a stretch.

A single 8 GB GPU: you’d be most comfortable with 8B models, up to 16B models at best.

A single 4 GB GPU: surprisingly usable, especially with 4B models, but your hard limit is about 9B parameters.
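Those brackets follow from back-of-envelope arithmetic: a quantized weight costs roughly 4-5 bits, plus overhead for context and buffers. A sketch, where the 4.5 bits/weight and 20% overhead figures are rough assumptions for common GGUF quantizations, not measurements:

```python
# Rule-of-thumb VRAM estimate for a quantized model: parameter count
# times bits per weight, plus a rough margin for KV cache and buffers.
# 4.5 bits/weight and 20% overhead are assumptions, not measurements.
def vram_gb(params_billion: float, bits_per_weight: float = 4.5,
            overhead: float = 1.2) -> float:
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for size in (4, 8, 16, 24):
    print(f"{size}B params ≈ {vram_gb(size):.1f} GB")
# 4B ≈ 2.7 GB, 8B ≈ 5.4 GB, 16B ≈ 10.8 GB, 24B ≈ 16.2 GB: roughly the
# GPU brackets above (llama.cpp can offload overflow layers to system RAM).
```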
You need to download an inference engine. Now, there are various options, but I shill llama.cpp pretty hard because, while it’s not particularly fast, it will run on anything.
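For the record, the minimal case really is small. llama.cpp itself is a command-line C/C++ project, but its llama-cpp-python bindings make the round-trip a few lines (the model path is a placeholder):

```python
# Smallest llama.cpp round-trip via the llama-cpp-python bindings.
# n_gpu_layers=-1 offloads every layer to the GPU; 0 runs pure CPU,
# which is the "runs on anything" case. Model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./mistral-7b.Q4_K_M.gguf", n_gpu_layers=-1)
out = llm("Explain what an inference engine does, in one sentence.",
          max_tokens=64)
print(out["choices"][0]["text"])
```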
My usual recommendation is the Mistral model series, especially with the Dolphin fine-tune, as those are unaligned (uncensored) models that you can align yourself.
Now, for some of the behavior you want, you may need to further fine-tune your model. That might be a little less rosy of a situation. Quite frankly, I can’t be assed to research this much further for some clearly bad-faith hostile comment, but from what I know you need an alignment layer, a fine-tune, then maaaaaybe an output scoring system. That should give you what you need.
EDIT: You’ll be tuning the model in Python first, then running it with llama.cpp, by the way, so get comfortable with that if you’re not.
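The "tune in Python, run in llama.cpp" pipeline being gestured at here is usually LoRA via the peft library, followed by a GGUF conversion with llama.cpp's scripts. A minimal setup sketch; the base model name and hyperparameters are illustrative, not a recipe:

```python
# Minimal LoRA fine-tuning setup with Hugging Face transformers + peft:
# wrap a base model so only small adapter matrices are trained, which is
# what makes tuning feasible on consumer GPUs. Training data, the Trainer
# loop, and the later GGUF conversion for llama.cpp are omitted.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"   # illustrative choice of base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()        # only a tiny fraction trains
```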
also
eh. look.
I realize you’ll probably receive/perceive this post negatively, ranging anywhere from “criticism”/“extremely harsh” through … “condemnation”?
but, nonetheless, I have a request for you
please, for the love of ${deity}, go out and meet people. get out of your niche, explore a bit. you are so damned close to stepping in the trap, and you could do not-that.
(just think! you’ve spent a whole 6+ years on compsci? now imagine what your next 80+ years could be!)