You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers. AI code is absolutely up to production quality! Also, you’re all…
You won't be serving a lot of users any time soon, but you can get going if you have 16-32 GB of RAM (faster is better), a modern CPU with 6+ cores, and one of the following (a rough sizing sketch follows the list):
Multiple 16 GB GPUs will work REALLY well.
Maybe a 24 GB GPU? That is also super good.
Multiple 8 GB GPUs: yeah, this will be rather slow, but it can load models of up to, say, 24 billion parameters without completely melting down. It will be a stretch, though.
A single 8 GB GPU: you'd be most comfortable with 8B models, up to 16B models at best.
A single 4 GB GPU: surprisingly usable, especially with 4B models, but your hard limit is about 9B parameters.
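To make that list concrete, here's a back-of-the-envelope sizing sketch. The constants are my own rough assumptions, not anything llama.cpp reports: ~4.5 bits per weight approximates a Q4_K_M-style quant, and the overhead figure is a guessed allowance for KV cache and runtime buffers.

```python
# Rough VRAM check for quantized GGUF models. Constants are assumptions:
# ~4.5 bits/weight approximates a Q4_K_M quant; overhead_gb is a guessed
# allowance for KV cache and runtime buffers (grows with context length).

def gguf_size_gb(params_billions: float, bits_per_weight: float = 4.5) -> float:
    # billions of params * bits per weight / 8 bits per byte = gigabytes
    return params_billions * bits_per_weight / 8

def fits(vram_gb: float, params_billions: float, overhead_gb: float = 1.5) -> bool:
    return gguf_size_gb(params_billions) + overhead_gb <= vram_gb

for vram, params in [(4, 4), (4, 9), (8, 8), (8, 16), (24, 24)]:
    verdict = "fits" if fits(vram, params) else "needs CPU offload"
    print(f"{params}B model on a {vram} GB GPU: {verdict}")
```

Run it and the verdicts line up with the list above: 4B on 4 GB fits, 9B doesn't; 16B on 8 GB is already into CPU-offload territory.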
You need to download an inference engine. There are various options, but I shill llama.cpp pretty hard, because while it's not particularly fast, it will run on anything.
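llama.cpp itself is a C/C++ project with a CLI and a server, but since you'll be living in Python anyway (see the EDIT below), here's a minimal sketch using the llama-cpp-python bindings, which wrap the same engine. The model path is a placeholder for whatever GGUF file you actually download.

```python
# Minimal local inference via llama-cpp-python (pip install llama-cpp-python),
# the Python bindings over llama.cpp.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU; lower this on small cards
)

out = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=128,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```

On the multi-GPU and 4 GB setups above, `n_gpu_layers` is the knob you'll be tweaking: fewer layers on the GPU means slower but survivable.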
My recommendation is usually the Mistral model series, especially with the Dolphin fine-tune, as those are unaligned (uncensored) models that you can align yourself.
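Community GGUF quants of Dolphin-tuned Mistral models live on Hugging Face. A minimal download sketch follows; the exact repo id and filename here are assumptions for illustration, so browse the hub and substitute a quant that actually exists.

```python
# Fetch a GGUF quant from Hugging Face. Repo id and filename below are
# illustrative assumptions, not guaranteed to exist verbatim.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/dolphin-2.6-mistral-7B-GGUF",  # assumed repo id
    filename="dolphin-2.6-mistral-7b.Q4_K_M.gguf",   # assumed filename
)
print(model_path)  # pass this path straight to llama.cpp / Llama(model_path=...)
```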
Now, for some of the behavior you want, you may need to fine-tune your model further. That's a less rosy situation. Quite frankly, I can't be assed to research this much further for a clearly bad-faith hostile comment, but from what I know, you need an alignment layer, a fine-tune, and then maaaaaybe an output scoring system. That should give you what you need.
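"Output scoring system" can mean a few things; one simple reading is best-of-n reranking: sample several completions and keep the one a scorer likes best. A sketch, where `score` is a hypothetical stand-in for whatever reward model, classifier, or keyword filter you'd actually use.

```python
# Best-of-n output scoring sketch. `llm` is a llama_cpp.Llama instance
# like the one shown earlier; `score` is a hypothetical placeholder.

def score(text: str) -> float:
    # Hypothetical scorer: penalize empty/short answers. Replace with
    # whatever "good behavior" means for your use case.
    return float(len(text.strip()))

def best_of_n(llm, prompt: str, n: int = 4) -> str:
    candidates = []
    for _ in range(n):
        out = llm(prompt, max_tokens=256, temperature=0.9)  # sample diversely
        candidates.append(out["choices"][0]["text"])
    return max(candidates, key=score)
```

The obvious cost: n times the compute per answer, which matters a lot on the hardware budgets above.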
EDIT: You'll be fine-tuning the model in Python first and then running it with llama.cpp, by the way, so get comfortable with that workflow if you're not already.
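The usual shape of that Python-side tuning is a LoRA fine-tune with transformers + peft, followed by a conversion to GGUF so llama.cpp can serve the result. A skeleton of that workflow; the base model and LoRA hyperparameters are illustrative choices, and the actual training loop is elided.

```python
# LoRA fine-tune skeleton (pip install transformers peft). A sketch of the
# workflow's shape, not a full training script: you still need a dataset
# and a training loop (e.g. transformers.Trainer or trl).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # example base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # common pick for Mistral/Llama blocks
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # sanity check: only adapters train

# ...train on your data here...
# Afterwards: merge the adapter, save the model, then convert it with
# llama.cpp's convert_hf_to_gguf.py so llama.cpp can run it.
```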
Human-level response latency? Not gonna happen.