Cross-posted from: https://lemmy.world/post/1474932
Hi there.
I wanted to run LLMs locally on my server (for better privacy), and was wondering:
- Could I use Intel Arc/AMD GPUs? These are often less expensive, and AMD has open source drivers, which is something I like.
- Would a PCIe x4 Gen 3 slot be enough (it’s an x16 slot running at x4 speeds)? This is an important consideration.
- Would 8GB of RAM (in the GPU; I believe it’s called VRAM?) be enough?
I’m looking at language models to train on my Reddit and Lemmy content, with the aim of making it write like me (and maybe even better than me? Who knows). I don’t quite know which models I will train, or how I will do so (I certainly won’t be writing anything from scratch), but I was wondering: with the explosion of FOSS AI models, maybe something like this would be possible with the hardware constraints I mentioned above?
Does the speed of the connection between the GPU and the CPU really matter in such applications?
Thanks!
You can probably run a 7b LLM comfortably in system RAM, maybe one of the smaller 13b ones.
Software to use
- https://github.com/ggerganov/llama.cpp - command line. Basic, flexible.
- https://github.com/LostRuins/koboldcpp - Precompiled llama.cpp with a UI - easy to start with
Models
In general, you want small GGML models. https://huggingface.co/TheBloke has a lot of them. There are some superHOT versions of models, but I’d avoid them for now. They’re trained to handle bigger context sizes, but it seems that made them dumber too. There’s a lot of new work coming out on bigger context lengths, so you should probably revisit that when you need it.
- https://huggingface.co/TheBloke/orca_mini_v2_13b-GGML - the q3_K_M.bin perhaps - might still be too big, depending on what you’re running in the background
- https://huggingface.co/TheBloke/orca_mini_3B-GGML - very small model. Not sure how well it’ll do
- https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.4-GGML
- https://huggingface.co/TheBloke/vicuna-7B-v1.3-GGML
- https://huggingface.co/TheBloke/WizardLM-7B-V1.0-Uncensored-GGML
Each has different strengths: orca is supposed to be better at reasoning, airoboros is good at longer and more story-like answers, vicuna is a very good all-rounder, and wizardlm is also a notably good all-rounder.
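If you’d rather drive llama.cpp from a script instead of the command line, here’s a minimal sketch using the llama-cpp-python bindings (my suggestion, not something from the list above; the model file name is just a placeholder for whichever quantized .bin you download):

```python
# Minimal sketch: run a downloaded GGML model through the llama-cpp-python
# bindings (pip install llama-cpp-python). The file name below is hypothetical;
# point it at whatever quantized .bin you grabbed from TheBloke's repos above.
from llama_cpp import Llama

llm = Llama(
    model_path="./orca_mini_v2_13b.ggmlv3.q3_K_M.bin",  # hypothetical path
    n_ctx=2048,    # context window in tokens
    n_threads=8,   # roughly match your physical CPU cores
)

output = llm(
    "Q: Why would someone run a language model locally?\nA:",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```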
For training, there are some tricks like QLoRA, but the results aren’t impressive from what I’ve read. Also, it can be pretty difficult to get the results you want when training LLMs. You should probably start with just running them and getting comfortable with that, maybe try few-shot prompts (prompts with a few examples of the writing style you want), and then go from there.
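To make the few-shot idea concrete, this is roughly what I mean; the example comments here are placeholders you’d swap for your own Reddit/Lemmy posts:

```python
# Sketch of a few-shot "write like me" prompt: paste a handful of your own
# comments as examples, then ask the model to continue in the same voice.
# The example texts below are placeholders, not real data.
my_comments = [
    "Honestly, self-hosting is half the fun. Half the pain too, but mostly fun.",
    "I'd rather debug my own server at 2am than hand my data to a third party.",
]

new_topic = "running LLMs on consumer GPUs"

prompt = "Here are some comments written in my personal style:\n\n"
for i, comment in enumerate(my_comments, start=1):
    prompt += f"Example {i}: {comment}\n\n"
prompt += f"Now write a new comment in the same style about {new_topic}.\nComment:"

print(prompt)  # feed this string to llama.cpp / koboldcpp as the prompt
```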
Thank you. I did have llama.cpp in mind but didn’t know where or how to start! Do these models have a limit on how much information they can ingest, and how much they can improve relative to the information fed to them?
LLMs don’t ingest information as such. The text gets broken into tokens (parts of words; “catch” can be “cat” + “ch”, for example), and then run through training. Training basically learns the statistical likelihood of which token follows a sequence of existing tokens. It’s in some ways similar to a Markov chain, but of course much more complex. It has layers of statistics, and preprocessors that can figure out which tokens to give higher precedence in the input text.
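If you want to see tokenization in action, the easiest thing to poke at is probably OpenAI’s tiktoken library. That’s ChatGPT’s tokenizer rather than llama.cpp’s, so the exact splits will differ, but the idea is the same:

```python
# Sketch: peek at how text gets split into tokens. tiktoken is the tokenizer
# used by OpenAI models, not by llama.cpp, so the splits won't match a llama
# model exactly -- it's just to illustrate the idea.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "The cat caught the ball."
token_ids = enc.encode(text)

# Decode each id back to its text piece to see the sub-word splits.
pieces = [enc.decode([tid]) for tid in token_ids]
print(token_ids)
print(pieces)  # e.g. ['The', ' cat', ' caught', ' the', ' ball', '.']
```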
Basically, the more parameters, the more and subtler patterns it can learn. Smaller models are often trained on fewer tokens than bigger ones, but it’s still a massive amount. IIRC it’s something like 1T tokens for the 7b and 13b models, and 1.4T tokens for 33b and 65b. For comparison with the models I linked, ChatGPT 3.5 is rumored to be 175b parameters.
In addition to parameter count, you have quantization of the numbers. Originally each parameter in a model is a 16-bit float; it turns out you can reduce that to an 8-bit int, or even 4 or 3 bits, without too much of a quality hit. There are different ways to quantize the parameters, with varying impact on the “smartness” of the model. By reducing the resolution of the numbers, the memory needed for the model is reduced, and in some cases the speed of running it is increased.
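To put rough numbers on that, the back-of-the-envelope math for the weights alone looks like this (ignoring the extra memory needed for context and runtime overhead, so real usage is somewhat higher):

```python
# Back-of-the-envelope memory footprint for the model weights alone,
# ignoring context (KV cache) and other runtime overhead.
def weight_memory_gb(params_billion: float, bits_per_param: float) -> float:
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

for params in (7, 13, 33, 65):
    for bits in (16, 8, 4):
        print(f"{params}b model @ {bits}-bit: ~{weight_memory_gb(params, bits):.1f} GB")

# e.g. a 13b model: ~26 GB at 16-bit float, ~13 GB at 8-bit, ~6.5 GB at 4-bit,
# which is why quantized 7b/13b models fit in ordinary system RAM.
```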
When it comes to training, the best results have been achieved with full 16-bit floats, but there are some techniques to train on quantized models too. The results I’ve seen from that are less impressive, but it’s been a while since I last looked at it.
Edit: I mentioned QLoRA previously, which is for training quantized models. I think that’s only available for GPU, though.
Edit 2: This might be a better Markov chain explanation than the previous link.
Thanks! I know absolutely nothing about machine learning, so some of the terms you mentioned didn’t quite register, but I’ll try reading up on it. I was going to run llama.cpp or a derivative; a GUI sounds nice to have. Do you suggest I wait for GPU prices to go down and aim for the 16GB models? The higher-end GPUs are exorbitantly priced.
Cheers
Just ask if you want some clarification.
As for a GPU, I’m waiting… IMHO it’s just too expensive right now. And sadly, Nvidia is currently the only game in town. Some software works on AMD, but just about everything works on Nvidia.
That said, my PC has 48GB of system RAM, and I can run 65b models on it at about 1s per token, with a few layers offloaded to my 10GB GPU. Running that fully on GPU would otherwise require 2x 3090 or 4090 (2x 4090 would be about 20x faster, though…).
I certainly will! I’m just not very good with maths either, and although I know what floating point numbers are, I would have to read more about it to make sure I understand your comment.
Those are some insane requirements to run models haha. How long does it take for you to train your models on datasets (for me, a “dataset” would be my entire Reddit/Lemmy comment history)?
Another thing: llama.cpp supports offloading layers to the GPU, and you could try the OpenCL backend for that on non-Nvidia GPUs. But llama.cpp can also run CPU-only with usable speed. On my system, it does about 150ms per token on a 13b model.
koboldcpp is probably the most straightforward to get running, since you don’t have to compile anything; it has a simple UI to set launch parameters, and it also has a web UI to chat with the bot. And since it uses llama.cpp, it supports everything llama.cpp does, including OpenCL (CLBlast in the launcher).
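If you end up scripting it instead of using the koboldcpp launcher, partial offloading looks something like this with the llama-cpp-python bindings (a sketch; the file name is hypothetical, and the right layer count depends on your VRAM):

```python
# Sketch of partial GPU offload via the llama-cpp-python bindings.
# This only helps if the library was built with a GPU backend (e.g. CLBlast
# for non-Nvidia cards); otherwise the layers just stay on the CPU.
# The plain llama.cpp CLI exposes the same knob as --n-gpu-layers / -ngl.
from llama_cpp import Llama

llm = Llama(
    model_path="./some-13b-model.ggmlv3.q4_K_M.bin",  # hypothetical file name
    n_ctx=2048,
    n_threads=8,
    n_gpu_layers=20,  # how many transformer layers to push to the GPU;
                      # raise it until you run out of VRAM, then back off
)

print(llm("Hello, ", max_tokens=16)["choices"][0]["text"])
```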
Thanks, I’ll take a look
What software do you want to run?
I’ve been doing a lot of research on this over the last 2 weeks. I have my machine in the mail, but have not tried anything myself on my own hardware.
For Stable Diffusion, 8GB of VRAM is usually considered the absolute minimum, and only for very basic stuff. 16GB or more of VRAM is the baseline for a decent workflow.
For AMD, I have seen multiple sources saying to avoid it, but there are a few people with working examples in the wild. Apparently, AMD only officially supports the 7000-series GPUs with ROCm/HIP/AI stuff.
Officially, with Stable Diffusion, only Nvidia is supported.
I don’t know what kind of LLM I would want to run. I’m just going through some names; would you be able to recommend anything that might learn from text?
Thanks, it would seem I need to stick with Nvidia, although I don’t like the idea very much. Unfortunate.
This is a general list that was shared recently (it has Google Analytics, though):
PrivateGPT is on my list to try after someone posted about it a few weeks ago, with this how-to article (which has a view limit embedded before a paywall) and the GitHub project repo:
I may have been doing something wrong, but in my experience llama.cpp with OpenCL offloading isn’t much faster than CPU-only; it has the same CPU usage, with the addition of my GPU making typewriter noises.
I have written this gist to run fastchat-t5-3b-v1.0 using Intel’s IPEX, and it runs quite well. I have an A770 16GB, but it seems to use under 8GB when using bfloat16. It could easily be modified to run something else, though. Or if you want a GUI (or a nice CLI), I’ve added support for Intel XPUs in FastChat.
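For reference, the core of what the gist does looks roughly like this. This is a sketch from memory rather than the gist itself, and the Hugging Face model id and generation settings are assumptions on my part:

```python
# Rough sketch of running fastchat-t5-3b on an Intel Arc GPU (XPU) with IPEX
# in bfloat16. Not the actual gist -- just the general shape of it; the model
# id and generation parameters are assumptions.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "lmsys/fastchat-t5-3b-v1.0"  # assumed Hugging Face id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

model = model.to("xpu")                          # move weights to the Arc GPU
model = ipex.optimize(model, dtype=torch.bfloat16)  # let IPEX optimize for XPU

inputs = tokenizer("What is the capital of France?", return_tensors="pt").to("xpu")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```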
Thanks, I’ll take a look! A GUI is certainly very helpful :)
You can rent super powerful GPUs by the minute via cloud infrastructure. It’s probably the most viable way.
Sorry, but I don’t think that’s a very private option. I probably won’t be doing that.