Cross-posted from: https://lemmy.world/post/1474932
Hi there.
I wanted to run LLMs locally on my server (for better privacy), and was wondering if:
- I could use an Intel Arc or AMD GPU - they’re often less expensive, and AMD has open-source drivers, which is something I like.
- a PCIe Gen 3 x4 slot would be enough (it’s a physical x16 slot running at x4 speed) - this is an important consideration for me.
- 8 GB of memory on the GPU (VRAM, I believe it’s called?) would be enough.
I’m looking at language models to train on my Reddit and Lemmy content, with the aim of making it write like me (and maybe even better than me? Who knows). I don’t quite know which models I will train, or how I will do so (I certainly won’t be writing anything from scratch), but I was wondering: with the explosion of FOSS AI models, maybe something like this would be possible with the hardware constraints I mentioned above?
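From what I’ve read so far, LoRA-style fine-tuning with the Hugging Face stack seems to be the usual way to do this kind of thing; here’s a rough sketch of what I imagine it would look like (the model choice, file name, and hyperparameters are just placeholders I haven’t tested):

```python
# Rough sketch: LoRA fine-tuning a small causal LM on a dump of my own comments.
# Assumes `transformers`, `peft` and `datasets` are installed, and that my
# Reddit/Lemmy comments are exported to my_comments.txt, one comment per line.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "facebook/opt-1.3b"  # placeholder: small enough to fit in 8 GB of VRAM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains small adapter matrices instead of the full model, which keeps the
# VRAM needed for training much closer to what inference already requires.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Load and tokenize the comment dump.
data = load_dataset("text", data_files={"train": "my_comments.txt"})
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1, fp16=True),
    train_dataset=data["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

No idea yet whether that exact setup fits my hardware, which is partly why I’m asking about the VRAM and PCIe questions above.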
Does the speed of the connection between the GPU and the CPU really matter in such applications?
Thanks!
I certainly will! I’m just not very good with maths either, and although I know what floating-point numbers are, I’d have to read up on them to make sure I understand your comment.
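If I’ve understood the gist, the main thing is just how many bytes each model weight takes, so a quick back-of-the-envelope (weights only, ignoring activations and anything extra that training needs; the 7B figure is just an example):

```python
# Very rough VRAM estimate for holding the weights of a 7B-parameter model.
params = 7e9
print(params * 2 / 1e9)    # fp16: 2 bytes per weight        -> ~14 GB
print(params * 0.5 / 1e9)  # 4-bit quantized: 0.5 bytes each -> ~3.5 GB
```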
Those are some insane requirements to run models, haha. How long does it take you to train your models on a dataset (in my case, the “dataset” would be my entire Reddit/Lemmy comment history)?