ooli@lemmy.world to Technology@lemmy.world · English · 1 year ago
GPU’s rival? What is Language Processing Unit (LPU) (www.turingpost.com)
15 comments
Scott@sh.itjust.works · 1 year ago
It’s not about their frontend; they are running custom LPUs that can process LLM tokens at 500/sec, which is insanely impressive. For reference, with a max size of 2k tokens, my dual Xeon Silver 4114 CPUs take 2-3 minutes.
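To put that in perspective, here is a quick back-of-the-envelope comparison in Python. The 500 tokens/sec rate and the 2-3 minute CPU figure come from the comment above; the rest is plain arithmetic.

```python
# Rough throughput comparison using the numbers quoted above.
lpu_tokens_per_sec = 500                  # claimed LPU rate
context_tokens = 2000                     # the 2k-token run mentioned

# Dual Xeon Silver 4114: 2-3 minutes for the same 2k tokens
cpu_seconds = (2 * 60, 3 * 60)
cpu_rates = tuple(context_tokens / s for s in cpu_seconds)

lpu_seconds = context_tokens / lpu_tokens_per_sec

print(f"CPU: {cpu_rates[1]:.1f}-{cpu_rates[0]:.1f} tokens/sec")
print(f"LPU: {lpu_tokens_per_sec} tokens/sec ({lpu_seconds:.0f} s for 2k tokens)")
# CPU lands around 11-17 tokens/sec, so the LPU claim is roughly a 30-45x speedup.
```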
Amaltheamannen@lemmy.ml · 1 year ago
Isn’t it those that cost $2000 per 250 MB of memory? Meaning you’d need about 350 of them to load any half-decent model.
Scott@sh.itjust.works · 1 year ago
Not sure how they are doing it, but it was actually $20k, not $2k, per 250 MB of memory on the card. I suspect the models are probably cached in system memory.
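A rough sketch of what those figures imply, for the curious. The 250 MB per card and both quoted prices come from the comments above; the ~70B-parameter fp16 model size is an assumption picked to stand in for a “half decent” model.

```python
# Hypothetical sizing: how many 250 MB cards would a large model need?
card_memory_mb = 250                      # per-card memory, per the thread
model_params = 70e9                       # assumed "half decent" model size
bytes_per_param = 2                       # fp16 weights

model_size_mb = model_params * bytes_per_param / 1e6
cards_needed = model_size_mb / card_memory_mb

for price_per_card in (2_000, 20_000):    # both figures quoted in the thread
    total = cards_needed * price_per_card
    print(f"~{cards_needed:.0f} cards at ${price_per_card:,}: ${total / 1e6:.1f}M")
# A 70B fp16 model (~140 GB) needs ~560 such cards, even more than the
# ~350 guessed above, and costs millions at either price point.
```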
Lojcs@lemm.ee · 1 year ago
No, I got what you meant, but that site is weird if it’s not doing anything on its own.
Finadil@lemmy.world · 1 year ago
Is that with an fp16 model? Don’t be scared to try even a 4-bit quantization; you’d be surprised at how little is lost and how much quicker it is.
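For anyone who wants to try this, here is a minimal sketch of loading a model in 4-bit via Hugging Face transformers with bitsandbytes. The model id is only a placeholder, and exact option names can vary between library versions.

```python
# Minimal 4-bit loading sketch (transformers + bitsandbytes + accelerate).
# Assumes a CUDA GPU; the model id below is just an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # placeholder model

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit weights instead of fp16
    bnb_4bit_quant_type="nf4",             # NormalFloat4, a common default
    bnb_4bit_compute_dtype=torch.float16,  # compute still runs in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant_config, device_map="auto"
)

prompt = tokenizer("An LPU is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**prompt, max_new_tokens=32)[0]))
```

At 4 bits, a 7B model’s weights shrink from roughly 14 GB in fp16 to around 4 GB, which is why quantization makes single-GPU (or even CPU) inference so much more practical.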