ooli@lemmy.world to Technology@lemmy.world · English · 9 months ago
GPU's rival? What is Language Processing Unit (LPU) (www.turingpost.com)
Scott@sh.itjust.works · 9 months ago
I’m just trying to get my hands on some faster hardware. https://groq.com has been able to do some crazy shit with 500 tokens/sec on their LPUs.
Lmaydev · 9 months ago (edited)
That is insanely fast! I figured we’d be getting “AI cards” at some point soon.
Lojcs@lemm.ee · 9 months ago
What kind of a website is that? It’s super slow and doesn’t work without WebAssembly. Do you really need that for a simple interface?
Scott@sh.itjust.works · 9 months ago
It’s not about their frontend; they’re running custom LPUs that can process LLM tokens at 500/sec, which is insanely impressive. For reference, with a max size of 2k tokens, my dual Xeon Silver 4114 procs take 2-3 minutes.
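Rough math on that comparison, as a quick sketch rather than a benchmark; the 150-second figure is just an assumed midpoint of the quoted 2-3 minutes:

```python
# Back-of-envelope throughput comparison. The 150 s figure is an
# assumption: the midpoint of the "2-3 minutes" quoted above.
cpu_tokens = 2_000          # max token count quoted above
cpu_seconds = 150           # midpoint of 2-3 minutes (assumed)
cpu_rate = cpu_tokens / cpu_seconds   # ~13 tokens/sec

lpu_rate = 500              # tokens/sec, as quoted for Groq's LPUs

print(f"CPU: ~{cpu_rate:.0f} tok/s, LPU: {lpu_rate} tok/s, "
      f"speedup: ~{lpu_rate / cpu_rate:.0f}x")
# CPU: ~13 tok/s, LPU: 500 tok/s, speedup: ~38x
```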
Amaltheamannen@lemmy.ml · 9 months ago
Isn’t it those that cost $2,000 per 250 MB of memory? Meaning you’d need about 350 of them to load any half-decent model.
Scott@sh.itjust.works · 9 months ago
Not sure how they’re doing it, but it was actually $20k, not $2k, for 250 MB of memory on the card. I suspect the models are probably cached in system memory.
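For a sense of scale, here’s a rough sketch of the card-count math using the per-card figures quoted in this thread; the model size is an assumption for illustration (a 70B-parameter model at fp16, ~2 bytes per weight):

```python
# Back-of-envelope card count. Per-card figures are the ones
# quoted above; the model size is an assumption for illustration:
# 70B parameters at fp16 ~= 2 bytes/param = ~140 GB of weights.
card_memory_gb = 0.25            # 250 MB per card (quoted above)
card_price_usd = 20_000          # ~$20k per card (quoted above)
model_gb = 70e9 * 2 / 1e9        # assumed 70B-param fp16 model

cards = model_gb / card_memory_gb
print(f"~{cards:.0f} cards, ~${cards * card_price_usd / 1e6:.1f}M")
# ~560 cards, ~$11.2M
```

The exact count obviously depends on model size and precision, which would explain the lower estimate above.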
Lojcs@lemm.ee · 9 months ago
No, I got what you meant, but that site is weird if it’s not doing anything on its own.
Finadil@lemmy.world · 9 months ago
Is that with an fp16 model? Don’t be scared to try even a 4-bit quantization; you’d be surprised at how little is lost and how much quicker it is.
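A quick sketch of why that helps, using a hypothetical 7B-parameter model; the sizes are approximate lower bounds, since real quant formats add a small overhead for scaling factors:

```python
# Approximate weight memory for a hypothetical 7B-parameter model
# at different precisions. These are lower bounds: real quant
# formats add a little overhead for scaling factors.
params = 7e9
for name, bytes_per_weight in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    print(f"{name}: ~{params * bytes_per_weight / 1e9:.1f} GB")
# fp16: ~14.0 GB, int8: ~7.0 GB, 4-bit: ~3.5 GB
```

Since CPU inference is mostly memory-bandwidth bound, reading a quarter of the bytes per token also translates fairly directly into speed.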