Llama 2 and its derivatives, mostly. A simple local UI is available here.
Not as good as ChatGPT 3.5 in my experience. It just kind of falls apart on anything too complex, and is a lot more likely to get things wrong.
I tried it out using the ‘Open-Orca/OpenOrcaxOpenChat-Preview2-13B’ 4-bit 32g model. It’s surprisingly fast to generate — it seems significantly faster than ChatGPT on my 3060 (with ExLlama).
There are also some models tuned specifically to actually answer your requests instead of giving the ‘As an AI language model’ kind of response.
Edit: just tried a newer model and it’s a lot better (dolphin-2.1-mistral-7b).
For the same reason SaaS is popular in general: yes, you could get a VPS, install all the needed software on it, and keep it up to date, or you could pay a company to do all that for you.
With the same efficiency? I’d be interested in an example.
Why is everyone using these crappy SaaS products then?
The weights themselves are private, and retraining takes too long for it to be practical.