I have tried the version available on HuggingChat and it’s almost GPT-4 level for programming tasks, but not so much for general knowledge.
What’s the best overall in your opinion for programming tasks?
I’ve had a pretty flawless experience with the free ChatGPT 3.5 for simple scripts, but I don’t generally stray beyond that.
I’ve had reasonably good results with DeepSeek 6.7B.
It struggles and hallucinates once you get into more niche languages, but it’ll spit out Snake in Python or write file I/O functions pretty well. I’ve used it to read/write huge JSON through a buffer with good results (roughly the kind of thing sketched below).
I haven’t tried GPT-4, but compared to 3.5 it’s pretty great.
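For context, the buffered JSON task I mean looks roughly like this. It’s only a sketch of the pattern (JSON Lines pushed through a large file buffer so the whole thing never sits in memory), not the exact code the model gave me; the function names and buffer size are made up for the example.

```python
import json
from typing import Iterable, Iterator

def read_records(path: str, buffer_size: int = 1 << 20) -> Iterator[dict]:
    """Yield one JSON object per line, pulling data through a 1 MiB read buffer."""
    with open(path, "r", buffering=buffer_size, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

def write_records(path: str, records: Iterable[dict], buffer_size: int = 1 << 20) -> None:
    """Write records as JSON Lines through the same-sized write buffer."""
    with open(path, "w", buffering=buffer_size, encoding="utf-8") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")
```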
I have GPT-4 with Copilot at work, so I mostly use that. It’s pretty good for programming, but you see its limitations when you start relying on it heavily for autocompletions, maybe because the surrounding code isn’t very coherent and GPT gets confused.
Maybe chat tasks are simpler because it writes the whole thing from scratch, though I mostly ask it for limited functionality, e.g. write a function that takes x and returns y, or use a certain ORM to change a value (something like the sketch after this comment).
I would suggest you try both GPT-3.5 and the free Hugging Face Mistral 7B version (you can probably run it on a PC too, it’s not huge) for programming tasks and see for yourself. For general knowledge, though, GPT wins hands down over Mistral 7B.
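To illustrate the “use the ORM to change a value” kind of prompt: this is a minimal sketch of what I typically ask for and get back. SQLAlchemy, the User model, and the column names are just stand-ins for the example, not anything from my actual codebase.

```python
# Hypothetical example of the "change a value via the ORM" task; SQLAlchemy and the
# User model are made up for illustration.
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, nullable=False)

def update_email(session: Session, user_id: int, new_email: str) -> None:
    """Fetch one row by primary key and change a single column."""
    user = session.get(User, user_id)  # None if the id doesn't exist
    if user is None:
        raise ValueError(f"no user with id {user_id}")
    user.email = new_email
    session.commit()
```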
I can’t find miqu-1-70b on HuggingChat. Do I need to enable anything special anywhere?
https://huggingface.co/miqudev/miqu-1-70b
That’s the model, but how do I get it into HuggingChat? https://huggingface.co/chat/
Edit: thanks for the suggestion though. 🙂
Yeah, it seems HuggingChat only has the 7B; that’s the one I was talking about.
Given how good the 7B is, I wouldn’t be surprised if the 70B is better than GPT-4 for programming-related chat.
Do you know if there are any plans to quantize it? I’d love to test it, but my 3090 can’t handle 70B models without quantization, unfortunately.
There are quantized versions on Hugging Face. There’s a Q2 version, but I don’t know how well that performs.
Only quantized versions of the model were leaked. If you see an unquantized version of it, then it’s something that was recreated from these, not the original model. People have also requantized it from GGUF to EXL2, and probably other formats too.
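If anyone wants to try one of those GGUF quants locally on a 24 GB card, something along these lines should work with llama-cpp-python. The filename and the number of offloaded layers are guesses you’d adjust to whichever quant you download and how much VRAM is free; treat it as a sketch, not a tested config.

```python
from llama_cpp import Llama

# Load a quantized GGUF; only part of the 70B's layers fit on a 3090, the rest run on CPU.
llm = Llama(
    model_path="miqu-1-70b.q2_K.gguf",  # hypothetical local filename, adjust to your download
    n_ctx=4096,        # context window; bigger costs more memory
    n_gpu_layers=40,   # how many layers to offload to the GPU; tune to your free VRAM
)

out = llm("Write a Python function that reverses a string.", max_tokens=256)
print(out["choices"][0]["text"])
```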