

Kekw we’ll see
X11 has a shitload of unwanted and unused features that your favorite X11 compositor is actively fighting AGAINST just to render your GUI.
I implore you to pick up the X.Org source code and your favorite X11 shitshow’s source code and realize why Wayland follows the same paradigms that Apple adopted in 2001 and Microsoft in 2006.
Yes, the insane old ways are being phased out for a reason. Sorry that we don’t keep the world in a heavily romanticized version of 2003 forever.


If you’re running Linux, this doesn’t affect you in any way.


Yes, them not saying anything about why they support that hitlerite scum is quite concerning. I think the CEO is just mad he got caught trying to start up a Nazi bar


“You can’t call everything racism!”
looks inside
Blatant Racism
mfw Europeans are hitlerite


I don’t think we should work with scum like DHH and vaxry just because some asshole lib might accuse us of purity tests
If “not working with people who are maniacs who want you dead” is a purity test, I’m dusting off my Inquisition book


Because as much as they’re ridiculed today by the libcucks of OSS, the FSF was a formidable force in software once. At some point in history, literally the only way to avoid paying absolutely insane manufacturer license fees for things like compilers was using GNU tools.
If they put their ass into it, they can pull it off tbh


Ohhhhhhh lmfao you’re right


OnlyOffice sees little to no dev time and is insanely behind LO in terms of development and features; please consider using LO for your own sake
Guys, this comment is wrong, I was thinking of OpenOffice
Wood is dogshit bro, what the fuck you smoking
Well, not really, in this context
One of the absolute best uses for LLMs is generating quick summaries of massive amounts of data. It’s pretty much the only use case where, as long as the model doesn’t overflow its context and become incoherent immediately [1], it is extremely useful.
But nooooo, this is luddite.ml; saying anything good about AI gets you burnt at the stake
Some of y’all would’ve lit the fire under Jan Hus if you lived in the 15th century
[1] This is more of a concern for local models with smaller parameter counts running quantized; for premier models it’s not really an issue.
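(For the curious, the chunk-and-summarize pattern looks roughly like this. Purely a sketch, assuming llama-cpp-python as the wrapper; the GGUF path, chunk size, and prompt wording are placeholders I made up, not anything from the comment.)

```python
# Sketch only: map-reduce summarization so each request stays well under
# the model's context window and never overflows into incoherence.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="./model-Q4_K_M.gguf", n_ctx=8192)  # placeholder path

def summarize(text: str) -> str:
    out = llm.create_chat_completion(
        messages=[{"role": "user",
                   "content": f"Summarize this in a few sentences:\n\n{text}"}],
        max_tokens=256,
    )
    return out["choices"][0]["message"]["content"]

def summarize_long(doc: str, chunk_chars: int = 6000) -> str:
    # Summarize each chunk separately, then summarize the summaries.
    chunks = [doc[i:i + chunk_chars] for i in range(0, len(doc), chunk_chars)]
    partials = [summarize(c) for c in chunks]
    return partials[0] if len(partials) == 1 else summarize("\n".join(partials))
```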
That is different. It’s because you’re interacting with token-based models. There has been new research on feeding byte-level data to LLMs to solve this issue.
The numerical-calculation aspect of LLMs is a separate problem from this.
It would be best to couple an LLM with a tool-calling system for rudimentary numerical calculations. Right now the only way to do that is to cook up a Python script with HF transformers and a finetuned model; I am not aware of any commercial model doing this. (And this is not what Microshit is doing.) Roughly, the loop would look like the sketch below.
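(Rough illustration of what I mean, not anyone’s actual implementation: the hypothetical `generate()` stands in for whatever finetuned model you’d run through HF transformers, and the `CALC(...)` tag format is made up for this sketch.)

```python
import ast, operator, re

# Safe evaluator for basic arithmetic only -- never eval() raw model output.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg}

def calc(expr: str) -> float:
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def answer(prompt: str, generate) -> str:
    # `generate` is a hypothetical callable wrapping your finetuned model.
    reply = generate(prompt)
    # If the model emits CALC(<expr>), do the math for it and ask again.
    m = re.search(r"CALC\((.+?)\)", reply)
    if m:
        result = calc(m.group(1))
        reply = generate(f"{prompt}\nTool result: {m.group(1)} = {result}\nFinal answer:")
    return reply
```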
These were Reddit bridges when the exodus happened


That’s for quanting a model yourself. You can instead (read that as “should”) download an already quantized model. You can find quantized models on the Hugging Face page of your model of choice. (Pro tip: quants by Bartowski, Unsloth, and Mradermacher are high quality.)
And then you just run it.
You can also use KoboldCpp or Open WebUI as friendly front ends for llama.cpp
Also, to answer your question, yes.
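(Something like this, assuming you’ve pip-installed llama-cpp-python and huggingface_hub; the repo and file names below are placeholders, point them at whatever quant you actually picked.)

```python
# Sketch: grab a pre-quantized GGUF from Hugging Face, then run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder repo/filename -- substitute the quant repo you chose.
path = hf_hub_download(repo_id="someuser/SomeModel-GGUF",
                       filename="SomeModel-Q4_K_M.gguf")

llm = Llama(model_path=path, n_ctx=4096)
print(llm("Q: What is llama.cpp?\nA:", max_tokens=128)["choices"][0]["text"])
```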


llama.cpp
The Only Inference Engine You’ll Ever Need™
Cuntshit motherfucking whoresons
Pick up a fuckgun and join the resistance
This is literally, actually a bond villain plot