• 0 Posts
  • 274 Comments
Joined 8 months ago
Cake day: April 7th, 2025



  • vivendito to Linux · GNOME 50 Ends the X11 Era After Decades
    9 points · 29 days ago

    X11 has a shitload of unwanted and unused features that your favorite X11 compositor is actively fighting AGAINST just to render your GUI.

    I implore you to pick up the X.Org source code and your favorite X11 shitshow’s source code, and you’ll realize why Wayland follows the same paradigms that Apple adopted in 2001 and Microsoft in 2006.

  • vivendito to Microblog Memes@lemmy.world · *Permanently Deleted*
    1 point · 4 months ago

    One of the absolute best uses for LLMs is generating quick summaries of massive amounts of data. It is pretty much the only use case where, provided the model doesn’t overflow and become incoherent immediately [1], it is extremely useful.

    But nooooo, this is luddite.ml, where saying anything good about AI gets you burnt at the stake.

    Some of y’all would’ve lit the fire under Jan Hus if you’d lived in the 15th century.

    [1] This is more of a concern for local models with smaller parameter counts running quantized. For premier models it’s not really much of a concern.


  • vivendito to Microblog Memes@lemmy.world · *Permanently Deleted*
    4 points · 4 months ago

    That is different: it happens because you’re interacting with token-based models. There has been new research on feeding byte-level data to LLMs to solve this issue.

    LLMs’ weakness at numerical calculation is a separate issue from this.
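
    To see why tokenization bites here, consider a toy sketch (a hypothetical greedy longest-match segmenter, not any real model’s tokenizer): BPE-style vocabularies merge frequent character runs, so a number gets chopped into arbitrary chunks and the model never sees digits aligned by place value.

    ```python
    def toy_bpe(text, vocab):
        """Greedy longest-match segmentation, a rough stand-in for BPE merges."""
        tokens = []
        i = 0
        while i < len(text):
            # Try the longest piece first (capped at 4 chars for this toy);
            # single characters always match as a fallback.
            for length in range(min(4, len(text) - i), 0, -1):
                piece = text[i:i + length]
                if piece in vocab or length == 1:
                    tokens.append(piece)
                    i += length
                    break
        return tokens

    # A made-up vocabulary: the number 1234567 splits into uneven chunks,
    # so digit-by-digit arithmetic has no consistent representation.
    vocab = {"123", "45", "6789", "20", "25"}
    print(toy_bpe("1234567", vocab))  # ['123', '45', '6', '7']
    ```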

    It would be best to couple an LLM to a tool-calling system for rudimentary numerical calculations. Right now the only way to do that is to cook up a Python script with HF transformers and a finetuned model; I am not aware of any commercial model doing this. (And this is not what Microshit is doing.)
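
    The dispatch side of such a script could look something like this minimal sketch (the tool names, the JSON shape, and the hard-coded "model output" are all hypothetical; a real setup would prompt a finetuned model to emit the tool call instead of computing in-weights):

    ```python
    import json
    import operator

    # Hypothetical tool registry: the model is told these tools exist and
    # emits a JSON call; the actual arithmetic happens in ordinary Python.
    TOOLS = {
        "add": operator.add,
        "sub": operator.sub,
        "mul": operator.mul,
        "div": operator.truediv,
    }

    def dispatch_tool_call(raw: str):
        """Parse a model-emitted JSON tool call and run the real computation."""
        call = json.loads(raw)
        fn = TOOLS[call["name"]]
        return fn(call["a"], call["b"])

    # Stand-in for what the model would emit instead of guessing 137 * 249:
    model_output = '{"name": "mul", "a": 137, "b": 249}'
    print(dispatch_tool_call(model_output))  # 34113
    ```

    The point of the design is that the model only has to produce a well-formed call, which tokenization handles fine; the exact number comes from the tool, not the weights.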