Office space meme:

“If y’all could stop calling an LLM ‘open source’ just because they published the weights… that would be great.”

  • Zikeji · 2 days ago (edited)

    Open source isn’t really applicable to LLMs, IMO.

    There are open weights (the model), the availability of training data, and other nuances.

    They actually went a step further and provided a very thorough breakdown of the training process, which means others could train similar models from scratch with their own training data. HuggingFace seems to be doing just that: https://huggingface.co/blog/open-r1

    Edit: see the comment below by BakedCatboy for a more in-depth explanation and a correction of a misconception I made.

    • BakedCatboy@lemmy.ml · 2 days ago

      It’s worth noting that OpenR1 have themselves said that DeepSeek didn’t release any code for training the models, nor any of the crucial hyperparameters used. So even if you did have suitable training data, you wouldn’t be able to replicate it without re-discovering what they did.

      The OSI’s open source AI definition specifically makes a carve-out that allows models to be considered “open source” without providing the training data. So when it comes to AI, open source is really about providing the code that kicks off training, any checkpoints used, and enough detail about training data curation that a comparable dataset could be compiled to replicate the results.

      • Zikeji · 2 days ago

        Thanks for the correction and clarification! I just assumed from the open-r1 post that they gave everything aside from the training data.