• SabinStargem@lemmy.today
    5 days ago

    Keep in mind, a 122b model (Qwen3.5 family) is high end for consumer machines, but DQX would likely use a much smaller one. Currently, we have Qwen models at 0.8b, 4b, 9b, 27b, 35b, 122b, and 397b. Plus, ‘quanting’ can reduce how much memory a model takes up - at a quality tradeoff, o’course. I am guessing DQX would ship multiple local models and use the player’s hardware metrics to decide which one to deploy.

    When it comes to how much RAM is required, this screenshot from UnSloth roughly covers the current state of things. 4-bit is the sweet spot between quality and size, for now.
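    As a rough sketch of that "pick a model for the hardware" idea: quantized weights take about params × (bits ÷ 8) bytes, plus runtime overhead. The function names, the flat 20% overhead, and the memory figures below are my own guesses for illustration, not anything DQX or UnSloth actually uses.

    ```python
    # Hypothetical sketch: rule-of-thumb memory math for choosing a local model.
    # Weights take roughly params * (bits / 8) bytes; the 20% overhead for
    # KV cache and runtime is a guess, not a measured number.

    SIZES_B = [0.8, 4, 9, 27, 35, 122]  # model lineup from the comment, in billions

    def quant_footprint_gb(params_b: float, bits: int = 4, overhead: float = 1.2) -> float:
        """Approximate RAM/VRAM needed to run a model at a given quant."""
        return params_b * (bits / 8) * overhead

    def pick_model(available_gb: float, bits: int = 4) -> float | None:
        """Largest model from the lineup that fits in the given memory."""
        fitting = [s for s in SIZES_B if quant_footprint_gb(s, bits) <= available_gb]
        return max(fitting, default=None)

    print(pick_model(24))  # -> 35 (a 35b at 4-bit needs ~21 GB, the 122b far more)
    ```

    On a 16 GB machine the same math lands on the 9b, since the 27b at 4-bit already wants ~16.2 GB - which is why a game would plausibly carry several sizes.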

    Alternatively, the Chatty Slime could rely on cloud AI. Depending on Square’s strategy, that could be a freebie or a paid service. If the Chatty Slime gave options to the player - say, trading a potion for a stat seed, or responding to a quiz - Square could sell player behavior data.

    …Anyhow, my room has a mini-split AC. One of the best purchases in my life: my room lacked insulation in the first place, so it becomes toasty during summer. The side effect is being able to just run my GPU and not become a human slushy.

    • Cethin@lemmy.zip
      5 days ago

      Is your comment written by AI? It seems weird, and we already went over most of what it says.

      Also, DQ runs on Nintendo systems. That makes me certain it’s cloud based.

      • SabinStargem@lemmy.today
        5 days ago

        No, I didn’t use AI for that. Humans tend to come in many flavors.

        The previous post assumed there are onlookers who don’t have experience with AI, and thus wouldn’t be aware that they can run it on local hardware.