• bloup@lemmy.sdf.org · 1 day ago

    In my experience, every time there’s a new model I’m pretty astonished by its capabilities for mathematics and programming. But every single time it seems to rapidly regress, to the point of being worse than before the new model was released. I’m guessing there’s some kind of loss-leader thing going on, where they support the model with a completely unsustainable level of compute to hook you, and then throttle it somehow to improve the economics for the business.

    • sith@lemmy.zip (OP) · edited · 1 day ago

      It’s for sure not impossible. But my guess is that it’s because you learn the new model, and your behavior and expectations change. It’s a known phenomenon, and I do believe the developers/companies when they say they didn’t change anything. It’s also quite easy to test this hypothesis with locally hosted LLMs. There are probably a few papers covering this already.
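      A minimal sketch of that kind of test, assuming a local Ollama server on its default port (the model name and prompt here are placeholders): with the model weights, prompt, temperature, and seed all pinned, the same request should give the same answer on any day, so any drift you perceive is on your side, not the model’s.

      ```python
      # Hypothetical sketch: deterministic requests against a locally hosted LLM.
      # Assumes an Ollama server at its default address (localhost:11434);
      # model name and prompt are illustrative placeholders.
      import json
      import urllib.request

      OLLAMA_URL = "http://localhost:11434/api/generate"

      def build_request(model: str, prompt: str, seed: int = 0) -> dict:
          """Build a generation request pinned for reproducibility:
          temperature 0 and a fixed seed, non-streaming."""
          return {
              "model": model,
              "prompt": prompt,
              "stream": False,
              "options": {"temperature": 0, "seed": seed},
          }

      def generate(model: str, prompt: str) -> str:
          """Send the pinned request and return the model's text response."""
          req = urllib.request.Request(
              OLLAMA_URL,
              data=json.dumps(build_request(model, prompt)).encode(),
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              return json.load(resp)["response"]

      # Example (requires a running Ollama server) -- run this periodically
      # and diff the saved answers over weeks:
      # print(generate("mistral", "Factor the polynomial x^2 - 5x + 6."))
      ```

      Run the same script a month apart and diff the outputs: a pinned local model can’t be silently swapped or throttled, which is exactly what makes it a useful control for the “models degrade after launch” hypothesis.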

      Though it does happen that you’re downgraded to a smaller model when using the free versions of OpenAI, Anthropic and others. But in my experience this information is always explicit in the UI. Still, it’s probably quite easy to miss.

      Also, I’m almost exclusively using the free version of Mistral Large (Le Chat), and I’ve never experienced regression. But Mistral also never downgrades; it just becomes very slow.