• amzd@kbin.social · 9 months ago

    If you have a GPU in your PC, it's almost always faster to just run your own LLM locally, and you won't have this issue. Search for Ollama.
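
    For example (a minimal sketch, assuming Ollama is installed, a model such as llama3 has already been pulled, and the default local API on port 11434 is running), you can query it from Python like this:

    ```python
    # Minimal sketch: query a locally running Ollama instance over its HTTP API.
    # Assumes `ollama pull llama3` has been run and the server listens on the
    # default http://localhost:11434.
    import json
    import urllib.request

    def ask_local_llm(prompt: str, model: str = "llama3") -> str:
        """Send one prompt to the local Ollama API and return the full reply."""
        payload = json.dumps({
            "model": model,      # any model already pulled with `ollama pull`
            "prompt": prompt,
            "stream": False,     # request a single complete JSON response
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        print(ask_local_llm("Why run an LLM locally instead of in the cloud?"))
    ```

    Everything stays on your own machine, so there's no rate limiting or account issues to worry about.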