If you just want to run LLMs quickly on your computer from the command line, this is about as simple as it gets. Ollama provides an easy CLI for generating text, and there’s also a Raycast extension for more powerful usage.
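
For reference, a minimal Ollama session looks something like this (the model name is just an example — any model from the Ollama library works):

```shell
# Download a model from the Ollama library (one-time download)
ollama pull mistral

# Generate text from a one-off prompt
ollama run mistral "Explain what an LLM is in one sentence."

# Or start an interactive chat session
ollama run mistral
```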

  • @popcar2
    8 months ago

    There’s also GPT4All, which has the same concept but comes with a convenient GUI rather than running on the command line. I had some fun with Mistral-7B, but honestly the weaker models are too dumb to be useful.

    • silasOP
      8 months ago

      Oh nice! Thanks for sharing

    • silasOP
      8 months ago

      Can’t speak to that much because I haven’t reviewed the code myself, but it’s open-source and everything runs locally on your machine without network requests.

  • @eluvatar
    8 months ago

    Anyone try running this on WSL?

  • @varsock
    1 month ago

    When running models locally, I presume the models are trained and the weights and such are exported as a “model” — for example, Meta’s LLaMA model.

    Do these models get updated, with new versions released? I don’t quite understand.