• meme_historian@lemmy.dbzer0.comOP
    1 day ago

    Don’t get me wrong though… throwing an LLM at it would be a lot easier and faster. It’s just a mind-boggling use of resources for a task that could probably be done more efficiently :D

    Setting this up with Apache Solr and a suitable search frontend runs a high risk of becoming an abandoned side project itself^^
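
    To be fair, the Solr side is small once a core is populated; the frontend is the part that rots. A rough sketch of the query half in Python against Solr’s standard JSON API (the “readmes” core name and fields are made up here, localhost:8983 is just Solr’s default port):

        import requests

        # Hypothetical core named "readmes"; Solr's default select handler.
        SOLR_URL = "http://localhost:8983/solr/readmes/select"

        params = {
            "q": 'description:"abandoned project"',  # example query
            "rows": 10,
            "wt": "json",
        }

        resp = requests.get(SOLR_URL, params=params, timeout=10)
        resp.raise_for_status()

        # Standard Solr response shape: response.docs is the hit list.
        for doc in resp.json()["response"]["docs"]:
            print(doc.get("id"), doc.get("title"))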

    • thickertoofan@lemm.ee
      1 day ago

      Yeah, an LLM seems like the go-to solution, and the best one. As for resources, we could get away with barely-smart models that can still generate coherent sentences, i.e. 0.5B–3B models offloaded to CPU-only inference. Something like the sketch below would do it.
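
      Not a definitive setup, just a sketch of CPU-only inference with Hugging Face transformers; the model and the summarization prompt are only examples, any small instruct model would work:

          from transformers import AutoModelForCausalLM, AutoTokenizer

          # Example ~0.5B instruct model; loads on CPU by default.
          model_id = "Qwen/Qwen2.5-0.5B-Instruct"
          tok = AutoTokenizer.from_pretrained(model_id)
          model = AutoModelForCausalLM.from_pretrained(model_id)

          # Hypothetical task: summarize a project README.
          messages = [{"role": "user",
                       "content": "Summarize this README in one sentence: ..."}]
          inputs = tok.apply_chat_template(
              messages, add_generation_prompt=True, return_tensors="pt"
          )

          out = model.generate(inputs, max_new_tokens=60)
          # Decode only the newly generated tokens.
          print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))

      At that size, generation is slow but perfectly workable for a batch job that only has to run over each README once.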