• @Phoenix
    2 • 11 months ago

    True! Interfacing is also a lot of work, but I think that starts straying away from AI toward “how do we interact with it?” And let’s be real, plugging into OAI’s or Anthropic’s API is not that hard.
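    For reference, the “plugging in” part can be as small as this. A minimal sketch using the official openai Python SDK; the model name and prompt are placeholders, not anything from a real project:

    ```python
    # Minimal sketch of calling OpenAI's chat completions endpoint.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": "Say hi to the chat."}],
    )
    print(response.choices[0].message.content)
    ```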

    Does remind me of a very interesting implementation I saw once, though: a VRChat bot powered by GPT-3.5 with TTS that used sentiment classification to display the appropriate emotion for the generated text. You could interact with it directly by talking to it. Very cool. Also very uncanny, truth be told.
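    The sentiment-to-emotion step is simpler than it sounds. A hypothetical sketch using a Hugging Face pipeline and an assumed label-to-expression mapping; not the actual bot’s code:

    ```python
    # Hypothetical sketch: classify the bot's generated reply, then
    # map the sentiment label to an avatar expression. The default
    # sentiment-analysis model returns POSITIVE/NEGATIVE labels.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")

    EXPRESSIONS = {"POSITIVE": "smile", "NEGATIVE": "frown"}  # assumed mapping

    def expression_for(reply: str) -> str:
        result = classifier(reply)[0]  # e.g. {"label": "POSITIVE", "score": 0.99}
        return EXPRESSIONS.get(result["label"], "neutral")

    print(expression_for("I'm so glad you stopped by!"))  # -> "smile"
    ```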

    All that is still in the realm of “fucking around” though.

    • @CeeBee
      1 • 11 months ago

      I’m coming at it from the standpoint of integrating an AI model into a suite of applications. Which I have done. I’ve even trained a custom version of a model to fit our needs.

      Plugging into an API is more or less trivial (as you said), but that’s only a single aspect of an application. And that assumes you’re using someone else’s API rather than hosting and running the model yourself.
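      Self-hosting changes the shape of the work entirely. A hedged sketch of the contrast, doing local inference with Hugging Face transformers; the model name is an arbitrary small example, not what we actually ran:

      ```python
      # Sketch of running a model yourself instead of calling a hosted API.
      # "gpt2" is just a small example checkpoint, not a production choice.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      inputs = tokenizer("Hello, world", return_tensors="pt")
      outputs = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
      ```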

      • @Phoenix
        1 • 11 months ago

        You can make it as complicated as you want, of course.

        Out of curiosity, what use case did you find for it? I’m always interested to see how AI is actually applied in real settings.

        • @CeeBee
          1 • 11 months ago

          We weren’t using LLMs, but object detection models.

          We were doing facial recognition, patron counting, firearm detection, etc.
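          Patron counting, for instance, boils down to running a person detector on each frame and counting the hits. A rough sketch with Ultralytics YOLOv8 as a stand-in; the model file and image path are placeholders, not our actual stack:

          ```python
          # Hedged sketch of patron counting with an off-the-shelf detector.
          from ultralytics import YOLO

          model = YOLO("yolov8n.pt")  # pretrained on COCO; class 0 is "person"

          results = model("lobby_camera_frame.jpg")  # placeholder frame
          person_count = sum(1 for box in results[0].boxes if int(box.cls) == 0)
          print(f"Patrons in frame: {person_count}")
          ```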