Slow June, people voting with their feet amid this AI craze, or something else?

  • seal_of_approval@sh.itjust.works · 1 year ago

    If you don’t mind me asking, does your tool programmatically do the “whittling down” process by talking to ChatGPT behind the scenes, or does the user still talk to it directly? The former seems like a powerful technique, though tricky to pull off in practice, so I’m curious if anyone has managed it.

    • american_defector@lemmy.world · 1 year ago (edited)

      Don’t mind at all! Yeah, it does a ton of the work behind the scenes. I essentially have a prompt I spent quite a bit of time iterating on, and whatever the user types gets bundled in with that prompt bootstrap before it’s sent off. So it considerably reduces the work for the user and dials the results in.
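
      Roughly, the flow is something like the sketch below (the prompt text, function names, and model are placeholders for illustration, not my actual prompt):

      ```python
      # Illustrative sketch: the user's raw text gets wrapped in a pre-written
      # "bootstrap" prompt before it is ever sent to the model.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      PROMPT_BOOTSTRAP = (
          "You are a content assistant. Follow these rules exactly:\n"
          "1. Answer only with the requested content, no preamble.\n"
          "2. Keep the tone neutral and concise.\n"
      )

      def ask(user_text: str) -> str:
          """Bundle the user's input with the carefully iterated instructions."""
          response = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[
                  {"role": "system", "content": PROMPT_BOOTSTRAP},
                  {"role": "user", "content": user_text},
              ],
          )
          return response.choices[0].message.content
      ```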

      Edit: adding some more context/opinions.

      I think the mistake a lot of tools make is not spending enough time shaping their instructions for the AI. Sure, you can offload a lot of the work to it, but you have to write your own guard rails and instructions. You can tell it things much like you would a human, and it will sometimes even fill in the gaps.

      For example, I asked it to give me a data structure back that included an optional “title”. I found that if you left the title blank, ChatGPT took it upon itself to generate a title for you based on the content it wrote.

      A lot of the things I got it to do took time and a ton of test iterations. I was even able to give it a list of exactly how it should structure the content it gave back. Things I would otherwise do on the programming side, I could simply instruct ChatGPT to handle instead.
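
      As a made-up example of what those structural instructions can look like (the key names and rules here are invented, not the real ones):

      ```python
      # Illustrative only: a guard-railed prompt that pins down the exact output
      # structure, including an optional "title" the model fills in if left blank.
      import json
      from openai import OpenAI

      client = OpenAI()

      STRUCTURE_RULES = (
          "Return ONLY a JSON object with these keys:\n"
          '  "title": string or null (if null, invent a short title from the body)\n'
          '  "body": string, two to three paragraphs\n'
          '  "tags": array of one to five lowercase strings\n'
          "Do not wrap the JSON in markdown fences or add commentary."
      )

      def generate(topic: str, title: str | None = None) -> dict:
          """Ask for content in an exact structure; the model invents a title if none is given."""
          response = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[
                  {"role": "system", "content": STRUCTURE_RULES},
                  {"role": "user", "content": f"Topic: {topic}\nTitle: {title or 'null'}"},
              ],
          )
          return json.loads(response.choices[0].message.content)
      ```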

      • seal_of_approval@sh.itjust.works · 1 year ago

        Ah, interesting. I’ve made my own library for creating callable “prompt functions” that prompt the model and validate the JSON outputs, which ensures type safety and easy integration with normal code.
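
        The core idea is roughly the sketch below (not my library’s actual API; pydantic and the OpenAI SDK are just one way to do it):

        ```python
        # Rough sketch of a "prompt function": a wrapper that prompts the model,
        # parses the JSON reply, and validates it against a declared schema so
        # the rest of the code can treat the result as a typed object.
        import json
        from openai import OpenAI
        from pydantic import BaseModel, ValidationError

        client = OpenAI()

        class Summary(BaseModel):
            headline: str
            bullet_points: list[str]

        def prompt_function(schema: type[BaseModel], instructions: str):
            """Build a callable that prompts the model and validates its JSON reply."""
            def call(user_text: str):
                response = client.chat.completions.create(
                    model="gpt-4o-mini",
                    messages=[
                        {
                            "role": "system",
                            "content": instructions
                            + "\nRespond only with JSON matching this schema: "
                            + json.dumps(schema.model_json_schema()),
                        },
                        {"role": "user", "content": user_text},
                    ],
                )
                raw = response.choices[0].message.content
                try:
                    return schema.model_validate_json(raw)
                except ValidationError:
                    # A real library might retry or feed the error back to the model.
                    raise
            return call

        summarize = prompt_function(Summary, "Summarize the given text.")
        ```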

        Lately, I’ve shifted more towards transforming ChatGPT’s outputs. By orchestrating multiple prompts and adding human influence, I can obtain responses that ChatGPT alone likely wouldn’t have come up with. That said, this has to be balanced against giving it the freedom to pursue a different thought process.
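
        A stripped-down sketch of that kind of orchestration (illustrative only; the prompts and the feedback step are made up):

        ```python
        # Chain two prompts: the first call drafts freely, a human (or heuristic)
        # nudges the direction, and a second call transforms the draft.
        from openai import OpenAI

        client = OpenAI()

        def chat(system: str, user: str) -> str:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system", "content": system},
                    {"role": "user", "content": user},
                ],
            )
            return response.choices[0].message.content

        def draft_then_revise(topic: str) -> str:
            # First prompt: let the model run with its own thought process.
            draft = chat("Write a rough first draft on the given topic.", topic)
            # Human influence step: in a real tool this would be feedback
            # collected from the user rather than a hard-coded note.
            feedback = "Tighten the opening and add one concrete example."
            # Second prompt: transform the first output under that guidance.
            return chat(
                "Revise the draft according to the feedback. Keep the original voice.",
                f"Draft:\n{draft}\n\nFeedback:\n{feedback}",
            )
        ```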