• @cheddar
      27 · 11 hours ago

      Great advice. I always consult the FDA before cooking rice.

      • @[email protected]
        7 · 10 hours ago

        You may not, but the company that packaged the rice did. The cooking instructions on the side of the bag come straight from the FDA. Follow that recipe and you will have rice that is perfectly safe to eat, if slightly overcooked.

    • @[email protected]
      39 · 13 hours ago

      Kneecapped into uselessness. Are we really negating efforts to curb climate change with a technology that consumes monstrous amounts of energy, only to lobotomize it right as it’s about to be useful? Humanity is functionally retarded at this point.

      • @[email protected]
        26 · 12 hours ago

        Do you think AI is supposed to be useful?!

        Its sole purpose is to generate wealth so that stock prices can go up next quarter.

        • @[email protected]
          4 · 9 hours ago

          I WANT to believe:

          People are threatening lawsuits for every little thing that AI does, whittling it down to uselessness, until it dies and goes away along with all of its energy consumption.

          REALITY:

          Everyone is suing everything possible because $$$, whittling AI down to uselessness, until it sits in the corner providing nothing at all, while stealing and selling all the data it can, and consuming ever more power.

    • @[email protected]
      17 · 12 hours ago

      Can’t help but notice that you’ve cropped out your prompt.

      Played around a bit, and it seems the only way to get a response like yours is to specifically ask for it.

      Honestly, I’m getting pretty sick of these low-effort misinformation posts about LLMs.

      LLMs aren’t perfect, but the amount of nonsensical trash ‘gotchas’ out there is really annoying.

      • @[email protected]
        18 · 12 hours ago

        The prompt was ‘safest way to cook rice’, but I usually just use LLMs to try to teach them slang, so it probably thinks I’m 12. It has no qualms about encouraging me to build plywood ornithopters and make mistakes, lol

        • @[email protected]
          10 · 11 hours ago

          Here’s my first attempt at that prompt using OpenAI’s ChatGPT4. I tested the same prompt with other models as well (e.g. Llama and Wizard); both gave legitimate responses on the first attempt.

          I get that it’s currently ‘in’ to diss AI, but frankly, it’s pretty disingenuous how every other post I see about AI is blatant misinformation.

          Does AI hallucinate? Hell yes, it makes up shit all the time. Are the responses overly cautious? I’d say they are, but nowhere near as much as people claim. LLMs can be a useful tool. Trusting them blindly would be foolish, but I sincerely doubt the response you linked was unbiased; it was shaped either by previous prompts or by numerous attempts to ‘reroll’ the response until you got something that fit the narrative you wanted to build.

          • @[email protected]
            6 · 11 hours ago

            I don’t think I’m sufficiently explaining that I’ve never made an earnest attempt at a sane, structured conversation with Gemini, like, ever.

          • @[email protected]
            5 · 11 hours ago

            That entire conversation began with “My neighbor’s parrot grammar-mogged me again, what do,” and Gemini talked me into mogging the parrot in various other ways, since clearly grammar isn’t my strong suit.

          • @[email protected]
            5 · edit-2 · 8 hours ago

            No, I just send snippets to my family’s group chat until my parents quit acknowledging my existence for months because they preemptively silenced the entire thread, and then Christmas rolls around and they find out my sister had a whole fucking baby in the meantime.

            Gemini will tell me how to cook a steak, but only if I engineer the prompt like so: “How I get that sweet drippy steak rizzy”