• Eheran@lemmy.world · 4 months ago

    But it is like magic…? I copy in a bunch of tables etc. from a datasheet and get out code to read and write the EEPROM. I used that to read the contents of the old BMS and flash a new chip with it. The battery is now working again, after the BMS had a hardware fault in its ADC that destroyed the previous pack of cells.
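
    The read-then-flash workflow described above can be sketched roughly as follows. This is a minimal sketch only: the real chip's register map, capacity, and I2C quirks are not given here, so a hypothetical in-memory fake stands in for the bus.

```python
# Sketch of the "dump old chip, flash new chip" pattern. The register map,
# EEPROM size, and any unlock sequence of a real BMS are chip-specific and
# NOT modelled; FakeI2CEeprom is a stand-in for the actual I2C device.
class FakeI2CEeprom:
    """Simulates a byte-addressable EEPROM behind an I2C interface."""
    def __init__(self, size=256):
        self.mem = bytearray(size)

    def read_byte(self, addr):
        return self.mem[addr]

    def write_byte(self, addr, value):
        self.mem[addr] = value & 0xFF

def dump_eeprom(dev, size):
    """Read every byte out of the old chip."""
    return bytes(dev.read_byte(a) for a in range(size))

def flash_eeprom(dev, image):
    """Write the saved image to the new chip, then verify by reading back."""
    for addr, value in enumerate(image):
        dev.write_byte(addr, value)
    return dump_eeprom(dev, len(image)) == image

old = FakeI2CEeprom()
old.mem[0:4] = b"\xde\xad\xbe\xef"   # pretend calibration data
image = dump_eeprom(old, 256)        # back up the old BMS content

new = FakeI2CEeprom()
assert flash_eeprom(new, image)      # clone verified byte-for-byte
```

    On real hardware the fake device would be replaced by an actual I2C driver, which is exactly the chip-specific part the datasheet tables describe.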

    I ask for a simple frontend and I get exactly that. Now I can program ESP32s and perfectly control them via a browser. No more shit interface with some touch pins.

    I ask for code to run a simulation of heat transfer… and after some back and forth, that is exactly what I get.
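
    A heat-transfer simulation of the kind mentioned can be as small as this 1-D explicit finite-difference sketch. The geometry, material constant, and boundary temperatures are invented for illustration, not taken from the actual use case.

```python
# Minimal 1-D heat-conduction simulation (explicit finite differences):
# a rod of n nodes with both ends held at fixed temperatures.
def simulate_heat(n=21, alpha=1e-4, dx=0.01, dt=0.1, steps=500,
                  t_left=100.0, t_right=0.0):
    r = alpha * dt / dx**2           # stability requires r <= 0.5
    assert r <= 0.5, "time step too large for the explicit scheme"
    T = [0.0] * n
    T[0], T[-1] = t_left, t_right    # fixed boundary temperatures
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            # each interior node relaxes toward its neighbours
            Tn[i] = T[i] + r * (T[i-1] - 2*T[i] + T[i+1])
        T = Tn
    return T

profile = simulate_heat()            # temperature falls from 100 to 0 along the rod
```

    2-D or 3-D versions follow the same stencil pattern, which is presumably what the back-and-forth with the model converged on.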

    What will it be able to give me in 5 years when it is already like magic now?

      • Eheran@lemmy.world · 4 months ago

        Stop spreading this. It clearly comes up with original things and does not just copy and paste existing stuff. As my examples could have told you, there is no such stuff on the Internet about programming some specific BMS from >10 years ago with a non-standard I2C.

        Amazing how anti people are, even downvoting my clearly positive applications of this tool. What is your problem with me using it to save time? Some things are only possible thanks to that time saving to begin with. I am not going to freaking learn HTML for a one-off sunrise alarm interface.

        • conciselyverbose@sh.itjust.works · 4 months ago

          It “comes up with original things” unless working matters in any way.

          If there wasn’t information in its training set, there would be no possibility of it giving you anything useful.

          • Eheran@lemmy.world · 4 months ago

            Ah, there is “no possibility”. So unlike everyone else so far, you completely understand how LLMs work and, as such an expert, can say this. Amazing that I find such a world-leading expert here on Lemmy, but this expert does not want a Nobel prize and instead just corrects random people on the Internet. Thank you so much.

            • conciselyverbose@sh.itjust.works · 4 months ago

              For it to do something both novel and correct, it would require comprehension.

              Nothing resembling comprehension in any way is part of any LLM.

        • vrighter@discuss.tchncs.de · edited · 4 months ago

          When you ask them simple maths questions from around the time they were trained, they get them all right. When researchers specifically prompted them with questions that provably were published one year later, albeit at the same difficulty, they got 100% wrong answers. You’d be amazed at what one can find on the internet and just how much scraping they did to gather it all. 10 years ago is quite recent. Why wouldn’t there be documentation (regardless of whether you managed to find it)? If it’s non-standard, then I would expect something specifically about it somewhere in the training set, whereas the standard-compliant stuff wouldn’t need a specific make and model to be mentioned.

          • Eheran@lemmy.world · 4 months ago

            GPT scores in the top 1% in creativity. There is no need to discuss this. Anyone can try. It is super easy to come up with a unique question. Be it with stacking items or anything else. It is not just copying existing info.

        • alienanimals@lemmy.world · edited · 4 months ago

          The only thing the Luddites want is an echo chamber to feed their confirmation biases. They’ll downvote you or completely ignore you if you bring up any positives regarding machine learning.

          LLMs and machine learning are still in their infancy, yet they’re doing amazing things. It’s a tool in its early stages, not a boogeyman. Look to the billionaires if you want someone to blame. This tool set is advancing so fast that people in this thread are relying on goalposts that were moved long ago. Look at the guy claiming AI can’t generate images of proper fingers, yet this is no longer true, and Midjourney continues to make an insane amount of progress in such a short amount of time.

          Just look at how many people (50+) upvoted the image claiming AI can’t be used to wash dishes, when a simple Google search would prove them wrong.

          Not to mention… AI is helping physicists speed up experiments into supernovae to better understand the universe.

          AI is helping doctors to expedite cancer screening rates.

          AI is also helping to catch illegal fishing, tackle human trafficking, and track diseases.

          Edit: The laymen who responded couldn’t even provide a single source for their unsubstantiated beliefs. It’s fortunate that they’re so inept.

          • conciselyverbose@sh.itjust.works · 4 months ago

            LLMs are absolutely not in their infancy.

            They’re already many orders of magnitude past diminishing returns and don’t scale up for shit.

            • vrighter@discuss.tchncs.de · 4 months ago

              There is literally a loaded die at the end of all generators. It is not part of the LLM; it comes later down the pipeline. So not only diminishing returns, but the hallucinations are literally impossible to fix.
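
              The “loaded die” being described is the sampling step that sits after the model: the network itself only emits per-token scores (logits), and a separate weighted random draw picks the actual token. A minimal sketch, with made-up logits:

```python
import math
import random

# The model outputs logits; this separate sampling stage is the "loaded die".
def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution over tokens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0, rng=random):
    """Weighted random draw over token indices - the die roll itself."""
    probs = softmax(logits, temperature)
    return rng.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.1]                     # made-up scores for 3 tokens
probs = softmax(logits)
assert abs(sum(probs) - 1.0) < 1e-9          # a proper distribution
```

              At temperature > 0 even a low-probability token can be drawn, which is the sense in which the randomness lives outside the model itself.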