• Lmaydev · 1 year ago

    Or computers decades before that.

    Many of these advances are incredibly recent.

    And many of the things we use day to day are AI-powered without people even realising.

      • Aceticon@lemmy.world · 1 year ago

        Automated mail sorting has been using AI to read post codes from envelopes for decades; only back then, pre-hype, it was just called Neural Networks.

        That tech is almost 3 decades old.

          • Aceticon@lemmy.world · 1 year ago

            At the time I learned this at Uni (back in the early 90s) it was already NNs, not algorithms.

            (This was maybe a decade before OCR became widespread)

            In fact, a coursework project I did there was recognition of handwritten numbers with a neural network. The thing was amazingly good: our implementation actually had a bug, and it still managed to be almost 90% correct on a test data set, so it somehow mostly worked its way around the bug. And it was a small NN with no need for massive training sets (which is the main difference between Large Language Models and more run-of-the-mill neural networks), this at a time when algorithmic number and character recognition was considered a very difficult problem.
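
            For a sense of how small such a digit-recognition network can be, here's a rough present-day sketch. It uses scikit-learn's bundled 8x8 handwritten-digit images as a stand-in (nothing to do with the original coursework), so treat it as an illustration of the idea rather than a reconstruction:

```python
# A small multilayer-perceptron classifier recognising handwritten digits,
# roughly in the spirit of the coursework described above.
# Uses scikit-learn's bundled 8x8 digit images (about 1,800 samples),
# i.e. no massive training set is needed.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 greyscale digits, flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0, digits.target, test_size=0.25, random_state=0
)

# One small hidden layer is typically enough to get well above 90% accuracy here.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```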

            Back then Neural Networks (and other stuff like Genetic Algorithms) were all pretty new, and using them in automated mail sorting was recent and not yet widespread.

            Nowadays you have it doing stuff like face recognition, built into phones for phone unlocking…

      • TWeaK@lemm.ee · 1 year ago

        The key fact here is that it’s not “AI” as conventionally thought of in all the sci-fi media we’ve consumed over our lifetimes, but AI in the form of a product that tech companies of the day are marketing. It’s really just a complicated algorithm based on an expansive dataset, rather than something that “thinks”. It can’t come up with new solutions, only re-use previous ones; it wouldn’t be able to take one solution for one thing and apply it to a different problem. It still needs people to steer it in the right direction, and to verify that its results are even accurate. However, AI is now probably better than people at identifying previous problems and remembering the solution.

        So, while you could say that lots of things are “powered by AI”, you can just as easily say that we don’t have any real form of AI just yet.

        • El Barto@lemmy.world · 1 year ago

          Oh, but those pattern recognition examples are about machine learning, right? Which I guess is a form of AI.

          • Lmaydev · 1 year ago

            They are 100% AI

            Neural networks derive from the perceptron, which is one of the earliest forms of AI.
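
            For reference, a perceptron is just a single weighted neuron updated by a simple rule. A minimal sketch, here learning the logical AND function as a toy example:

```python
# A minimal perceptron (Rosenblatt, late 1950s): a single artificial
# neuron with a step activation, trained with the perceptron learning rule.
# Toy example: learning the logical AND function.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                      # AND targets

w = np.zeros(2)  # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        prediction = int(np.dot(w, xi) + b > 0)  # step activation
        error = target - prediction
        w += lr * error * xi                     # perceptron update rule
        b += lr * error

print(w, b)
print([int(np.dot(w, xi) + b > 0) for xi in X])  # -> [0, 0, 0, 1]
```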

          • TWeaK@lemm.ee · 1 year ago

            Perhaps, but at best it’s still a very basic form of AI, and maybe shouldn’t even be called AI. Before things like ChatGPT, the term “AI” meant a full-blown intelligence that could pass a Turing test, and a Turing test is meant to prove actual artificial thought akin to the level of human thought - something beyond following mere pre-programmed instructions. Machine learning doesn’t really learn anything; it’s just an algorithm that repeatedly measures and then iterates to achieve an ideal set of values for desired variables. It’s very clever, but it doesn’t really think.
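
            To make that “measures and then iterates” description concrete, here's a toy sketch of the kind of loop involved (plain gradient descent fitting a line to some made-up points; purely illustrative, not any particular product's code):

```python
# Toy illustration of "measure, then iterate towards ideal values":
# gradient descent fitting y = w*x + b to a handful of made-up points.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (x, y) pairs

w, b = 0.0, 0.0  # the "desired variables"
lr = 0.01        # step size

for step in range(2000):
    # "Measure": how wrong are the current values of w and b, on average?
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # "Iterate": nudge the variables towards less error.
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # ends up with w close to 2 and b close to 0 for this data
```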

            • El Barto@lemmy.world · 1 year ago

              I have to disagree with you on the machine learning definition. Sure, the machine doesn’t think in those circumstances, but it’s definitely learning, if we go by your description of what it does.

              Learning is a broad concept, sure. But say a kid is learning to draw apples, and later manages to draw apples without help; we could say that the kid achieved “that ideal set of values.”

              • TWeaK@lemm.ee · 1 year ago

                Machine learning is a simpler type of AI than an LLM like ChatGPT or an AI image generator. LLMs incorporate machine learning.

                In terms of learning to draw something, after a child learns to draw an apple they will reliably draw an apple every time. If AI “learns” to draw an apple it tends to come up with something subtly unrealistic, e.g. the apple might have multiple stalks. It fits the parameters it’s learned about apples, parameters which were prescribed by its programming, but it hasn’t truly understood what an apple is. Furthermore, if you applied the parameters it learned about apples to something else, it might fail to understand it altogether.

                A human being can think and interconnect its thoughts much more intricately; we go beyond our basic programming and often apply knowledge learned in one area to something completely different. Our understanding of things is much more expansive than AI’s. AI currently has the basic building blocks of understanding, in that it can record and recall knowledge, but it lacks the full range of interconnections between different pieces and types of knowledge that human beings develop.

                • El Barto@lemmy.world · 1 year ago

                  Thanks. I understood all that. But my point is that machine learning is still learning, just like machine walking is still walking. Can a human being be much better at walking than a machine? Sure. But that doesn’t mean that the machine isn’t walking.

                  Regardless, I appreciate your comment. Interesting discussion.

            • Lmaydev · 1 year ago

              It literally never meant that.

              The Turing test is designed to see how good chat bots are at pretending to be human. It doesn’t measure intelligence in any way. See the Chinese room experiment.

              Machine learning is another name for neural networks, which are some of the original AI systems.

              Artificial intelligence is a field in academia.

        • Lmaydev · 1 year ago

          You absolutely can’t say that.

          Artificial intelligence is a field that has existed since the 50s.

          It also isn’t based on a dataset in many cases. For instance, pathfinding algorithms don’t require a dataset.

          Neural networks (which are what you seem to be referring to) are trained on a dataset, but once trained they can indeed come up with new results. These have existed in primitive forms (see the perceptron) since the late 50s.
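
          As a concrete example of dataset-free AI, a classic breadth-first-search pathfinder only needs the problem definition itself (the grid below is a made-up toy map):

```python
# Classic "good old-fashioned AI" search: breadth-first pathfinding on a
# toy grid. No dataset, no training -- just the problem definition.
from collections import deque

grid = [  # 0 = free, 1 = wall (made-up example map)
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def shortest_path(start, goal):
    """Return the shortest list of (row, col) cells from start to goal, or None."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

print(shortest_path((0, 0), (3, 3)))
```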