• 3 Posts
  • 556 Comments
Joined 2 years ago
Cake day: July 22nd, 2023


• wischito to ich_iel@feddit.org · “ich🚫🦕🚂iel” · 38 minutes ago

    In principle no, but a little bit yes, mainly because you can dream up infinitely many things that are irrefutable.

    Just because someone claims that we all have colorful, invisible guardian unicorns accompanying us everywhere, and there is no evidence either for or against them, the answer still isn’t “we can’t know, because absence of evidence is not evidence of absence.” The answer is that the idea is complete nonsense, and wrong.




  • Your boss expects you to weld with good quality, but they don’t expect you to answer every question there is without any mistakes. The problem with LLMs is that they are trained purely on text found on the internet; they have no “life experience”, so their world model is very different from ours. There are overlaps (that’s why they can produce coherent output at all), but there are situations that make perfect sense in their world model yet are complete bogus in the real world.

    It’s a bit like the shadows in Plato’s cave allegory. LLMs are trained almost entirely on the shadows, so their output is based entirely on that shadow world. An LLM can describe pain (because descriptions of pain were in the training data), but it has never been smacked in the face.



  • To be fair, all of what you’ve said applies to humans too. Look how many flat earthers there are, and even more people who believe in homeopathy, think that vaccines cause autism, or think that aliens built the pyramids.

    But nobody calls that “hallucinations” in humans. Are LLMs perfect? Definitely not. Are they useful? Somewhat, but definitely very far from the PhD-level intelligence some claim.

    But there are things LLMs are already way better at than any single human (not collectively): giving you a hint (it doesn’t have to be 100% accurate) about which topics to look up when you can only describe something vaguely and don’t know what you would even search for in a traditional search engine.

    Of course you cannot trust it blindly, but you shouldn’t trust humans blindly either. That’s why we have the scientific method: because humans are unreliable too.


  • The training process optimizes models to make predictions. The actual underlying mechanisms are not too relevant, because the prediction function is an emergent property (toy sketch below).

    Your brain is just biochemistry, and biochemistry isn’t intelligent, and yet you are. Think of the number three and all you know about it. There is not a single neuron in your brain that has any idea what the concept of three even means. It’s an emergent behavior.
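    As a rough illustration of the “trained purely to predict text” point, here’s a toy bigram model of my own making, nothing like a real LLM in scale or architecture:

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    VOCAB = ["the", "cat", "sat", "on", "mat"]
    stoi = {w: i for i, w in enumerate(VOCAB)}

    # Toy "internet text": the only world the model ever observes.
    corpus = ["the", "cat", "sat", "on", "the", "mat"]
    ids = torch.tensor([stoi[w] for w in corpus])
    inputs, targets = ids[:-1], ids[1:]  # each token predicts the next one

    # A bigram model: one embedding row = next-token logits for that token.
    model = nn.Embedding(len(VOCAB), len(VOCAB))
    opt = torch.optim.Adam(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(200):
        logits = model(inputs)           # shape (5, 5): next-token scores
        loss = loss_fn(logits, targets)  # the *entire* objective: predict text
        opt.zero_grad()
        loss.backward()
        opt.step()

    # The trained model "knows" what tends to follow "the" -- learned purely
    # from text statistics, never from seeing a cat or touching a mat.
    probs = torch.softmax(model(torch.tensor(stoi["the"])), dim=-1)
    print({w: round(probs[i].item(), 2) for w, i in stoi.items()})
    ```

    Nothing in the training loop mentions grammar, cats, or meaning; whatever “knowledge” shows up in the output probabilities emerged from the prediction objective alone.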


  • wischito to Atheist Memes@lemmy.world · “When he’s right, he’s right.” · 7 days ago

    Most sciences don’t care about “truth”; they care about models that predict the outcomes of experiments. Even a model that works perfectly doesn’t mean that model is how the universe works. The universe could work completely differently, and the model could happen to be very accurate anyway. Think about Newton’s laws of motion: they do not describe how the universe really works, but the model is still pretty accurate and useful in many situations (toy example below).

    Even if we someday find a theory of everything, that still doesn’t mean we know anything about the true nature of the universe, just that everything we can observe is described by the model we developed.
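    Concretely (my own toy example, with illustrative textbook numbers):

    ```python
    # Newton's second law used purely as a predictive model. The 9.81 m/s^2
    # value and the 20 m height are just illustrative textbook numbers.
    G = 9.81  # m/s^2, assumed constant near Earth's surface

    def drop_time(height_m: float) -> float:
        """Predict the fall time from rest via h = (1/2) * g * t^2."""
        return (2 * height_m / G) ** 0.5

    # The model predicts the experiment's outcome to good accuracy while
    # saying nothing about what gravity "really is".
    print(f"A 20 m drop takes about {drop_time(20.0):.2f} s")  # ~2.02 s
    ```

    The formula predicts what a stopwatch will show without making any claim about the underlying nature of gravity; that’s all we ask of it.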




  • Current LLMs are definitely not intelligent, but predicting the future is a big part (if not the most important part) of intelligence.

    Your comment is a bit like saying that humans can’t be intelligent, because the biochemistry in our brains is just laws of physics in motion, and the laws of physics are not intelligent.

    Intelligence is an emergent property. You can definitely be intelligent even if none of your components are.

    But with LLMs we’ve found a weird new “dimension”: something can be very knowledgeable without being intelligent. Even current LLMs have more general knowledge than any single human, but they lack actual intelligence.