• 0 Posts
  • 53 Comments
Joined 1 year ago
Cake day: 17 June 2023



  • vcmj to Programmer Humor · It must be a silent R
    3 months ago

    It would be luck-based for pure LLMs, but now I wonder if the models that can use Python notebooks might be able to code a script to count it. Like it’s actually possible for an AI to get this answer consistently correct these days.
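    A model with a code interpreter would only need something like this trivial sketch to sidestep tokenization; the word and letter here are just the classic example, not anything from the original thread:

    ```python
    # Tiny counting script a code-interpreter model could emit to sidestep
    # tokenization; the word and letter are only the classic example.
    word = "strawberry"
    letter = "r"
    print(f"'{letter}' appears {word.lower().count(letter)} times in '{word}'")
    ```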


  • vcmj to Zed · Introducing Zed AI
    3 months ago

    I was sceptical at first too, but it’s really interesting that they’re not just adding another chatbot; it’s basically an LLM request-composing tool. It’s not trying to hide what an LLM is behind some obscure personality interface; it’s a text-processing tool foremost. I like it!




  • Personally, if I can’t go from human-readable data to a complete model then I don’t consider it open source. I understand these companies want to keep the magic sauce that’s printing them money, but all the open-source marketing is inherently dishonest. They should be clear that the architecture and the product they are selling are separate, much like proprietary software just lists all the open-source software it used as a footnote in its about screen.






  • vcmj to Programmer Humor · “prompt engineering”
    7 months ago

    I do think we’re machines; I said so previously. I don’t think there is much more to it than physical attributes, but those attributes let us have this discussion. That’s remarkable in its own right, and I don’t see why it needs to be more, but again, it’s all personal opinion.



  • vcmj to Programmer Humor · “prompt engineering”
    8 months ago

    Personally, my threshold for intelligence versus consciousness is determinism (not in the physics sense… that’s a whole other kettle of fish). I’d consider all “thinking things” to be machines, but if a machine responds to input in always the same way, then it is non-sentient, whereas if it incurs an irreversible change on receiving any input that can affect its future responses, then it has potential for sentience. LLMs can do continuous learning for sure, which may give the impression of sentience (whispers which we are longing to find and want to believe, as you say), but the actual machine you interact with is frozen, hence it is purely an artifact of sentience. I consider books and other works in the same category.

    I’m still working on this definition; again, just a personal viewpoint.
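    A toy sketch of that frozen-versus-learning distinction, purely illustrative and not modelled on any real system: the first machine maps the same input to the same output forever, while the second is irreversibly changed by every input it receives.

    ```python
    # Illustrative only: a "frozen" machine is fully deterministic and stateless,
    # while the "learning" one mutates internal state on every input, so its
    # future responses can differ.
    import hashlib

    class FrozenMachine:
        """Same prompt in, same response out; interacting leaves no trace."""
        def respond(self, prompt: str) -> str:
            return hashlib.sha256(prompt.encode()).hexdigest()[:8]

    class LearningMachine:
        """Every prompt irreversibly updates internal state."""
        def __init__(self):
            self.state = hashlib.sha256()
        def respond(self, prompt: str) -> str:
            self.state.update(prompt.encode())  # irreversible change
            return self.state.hexdigest()[:8]

    frozen, learner = FrozenMachine(), LearningMachine()
    print(frozen.respond("hello"), frozen.respond("hello"))    # identical
    print(learner.respond("hello"), learner.respond("hello"))  # different
    ```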



  • Most of the largest datasets are kind of garbage because of this. I’ve had this idea to run the data through the network every epoch and evict samples that are too similar to the model’s output from the next epoch, but I’ve never tried it. Probably someone smarter than me already tried that and it didn’t work. I just feel like there’s some mathematical way around this we aren’t seeing. Humans are great at filtering the cruft, so there must be some indicators there.
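    One possible reading of that eviction idea, as a rough sketch: after each epoch, score every remaining sample by how closely the model’s output already matches it and drop the closest ones before the next epoch. All names and the 0.99 threshold are made up for illustration; this isn’t a tested recipe.

    ```python
    # Hypothetical per-epoch eviction sketch: drop samples the network already
    # reproduces almost exactly, so the next epoch trains on the rest.
    import torch
    import torch.nn.functional as F

    def evict_memorised(model, samples, targets, indices, threshold=0.99):
        """Return the indices whose targets the model does NOT yet reproduce."""
        keep = []
        model.eval()
        with torch.no_grad():
            for i in indices:
                pred = model(samples[i].unsqueeze(0)).squeeze(0)
                # Cosine similarity as a crude "already memorised" signal.
                sim = F.cosine_similarity(pred.flatten(), targets[i].flatten(), dim=0)
                if sim.item() < threshold:
                    keep.append(i)
        return keep

    def train(model, optimizer, loss_fn, samples, targets, epochs=10):
        active = list(range(len(samples)))
        for _ in range(epochs):
            model.train()
            for i in active:
                optimizer.zero_grad()
                loss = loss_fn(model(samples[i].unsqueeze(0)), targets[i].unsqueeze(0))
                loss.backward()
                optimizer.step()
            # Evict what the model has effectively memorised before the next epoch.
            active = evict_memorised(model, samples, targets, active)
    ```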


  • vcmj to Comics · [xkcd] A Bunch of Rocks (17 Nov 2008)
    10 months ago

    I think I see where you’re coming from. The computer in the comic is a Rule 110 automaton, known to be Turing complete. It can perform complex calculations, allegedly.

    I suppose it can get a bit philosophical whether an incomplete time instant is even visible from the inside of a simulation, because nothing moves during a single pass until the full frame is complete, hence limiting perception.
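    For reference, Rule 110 itself is just a one-line update rule over a row of cells; a minimal sketch of the standard definition (nothing here is specific to the comic):

    ```python
    # Rule 110: each cell's next state depends on (left, self, right).
    # The rule number 110 = 0b01101110 encodes the outputs below; the rule is
    # known to be Turing complete (Cook, 2004).
    RULE_110 = {
        (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
    }

    def step(cells):
        """Advance one generation; edges wrap around."""
        n = len(cells)
        return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                for i in range(n)]

    # A few generations starting from a single live cell.
    row = [0] * 31 + [1] + [0] * 31
    for _ in range(10):
        print("".join("#" if c else "." for c in row))
        row = step(row)
    ```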