I really liked Kagi at first, especially since I also use it mainly for programming, but lately the quality seems to have gone downhill. Right around the time they integrated the Brave results, I started having to scroll past the usual Google-like fluff before getting to anything actually relevant. It’s a little sad to see, because when I first used it, it was so good; now it basically feels like a skinned Google-lite. I’m still a customer, but only because I haven’t found a good alternative yet.
Thanks for the explanation! It led me down a rabbit hole of trying to find a way to cancel an await call, and from what I can tell there’s no clear way to do so. In my case I ended up connecting the signal to a secondary function instead of using await; I’m not entirely sure whether one approach has an advantage over the other.
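For anyone finding this later, here’s a minimal sketch of the two approaches side by side, assuming Godot 4 syntax and a hypothetical child `Timer` node:

```gdscript
extends Node

# Option 1: await suspends this function until the signal fires.
# There is no built-in way to cancel the await itself; the function
# simply stays suspended until the signal is emitted or the object is freed.
func wait_for_timer() -> void:
	await $Timer.timeout
	print("timer finished (await)")

# Option 2: connect the signal to a separate function instead.
# A connection can be disconnected at any time, which is the closest
# thing to "cancelling" the wait.
func _ready() -> void:
	$Timer.timeout.connect(_on_timer_timeout)

func _on_timer_timeout() -> void:
	print("timer finished (connected)")
	# To stop listening later:
	# $Timer.timeout.disconnect(_on_timer_timeout)
```

The practical difference is mostly about control flow: await reads linearly, while a connected callback keeps the calling function free to return immediately.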
I didn’t have access to my computer when I posted this so I was hoping to get some info while I was away from it. Thanks to another commenter, it looks like it has a very minimal impact. Good to know for future reference!
Thanks so much for testing that out! That’s very informative and even more thorough than what I was looking for! I wasn’t at my computer when I posted this so I couldn’t test it myself.
I ended up connecting the signal to a secondary function to run on finished to avoid any potential memory errors, but it’s super helpful to know that the performance impact is minimal.
Interesting!! Yeah that’s exactly what my node structure is. So basically instead of using signals and sending from one Map child to the other, use spawn_object as a child from hexgrid and then just call the spawn object function as needed since the hexgrid script already has a reference to the index and location of each tile. Thanks so much for the advice I’ll give it a shot! I ended up working out the signals issue I was having anyways but it seems like your suggestion is a cleaner solution!
I see, okay, so I understand that the intent is to decouple scenes from each other. However, the tutorials I’ve seen typically say that to establish the connection you need to run get_node() from the script that is making the connection. So for example, if you have:
*Parent
**Child 1
**Child 2
You would emit from Child1, then establish the connection from Child2 using:
var child1 = get_node("/root/Parent/Child1")
child1.some_signal.connect(some_function)
Is that the correct interpretation? Or am I misunderstanding? Thanks in advance btw I appreciate all the help in understanding this!
Hm… I didn’t even consider that the emit might be happening before the connection. I have them both running from separate _ready() methods. Perhaps I should be making the connection in _init() instead. In Unity I would typically just use C# Actions and establish the “connections” in OnEnable. It makes sense that the code could be emitting before I even have a chance to connect to it, which is why my test fails.
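A sketch of the ordering problem and one possible fix, assuming sibling nodes Child1 and Child2 under a shared parent (signal and function names are hypothetical). Note that _init() runs before the node enters the tree, so get_node() won’t work there; _enter_tree() is the earlier hook that still has tree access:

```gdscript
# Child1.gd -- Godot calls _ready() on Child1 before its later siblings
# (tree order), so a signal emitted here can fire before Child2 has
# connected in its own _ready().
extends Node
signal tiles_ready

func _ready() -> void:
	tiles_ready.emit()  # too early for a sibling that connects in _ready()
```

```gdscript
# Child2.gd -- connecting in _enter_tree() instead should work, because
# _enter_tree() propagates top-down through the whole subtree before any
# _ready() callbacks run, and Child1 has already entered the tree by then.
extends Node

func _enter_tree() -> void:
	get_node("../Child1").tiles_ready.connect(_on_tiles_ready)

func _on_tiles_ready() -> void:
	print("received tiles_ready")
```

Another common workaround is to defer the emit (e.g. with call_deferred) so it happens after the whole tree is ready.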
Thank you! Okay, so the tiles are part of a map. I have a parent node called map with two separate children: hexgrid (where I’m instantiating the scenes) and spawn_objects (where I’m trying to access the index and transform from hexgrid). My intent is to have hexgrid generate the grid and tiles, and have spawn_objects instantiate an object at a certain position within a tile. Is this perhaps something I should combine into the same script? I typically like to keep things modular, with each component handling a single specific task.
Is this the intended purpose for signals? Moving info up and down the tree? If so how are we supposed to send information from one script to another if one is not directly a child of the other? Do we send it to a shared parent node then send the info back down to the secondary script?
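In case it helps anyone, here’s a minimal sketch of the “signal up, call down” pattern with the shared parent mediating between two siblings. The node and function names (HexGrid, SpawnObjects, tile_spawned, spawn_at) are hypothetical placeholders:

```gdscript
# Map.gd -- the shared parent wires its two children together.
# Children emit signals upward without knowing who listens; the parent
# routes them by calling down into the other child.
extends Node

func _ready() -> void:
	# Assumes HexGrid declares: signal tile_spawned(index: int, position: Vector2)
	# and SpawnObjects has a method: func spawn_at(index: int, position: Vector2)
	$HexGrid.tile_spawned.connect($SpawnObjects.spawn_at)
```

This way neither sibling needs a get_node() path to the other, so each scene stays reusable on its own.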
I’m emitting in the function that instantiates the tiles so it’s definitely getting called. How do I verify that the connection is going through?
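One way to check, assuming Godot 4 and run from the script that owns the signal (the signal and callback names here are placeholders):

```gdscript
# Verify a specific connection went through:
if tiles_ready.is_connected(_on_tiles_ready):
	print("connected")

# Or dump everything currently connected to the signal:
print(tiles_ready.get_connections())
```

An empty array from get_connections() right before the emit would confirm the emit is happening before the connection is made.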
Also, thanks for letting me know how to format it. I was getting conflicting information about the syntax from various tutorials and resources.
You can install Ollama in a Docker container and use it to run models locally. Some are really small and still pretty effective: Llama 3.2 is only 3B, and some models are as small as 1B. It can be accessed through the terminal, or you can use something like Open WebUI for a more ChatGPT-like interface.
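For reference, the basic setup from Ollama’s Docker instructions looks roughly like this (CPU-only; GPU setups need extra flags):

```shell
# Start Ollama in a container, persisting downloaded models in a named volume
# and exposing its API on the default port.
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull a small model and chat with it from the terminal.
docker exec -it ollama ollama run llama3.2
```

Open WebUI can then point at the API on port 11434 to get the chat interface.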