• 10 Posts
  • 15 Comments
Joined 1 year ago
Cake day: June 30th, 2023


  • I’m actually working on this problem right now for my master’s capstone project. I’m almost done with it; I can have it generate a series of steps based on simple objectives like “I’m thirsty”, and then, in simulation, fetch me a drink or search through rooms that might have one - contextually knowing, say, that the kitchen is a great spot to check.
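
    For flavor, here’s a toy sketch of that objective-to-steps loop. `complete` is a stand-in for whatever LLM call you’d actually make, and the prompt and parsing are made up for illustration:

    ```python
    # Toy objective -> step-sequence planner. `complete` is a placeholder
    # for a real LLM completion call, not an actual API.
    def complete(prompt: str) -> str:
        return (
            "1. Go to the kitchen\n"
            "2. Open the refrigerator\n"
            "3. Grab a drink\n"
            "4. Return to the user"
        )

    def plan_steps(objective: str, known_rooms: list[str]) -> list[str]:
        prompt = (
            f"You control a home robot. Known rooms: {', '.join(known_rooms)}.\n"
            f"Objective: {objective}\n"
            "Reply with a numbered list of concrete steps."
        )
        raw = complete(prompt)
        # Strip the "N." prefixes into a clean list of actions
        return [line.split(".", 1)[1].strip() for line in raw.splitlines() if "." in line]

    print(plan_steps("I'm thirsty", ["kitchen", "bedroom", "office"]))
    ```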

    There’s also a lot of research into using the latest advancements in LLM-based reasoning and contextual awareness to work towards better, more capable embodied AI. I wrote a blog post about a lot of the big advancements here.

    Outside of this, I’ve also worked at various robotics startups for the past five years, though primarily on writing data pipelines and control systems for fleets of robots. With that experience in mind, I’d say we are many years out from this being a reasonable product, but maybe not ten years away. Maybe.


  • It is extremely difficult to get someone to understand something when their paycheck depends upon them not understanding it.

    The solutions that would work for climate change - dramatic reductions in consumption, recycling, large-scale government regulation and oversight to ensure adoption of these policies - don’t make money. New technology does.

    The deadliness of climate change stems not from its all-encompassing effects, nor from the monumental cataclysms it’ll unleash, but from the fact that its solution requires a complete rethinking of the very systems that made a select few very powerful people privileged in the first place.


  • This isn’t mine; it’s just an interesting blog post I came across. Nor am I arguing that it should replace a robotics engineer.

    My main thought, not fully represented in the post, is that LLMs can act as a context engine for high-level understanding of instructions plus spatial awareness, and then apply that understanding to actuation. This is somewhat touched upon in the article.

    I do think that there is some interesting work in LLM-powered task-level planning. I’m hoping to find the time to put together a good example of this, utilizing the ability of LLMs to make logical leaps based on instruction. In the article, the system took the command “I’m thirsty” to mean move to a drink. In a more practical application, we can use LLMs to identify that a room with multiple detected objects (refrigerator, oven, stove, cabinets, etc.) is in fact a kitchen, and from there determine: “I’ve seen a room I’ve identified as a kitchen - I can navigate there to attempt to find a drink.”
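
    As a rough illustration of that leap - with a stubbed `complete` standing in for a real LLM call, and the room data invented:

    ```python
    # Sketch of the detected-objects -> room-label -> navigation-target chain.
    # `complete` is a placeholder for a real LLM completion call.
    def complete(prompt: str) -> str:
        return "kitchen"  # stand-in for a real model response

    def label_room(detected_objects: list[str]) -> str:
        prompt = (
            "A robot sees these objects in one room: "
            + ", ".join(detected_objects)
            + ". Answer with a single word naming the room type."
        )
        return complete(prompt).strip().lower()

    rooms = {"room_3": ["refrigerator", "oven", "stove", "cabinets"]}
    labels = {room: label_room(objs) for room, objs in rooms.items()}

    # "I'm thirsty" -> find a room labeled kitchen and navigate there
    target = next((r for r, label in labels.items() if label == "kitchen"), None)
    print(f"Navigate to {target} to look for a drink")
    ```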


  • Oh yeah, MATLAB is painful. I get why you use it at first - it’s great for handling derivations for you when working on control code, and handles matrices well enough when learning kinematics. But once my homework started to demand animations and complex processing, I yearned for a language with classes or any advanced features at all. Still, I managed to make some cool stuff - like this RRT-path-planned transmission removal 😄
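
    For anyone curious, a bare-bones 2D RRT looks something like this - the obstacle and parameters are made up, and my homework version was MATLAB, but the idea carries over:

    ```python
    # Minimal 2D RRT: grow a tree from start toward random samples until
    # a collision-free node lands near the goal, then walk parents back.
    import math, random

    def rrt(start, goal, is_free, step=0.5, iters=2000, goal_tol=0.5):
        nodes, parent = [start], {0: None}
        for _ in range(iters):
            # Bias 5% of samples toward the goal to speed convergence
            sample = goal if random.random() < 0.05 else (random.uniform(0, 10), random.uniform(0, 10))
            i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
            theta = math.atan2(sample[1] - nodes[i][1], sample[0] - nodes[i][0])
            new = (nodes[i][0] + step * math.cos(theta), nodes[i][1] + step * math.sin(theta))
            if not is_free(new):
                continue
            parent[len(nodes)] = i
            nodes.append(new)
            if math.dist(new, goal) < goal_tol:
                path, k = [], len(nodes) - 1
                while k is not None:
                    path.append(nodes[k])
                    k = parent[k]
                return path[::-1]
        return None

    # Free space: everywhere outside a circular obstacle centered at (5, 5)
    print(rrt((1, 1), (9, 9), lambda p: math.dist(p, (5, 5)) > 1.5))
    ```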

    What startup (assuming you’re out of stealth mode)? Good luck with the jump over to a startup. It’s rough, but hopefully you knock it out of the park.

    As for code deployment - I’ve worked on that problem at two startups now. I can probably advise you on some things to look into, but I’d need to know more about the problem space you’re specifically targeting. Though I’m hesitant to recommend full obfuscation if you’re delivering not a finished product but a module to the end customer.


  • Last week I started going through ROS2 lessons online to familiarize myself with it for some upcoming projects.

    First I spent time utilizing Vagrant, a tool by HashiCorp for building “repeatable” (a debatable use of that term, but I digress) dev environments and VM images, to quickly set up versioned ROS environments. This was actually pretty easy, and after a few hours I had a setup I liked. I will report that I do have some issues running Gazebo in the VM on the laptop (to be expected), though it’s smooth on the beefier desktop. I am still suffering from occasional complete VM freeze-ups - irrecoverable, though the host machine shows no lag or issues. I think it’ll still work as a quick ROS2 setup for a project team.
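
    For reference, a minimal Vagrantfile along these lines might look like the following - the box, resources, and provisioning steps are assumptions for illustration, not my exact setup:

    ```ruby
    # Hypothetical Vagrantfile for a ROS 2 Humble dev VM on Ubuntu 22.04.
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/jammy64"
      config.vm.provider "virtualbox" do |vb|
        vb.memory = 4096   # Gazebo is memory-hungry
        vb.cpus = 2
        vb.gui = true      # needed for RViz/Gazebo windows
      end
      config.vm.provision "shell", inline: <<-SHELL
        apt-get update
        apt-get install -y software-properties-common curl
        add-apt-repository universe
        curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key \
          -o /usr/share/keyrings/ros-archive-keyring.gpg
        echo "deb [signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu jammy main" \
          > /etc/apt/sources.list.d/ros2.list
        apt-get update
        apt-get install -y ros-humble-desktop
      SHELL
    end
    ```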

    Now I’m going through the nav2 stack in ROS and trying to familiarize myself with it. I’m not sure what the scope of the upcoming project is going to be (it’s the capstone team project for the entirety of my master’s, so there’s a bit of time before decisions have to be finalized). Once that’s done I’ll probably dive into the Webots simulator (especially since my own Gazebo is proving unstable).
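
    For anyone else poking at nav2, the nav2_simple_commander Python API is a gentle way in. A minimal sketch - frame and coordinates invented:

    ```python
    # Send a single navigation goal through nav2's simple commander API.
    import rclpy
    from geometry_msgs.msg import PoseStamped
    from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult

    def main():
        rclpy.init()
        navigator = BasicNavigator()
        navigator.waitUntilNav2Active()  # block until nav2 lifecycle nodes are up

        goal = PoseStamped()
        goal.header.frame_id = "map"
        goal.header.stamp = navigator.get_clock().now().to_msg()
        goal.pose.position.x = 2.0
        goal.pose.position.y = 1.0
        goal.pose.orientation.w = 1.0

        navigator.goToPose(goal)
        while not navigator.isTaskComplete():
            feedback = navigator.getFeedback()  # e.g. distance remaining

        if navigator.getResult() == TaskResult.SUCCEEDED:
            print("Reached the goal")
        rclpy.shutdown()

    if __name__ == "__main__":
        main()
    ```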


  • Vagrant ended up being way easier than I had anticipated, with just minor issues. I even got a premade image published so that teammates don’t have to sit through the long install process - they can just pull a good base image.

    Given that I don’t yet know (nor have access to a machine to test) Docker solutions with GUIs for Windows/Mac, I’m going to stick with the VM approach for now unless I have a strong driver to switch it up.


  • I’m working on updating my knowledge of ROS/ROS2. I’ve done some tutorials and mini projects in ROS, but wanted to refresh on ROS2 for an upcoming large project. Since I’m the most experienced programmer in the group and have a strong background in backend tech and dev tools, I’m trying to discover the easiest way to have repeatable ROS2 environments so that we can eliminate that pain point from the project entirely.

    Currently the front runner is Vagrant (a neat little image builder and dev-environment manager by HashiCorp). There’s always Dockerfiles with the container approach, but I’m the only native Linux user - I’m not sure if X11 passthrough can be done at all in the VMs running Docker on Windows/macOS.

    Once this is solved we’re going to explore simulation options for the project. Since we’re a group of 5 people spread across 3 time zones, hardware is a tough sell, so we’ll likely be working mostly in simulation. We’re currently looking into either Webots or Gazebo, but not married to either at all.

    As for what the project is - no idea yet. We don’t need to know that yet, and I figured having a strong base is going to help regardless of what we focus on.


  • I just finished my first long technical blog post - blogging is a relatively new habit I’m trying to build up. I already posted it to the robotics community, so I’ll just crosspost it with this link. TL;DR: it’s the trials and tribulations I had while trying to use PPO to train a robotic arm to do pick and place.
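
    If you just want the shape of the training loop without reading the whole post, here’s a minimal stable-baselines3 PPO sketch - note the stand-in environment; the actual arm sim and reward shaping are in the post:

    ```python
    # Train PPO on a stand-in continuous-control task, then roll out the policy.
    import gymnasium as gym
    from stable_baselines3 import PPO

    env = gym.make("Pendulum-v1")  # placeholder env, not the arm sim
    model = PPO("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=100_000)
    model.save("ppo_arm")

    # Roll out the trained policy
    obs, _ = env.reset()
    for _ in range(200):
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = env.step(action)
        if terminated or truncated:
            obs, _ = env.reset()
    ```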

    I’m putting a hold on the other project I was burning out on - something I called coppermind (yes, I’m a Sanderson fan) that handled chatbot memory/applications. I have an old version running on a DigitalOcean droplet plugged into Twilio, but recent degradations in ChatGPT 3.5’s response quality, plus that version lacking my knowledge/vector-database features, mean it’s becoming a bit repetitive in its responses.
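
    That memory feature boils down to embed-and-retrieve. A toy sketch with a stubbed `embed` - a real version would call an embedding model and a proper vector database:

    ```python
    # Toy conversational memory: embed past facts, retrieve the closest
    # ones for a new query. `embed` is a stub, not a real embedding model.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.normal(size=64)
        return v / np.linalg.norm(v)  # unit vector, so dot product = cosine sim

    memory: list[tuple[str, np.ndarray]] = []

    def remember(text: str) -> None:
        memory.append((text, embed(text)))

    def recall(query: str, k: int = 2) -> list[str]:
        q = embed(query)
        scored = sorted(memory, key=lambda m: -float(m[1] @ q))
        return [text for text, _ in scored[:k]]

    remember("User's name is Alex")
    remember("User prefers short answers")
    print(recall("What do we know about the user?"))
    ```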

    Right now, with my master’s in robotics starting up again in late August and this being the project for the entire thing, I’m spending some time on tooling to get myself up to speed on ROS2 and to get repeatable environments up and running so I can quickly fix/deploy sim robots for the project. I have some ideas I’m toying with there. Unfortunately, no idea what the project will be yet.