• The Rabbit R1 AI box is essentially an Android app running on limited $200 hardware, built on AOSP without Google Play.
  • Rabbit Inc. is unhappy that details of its tech stack have become public and is threatening action against unauthorized emulators.
  • AOSP is a logical choice for mobile hardware, as it provides the essential functionality without requiring Google Play.
  • Bahnd Rollard
    2 months ago

    This is my experience with LLMs: I have gotten them to write code that can at best be used as a scaffold. I personally do not find much use for them, as you effectively have to proofread everything they produce. All it does is change the workload from a creative process to a review process.

    • @[email protected]
      2 months ago

      I don’t agree. Just a couple of days ago I went to write a function to do something sort of confusing to think about. From the name of the function alone, Copilot suggested its entire contents, and it worked fine (a sketch of that kind of completion is below). I consider this removing a bit of drudgery from my day, as this function was a small part of the problem I needed to solve. It actually allowed me to stay more focused on the bigger picture, which I consider the creative part. If I were a painter and my brush suddenly did certain techniques better, I’d feel more able to be creative, not less.
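
      A minimal sketch of the kind of name-driven completion described above. The function and its name are hypothetical stand-ins, not the commenter’s actual code; the point is that a descriptive signature alone can be enough context for an assistant to propose a correct body.

      ```java
      // Hypothetical example of an assistant-completed function: the developer
      // writes only the descriptive signature, and the suggested body handles
      // the "confusing to think about" part (here, overlap of two half-open ranges).
      public final class RangeUtils {

          // Length of the overlap between [aStart, aEnd) and [bStart, bEnd); 0 if disjoint.
          static int overlapLength(int aStart, int aEnd, int bStart, int bEnd) {
              return Math.max(0, Math.min(aEnd, bEnd) - Math.max(aStart, bStart));
          }

          public static void main(String[] args) {
              System.out.println(overlapLength(0, 10, 5, 20)); // 5
              System.out.println(overlapLength(0, 3, 7, 9));   // 0 (no overlap)
          }
      }
      ```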

      • @[email protected]
        2 months ago

        I would argue that there just isn’t much gain in terms of speed of delivery, because you have to proofread the output - not doing it is irresponsible and unprofessional.

        I don’t tend to spend much time on a single function, but I remember recently spending two hours writing one. I had to mentally run through all the cases to check that it worked, but I would have had to do that with LLM output anyway. And I feel that reviewing code is much harder to do right than writing it right.

        In my case, LLMs might have saved some time, but training the complexity muscle has value in itself. It’s pretty formative, and there are certain things I would do differently now after going through this. Most notably, in that case, I would fix my data format upfront to avoid the edge cases altogether and save myself some hard thinking.

        I do see the value proposition of IDEs generating things like constructors (a typical example is sketched below), and I sometimes use such features, but reviewing the output is mentally exhausting, and it’s necessary because even non-LLM-generated code sometimes comes out broken. Even assuming it worked 100% of the time, I’m still not convinced it amounts to much time saved at the end of the day.
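
        For comparison, the kind of non-LLM generation mentioned above: a typical IDE “generate constructor” output for a hypothetical class. Even this boilerplate is worth a quick scan, e.g. to confirm every field is actually assigned.

        ```java
        // Hypothetical class: the constructor below is the sort of thing an IDE's
        // "generate constructor from fields" action produces.
        public class UserAccount {
            private final String username;
            private final String email;
            private final boolean active;

            // IDE-generated: one parameter per field, straight field assignments.
            public UserAccount(String username, String email, boolean active) {
                this.username = username;
                this.email = email;
                this.active = active;
            }
        }
        ```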