For the first time in a long while, I'm in a situation where I sometimes work on a temporary system without my personal setup. Now whenever I consider adding a new custom (nushell) command that abstracts a CLI tool, I think about the muscle memory and knowledge I lose for the underlying tool, and how much time I waste looking things up when my setup isn't available. No, it's not a huge amount of time, but out of curiosity I'd like to know how to minimize this problem as much as possible.

Do you have any tips or solutions for handling this dilemma? I try to shadow and wrap existing commands whenever possible, but that often isn't an option. Fish's abbreviations are ideal for this in some cases, but I don't think going back to fish as my main shell for this single reason would be worth it.
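(For context, one wrapping pattern that preserves muscle memory is to have the wrapper print the real command line it abstracts every time it runs. A minimal sketch in POSIX shell; a nushell `def` would play the same role, and the `ll` name and flags are just an example:)

```shell
# Hypothetical wrapper: echo the underlying invocation to stderr,
# then run it, so the real tool's flags stay in sight.
ll() {
  echo "+ ls -lah $*" >&2
  ls -lah "$@"
}
```

Seeing `+ ls -lah …` on every use keeps the raw command in view even while the shortcut does the typing.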

  • MajorHavoc
    16 days ago

    I assume there can be similar situations in the future for other reasons.

You may be pleasantly surprised: we don't agree on much in technology, but bootstrapping with git works in places where nothing else does, and it has finally become popular even among Windows engineers.

    I recall encountering two exceptional cases:

    • An ‘almost never change anything’ immutable distribution like Batocera.
    • A host with no Internet access.

    In both cases, I still version the relevant scripts in the same git repository, but I end up getting scrappy for deploying them.

On an immutable distribution, I'll curl, wget, or Invoke-WebRequest a copy of each file I need, as I need it. I run into this often enough that I find it worth putting copies into a public S3 bucket with a memorable DNS name in front. That does wonders for helping me remember the correct path to each file.
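(A sketch of the one-file fetch. The real URL would be the bucket's friendly DNS name; `scripts.example.net` below is hypothetical, and a local `file://` URL stands in so the example runs without network access:)

```shell
# Stand-in for the published script; real use fetches over HTTPS.
src=$(mktemp -d)
echo 'echo bootstrap' > "$src/setup.sh"

# With network access this would be, e.g.:
#   curl -fsSL https://scripts.example.net/setup.sh -o setup.sh
curl -fsS "file://$src/setup.sh" -o "$src/fetched.sh"
```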

On a completely offline distribution, I run git init --bare in a folder at the root of a thumb drive or network share, then git push a shallow copy of my scripts repo to it, and git clone from it on the machine I need to work on. I also make a plain file copy as well, in case I can't get git bootstrapped on the offline machine.
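(The sneakernet steps above, sketched end to end. A temp directory stands in for the mounted thumb drive, and all paths and names are illustrative:)

```shell
set -e
work=$(mktemp -d)

# Stand-in for the existing local scripts repo.
git init -q -b main "$work/scripts"
cd "$work/scripts"
echo 'echo hello' > fix.sh
git add fix.sh
git -c user.email=me@example.com -c user.name=me commit -q -m "add fix.sh"

# On the "thumb drive": a bare repo to carry the history over.
git init -q -b main --bare "$work/drive/scripts.git"
git push -q "$work/drive/scripts.git" main

# On the offline machine, with the drive mounted: clone and work.
git clone -q "$work/drive/scripts.git" "$work/on-offline-machine"
```

From there, corrections made on the offline machine can be pushed back to the bare repo on the drive and pulled into the main repo later.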

    I do still bother with the git version because I invariably need to make a tiny nuanced script correction, and it’s so much easier (for my work patterns) to sync it back later with git.