After a long time, I’m in a situation where I sometimes work on a temporary system without my individual setup. Now, whenever I add a new custom (nushell) command that abstracts the usage of a CLI tool, I think about the muscle memory and knowledge of those tools that I’m losing, and how much time I waste looking them up when I don’t have my setup. No, that’s not a huge amount of time, but out of curiosity I’d like to know how I can minimize this problem as much as possible.

Do you have some tips and solutions for handling this dilemma? I try to shadow and wrap existing commands whenever possible, but often that’s not an option. Abbreviations in fish are optimal for this problem in some cases, but I don’t think going back to fish as my main shell for this single reason would be worth it.

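To illustrate what I mean by shadowing and wrapping, here’s the idea sketched in plain POSIX sh (in nushell it’s a `def` with the tool’s own name); the `grep` defaults are just an example:

```sh
# Shadow the real tool under its own name: muscle memory still types
# "grep"; the wrapper adds preferred defaults, then delegates to the
# real binary via `command` (which avoids calling the function again).
grep() {
    command grep --color=auto -n "$@"
}
```
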
You may be happily surprised - we don’t agree on much in technology, but bootstrapping with `git` is supported in places where nothing else works, and it’s finally even popular among Windows engineers. I recall encountering two exceptional cases: immutable distributions (Batocera, for example) and completely offline machines. In both cases, I still version the relevant scripts in the same git repository, but I end up getting scrappy about deploying them.

On an immutable distribution, I’ll `curl`, `wget`, or `Invoke-WebRequest` to get a copy of each file I need, as I need them. I encounter this often enough that I find it worth putting copies into a public S3 bucket with a touch of nice DNS in front. It does wonders for me remembering the correct path to each file.

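For illustration, a fetch on one of those boxes looks roughly like the sketch below; `scripts.example.com` and the file names are made-up stand-ins for the bucket-plus-DNS setup:

```sh
# Hypothetical: scripts.example.com is the friendly DNS name in front
# of the public S3 bucket holding copies of the scripts.
curl -fsSL -o backup.nu https://scripts.example.com/nu/backup.nu

# The same fetch on systems that ship wget instead of curl:
wget -q -O backup.nu https://scripts.example.com/nu/backup.nu

# PowerShell equivalent on Windows:
#   Invoke-WebRequest -Uri https://scripts.example.com/nu/backup.nu -OutFile backup.nu
```
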
On a completely offline distribution, I run `git init --bare` in a folder at the root of a thumb drive or network share, then `git push` a shallow copy of my scripts repo to it and `git clone` from it on the machine I’m working on. I also make a plain file copy as well, in case I cannot get `git` bootstrapped on the offline machine.

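Sketched out, that sneakernet round looks something like this; the `/media/usb` mount point, the repo paths, and a default branch named `main` are all assumptions. The `receive.shallowUpdate` setting is the one non-obvious part: without it, a bare repo rejects a push from a shallow clone with “shallow update not allowed”.

```sh
# On the thumb drive or share (mounted at /media/usb here): bare repo.
git init --bare /media/usb/scripts.git
# Accept history pushed from a shallow clone.
git -C /media/usb/scripts.git config receive.shallowUpdate true

# Make a shallow copy of the scripts repo and push it to the drive.
# file:// matters here: --depth is ignored for plain local-path clones.
git clone --depth 1 "file://$HOME/src/scripts" /tmp/scripts-shallow
git -C /tmp/scripts-shallow push /media/usb/scripts.git main

# On the offline machine: clone from the drive and work as usual.
git clone /media/usb/scripts.git ~/scripts
```
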
I do still bother with the `git` version because I invariably need to make a tiny nuanced script correction, and it’s so much easier (for my work patterns) to sync it back later with `git`.

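For completeness, the sync-back under the same assumed paths: commit the fix on the offline machine, push it to the drive, and later pull it into the main repo:

```sh
# On the offline machine: record the tweak and push it to the drive
# (origin points at the bare repo on the thumb drive).
git -C ~/scripts commit -am "fix flag for this machine"
git -C ~/scripts push origin main

# Later, on the main machine: pull the correction off the thumb drive.
git -C ~/src/scripts pull /media/usb/scripts.git main
```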