I used Gentoo in ancient times, when a kernel update took a whole day.
A modern computer can rebuild the kernel in an hour; a good one, even faster.
I’m not a kernel developer, but I don’t think they need to rebuild the whole kernel for every iteration.
And as for Rust, I’m doing bioinformatics in Rust because our iteration time is orders of magnitude longer than a kernel build, and Rust reduced the number of iterations required to reach the final version.
Rust run times are excellent. And statically linked binaries are the superior intellect.
Runtime performance matters to me in only some specific cases, and there are many programs I have installed that I recompile (because of updates) far more frequently than I run them; and when I do run them, performance is rarely an issue.
But you have a good point: performance in the kernel is important, and it is run frequently, so the kernel is a good use case for Rust, where Go, perhaps, isn’t. My original comment, though, was that Zig appears to have many of the safety benefits of Rust, but vastly better compile times.
I really do need to write some Zig projects, because I sound like an advocate when really my opinions are uninformed. I have written Rust, though, and obviously have opinions about it, and especially how it is affecting my system update times.
I’ll keep ripgrep, regardless of compile times. Probably fd, too.
It is easier to safely optimize Rust than C, but that was not the point.
The point was the correctness of the code.
It is not unheard of for our code to run for weeks or months, so I need it to be as bug-free as possible.
For example, when converting one of our tools to Rust, we found a bug that would produce wrong results on big samples. It was caught by the Rust compiler!
Our tests didn’t cover it because it would only trigger on very big samples: we can’t hand-build a test file of hundreds of GB and work out the expected result by hand, but our real data would have hit it.
So without moving to Rust, we would have gotten wrong results.
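The comment doesn’t say what the bug actually was, but as a hedged illustration: one common class of bug that only misbehaves on very large inputs, and that Rust surfaces at compile time, is an integer-width mismatch. A minimal sketch (all names here are hypothetical, not from the original tool):

```rust
// Hypothetical example: counting records above a quality threshold.
// In C, an accumulator or index declared as `int` silently overflows once a
// sample exceeds ~2^31 records, so results are wrong only on big data.
// In Rust, mixing integer widths without an explicit conversion is a compile
// error, which is how a too-narrow type gets noticed during a port.
fn count_high_quality(scores: &[u8], threshold: u8) -> u64 {
    // Trying to index the slice with a narrow type is rejected outright:
    // let i: u32 = 0;
    // scores[i]; // error[E0277]: slices can only be indexed by `usize`
    let mut count: u64 = 0; // wide accumulator chosen deliberately
    for &s in scores {
        if s >= threshold {
            count += 1;
        }
    }
    count
}

fn main() {
    let scores = vec![10u8, 42, 7, 99, 42];
    println!("{}", count_high_quality(&scores, 42)); // prints 3
}
```

The point is not this particular function: it is that the compiler refuses integer-width sloppiness that C accepts, so the bug is found while porting rather than after weeks of computation on real data.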