• 1 Post
  • 491 Comments
Joined 2 years ago
Cake day: September 7th, 2023


  • It’s very hard to get a good look at which arguments are good or not without having the experience to evaluate them.

    Here’s my view on Rust vs C or C++. Rust is a stricter language, which makes it easier to write code with few run-time errors; that’s great for large-scale projects. The catch is that you can write C++ just as strictly, but it’s much more verbose than the standard approach, so most developers don’t. This is the root of the disagreement between Rustaceans and C/C++'ers. The C++'ers are correct that anything you can write in Rust can be replicated in C++: a correct program is a correct program regardless of the language it’s written in. Rustaceans also oversell program correctness; plenty of Rust programs have errors. Rust can help minimize errors, but it’s not a silver bullet, and rewriting an already good program in Rust is a fool’s errand; the outcome will probably be a worse program. That said, Rustaceans are correct in pointing out that programs written in C++ tend to have more errors; it’s just not the iron rule they pretend it is.

    In summary, Rust is a great language, but Rustaceans oversell it. Many of its apparent advantages can be matched by good development practice; it’s just that good practices are difficult and uncommon.

    (Note that there are also third-party tools, like static analysers, that can help developers detect errors. So again, Rust is better out of the box, but ultimately you can get the same outcome with some work.)
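    As a minimal illustration of the out-of-the-box strictness point (my own sketch, not anyone’s benchmark): Rust surfaces a failed conversion as a `Result` that the compiler nags you to handle, whereas the classic C route, `atoi`, silently returns 0 on garbage input.

    ```rust
    fn main() {
        let input = "42a"; // not a valid integer

        // parse() returns Result<i32, ParseIntError>. The compiler warns if
        // the Result is ignored, and the value can't be used without
        // explicitly handling the error case.
        match input.parse::<i32>() {
            Ok(n) => println!("parsed: {}", n),
            Err(e) => println!("refused: {}", e),
        }
    }
    ```

    You can get the same discipline in C++ with `std::from_chars` and careful error checking; Rust just makes the disciplined path the default.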




  • I think you are severely underestimating the level of influence the US government already has. NIST is a US agency, and much of the research in the US is done at the behest of the government. My local university had to comply* with anti-DEI policy to continue receiving federal funds (its primary source of income).

    Influencing the foundation of a, quite frankly, bad programming language is not really that impactful, especially considering that the foundation apparently has only 14 employees.

    *In case you were wondering, the only change they had to make was discontinuing LGBT work anniversaries. So you can probably see why I view this anti-anti-DEI fearmongering with some skepticism. The reality is that racial bias in an organization is very difficult to actually prove, regardless of whether it is pro- or anti-minority, so anti-DEI constraints simply prohibit explicit biasing.


  • That’s just my steelman. You are correct that it would require a readjustment at some point, i.e. DEI practices can’t exist forever.

    “Unqualified people get promoted inspire excellence”. I think at the very top, advanced work isn’t done for the promotions but for the work itself. I imagine people don’t take on years of schooling and work with the goal of becoming a senior dev; there’s something about the work, and producing good work, that motivates them.

    Note that I don’t work in tech but rather mathematics research. So our incentives are different, but I think the main ideas hold.




  • I’m on the fence about this. I think it’s true that most hiring decisions aren’t merit-based, nor do they necessarily need to be. Most jobs can be done sufficiently well by an average-skilled person; it’s only in the most skilled positions where you can argue that one person is simply the best (and by enough that it matters). I think DEI practices would be fine in the former case, since they’re just another biasing metric like nepotism.

    As for highly skilled positions, most people in them grew up saturated in the field’s culture from a young age, typically with parents in the field themselves. I think there are arguments to be made that DEI practices now can produce a larger skilled pool in the next generation.

    The questions are: 1. How much does it help the next generation? 2. Is it worth the cost of lower standards now?



  • “Take them at their word”

    Who? Has there been a survey of contributors?

    “Genuinely think that coreutils would be better if it were written in Rust”

    I feel like the skill-level of the contributors is high enough that they would not be so naive.

    Programs in different languages can compile to the same machine code, so any advantage lies in the language constructs. But if you already have an existing C implementation, what advantage do you gain from a Rust implementation?

    I personally write in three languages: Rust, C++, and Fortran (or, rarely, SPARK). I don’t port my code across languages, because there is no advantage; if I wanted it better, I would work on my existing codebase.

    Porting really only helps if the original language was hindering development, deployment, or runtime. Those arguments don’t really hold against C, a fast, low-dependency language that is more widely used than Rust.


  • Being written in Rust has mixed effects. Rust is still less mainstream than C, so fewer people can contribute. However, it does attract more interest because it’s different.

    That said, the reason you create or contribute to a new-but-similar project is to add functionality that the original project doesn’t have. By nature, a coreutils replacement has to behave like coreutils, or else it will break many configurations. That severely limits the functionality you can provide. So why are people (and Canonical) contributing so much labor to something that still doesn’t function as intended?

    I say it’s the licensing. I say this as someone who regularly gets requests to change the licensing of my software (more than any feature request). I think licensing is a big deal, and most software devs recognize that.


  • You are fixating on the incorrect premise. I noted that it was started a decade ago as an analogy for how labor-intensive the project is. A project that, by design, has to mirror the behaviour of coreutils. So why are people investing the time in this? What makes it worthwhile? It’s the permissive license. If uutils used the GPL, individuals would instead contribute to the much more utilised coreutils, where their contributions would be guaranteed to have an impact.

    Edit: Some of the earlier issues date from 2013, so it has been a decade, although it probably was very obscure at the time.




  • The only code-generation assistance I use is in the form of compilers. For fun I tried to use the free version of ChatGPT to replicate an algorithm I recently designed, and after about half an hour I could only get it to produce the same trivial algorithms you find in blog posts, even when feeding it much more sophisticated approaches.


  • This is just the silly libertarian argument. No sane person believes that all crime will disappear because of law X. It’s that easily accessible encryption makes it harder to obtain evidence of criminal activity in increasingly many cases.

    “Driving people underground” is actually good for law enforcement. Believe me, criminals want to operate in the open; it’s so much easier. The fact that there is a barrier is itself a deterrent. Look at economic studies of drug abuse: as laws become more lax, drugs become cheaper and usage increases. This is true of essentially everything; as capital costs (i.e. effort) decrease, the product or service becomes more widespread.




  • No, I’m pointing out that the filters don’t actually work.

    Transphobic and racist behaviour isn’t going to disappear just because you boycott it.

    The consequences of bigotry aren’t reading mean tweets; they’re going to a job interview and having the prospective employer think “eww… I don’t like this candidate”. Boycotting is not going to fix that, because your purity test can’t even detect it.

    I don’t purity-test people, because the reality is that most (if not all) people hold some harmful notions. It’s not productive or good for anyone to ostracize them, so long as we can promote the good they do and mitigate the harm.