• Vorpal
    11 months ago

    I don’t feel like Rust compile times are that bad, but I’m coming from C++, where compile times are similar or even worse. (With GCC at work a full debug build takes 40 minutes; with Clang it is down to about 17.)

    Rust isn’t an interpreted or bytecode-compiled language, so it is hard for it to compete with those on compile times, but that is comparing apples and oranges really. Better to compare with other languages that compile to machine code. C and C++ come to mind, though there are of course others that I have less experience with (Fortran, Ada, Haskell, Go, Zig, …). Rust is on par with or faster than C++, but much slower than C for sure. Both Rust and C++ have way more features than C, so this is to be expected. And of course it also depends on what you do in your code: template-heavy C++ is much slower to compile than C-like C++, and similarly in Rust it depends on which features you use.

    That said: should we still strive to optimise the build times? Yes, of course. But please put the situation into the proper perspective and don’t compare it to Python (there was a quote from a Python developer in the article).

    • taladar@sh.itjust.works
      11 months ago

      Agreed. I have not asked myself dozens or hundreds of times why Rust compile times are slow. I have, however, asked myself why so many people consider Rust compile times slow, especially people who might have fast compiles but then waste lots of time testing things, manually or automatically, that I don’t even have to worry about if my Rust code compiles.

      • sugar_in_your_tea@sh.itjust.works
        11 months ago

        That’s kind of true, but I came from Go, which has really fast compile times, and that makes it really productive.

        That said, I don’t use Go much anymore, so I’ve obviously found some value in Rust. I would like to see improvements though, but it’s fast enough for me to stick with it.

        • taladar@sh.itjust.works
          11 months ago

          On the other hand, in Go you practically have to reimplement every other standard library function inline yourself because of its lacking expressiveness, and it is total cancer to read, at least in the Go programs I have looked at while debugging things, which I wouldn’t really consider productive.

          • sugar_in_your_tea@sh.itjust.works
            11 months ago

            standard library

            The standard library is absolutely massive, so it has pretty much everything you’d need.
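
            For instance, a working HTTP server is only a few lines with nothing outside the stdlib (a minimal sketch, assuming the usual net/http, log and fmt imports):

            // serve "hello" on every path using only the standard library
            http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                fmt.Fprintln(w, "hello")
            })
            log.Fatal(http.ListenAndServe(":8080", nil))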

            expressiveness

            The syntax is so simple that there are rarely any surprises. Yeah, it’s not very expressive, but that’s a feature, not a bug, because I’ll pretty much never run into a clever bit of syntax that takes me time to understand. There’s a very strong idiomatic style, and quite often “one right way” to accomplish something.

            if err != nil { return err } will appear all over your code, but after reading and writing a lot of Go, it’s not distracting anymore.
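
            For instance, a typical function ends up looking roughly like this (a sketch with made-up names: Config, the path handling, and the JSON decoding are just placeholders, and the usual os and encoding/json imports are assumed; the repeated check is the point):

            func loadConfig(path string) (*Config, error) {
                // every fallible call gets the same three-line check
                data, err := os.ReadFile(path)
                if err != nil {
                    return nil, err
                }

                var cfg Config
                if err := json.Unmarshal(data, &cfg); err != nil {
                    return nil, err
                }

                return &cfg, nil
            }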

            That said, I do have criticisms that lead me to avoid Go:

            • no protections against data races - no Mutex[T], map[T]U isn’t atomic, etc
            • interface{} being a container instead of a compiler hint - this means (*int)(nil) == nil but interface{}((*int)(nil)) != nil; that has caused so many bugs (sketched below)
            • no destructors - automatically closing files, mutexes, etc
            • can copy beyond the end of a slice, overwriting data in the backing array (also sketched below)
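
            Two of these are easy to show in a few lines; a minimal sketch (values purely illustrative) of the typed-nil interface and the shared backing array:

            package main

            import "fmt"

            func main() {
                // typed-nil footgun: the interface now holds (type *int, value nil),
                // so it no longer compares equal to plain nil
                var p *int = nil
                var i interface{} = p
                fmt.Println(p == nil) // true
                fmt.Println(i == nil) // false

                // slice footgun: append within capacity writes into the shared
                // backing array, silently changing what a sees
                a := []int{1, 2, 3, 4}
                b := a[:2]
                b = append(b, 99)
                fmt.Println(a) // [1 2 99 4]
            }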

            I could go on. My point is that there are far too many footguns for a language that’s designed to be simple. However, once you know what to look for, Go is really easy to read and write.

            That said, I don’t use Go anymore and write mostly Python and Rust: Python for prototypes/scripts, and Rust for anything I want to maintain longer term. I’m not looking for a middle ground anymore, but if I were, Go is really productive and easy to read, and works nicely for smaller microservices.

            • taladar@sh.itjust.works
              11 months ago

              What I had in mind when talking about that standard library thing was one case in particular that I had found, where someone had to implement deduplication of elements in a vector/array/list (or whatever Go calls it, I don’t remember that bit) locally, because Go does not support functions generic over the container’s element type.

              And the whole if err != nil { return err } bit is a huge part of what makes Go code unreadable. In code I had to debug, I have also found at least half a dozen bugs where, because of lazy copy&paste of that construct, people just did not print any of the relevant information in error cases.

              • sugar_in_your_tea@sh.itjust.works
                11 months ago

                deduplication

                The best solution here is a map, using its keys as a set. So something like:

                // dedup returns the unique elements of arr (order not preserved)
                func dedup[T comparable](arr []T) (ret []T) {
                    m := make(map[T]bool)
                    for _, t := range arr {
                        m[t] = true
                    }

                    // optimization: pre-size ret, e.g. ret = make([]T, 0, len(m))
                    for t := range m {
                        ret = append(ret, t)
                    }
                    return
                }
                

                I haven’t used Go’s new generics, but I’m guessing this would work fine, provided T is a value type (or can be converted to one). If you know you need deduplication at the start, just use a map at the start.
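
                Concretely, “use a map at the start” could look roughly like this (keys and the processing step are placeholders):

                seen := make(map[string]struct{})
                for _, key := range keys {
                    if _, ok := seen[key]; ok {
                        continue // already handled this key
                    }
                    seen[key] = struct{}{}
                    // process key exactly once here
                }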

                If you don’t have value types, you’re going to have a hard time regardless of language (would probably need some OOP features, which adds a ton of complexity). But you can get pretty far with that pattern. If you add a Hash() int to your type:

                // dedup keeps the first element seen for each hash value;
                // T is assumed to provide a Hash() int method
                func dedup[T interface{ Hash() int }](arr []T) (ret []T) {
                    m := make(map[int]bool)
                    for _, t := range arr {
                        h := t.Hash()
                        if !m[h] {
                            m[h] = true
                            ret = append(ret, t)
                        }
                    }
                    return
                }
                

                err… people just did not print any of the relevant information in error cases

                That’s what error wrapping is for:

                if err != nil {
                    return fmt.Errorf("context: %w", err)
                }
                

                This makes it so you can use the errors package to unwrap errors or check if an error is a given type. Or you can propagate it like I’ve shown above.
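
                For instance, a caller further up the stack can still inspect the wrapped error (a small sketch; os.ErrNotExist is just an example sentinel and the names are made up):

                // works even though the original error was wrapped with %w
                if errors.Is(err, os.ErrNotExist) {
                    // handle the missing-file case specifically
                }

                var pathErr *os.PathError
                if errors.As(err, &pathErr) {
                    // or recover the concrete error type
                }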

                So I see this as programmer error. Rust has the same issue, since it’s easy to just throw a ? in there and bail early without adding context. The lazy form is actually easier to catch in a Go code review than in a Rust one, because it’s more verbose.

                It seems you don’t like verbosity, which is fair. I’m fine with verbosity; what I don’t like is surprises, and Go has enough of those that I generally avoid it.

  • Turun@feddit.de
    11 months ago

    Weird that mold was not mentioned. A significant amount of time is spent in linking, and mold is a new linker that is much faster than the default and a drop-in replacement, if it works.
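
    (For anyone who wants to try it: as far as I know, the usual way to hook mold into cargo on Linux is either mold -run cargo build, or something like the following in .cargo/config.toml, with the target triple adjusted to your machine:)

    [target.x86_64-unknown-linux-gnu]
    linker = "clang"
    rustflags = ["-C", "link-arg=-fuse-ld=mold"]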

    • Nithanim
      11 months ago

      For my small hobby project compiling was absurdly slow. Switching to mold really cut down on waiting time.

  • livingcoder
    11 months ago

    Every so often rust-analyzer in VS Code doesn’t use the latest code after a cargo update, and the only way I’ve found to fix it is a cargo clean. That means I have to wait 5 minutes for the next build, which is painful, just because of one dependency update. I would LOVE a faster build.

    Extra info: the updates come from dependencies of mine that pull in my private repositories via a git = "[path]" entry. rust-analyzer is using a cache or an older version for some reason, and I don’t know where it lives or why.

    • Vorpal
      11 months ago

      Two tips that work for me:

      • After cargo add I sometimes have to run the “restart rust-analyzer” command from the VS Code command palette (exact wording may be off, I’m on my phone as I write this comment). Much faster than a cargo build.
      • Consider using sccache to speed up rebuilds (a minimal setup sketch follows below). It helps a lot, though it uses a bit of disk space. But disk space is cheap nowadays (as long as you aren’t stuck with a laptop with a soldered SSD, in which case you know what not to buy next time).
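
      For reference, a minimal sccache setup sketch (assuming it is installed, e.g. via cargo install sccache) is to point cargo at it in ~/.cargo/config.toml:

      [build]
      rustc-wrapper = "sccache"
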
    • Miaou@jlai.lu
      11 months ago

      Having 30min+ incremental compile times here (C++), I envy your situation ahah