  • @KillTheMule · 33 points · 9 months ago

    While funny, this also highlights part of why I like Rust’s error handling story so much: You can really just read the happy path and understand what’s going on. The error handling takes up minimal space, yet with one glance you can see that errors are all handled (bubbled up in this case). The usual caveats still apply, of course ;)

    • @[email protected]
      link
      fedilink
      169 months ago

      I’m writing my Rust wrong… I have match statements everywhere to the degree that it’s cluttering up everything.

      • Aloso · 7 points · 9 months ago (edited)

        If all you do in the Err(e) => ... match arm is returning the error, then you absolutely should use the ? operator instead.
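
        For example (a minimal sketch, with the file name made up for illustration), this:

        fn read_config() -> std::io::Result<String> {
            let text = match std::fs::read_to_string("config.toml") {
                Ok(text) => text,
                Err(e) => return Err(e),
            };
            Ok(text.trim().to_owned())
        }

        shrinks to:

        fn read_config() -> std::io::Result<String> {
            let text = std::fs::read_to_string("config.toml")?;
            Ok(text.trim().to_owned())
        }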

        If the match arm also converts the error type into another error type, implement the From trait for the conversion; then you can use ? as well.
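
        A sketch of that, with a made-up ConfigError type:

        #[derive(Debug)]
        enum ConfigError {
            Io(std::io::Error),
        }

        impl From<std::io::Error> for ConfigError {
            fn from(e: std::io::Error) -> Self {
                ConfigError::Io(e)
            }
        }

        fn read_config() -> Result<String, ConfigError> {
            // `?` converts the io::Error via `From` automatically
            Ok(std::fs::read_to_string("config.toml")?)
        }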

        If you want to add more information to the error, you can use .map_err(...)?. Or, if you’re using the anyhow crate, .with_context(...)?.
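
        Roughly like this (the error messages are just examples):

        use anyhow::{Context, Result};

        // with plain std and .map_err(...)?:
        fn read_config_std() -> Result<String, String> {
            let text = std::fs::read_to_string("config.toml")
                .map_err(|e| format!("failed to read config.toml: {e}"))?;
            Ok(text)
        }

        // with anyhow and .with_context(...)?:
        fn read_config() -> Result<String> {
            let text = std::fs::read_to_string("config.toml")
                .with_context(|| "failed to read config.toml")?;
            Ok(text)
        }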

        • @[email protected]
          link
          fedilink
          39 months ago

          You can also do map_err, which is a bit cleaner while keeping the mapping obvious. If you really need to do some logic on error, extracting that to the calling function is often better.

      • @BB_C · 4 points · 9 months ago

        If the matches are causing too much nesting/rightward drift, then that could be an indicator that you’re doing something wrong.

        If it’s the opposite, then you’re probably doing something right, except maybe the code needs some refactoring if there is too much clutter.

        If there isn’t much difference, then it’s a matter of style. I, for example, sometimes prefer to match on bools in some contexts because it makes things look clearer to me, despite it not being the recommended style. I’m also a proud occasional user of bool::then() and bool::then_some() 😉
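
        E.g. (trivial made-up examples):

        fn main() {
            let age = 20;
            // then_some: the Some value is given eagerly
            let newsletter: Option<&str> = (age >= 18).then_some("adult list");
            // then: the Some value is computed lazily by a closure
            let discount: Option<f64> = (age < 26).then(|| 0.15);
            println!("{newsletter:?} {discount:?}");
        }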

        Also, if you find yourself often wishing some API was available for types like bool, Option, and Result, then you don’t have to wish for long. Just write some utility extension traits yourself! I, for example, have methods like bool::err_if(), bool::err_if_not(), Option::none_or_else(), and some more methods tailored to my needs, all available via extension traits.
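
        For instance, a bool::err_if() extension could look roughly like this (the exact signature is my guess, not a std API):

        trait BoolExt {
            /// Err(err) if the bool is true, Ok(()) otherwise.
            fn err_if<E>(self, err: E) -> Result<(), E>;
        }

        impl BoolExt for bool {
            fn err_if<E>(self, err: E) -> Result<(), E> {
                if self { Err(err) } else { Ok(()) }
            }
        }

        fn check_name(name: &str) -> Result<(), String> {
            (name.len() > 255).err_if("name too long".to_string())?;
            Ok(())
        }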

        Macros can also be very useful, although some people go for them too early. So if everything else fails to declutter your code, try writing a macro or two.
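
        Something small like this, for example (entirely made-up, just a sketch):

        // Bubble up an error with a bit of context attached, to avoid
        // repeating the same match / map_err pattern everywhere.
        macro_rules! try_with {
            ($expr:expr, $ctx:literal) => {
                match $expr {
                    Ok(val) => val,
                    Err(e) => return Err(format!("{}: {}", $ctx, e).into()),
                }
            };
        }

        fn load_settings() -> Result<String, Box<dyn std::error::Error>> {
            let text = try_with!(std::fs::read_to_string("settings.toml"), "reading settings");
            Ok(text)
        }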

        And it’s worth remembering: there is no general rule, other than that if the code is understandable to you and works, then you’re probably okay regardless of style. It’s all sugar after all, unless you’re really doing some glaringly wrong stuff.

      • @[email protected]
        link
        fedilink
        19 months ago

        Most likely you can get by with adjusting the return type and using a ?, or mapping to a type that you can use the ? on.

  • @BB_C · 20 points · 9 months ago (edited)

    Is everyone genuinely liking this?!

    This is, IMHO, not a good style.

    Isn’t something like this much clearer?

    // Add `as_cstr()` to `NixPath` trait first
    
    let some_or_null_cstr = |v| v.as_ref()
        .map(NixPath::as_cstr)
        .unwrap_or(Ok(std::ptr::null()));

    // `Option::or_null_cstr()` for `Option<T>`
    // where `T: NixPath` would make this even better
    let source_cstr = some_or_null_cstr(&source)?;
    let target_cstr = target.as_cstr()?;
    let fs_type_cstr = some_or_null_cstr(&fs_type)?;
    let data_cstr = some_or_null_cstr(&data)?;
    let res = unsafe { .. };
    

    Edit: using alternative chars to circumvent broken Lemmy sanitization.

    • @[email protected]
      cake
      link
      fedilink
      1
      edit-2
      9 months ago

      I think the issue with this is that the code (https://docs.rs/nix/0.27.1/src/nix/lib.rs.html#297) allocates a fixed-size buffer on the stack in order to add a terminating zero to the end of the path copied into it. So it just gives you a reference into that buffer, which can’t outlive the function call.
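
      For reference, the trait method is roughly shaped like this (simplified from the nix docs), which is why the &CStr can’t escape:

      use std::ffi::CStr;

      // Simplified shape of nix's NixPath trait: the &CStr only exists for
      // the duration of the closure call, so it can safely borrow a stack
      // buffer that is gone once with_nix_path returns.
      pub trait NixPath {
          fn with_nix_path<T, F>(&self, f: F) -> nix::Result<T>
          where
              F: FnOnce(&CStr) -> T;
      }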

      They do also have a with_nix_path_allocating function (https://docs.rs/nix/0.27.1/src/nix/lib.rs.html#332) that just gives you a CString that owns its buffer on the heap, so there must be some reason why they went with this design. Maybe premature optimization? Maybe it actually makes a difference? 🤔

      They could have just returned the buffer via some wrapper that owns it and has the as_cstr function on it, but that would have resulted in a copy, so I’m not sure if it would have still achieved what they are trying to achieve here. I wonder if they ran some benchmarks on all this stuff, or if they’re just writing what they think will be fast.

      • @[email protected]
        link
        fedilink
        19 months ago

        “so there must be some reason why they went with this design.”

        Some applications have a hard zero-alloc requirement.

        • @[email protected]
          cake
          link
          fedilink
          1
          edit-2
          9 months ago

          But that’s not the case here, seeing as they have

          if self.len() >= MAX_STACK_ALLOCATION {
              return with_nix_path_allocating(self, f);
          }
          

          in the code of with_nix_path. And I think they still could’ve made it return the value instead of calling the passed-in function, by using something like

          enum NixPathValue {
              Short(MaybeUninit<[u8; 1024]>, usize),
              Long(CString),
          }

          impl NixPathValue {
              fn as_c_str(&self) -> &CStr {
                  // ...
              }
          }

          impl NixPath for [u8] {
              fn to_nix_path(&self) -> Result<NixPathValue> {
                  // return Short(buf, self.len()) for short paths, and perform all checks here,
                  // so that NixPathValue::as_c_str can then use CStr::from_bytes_with_nul_unchecked
              }
          }

          But I don’t know what performance implications that would have, and whether the difference would matter at all. Would there be an unnecessary copy? Would the compiler optimize it out? etc.

          Also, from a maintainability standpoint, the amount of context the library authors would need to keep in mind to ensure all the unsafe code is used correctly would be slightly larger.

          As a user of a library, I would still prefer all that over the nesting.

  • Turun · 16 points · 9 months ago

    I never thought about chaining ?! This is hilarious and I need to use it somewhere now.

      • Turun · 3 points · 9 months ago (edited)

        It makes total sense, I just never ran into a Result<Result<Result<T, E>, E>, E> situation
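
        For example (a contrived sketch):

        use std::error::Error;
        use std::num::ParseIntError;

        // outer layer: "was there any input", inner layer: "did it parse"
        fn lookup_and_parse(s: &str) -> Result<Result<i32, ParseIntError>, String> {
            if s.is_empty() {
                return Err("empty input".to_string());
            }
            Ok(s.parse::<i32>())
        }

        fn use_it(s: &str) -> Result<i32, Box<dyn Error>> {
            // one ? per Result layer: the first bubbles up the String,
            // the second bubbles up the ParseIntError
            let n = lookup_and_parse(s)??;
            Ok(n + 1)
        }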

        Edit: Lemmy is so scared of cross site scripting, they simply remove all less than and greater than signs, lmao

      • @[email protected]
        link
        fedilink
        19 months ago

        Runtime or compile time? If runtime, that sounds like a fun way to implement something like panic().

        • @zygo_histo_morpheus · 1 point · 9 months ago

          Runtime; the point is that you can see whether the types align and then come back and fill in the function bodies later.

  • @sip · 12 points · 9 months ago (edited)

    There was a comment about adding a ?! operator that would resolve any number of ? operators, but I can’t find it.