• abhibeckert@lemmy.world

    I love this comparison of how four programming languages report the length of the same UTF-8 string (only the last one is correct, by the way):

    Python 3:

    len("🤦🏼‍♂️")

    5

    JavaScript / Java / C#:

    "🤦🏼‍♂️".length

    7

    Rust:

    println!("{}", "🤦🏼‍♂️".len());

    17

    Swift:

    print("🤦🏼‍♂️".count)

    1

    • Walnut356

      That depends on your definition of correct lmao. Rust's len() explicitly counts the raw UTF-8 bytes contained in the string, and there are many times where that value is more useful than the grapheme count.
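
      For example, a minimal std-only sketch (same emoji as above; the buffer bit is just my illustration, not from the comment) of a case where the raw byte length is the number you actually need:

      fn main() {
          let s = "🤦🏼‍♂️";
          // .len() is the UTF-8 byte length: what you need for buffer sizes,
          // payload limits, or indexing into the underlying bytes.
          println!("{}", s.len()); // 17
          let mut buf = Vec::with_capacity(s.len());
          buf.extend_from_slice(s.as_bytes());
          assert_eq!(buf.len(), 17);
      }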

      • Black616Angel@feddit.de

        And Rust also has "🤦".chars().count(), which returns 1.

        I would rather argue that Rust should not have a simple len function for strings, but since str is just a byte slice, it works that way.

        Also, the documentation for the len function clearly states:

        This length is in bytes, not chars or graphemes. In other words, it might not be what a human considers the length of the string.
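
        A quick sketch illustrating that note, using the plain facepalm without modifiers:

        fn main() {
            let s = "🤦";                      // one codepoint, one grapheme
            println!("{}", s.len());           // 4 (UTF-8 bytes)
            println!("{}", s.chars().count()); // 1
        }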

        • lemmyvore@feddit.nl

          None of these languages should have generic len() or size() for strings, come to think of it. It should always be something explicit like bytes() or chars() or graphemes(). But they’re there for legacy reasons.
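
          To make that concrete, a hypothetical sketch of what such an explicit API could look like in Rust (the trait and the method names byte_len/char_count/grapheme_count are made up here, and the grapheme part leans on the unicode-segmentation crate):

          use unicode_segmentation::UnicodeSegmentation;

          // Hypothetical extension trait: every "length" has to be spelled out.
          trait ExplicitLen {
              fn byte_len(&self) -> usize;
              fn char_count(&self) -> usize;
              fn grapheme_count(&self) -> usize;
          }

          impl ExplicitLen for str {
              fn byte_len(&self) -> usize { self.len() }                          // raw UTF-8 bytes
              fn char_count(&self) -> usize { self.chars().count() }              // Unicode scalar values
              fn grapheme_count(&self) -> usize { self.graphemes(true).count() }  // extended grapheme clusters
          }

          fn main() {
              let s = "🤦🏼‍♂️";
              println!("{} {} {}", s.byte_len(), s.char_count(), s.grapheme_count()); // 17 5 1
          }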

        • Knusper@feddit.de

          That Rust function returns the number of codepoints, not the number of graphemes, and the codepoint count is rarely what you actually want. You need a facepalm emoji with a skin tone modifier to see the difference.

          The way to get a proper grapheme count in Rust is e.g. via this library: https://crates.io/crates/unicode-segmentation
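
          A short sketch of how that crate is typically used (it needs unicode-segmentation as a dependency):

          use unicode_segmentation::UnicodeSegmentation;

          fn main() {
              // Extended grapheme clusters: what a human would call "one character".
              println!("{}", "🤦🏼‍♂️".graphemes(true).count()); // 1
          }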

          • Djehngo@lemmy.world

            Makes sense. The code-point split is stable, meaning it's fine to put in the standard library; the grapheme split changes every year, so the volatility is probably better off in a crate.

            • Knusper@feddit.de

              Yeah, although having now seen two commenters claiming with relatively high confidence that counting codepoints ought to be enough…

              …and me almost having been the third such commenter, had I not decided to read the article first…

              …I'm starting to feel more and more like the stdlib should force you to jump through all kinds of hoops to get anything resembling the size of a string, so that you gladly reach for a library.

              Like, I've worked with decoding strings quite a bit in the past, so I felt like I had an above-average understanding of Unicode as a result. And I was still only vaguely aware of graphemes.

              • Turun@feddit.de

                For what it's worth, the documentation is very, very clear on what these methods return. It explicitly redirects you to crates.io for splitting into grapheme clusters. It would be much better to have it in std, but I understand the argument that std should only contain stable stuff.

                In a systems programming language, the .len() method should return the byte count IMO.

                • Knusper@feddit.de

                  The problem is when you think you know stuff, but you don't. I knew that counting bytes doesn't work, but thought the number of codepoints was what I wanted. And knowing that Rust uses UTF-8 internally, it's logical that .chars().count() gives the number of codepoints. No need to read the documentation if you're so smart. 🙃

                  It does give you the correct length in quite a lot of cases, too. Even the byte length looks correct for ASCII characters.

                  So, yeah, this would require a lot more consideration as to whether it's worth it, but I'm mostly thinking there'd be no .len() on the String type itself, and to get the byte count you'd have to do .as_bytes().len().
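
                  For what it's worth, that last spelling already compiles today; a tiny sketch of what the suggested style would look like:

                  fn main() {
                      let s = String::from("🤦🏼‍♂️");
                      // Under the suggestion there'd be no s.len(); you'd have to say
                      // explicitly that you want the byte count of the UTF-8 data.
                      println!("{}", s.as_bytes().len()); // 17
                  }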

      • Knusper@feddit.de

        Yeah, and as much as I understand the article saying there should be an easily accessible method for grapheme count, it’s also kind of mad to put something like this into a stdlib.

        Its behaviour would change with each new Unicode standard, and you'd have to upgrade the whole stdlib to keep up to date with the newest Unicode version.

        • Treeniks@lemmy.ml

          The way UTF-8 works is fixed though, isn’t it? A new Unicode standard should not change that, so as long as the string is UTF-8 encoded, you can determine the character count without needing to have the latest Unicode standard.

          Plus, in Rust you can instead use .chars().count(), as Rust's char type is a Unicode scalar value and strings are UTF-8 encoded.

          Edit: turns out one should read the article before commenting.

          • Knusper@feddit.de

            No offense, but did you read the article?

            You should at least read the section “Wouldn’t UTF-32 be easier for everything?” and the following two sections for the context here.

            So, everything you’ve said is correct, but it’s irrelevant for the grapheme count.
            And you should pretty much never need to know the number of codepoints.

            • Treeniks@lemmy.ml

              Yup, my bad. Frankly, I thought grapheme meant something else; rather stupid of me. I think I understand the issue now and agree with you.

              • Knusper@feddit.de

                No worries, I almost commented here without reading the article, too, and did not really know what graphemes are beforehand either. 🫠

        • ono@lemmy.ca

          It might make more sense to expose a standard library API for Unicode data provided by (and updated with) the operating system. Something like the time zone database.