• @[email protected]
    link
    fedilink
    7
    9 months ago

    Yeah, and as much as I understand the article's point that there should be an easily accessible method for counting graphemes, it’s also kind of mad to put something like this into a stdlib.

    Its behaviour will break with each new Unicode standard. And you’d have to upgrade the whole stdlib to keep up-to-date with the newest Unicode standards.

    • Treeniks
      link
      fedilink
      4
      edit-2
      9 months ago

      The way UTF-8 works is fixed though, isn’t it? A new Unicode standard should not change that, so as long as the string is UTF-8 encoded, you can determine the character count without needing to have the latest Unicode standard.

      Plus, in Rust you can use .chars().count() instead, since Rust’s char type is a Unicode scalar value and strings are UTF-8 encoded.
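      A minimal sketch of the difference between .len() and .chars().count(), using a combining accent just for illustration:

      ```rust
      fn main() {
          // "é" written as 'e' followed by U+0301 (combining acute accent)
          let s = "e\u{0301}";
          assert_eq!(s.len(), 3);           // bytes in the UTF-8 encoding
          assert_eq!(s.chars().count(), 2); // Unicode scalar values (char)
      }
      ```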

      turns out one should read the article before commenting

      • @[email protected]
        link
        fedilink
        6
        9 months ago

        No offense, but did you read the article?

        You should at least read the section “Wouldn’t UTF-32 be easier for everything?” and the following two sections for the context here.

        So, everything you’ve said is correct, but it’s irrelevant for the grapheme count.
        And you should pretty much never need to know the number of codepoints.
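        To illustrate with the emoji from the article, a small sketch that pulls in the unicode-segmentation crate (an external crate, not the stdlib) for the grapheme count:

        ```rust
        // Needs the unicode-segmentation crate (external, not part of std).
        use unicode_segmentation::UnicodeSegmentation;

        fn main() {
            // The facepalm emoji from the article:
            // U+1F926 U+1F3FC U+200D U+2642 U+FE0F
            let s = "\u{1F926}\u{1F3FC}\u{200D}\u{2642}\u{FE0F}";
            println!("bytes:      {}", s.len());                   // 17
            println!("codepoints: {}", s.chars().count());         // 5
            println!("graphemes:  {}", s.graphemes(true).count()); // 1
        }
        ```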

        • Treeniks
          link
          fedilink
          3
          9 months ago

          yup, my bad. Frankly I thought grapheme meant something else, rather stupid of me. I think I understand the issue now and agree with you.

          • @[email protected]
            link
            fedilink
            3
            9 months ago

            No worries, I almost commented here without reading the article, too, and did not really know what graphemes are beforehand either. 🫠

      • @[email protected]
        link
        fedilink
        2
        9 months ago

        Nope, the article says that what is and is not a grapheme cluster changes between Unicode versions each year :)

    • ono
      link
      fedilink
      English
      4
      9 months ago

      It might make more sense to expose a standard library API for Unicode data provided by (and updated with) the operating system, something like the time zone database.
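      Purely as a hypothetical sketch of that idea (none of these types or functions exist anywhere; the point is just that the Unicode tables would come from OS-managed data, like tzdata, instead of being baked into the stdlib):

      ```rust
      use std::io;

      // Hypothetical: segmentation tables loaded from OS-managed Unicode data,
      // analogous to reading the system time zone database.
      struct SystemUnicodeData { /* versioned grapheme segmentation tables */ }

      impl SystemUnicodeData {
          // Load whatever Unicode version the OS currently ships.
          fn load() -> io::Result<SystemUnicodeData> { unimplemented!() }

          // The Unicode version of the loaded tables.
          fn unicode_version(&self) -> (u8, u8, u8) { unimplemented!() }

          // Grapheme cluster count according to those tables.
          fn grapheme_count(&self, s: &str) -> usize { unimplemented!() }
      }
      ```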