• fubo@lemmy.world · 10 months ago

    Any time you’re turning a string of input into something else, what you are doing is parsing.

    Even if the word “parser” never appears in your code, the act of interpreting a string as structured data is parsing, and the code that does parsing is a parser.

    Programmers write parsers quite a lot, and many of the parsers they write are ad-hoc, ill-specified, bug-ridden, and can’t tell you why your input didn’t parse right.

    Writing a parser without realizing you’re writing a parser usually leads to writing a bad parser. Bad parsers do things like accepting malformed input that causes security holes. When bad parsers do reject malformed input, they rarely emit useful error messages about why it’s malformed. Bad parsers are often written using regex and duct tape.

    Try not to write bad parsers. If you need to parse something, consider writing a grammar and using a parser library. (If you’re very ambitious, try a parser combinator library.) But at least try to recall something about parsers you learned once way back in a CS class, before throwing regex at the problem and calling it a day.
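
    For a sense of what that shape looks like, here is a minimal hand-rolled sketch in Rust, with no parser library and a made-up key=value grammar purely for illustration. The point is that each step either consumes input or returns an error saying what it expected.

    ```rust
    // Result of a parse step: the parsed value plus the unconsumed input,
    // or a message saying what was expected.
    type PResult<'a, T> = Result<(T, &'a str), String>;

    // Consume one or more characters matching `pred`.
    fn take_while1<'a>(input: &'a str, pred: fn(char) -> bool, what: &str) -> PResult<'a, &'a str> {
        let end = input
            .char_indices()
            .find(|&(_, c)| !pred(c))
            .map(|(i, _)| i)
            .unwrap_or(input.len());
        if end == 0 {
            Err(format!("expected {what}, found {:?}", input.chars().next()))
        } else {
            Ok((&input[..end], &input[end..]))
        }
    }

    // Consume a single expected character.
    fn expect<'a>(input: &'a str, expected: char) -> PResult<'a, ()> {
        match input.chars().next() {
            Some(c) if c == expected => Ok(((), &input[c.len_utf8()..])),
            other => Err(format!("expected {expected:?}, found {other:?}")),
        }
    }

    // Parse a `key=value` pair, e.g. "retries=3".
    fn key_value(input: &str) -> PResult<(&str, &str)> {
        let (key, rest) = take_while1(input, |c| c.is_ascii_alphanumeric(), "a key")?;
        let ((), rest) = expect(rest, '=')?;
        let (value, rest) = take_while1(rest, |c| c.is_ascii_alphanumeric(), "a value")?;
        Ok(((key, value), rest))
    }

    fn main() {
        println!("{:?}", key_value("retries=3")); // Ok((("retries", "3"), ""))
        println!("{:?}", key_value("retries=?")); // Err("expected a value, found Some('?')")
    }
    ```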

    (And now the word “parser” no longer makes sense, because of semantic satiation.)

    By the way, please don’t write regex to try to validate email addresses. Seriously. There are libraries for that; some of them are even good. When people write their own regex to match email addresses, they do things like forget that the hyphen is a valid character in domain names.
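
    To make the hyphen failure concrete, here is the kind of hand-rolled pattern that goes wrong (a sketch assuming the regex crate; the pattern is an invented bad example, not taken from any real codebase):

    ```rust
    use regex::Regex; // assumes the `regex` crate is a dependency

    fn main() {
        // Looks plausible, but its author forgot that '-' is legal in domain names.
        let naive = Regex::new(r"^[A-Za-z0-9.]+@[A-Za-z0-9]+\.[A-Za-z]+$").unwrap();
        println!("{}", naive.is_match("user@example.com"));   // true
        println!("{}", naive.is_match("user@my-domain.com")); // false, but it's a valid address
    }
    ```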

    • ono@lemmy.ca · 10 months ago

      By the way, please don’t write regex to try to validate email addresses. Seriously.

      Amen.

      There are libraries for that; some of them are even good.

      Spoiler alert: Few of them are good, and those few are so simple that you might as well not use a library.

      The only way to correctly validate an email address is to send a message to it, and verify that it arrived.

      • Jesus_666@feddit.de · 10 months ago

        You can use a regex to do basic validation. That regex is .+@.+. Anything beyond that is a waste of time.
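
        The same check, sketched in plain Rust without even a regex engine: something non-empty on each side of an @, and nothing more.

        ```rust
        // Minimal validation, equivalent in spirit to .+@.+
        fn plausible_email(s: &str) -> bool {
            match s.rsplit_once('@') {
                Some((local, domain)) => !local.is_empty() && !domain.is_empty(),
                None => false,
            }
        }

        fn main() {
            assert!(plausible_email("user@example-domain.com"));
            assert!(!plausible_email("not-an-email"));
            assert!(!plausible_email("@example.com"));
        }
        ```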

          • Jesus_666@feddit.de · 10 months ago

            Which ones? In RFC 5322 every address contains an addr-spec at some point, which in turn must include an @. RFC 6854 does not seem to change this. Or did I misread something?

              • Jesus_666@feddit.de · 10 months ago

                And it’s matched by .+@.+ as it contains an @.

                Remember, we’re talking about regular expressions here, so .+ means “a sequence of one or more arbitrary characters”. It does not imply that an actual dot is present.

                (And I overlooked the edit. Oops.)

        • hansl@lemmy.world · 10 months ago

          There are also cases where you want to have a disallow list of known bad email providers. That’s also part of the parsing and validating.

          • Tramort · 10 months ago

            It’s a valid need, but a domain blacklist isn’t part of email parsing, and if you conflate the two inside your program then you’re mixing concerns.

            Why is the domain blacklist even in your program? It should be a user configurable file or a list of domains in the database.
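
            For example, a sketch along these lines (the file name is invented; one domain per line so operators can change it without a redeploy):

            ```rust
            use std::collections::HashSet;
            use std::fs;

            // Load a user-editable blocklist: one domain per line, '#' for comments.
            fn load_blocklist(path: &str) -> std::io::Result<HashSet<String>> {
                Ok(fs::read_to_string(path)?
                    .lines()
                    .map(|l| l.trim().to_ascii_lowercase())
                    .filter(|l| !l.is_empty() && !l.starts_with('#'))
                    .collect())
            }

            // Separate concern: given an already-validated address, is its domain blocked?
            fn domain_is_blocked(address: &str, blocklist: &HashSet<String>) -> bool {
                address
                    .rsplit_once('@')
                    .map(|(_, domain)| blocklist.contains(&domain.to_ascii_lowercase()))
                    .unwrap_or(false)
            }

            fn main() -> std::io::Result<()> {
                let blocked = load_blocklist("blocked_domains.txt")?; // hypothetical file
                println!("{}", domain_is_blocked("user@example.com", &blocked));
                Ok(())
            }
            ```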

            • Black616Angel@feddit.de · 10 months ago

              You are right in that it isn’t (or shouldn’t be) part of the parsing, but the program has to check the blacklist even if it’s in a database.

            • hansl@lemmy.world · 10 months ago

              We were discussing validation, not parsing. There’s no parsing in an email. You might give it a type once it passes validation, but an email is just a string with an @ in it (and probably a . somewhere, since you want at least a TLD, though I’m not even sure about that).

          • anti-idpol action · 10 months ago

            fuck any website that requires an account to just READ its stupid content and at the same time blocks guerrillamail/10minutemail (looking at you, Glassdoor, I don’t want to get fucking spam just so that I can check the approximate salary at a company)

            • hansl@lemmy.world · 10 months ago

              Sounds like your gripe is with people requiring accounts for reading public content, and not with preventing usage of automated email creation and trying to limit bots on your website.

          • ono@lemmy.ca · 10 months ago

            disallow list of known bad email providers.

            Imagine giving someone your phone number, and having them say you have to get a different one because they don’t like some of the digits in it.

            I have seen this nonsense more times than I care to remember. Please don’t build systems this way.

            If you’re trying to do bot detection or the like, use a different approach. Blacklisting email addresses based on domain or any other pattern does a poor job of it and creates an awful user experience.

            (And if it prevents people from using spam-fighting tools like forwarding services, then it’s directly user-hostile, and makes the world a worse place.)

      • fubo@lemmy.world · 10 months ago

        The only way to correctly validate an email address is to send a message to it, and verify that it arrived.

        If you’re accepting email addresses as user input (e.g. from a web form), it might be nice to check that what’s to the right of the rightmost @ sign is a domain name with an MX or A record. That way, if a user enters a typo’d address, you have some chance of telling them so instead of handing an email addressed to user#example.net or user@gmailc.om to your MTA.

        But the validity of the local-part (left of the rightmost @) is up to the receiving server.
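
        If you go that route, note that the standard library has no MX lookup, so this sketch only covers the “does the domain resolve at all” half; a real MX check would need a DNS resolver crate.

        ```rust
        use std::net::ToSocketAddrs;

        // Does whatever is right of the rightmost '@' resolve to any address?
        // This is only an A/AAAA check; it says nothing about MX records.
        fn domain_resolves(address: &str) -> bool {
            match address.rsplit_once('@') {
                Some((_, domain)) => {
                    // Port 25 is only here because ToSocketAddrs wants a port; we never connect.
                    (domain, 25)
                        .to_socket_addrs()
                        .map(|mut addrs| addrs.next().is_some())
                        .unwrap_or(false)
                }
                None => false,
            }
        }

        fn main() {
            println!("{}", domain_resolves("user@example.com")); // true, if DNS is reachable
            println!("{}", domain_resolves("user@gmailc.om"));   // very likely false
        }
        ```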

        • ono@lemmy.ca · 9 months ago

          Checking MX in your application means you needlessly fail on transient outages, like when a DNS server is rebooting or a net link hiccups. When it happens, the error flag your app puts on the user’s email address is likely to confuse or frustrate them, will definitely waste their time, and may drive them away and/or generate support calls.

          Also, MX records are not required. Edit to clarify: So checking MX in your application means you fail 100% of the time on some perfectly valid email domains. Good luck to the users and support staff who have to troubleshoot that, because there’s nothing wrong with the email address or domain; the problem is your application doing something it should not.

          Better to just hand the verification message off to your mail server, which knows how to handle these things. You can flag the address if your outgoing mail server refuses to accept it.

          • fubo@lemmy.world · 10 months ago

            If DNS is transiently down, the most common mail domains are still in local resolver cache. And if you’re parsing live user requests, that means the IP network itself is not in transient failure at the moment. So it takes a pretty narrow kind of failure to trigger a problem… And the outcome is the app tells the user to recheck their email address, they do, and they retry and it works.

            If DNS is having a worse problem, it’s probably down for your mail server too, which means an email would at least sit in the outbound mail spool for a bit until DNS comes back. Meanwhile the user is wondering where their confirmation email is, because people expect email delivery in seconds these days.

            So yeah … yay, tradeoffs!

            (Confirmation emails are still important for closed-loop opt-in, to make sure the user isn’t signing someone else up for your marketing department’s spam, though.)

    • grue@lemmy.world · 10 months ago

      By the way, please don’t write regex to try to validate email addresses. Seriously.

      Speaking of things you can’t parse with regex

      You can’t parse [X]HTML with regex. Because HTML can’t be parsed by regex. Regex is not a tool that can be used to correctly parse HTML. As I have answered in HTML-and-regex questions here so many times before, the use of regex will not allow you to consume HTML. Regular expressions are a tool that is insufficiently sophisticated to understand the constructs employed by HTML. HTML is not a regular language and hence cannot be parsed by regular expressions. Regex queries are not equipped to break down HTML into its meaningful parts. so many times but it is not getting to me. Even enhanced irregular regular expressions as used by Perl are not up to the task of parsing HTML. You will never make me crack. HTML is a language of sufficient complexity that it cannot be parsed by regular expressions. Even Jon Skeet cannot parse HTML using regular expressions. Every time you attempt to parse HTML with regular expressions, the unholy child weeps the blood of virgins, and Russian hackers pwn your webapp. Parsing HTML with regex summons tainted souls into the realm of the living. HTML and regex go together like love, marriage, and ritual infanticide. The <center> cannot hold it is too late. The force of regex and HTML together in the same conceptual space will destroy your mind like so much watery putty. If you parse HTML with regex you are giving in to Them and their blasphemous ways which doom us all to inhuman toil for the One whose Name cannot be expressed in the Basic Multilingual Plane, he comes. HTML-plus-regexp will liquify the n​erves of the sentient whilst you observe, your psyche withering in the onslaught of horror. Rege̿̔̉x-based HTML parsers are the cancer that is killing StackOverflow it is too late it is too late we cannot be saved the transgression of a chi͡ld ensures regex will consume all living tissue (except for HTML which it cannot, as previously prophesied) dear lord help us how can anyone survive this scourge using regex to parse HTML has doomed humanity to an eternity of dread torture and security holes using regex as a tool to process HTML establishes a breach between this world and the dread realm of c͒ͪo͛ͫrrupt entities (like SGML entities, but more corrupt) a mere glimpse of the world of reg​ex parsers for HTML will ins​tantly transport a programmer’s consciousness into a world of ceaseless screaming, he comes, the pestilent slithy regex-infection wil​l devour your HT​ML parser, application and existence for all time like Visual Basic only worse he comes he comes do not fi​ght he com̡e̶s, ̕h̵i​s un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment, HTML tags lea͠ki̧n͘g fr̶ǫm ̡yo​͟ur eye͢s̸ ̛l̕ik͏e liq​uid pain, the song of re̸gular exp​ression parsing will exti​nguish the voices of mor​tal man from the sp​here I can see it can you see ̲͚̖͔̙î̩́t̲͎̩̱͔́̋̀ it is beautiful t​he final snuffing of the lie​s of Man ALL IS LOŚ͖̩͇̗̪̏̈́T ALL I​S LOST the pon̷y he comes he c̶̮omes he comes the ich​or permeates all MY FACE MY FACE ᵒh god no NO NOO̼O​O NΘ stop the an​*̶͑̾̾​̅ͫ͏̙̤g͇̫͛͆̾ͫ̑͆l͖͉̗̩̳̟̍ͫͥͨe̠̅s ͎a̧͈͖r̽̾̈́͒͑e n​ot rè̑ͧ̌aͨl̘̝̙̃ͤ͂̾̆ ZA̡͊͠͝LGΌ ISͮ̂҉̯͈͕̹̘̱ TO͇̹̺ͅƝ̴ȳ̳ TH̘Ë͖́̉ ͠P̯͍̭O̚​N̐Y̡ H̸̡̪̯ͨ͊̽̅̾̎Ȩ̬̩̾͛ͪ̈́̀́͘ ̶̧̨̱̹̭̯ͧ̾ͬC̷̙̲̝͖ͭ̏ͥͮ͟Oͮ͏̮̪̝͍M̲̖͊̒ͪͩͬ̚̚͜Ȇ̴̟̟͙̞ͩ͌͝S̨̥̫͎̭ͯ̿̔̀ͅ

      Have you tried using an XML parser instead?

    • abhibeckert@lemmy.world · 10 months ago

      Programmers write parsers quite a lot

      Speak for yourself. I’ve done it exactly once. It didn’t work, and never shipped. I learned my lesson and always use a parser that someone else wrote, usually one built by a team of at least thousands of people (how many people have worked on JSON? Millions? What about UTF-8? Those are the main two I use).

      • Oscar · 10 months ago

        So you have never iterated over command line arguments and tried to identify options? Or taken a string input field?
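
        Even the throwaway version of that is a little parser (the --verbose flag here is hypothetical):

        ```rust
        // An ad-hoc command line scan: structurally, this is already a tiny parser.
        fn main() {
            let mut verbose = false;
            let mut inputs: Vec<String> = Vec::new();
            for arg in std::env::args().skip(1) {
                match arg.as_str() {
                    "-v" | "--verbose" => verbose = true, // hypothetical flag
                    other => inputs.push(other.to_string()),
                }
            }
            println!("verbose={verbose}, inputs={inputs:?}");
        }
        ```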

  • darkmatternoodlecow · 10 months ago

    Yeah! And integers do way too many things as well. Counters, indexes, number of orange slices in an orange, there’s just no end to the wacky things people try to make integers do, and it’s impossible to keep track of it all when looking at code. And floats? Don’t get me started on floats. Angles, probabilities, weights, heights, degrees of separation from Kevin Bacon … I’m getting dizzy just listing all these different things that floats do.

    It’s a big problem, because there isn’t an easy way to fix it in every programming language known to man, and someone needs to write more articles about this to get more hits for their sites.

    • tias@discuss.tchncs.de · 10 months ago

      I think the article makes a good point, but perhaps I’m over-interpreting. It’s not that we should stop using strings. It’s that we should use the type system to separate different kinds of strings and enlist the compiler’s help to detect incorrect mingling of them. So for example a symbol type would only permit strings that contain ASCII letters, underscore and digits, and concatenation with / conversion to plain strings would be limited.
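
      A sketch of that idea in Rust, using the comment’s own rules for what a symbol may contain (the names are invented, not from any particular library):

      ```rust
      // A Symbol can only be built from ASCII letters, digits, and underscores,
      // so the compiler stops arbitrary strings from flowing into symbol-typed APIs.
      #[derive(Debug, Clone, PartialEq, Eq, Hash)]
      pub struct Symbol(String);

      impl Symbol {
          pub fn new(s: &str) -> Result<Self, String> {
              let ok = !s.is_empty()
                  && s.chars().all(|c| c.is_ascii_alphanumeric() || c == '_');
              if ok {
                  Ok(Symbol(s.to_string()))
              } else {
                  Err(format!("{s:?} is not a valid symbol"))
              }
          }

          // Conversion back to a plain string is explicit and deliberate.
          pub fn as_str(&self) -> &str {
              &self.0
          }
      }

      // A function taking `Symbol` can no longer be handed a raw, unchecked String.
      fn main() {
          println!("{:?}", Symbol::new("order_id"));  // Ok(Symbol("order_id"))
          println!("{:?}", Symbol::new("order id!")); // Err with an explanation
      }
      ```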

    • chickenf622@sh.itjust.works · 10 months ago

      It’s definitely a rule that can be taken so far that it becomes counterproductive, but I think it’s good practice to think about how I could use something other than a raw string (even if it’s just a constant defined somewhere).

    • dudinax · 10 months ago

      What’s the meaning of a fractional “Degree of Kevin Bacon”?

      • Kogasa · 10 months ago

        The analytic continuation of KB(x) to the complex plane subject to a superconvexivity constraint is unique but doesn’t necessarily have a straightforward geometric interpretation

    • corytheboyd@kbin.social · 10 months ago

      You joke, but Rails actually does make Integer do too many things lol. I’d argue they’re useful things, but it does so by patching the core Ruby Integer class :p

  • corytheboyd@kbin.social · 10 months ago

    Strings became ubiquitous for a reason: they map really clearly to the way we think as humans. Most importantly, when you’re debugging, seeing string data is much friendlier than whatever data your symbols map to (usually integers, from enum structures).

    No, obviously it’s not the most efficient thing in the world, but it hardly matters, and you’re not getting anyone to stop because you’re “technically right”.

    • Pipoca@lemmy.world · 9 months ago

      Symbols display with friendly string-y names in a number of languages. Clojure, for example, has a symbol type.

      And a number of languages display friendly strings for enumy things - Scala, Haskell, and Rust spring to mind.

      The problem with strings over enums with a nice debugging display is that the string type is too wide. Strings don’t tell you what values are valid, strings don’t catch typos at compile time, and they’re murder when refactoring.

      Clojure symbols are good at differentiating between symbolly things and strings, though they don’t catch typos.

      The other problem the article mentions with using strings over a proper struct/ADT/class hierarchy is that strings don’t really have any structure to them. Concatenating strings is brittle compared to building up an AST and then rendering it at the end.
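
      A sketch of both points in Rust, with invented names: an enum whose debug output is still a friendly string, and a tiny filter AST that gets rendered at the end instead of being concatenated along the way.

      ```rust
      // Debug output prints "Shipped", and a typo like `Status::Shiped` is a
      // compile error, unlike the string "shiped".
      #[derive(Debug, Clone, Copy, PartialEq, Eq)]
      enum Status {
          Pending,
          Shipped,
          Cancelled,
      }

      // Build structure first, render last, instead of concatenating fragments.
      enum Filter {
          Eq(&'static str, String),
          And(Box<Filter>, Box<Filter>),
      }

      fn render(f: &Filter) -> String {
          match f {
              Filter::Eq(col, val) => format!("{col} = {val:?}"),
              Filter::And(a, b) => format!("({} AND {})", render(a), render(b)),
          }
      }

      fn main() {
          println!("{:?}", Status::Shipped);
          let f = Filter::And(
              Box::new(Filter::Eq("status", "shipped".into())),
              Box::new(Filter::Eq("country", "NZ".into())),
          );
          println!("{}", render(&f)); // (status = "shipped" AND country = "NZ")
      }
      ```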

      Edit: autocorrect messed a few things up I didn’t catch.

  • thingsiplay@beehaw.org · 10 months ago

    Sounds like a skill issue, and it depends on the context. Is this article about JavaScript? I recommend trying out Rust. Strings only do too many things if you make them. Create your own types and enums. An example missing from the article is file or URL paths. Use the standard libraries and types that are given to you.
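
    For the file-path case, that means passing around std::path::Path and PathBuf instead of raw strings, as in this small sketch (the paths and file names are made up):

    ```rust
    use std::path::{Path, PathBuf};

    // Joining, extensions, and platform separators are the type's job,
    // not string concatenation's.
    fn config_file(dir: &Path) -> PathBuf {
        dir.join("config.toml")
    }

    fn main() {
        let p = config_file(Path::new("/etc/myapp"));
        println!("{}", p.display());     // /etc/myapp/config.toml (on Unix)
        println!("{:?}", p.extension()); // Some("toml")
    }
    ```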

  • Venator@lemmy.nz · 10 months ago

    Unless you’re using assembly, strings do everything, since the code files are also strings.