Referring more to smaller places like my own: a few hundred employees, with a ~20-person IT team (~10 developers).

I read enough about testing that it seems industry standard. But whenever I talk to coworkers and my EM, it’s generally, “That would be nice, but it’s not practical for our size, and the business wouldn’t allow us to slow down for that.” We have ~5 manual testers, so things aren’t considered “untested”, but issues still frequently slip through. It’s insurance software, so at least bugs aren’t killing people, but our quality still freaks me out a bit.

I try to write automated tests for my own code, since it seems valuable, but I avoid it whenever it’s not straightforward. I’ve read books on testing, but they generally feel like either toy examples or far more effort than my company would be willing to spend. Over time I’ve started to wonder if I’m just overly idealistic, and automated testing is more of a FAANG / bigger-company thing.

  • apotheotic (she/her)@beehaw.org

    My team follows test-driven development, so I write a test before writing the feature that the test, well, tests.

    This leads to cleaner code in general, because it tends to be the case that easy-to-test code is also easy to read.

    On top of that, the test suite acts as a sort of “contract” for the code’s behaviour. If I tweak the code and a test no longer passes, then my code is doing something fundamentally different. This “contract” ensures that changes to one codebase aren’t going to break downstream applications, and makes us very aware of when we are making breaking changes so we can inform downstream teams.

    Writing tests and having them run at PR time (or before the code is deployed to production, if you’re not using some sort of VCS and CI/CD) should absolutely be a part of your dev cycle. It’s better for everyone involved!
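
    For the OP, a minimal sketch of what that looks like in practice (pytest; monthly_premium is a made-up example function, since you mentioned insurance software):

    ```python
    import pytest

    # In TDD, the tests below are written first; the implementation is
    # then written to make them pass.
    def monthly_premium(annual_premium: float, installments: int) -> float:
        """Split an annual premium into equal monthly installments."""
        if installments <= 0:
            raise ValueError("installments must be positive")
        return round(annual_premium / installments, 2)

    def test_splits_annual_premium_evenly():
        assert monthly_premium(1200.00, 12) == 100.00

    def test_rejects_nonpositive_installments():
        with pytest.raises(ValueError):
            monthly_premium(1200.00, 0)
    ```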

    • hollyberries

      Doesn’t this rely purely on the fact that the test is right?

      • OhNoMoreLemmy@lemmy.ml

        Yeah, debugging tests is an important part of test driven development.

        You also have to be careful. Some tests are for me to debug my code and aren’t part of the ‘contract’.

        But on the other hand, it’s really nice. If I spend a couple of hours debugging actual code and come out of the process with internal tests, then the next time it breaks, those tests make it much easier to identify what broke. Previously, that effort would have been almost wasted: you fix it and just hope it never breaks again.
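
        For example (hypothetical scenario and names, in pytest): say the bug was a date parser silently accepting two-digit years. The test that comes out of the debugging session pins the fix down:

        ```python
        import pytest
        from datetime import date

        def parse_policy_date(raw: str) -> date:
            """Parse 'YYYY-MM-DD'; used to silently misread two-digit years."""
            year, month, day = (int(part) for part in raw.split("-"))
            if year < 100:  # the fix: reject what the old code quietly accepted
                raise ValueError(f"two-digit year in {raw!r}, expected YYYY-MM-DD")
            return date(year, month, day)

        def test_rejects_two_digit_years():
            # Regression test from the debugging session: if this ever breaks
            # again, the failure points straight at the cause.
            with pytest.raises(ValueError):
                parse_policy_date("24-01-15")
        ```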

      • apotheotic (she/her)@beehaw.org

        Yeah, but it isn’t usually very difficult to write a test correctly, unit tests especially.

        If you can’t write a test to validate the behaviour that you know your application needs to exhibit, then you probably can’t write the code to create that behaviour in the first place. Or, in a less binary sense, if you would write a test which isn’t “right”, you’re probably just as likely to have written code that isn’t “right”.

        At least in the case with the test, you write the test and the code, and when the test fails (or doesn’t fail when it should) you’re tipped off to something being funky.

        I’m sure you could end up writing a test that’s bad in just the right way to end up doing more harm than good, but I do think that’s the exception (heh).

        • yournameplease (OP)

          We’ve definitely written lots of tests that felt like a net negative, and I think that’s part of what burned some devs out on testing. When I joined, the few tests we had were “read a huge JSON file, run it through everything, and assert seemingly random numbers match.” Not random, exactly, but the logic was so complex that the only sane way to update the tests when code changed was to rerun and copy the new output. (I suppose this is pretty similar to approval testing, which I do find useful for code areas that shouldn’t change frequently; rough sketch at the end of this comment.)

          Similar issue with integration tests mocking huge network requests. Either you assert the request body matches an expected one and have to update it whenever the signature changes (fairly common), or you ignore the body, which feels like a much less useful test.

          I agree unit tests are hard to mess up, which is why I mostly gravitate to them. And TDD is fun when I actually do it properly.
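
          Re: approval testing, the version I find useful is pretty lightweight (rough sketch, all names hypothetical): compare output against a stored snapshot, and regenerate the snapshot deliberately when behaviour is actually meant to change.

          ```python
          import json
          from pathlib import Path

          GOLDEN = Path(__file__).parent / "golden" / "rating_output.json"

          def rate_policies(policies):
              """Stand-in for the complex rating logic under test."""
              return [{"id": p["id"], "premium": p["base"] * p["factor"]}
                      for p in policies]

          def test_rating_matches_approved_output():
              policies = [{"id": 1, "base": 100.0, "factor": 1.2},
                          {"id": 2, "base": 250.0, "factor": 0.9}]
              actual = rate_policies(policies)
              if not GOLDEN.exists():
                  # First run: write the snapshot, then review it by hand.
                  GOLDEN.parent.mkdir(exist_ok=True)
                  GOLDEN.write_text(json.dumps(actual, indent=2))
              assert actual == json.loads(GOLDEN.read_text())
          ```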

          • apotheotic (she/her)@beehaw.org

            I hear you. When you’re trying to write one big test that verifies the whole code flow or whatever, it can be HELL, especially if the code has been written in a way that makes it difficult to write a robust test.

            God, big mocks are the WORST. It might not be applicable in your case, but I far prefer doing some setup and teardown so that I’m actually making the network request against a test endpoint that I spin up in the setup stage; see the sketch below. That way you know the issues aren’t cropping up due to some mocking nonsense going wrong.
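
            Roughly like this (a Python sketch using a stdlib HTTP server; the fixture and service names are made up):

            ```python
            import json
            import threading
            import urllib.request
            from http.server import BaseHTTPRequestHandler, HTTPServer

            import pytest

            class FakeQuoteService(BaseHTTPRequestHandler):
                """A tiny real endpoint for tests to talk to, instead of a mock."""
                def do_POST(self):
                    body = self.rfile.read(int(self.headers["Content-Length"]))
                    self.send_response(200)
                    self.send_header("Content-Type", "application/json")
                    self.end_headers()
                    self.wfile.write(json.dumps({"echo": json.loads(body)}).encode())

                def log_message(self, *args):  # keep test output quiet
                    pass

            @pytest.fixture
            def quote_service_url():
                server = HTTPServer(("127.0.0.1", 0), FakeQuoteService)  # port 0 = any free port
                threading.Thread(target=server.serve_forever, daemon=True).start()
                yield f"http://127.0.0.1:{server.server_port}"
                server.shutdown()  # teardown

            def test_round_trip(quote_service_url):
                req = urllib.request.Request(
                    quote_service_url,
                    data=json.dumps({"policy": 42}).encode(),
                    headers={"Content-Type": "application/json"},
                )
                with urllib.request.urlopen(req) as resp:
                    assert json.loads(resp.read())["echo"] == {"policy": 42}
            ```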

            Asserting that some arbitrary numbers match can be quite fragile, as I’m sure you’ve experienced. But if the code itself had been written in such a way that you had an easier assertion to make, well, winner!

            It’s all easier said than done, of course, but your colleagues having given up on testing because they’re bad at it is kinda disheartening, I bet. How are you gonna get good at it if you don’t do it! :D

            • yournameplease (OP)

              especially if the code has been written in a way that makes it difficult to write a robust test.

              I definitely deserve a lot of blame for designing my primary project in a way that makes it hard to test. So, word to the wise (though it doesn’t take a genius to figure this out): don’t tell two fresh grads and a 1-YoE junior to “break the legacy app into microservices” with minimal oversight. If I did things again, I still think the only sane decision would be to cancel the project as soon as possible. x.x

              I actually was using a mock webserver with the expected request/response, which sounds like what you’re getting at. It still felt fiddly, though, and it doesn’t solve the huge-mock-data problem, which is more an architectural failing.

              I’ve mostly gotten away from testing huge methods with seemingly arbitrary numbers in favor of testing small methods with slightly less arbitrary numbers, which feels like a pretty big improvement.

              How are you gonna get good at it if you don’t do it! :D

              True. :)

              • apotheotic (she/her)@beehaw.org

                Hahahahaha I feel that re: just kill the project!

                Ah I thought you were just mocking the response, as opposed to having some real webserver so you don’t have to faff with mocking stuff. Sounds like you did what I would have :P

                That does sound like a big improvement! Anything you can do to make your own job easier.

        • hollyberries

          I’m sure you could end up writing a test that’s bad in just the right way to end up doing more harm than good, but I do think that’s the exception(heh).

          That’s exactly why I asked. That is where I’ve gone wrong with TDD in the past, especially where any sort of math is involved, since I’m absolutely horrible at it (and I do game dev these days!). I can problem-solve and write the code; I just can’t manually proof the math without external help, and I’ve spent countless hours looking for where my issue was while being 100% certain that the formula or algorithm was correct >.<

          Nowadays, anytime numbers are involved, I write the tests after doing manual tests multiple times and getting the expected result, and/or after having an LLM check the work and make suggestions. That in itself sometimes introduces more issues, since the LLM can also be wrong. Probably should have paid attention in school all those years ago lol
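
          So the tests end up looking something like this (a made-up geometry example; every expected value was checked by hand, or with outside help, before being written down):

          ```python
          import math
          import pytest

          def launch_angle(dx: float, dy: float) -> float:
              """Angle in degrees from the origin to a target at (dx, dy)."""
              return math.degrees(math.atan2(dy, dx))

          def test_values_verified_manually():
              # Expected values were worked out by hand before being pinned here.
              assert launch_angle(1.0, 1.0) == pytest.approx(45.0)
              assert launch_angle(1.0, 0.0) == pytest.approx(0.0)

          def test_mirror_symmetry():
              # A sanity property that must hold even without proofing the math:
              # mirroring the target vertically should mirror the angle.
              assert launch_angle(3.0, 2.0) == pytest.approx(-launch_angle(3.0, -2.0))
          ```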

          • apotheotic (she/her)@beehaw.org

            Aw man, I can empathise. I don’t personally have any issues with mathsy stuff, but I can imagine it being a huge brick wall at times, especially in game dev. I wish I had advice for that, but it’s not a problem I’ve had to solve!

          • yournameplease (OP)

            Game dev seems like a place where testing is a bit less common due to the need for fast iteration and prototyping, though that’s not to say it isn’t valuable.

            I’ve seen a good talk (I think GDC?) on how the Talos Principle devs developed a tool to replay inputs for acceptance testing. I can’t seem to find the talk, but here is a demo of the tool.

            The Factorio devs also have some testing discussions in their blog somewhere.
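
            The core of the replay idea is small enough to sketch (a generic illustration, not how the Talos tool actually works): record a stream of inputs once, replay them through deterministic game logic, and assert you end up in the known-good state.

            ```python
            def step(state: dict, command: str) -> dict:
                """Deterministic update: the same inputs always produce the same state."""
                x, y = state["pos"]
                moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
                dx, dy = moves.get(command, (0, 0))
                return {"pos": (x + dx, y + dy)}

            RECORDED_INPUTS = ["up", "up", "right", "down"]  # captured from a real session
            EXPECTED_FINAL_STATE = {"pos": (1, 1)}           # state observed when recording

            def test_replay_reaches_recorded_state():
                state = {"pos": (0, 0)}
                for command in RECORDED_INPUTS:
                    state = step(state, command)
                assert state == EXPECTED_FINAL_STATE
            ```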

            • hollyberries

              The Talos Principle video was interesting to watch, thanks for the link! It shed a little light on automated testing.

              There’s also someone on YouTube who has been teaching an AI how to walk and solve puzzles on its own; the channel name escapes me, and I’m nowhere near a working computer to look it up at the moment :(

      • Ephera@lemmy.ml

        You should think of an automated test as a specification. If you’ve got the wrong requirements, or simply make a mistake while formulating the test, then yeah, it can’t protect you from that.
        But you’d likely make a similar or worse mistake if you implemented the production code without a specification.

        The advantage of automated tests over a specification document is that you get continuous checks that your production code matches its specification. So at least it can’t be wrong in that sense.

      • RonSijm

        Sure, but testing usually relies purely on whether your assumptions are right or not, whether you do it automatically or manually.

        Say you’re manually testing a login form, for example: you assume that you’ve filled in the correct credentials, but you didn’t, and the form still lets you continue. You’ve failed the testing, because your assumption is wrong.

        Even if the specs are wrong and you write a test for them (let’s say, in a calculator, Calculate(2+2).Should().Equal(5)), if that’s your assumption based on the specs, you can start up the calculator, manually click through its UI, code something that returns 5, and deliver it.

        Then once someone corrects you, you have to start the whole thing over: open up the calculator, click through the UI, do the input, and now it’s 4, yay!

        If you had just written a test, even one relying on a spec that was wrong, it’s still very easy to change the test and fix the assumption.

        Also, let’s say next sprint you have to build a deduct function in the calculator, and it breaks the + operation. Now you have to re-test all operations manually to check you didn’t break anything else. If there were unit tests covering, say, 100 different operations, you just run them all, see they’re all still good, and you’re done.
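
        That last part is where parametrised tests shine: the whole operation table reruns in seconds. A sketch (with a made-up calculate function):

        ```python
        import pytest

        def calculate(a: float, op: str, b: float) -> float:
            """Toy calculator; adding 'deduct' must not break the other operations."""
            ops = {"+": a + b, "-": a - b, "*": a * b, "deduct": a - b}
            return ops[op]

        @pytest.mark.parametrize("a, op, b, expected", [
            (2, "+", 2, 4),
            (3, "*", 4, 12),
            (2, "deduct", 5, -3),
            # ...in practice, one row per operation and edge case, easily 100+ rows
        ])
        def test_operations(a, op, b, expected):
            assert calculate(a, op, b) == expected
        ```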