Referring more to smaller places like my own: a few hundred employees, with a ~20-person IT team (~10 developers).

I’ve read enough about testing that it seems like the industry standard. But whenever I talk to coworkers and my EM, the response is generally, “That would be nice, but it’s not practical at our size, and the business wouldn’t allow us to slow down for it.” We have ~5 manual testers, so things aren’t considered “untested,” but issues still frequently slip through. It’s insurance software, so at least bugs aren’t killing people, but our quality still freaks me out a bit.

I try to write automated tests for my own code, since it seems valuable, but I avoid it whenever it isn’t straightforward. I’ve read books on testing, but they generally feel like either toy examples or far more effort than my company would be willing to spend. Over time I’ve started wondering whether I’m just overly idealistic, and automated testing is more of a FAANG / bigger-company thing.

  • FizzyOrange · 7 months ago

    Very common. Your coworkers are either idiots or, more likely, just being lazy: they can’t be bothered to set it up and are coming up with excuses.

    The one exception I’ll allow is GUI programs. It’s extremely difficult to write automated tests for them, and in my experience it’s such a pain that manual testing is often less annoying. For example, VSCode has no automated UI tests as far as I know.

    That will probably change once AI-based GUI testing becomes common, but it isn’t yet.

    For anything else, you should 100% have automated tests running in CI, and if you don’t, you’re doing it wrong.

    • yournameplease (OP) · 7 months ago

      Leadership may be idiots, but the devs are mostly just burnt out; they’ve recognized that quality isn’t a very high priority and know not to take too much pride in the product. I think it’s my own problem that I have a hard time separating my pride from my work.

      Thanks for the response. It’s good to know that my experience here isn’t super common.

    • Ephera@lemmy.ml · 7 months ago

      Our standard practice is to introduce a thin layer in front of any I/O code, so that we can mock/simulate that part in tests.

      So, if your database library has an insert() function, you’d introduce an interface/trait with an insert() function whose default implementation just calls that database library and nothing else. And then in the test, you stick your assertions behind that trait.
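
      A minimal sketch of what that can look like, in Rust since we’re talking about traits. Every name here (DbClient, UserStore, register_user) is made up for illustration; substitute whatever your actual database library exposes:

      ```rust
      // Stand-in for the real database library's client (hypothetical API).
      struct DbClient;

      impl DbClient {
          fn insert(&mut self, _table: &str, _row: &str) -> Result<(), String> {
              // The real library would issue SQL here; this placeholder just succeeds.
              Ok(())
          }
      }

      /// The thin layer: business code depends on this trait, not on the library.
      trait UserStore {
          fn insert_user(&mut self, name: &str) -> Result<(), String>;
      }

      /// Production implementation: forwards to the database library and nothing else.
      impl UserStore for DbClient {
          fn insert_user(&mut self, name: &str) -> Result<(), String> {
              self.insert("users", name)
          }
      }

      /// Business logic only ever sees the trait.
      fn register_user(store: &mut impl UserStore, name: &str) -> Result<(), String> {
          if name.is_empty() {
              return Err("name must not be empty".into());
          }
          store.insert_user(name)
      }

      #[cfg(test)]
      mod tests {
          use super::*;

          /// Test double: records inserts instead of touching a database.
          struct FakeStore {
              inserted: Vec<String>,
          }

          impl UserStore for FakeStore {
              fn insert_user(&mut self, name: &str) -> Result<(), String> {
                  self.inserted.push(name.to_string());
                  Ok(())
              }
          }

          #[test]
          fn inserts_nonempty_name() {
              let mut store = FakeStore { inserted: vec![] };
              register_user(&mut store, "alice").unwrap();
              assert_eq!(store.inserted, vec!["alice".to_string()]);
          }

          #[test]
          fn rejects_empty_name() {
              let mut store = FakeStore { inserted: vec![] };
              assert!(register_user(&mut store, "").is_err());
          }
      }
      ```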

      So, most of the time we don’t actually test the interaction with outside systems, because:

      • that database library is itself tested,
      • the compiler ensures we’re calling the library correctly (assuming we’re not using a scripting language), and
      • it’s often easier to simulate the behavior of the outside system correctly than to set it up for each test case.

      We do usually aim to get integration tests against all outside systems going too, to make sure we’re not completely off the mark with the behavior we’re simulating, but those are often reduced to just the happy flow.
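
      Continuing the hypothetical names from the sketch above, such a happy-flow integration test can stay tiny, e.g. kept behind #[ignore] so it only runs when the outside system is actually reachable:

      ```rust
      // Lives in the same test module as the sketch above; names are still hypothetical.
      #[test]
      #[ignore] // run with `cargo test -- --ignored` when the real system is available
      fn happy_flow_against_real_system() {
          // In real code this would be built from an actual connection/config;
          // the unit-struct placeholder above just stands in for that setup.
          let mut client = DbClient;
          register_user(&mut client, "alice").expect("happy flow should succeed");
      }
      ```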