Are there any formal ways of quantifying potential flaws, or risk, and of ensuring there’s a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure, or a risk assessment of some kind?

Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.

But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?

  • mattburkedev · 8 points · 1 year ago

    The most extreme examples of the problem are tests with no assertions. Fortunately these are uncommon in most code bases.
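
    For illustration, a minimal sketch of what such a test looks like, assuming JUnit 5 and a couple of hypothetical types (Order, OrderService) invented for the example; it runs the code and counts toward coverage, but can only fail if something throws:

        import org.junit.jupiter.api.Test;

        // Minimal stand-ins so the sketch compiles; purely hypothetical.
        record Order(String id, int quantity) {}

        class OrderService {
            void process(Order order) { /* imagine real logic here */ }
        }

        class OrderServiceTest {

            // Executes the code under test but asserts nothing, so any
            // non-crashing behaviour passes.
            @Test
            void processesAnOrder() {
                new OrderService().process(new Order("abc-123", 3));
            }
        }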

    Every enterprise I’ve consulted for that had code coverage requirements was full of elaborate mock-heavy tests with a single Assert.NotNull at the end. Basically just testing that you wrote the right mocks!
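
    The pattern looks something like this sketch, assuming JUnit 5 and Mockito, with hypothetical collaborators (CustomerRepository, PricingGateway, QuoteService) made up for the example: all the effort goes into wiring mocks, and the lone assertNotNull passes no matter what the service actually computes.

        import static org.junit.jupiter.api.Assertions.assertNotNull;
        import static org.mockito.Mockito.*;

        import org.junit.jupiter.api.Test;

        // Hypothetical collaborators and types, just for the sketch.
        record Customer(String id) {}
        record Quote(double amount) {}
        interface CustomerRepository { Customer findById(String id); }
        interface PricingGateway { double priceFor(Customer customer); }

        class QuoteService {
            private final CustomerRepository customers;
            private final PricingGateway pricing;

            QuoteService(CustomerRepository customers, PricingGateway pricing) {
                this.customers = customers;
                this.pricing = pricing;
            }

            Quote quoteFor(String customerId) {
                return new Quote(pricing.priceFor(customers.findById(customerId)));
            }
        }

        class QuoteServiceTest {

            // Elaborate mock setup, then a single null check: the test passes
            // as long as the mocks return something, regardless of what
            // quoteFor() actually computes.
            @Test
            void returnsAQuote() {
                CustomerRepository customers = mock(CustomerRepository.class);
                PricingGateway pricing = mock(PricingGateway.class);
                when(customers.findById("42")).thenReturn(new Customer("42"));
                when(pricing.priceFor(any())).thenReturn(99.0);

                Quote quote = new QuoteService(customers, pricing).quoteFor("42");

                assertNotNull(quote); // full coverage of quoteFor(), verifies almost nothing
            }
        }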

    • MagicShel · 6 points · 1 year ago

      That’s exactly the sort of shit test that mutation testing is designed to address. Believe me, it sucks when Sonar requires a 90% PIT mutation score. Sometimes the tests can get extremely elaborate, which should be a red flag for the design (though not necessarily a sign of bad code).

      Anyway, I love what PIT does. I hate being required to use it, but it’s a good thing.
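
      As a sketch of the kind of gap PIT exposes (hypothetical Discount example, JUnit 5 assumed): the first test alone gives full line coverage, but PIT's conditionals-boundary mutator flips >= to > and that mutant survives until a second test pins the boundary.

          import static org.junit.jupiter.api.Assertions.assertEquals;

          import org.junit.jupiter.api.Test;

          class DiscountTest {

              // Production logic inlined for the sketch; normally it lives in its own class.
              static double rateFor(double orderTotal) {
                  return orderTotal >= 100.0 ? 0.10 : 0.0;
              }

              // Covers the line, but the ">=" -> ">" mutant survives because
              // 150.0 is nowhere near the boundary.
              @Test
              void bigOrdersGetADiscount() {
                  assertEquals(0.10, rateFor(150.0));
              }

              // Killing that mutant takes a test at the boundary itself.
              @Test
              void discountStartsAtExactlyOneHundred() {
                  assertEquals(0.10, rateFor(100.0));
              }
          }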

    • Deely · 1 point · 1 year ago

      Yeah, it’s always the same: create a lazy metric, get lazy and useless results.