Are there any formal ways of quantifying potential flaws, or risk, and ensuring there’s a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure, or a risk assessment of some kind?

Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.

But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?
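
To make that concrete, the sort of thing I’m imagining is a lightweight risk register: score each area by likelihood and impact, and let the score drive where tests need to be deepest, rather than relying on a single coverage number. A toy sketch (the component names, scales and scores below are entirely made up):

```python
# Illustrative only: a toy risk register. Component names, scales and
# scores are invented; a real one would come from an actual assessment.
from dataclasses import dataclass

@dataclass
class Risk:
    component: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (trivial) .. 5 (severe: money, PII, public surface)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    Risk("public_api_input_validation", likelihood=4, impact=5),
    Risk("payment_batch_migration", likelihood=2, impact=5),
    Risk("third_party_geocoding_client", likelihood=3, impact=2),
    Risk("internal_admin_report", likelihood=2, impact=1),
]

# Highest scores first: the areas where test depth (not just line
# coverage) should be concentrated.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.component:35} risk={r.score}")
```

Something like cyclomatic complexity (or churn × complexity) could feed the likelihood column, which is roughly what I mean by a complexity measure.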

  • @[email protected]
    link
    fedilink
    English
    201 year ago

    80%. Much beyond that and you get diminishing returns on the investment of writing the tests.

    • @canpolat · 11 points · edited · 1 year ago

      I think this is a good rule of thumb in general, but the best way to decide on the right coverage is to go through the uncovered code and make a conscious decision about it. For some classes 30% may be fine; for others you want to go all the way up to 100%. That’s why I’m against using a single coverage percentage as a build/deployment gate.
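
      If you do want some automation, one way to keep that decision conscious is a per-module check rather than one global number. A rough sketch, assuming coverage.py’s `coverage json` output (a `coverage.json` file with a per-file `summary.percent_covered`); the paths and thresholds are invented:

      ```python
      # Sketch only: per-module thresholds instead of a single global gate.
      # Assumes coverage.py's `coverage json` output (coverage.json with a
      # per-file summary.percent_covered). Paths and numbers are invented.
      import json
      import sys

      THRESHOLDS = {
          "app/billing/": 100,   # financial code: cover everything
          "app/api/": 90,        # publicly exposed surface
          "app/admin/": 30,      # low-risk internal tooling
      }
      DEFAULT_THRESHOLD = 80

      def required(path: str) -> int:
          for prefix, pct in THRESHOLDS.items():
              if path.startswith(prefix):
                  return pct
          return DEFAULT_THRESHOLD

      def main() -> int:
          with open("coverage.json") as fh:
              files = json.load(fh)["files"]
          failures = [
              f"{path}: {data['summary']['percent_covered']:.1f}% < {required(path)}%"
              for path, data in files.items()
              if data["summary"]["percent_covered"] < required(path)
          ]
          print("\n".join(failures) if failures else "per-module coverage OK")
          return 1 if failures else 0

      if __name__ == "__main__":
          sys.exit(main())
      ```

      The point is that each threshold is an explicit, reviewed decision per area of risk, not one blanket gate.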

      • @[email protected]
        link
        fedilink
        English
        61 year ago

        Bingo, exactly this. I said 80 because that’s typically what I see our projects get to after writing actually useful tests. But if your coverage is 80% and it’s all just tests verifying that a constant is still set to whatever value, then yeah, that’s a useless metric.
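
        (Illustrative example of the kind of test I mean; `settings` and `paginate` are made-up stand-ins:)

        ```python
        # Minimal stand-ins so the example runs on its own (illustrative names):
        class settings:
            DEFAULT_PAGE_SIZE = 50

        def paginate(items):
            return list(items)[: settings.DEFAULT_PAGE_SIZE]

        # Inflates coverage but only restates a constant:
        def test_default_page_size_is_50():
            assert settings.DEFAULT_PAGE_SIZE == 50

        # Exercises the behaviour the constant is supposed to drive:
        def test_listing_is_truncated_to_default_page_size():
            assert len(paginate(range(500))) == settings.DEFAULT_PAGE_SIZE
        ```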

      • @Psilves1 · 1 point · 1 year ago

        God I fucking wish my projects were like this

    • @[email protected]
      link
      fedilink
      English
      11 year ago

      The 80/20 rule applies here like everywhere else: don’t spend 80% of the effort chasing that last 20% of coverage.