Let’s look at an academic example:
function CalculateQuadraticRoots(a, b, c) {
    delta = b*b - 4*a*c
    d = Math.Sqrt(delta)
    x1 = (-b - d) / (2*a)
    x2 = (-b + d) / (2*a)
    return [x1, x2]
}
function Test_CalculateQuadraticRoots() {
    result = CalculateQuadraticRoots(1, -1, -2)
    Test.Assert([-1, 2], result)
}
This test gives 100% code coverage, yet the function works correctly (without runtime errors) only for quadratic equations with at least one real root.
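To make this concrete, here is a runnable JavaScript sketch of the pseudocode above (the names are adapted to JavaScript conventions; note that in JavaScript, `Math.sqrt` of a negative number returns `NaN` rather than throwing, which is arguably even worse):

```javascript
function calculateQuadraticRoots(a, b, c) {
  const delta = b * b - 4 * a * c;
  const d = Math.sqrt(delta); // NaN when delta < 0
  const x1 = (-b - d) / (2 * a);
  const x2 = (-b + d) / (2 * a);
  return [x1, x2];
}

// The original test: it passes and exercises every line,
// so the coverage report shows 100%.
const result = calculateQuadraticRoots(1, -1, -2);
console.assert(result[0] === -1 && result[1] === 2);

// A corner case the coverage metric never flags: no real roots.
const roots = calculateQuadraticRoots(1, 0, 1); // x^2 + 1 = 0
console.log(roots); // [ NaN, NaN ] — silently wrong, no error thrown
```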
As developers, we often forget about corner and special cases. Moreover, we often forget (or do not know) that external functions (those outside our codebase) behave differently for different arguments.
I do not want to say that automated testing is not beneficial. I want to stress that it does not guarantee correctness. In particular, 100% code coverage never means that there are no bugs.
I once worked on a project with relatively high code coverage, but nobody trusted the test results because they failed randomly. The tests were also very slow, which meant a long feedback loop. Moreover, almost any refactoring broke the build of the test code. Low-quality tests were crippling the developers.
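As a hypothetical illustration of such a randomly failing test (all names here are invented for the example): a test that asserts on wall-clock timing passes or fails depending on how the machine happens to schedule it.

```javascript
// Simulated work whose duration varies from run to run.
function fetchConfig() {
  const start = Date.now();
  while (Date.now() - start < Math.random() * 20) { /* busy wait */ }
  return { retries: 3 };
}

function testFetchConfigIsFast() {
  const start = Date.now();
  const config = fetchConfig();
  const elapsed = Date.now() - start;
  console.assert(config.retries === 3); // deterministic: reliable
  console.assert(elapsed < 10);         // time-based: fails randomly
}

testFetchConfigIsFast();
```

Tests like the second assertion erode trust: after enough spurious red builds, developers start ignoring failures altogether.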
A few high-quality automated tests help more than a lot of low-quality ones.
However, code coverage is still a useful tool: it helps to find untested code. For me, untested code is code that is “out of control”.
If code coverage is low, it is painful (if not impossible) to ensure correctness when changing the (legacy) code. Moreover, automated tests are said to decrease the number of bugs by roughly 40-80%.