Agile Tips

#01-Don't Trust Code Coverage Measurements

March 13, 2024 | Scott L. Bain | Season 1, Episode 1

Many tech leaders and project managers are mandating a minimum level of code coverage by developers.  They believe this ensures a level of quality and reliability in the product, but this is a fallacy.  Hear why.

Project managers seem to love it when developers report their code coverage because it gives them a sense of confidence in the quality and correctness of the work.  They will often establish a minimum coverage percentage (70% to 80% is typical) that must be met before code can be checked in.

This is a mistake.  Code coverage does not guarantee you anything unless one other thing is true.  Let me explain.

Let's say there exists a part of the system that calculates a monthly mortgage payment based on a principal amount and a supplied interest rate.  I could write a test that executes this behavior fully and then cause it to pass if the value returned is greater than zero.  That test would pass and, if I wrote it right, report 100% code coverage.  It would ensure nothing about the correctness of the calculation.
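
To make this concrete, here is a minimal sketch in Python.  The monthly_payment function and its figures are hypothetical stand-ins for the calculation described above, and the test is deliberately weak: it executes every line, so a coverage tool reports 100%, while asserting almost nothing.

    # Hypothetical mortgage calculation using the standard amortization
    # formula: M = P * r * (1 + r)^n / ((1 + r)^n - 1), where P is the
    # principal, r the monthly interest rate, and n the term in months.
    def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
        r = annual_rate / 12
        growth = (1 + r) ** months
        return principal * r * growth / (growth - 1)

    # This test drives execution through every line above, so coverage
    # reports 100%, yet it would still pass if the formula were badly
    # wrong, as long as the result stayed above zero.
    def test_payment_is_positive():
        assert monthly_payment(300_000, 0.06, 360) > 0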

The crux of the problem is that code coverage measures whether the tests execute the system, not whether the system behaves correctly, which is of course what we actually care about.

Why would a developer do this?  Because they are short on time or cannot figure out how to test it meaningfully, and the only reason they are covering it in the first place is that you told them they must.  They don't care about it.  You get what you measure.  Demand coverage and that's what you'll get.

But this is not true if the team has adopted disciplined Test-Driven Development as their way of working.

In TDD a test is always written first, and it reflects what the author of the test understands about the needed behavior.  Writing the test ensures they actually have that understanding.  Developers value this because it gives them the guidance they need to write the correct code.  They do it for themselves because they want to succeed.  You don't have to mandate that someone operate in their own best interest.

This test is then executed and of course fails.  Now the developers do the work to make it pass, and that code is always covered because of the way it was created to begin with.  That coverage is also meaningful because of the way it was achieved.  Observing the transition from failing test to passing test increases developer confidence, which tends to increase their velocity.
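
As an illustration of that red-to-green flow, and still assuming the hypothetical monthly_payment function from the earlier sketch, a test-first version pins the behavior to a value the author worked out independently (the standard amortization formula gives roughly $1,798.65 per month for $300,000 at 6% annual interest over 360 months):

    import pytest

    # Written before the implementation exists: this fails until
    # monthly_payment is written, and it fails again if the formula
    # is ever wrong, because it checks the actual expected value.
    def test_monthly_payment_matches_known_amortization():
        assert monthly_payment(300_000, 0.06, 360) == pytest.approx(1798.65, abs=0.01)

Run first, this test fails because the function does not yet exist; the developer's job is then to make it pass, and the resulting coverage falls out of the process instead of being chased as a number.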

So if your developers are not doing TDD, don't trust code coverage measurements.  If they are doing TDD, then use code coverage only to ensure that future work does not break existing workflows, but nothing more.  You don't need it to prove that the tests are comprehensive, because that is already ensured by the process itself.