
Agile Tips
Unlocking Agile Wisdom: Insights from Decades of Experience. Scott Bain is a 44+ year veteran of systems development.
#40 - Writing Specifications as Tests
Last week I pointed out that Test-Driven Development, despite its name, is not a testing activity but rather the creation of an executable specification. So how does this perspective change the way the tests themselves are written? That's the subject of this episode.
Writing Specifications as Tests
Last week I said that one of the reasons that literally everything needs a test in Test-Driven Development, even very simple things, is that TDD, regardless of appearances, is not a testing activity but rather a specifying activity that also yields useful tests. I've also said that this means that TDD is not adding work to the team: you have to specify the needed behavior somehow, and TDD simply replaces traditional specifications with executable ones.
But how does this viewpoint influence the way such tests are written?
First, if you are using tests to capture knowledge, as all specifications do, then the tests must be extremely readable. Normally we think of a test as something intended for automation to run or for an expert tester to execute manually. If tests are to serve the role of a traditional specification, then they should be readable by anyone. Acceptance tests seem like a natural fit for this point of view, but I am also saying that unit tests, written in code, must follow this guidance. Every element of a unit test, from class and method names to fixture variables, should be crafted to communicate to some future reader everything that is currently known about the system's behavior. That reader may be yourself (when you have forgotten what you know now) or someone else who has been tasked with maintaining the legacy code you left behind.
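To illustrate the point, here is a minimal sketch of a unit test written as a readable specification. The `CommissionCalculator` class, its 5% rate, and all the names here are hypothetical, invented for illustration; amounts are in integer cents to keep the arithmetic exact.

```python
# A hypothetical commission calculator, defined inline so the example is
# self-contained; in a real project it would live in production code.
class CommissionCalculator:
    STANDARD_RATE_PERCENT = 5  # assumed business rule: 5% of the sale

    def commission_for(self, sale_amount_cents):
        # Integer cents avoid floating-point rounding in a money calculation.
        return sale_amount_cents * self.STANDARD_RATE_PERCENT // 100


# The test name states the business rule, not the mechanics: anyone who
# reads it learns what the system is specified to do, even without
# reading the body.
def test_standard_sale_earns_five_percent_commission():
    calculator = CommissionCalculator()
    a_standard_sale = 1_000_00  # the fixture name says what the value means
    assert calculator.commission_for(a_standard_sale) == 50_00
```

Notice that nothing in the test requires tribal knowledge: the name, the fixture variable, and the expected value together document the rule.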
Second, if a test is meant to reflect business value, then it must show all meaningful permutations of behavior. Not all possible permutations (some of those will be dealt with by traditional testing) but all of those that are pertinent to the way the business will use the system.
Third, all testing must reflect positive behavior. This ensures that we only accept requirements that are legitimate. For example:
Let's say that you are writing a system that calculates the commission for a sales representative, and one of the requirements you are given says you "cannot submit a sales amount over $100,000". This is actually not a requirement; it just looks like one.
If the technology can prevent this problem inherently (if, for example, it won't compile if the rule is violated), then there is nothing to specify, and also nothing to test. In fact, such a test should resolutely be avoided as it could never fail.
If the technology cannot prevent it, then the statement "you cannot do X" is simply wrong. Of course, you can. The actual requirement would have to be "if an attempt is made to submit a sales amount over $100,000, then the following should happen", and then go on to specify what that is.
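Here is a sketch of that corrected requirement expressed as a positive specification. The names (`submit_sale`, `OverLimitError`) and the chosen outcome (raising an error) are hypothetical assumptions; the business would decide what "the following should happen" actually is.

```python
# The rule "you cannot submit a sale over $100,000" restated as positive
# behavior: an attempt to do so produces a specific, observable outcome.
# Amounts are in integer cents; all names here are invented for illustration.
LIMIT_CENTS = 100_000_00


class OverLimitError(Exception):
    """Assumed outcome: over-limit submissions are rejected with this error."""


def submit_sale(amount_cents):
    if amount_cents > LIMIT_CENTS:
        raise OverLimitError("sale amount exceeds the $100,000 limit")
    return "accepted"


# The test specifies what DOES happen, not what "cannot" happen.
def test_sale_over_limit_is_rejected_with_an_error():
    try:
        submit_sale(LIMIT_CENTS + 1)
    except OverLimitError:
        pass  # the specified behavior occurred
    else:
        raise AssertionError("over-limit sale was not rejected")


def test_sale_at_limit_is_accepted():
    assert submit_sale(LIMIT_CENTS) == "accepted"
```

Writing the test forces the conversation: you cannot assert anything about "cannot do X", but you can assert exactly what happens when someone tries.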
Software is a verb: it does things. Everything we specify must focus on proper behavior, and this is no less true in TDD than in traditional specifications. The difference is that writing the test will be impossible if the requirement is not legitimate, and so we will never make the mistake of accepting such a thing.
And that leads me to re-define what "acceptance" means. We normally think of acceptance as confirmation that our work is acceptable to the stakeholders who created the requirements. This is true. But acceptance also means that the team accepts those requirements in the first place. That means we must be able to accommodate them (we have the time and resources to do so), but also that the requirements are meaningful, clear, and complete.
This is one of the many ways TDD contributes to the success of everyone involved with it.