Agile Tips

#64 - Shift Left: Detecting Defects During Integration

Scott L. Bain

We're shifting left again, to the integration phase that precedes traditional testing. What defects can and should ideally be detected at this point, and how can we accomplish this? That's what this episode will examine.

Shift left: Detecting Defects During Integration

Last week I examined the possibility of detecting defects during traditional testing. I hope I made it clear that traditional testing is still necessary no matter what other testing is being done throughout the development process, but that there are limits to what can be achieved by testing that occurs only after the product is theoretically complete.

This week I want to shift to the left and examine the kinds of defects that can be expected to be detected during the integration phase. 

By “integration phase” I mean the point where external dependencies are included in the system's behavior. This could be many things. For example, your system may connect to an external database, to a web service or RESTful service offered by a vendor or other entity, to a user interface maintained by another team, or to any number of things that are outside of your control but are required for the system to behave properly.
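To make this concrete, here is a minimal sketch of such an integration point: a thin wrapper our system might place around a vendor's RESTful rate service. All of the names here (`VendorRates`, `get_rate`, the URL shape) are hypothetical, assumed purely for illustration:

```python
import json
import urllib.request


class VendorRates:
    """Thin wrapper around a hypothetical vendor's RESTful rate service.

    The rest of the system talks to this class, not to the network
    directly, so the integration point is confined to one place.
    """

    def __init__(self, base_url: str):
        self.base_url = base_url

    @staticmethod
    def parse_rate(payload: str) -> float:
        # Translating the vendor's JSON into our own terms is part of
        # "utilizing the dependency correctly" -- and this part is
        # testable without the network.
        return float(json.loads(payload)["rate"])

    def get_rate(self, currency: str) -> float:
        # The true integration point: slow, and only available when
        # the vendor's service is up.
        url = f"{self.base_url}/rates/{currency}"
        with urllib.request.urlopen(url) as resp:
            return self.parse_rate(resp.read().decode("utf-8"))
```

An integration test would exercise `get_rate` against the real service; everything else about how we use the vendor's data can be verified at any time.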

The assumption is that the system will be tested “full up” during the traditional testing phase I discussed last week, but that integration testing should precede this in order to confirm that the connections between our system and the systems we depend upon are correct and correctly utilized.

Such testing is important; however, there are three problems with it.

First, such tests will tend to run slowly because the dependencies will impede the performance of the test itself. Databases and network connections can be very slow from the testing perspective.  Because of this we will be unable to run these tests very frequently. 

Second, these external dependencies may not be available at all times, and therefore we may not be able to conduct these tests except during certain time periods or under certain conditions.

Third, these external dependencies may behave unpredictably, which means that our tests cannot accurately assert what the behavior of our system should be.

All three of these problems reflect that external dependencies may be slow, unavailable, or unpredictable in their behavior. Therefore, integration tests must have a way to control these dependencies so that we may test how our system utilizes them. 

I should be clear that what we want to test is not these external dependencies themselves; we assume that those responsible for them have tested them already. What we want to test is that our system connects to and utilizes those dependencies properly. Otherwise, our tests would represent a redundancy in the testing effort, which itself can impede progress and maintainability.

If the team is practicing test-driven development, then they will understand several techniques that are specifically designed for controlling dependencies from the testing perspective. These include peripheral entities, mocking, dependency injection, endo-testing, and a number of other techniques that the team must be carefully trained to apply properly.
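As a sketch of two of these techniques working together (dependency injection and mocking), the test below substitutes a mock for a slow, sometimes-unavailable external rate service. The names (`PriceCalculator`, `get_rate`) are hypothetical, assumed only for illustration:

```python
from unittest.mock import Mock


class PriceCalculator:
    def __init__(self, rate_source):
        # Dependency injection: the external service is handed in,
        # not hard-wired, so a test can substitute a controlled double.
        self.rate_source = rate_source

    def price_in(self, currency: str, base_price: float) -> float:
        # The external call happens behind an interface we control.
        return base_price * self.rate_source.get_rate(currency)


def test_price_uses_injected_rate():
    # The mock is fast, always available, and predictable -- it
    # addresses all three problems with depending on the real service.
    fake_rates = Mock()
    fake_rates.get_rate.return_value = 2.0
    calc = PriceCalculator(fake_rates)
    assert calc.price_in("EUR", 10.0) == 20.0
    # We can also verify that our system utilized the dependency
    # correctly: exactly one call, with the expected argument.
    fake_rates.get_rate.assert_called_once_with("EUR")
```

Note that the test asserts both on our system's result and on how the dependency was used, which is precisely the concern of integration testing described above.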

These techniques reduce the performance burden of testing, ensure that the tests can be run at any time, and make the results predictable. This means that potential integration problems can be caught at this point rather than handed off to traditional testing.

That said, only a limited number of defects involve integration. What about all the other things that can go wrong? If we don't want to leave those to traditional testing, then we need to shift left again. 

That's what we'll talk about next week.