Friday, February 03, 2006

How much testing is "good enough"?

Yesterday I had a fruitful discussion with my colleague (Kul) on the above subject. We were debating the selection of test cases: which ones provide maximum impact, which are unlikely to occur in a normal working scenario, and whether to ignore cases that, even if they fail, will not damage the integrity of the data or the program's working. Although we didn't reach any conclusion yesterday, I reflected on it later, and this is what I came up with:

I believe that if you have taken the pains to encode a behavior (and validation of data input is a behavior), then there must be a good enough reason for it. In its simplest form: if a program accepts a domain name as a parameter and we pass it a wrong domain name, the program may do no harm (who knows?), but it may also fail to inform the user via a user-friendly message. So we as programmers build in data input validation checks, report neat failure messages, and stop proceeding further. BUT if we don't test this case (because it occurs rarely, or seems harmless as we understand it), we stand a chance of hitting a potential failure, which could be as trivial as an embarrassing error message or, worse still, some damage that wasn't envisaged.
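To make the domain-name example concrete, here is a minimal sketch. The function name `check_domain` and the regular expression are my own illustrative assumptions, not taken from any particular program: the point is only the shape of the behavior, i.e. validate the input, report a neat failure message, and stop proceeding further.

```python
import re

# Hypothetical validator; the name and the pattern are illustrative only.
# Accepts labels of letters/digits/hyphens, not starting or ending with a
# hyphen, joined by dots (e.g. "example.com").
DOMAIN_RE = re.compile(
    r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)"
    r"(\.(?!-)[A-Za-z0-9-]{1,63}(?<!-))+$"
)

def check_domain(name):
    """Return None if the name looks valid, else a friendly error message."""
    if not DOMAIN_RE.match(name):
        return ("'%s' does not look like a valid domain name; "
                "expected something like 'example.com'." % name)
    return None

def run(domain):
    error = check_domain(domain)
    if error:
        print(error)   # report a neat failure message...
        return         # ...and stop proceeding further
    print("Processing %s" % domain)
```

Even a check this small is an encoded behavior, and (as argued below) that makes it a candidate for a test case of its own.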

Testing is the science of validating ALL the assumptions and behaviors of a program. If you have taken the pains to encode a behavior, it needs to be tested; otherwise the behavior need not be encoded at all. Also remember that code is never written just once: it is maintained by other programmers, and an automated regression suite with COMPLETE COVERAGE will help the maintenance programmer make changes freely.

Hence my mantra:
1. Make code easy to test. Write compact, simple code with easy constructs and only the necessary checks and behavior. (In my experience this is possible even for very complex code, if it is broken down into units. In fact, this constraint will help you write better code.)
2. Write data validation test cases for valid and invalid data input, covering all (necessary and sufficient) cases so that every encoded behavior is exercised. You can refer to the code and its flow when designing the cases. Remember: if a program accepts junk input, it is likely to excrete junk output. Control input to control output.
3. Write test cases to check the functioning of the code.
4. Automate aggressively to allow for easy regression.
5. Put in a process to maintain the test cases when the code changes.
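Points 2 through 4 of the mantra can be sketched with Python's standard `unittest` module. The `check_domain` validator below is a hypothetical stand-in for the code under test (the name and pattern are my assumptions, not from any real program); what matters is that each encoded check gets both valid and invalid cases, and that the whole suite re-runs with one command for cheap regression.

```python
import re
import unittest

# Hypothetical code under test (illustrative only).
DOMAIN_RE = re.compile(
    r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)"
    r"(\.(?!-)[A-Za-z0-9-]{1,63}(?<!-))+$"
)

def check_domain(name):
    """Return True if the name looks like a valid domain, else False."""
    return bool(DOMAIN_RE.match(name))

class DomainValidationTests(unittest.TestCase):
    # Point 2: cover valid input for the encoded behavior.
    def test_accepts_valid_domains(self):
        for name in ("example.com", "sub.example.co.uk"):
            self.assertTrue(check_domain(name), name)

    # Point 2 again: cover invalid input for the same behavior.
    def test_rejects_invalid_domains(self):
        for name in ("", "no-dots", "-bad.com", "bad-.com", "space here.com"):
            self.assertFalse(check_domain(name), name)

if __name__ == "__main__":
    unittest.main()   # Point 4: one command replays the suite for regression
```

When the code changes (point 5), the failing test names point the maintenance programmer straight at the behavior that moved.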

Happy bug-free coding...
