Previous experience is an important source of information for preparing and guiding quality engineering activities such as reviewing and testing. This experience is often stored only in people's minds. An easy way to capture it is to list it in a checklist. A checklist usually starts small, with a couple of entries, and evolves over time. Many organizations also issue standard checklists that a team can adopt as-is or use as a basis for creating its own checklist.
Checklists are used as a test approach or to support other test approaches and test design techniques.
A checklist is a structured or unstructured list of all situations that are (to be) tested.
In general, a checklist specifies topics and aspects that have to be covered during testing. It does not, however, specify how this testing has to be done. We therefore regard a checklist as an experience-based approach rather than a test design technique. Also, a checklist doesn't provide much coverage. Some might argue that executing the whole checklist provides 100% coverage, but the checklist itself is gathered experience; it is therefore only as good as the experience of the people who put it together. We emphasize that although a checklist should be executed completely, this should not be reported as a coverage percentage.
Applying checklists in static testing
When formal reviews – such as technical reviews or inspections – are performed, the reviewers are assigned specific roles and focus points. A checklist can be used to support such a role or focus point. For example, if a reviewer needs to focus on security aspects, they can use a checklist containing the OWASP Top Ten security risks. [OWASP 2019]
A reviewer may also keep track of important aspects by adding them to a checklist during reviewing, and share such a checklist with other reviewers to use during this or future reviews. A checklist for reviewing can also be applied to evaluating the quality of requirements specifications. Checklists are also a very important tool when performing code reviews. For instance, the coding standards of an organization serve as a checklist during the code review. Teams can also create their own checklist of good practices and do's and don'ts.
The requirements describe what is required of the system, without going into detail about how exactly it is realised. For example: "Payment should also be possible in foreign currency." In principle, one or more test cases are created for every requirement to test that precise requirement. Some requirements are not testable and should be removed from the checklist. For example: "With this system, the company's market share should increase by 20%."
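As a minimal sketch (the requirement IDs, the `testable` flag and the test-case naming are hypothetical, not prescribed by any standard), such a requirements checklist can be kept as simple data, with untestable entries filtered out before test cases are derived:

```python
# Hypothetical requirements checklist; each entry records whether it is testable.
requirements = [
    {"id": "R1", "text": "Payment should also be possible in foreign currency.", "testable": True},
    {"id": "R2", "text": "The company's market share should increase by 20%.", "testable": False},
]

# Keep only testable requirements on the checklist.
checklist = [r for r in requirements if r["testable"]]

# In principle, derive one or more test cases per remaining requirement.
test_cases = [f"TC-{r['id']}-1" for r in checklist]
print(test_cases)  # ['TC-R1-1']
```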
Applying checklists in dynamic testing
In dynamic testing a checklist can be a standalone approach, for example when doing syntactic testing. The checklist then contains, for instance, the standard elements to check in a user interface.
Checklists are also used in testing user-friendliness aspects. Examples are: understandable error messages; the possibility of "undoing" the last action; a maximum of 10 fields per screen. See also Nielsen's ten heuristics in "Usability". This delivers a checklist of things to be tested for every function or screen in the system, which can be checked off during testing.
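Such a per-screen checklist can be sketched as executable checks. The check names, the screen fields and the screen data below are all hypothetical; the point is only that each checklist line becomes something that can be checked off (or flagged) per screen:

```python
# Hypothetical user-friendliness checklist, one check per checklist line.
usability_checks = {
    "has_undo": lambda screen: screen.get("supports_undo", False),
    "max_10_fields": lambda screen: screen.get("field_count", 0) <= 10,
}

def run_checklist(screen):
    """Return the names of the checklist lines that fail for this screen."""
    return [name for name, check in usability_checks.items() if not check(screen)]

screen = {"supports_undo": True, "field_count": 12}
print(run_checklist(screen))  # ['max_10_fields']
```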
A checklist can also support other approaches, such as exploratory testing. Using a specific checklist could be a test idea on an exploratory testing charter. This can be applied when testing usability by using (parts of) a style guide as a checklist. Heuristics are a specific type of checklist; see section "Heuristics".
Using checklists to direct intensity and measure progress
Using checklists also offers a simple way to direct how far the testing should go and to measure how far the testing has progressed.
Every line on the checklist can be checked separately and directly, and the sequence of the test cases is not important. However, elaborating the checklist into concrete test cases is not always simple and can involve a lot of work. In addition, it should be realized that a checklist generally achieves only elementary coverage. For example, when testing requirements it only provides the certainty that every requirement has been tested once; it does not prove that the requirements have been correctly implemented in all expected situations.
A checklist also copes well with changes in the test basis (and thus in the checklist itself): if a line is added to the checklist, a test case is added with which that line is tested.
- In the planning or preparation phase, the lines in the checklist are prioritized, e.g. with H(igh) – M(edium) – L(ow). Another common notation for priorities is the MoSCoW notation: Must have; Should have; Could have; Won't have.
- In the execution phase, the lines (and so the associated test cases) are then executed in order of priority.
- During test execution, the degree of progress so far obtained can be reported. This is measured as follows:
- Progress = (number of test cases executed) / (total number of test cases).
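The steps above can be sketched in a few lines. The checklist lines, their priorities and execution statuses below are hypothetical; the snippet only illustrates ordering by MoSCoW priority and applying the progress formula:

```python
# Hypothetical prioritised checklist with execution status per line.
PRIORITY_ORDER = {"Must": 0, "Should": 1, "Could": 2, "Won't": 3}

checklist = [
    {"line": "Undo last action", "priority": "Should", "executed": True},
    {"line": "Payment in foreign currency", "priority": "Must", "executed": True},
    {"line": "Max 10 fields per screen", "priority": "Could", "executed": False},
]

# Execution phase: run the lines (and their test cases) in order of priority.
ordered = sorted(checklist, key=lambda c: PRIORITY_ORDER[c["priority"]])

# Progress = (number of test cases executed) / (total number of test cases).
progress = sum(c["executed"] for c in checklist) / len(checklist)
print(f"{progress:.0%}")  # 67%
```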
More information: see "Coverage types".