Checking and assessing the test results

Aim

To analyse the differences between the obtained test results and the predicted results in the test scripts or checklists.

Method of operation

The method of operation includes the following subactivities:

  1. Comparing test results
  2. Analysing differences
  3. Determining retests.

1. Comparing test results

The test results are compared against the predicted results in the test scripts and checklists. If testing is being done based on an exploratory technique, the tester will compare the outcome against the documented test basis, such as the functional design or a requirements document. If there is no documented test basis, the tester needs to find other ways of comparing the outcome. This information can be obtained, for example, from norms and standards, memos, user manuals, interviews, advertisements or rival products.

In more detail

The dangers of testing without a documented test basis

If no documented test basis is available to the tester, there is a real risk that he or she will begin to rely on other sources of information, such as his or her intuition. An unwanted end result may be that the system and the documentation run out of sync. If the system is correct and the documentation wrong, this can lead to maintenance or administration problems. Conversely, it is possible that (deep) functionality is described in the documentation that has been implemented incorrectly in the system, and that this does not emerge from testing based on sources other than the system documentation. Another unwanted end result may be that, in the absence of clarity concerning the scope, the testers generate an endless stream of change requests in the form of defects.

If there are no deviations, this is logged. If deviations are found, they are analysed. Comparing the test results often takes place simultaneously with the execution of the test: by checking off the steps in the test script, for example, it can be indicated whether a test result corresponds with the expected result. In certain cases this is not possible during the test (e.g. with batch systems, where the output of several test cases is presented together afterwards).
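
As a minimal illustration of this subactivity, the sketch below (Python, with purely hypothetical test-case identifiers and field names) compares the actual outcome of each test case with the result predicted in the test script and logs either a match or a deviation to be analysed.

```python
# Minimal sketch: comparing actual test results against predicted results.
# The TestCase structure and all identifiers are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    expected: str   # predicted result from the test script or checklist
    actual: str     # outcome observed during test execution

def compare_results(cases):
    """Log a match for each correct result; return the deviations found."""
    deviations = []
    for case in cases:
        if case.actual == case.expected:
            print(f"{case.case_id}: OK (result matches prediction)")
        else:
            print(f"{case.case_id}: DEVIATION (expected {case.expected!r}, got {case.actual!r})")
            deviations.append(case)   # to be analysed in the next subactivity
    return deviations

deviations = compare_results([
    TestCase("TC-01", expected="order accepted", actual="order accepted"),
    TestCase("TC-02", expected="error message 12", actual="error message 14"),
])
```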

2. Analysing differences

The differences found are further analyzed during this subactivity. The tester should perform the following steps:

  • Gather evidence
  • Reproduce the defect
  • Check for own mistakes
  • Determine suspected external cause
  • Isolate the cause (optional)
  • Generalize the defect
  • Compare with other defects
  • Write defect report
  • Have it reviewed.

These steps are explained in the wiki "Finding a defect". The steps are listed in the general sequence of execution, but it is entirely possible to carry out particular steps in another order or in parallel. If, for example, the tester immediately sees that the defect was already found in the same test, the interim steps need not be performed. In the test scripts, the defect numbers are registered against the test cases in which the defects were found. That way, it quickly becomes clear during any retest which test actions at least need to be carried out again. Various test tools are available both for comparing the test results and for analysing the differences.
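
For illustration, the registration of defect numbers against test cases could be kept in a simple mapping such as the one sketched below; the identifiers and function names are assumptions for the example, not a prescribed format. A later retest can then quickly list which test cases need to be executed again.

```python
# Sketch: registering defect numbers against the test cases in which they were
# found, so a retest can quickly identify which test actions to repeat.
# All identifiers below are hypothetical examples.
from collections import defaultdict

defects_per_test_case = defaultdict(list)

def register_defect(test_case_id, defect_number):
    """Record that a defect was found while executing a given test case."""
    defects_per_test_case[test_case_id].append(defect_number)

def test_cases_to_retest(solved_defects):
    """Return the test cases touched by any of the solved defects."""
    return sorted(
        case_id
        for case_id, defects in defects_per_test_case.items()
        if any(d in solved_defects for d in defects)
    )

register_defect("TC-02", "DEF-117")
register_defect("TC-05", "DEF-118")
print(test_cases_to_retest({"DEF-117"}))   # -> ['TC-02']
```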

3. Determining retests

Found defects are a common reason for carrying out retests. If the cause of a defect is a fault in the test execution, the relevant test is carried out again. Defects that originate in a wrong test script or checklist are corrected; thereafter, the changed part of the test script is executed again or the entire checklist is gone through again. Faults in the test environment should also be solved, after which the relevant test scripts are executed again in their entirety.

Faults in the test object or the test basis will usually mean a new version of the test object. With a fault in the test basis, the associated test scripts will usually also need to be amended, which often involves a lot of work. When retests take place, it is important to establish how they are to be carried out. The test manager determines in the Control phase whether the test scripts should be carried out again in whole or in part, and this depends in part on (see also the sketch after this list):

  • The exit criteria set out in the test plan
  • The severity of the defects
  • The number of defects
  • The degree to which the earlier execution of the test script was disrupted by the defects
  • The time available
  • The risks.
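
The sketch below illustrates how such a decision could be supported. The thresholds and suggested outcomes are purely illustrative assumptions; in practice the test manager weighs these factors against the exit criteria in the test plan.

```python
# Illustrative sketch only: supporting the decision on how much to re-execute.
# Thresholds, inputs and suggested outcomes are assumptions, not prescribed values.
def retest_scope(severe_defects, total_defects, execution_disrupted,
                 time_available_days, high_risk):
    """Suggest whether to re-execute test scripts in whole or in part."""
    if severe_defects > 0 or high_risk:
        return "re-execute the affected test scripts in full"
    if execution_disrupted or total_defects > 10:
        return "re-execute the disrupted test scripts; sample the rest"
    if time_available_days < 2:
        return "limit the retest to the test cases with registered defects"
    return "re-execute only the test cases in which defects were found"

print(retest_scope(severe_defects=0, total_defects=3,
                   execution_disrupted=False, time_available_days=5,
                   high_risk=False))
```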

In more detail

When to test solved defects

Defects that have been solved must be tested again. The timing of these retests can vary considerably.

  1. Test as soon as a defect is solved. The advantage of this is that the programmer who solved the defect still has it fresh in his memory and can therefore act quickly if the defect turns out not to be solved. The disadvantage is that the code is changed, delivered and tested very frequently; mistakes are easily made in such a process, and it is less efficient for the tester.
  2. Gather solved defects and test these. The advantage of this is that defects can be solved and tested collectively (e.g. per module or per screen), which is a more efficient way of working. The code is also more stable, so that the chances of a defect returning are minimal. The disadvantage, however, is that this method takes longer. 

The choice of option 1 or 2 depends on the project and the way of working. If it is possible to deliver a release of the application every day (also known as a 'daily build') and there are a large number of defects to be retested, the strategy may be to choose a mix of the above. It is then determined each day which solved defects will be included in the release, and these are tested by the test team the following day. It is important in that case to set up a separate test environment and to use it only for testing the solved defects in the releases. In addition, a test of the entire test object will have to take place at the end, in order to establish that nothing else has changed (regression).
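
As an illustration only, the mixed strategy could be administered with a simple schedule such as the one below, in which the solved defects selected for each daily build are recorded and retested by the test team the following day; the data layout and identifiers are assumptions for the example.

```python
# Sketch of the mixed strategy with a daily build: solved defects included in
# today's release are retested by the test team the following day.
# The data layout and defect identifiers are illustrative assumptions.
from datetime import date, timedelta

retest_schedule = {}   # release date -> solved defects included in that release

def plan_release(release_date, solved_defects_included):
    """Record which solved defects go into the release delivered on this date."""
    retest_schedule[release_date] = list(solved_defects_included)

def defects_to_retest_on(day):
    """Defects delivered in yesterday's release are retested today."""
    return retest_schedule.get(day - timedelta(days=1), [])

plan_release(date(2024, 3, 4), ["DEF-117", "DEF-120"])
print(defects_to_retest_on(date(2024, 3, 5)))   # -> ['DEF-117', 'DEF-120']
```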

Products

Defects
Logging of the test results. 

Techniques

Not applicable.

Tools

Testware management tool
Defect management tool
Test data tool
Automated test execution tool
Performance, load and stress test tool
Monitoring tool
Code coverage tool
Comparator
Database manipulation tool.