Investigate & assess outcome | DevOps

When the team members execute the test scenarios and test scripts, they compare the actual outcomes with the expected outcomes. When the outcomes match, the test has passed: the quality risk is covered, the requirement is implemented, and the team can inform the stakeholders that this part of the pursued value seems achievable. When the expected and actual outcomes do not match, the test has failed: a quality risk has materialized and the requirement is not yet implemented. The team then reports to the stakeholders that this part of the pursued value is not yet achievable.

Note that reporting to the stakeholders is usually done through an automated workflow that is part of the anomaly management system.

When a test fails, this often means that the rest of the test scenario cannot be completed, and therefore some of the test cases will not be executed. Test execution therefore has three possible outcomes for a test case: pass, fail or not run.
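To illustrate these three outcomes, here is a minimal Python sketch; the `TestCase` structure with a `name`, an `execute` callable and an `expected` outcome is an assumption for illustration, not the API of any real test tool:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any, Callable

class Outcome(Enum):
    PASS = "pass"
    FAIL = "fail"
    NOT_RUN = "not run"

@dataclass
class TestCase:
    name: str
    execute: Callable[[], Any]  # produces the actual outcome
    expected: Any               # the expected outcome

def run_scenario(test_cases: list[TestCase]) -> dict[str, Outcome]:
    """Compare actual with expected outcomes; once a case fails,
    the rest of the scenario cannot be completed and stays 'not run'."""
    results: dict[str, Outcome] = {}
    failed = False
    for case in test_cases:
        if failed:
            results[case.name] = Outcome.NOT_RUN
        elif case.execute() == case.expected:
            results[case.name] = Outcome.PASS
        else:
            results[case.name] = Outcome.FAIL
            failed = True  # the remaining cases cannot be executed
    return results
```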

[Figure: test execution]

When the test case has passed, the team member in the role of tester registers this (or, with automated testing, the test tool takes care of this registration) and no further action is needed.

When the test case has failed, the tester needs to do some investigation.

In automated test execution, the test script also contains the automated check of the expected against the actual outcomes. Differences are reported so that the team members can do their investigation.
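As an illustration, a minimal pytest-style sketch of such an automated check; `calculate_invoice_total` is a hypothetical function under test, included only to make the sketch self-contained:

```python
# Hypothetical function under test; not part of any real library.
def calculate_invoice_total(net_amount: float, vat_rate: float) -> float:
    return round(net_amount * (1 + vat_rate), 2)

def test_invoice_total_includes_vat():
    expected = 121.00   # expected outcome, defined before execution
    actual = calculate_invoice_total(100.00, 0.21)
    # On a mismatch, pytest reports the difference between expected
    # and actual, giving the team a starting point for investigation.
    assert actual == expected
```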

We would like to note that the expected outcome may be a very specific result (for example, an exact number). Today, however, we also test intelligent machines that involve machine learning, for which it may be difficult to specify the expected outcome exactly. In that case some sort of expectation should still be defined before executing the test, because even a rough idea of the expectation can be used to check whether the system works as expected.
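A sketch of such a rough expectation in Python, assuming a hypothetical regression model whose exact output cannot be specified in advance (the stand-in model below exists only to make the sketch runnable):

```python
import random

def predict_delivery_days(order_weight_kg: float) -> float:
    """Stand-in for a hypothetical ML regression model; its exact
    output varies and cannot be specified in advance."""
    return 2.0 + 0.5 * order_weight_kg + random.uniform(-0.5, 0.5)

def test_predicted_delivery_time_is_plausible():
    predicted_days = predict_delivery_days(order_weight_kg=3.0)
    # No exact expected outcome exists, but a rough expectation was
    # defined before executing the test: estimates must stay within
    # a plausible range.
    assert 1.0 <= predicted_days <= 14.0
```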

Investigating a failed test case 

Basically, there are two possible reasons for a test to fail. The most likely cause is a fault in the test object: the program code may contain a fault, the configuration of the IT system may be wrong, or other causes may exist.

The team members should, however, realize that another possible cause of a failing test is a fault in the test itself. The test case may have a wrong expected outcome, the test data may be wrong, the specification in, for example, a user story may have been interpreted wrongly, or there may be a multitude of other possible causes.

[Figure: failed test case execution]

The investigation of a failed test case is therefore very important, even though it may be a tedious and time-consuming task. When the investigation is challenging, the team may apply "pair debugging", which means two team members work together on the investigation.

It is wise to start by examining the test case itself: the team first needs to make sure they did not make a testing error, to avoid wasting other people's time with an unwarranted anomaly report.

This investigation of a failed test case is a testing task that always involves humans and cannot be fully automated.

Steps for analyzing the failed test and creating an anomaly 

The team member should perform the following steps, which, in case the fault is in the test object, will result in registering an anomaly (a sketch of such an anomaly record follows the list):

  • Gather evidence (such as screenshots or database dumps) 
  • Reproduce the failure (and register the steps to reproduce) 
  • Check for faults in the test 
  • Determine suspected cause(s) 
  • Isolate the cause (optional) 
  • Generalize the anomaly 
  • Compare with other anomalies and eliminate duplicates 
  • Register the anomaly report 
  • Have the anomaly report reviewed 
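
As announced above, a minimal sketch of an anomaly record capturing the output of these steps; the field names are illustrative assumptions and do not reflect the schema of any specific anomaly management system:

```python
from dataclasses import dataclass, field

@dataclass
class AnomalyReport:
    """Illustrative field names only; real anomaly management
    systems each have their own schema."""
    summary: str
    steps_to_reproduce: list[str]                      # registered while reproducing
    evidence: list[str] = field(default_factory=list)  # e.g. screenshot paths
    suspected_cause: str = ""                          # filled in after analysis
    duplicate_of: str | None = None                    # set when eliminating duplicates
    reviewed_by: str | None = None                     # filled in after the review step
```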

The steps are in a general order of execution, but it is possible to carry out certain steps in another sequence or in parallel, or to skip some steps. If, for example, the team member immediately sees that the anomaly was found previously in the same test, the rest of the steps can be skipped.
When the anomaly is fixed right away, there is no need to register an anomaly report in the anomaly management system. 

The step of reproducing the fault may be difficult. In the case of performance tests and the testing of batch software, for example, executing the whole test again costs a disproportionate amount of time. In those cases, the team member in the role of tester tries to investigate without repeating the whole test.

The person investigating the failed test case should always try to get an idea of the suspected cause. Giving this kind of information may save a lot of time for the person who has to solve the fault. If, for example, a numeric outcome is 100 times higher than expected, the team member in the role of tester may suspect that something has gone wrong in defining the decimal comma position in a field containing a money amount.
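A tiny worked example of this tell-tale factor of 100, assuming (purely for illustration) an amount stored in cents:

```python
# Illustrative only: a money amount of 12.50 euros stored as 1250 cents.
amount_cents = 1250

expected = amount_cents / 100   # correct: 12.50, decimal position applied
actual = float(amount_cents)    # faulty: 1250.0, decimal position skipped

print(actual / expected)        # 100.0 -> the tell-tale factor of 100
```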

Be aware of fault clustering

Faults have a tendency to cluster together within a test object. If a fault occurs in a specific function, screen, operation or other part of the test object, chances are that other faults are there as well. There are various reasons for this: for example, the specific part may contain complex code, which makes the likelihood of the programmer making a mistake greater. Alternatively, a specific part may have been created by an inexperienced developer, or by someone who was having a bad day. It is therefore advisable, when a fault is found, to always look for other faults in the immediate environment. This can be done by executing an exploratory testing charter after the automated test execution has finished.