Finding a defect
Defects may be found practically throughout the entire test process. The emphasis, however, is on the phases of Preparation, Specification and Execution. Since, in the Preparation and Specification phases, the test object is normally not yet used, in these phases the testers find defects in the test basis. During the Execution phase, the testers find differences between the actual and the expected operation of the test object. The cause of these defects, however, may still lie within the test basis.
The steps that the tester should perform when a defect is found are described below:
- Collect proof
- Reproduce the defect
- Check for your own mistakes
- Determine the suspected external cause
- Isolate the cause (optional)
- Generalise the defect
- Compare with other defects
- Write a defect report
- Have it reviewed.
The steps are in a general order of execution, but it is entirely possible to carry out certain steps in another sequence or in parallel. If, for example, the tester immediately sees that the defect was found previously in the same test, the rest of the steps can be skipped.
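The sequence of steps, including the shortcut for a defect that was already found in the same test, can be sketched as follows. This is an illustrative sketch only; all function and field names are hypothetical and not taken from any real defect-tracking tool.

```python
# Illustrative sketch of the defect-handling steps; all names are
# hypothetical. A defect is represented as a plain dictionary.

def handle_defect(defect, known_defects):
    """Return the action the tester takes for one observed anomaly."""
    # Collect proof: keep whatever evidence is available.
    defect.setdefault("proof", []).append("screen dump")

    # Reproduce: execute the test case once more; a non-reproducible
    # defect is still reported, to build up a history.
    defect["reproducible"] = defect.get("occurrences", 1) > 1

    # Check for own mistakes (test spec, environment, execution, assessment).
    if defect.get("internal_cause"):
        return "solve internally and repeat the test case"

    # Compare with other defects: a duplicate in the same part of the
    # same release need not be submitted again.
    for known in known_defects:
        if (known["part"] == defect["part"]
                and known["release"] == defect["release"]):
            return f"refer to existing defect {known['id']}"

    # Otherwise: write the report and have it reviewed before submission.
    return "write defect report and have it reviewed"


action = handle_defect(
    {"part": "invoicing", "release": "2.1", "occurrences": 2},
    [{"id": "DEF-17", "part": "invoicing", "release": "2.1"}],
)
print(action)  # refer to existing defect DEF-17
```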
Collect proof
At a certain point, the test object produces a response other than the tester expects, or the tester finds that the test basis contains an ambiguity, inconsistency or omission: a defect. The first step is to establish proof of this anomaly. This can be done during the test execution, for example, by making a screen dump or a memory dump, printing the output, making a copy of the database content or taking notes. The tester should also look at other places where the result of the anomaly could be visible. He could do this, for example, in the case of an unexpected result, by using an Edit function to see how the data is stored in the database and a View function to see how it is shown to the user. If the defect concerns a part of the test basis, other related parts of the test basis should be examined.
Reproduce the defect
When a defect is found during test execution, the next step is to see whether it can be reproduced by executing the test case once more. The tester is now on guard for deviant system behaviour. In addition, executing the test again helps to reveal any test execution errors. If the defect is reproducible, the tester continues with the subsequent steps. If the defect is not reproducible and is not suspected to be a test execution error, things become more difficult. The tester executes the test case again and then indicates clearly in the defect report that the defect is not reproducible, or that it occurs in, for example, 2 out of 3 cases. There is a real chance that the developers will spend little or no time on a non-reproducible defect. The point of submitting it anyway is that this builds a history of non-reproducible defects. If a non-reproducible defect occurs often, it may be expected to occur regularly in production as well, and so must be solved.
During a system test, the system crashed in a non-reproducible way a couple of times a day. The test team reported this each time in a defect report, but the development team, under pressure of time, paid no attention to this defect and dismissed it as an instability of the development package used. By reporting the large number of non-reproducible defects and indicating that this would lead to a negative release advice, the test team finally persuaded the developers to investigate. Within a relatively short time, they found the cause (a programming mistake) and solved the problem.
In more detail
In some cases, such as with performance tests and testing of batch software, it costs a disproportionate amount of time to execute the test again. In those cases, the test to see whether the defect is reproducible is not repeated.
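A phrasing such as "occurs in 2 out of 3 cases" can be derived mechanically from the recorded executions. The following is a minimal sketch, assuming the tester logs each execution as a boolean (the function name is hypothetical):

```python
# Hypothetical helper: summarise repeated executions of one test case
# for the defect report. `results` holds True where the defect occurred.

def reproducibility(results):
    hits = sum(results)                      # executions showing the defect
    if hits == len(results):
        return "reproducible"
    if hits == 0:
        return "not observed on re-execution"
    return f"occurs in {hits} out of {len(results)} executions"


print(reproducibility([True, False, True]))  # occurs in 2 out of 3 executions
```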
Check for your own mistakes
The tester looks for the cause of the defect, first searching for a possible internal cause. The defect may have been caused, for example, by an error in:
- The test specification or (central) starting point
- The test environment or test tools
- The test execution
- The assessment of the test results.
The tester should also allow for the fact that the test results may be distorted by the results of another test by a fellow tester.
If the cause is internal, the tester should solve this, or have it solved, for example by amending the test specification. Subsequently, the tester repeats the test case, whether in the same testing session or in the following one.
In more detail
Test environments and test tools usually come under the management of the testers. Defects in these that can be solved within the team belong to the internal defects, and those originating from outside the team are external defects.
Determine the suspected external cause
If the cause does not lie with the testing itself, the search has to widen externally. External causes may be, for example:
- Test basis
- Test object (software, but also documentation such as user manuals or AO procedures)
- Test environment and test tools.
The tester should pinpoint the cause as far as possible, since this helps in determining who should solve the defect and, later, in discerning quality trends.
Because the tester compares his test case against the test object, there is the inclination in the event of an anomaly to point to the test object as the primary cause. However, the tester should look further: perhaps the cause lies with the test basis? Are there perhaps inconsistencies in the various forms of test basis?
As well as the formal test basis (such as the functional design or the requirements), the tester regularly uses other, less tangible forms of test basis. These may include the mutual consistency of the screens and user interface, the comparison with previous releases or competing products, or the expectations of the users. See also "Preparation Phase". In describing a defect, it is thus important to indicate which form of test basis was used, and whether or not the test object corresponds with the formally described test basis, such as the requirements or the functional design. If the test object and the formal test basis correspond, the cause of the defect is an inconsistency between the informal and formal test basis, and not the test object.
During an exploratory test, the tester discovers that the position of the operating buttons varies on many screens. Further investigation shows that the cause lies with the screen designs and not with the programming. The tester submits the defect, citing the test basis as the cause.
An external defect is always managed formally. This may be in the form of the defect report and defects procedure described in the sections below. Where reviews are concerned, a less in-depth form may be chosen in which the defects are grouped into a review document and passed to the defect solver; see also "Evaluation Techniques".
Isolate the cause (optional)
While the suspected cause is often apparent to the tester, in the case of a defect in the test object or the test environment it is often insufficiently clear to the defect solver. The tester therefore looks at surrounding test cases, both those that were carried out successfully and those that were not. Where necessary, he also makes variations to the test case and executes it again, which often points to a more exact cause or allows further specification of the circumstances in which the defect occurs. This step is optional, since it lies on the boundary of how far the tester should go, relative to development, in seeking the cause of a defect. It is important to make agreements with the developers on this beforehand. This avoids discussions about extra analysis work later on, when test execution is on the critical path of the project.
Generalise the defect
If the cause appears sufficiently clear, the tester considers whether there are any other places where the defect could occur. With test object defects, the tester may execute similar test cases in other places in the test object. This should be done in consultation with the other testers, to prevent these tests from disrupting those of his colleagues. With test basis defects, too, the tester looks at similar places in the test basis ("In the functional design, the check for overlapping periods for function A has been wrongly specified. What is the situation as regards other functions that have this same check?").
During a Friday-afternoon test, the parallel changing of the same item by two users in function X produced a defect. Further testing on other functions showed that the multi-user mechanism had been built wrongly throughout the system: a structural defect.
The tester need not aim for completeness here, but should be able to provide an impression of the size and severity of the defect. If the defect is structural, it is up to the defect solver to solve it structurally. This step also has the purpose of building up as good a picture as possible of the damage that the defect could cause in production.
Compare with other defects
Before the tester writes the defect report, he looks to see whether the defect has been found previously. This may have been done in the same version of the test object by a fellow tester from within a different test. It is also possible for the defect to have been reported in an earlier release. The tester consults the defects administration, his fellow testers, the test manager, the defects administrator or the intermediary to find out.
There are a number of possibilities:
- The defect was found in the same part of the current release.
The defect need not be submitted. The test case in the test execution report may refer to the already existing defect.
- A similar defect has already been found in another part of the current release.
The defect should be submitted and should contain a reference to the other defect.
- The defect has already been found in the same part of the previous release.
If the old defect was to have been solved for this release, it should be reopened or resubmitted with reference to the old defect, depending on the agreement. If the old defect is still open, the tester need not submit a new one.
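The three possibilities above amount to a small decision rule. As a sketch, with hypothetical field names (real defects administrations differ):

```python
# Sketch of the three comparison outcomes; field names are hypothetical.

def submission_advice(new, old):
    """Advice for a new defect, given a similar, already-known defect."""
    same_part = old["part"] == new["part"]
    same_release = old["release"] == new["release"]
    if same_part and same_release:
        # Same part of the current release: refer to it in the report.
        return "do not submit; refer to the existing defect"
    if same_release:
        # Similar defect in another part of the current release.
        return f"submit with a reference to {old['id']}"
    # Same part of a previous release:
    if old["status"] == "open":
        return "do not submit; the old defect is still open"
    return f"reopen or resubmit with reference to {old['id']}"
```

Whether a solved defect from a previous release is reopened or resubmitted depends, as noted, on the agreement made within the project.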
Write a defect report
The tester documents the defect in the defects administration by means of a defect report. In this, he describes the defect and completes the necessary fields of the report; see "Defect report". The description of the defect should be clear, unambiguous and to the point. The tone should remain neutral, and the tester should come across as impartial, being conscious of the fact that he is delivering bad news. Sarcasm, cynicism and exaggeration are obviously to be avoided.
Ideally, the tester makes clear what the consequences are if the defect is not solved, or what the damage might be in production. This largely determines the chances of the defect actually being solved. In some cases, the damage is very clear ("Invoices are wrongly calculated") and little explanation is necessary; in other cases, it is less clear ("Wrong use of colour in screens") and the tester should clearly indicate what the consequences could be ("Deviation from business standards means that the External Communication department may obstruct release of the application"). However, it is by no means always possible for the tester to estimate the potential damage, as he may lack the necessary knowledge. The final responsibility for estimating the damage lies with (the representatives of) the users and the client in the defects consultation, which is discussed later. A difficult question is always how much information the description should contain. The guideline for this is that the defect solver should be reasonably able to solve the defect without further explanation from the tester.
In more detail
'Reasonably' in the above sentence is a difficult concept. The developer would prefer the tester to indicate which statement in the software is wrong. However, this is debugging and falls under the responsibility of the developer. The situation should be avoided in which the tester regularly sits with the programmer to search together for the cause of a defect: this indicates poorly written defect reports rather than good collaboration. The tester is at that point no longer performing the testing work that the test manager expects of him. If this happens regularly, it will make the test process unmanageable to plan.
In some cases, the tester finds many small defects in a particular part, e.g. a screen. The inclination is then to keep the administration simple by grouping all these defects into one collective defect report. There is sometimes pressure from the developers to do this, either for the same reason or to make the number of defects appear lower. This is rarely advisable. The chances are that, out of such a collection, a number of defects will be solved in the subsequent release, a number will be solved in the release following, and a number will not be solved at all. Following and monitoring such a collective defect thus becomes an administrative nightmare.
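The content of a defect report can be pictured as a small record, one per defect (so that each defect can be followed and monitored individually). The field names below are illustrative only; the actual fields are prescribed by the defects administration, as described in "Defect report".

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    # Illustrative fields only; the actual set of fields is prescribed
    # by the defects administration in use.
    summary: str          # clear, unambiguous and to the point
    test_case: str        # the test case that revealed the anomaly
    expected: str         # expected operation, per the test basis
    actual: str           # actual operation of the test object
    suspected_cause: str  # test basis, test object, environment/tools
    consequence: str      # potential damage in production
    reproducible: str     # e.g. "reproducible", "occurs in 2 out of 3"
    proof: list = field(default_factory=list)  # screen dumps, output copies


report = DefectReport(
    summary="Invoices are wrongly calculated",
    test_case="TC-12",
    expected="invoice total 100.00",
    actual="invoice total 0.00",
    suspected_cause="test object",
    consequence="wrong invoices sent to customers",
    reproducible="reproducible",
)
```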
Have it reviewed
Before the defect formally enters the defects procedure, the tester has the report reviewed for completeness, accuracy and tone. This may be done by a fellow tester, the test manager, defects administrator or the intermediary. After processing their comments, the defect is formally submitted. This is performed in accordance with the procedure described in "Handling defect - procedure".