Reporting a defect

Defect report

A defect report is more than just a description of the defect. Other details of the defect need to be established as well (e.g. the version of the test object and the name of the tester). To do this in a structured manner, a defect report is often divided into several 'fields', in which the details can be recorded that are necessary for managing the defect and for obtaining meaningful information from the administration. The most important reasons for using separate fields, rather than one large free-text field, are:

  • The fields compel the tester to enter the defect information as completely as possible.
  • It is possible to create reports on selections of defects.

For example, it is easy to select all the outstanding defects, all the defects with the test environment as a cause or all the defects in a particular part of the test object.
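
For illustration only, the sketch below shows how such selections could be made if the defects were held as simple records; the field names ('status', 'cause', 'part') and their values are assumptions, not a prescribed schema:

    defects = [
        {"id": 1, "status": "New", "cause": "test environment", "part": "screen handling"},
        {"id": 2, "status": "Done", "cause": "software", "part": "interest calculation"},
        {"id": 3, "status": "In process", "cause": "test environment", "part": "screen handling"},
    ]

    # All outstanding defects (anything not yet closed or rejected).
    outstanding = [d for d in defects if d["status"] not in ("Done", "Rejected")]

    # All defects with the test environment as a cause.
    environment = [d for d in defects if d["cause"] == "test environment"]

    # All defects in a particular part of the test object.
    screen = [d for d in defects if d["part"] == "screen handling"]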

Nowadays, managing defect reports is almost impossible without automated support. This may be a simple spreadsheet or database package, but various freeware and commercial tools are also available. The latter group of tools often has the advantage that the defects administration is integrated with testware management and with planning and progress monitoring. Attention should be paid to authorisations within such tools: it should not be possible for a developer to change or close a tester's defect, but it should be possible for the developer to add a solution to it.
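
The authorisation rule described above can be pictured as follows; this is a minimal sketch with assumed role and action names, not the behaviour of any particular tool:

    # Assumed roles and action names; real defect tools differ.
    ALLOWED_ACTIONS = {
        "tester": {"create", "edit", "close", "add_solution"},
        "developer": {"add_solution"},  # may add a solution, but not change or close the defect
    }

    def is_allowed(role, action):
        """Return True if the given role may perform the action on a defect."""
        return action in ALLOWED_ACTIONS.get(role, set())

    assert is_allowed("developer", "add_solution")
    assert not is_allowed("developer", "close")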

Tip

If testers and other parties are geographically far removed from each other, as is often the case with outsourcing or offshoring, it is advisable to purchase a web-enabled defects tool. This allows all the parties to directly view the current status of the defects administration and significantly eases communication on defects.

In more detail

In some organisations, the defects administration is placed within the incidents registration system of the production systems. While this is possible, such a system contains many more information fields than are necessary for a defect. Sometimes this can be adjusted, but sometimes the testers have to learn to deal with the complex system and ignore all the superfluous fields on the screen. This requires decidedly more training time and involves a greater likelihood of incorrect input of defects than with a standard defects administration.

If the defects are stored in an automated administration, a range of reports can be generated. These are very useful for observing, as early as possible, certain trends concerning the quality of the test object and the progress of the test process (see "Monitoring"). For example, it may become apparent that the majority of the defects relate to (a part of) the functional design, or that the defects are concentrated in the screen handling. Such information can then be used to intervene in a timely manner and to adopt measures.
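
As a sketch of such a trend report, the fragment below counts defects per cause and per part of the test object; the field names and values are purely illustrative:

    from collections import Counter

    defects = [
        {"cause": "functional design", "part": "screen handling"},
        {"cause": "functional design", "part": "screen handling"},
        {"cause": "software", "part": "interest calculation"},
    ]

    print(Counter(d["cause"] for d in defects).most_common())
    # [('functional design', 2), ('software', 1)]
    print(Counter(d["part"] for d in defects).most_common())
    # [('screen handling', 2), ('interest calculation', 1)]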

The success of the defects administration is determined to a significant degree by the testers' discipline in completing the fields. To this end, the testers should first be sure of the content of each field and how it should be filled in. Particularly in the beginning, there is a need for guidance and monitoring of the completion of defect reports. This is usually a role for the test manager, defects administrator or intermediary, and forms part of the step "Have it reviewed" in "Finding a defect".

The uniformity and consistency of a defect report can be improved by restricting the possible input values for the fields, instead of using free-text boxes. For example, for the cause of a defect, a choice can be offered between test basis, test object or test environment. This prevents all kinds of synonyms from being entered ('software', 'code', 'programming', 'program', 'component') that would severely obstruct, or render impossible, any later selection by cause of defect.
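
A minimal sketch of such a restricted field, assuming the three cause values mentioned above (the class and function names are illustrative):

    from enum import Enum

    class Cause(Enum):
        TEST_BASIS = "test basis"
        TEST_OBJECT = "test object"
        TEST_ENVIRONMENT = "test environment"

    def parse_cause(value):
        """Accept only the fixed list; synonyms such as 'software' or 'code' raise ValueError."""
        return Cause(value)

    parse_cause("test object")      # accepted
    # parse_cause("software")       # would raise ValueError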

Below, a description is first given of what a defect report should contain at a minimum, followed by various recommendations for expanding on this.

Minimum fields in a defect report

A defect report contains the following fields at a minimum (a schematic sketch of such a record follows the list):

  • Project or system name; The name of the (test) project or of the system under test.
  • Unique identification of the defect; A unique identity, usually in the form of a (serial) number of the defect report, for purposes of management and tracking progress.
  • Brief characterisation; A brief characterisation of the defect in a limited number of words, at most one sentence, that preferably also clearly indicates the consequence of the defect. This characterisation is printed in defects overviews and makes the defect easier to communicate about.
  • Submitter; The name of the individual who has submitted the defect.
  • Identification of phase/test level; The phase or test level in which the defect was found, e.g. design, development, development test, system test, acceptance test or implementation.
  • Severity; The severity category proposed by the tester. This categorisation reflects the damage to the business operations. For example:
    • Production-obstructive: involves (high) costs, e.g. because the defect will shut down operations when the system goes into production
    • Severe: (lower) costs involved, e.g. because the user has to rework or add items manually
    • Disruptive: little or no costs involved, e.g. truncation of alphanumeric data on the screen or issues relating to user-friendliness
    • Cosmetic: wrong layout (position of fields, colours), which is not a problem for the external client, but can be disturbing to the internal employee.
  • Priority; The priority of the solution proposed by the tester. Possible classification:
    • Immediate reworking required, e.g. a patch that (temporarily) solves the problem must be available within 48 hours. The test process or the current business operations (if it concerns a defect from production) are seriously obstructed
    • Reworking required within the current release. The current process can continue with work-arounds, if necessary, but production should not be saddled with this problem
    • Reworking required eventually, but the solution need only be available in a subsequent release. The problem does not (currently) arise in production, or else the damage is slight.
In more detail

At first sight, it does not appear important to make a distinction between severity and priority. The two usually run in sync, so that a high level of severity implies a high priority of solving. However, this is not always the case, which is why both categories are distinguished. The following examples illustrate this:

  1. With a new release, the internally allocated nomenclature in the software has been amended. The user will not be aware of this, but the automated test suite will suddenly stop working. This is a defect of low severity, but test-obstructive and therefore of very high priority.
  2. The user may find a particular defect so disturbing that it may not be allowed to occur in production. This may be, for example, a typo in a letter to a customer. This, too, is a defect of low severity that nevertheless needs to be reworked before going into production.
  3. A potentially very serious defect, e.g. the crashing of the application with resulting loss of data, only occurs in very specific circumstances that do not arise often. A work-around is available. The severity level is high, but the priority may be lowered because of the work-around.
  • Cause; The tester indicates where he believes the cause to lie, for example:
    TB: test basis (requirements, specifications)
    S: software
    DOC: documentation
    TIS: technical infrastructure.
  • Identification of the test object; The (part of the) test object to which the defect relates should be indicated in this column. Parts of the test object may be e.g. object parts, functions or screens. Further detail may be supplied optionally by splitting the field into several fields, so that e.g. subsystem and function can be entered. The version number or version date of the test object is also stated.
  • Test specification; A reference to the test case to which the defect relates, with as much relevance to the test basis as possible.
  • Description of the defect; The stated defect should be described as far as possible in accordance with the guidelines in "Finding a defect".
  • Appendices; In the event that clarification or proof is necessary, appendices are added.
    An appendix may be in paper form, such as a screen printout or an overview, or take the form of a (reference to an) electronic file.
  • Defect solver; The name of the individual who is solving the defect, has solved it or has rejected it.
  • Notes on the solution; The defect solver explains the chosen solution (or reason for rejection) of the defect.
  • Solved in product; Identification of the product, including version number, in which the defect should be solved.
  • Status + date; The various stages of the defect's life cycle are managed, up to and including retesting. This is necessary in order to monitor the defect. At its simplest, the status levels "New", "In process", "Postponed", "Rejected", "Solved", "Retesting" and "Done" are used. The date on which the status was set is also recorded.
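
The sketch below brings these minimum fields together in a single record; the field names, types and value sets are illustrative assumptions, not a prescribed schema:

    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum

    class Severity(Enum):
        PRODUCTION_OBSTRUCTIVE = "production-obstructive"
        SEVERE = "severe"
        DISRUPTIVE = "disruptive"
        COSMETIC = "cosmetic"

    class Status(Enum):
        NEW = "New"
        IN_PROCESS = "In process"
        POSTPONED = "Postponed"
        REJECTED = "Rejected"
        SOLVED = "Solved"
        RETESTING = "Retesting"
        DONE = "Done"

    @dataclass
    class DefectReport:
        project: str                # project or system name
        defect_id: int              # unique identification of the defect
        summary: str                # brief characterisation
        submitter: str
        test_level: str             # phase or test level, e.g. "system test"
        severity: Severity
        priority: int               # 1 = immediate, 2 = current release, 3 = later release
        cause: str                  # e.g. "TB", "S", "DOC" or "TIS"
        test_object: str            # (part of) the test object, incl. version
        test_specification: str     # reference to the test case
        description: str
        appendices: list = field(default_factory=list)
        defect_solver: str = ""
        solution_notes: str = ""
        solved_in_product: str = ""
        status: Status = Status.NEW
        status_date: date = field(default_factory=date.today)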

Possible extensions

Besides the above fields, various other fields may be added to the defect report. The advantages of including one or more of the fields below are better management and more insight into the quality and trends. The disadvantages are the extra administration and complexity. Experience shows that the advantages far outweigh the disadvantages in medium-sized and large test projects, or in cases in which a lot of communication on the defects between the various parties is necessary.

  • Identification of the test environment; The test environment used, with identification of the starting situation used.
  • Identification of the test basis; The test basis used: name of the test basis document, including version number, supplemented if necessary with specific-requirement number.
  • Provisional severity category; The severity category as provisionally proposed by the tester.
  • Provisional priority; The priority of solution as provisionally proposed by the tester.
  • Provisional cause; The cause of the defect as provisionally estimated by the tester.
  • Quality characteristic; The quality characteristic established by the tester, to which the defect relates.

In connection with the solution:

  • Definitive severity; The definitive severity category as determined in the defects consultation.
  • Definitive priority; The definitive priority of solution as determined in the defects consultation.
  • Definitive cause; The definitive cause of the defect as determined in the defects consultation.
    Besides the categories mentioned for the minimum defect report, the category of "Testing" is added here.
  • Deadline or release for required solution; A date or product release is set, by which the defect should be solved.

In connection with retesting:

  • Retester; The name of the tester who carries out the retest.
  • Identification of the test environment; The test environment used, with identification of the starting point used.
  • Identification of test basis; The test basis used: name of the test basis document, including version number, if necessary supplemented with specific-requirement number.
  • Identification of test object; The (part of the) test object that was retested. The version number or version date of the test object is also stated.

In addition, comments fields may be added for the test, the defects consultation and the retest, in which extra information may optionally be supplied, e.g. on corresponding defects or on the identification of the change proposal through which the handling of the defect is transferred to another procedure.