Acceptance criteria

Identifying accepters, using acceptance criteria and other information providers

Usually the client is not the only stakeholder who has to accept the system; there are generally others, and it is important to clarify who these accepting parties are. This is done in consultation with the client. In practice, the test manager gets an opportunity here to discuss matters with stakeholders at a high level in the organisation (steering group members) and to interpret their opinions and expectations. Often there is no other opportunity for this, unless the test manager is in the (unfortunately) rare position of regularly participating in the steering group discussions.

It is important to establish which accepters are to be provided with information, directly or indirectly, during the project by means of test reports. It should also be clear what requirements or acceptance criteria each accepter is proposing. These are the minimum qualitative requirements that the product must meet to be satisfactory to that accepter. For the sake of clarity: gathering acceptance criteria is not the responsibility of the testers, but it is input into the setup of the test process. Acceptance criteria can be very diverse. Some examples are: 

  • Qualitative criteria as regards product and generation process, e.g. the number of defects that may remain open
  • Criteria as regards the environment, e.g. the infrastructure should be installed or the users should have followed a training course 
  • Criteria in the form of (the detailing of) requirements of the product, e.g. 'an order should be processed within X seconds'.

Not all the acceptance criteria are relevant to testing. The first example has a considerable overlap with the exit criteria for the test process. The second example is usually less important to testing, and the third example is a form of test basis. 

In more detail

Acceptance criteria pitfall

This latter use of acceptance criteria carries a danger. In practice, the following sometimes happens: after the requirements have been established and frozen, users discover that they have additional requirements. They then formulate these requirements as acceptance criteria. In this way, acceptance criteria become a 'back door' for bringing in yet more requirements. This is not a sound way of working; the only correct route is to submit a change proposal to a Change Control Board. 

Besides accepters, various other parties/individuals can supply the test process with relevant information. Bear in mind, for example: 

In more detail

  • The overall test manager, at coordinating level, for obtaining insight into the test assignment and what is expected of the test or the test manager 
  • The (representatives of the) client, for obtaining insight into the business aims and the 'culture' as well as the aims and strategic importance of the system 
  • The project manager or quality management employee, for obtaining insight into the steps and components of the development process and the correlations, with special focus on the (expected) place of testing in this 
  • The domain experts from the user organisation, for obtaining insight into the (required) functionality of the system
  • The designers, for obtaining insight into the system functionality to be developed
  • System administrators, for obtaining insight into the (future) production environment of the information system
  • Testers, for obtaining insight into the test approach and test maturity of the organisation
  • The suppliers of the test basis, the test object and the infrastructure, for guaranteeing coordination at an early stage among the various stakeholders. 

Exit criteria

Exit criteria can relate, for example, to the number of issues in a particular risk category that may still be open, the way in which a certain risk is covered (e.g. all the system parts in the highest risk category have been tested using a formal test design technique), or the depth to which the requirements should have been tested. Normally, the exit criteria for a test level are imposed from within the master test plan. If that is not the case, or if there is no master test plan, the test manager should agree the criteria with the client. 

The box below shows a number of concrete examples of exit criteria:

System X may only be transferred to the AT when the following conditions have been met: 

  • There are no more open defects in the category 'severe'
  • There are at most 4 open defects in the category 'disrupting'
  • The total number of open defects is no more than 20
  • A workaround has been described for every open defect
  • For every user functionality, at minimum the correct paths have been tested and approved

System X may be transferred to the AT when it can be shown in writing that all the risks that were allocated to the ST in accordance with document Y have been tested to the agreed depth and with the agreed test method. 

An important point of focus with the above criteria is that all stakeholders should agree on clear definitions of what each severity category means and what is meant by 'the agreed depth of testing and test method'. In practice, a lack of clarity here can lead to heated discussions. 
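Numeric exit criteria such as those above lend themselves to an automated transfer check. The sketch below (a minimal illustration, not from the source; the severity names and defect fields are assumptions) evaluates each criterion against a list of open defects:

```python
# Sketch of an automated exit-criteria check for transferring system X
# to acceptance testing (AT). The thresholds mirror the example criteria;
# the defect representation is an illustrative assumption.

def exit_criteria_met(open_defects):
    """open_defects: list of dicts with 'severity' and 'has_workaround' keys.
    Returns a dict mapping each criterion to True/False."""
    severe = [d for d in open_defects if d["severity"] == "severe"]
    disrupting = [d for d in open_defects if d["severity"] == "disrupting"]
    return {
        "no open 'severe' defects": len(severe) == 0,
        "at most 4 open 'disrupting' defects": len(disrupting) <= 4,
        "at most 20 open defects in total": len(open_defects) <= 20,
        "workaround described for every open defect":
            all(d["has_workaround"] for d in open_defects),
    }

open_defects = [
    {"severity": "disrupting", "has_workaround": True},
    {"severity": "cosmetic", "has_workaround": True},
]
results = exit_criteria_met(open_defects)
print(all(results.values()))  # True: all exit criteria are met
```

Returning the result per criterion, rather than a single yes/no, supports the kind of reporting a test manager needs when discussing an unmet criterion with stakeholders.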

Similarities and differences between acceptance and exit criteria

'Acceptance criteria' is sometimes used as another term for exit criteria. Besides the fact that acceptance criteria may be a broader term than exit criteria, another difference is that acceptance criteria apply at the end, i.e. at acceptance, whereas exit criteria apply at the transfer from one test level to another, or to production. The figure below illustrates this. 

Exit and acceptance criteria


In more detail

Example of exit/acceptance criteria

In this example from practice, the exit and acceptance criteria overlap to a large extent.

The test approach and acceptance criteria are coordinated with the stakeholders (see section x.y). Two levels of acceptance can be distinguished: 

  1. Acceptance of system XYZ by all stakeholders. This involves releasing the products for production.
  2. Individual acceptance, by the client(s) of each test level, of the test level executed.

For level 1 acceptance, the test must be executed according to the agreed test strategy, and the following guidelines must be respected for any defects found. The products of XYZ can be taken into production (are accepted) if: 

  • There are no category A defects.
  • A patch or workaround is available for category B defects. The developer must also submit a plan stating how and when the problem will be solved structurally. 
  • A document is available for category C defects, explaining how these non-critical defects are dealt with.

For level 2 acceptance, the test team (responsible for the relevant test level) will be discharged if the aims, as defined globally in section y.z and further specified in any detailed test plans, are achieved. This is also a Go/No Go decision for starting the execution of the following test level. 
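The level 1 guidelines above amount to a release gate. A minimal sketch in Python (the category names match the example; the defect field names are illustrative assumptions, not from the source):

```python
# Sketch of the level 1 acceptance guidelines as an automated release gate.
# The field names ('patch_or_workaround', 'resolution_plan',
# 'handling_documented') are illustrative assumptions.

def accepted_for_production(defects):
    """Return True if the open defects satisfy the level 1 guidelines."""
    for d in defects:
        if d["category"] == "A":
            return False  # no category A defects allowed at all
        if d["category"] == "B" and not (
            d.get("patch_or_workaround") and d.get("resolution_plan")
        ):
            return False  # B: workaround plus a structural-solution plan required
        if d["category"] == "C" and not d.get("handling_documented"):
            return False  # C: handling of the defect must be documented
    return True

open_defects = [
    {"category": "B", "patch_or_workaround": True, "resolution_plan": True},
    {"category": "C", "handling_documented": True},
]
print(accepted_for_production(open_defects))  # True: the gate passes
```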

Suspend and resume criteria

In some tests, particularly formally organised ones, so-called suspend and resume criteria may be defined in the plan. These criteria indicate under which circumstances testing is temporarily suspended and subsequently resumed. Examples of suspend criteria are that testing has to stop when a particular infrastructural component is not available, or when a test-blocking defect is found. A resume criterion may be that, once the suspend condition is lifted, the testing of the affected system part/function/component has to take place entirely anew. 
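The interplay between a suspend criterion and a "retest entirely anew" resume criterion can be sketched as follows (a hypothetical illustration; the blocking check and test representation are assumptions, not a real test-framework API):

```python
# Sketch: a component's test run honouring a suspend criterion. When the
# blocking condition occurs, the run is suspended and partial results are
# discarded, so that on resumption the component is retested entirely anew.

def run_component_tests(tests, is_blocked):
    """tests: list of (name, callable) pairs; is_blocked: suspend criterion."""
    completed = []
    for name, test in tests:
        if is_blocked():
            # Suspend criterion met: drop partial results; the resume
            # criterion requires the whole component to be retested.
            return {"status": "suspended", "completed": []}
        test()
        completed.append(name)
    return {"status": "done", "completed": completed}

tests = [("login", lambda: None), ("place_order", lambda: None)]
print(run_component_tests(tests, lambda: False))
# {'status': 'done', 'completed': ['login', 'place_order']}
```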


Acceptance criteria in a High Performance delivery model:

Building Block (DevOps)