This section describes the test estimating technique test point analysis (TPA). Test point analysis makes it possible to estimate a system test or acceptance test in an objective manner. Development testing is an implicit part of the development estimate and is therefore outside the scope of TPA. To apply TPA, the scope of the information system must be known. To this end, the results of a function point analysis (FPA) are used. FPA is a method that makes it possible to measure, independently of technology, the scope of the functionality provided by an automated system, and to use that measurement as a basis for productivity measurement, estimating the required resources, and project control. The productivity factor in function point analysis does include the development tests, but not the acceptance and system tests.
Test point analysis can also be used if the number of test hours to be invested is determined in advance. By executing a test point analysis, any possible risks incurred can be demonstrated clearly by comparing the objective test point analysis estimate with the number of test hours determined in advance. A test point analysis can also be used to calculate the relative importance of the various functions, based on which the available test time can be used as optimally as possible. Test point analysis can also be used to create a global test estimate at an early stage.
When establishing a test estimate in the framework of an acceptance or system test, three elements play a role: the size of the information system, the test strategy, and productivity.
The first two elements together determine the size of the test to be executed (expressed as test points). A test estimate in hours results if the number of test points is multiplied by the productivity (the time required to execute a specific test depth level). The three elements are elaborated in detail below.
Size in this context means the size of the information system. In test point analysis, this figure is based primarily on the number of function points. A number of additions and/or adjustments must be made in order to arrive at the figure for the test point analysis. This is because a number of factors can be distinguished during testing that play little or no part when determining the number of function points, but are vital to testing.
These factors are:
During system development and maintenance, quality requirements are specified for the information system. During testing, the extent to which the specified quality requirements are complied with must be established. However, there is never an unlimited quantity of test resources and test time. This is why it is important to relate the test effort to the expected product risks. We use a product risk analysis to establish, among other things, test goals, relevant characteristics per test goal, object parts to be distinguished per characteristic, and the risk class per characteristic/object part. The result of the product risk analysis is then used to establish the test strategy. A combination of a characteristic/object part from a high risk class will often require heavy-duty, far-reaching tests and therefore a relatively great test effort when translated to the test strategy. The test strategy represents input for the test point analysis. In test point analysis, the test strategy is translated to the required test time.
In addition to the general quality requirements for the information system, there are differences in quality requirements between the various functions. The reliable operation of some functions is vital to the business process; it is for these functions that the information system was developed. From a user’s perspective, a function that is used intensively all day may be much more important than a processing function that runs at night. There are therefore two (subjective) factors per function that determine the depth: the user importance of the function and the intensity of use. The depth, as it were, indicates the level of certainty about, or insight into, the quality that is required by the client. Obviously the factors user importance and intensity of use are based on the test strategy.
The test strategy tells us which combinations of characteristic/object part must be tested with what thoroughness. Often, a quality characteristic is selected as characteristic. The test point analysis also uses quality characteristics, which means that it is closely related to the test strategy and generally is performed simultaneously in actual practice.
TPA has many parameters that determine the required number of hours. The risk classes from the test strategy can be translated readily to these parameters. Generally, the TPA parameters have three values, which can then be linked to the three risk classes from the test strategy (risk classes A, B and C).
If no detailed information is available to divide the test object into the various risk classes, the following division can be used:
25% risk class A
50% risk class B
25% risk class C
This division must then be used as the starting point for a TPA.
This concept is not new to people who have already made estimates based on function points. In function point analysis, productivity establishes the relation between effort hours and the measured number of function points. For test point analysis, productivity means the time required to realise one test point, determined by the size of the information system and the test strategy. Productivity consists of two components: the skill factor and the environment factor. The skill factor is based primarily on the knowledge and skills of the test team. As such, the figure is organisation-specific and even person-specific. The environment factor shows the extent to which the environment has an impact on the test activities to which the productivity relates. This involves aspects such as the availability of test tools, experience with the test environment in question, the quality of the test basis, and the availability of testware, if any.
Schematically, this is how test point analysis works:
Based on the number of function points per function, the function-dependent factors (complexity, impact, uniformity, user importance and intensity of use), and the quality requirements and/or test strategy relating to the quality characteristics that must be measured dynamically, the number of test points necessary to test the dynamically measurable quality characteristics is established per function (dynamically measurable means that an opinion on a specific quality characteristic can be formed by executing programs). Summing these test points over the functions results in the number of dynamic test points.
Based on the total number of function points of the information system and the quality requirements and/or test strategy relating to the static quality characteristics, the number of test points that is necessary to test the statically measurable quality characteristics is established (static testing: testing by verifying and investigating products without executing programs). This results in the number of static test points.
The total number of test points is realised by adding the dynamic and static test points.
The primary test hours are then calculated by multiplying the total number of test points by the calculated environment factor and the applicable skill factor. The number of primary test hours represents the time necessary to execute the primary test activities. In other words, the time that is necessary to execute the test activities for the phases Preparation, Specification, Execution and Completion of the TMAP life cycle.
The number of hours that is necessary to execute secondary test activities from the Control and Setting up and maintaining infrastructure phases (additional hours) is calculated as a percentage of the primary test hours.
Finally, the total number of test hours is obtained by adding the number of additional hours to the number of primary test hours. The total number of test hours is an estimate for all TMAP test activities, with the exception of creating the test plan (Planning phase).
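As a rough illustration of this flow, the sketch below (in Python) chains the steps together. The function and variable names are illustrative choices, not TMAP terminology, and the input figures are assumed.

def total_test_hours(dynamic_test_points, static_test_points,
                     skill_factor, environment_factor, supplement_percentage):
    # The dynamic and static test points are assumed to have been determined already.
    total_test_points = dynamic_test_points + static_test_points
    # Primary hours: total test points multiplied by the skill and environment factors.
    primary_hours = total_test_points * skill_factor * environment_factor
    # Secondary activities (Control, Setting up and maintaining infrastructure)
    # are added as a percentage of the primary hours.
    additional_hours = primary_hours * supplement_percentage / 100
    return primary_hours + additional_hours

# Assumed example figures: 100 dynamic and 3 static test points, skill factor 1.0,
# environment factor 0.95 and a 12% supplement.
print(round(total_test_hours(100, 3, 1.0, 0.95, 12), 1))  # 109.6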
The following principles apply in relation to test point analysis:
Test point analysis assumes one complete retest on average when determining the estimate. This average is a weighted average based on the size of the functions, expressed as test points.
To estimate the project size, the COSMIC (COmmon Software Measurement International Consortium) Full Function Points (CFFP) approach is used more and more often in addition to the Function Point Analysis (FPA) approach [Abran, 2003]. FPA was created in a period in which only a mainframe environment existed and moreover relies heavily on the relationship between functionality and the data model. CFFP, however, also takes account of other architectures, such as client-server and multi-tier, and of development methods such as object oriented, component based, and RAD. The following rule of thumb can be used to convert CFFPs to function points (FPs):
if CFFP < 250: FP = CFFP
if 250 ≤ CFFP ≤ 1000: FP = CFFP / 1.2
if CFFP > 1000: FP = CFFP / 1.5
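For convenience, this rule of thumb can be captured in a small function. A minimal Python sketch (the name cffp_to_fp is an illustrative choice):

def cffp_to_fp(cffp):
    # Rule-of-thumb conversion from COSMIC Full Function Points to function points.
    if cffp < 250:
        return cffp
    if cffp <= 1000:
        return cffp / 1.2
    return cffp / 1.5

print(cffp_to_fp(200))   # 200
print(cffp_to_fp(600))   # 500.0
print(cffp_to_fp(1500))  # 1000.0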
To perform a test point analysis, one must have a functional design. The functional design must include detailed process descriptions and a logical data model, preferably including a CRUD matrix. Moreover, a function point count must have been executed, for example according to IFPUG. Such a function point count can be used as input for TPA. It is important to use only one function point method when determining the skill factor, not multiple methods combined. In the function point count, the number of gross function points is taken as the starting point. Which function point method is used is not important when determining the test points; it will, however, have an impact on the skill factor.
The following modifications must be made to the function point count for TPA:
If no function point count is available and you wish to make one (for TPA), the following guideline can be used to determine the time required to count the function points: determine the number of TOSMs using one of the methods described in Estimation based on test object size and divide it by 400. The outcome is an estimate of the number of days needed to count the function points. Note: as a rule, 350 to 400 function points can be counted in a day.
Number of function points (FPf)
An information system has two user functions and one internal logical data collection:
The internal logical data collection ‘data’ has 7 function points and is allocated to the entry function in the context of test point analysis.
(FPf = function points per function)
The number of dynamic test points is the sum of the number of test points per function in relation to dynamically measurable quality characteristics.
The number of test points is based on two types of factors: the function-dependent factors (Df) and the dynamically measurable quality characteristics (Qd).
The FPA function is used as a unit of function. When determining the user importance and intensity of use, the focus is on the user function as a communication resource. The importance the users attach to the user function also applies to all of the underlying FPA functions.
The function-dependent factors are described below, including the associated weights. Only one of the three described values can be selected (i.e. intermediate values are not allowed). If too little information is available to classify a certain factor, it must be given the nominal value.
User importance is defined as the relative importance the user attaches to a specific function in relation to the other functions in the system. As a rule of thumb, around 25% of the functions should be in the category “high”, 50% in the category “neutral”, and 25% in the category “low”. User importance is allocated to the functionality as experienced by the user, which means allocating the user importance to the user function. Of course, the user importance of a function must be determined in consultation with the client and other representatives of the user organisation.
Intensity of use is defined as the frequency at which a certain function is used by the user and the size of the user group that uses that function. As with user importance, intensity of use is allocated to functionality as experienced by users, i.e. the user functions.
System impact is the level at which a mutation that occurs in the relevant function has an impact on the system. The level of impact is determined by assessing the logical data collections (LDCs) that the function can mutate, as well as the number of other functions (within the system boundaries) that access those LDCs. The impact is assessed using a matrix that shows the number of LDCs mutated by the function on the vertical axis, and the number of other functions accessing these LDCs on the horizontal axis. A function counts several times in terms of impact when it accesses multiple LDCs that are all maintained by the function in question.
Explanation: L = Low impact, M = Medium impact, H = High impact.
If a function does not mutate any LDCs, it has a low impact. A CRUD matrix is very useful when determining the system impact.
The complexity of a function is assessed on the basis of its algorithm. The global structure of the algorithm may be described by means of pseudo code, Nassi-Shneidermann or regular text. The level of complexity of the function is determined by the number of conditions in the algorithm of that function. When counting the number of conditions, only the processing algorithm must be taken into account. Conditions resulting from database checks, such as validations by domain or physical presence, are not included since they are already incorporated implicitly in the function point count.
As such, the complexity can be determined simply by counting the number of conditions. Composite conditions, such as IF a AND b THEN, count double for complexity, because two IF statements would be needed without the AND operator. Likewise, a CASE statement with n cases counts for n-1 conditions, because replacing the CASE statement by successive IF statements would result in n-1 conditions. In summary: count the conditions, not the operators.
In three types of situation, a function counts for only 60%:
The uniformity factor is given the value 0.6 if one of the above conditions is met, otherwise it is given the value 1.
In an information system, there can be functions that have a certain level of uniformity in the context of testing, but are marked as unique in the function point analysis. In the function point analysis, being unique means:
In addition, there are functions in an information system that are said to be fully uniform in the context of function point analysis and are therefore not allocated any function points, but must be counted in the testing because they do require testing. These are the clones and dummies.
The factor (Df) is determined by establishing the sum of the values of the first four function-dependent variables (user importance, intensity of use, system impact and complexity) and dividing it by 20 (the nominal value). The result of this calculation must then be multiplied by the value of the uniformity factor. The Df factor is determined per function.
Df = ((Ui + Iu + Si + C) / 20) * U

Df = weight factor of the function-dependent factors
Ui = user importance
Iu = intensity of use
Si = system impact
C = complexity
U = uniformity
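A minimal Python sketch of this calculation; the ratings passed in are assumed values, since the actual weights come from the TPA rating tables.

def weight_factor_df(ui, iu, si, c, u=1.0):
    # Df = ((Ui + Iu + Si + C) / 20) * U
    return ((ui + iu + si + c) / 20) * u

# Assumed ratings that happen to sum to the nominal 20, with uniformity 0.6:
print(weight_factor_df(6, 4, 4, 6, u=0.6))  # 0.6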
If functions for error messages, help screens and/or menu structure are present in the function point count – which often is the case – they must be valued as follows:
Determining the function-dependent variables (Df)
Below, we describe how the requirements specified for the dynamically measurable quality characteristics are incorporated into the test point analysis. In relation to the dynamically measurable quality characteristics, TPA distinguishes between quality characteristics that can be measured explicitly and/or implicitly.
The following can be measured dynamically explicitly:
The weight of the quality requirements must be rated for each quality characteristic in the context of the test to be executed, by means of a score, possibly per sub-system.
The quality characteristics that are measured dynamic explicit have the following weight factors:
It must be determined which of the relevant quality characteristics (distinguished in the test strategy) will be tested dynamic implicit. A statement about these quality characteristics can be made by collecting statistics during test execution. For example, performance can be measured explicitly, by means of a real-life test, or implicitly, by collecting statistics.
The quality characteristics to be measured dynamic implicit must be specified. The number of quality characteristics can then be determined. The weight is 0.02 per characteristic for Qd. In principle, every quality characteristic can be tested dynamic implicit.
The score given to each dynamic explicit measurable quality characteristic is divided by four (the nominal value) and then multiplied by its weight factor.
The sum of the figures obtained this way is calculated. If certain quality characteristics were earmarked for dynamic implicit testing, the associated weight (0.02 per characteristic) must be added to the above sum. The figure obtained this way is the Qd factor. Usually, the Qd factor is established for the total system once. However, if the strategy differs per sub-system, the Qd factor must be determined per sub-system.
Determining the dynamically measurable quality characteristics (Qd)
The following are measured dynamic implicit:
Qd = 0.94 + 0.05 + (3 * 0.02) = 1.05
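The example figure can be reproduced with a small Python sketch; the score/weight pairs below are assumptions chosen to yield the 0.94 and 0.05 contributions, not values taken from the weight factor table.

def qd_factor(explicit, implicit_count):
    # Each explicit score is divided by 4 (the nominal value) and multiplied by
    # its weight factor; each implicitly measured characteristic adds 0.02.
    return sum(score / 4 * weight for score, weight in explicit) + 0.02 * implicit_count

# Assumed pairs: (score 5, weight 0.75) -> 0.94 and (score 2, weight 0.10) -> 0.05,
# plus three characteristics measured dynamic implicit.
print(round(qd_factor([(5, 0.75), (2, 0.10)], 3), 2))  # 1.05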
The number of dynamic test points is the sum of the number of test points per function. The number of test points per function can be established by entering the values determined above into the formula below:
TPf = FPf * Df * Qd

TPf = the number of test points per function
FPf = the number of function points per function
Df = weight factor of the function-dependent factors
Qd = weight factor of the dynamic quality characteristics
Calculation of total number of dynamic test points (∑TPf)
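A minimal Python sketch of this summation, with assumed figures for three functions and the Qd of 1.05 from the example above.

def dynamic_test_points(functions, qd):
    # functions is a list of (FPf, Df) pairs; TPf = FPf * Df * Qd is summed over them.
    return sum(fpf * df * qd for fpf, df in functions)

# Assumed functions: (7 FP, Df 1.15), (10 FP, Df 0.6) and (4 FP, Df 1.0).
print(round(dynamic_test_points([(7, 1.15), (10, 0.6), (4, 1.0)], 1.05), 1))  # 19.0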
The number of static test points naturally depends on the quality characteristics that require static testing (the Qs factor), but also on the total number of function points of the system. A static assessment of a large-scale information system simply takes more time than one of a simple information system.
For the relevant quality characteristics, it must be determined whether or not they will be tested statically. A statement about these quality characteristics is arrived at by means of a checklist. In principle, all quality characteristics can be tested statically with the aid of checklists. E.g. security can be measured either dynamically, with the aid of a semantic test, or statically, by assessing the security measures on the basis of a checklist.
If a quality characteristic is tested statically, the factor Qs will have a value of 16. For each subsequent quality characteristic to be included in the static test, another value of 16 is added to the Qs factor rating.
Calculation of static test points (Qs)
The following quality characteristic is measured statically (using a checklist): continuity.
Qs = 16
The number of test points of the total system can be established by entering the values determined above into the formula below:
TP = ∑TPf + ((FP * Qs) / 500)
TP = the number of test points of the total system
∑TPf = the sum of the number of test points per function (dynamic test points)
FP = number of function points of the total system (minimum value 500)
Qs = weight factor of static quality characteristics
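A minimal Python sketch of this formula, using assumed figures (19 dynamic test points, a system of 600 function points, and continuity tested statically, so Qs = 16).

def total_test_points(sum_tpf, fp, qs):
    # TP = sum of TPf + (FP * Qs) / 500, with FP never taken lower than 500.
    return sum_tpf + (max(fp, 500) * qs) / 500

print(total_test_points(19, 600, 16))  # 38.2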
The formula in the section above results in the total number of test points.
This is the measure for the scope of the primary test activities. These primary test points are multiplied by the skill factor and the environment factor to obtain the primary test hours. This represents the time that is necessary to execute the test activities for the Preparation, Specification, Execution and Completion phases of the TMAP model.
The skill factor indicates how many hours of testing are required per test point. The higher the skill factor, the greater the number of hours of testing.
The productivity with which the test object is tested on the basis of the test strategy depends primarily on the knowledge and skills of those executing the tests. It is also relevant to know whether people are testing part-time or full-time. Testing users who are deployed for test work for only part of the workday have to switch frequently between their day-to-day work and the test work, which often results in reduced productivity.
In practice, the following basic figures are used per test point:
The skill factor naturally varies per organisation and within that even per department/person. A factor can be obtained by analysing completed test projects. To make such an analysis, one must have access to experience figures for the test projects already realised.
The number of required test hours per test point is influenced not only by the skill factor, but by the environment factor as well. A number of environment variables are used to calculate this factor. The environment variables are described below, including the associated weights. Again, only one of the available values may be selected. If too little information is available to classify a certain variable, it must be given the nominal value.
The test tools factor involves the level to which the primary test activities are supported by automated test tools. Test tools can contribute to executing part of the test activities automatically and therefore faster. Their availability alone does not guarantee that, however; what matters is their effective use.
For this factor, the quality of the test executed earlier is important: when estimating an acceptance test this is the system test; when estimating a system test, the development test. The quality of the previous test co-determines the quantity of functionality that may be tested at a more limited level, as well as the lead time of the test execution. When the previous test is of higher quality, fewer progress-hindering defects will occur.
The test basis is awarded a factor representing the quality of the (system) documentation on which the test for execution must be based. The quality of the test basis has an impact in particular on the required time for the Preparation and Specification phases.
The environment in which the information system is realised. Of particular interest here is to what extent the development environment prevents errors and/or enforces certain things. If certain errors can no longer be made, clearly they do not require testing.
The extent to which the physical test environment in which the test is executed has proven itself. If an often used test environment is used, fewer disturbances and defects will occur during the Execution phase.
The level to which existing testware can be used during the test to be executed. The availability of effective testware has a particular impact on the time required for the Specification phase.
The environment factor (E) is determined by establishing the sum of the values of the environment variables (test tools, previous test, test basis, development environment, test environment, and testware) and dividing it by 21 (the nominal value). The environment factor E can be established for the total system once, but also per sub-system if necessary.
E = 20/21 = 0.95
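As a sketch, the calculation of E in Python; the six ratings are assumptions (the actual values come from the environment variable rating tables) chosen to sum to 20, as in the example.

def environment_factor(test_tools, previous_test, test_basis,
                       development_environment, test_environment, testware):
    # E = (sum of the six environment variable ratings) / 21 (the nominal value).
    return (test_tools + previous_test + test_basis
            + development_environment + test_environment + testware) / 21

# Assumed ratings summing to 20:
print(round(environment_factor(2, 4, 3, 4, 4, 3), 2))  # 0.95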
The number of primary test hours is obtained by multiplying the number of test points by the skill and environment factors:
PT = TP * S * E
PT = the total number of primary test hours
TP = the number of test points of the total system
S = skill factor
E = environment factor
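A minimal Python sketch, reusing the assumed figures from the earlier examples (38.2 test points, a skill factor of 1.0 as an assumption, and E = 20/21).

def primary_test_hours(tp, s, e):
    # PT = TP * S * E
    return tp * s * e

print(round(primary_test_hours(38.2, 1.0, 20 / 21), 1))  # 36.4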
Since every test process involves secondary activities from the Control phase and the Setting up and maintaining infrastructure phase, a supplement must be added to the primary test hours for this. This will eventually result in the total number of test hours. The number of supplemental hours is calculated as a percentage of the primary test hours.
The supplemental percentage is often determined by a test manager on the basis of experience or using historical data. Some organisations use a fixed percentage. The percentage is nearly always in the range of 5 to 20%.
If no experience, historical data or fixed percentages are available, a supplemental percentage can be estimated in the following way. A standard (nominal) supplemental percentage of 12% is used as the starting point. We must then look at factors that may increase or reduce the percentage.
Examples of such factors are the team size, the use of management tools, and the existence of a permanent test organisation.
These factors are explained below. Since test projects vary greatly, we have not attached seemingly precise absolute percentages to these factors, but have chosen to indicate whether their impact increases or reduces the supplemental percentage.
The team size represents the number of members in the test team (including the test manager and a test administrator, if any). A big team usually results in greater overhead and therefore a higher supplemental percentage. However, a small test team results in a reduced percentage:
For management tools, it is considered to what extent automated tools are used during the test activities for Control and Setting up and maintaining infrastructure. Examples of these tools are an automated:
If little use is made of automated tools, certain activities will have to be done manually. This increases the supplement percentage. If intensive use is made of automated tools, this will reduce the percentage:
There are many kinds of permanent test organisation (see section 8.3). If an organisation has one of these permanent test organisations, a test process that uses it often realises lead time reduction, cost savings and/or quality improvement.
The supplemental percentage is used to calculate the supplement (in hours) on the basis of the number of primary test hours. The total number of test hours is then obtained by adding the supplement calculated for Control and Setting up and maintaining infrastructure to the total number of primary test hours.
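A minimal Python sketch of this final step, continuing the assumed example (36.4 primary test hours and the nominal 12% supplement).

def total_hours(primary_hours, supplement_percentage=12):
    # Add the Control / infrastructure supplement to the primary test hours.
    return primary_hours * (1 + supplement_percentage / 100)

print(round(total_hours(36.4), 1))  # 40.8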
The figure below shows the TPA calculation example as a whole.
When using TMAP, the test process is split up into seven phases, and many clients will be interested in the estimate per phase in addition to the estimate for the entire test process.
TPA gives an estimate for the entire test process, with the exception of test plan creation (Planning phase).
In principle, the number of hours for the Control and Setting up and maintaining infrastructure phases is the number of supplement hours that was calculated from the primary test hours using the supplement percentage. These supplement hours must be divided between the two phases.
The primary test hours are divided over the other phases (Preparation, Specification, Execution, and Completion). The distribution of the primary test hours over the phases may naturally vary per organisation, and even within one organisation. A distribution applicable to the organisation can be obtained by analysing completed test projects. To make such an analysis, one must have access to experience figures for the test projects already realised.
Distribution of primary test hours
Preparation = 10%
Specification = 40%
Execution = 45%
Completion = 5%
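A short Python sketch of this distribution, applied to the assumed 36.4 primary test hours from the running example.

phase_distribution = {"Preparation": 0.10, "Specification": 0.40,
                      "Execution": 0.45, "Completion": 0.05}
primary_hours = 36.4
for phase, share in phase_distribution.items():
    print(f"{phase}: {primary_hours * share:.1f} hours")
# Preparation: 3.6, Specification: 14.6, Execution: 16.4, Completion: 1.8 hours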
Please refer to this wiki for other distributions based on practical experience.
Often, a project estimate for testing must be made at an early stage. In this case, it is not possible to establish factors like complexity, impact and so on, because no detailed functional specifications are available. However, there are approaches that can often be used to perform a rough test point analysis. By using one of the approaches below, the total number of (gross) function points can be estimated:
One function is then defined for the purpose of a rough test point analysis. This function has the size of the total number of defined (gross) function points. In principle, all function-dependent factors (user importance, intensity of use, complexity, impact and uniformity) are given the neutral value, so that Df = 1. A test point analysis can then be made as described in the previous sections. Usually assumptions will have to be made when determining the environment factor. When presenting the test estimate, it is important to describe these assumptions clearly.
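As a sketch of such a rough estimate in Python, under the stated assumption Df = 1; the remaining inputs (Qd, Qs, skill factor and environment factor) still have to be assumed or determined, and the example figures below are made up.

def rough_primary_test_hours(gross_fp, qd, qs, skill_factor, environment_factor):
    # One function with the size of the total number of gross function points,
    # all function-dependent factors neutral, so Df = 1.
    dynamic_tp = gross_fp * 1.0 * qd
    static_tp = (max(gross_fp, 500) * qs) / 500
    return (dynamic_tp + static_tp) * skill_factor * environment_factor

# Assumed example: 600 gross FP, Qd = 1.05, continuity static (Qs = 16),
# skill factor 1.0 and environment factor 0.95.
print(round(rough_primary_test_hours(600, 1.05, 16, 1.0, 0.95), 1))  # 616.7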
Related wikis:
Estimation based on ratios
Estimation based on test object size
Work Breakdown Structure
Evaluation estimation approach
Proportionate estimation
Extrapolation
Test Point Analysis (TPA)