This section describes the test estimation technique test point analysis (TPA). Test point analysis makes it possible to estimate a system test or acceptance test in an objective manner. Development testing is an implicit part of the development estimate and is therefore outside the scope of TPA. To apply TPA, the scope of the information system must be known. To this end, the results of a function point analysis (FPA) are used. FPA is a method for making a technology-independent measurement of the scope of the functionality provided by an automated system, and for using that measurement as a basis for productivity measurement, estimating the required resources, and project control. The productivity factor in function point analysis does include the development tests, but not the acceptance and system tests.
Test point analysis can also be used if the number of test hours to be invested has been determined in advance. By executing a test point analysis, any risks incurred can be demonstrated clearly by comparing the objective test point analysis estimate with the number of test hours determined in advance. A test point analysis can also be used to calculate the relative importance of the various functions, on the basis of which the available test time can be used as effectively as possible. Finally, test point analysis can be used to create a rough overall test estimate at an early stage.
Philosophy
When establishing a test estimate in the framework of an acceptance or system test, three elements play a role:
- The size of the information system that is to be tested.
- The test strategy (which object parts and quality characteristics must be tested, with what thoroughness and to what depth?).
- The productivity.
The first two elements together determine the size of the test to be executed (expressed as test points). A test estimate in hours results if the number of test points is multiplied by the productivity (the time required to execute a specific test depth level). The three elements are elaborated in detail below.
Size
Size in this context means the size of the information system. In test point analysis the figure for this is based primarily on the number of function points. A number of additions and/or adjustments must be made in order to arrive at the figure for the test point analysis, because a number of factors can be distinguished during testing that play little or no part when determining the number of function points, but are vital to testing.
These factors are:
- Complexity
How many conditions are present in a function? More conditions almost automatically mean more test cases and therefore a greater test effort.
- System impact
How many data collections are maintained by the function, and how many other functions use those collections? These other functions must also be tested if the maintaining function is modified.
- Uniformity
Is the structure of a function such that existing test specifications can be reused with no more than small adjustments? In other words, are there multiple functions with the same structure in the information system?
Test strategy
During system development and maintenance, quality requirements are specified for the information system. During testing, the extent to which the specified quality requirements are complied with must be established. However, there is never an unlimited quantity of test resources and test time. This is why it is important to relate the test effort to the expected product risks. We use a product risk analysis to establish, among other things, test goals, relevant characteristics per test goal, object parts to be distinguished per characteristic, and the risk class per characteristic/object part. The result of the product risk analysis is then used to establish the test strategy. A combination of a characteristic/object part from a high risk class will often require heavy-duty, far-reaching tests and therefore a relatively great test effort when translated to the test strategy. The test strategy represents input for the test point analysis. In test point analysis, the test strategy is translated to the required test time.
In addition to the general quality requirements for the information system, the quality requirements also differ between the various functions. The reliable operation of some functions is vital to the business process; these are the functions for which the information system was developed. From a user's perspective, a function that is used intensively all day may be much more important than a processing function that runs at night. Two (subjective) factors per function therefore determine the depth: the user importance of the function and the intensity of use. The depth, as it were, indicates the level of certainty about, or insight into, the quality that the client requires. Obviously, the factors user importance and intensity of use are based on the test strategy.
The test strategy tells us which combinations of characteristic/object part must be tested with what thoroughness. Often, a quality characteristic is selected as characteristic. The test point analysis also uses quality characteristics, which means that it is closely related to the test strategy and generally is performed simultaneously in actual practice.
Tip: Linking TPA parameters to test strategy risk classes
TPA has many parameters that determine the required number of hours. The risk classes from the test strategy can be translated readily to these parameters. Generally, the TPA parameters have three values, which can then be linked to the three risk classes from the test strategy (risk classes A, B and C). If no detailed information is available to divide the test object into the various risk classes, the following division can be used:

[table with the default division over risk classes A, B and C not recovered]

This division must then be used as the starting point for a TPA.
Productivity
This concept is not new to anyone who has already made estimates based on function points. In function point analysis, productivity establishes the relation between effort hours and the measured number of function points. For test point analysis, productivity means the time required to realise one test point, determined by the size of the information system and the test strategy. Productivity consists of two components: the skill factor and the environment factor. The skill factor is based primarily on the knowledge and skills of the test team; as such, the figure is organisation-specific and even person-specific. The environment factor shows the extent to which the environment has an impact on the test activities to which the productivity relates. This involves aspects such as the availability of test tools, experience with the test environment in question, the quality of the test basis, and the availability of testware, if any.
Overall operation
Schematically, this is how test point analysis works:
Based on the number of function points per function, the function-dependent factors (complexity, impact, uniformity, user importance and intensity of use), and the quality requirements and/or test strategy relating to the quality characteristics that must be measured dynamically, the number of test points needed to test the dynamically measurable quality characteristics is established per function (dynamically measurable means that an opinion on a specific quality characteristic can be formed by executing programs). Summing these test points over all functions results in the number of dynamic test points.
Based on the total number of function points of the information system and the quality requirements and/or test strategy relating to the static quality characteristics, the number of test points that is necessary to test the statically measurable quality characteristics is established (static testing: testing by verifying and investigating products without executing programs). This results in the number of static test points.
The total number of test points is realised by adding the dynamic and static test points.
The primary test hours are then calculated by multiplying the total number of test points by the calculated environment factor and the applicable skill factor. The number of primary test hours represents the time necessary to execute the primary test activities. In other words, the time that is necessary to execute the test activities for the phases Preparation, Specification, Execution and Completion of the TMAP life cycle.
The number of hours that is necessary to execute secondary test activities from the Control and Setting up and maintaining infrastructure phases (additional hours) is calculated as a percentage of the primary test hours.
Finally, the total number of test hours is obtained by adding the number of additional hours to the number of primary test hours. The total number of test hours is an estimate for all TMAP test activities, with the exception of creating the test plan (Planning phase).
Principles
The following principles apply in relation to test point analysis:
- Test point analysis is limited to the quality characteristics that are 'measurable'. Being measurable means that a test technique is available for the relevant quality characteristic. Moreover, sufficient practical experience must be available in relation to this test technique in terms of the relevant quality characteristic to make concrete statements about the required test effort.
- Not all possible quality characteristics that may be present are included in the current version of test point analysis. Reasons for this vary – there may be no concrete test technique available (yet), or there may be insufficient practical experience with a test technique and therefore insufficient reliable metrics available. Any subsequent version of test point analysis may include more quality characteristics.
- In principle, test point analysis is not linked to a person. In other words, different persons executing a test point analysis on the same information system should, in principle, create the same estimate. This is achieved by letting the client determine all factors that cannot be classified objectively and using a uniform classification system for all factors that can.
- Test point analysis can be performed if a function point count according to IFPUG [IFPUG, 1994] is available; gross function points are used as the starting point.
- Test point analysis does not consider subject matter knowledge as a factor that influences the required test effort. It is of course important that the test team has a certain level of subject matter knowledge, but this is a precondition that must be fulfilled while creating the test plan.
- Test point analysis assumes one complete retest on average when determining the estimate. This average is a weighted average based on the size of the functions, expressed as test points.
Tip: From COSMIC full function points (CFFP) to function points (FP)
To estimate the project size, the COSMIC (COmmon Software Measurement International Consortium) Full Function Points (CFFP) approach is used more and more often in addition to the Function Point Analysis (FPA) approach [Abran, 2003]. FPA was created in a period in which only a mainframe environment existed and, moreover, relies heavily on the relationship between functionality and the data model. CFFP, however, also takes account of other architectures, such as client-server and multi-tier, and of development methods such as object oriented, component based, and RAD. The following rule of thumb can be used to convert CFFPs to function points (FPs):

[conversion rule of thumb not recovered]
TPA, the technique in detail
Input and starting conditions
To perform a test point analysis, one must have a functional design. The functional design must include detailed process descriptions and a logical data model, preferably including a CRUD matrix. Moreover, a function point count must have been executed, for example according to IFPUG; such a count can be used as input for TPA. In the function point count, the number of gross function points is taken as the starting point. Which function point method is used is not important when determining the test points; it does, however, have an impact on the skill factor, which is why only one function point method should be used when determining the skill factor, not multiple methods combined.
The following modifications must be made to the function point count for TPA:
- The function points of the (logical) data collections distinguished in the function point count must be allocated to the function(s) that handle(s) the input of the relevant (logical) collection.
- The function points of the interface data collections distinguished in the function point count must be allocated to the function (or possibly functions) that use(s) the relevant interface data collection.
- For FPA functions in the clone class, the number of function points that applies to the original FPA function is used. A clone is an FPA function that has already been specified and/or realised in another, or the same, user function in the project.
- For FPA functions in the dummy class, the number of function points is determined if possible; otherwise the FPA function is given the qualification average complexity and the corresponding number of function points. A dummy is an FPA function whose functionality does not have to be specified and/or realised because it is already available, having been specified/realised outside the project.
Tip: Estimating guideline for counting function points
If no function point count is available and you wish to make one (for TPA), the following guideline can be used to determine the time required to count the function points: determine the number of TOSMs using one of the methods described in Estimation based on test object size and divide it by 400. The outcome represents an estimate of the number of days necessary to count the function points.
Calculation example - part 1: Number of function points (FPf)
An information system has two user functions and one internal logical data collection:

[table with the user functions and their function points not recovered]

The internal logical data collection 'data' has 7 function points and is allocated to the entry function in the context of test point analysis.
(FPf = function points per function)
Dynamic test points
The number of dynamic test points is the sum of the number of test points per function in relation to dynamically measurable quality characteristics.
The number of test points is based on two types of factors:
- function-dependent factors (Df)
- factor representing the dynamically measurable quality characteristics (Qd).
The FPA function is used as a unit of function. When determining the user importance and intensity of use, the focus is on the user function as a communication resource. The importance the users attach to the user function also applies to all of the underlying FPA functions.
Function-dependent factors
The function-dependent factors are described below, including the associated weights. Only one of the three described values can be selected (i.e. intermediate values are not allowed). If too little information is available to classify a certain factor, it must be given the nominal value.
User importance
User importance is defined as the relative importance the user attaches to a specific function in relation to the other functions in the system. As a rule of thumb, around 25% of the functions should fall into the category "high", 50% into the category "neutral", and 25% into the category "low". User importance is allocated to the functionality as experienced by the user, which means that the user importance is allocated to the user function. Of course, the user importance of a function must be determined in consultation with the client and other representatives of the user organisation.
Weight | Description
---|---
3 | Low: the relative importance of the specific function in relation to the other functions is low.
6 | Neutral: the relative importance of the specific function in relation to the other functions is neutral.
9 | High: the relative importance of the specific function in relation to the other functions is high.
Intensity of use
Intensity of use is defined as the frequency at which a certain function is used by the user and the size of the user group that uses that function. As with user importance, intensity of use is allocated to functionality as experienced by users, i.e. the user functions.
Weight | Description
---|---
2 | Low: the function is executed by the user organisation just a few times per day or per week.
4 | Neutral: the function is executed by the user organisation many times per day.
8 | High: the function is executed continuously (at least 8 hours per day).
System impact
System impact is the level at which the mutations made by the relevant function have an impact on the rest of the system. The level of impact is determined by assessing the logical data collections (LDC's) to which the function can make mutations, as well as the number of other functions (within the system boundaries) that access those LDC's. The impact is assessed using a matrix that shows the number of LDC's mutated by the function on the vertical axis, and the number of other functions accessing these LDC's on the horizontal axis. When counting these other functions, a function counts several times if it accesses multiple LDC's that are all maintained by the function in question.
Number of LDC's mutated | Accessed by 1 function | Accessed by 2 - 5 functions | Accessed by > 5 functions
---|---|---|---
1 | L | L | M
2 - 5 | L | M | H
> 5 | M | H | H
Explanation: L = Low impact, M = Medium impact, H = High impact.
If a function does not mutate any LDC's, it has a low impact. A CRUD matrix is very useful when determining the system impact.
Weight | Description
---|---
2 | Low: the function has a low impact.
4 | Neutral: the function has a medium impact.
8 | High: the function has a high impact.
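To make the classification concrete, the sketch below looks up the system-impact weight from the matrix above; the function name, signature and the handling of a function whose mutated LDC's are accessed by no other function are assumptions for illustration, not part of TPA itself.

```python
def system_impact_weight(ldcs_mutated: int, accessing_functions: int) -> int:
    """Return the TPA system-impact weight: 2 (low), 4 (medium) or 8 (high)."""
    if ldcs_mutated == 0:
        return 2  # a function that mutates no LDC's has a low impact

    def bucket(n: int) -> int:
        # Map a count onto the matrix categories 1, 2-5 and >5.
        return 0 if n <= 1 else (1 if n <= 5 else 2)

    # Rows: number of LDC's mutated; columns: number of other functions
    # accessing those LDC's (L = low, M = medium, H = high impact).
    matrix = [
        ["L", "L", "M"],
        ["L", "M", "H"],
        ["M", "H", "H"],
    ]
    level = matrix[bucket(ldcs_mutated)][bucket(accessing_functions)]
    return {"L": 2, "M": 4, "H": 8}[level]
```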
Complexity
The complexity of a function is assessed on the basis of its algorithm. The overall structure of the algorithm may be described by means of pseudo code, Nassi-Shneiderman diagrams or plain text. The level of complexity of the function is determined by the number of conditions in the algorithm of that function. When counting the number of conditions, only the processing algorithm must be taken into account. Conditions resulting from database checks, such as validations by domain or physical presence, are not included since they are already incorporated implicitly in the function point count.
As such, the complexity can be determined simply by counting the number of conditions. Composite conditions, such as IF a AND b THEN, count double for complexity, because two IF statements would be needed without the AND operator. Likewise, a CASE statement with n cases counts as n-1 conditions, because replacing the CASE statement with successive IF statements would result in n-1 conditions. In summary: count the conditions, not the operators.
Weight | Description
---|---
3 | A maximum of 5 conditions are present in the function.
6 | 6 to 11 conditions are present in the function.
12 | More than 11 conditions are present in the function.
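As a minimal sketch (an assumed helper, not prescribed by TPA), the condition count can be mapped onto the complexity weight as follows:

```python
def complexity_weight(condition_count: int) -> int:
    """Map the number of conditions in a function's algorithm to its TPA weight.

    Count the conditions, not the operators: a composite condition such as
    'IF a AND b' counts double, and a CASE statement with n cases counts
    as n - 1 conditions.
    """
    if condition_count <= 5:
        return 3
    if condition_count <= 11:
        return 6
    return 12
```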
Uniformity
In three types of situation, a function counts for only 60%:
- A nearly unique function occurring a second time – in this case, the test specifications that are to be defined can be largely reused.
- Clones – in this case, too, the test specifications that are to be defined can be reused.
- Dummy functions – but only if reusable test specifications for the dummy exist.
The uniformity factor is given the value 0.6 if one of the above conditions is met, otherwise it is given the value 1.
In an information system, there can be functions that have a certain level of uniformity in the context of testing, but are marked as unique in the function point analysis. In the function point analysis, being unique means:
- A unique combination of data collections in relation to the other input functions.
- Not a unique combination of data collections, but another logical processing method (e.g. updating a data collection another way).
In addition, there are functions in an information system that are said to be fully uniform in the context of function point analysis and are therefore not allocated any function points, but must be counted in the testing because they do require testing. These are the clones and dummies.
Calculation method
The factor (Df) is determined by establishing the sum of the values of the first four function-dependent variables (user importance, intensity of use, system impact and complexity) and dividing it by 20 (the nominal value). The result of this calculation must then be multiplied by the value of the uniformity factor. The Df factor is determined per function.
Df = ((Ui + Iu + Si + C) / 20) * U
Df = weight factor of the function-dependent factors
Ui = user importance
Iu = intensity of use
Si = system impact
C = complexity
U = uniformity
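A small sketch of this calculation in Python (the function name and defaults are assumptions for illustration):

```python
def df_factor(user_importance: int, intensity_of_use: int,
              system_impact: int, complexity: int,
              uniformity: float = 1.0) -> float:
    """Df = ((Ui + Iu + Si + C) / 20) * U, determined per function."""
    return ((user_importance + intensity_of_use +
             system_impact + complexity) / 20) * uniformity

# The standard functions in the next subsection (Ui=6, Iu=8, Si=4, C=3, U=1)
# give Df = 21 / 20 = 1.05.
```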
Standard functions
If functions for error messages, help screens and/or menu structure are present in the function point count – which often is the case – they must be valued as follows:
Function | FPs | Ui | Iu | Si | C | U | Df |
---|---|---|---|---|---|---|---|
Error messages | 4 | 6 | 8 | 4 | 3 | 1 | 1.05 |
Help screens | 4 | 6 | 8 | 4 | 3 | 1 | 1.05 |
Menu structure | 4 | 6 | 8 | 4 | 3 | 1 | 1.05 |
Calculation example - part 2: Determining the function-dependent factors (Df)

[table with the Df determination per function not recovered]

(In this example, it is assumed that the valuations of the factors system impact and complexity are identical for the FPA functions within a user function.)
Dynamically measurable quality characteristics
Below, we describe how the requirements specified for the dynamically measurable quality characteristics are incorporated into the test point analysis. In relation to the dynamically measurable quality characteristics, TPA distinguishes between quality characteristics that can be measured explicitly and/or implicitly.
The following can be measured dynamically explicitly:
- functionality
- security
- effectivity/suitability
- performance
- portability.
The quality requirements must be rated for each quality characteristic in the context of the test to be executed, by means of a score, possibly per sub-system.
Score | Description
---|---
0 | Not important – not measured.
3 | Low quality requirements – attention must be devoted to it in the test.
4 | Regular quality requirements – usually applicable if the information system relates to a support process.
5 | High quality requirements – usually applicable if the information system relates to a primary process.
6 | Extremely high quality requirements.
The quality characteristics that are measured dynamically explicitly have the following weight factors:

Quality characteristic | Weight factor
---|---
Functionality | 0.75
Security | 0.05
Effectivity | 0.10
Performance | 0.05
Portability | 0.05
It must be determined which relevant quality characteristics (distinguished in the test strategy) will be tested dynamically implicitly. A statement about these quality characteristics can be made by collecting statistics during test execution. For example, performance can be measured explicitly, by means of a real-life test, or implicitly, by collecting statistics.
The quality characteristics to be measured dynamically implicitly must be specified, after which their number can be determined. The weight for Qd is 0.02 per characteristic. In principle, every quality characteristic can be tested dynamically implicitly.
Calculation method (Qd)
The score given to each dynamically explicitly measurable quality characteristic is divided by four (the nominal value) and then multiplied by its weight factor.
The figures obtained this way are then summed. If certain quality characteristics were earmarked for dynamically implicit testing, the associated weight (0.02 per characteristic) must be added to this sum. The figure obtained this way is the Qd factor. Usually, the Qd factor is established once for the total system. However, if the strategy differs per sub-system, the Qd factor must be determined per sub-system.
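The sketch below illustrates this calculation; the dictionary layout and function name are assumptions, while the weight factors come from the table above.

```python
EXPLICIT_WEIGHTS = {
    "functionality": 0.75,
    "security": 0.05,
    "effectivity": 0.10,
    "performance": 0.05,
    "portability": 0.05,
}

def qd_factor(explicit_scores: dict, implicit_count: int) -> float:
    """explicit_scores maps a characteristic to its score (0, 3, 4, 5 or 6);
    implicit_count is the number of characteristics measured dynamically implicitly."""
    explicit = sum((score / 4) * EXPLICIT_WEIGHTS[name]
                   for name, score in explicit_scores.items())
    return explicit + 0.02 * implicit_count

# For example, a functionality score of 5, a security score of 4 and three
# implicitly measured characteristics give Qd = 0.94 + 0.05 + 0.06 = 1.05,
# matching the running calculation example.
```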
Calculation example - part 3: Determining the dynamically measurable quality characteristics (Qd)
The following are measured dynamically implicitly:

[list of three quality characteristics not recovered]

Qd = 0.94 + 0.05 + (3 * 0.02) = 1.05
Formula for dynamic test points
The number of dynamic test points is the sum of the number of test points per function. The number of test points per function is established by entering the values now known into the formula below:
TPf = FPf * Df * Qd
TPf = the number of test points per function
FPf = the number of function points per function
Df = weight factor of the function-dependent factors
Qd = weight factor of the dynamic quality characteristics
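A brief sketch of this summation (the data structure is an assumption):

```python
def dynamic_test_points(functions, qd: float) -> float:
    """functions is an iterable of (FPf, Df) pairs; returns the sum of
    TPf = FPf * Df * Qd over all functions."""
    return sum(fpf * df * qd for fpf, df in functions)
```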
Calculation example - part 4: Calculation of the total number of dynamic test points (∑TPf)

[table with the TPf calculation per function not recovered; the total comes to 32 dynamic test points]
Static test points
The number of static test points naturally depends on the quality characteristics that require static testing (the Qs factor), but also on the total number of function points of the system. A static assessment of a large information system simply takes more time than that of a small one.
For the relevant quality characteristics, it must be determined whether or not they will be tested statically. A statement about these quality characteristics is arrived at by means of a checklist. In principle, all quality characteristics can be tested statically with the aid of checklists. E.g. security can be measured either dynamically, with the aid of a semantic test, or statically, by assessing the security measures on the basis of a checklist.
Calculation method (Qs)
If a quality characteristic is tested statically, the factor Qs will have a value of 16. For each subsequent quality characteristic to be included in the static test, another value of 16 is added to the Qs factor rating.
Calculation example - part 5: Calculation of static test points (Qs)
The following quality characteristic is measured statically (using a checklist): continuity.
Qs = 16
Total number of test points
The number of test points of the total system can be established by entering the values now known into the formula below:
TP = ∑TPf + ((FP * Qs) / 500)
TP = the number of test points of the total system
∑TPf = the sum of the number of test points per function (dynamic test points)
FP = the number of function points of the total system (minimum value 500)
Qs = weight factor of static quality characteristics
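A sketch of this formula, including the minimum of 500 function points (names are illustrative only):

```python
def total_test_points(dynamic_tp: float, total_fp: float, qs: float) -> float:
    """TP = sum(TPf) + (FP * Qs) / 500, with FP at least 500."""
    fp = max(total_fp, 500)
    return dynamic_tp + (fp * qs) / 500

# Running example: 32 dynamic test points, 500 function points and Qs = 16
# give TP = 32 + (500 * 16) / 500 = 48.
```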
Calculation example - part 6: Calculation of the total number of test points (TP)
TP = 32 + ((500 * 16) / 500) = 48
Primary test hours
The formula in the section above results in the total number of test points.
This is the measure of the scope of the primary test activities. The test points are multiplied by the skill factor and the environment factor to obtain the primary test hours. This represents the time that is necessary to execute the test activities for the Preparation, Specification, Execution and Completion phases of the TMAP model.
Skill factor
The skill factor indicates how many hours of testing are required per test point. The higher the skill factor, the greater the number of test hours required.
The productivity with which the test object is tested on the basis of the test strategy depends primarily on the knowledge and skills of those executing the tests. It is also relevant whether people test part-time or full-time. Users who are deployed for test work for only part of the working day switch frequently between their day-to-day work and the test work, which often reduces productivity.
In practice, the following basic figures are used per test point:
- 1-2 hours for a tester, depending on knowledge and skills
- 2-4 hours for a user, depending on experience.
The skill factor naturally varies per organisation and within that even per department/person. A factor can be obtained by analysing completed test projects. To make such an analysis, one must have access to experience figures for the test projects already realised.
Calculation example - part 7: Skill factor
For the relevant organisation, a skill factor of 1.2 applies.
Environment factor
The number of test hours required per test point is influenced not only by the skill factor, but also by the environment factor. A number of environment variables are used to calculate the latter. The environment variables are described below, including the associated weights. Again, only one of the available values may be selected. If too little information is available to classify a certain variable, it must be given the nominal value.
Test tools
The test tools factor involves the level to which the primary test activities are supported by automated test tools. Test tools can contribute to executing part of the test activities automatically and therefore faster. Their availability alone does not guarantee this, however; what matters is their effective use.
Weight | Description
---|---
1 | The test uses support tools for test specification, and a tool is used for record & playback.
2 | Test execution uses support tools for test specification, or a tool with record & playback options is used.
4 | No test tools are available.
Previous test
For this factor, the quality of the test executed earlier is important: when estimating an acceptance test this is the system test; when estimating a system test, the development test. The quality of the previous test co-determines how much functionality may be tested at a more limited level, as well as the lead time of the test execution. The higher the quality of the previous test, the fewer progress-hindering defects will occur.
Weight | Description
---|---
2 | A test plan is available for the previous test, and the test team also has insight into the concrete test cases and test results (test coverage).
4 | A test plan is available for the previous test.
8 | No test plan is available for the previous test.
Test basis
The test basis is awarded a factor representing the quality of the (system) documentation on which the test for execution must be based. The quality of the test basis has an impact in particular on the required time for the Preparation and Specification phases.
Weight | Description
---|---
3 | Standards and templates are used to create the system documentation. The documentation is also subject to inspections.
6 | Standards and templates are used to create the system documentation.
12 | No standards and templates are used to create the system documentation.
Development environment
The environment in which the information system is realised. Of particular interest here is to what extent the development environment prevents errors and/or enforces certain things. If certain errors can no longer be made, clearly they do not require testing.
Weight | Description
---|---
2 | The development environment contains a large number of facilities that prevent errors being made, for example by executing semantic and syntactic checks and by taking over parameters.
4 | The development environment contains a limited number of facilities that prevent errors being made, for example by executing a syntactic check and by taking over parameters.
8 | The development environment contains no facilities that prevent errors being made.
Test environment
The extent to which the physical test environment in which the test is executed has proven itself. If a frequently used test environment is employed, fewer disturbances and defects will occur during the Execution phase.
Weight | Description
---|---
1 | The environment has already been used several times to execute a test.
2 | A new environment has been set up for the test in question, but the organisation has ample experience with similar environments.
4 | A new environment has been set up for the test in question that can be characterised as experimental for the organisation.
Testware
The level to which existing testware can be used during the test to be executed. The availability of effective testware has a particular impact on the time required for the Specification phase.
Weight | Description
---|---
1 | A usable general central starting situation (tables etc.) is available, as well as specified test cases for the test to be executed.
2 | A usable general central starting situation (tables etc.) is available.
4 | No usable testware is available.
Calculation method
The environment factor (E) is determined by establishing the sum of the values of the environment variables (test tools, previous test, test basis, development environment, test environment, and testware) and dividing it by 21 (the nominal value). The environment factor E can be established for the total system once, but also per sub-system if necessary.
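A minimal sketch of this calculation (the parameter names are assumptions):

```python
def environment_factor(test_tools: int, previous_test: int, test_basis: int,
                       development_environment: int, test_environment: int,
                       testware: int) -> float:
    """E = sum of the six environment variable ratings divided by 21."""
    total = (test_tools + previous_test + test_basis +
             development_environment + test_environment + testware)
    return total / 21

# In the running example the ratings sum to 20, so E = 20 / 21 = 0.95.
```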
Calculation example - part 8: Environment factor (E)
The various environment variables were given the scores below:

[table with the score per environment variable not recovered; the scores sum to 20]

E = 20/21 = 0.95
Formula for primary test hours
The number of primary test hours is obtained by multiplying the number of test points by the skill and environment factors:
PT = TP * S * E
PT = the total number of primary test hours
TP = the number of test points of the total system
S = skill factor
E = environment factor
Calculation example - part 9: Calculation of primary test hours (PT)
PT = 48 * 1.2 * 0.95 = 54.72 (55 hours)
The total number of test hours
Since every test process involves secondary activities from the Control phase and the Setting up and maintaining infrastructure phase, a supplement must be added to the primary test hours for this. This will eventually result in the total number of test hours. The number of supplemental hours is calculated as a percentage of the primary test hours.
The supplemental percentage is often determined by a test manager on the basis of experience or using historical data. Some organisations use a fixed percentage. The percentage is nearly always in the range of 5 to 20%.
If no experience, historical data or fixed percentages are available, a supplemental percentage can be estimated in the following way. A standard (nominal) supplemental percentage of 12% is used as the starting point. We must then look at factors that may increase or reduce the percentage.
Examples of such factors are:
- Team size
- Management tools
- Permanent test organisation.
These factors are explained below. Since test projects vary greatly, we have not used seemingly precise absolute percentages to determine the impact of these factors, but have chosen only to indicate whether each factor increases or reduces the percentage.
Team size
The team size represents the number of members in the test team (including the test manager and a test administrator, if any). A big team usually results in greater overhead and therefore a higher supplemental percentage. However, a small test team results in a reduced percentage:
- Reduction
Test team consists of a maximum of 4 persons.
- Neutral
Test team consists of 5 - 9 persons.
- Increase
Test team consists of at least 10 persons.
Management tools
For management tools, it is considered to what extent automated tools are used during the test activities for Control and Setting up and maintaining infrastructure. Examples of these tools are an automated:
- planning system
- progress monitoring system
- defect administration system
- test ware management system.
If little use is made of automated tools, certain activities will have to be done manually. This increases the supplement percentage. If intensive use is made of automated tools, this will reduce the percentage:
- Reduction
At least 3 automated tools are used.
- Neutral
1 - 2 automated tools are used.
- Increase
No automated tools are used.
Permanent test organisation
There are many kinds of permanent test organisation (section 8.3). If an organisation has such a permanent test organisation, a test process that uses its services often realises lead-time reduction, cost savings and/or quality improvement.
- Reduction
Test team uses the services of a permanent test organisation.
- Neutral
Test team does not use the services of a permanent test organisation.
Calculation example - part 10: Determining the supplement for Control and Setting up and maintaining infrastructure (C and S&MI)
Historical data show the supplement percentage for such test projects to fluctuate around 15%. The test manager decides to use this percentage.
Supplement percentage C and S&MI = 15%
Calculation method
The supplemental percentage is used to calculate the supplement (in hours) on the basis of the number of primary test hours. The total number of test hours is then obtained by adding the supplement calculated for Control and Setting up and maintaining infrastructure to the total number of primary test hours.
Calculation example - part 11: Calculation of the total number of test hours
Primary test hours = 55
Supplement C and S&MI = 15% of 55 = 8 hours (rounded)
Total number of hours = 55 + 8 = 63
The figure below shows the TPA calculation example as a whole.
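To tie the steps together, the hedged sketch below strings the formulas from the previous sections into a single estimate and reproduces the numbers of the running calculation example; the function name and the rounding are assumptions.

```python
def tpa_estimate(test_points: float, skill_factor: float,
                 environment_factor: float, supplement_pct: float):
    """Return (primary test hours, total test hours)."""
    primary = test_points * skill_factor * environment_factor   # PT = TP * S * E
    supplement = primary * supplement_pct / 100                  # Control and S&MI hours
    return primary, primary + supplement

primary, total = tpa_estimate(48, 1.2, 0.95, 15)
print(round(primary), round(total))  # 55 and 63, as in the calculation example
```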
Distribution over the phases
When using TMAP, the test process is split up into seven phases, and many clients will be interested in the estimate per phase in addition to the estimate for the entire test process.
TPA gives an estimate for the entire test process, with the exception of test plan creation (Planning phase).
In principle, the estimate for the phases Control and Setting up and maintaining infrastructure is the number of hours calculated from the primary test hours using the supplement percentage (the supplement hours). These supplement hours must be divided between the two phases.
The primary test hours are divided over the other phases (Preparation, Specification, Execution, and Completion). The distribution of the primary test hours over the phases may naturally vary per organisation, and even within one organisation. A distribution applicable to the organisation can be obtained by analysing completed test projects. To make such an analysis, one must have access to experience figures for the test projects already realised.
Distribution of primary test hours
Practical experience with test point analysis in combination with TMap yields the following distribution of the test effort over the various phases:

[table with the distribution percentages per phase not recovered]

Please refer to this wiki for other distributions based on practical experience.
TPA at an early stage
Often, a project estimate for testing must be made at an early stage. In this case, it is not possible to establish factors like complexity, impact and so on, because no detailed functional specifications are available. However, there are approaches that can often be used to perform a rough test point analysis. By using one of the approaches below, the total number of (gross) function points can be estimated:
- On the basis of very rough specifications, perform a so-called rough function point analysis.
- Determine the number of function points by determining the number of TOSMs.
One function is then defined for the purpose of a rough test point analysis. This function has the size of the total number of defined (gross) function points. In principle, all function-dependent factors (user importance, intensity of use, complexity, impact and uniformity) are given the neutral value, so that Df = 1. A test point analysis can then be made as described in the previous sections. Usually assumptions will have to be made when determining the environment factor. When presenting the test estimate, it is important to describe these assumptions clearly.
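A sketch of such a rough estimate under the stated assumptions (the whole system treated as one function with all function-dependent factors neutral, so that Df = 1; the function name is illustrative):

```python
def rough_tpa(total_fp: float, qd: float, qs: float,
              skill_factor: float, environment_factor: float,
              supplement_pct: float) -> float:
    """Rough TPA at an early stage: one function covering all gross function points."""
    dynamic_tp = total_fp * 1.0 * qd                    # TPf = FPf * Df * Qd with Df = 1
    tp = dynamic_tp + (max(total_fp, 500) * qs) / 500   # add static test points
    primary = tp * skill_factor * environment_factor    # PT = TP * S * E
    return primary * (1 + supplement_pct / 100)         # add Control and S&MI supplement
```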