A quality characteristic is an inherent characteristic of a product that says something about an aspect of the quality of that product. The use of a set of quality characteristics is recommended as a way to check the completeness of your test. It allows you to check that, out of all the aspects or characteristics of a system or package under test, a careful decision has been made about whether or not to test each of them.
This is a list of quality characteristics. It is a general list for software development; in specific circumstances other quality characteristics can be important, and the list can be expanded to fit your situation.
Connectivity:
The ease with which an interface can be created with another information system or within the information system, and can be changed.
Connectivity is tested statically by assessing the relevant measures (such as standardisation) with the aid of a checklist. The testing of connectivity therefore concerns the evaluation of the ease with which a (new) interface can be set up or changed, and not the testing of whether an interface operates correctly. The latter is normally part of the testing of functionality.
Continuity:
The certainty that the information system will continue without disruption, i.e. that it can be resumed within a reasonable time, even after a serious breakdown.
The continuity quality characteristic can be split into characteristics that can be applied in sequence, in the event of increasing disruption of the information system:
- Reliability: the degree to which the information system remains free of breakdowns
- Robustness: the degree to which the information system can simply proceed after the breakdown has been rectified
- Recoverability: the ease and speed with which the information system can be resumed following a breakdown
- Degradation factor: the ease with which the core of the information system can proceed after a part has shut down
- Fail-over possibilities: the ease with which (a part of) the information system can be continued at another location.
Continuity can be tested statically by assessing the existence and setup of measures in the context of continuity on the basis of a checklist. Dynamic implicit testing is possible through the collecting of statistics during the execution of other tests. The simulation of long-term system usage (reliability) or the simulation of breakdown (robustness, recoverability, degradation and fail-over) are dynamic explicit tests.
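The dynamic explicit tests for robustness and recoverability can be sketched as a breakdown simulation. In the sketch below, `flaky_service` and `run_with_recovery` are hypothetical names, and the "breakdown" is simply a raised exception:

```python
def flaky_service(remaining_failures: list) -> str:
    """Hypothetical component that breaks down while failures remain queued."""
    if remaining_failures:
        remaining_failures.pop()
        raise ConnectionError("simulated breakdown")
    return "processed"

def run_with_recovery(max_attempts: int = 3) -> str:
    """Recoverability probe: can processing be resumed after simulated breakdowns?"""
    failures = ["fault", "fault"]  # two consecutive simulated breakdowns
    for _ in range(max_attempts):
        try:
            return flaky_service(failures)
        except ConnectionError:
            continue  # breakdown rectified; try to resume
    raise RuntimeError("system did not recover within the allowed attempts")

result = run_with_recovery()
```

A real continuity test would of course simulate breakdowns at the infrastructure level (killing processes, disconnecting networks) rather than raising exceptions, but the pass/fail criterion is the same: processing resumes.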
Verifiability:
The ease with which the accuracy and completeness of the information can be verified (over time).
Common means employed in this connection are checksums, crosschecks and audit trails. Verifiability can be statically tested, focusing on the setup of the relevant measures with the aid of a checklist, and can be dynamically explicitly tested focusing on the implementation of the relevant measure in the system.
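As an illustration of such measures, the sketch below logs a checksum before and after each mutation, so that the state of a record can later be verified against the audit trail. All names (`record_checksum`, `apply_mutation`, the account record) are hypothetical:

```python
import hashlib
import json

def record_checksum(record: dict) -> str:
    """Checksum over a canonical serialisation of the record."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

audit_trail = []  # append-only log of mutations

def apply_mutation(record: dict, field: str, new_value) -> dict:
    """Apply a mutation and log before/after checksums for later verification."""
    before = record_checksum(record)
    updated = {**record, field: new_value}
    audit_trail.append({
        "field": field,
        "before": before,
        "after": record_checksum(updated),
    })
    return updated

account = {"id": 42, "balance": 100}
account = apply_mutation(account, "balance", 250)

# Verification: recomputing the checksum must match the logged value.
assert record_checksum(account) == audit_trail[-1]["after"]
```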
Usability:
The degree to which the information system is tailored to the organisation and the profile of the end users for whom it is intended, as well as the degree to which the information system contributes to the achievement of the company goals.
A usable information system increases the efficiency of the business processes. Will a new system function in practice, or not? Only the users’ organisation can answer that question. During (user) acceptance tests, this aspect is usually (implicitly) included. If the aspect of usability is explicitly recognised in the test strategy, a test type can be organised for it: the business simulation. During a business simulation, a random group of potential users tests the usability aspects of the product in an environment that approximates as far as possible the “real-life” environment in which they plan to use the system: the simulated production environment. The test takes place based on a number of practical exercises or test scripts. In practice, the testing of usability is often combined with the testing of user-friendliness within the test type of usability.
Efficiency:
The relationship between the performance level of the system (expressed in the transaction volume and the total speed) and the volume of resources (CPU cycles, I/O time, memory and network usage, etc.) used for these.
Efficiency is tested with the aid of tools that measure the resource usage and/or dynamically implicitly by the accumulation of statistics (by those same tools) during the execution of functionality tests. This aspect is often particularly evident with embedded systems.
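The statistics-gathering approach can be sketched in miniature: Python's standard tracemalloc and time modules stand in for the measurement tools, and `process_transactions` is a hypothetical workload, not part of any real system:

```python
import time
import tracemalloc

def process_transactions(n: int) -> int:
    """Stand-in for the system under test (hypothetical workload)."""
    return sum(i * i for i in range(n))

# Collect resource statistics while a functionality test runs,
# mirroring how efficiency is often measured implicitly.
tracemalloc.start()
start = time.perf_counter()
result = process_transactions(100_000)
elapsed = time.perf_counter() - start
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"elapsed: {elapsed:.4f}s, peak traced memory: {peak} bytes")
```

Dedicated profiling and monitoring tools report the same kind of figures (CPU time, memory, I/O) at much finer granularity.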
Flexibility:
The degree to which the user is able to introduce enhancements or variations on the information system without amending the software.
In other words, the degree to which the system can be amended by the user organisation, without being dependent on the IT department for maintenance. Flexibility is statically tested by assessing the relevant measures with the aid of a checklist. Testing can take place during the (user) acceptance test, for example by having the user create a new mortgage variant (in the case of mortgages) or change the way the commission is calculated (in the case of credit cards), in both cases by changing parameters. It is often tested in this way first, before the change is actually implemented in production.
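The parameter-driven setup behind such a test can be sketched as follows. The commission rules and variant names are invented, but they illustrate how a user could add a variant without any code change:

```python
# Commission rules live in data, not code: the user organisation can
# introduce a new variant by changing parameters only.
commission_params = {
    "standard": {"rate": 0.02, "minimum": 5.00},
    "premium":  {"rate": 0.015, "minimum": 10.00},
}

def commission(amount: float, variant: str) -> float:
    """Commission is a percentage of the amount, with a floor per variant."""
    p = commission_params[variant]
    return max(amount * p["rate"], p["minimum"])

# A flexibility test: add a new variant via parameters only,
# then verify the system behaves according to the new rules.
commission_params["promo"] = {"rate": 0.01, "minimum": 0.00}
assert commission(1000.0, "promo") == 10.0
```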
Functionality:
The degree of certainty that the system processes the information accurately and completely.
The quality characteristic of functionality can be split into the characteristics of accuracy and completeness:
- Accuracy: the degree to which the system correctly processes the supplied input and mutations according to the specifications into consistent data collections and output
- Completeness: the certainty that all of the input and mutations are being processed by the system.
With testing, meeting the specified functionality is often the most important criterion for acceptance of the information system. Using various techniques, the functional operation can be dynamically explicitly tested.
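A minimal sketch of a dynamic test covering both sub-characteristics, assuming a hypothetical `process` function whose (invented) specification is to double each amount:

```python
def process(inputs):
    """Hypothetical system under test: doubles each amount per its spec."""
    return [{"id": r["id"], "amount": r["amount"] * 2} for r in inputs]

inputs = [
    {"id": 1, "amount": 10},
    {"id": 2, "amount": 25},
    {"id": 3, "amount": 0},
]
outputs = process(inputs)

# Completeness: every supplied input must appear in the output.
assert {r["id"] for r in outputs} == {r["id"] for r in inputs}

# Accuracy: each record is processed according to the specification.
for inp, out in zip(inputs, outputs):
    assert out["amount"] == inp["amount"] * 2
```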
(Suitability of) Infrastructure:
The appropriateness of the hardware, the network, the system software, the DBMS and the (technical) architecture in a general sense to the relevant application and the degree to which these infrastructure elements interconnect.
The testing of this aspect can be done in various ways. The tester’s expertise as related to the infrastructural elements concerned is very important here. More on infrastructure testing can be found in InfraTesting with TMap.
Maintainability:
The ease with which the information system can be adapted to new requirements of the user, to the changing external environment, or in order to correct faults.
Insight into the maintainability is obtained, for example, by registering the average effort (in the number of hours) required to solve a fault or by registering the average duration of repair (Mean Time to Repair (MTTR)). Maintainability is also tested by assessing the internal quality of the information system (including associated system documentation) with the aid of a checklist. Insight into the structuredness of the software (an aspect of maintainability) is obtained by carrying out static tests, preferably supported by code analysis tools (see also section 7.2.8 “Test tools for development tests”).
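The MTTR registration mentioned above amounts to simple arithmetic over a fault log. A sketch, with invented timestamps:

```python
from datetime import datetime

# Hypothetical fault log: (reported, resolved) timestamps per incident.
repairs = [
    (datetime(2024, 1, 3, 9, 0), datetime(2024, 1, 3, 13, 0)),    # 4 hours
    (datetime(2024, 1, 10, 14, 0), datetime(2024, 1, 10, 16, 0)), # 2 hours
    (datetime(2024, 2, 1, 8, 0), datetime(2024, 2, 1, 17, 0)),    # 9 hours
]

def mean_time_to_repair(log) -> float:
    """Average repair duration in hours (MTTR)."""
    hours = [(done - start).total_seconds() / 3600 for start, done in log]
    return sum(hours) / len(hours)

print(mean_time_to_repair(repairs))  # (4 + 2 + 9) / 3 = 5.0 hours
```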
Manageability:
The ease with which the information system can be placed and maintained in an operational condition.
Manageability is primarily aimed at technical system administration. The ease of installation of the information system is part of this characteristic. It can be tested statically by assessing the existence of measures and instruments that simplify or facilitate system management. Testing of system management takes place by, for example, carrying out an installation test and by carrying out the administration procedures (such as backup and recovery) in the test environment.
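Carrying out an administration procedure such as backup and recovery in a test environment can be rehearsed in miniature. The sketch below simulates an incident and verifies that the restore procedure brings the data back (file names and contents are invented):

```python
import shutil
import tempfile
from pathlib import Path

workdir = Path(tempfile.mkdtemp())
data = workdir / "data.txt"
backup = workdir / "data.bak"

data.write_text("critical records")
shutil.copy(data, backup)        # backup procedure

data.write_text("corrupted!")    # simulated incident
shutil.copy(backup, data)        # recovery procedure

# The recovery procedure must restore the original data.
assert data.read_text() == "critical records"
```

In practice the same drill is run with the real backup tooling against a representative copy of the production data.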
Performance:
The speed with which the information system handles interactive and batch transactions. More on performance testing can be found in the building block.
Portability:
The diversity of the hardware and software platforms on which the information system can run, and the ease with which the system can be transferred from one environment to another.
Reusability:
The degree to which parts of the information system, or of the design, can be used again for the development of other applications.
If the system is to a large extent based on reusable modules, this also benefits the maintainability. Reusability is tested through assessing the information system and/or the design with the aid of a checklist.
Security:
The certainty that consultation or mutation of the data can only be performed by those persons who are authorised to do so.
Suitability:
The degree to which the manual procedures and the automated information system interconnect, and the workability of these manual procedures for the organisation.
In the testing of suitability, the aspect of timeliness is also often included. Timeliness is defined as the degree to which the information becomes available in time to take the measures for which that information was intended. Suitability is tested with the aid of the process cycle test.
Testability:
The ease and speed with which the functionality and performance level of the system (after each adjustment) can be tested.
Testability in this case concerns the total information system. The quality of the system documentation greatly influences the testability of the system; this is measured with the aid of the “testability review” checklist during the Preparation phase. A checklist can also be used to measure the testability of the information system itself. Things that (strongly) benefit testability are:
- Good system documentation
- Having an (automated) regression test and other testware
- The ease with which interim results of the system can be made visible, assessed and even manipulated
- Various test-environment aspects, such as representativeness and an adjustable system date for purposes of time travel.
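The adjustable-system-date point can be illustrated with an injectable clock, a common way to make "time travel" testable. `Clock` and `is_payment_overdue` are hypothetical names:

```python
from datetime import date
from typing import Optional

class Clock:
    """Adjustable system date, so tests can 'time travel'."""
    def __init__(self, today: Optional[date] = None):
        self._today = today

    def today(self) -> date:
        # Fall back to the real system date when no override is set.
        return self._today or date.today()

def is_payment_overdue(due_date: date, clock: Clock) -> bool:
    return clock.today() > due_date

# Jump the system date forward without waiting for real time to pass.
future = Clock(today=date(2030, 1, 1))
overdue = is_payment_overdue(date(2025, 6, 30), future)
```

Because the date is injected rather than read directly from the operating system, the same test can exercise month-end, year-end, or far-future behaviour on demand.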
User-friendliness:
The ease of operation of the system by the end users.
Often, this general definition is split into: the ease with which the end user can learn to handle the information system, and the ease with which trained users can handle the information system. It is difficult to establish an objective and workable unit of measurement for user-friendliness. However, it is often possible to give a (subjective) opinion couched in general terms concerning this aspect. User-friendliness is tested within the test type of Usability.
A set of quality characteristics for IoT testing
As stated in the introduction, for each specific situation a set of quality characteristics can be designed. For Internet of Things testing, the following quality characteristics can be relevant:
- Compatibility: The extent to which a product, system or component can exchange information with other products, systems or components.
- Confidentiality: The extent to which a product or system ensures that data is only accessible to those who are authorized.
- Efficiency: The resources used in relation to the accuracy and completeness with which users achieve goals.
- Installability: The degree of effectiveness and efficiency with which a product or system can be successfully installed and/or uninstalled in a specified environment.
- Interoperability: The extent to which two or more systems or components can exchange information and use the information exchanged.
- Reliability: The degree to which a system, product or component performs specified functions under specified conditions for a specified period of time.
- Resource utilization: The degree to which the quantity and type of resources that are used by a product or system, during the execution of its functions, meets the requirements.
- Satisfaction: The degree to which user needs are satisfied when a product or system is used in a specified context of use.
- Security: The extent to which a product or system protects information and data, so that persons or other products or systems have the level of access appropriate to their level of authorization.
- Time-behavior: The degree to which the response and processing times and throughput of a product or system, during the execution of its functions, meet the needs.
- Usability: The extent to which a product or system can be used by the users to effectively, efficiently and satisfactorily achieve specified goals in a specified context of use.