The first documented approach to experience-based testing was error guessing [Myers 1979]. Testers who apply it seem to have a knack for "smelling out" errors and faults. Often subconsciously, they probe for certain probable types of faults and create test cases to try to expose them.
The value of error guessing lies in the unexpected: tests made up by guessing would otherwise not be considered. Based on experience, the tester searches for fault-sensitive spots in the system and devises suitable test cases for them.
Experience here is a broad concept: it could be the professional tester who 'smells' the problems in certain complex screen processes, but it could also be the user or administrator who knows the exceptional situations from practice and wishes to test whether the new or amended system is dealing with them adequately.
The basic way of working is to think of possible faults and error-prone situations (for example by relying on past experience) and to create and execute tests for them. The tests and results are mostly not documented. Therefore, we do not favor this approach, especially since exploratory testing is a much better alternative.
Together with exploratory testing, error guessing is an outlier among test design techniques: neither technique is based on any of the described basic techniques, and therefore neither provides any specifiable coverage.
This very informal technique leaves the tester free to design the test cases in advance or to create them on the spot during test execution. Documenting the test cases is optional. When they are not documented, a point of focus is the reproducibility of the test: the tester often cannot quite remember under which circumstances a fault occurred. As a result, a developer may be unable to investigate an anomaly, the tester may be unable to retest a fix, and the test cannot be added to a regression test set. A possible countermeasure is taking notes (a 'test log') during the test. Obviously, faults found with the test are documented; in those cases, great attention should be paid to the circumstances that led to the fault, so that it is reproducible.
The main downside of applying error guessing is the lack of documentation (as Myers states in his book, "error guessing is largely an intuitive and ad hoc process"). Therefore, tests are not reproducible: a developer may be unable to investigate an anomaly, the tester may be unable to retest a fix, and the test cannot be added to a regression test set.
An aid for reproducing a fault is activating logging during the test, so that the tester's actions are documented. A tool for automated test execution can be used for this.
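Such a test log can be as simple as a timestamped list of actions. The sketch below is a minimal, illustrative implementation; the `TestLog` class and its methods are invented for this example and are not part of any standard tool:

```python
import datetime


class TestLog:
    """Minimal test log: records each tester action with a timestamp,
    so the circumstances that led to a fault can be reconstructed later."""

    def __init__(self):
        self.entries = []

    def record(self, action, detail=""):
        # Store a timestamp with every action to support reproduction.
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        self.entries.append((stamp, action, detail))

    def dump(self):
        # Render the log as plain text, e.g. to attach to an anomaly report.
        return "\n".join(f"{t}  {a}  {d}".rstrip() for t, a, d in self.entries)


# Usage during an error-guessing session (hypothetical actions):
log = TestLog()
log.record("open screen", "customer entry")
log.record("input", "name field = 250 characters")
log.record("observed", "screen freezes")
print(log.dump())
```

Even such a bare-bones log turns an otherwise irreproducible session into something a developer can replay step by step.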
The considerable freedom of the technique makes the area of application very broad. Error guessing can be applied to the testing of every quality characteristic and to every form of test basis.
Error guessing does not apply a structured approach to testing such as the test design techniques, nor does it give any certainty about the coverage of functionality or risk. Every known way to remedy this quickly results in moving to a checklist-based approach or to exploratory testing.
In more detail
Error guessing is sometimes confused with exploratory testing (see "Exploratory Testing"). The table below sums up the differences:
Still, error guessing may be an efficient approach in specific situations, especially when a system is known to be of very low quality. In that case, an experienced tester does not have to prepare any tests in order to reveal so many anomalies that the developers will quickly ask them to stop testing so they can first improve the quality. However, if a system is of good quality, error guessing is a poor approach: the tester will try all sorts of tests but will not find any anomalies.
The stakeholders will start doubting the quality of the testing, and rightfully so. Besides the fact that no anomalies are found, the tester (with this unstructured, ad-hoc and undocumented approach) cannot provide any insight into the quality and risks of the IT system, so the stakeholders will not get the information they need to establish their confidence that the pursued business value can be achieved with this system.
In more detail
In practice, error guessing is often cited as the applied test technique in the absence of a better name: 'It is not a common test design technique, therefore it is error guessing.' In particular, the testing of business processes by users, or the testing of requirements, is often referred to as error guessing. However, the basic technique of "checklist" is used there, whereas with error guessing no specific basic technique is used.
The fact that tests are executed that otherwise would not be considered makes error guessing a valuable addition to the other test design techniques. However, since error guessing guarantees no coverage whatsoever, it is not a replacement.
Preferably, error guessing takes place later in the total test process, when most normal and simple faults have already been removed with the regular techniques. Error guessing can then focus on testing the real exceptions and difficult situations. The test strategy should make a fixed amount of time (a time box) available for this activity.
When crowd testing is applied, there is often no control over the approach, techniques and tools the testers use. In practice, these testers often apply only error guessing, trying to find low-hanging-fruit anomalies. To make crowd testing more effective, insight into and advice about the approaches the testers use is of vital importance.
Points of focus in the steps
The steps can be performed both during the specification phase and during the test execution. The 'tester' usually does not document the results of the steps, but if great value is attached to showing evidence, or to the transferability and reusability of the test, then this should be done.
1 - Identifying test situations
Prior to test execution, the 'tester' identifies the weak points on which the test should focus. These are often mistakes in the thought processes of others and things that have been forgotten. These aspects form the basis of the test cases to be executed. Examples are:
- Exceptional situations
- rare situations in the system operation, screen processing or business and other processes
- Fault handling
- forcing a fault situation during the handling of another fault situation, interrupting a process unexpectedly, etc.
- Non-permitted input
- negative amounts, zeros, excessive values, too-long names, empty (mandatory) fields, etc. (only useful if no syntactic test is carried out on this part)
- Specific combinations, for example in the area of:
- Data: an as-yet untried combination of input values
- Sequence of transactions: e.g. "change – cancel change – change again – cancel – etc." a number of times in succession
- Claiming too much of the system resources (memory, disk space, network)
- Complex parts of the system
- Often-changed parts of the system
- Parts of the system that often contained faults in the past (processes/functions)
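To make the "non-permitted input" category concrete, the sketch below feeds a set of guessed edge-case values into a validation function. Both `validate_amount` and its rules are invented for illustration; the point is the tester's list of suspect inputs, not the validator itself:

```python
def validate_amount(value):
    """Hypothetical input validation for a payment-amount field."""
    if not isinstance(value, (int, float)):
        raise TypeError("amount must be numeric")
    if value <= 0:
        raise ValueError("amount must be positive")
    if value > 1_000_000:
        raise ValueError("amount exceeds maximum")
    return value


# Error-guessing inputs: values the tester suspects the developers forgot.
guessed_inputs = [-100, 0, 10**9, "abc", None, 0.004]

for candidate in guessed_inputs:
    try:
        validate_amount(candidate)
        print(f"{candidate!r}: accepted")
    except (TypeError, ValueError) as exc:
        print(f"{candidate!r}: rejected ({exc})")
```

Note that the fraction `0.004` slips through: it is numeric, positive and below the maximum, yet smaller than the smallest payable unit. Spotting exactly this kind of forgotten case is what error guessing is about.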
2 - Creating logical test cases
This step normally takes place only with more complex test cases. The tester may consider creating a logical test case that will cover the situation to be tested.
3 - Creating physical test cases
This step normally only takes place with more complex test cases. The tester may consider creating a physical test case for the logical test case.
4 - Establishing the starting point
During this activity, it may emerge that it is necessary to build a particular starting point for purposes of the test.