Exploratory Testing (ET)


Exploratory testing was first described as a concept and an approach a number of years ago by James Bach. What is exploratory testing?


According to James Bach [Bach, 2002]:
Exploratory testing is the simultaneous learning, designing and executing of tests; in other words, every form of testing in which the tester designs his tests during test execution and uses the information obtained to design new and improved test cases.


In more detail

When a tester executes a test script, from time to time he comes up against 'suspicious system behaviour', i.e. the system looks different or responds differently from what he expects. This expectation does not have to be written into the test script as such, and it does not even need to be justified by particular system documents. When the tester examines this suspicious behaviour more closely, he is engaged in exploratory testing. Most testers, however, will exclaim that this is obvious, and they will not themselves have the impression that they are applying exploratory testing. It is not so much a question of whether someone tests in an exploratory way, but of the degree to which he does so. This makes it difficult to say precisely what should and should not be called 'exploratory testing'.

As with error guessing (see "Error Guessing"), exploratory testing is rather difficult to line up with the other test design techniques. It is not based on any of the described basic techniques, it leaves the choice of basic techniques to be applied free, and it provides no guaranteed coverage. It is even debatable whether it really is a test design technique at all. Mainly for practical reasons, in this book the decision has been made to include exploratory testing among them.

In exploratory testing, the tester is always exploring a piece of the system under test, thinking about what should or could be tested (test design) and subsequently carrying it out (test execution). In doing so, the tester gathers new knowledge of the system, considers what to test next, carries this out, and so on. The design and subsequent execution of the tests thus take place in close succession. Documenting the test cases is not necessary. Test design therefore does indeed take place in exploratory testing, in contrast to ad hoc or unstructured testing. The tester employs the most applicable basic technique, depending on the features to be tested and the information available on them. This places high demands on the tester, since he must be able to apply a large collection of basic techniques without explicitly formulating each step of those techniques.


In more detail

In practice, exploratory testing is sometimes confused with "Error Guessing". The differences between them are set out below.
  • Basic techniques: error guessing does not employ the basic techniques; exploratory testing employs the most suitable basic technique, depending on the situation.
  • Suitability: error guessing is suitable for testers, users, administrators, etc.; exploratory testing is suitable for experienced testers with knowledge of the basic techniques.
  • Test design: with error guessing, the test cases are designed in the Specification phase or during test execution; with exploratory testing, the test cases are designed during test execution.
  • Focus: error guessing focuses on the exceptions and difficult situations; exploratory testing focuses on the aspect to be tested as a whole (screen, function).
  • Systematics: error guessing is not systematic, with no certainty at all concerning coverage; exploratory testing is somewhat systematic.



There is a risk attached to using forms of test basis other than the formal system documentation: the tester may come to trust these other sources of information so much that testing against the system documentation fades into the background. An undesirable end result is that the system and its documentation drift out of sync. If the work is based on contracts, this can lead to contractual difficulties. If the system is correct and the documentation is not, it can cause maintenance or administrative problems. For that reason, where a defect is concerned, the relationship between the software and the formal test basis should always be taken into consideration.

By contrast, it is possible that (complex) functionality described in the documentation has been implemented incorrectly in the system, so that when testing is based on sources other than the system documentation, such defects will not be found. Another undesirable result may be that, owing to a lack of clarity about the scope, the testers generate an endless stream of change requests under the guise of possible defects. Both are points of focus for the test manager.

When to apply or not to apply

Exploratory testing is often associated with testing in the absence of a formal test basis, such as a functional design. However, this is not necessarily true. It is entirely possible to apply it with a well-described system. Having said that, the technique lends itself very well to situations in which no described test basis is present. Exploratory testing puts less emphasis on a described test basis and more on other ways of assessing the adequacy of the test object, such as by familiarisation with the system in the course of the test execution.

The great freedom of the technique makes for a very wide area of application. It can be applied to the testing of every quality characteristic and with every form of test basis. There are, however, varying circumstances in which the application of exploratory testing is or is not a good idea. This is indicated below.


  • Where experienced and trusted testers with domain knowledge are available.
    • Exploratory testing leans heavily on the knowledge, experience and intuition of the individual tester. It can only be applied responsibly with good and experienced testers in whom the organisation has complete faith, without requiring proof of the coverage and depth of the testing
  • Where testing as cheaply as possible is by far the biggest consideration.
    • Since this is usually the case, exploratory testing would appear to be almost always applicable. This does not hold true, however. If testers are not required to document what they are doing, testing is cheaper than when they are required to do so, but the technique can only be employed on cost grounds under the precondition above: good and experienced testers who are fully trusted. The downside of this choice is a greater risk of insufficient test quality, a longer lead-time of testing on the critical path of the project, and lower transferability and reusability of the tests (possibly leading to higher test costs in the long term). These preconditions and downsides are explained in the other points in this section
  • Where there is an insufficiently documented test basis.
    • Through exploration, the tester automatically acquires a perspective on the test object. Since the emphasis is placed on the inventiveness and intuition of the tester, the technique is more suitable where there is little system documentation at the start of testing or where the documentation strongly deviates (read: is out of date) from the required operation of the system. This also makes the technique very suitable for 'documentation-light' and agile methods such as Extreme Programming, DSDM and (to a lesser degree) the Rational Unified Process. Points of focus here are that the lack of system documentation is a risk that the use of exploratory testing cannot dispel, and that the testers should possess a lot of system and domain knowledge, since there is no frame of reference in the form of system documentation
  • As an addition to testing according to more formal techniques, to encourage creative testing.
    • Defects often exist in unexpected places in the software, and they also tend to cluster together. Formal test design techniques are aimed at finding certain types of defects; the use of each technique can be seen as a filter on the software that catches those types. The question is then how many defects of other kinds remain and how severe they are. Exploratory testing can provide additional insight here: owing to its informal nature, it is less focused on particular types of defects. When a defect is found by exploratory testing, the tester should therefore consider carefully whether it is unique and isolated, occurs in various other places, or indicates a cluster. This makes exploratory testing a good addition to the other, more formal techniques. The figure below demonstrates this.
[Figure: Exploratory testing as an addition to testing according to more formal techniques]
  • Where there is no time available to prepare the tests.
    • Although this is not an ideal situation – and one in which the risks should certainly be highlighted from the testing side – it is nevertheless a common occurrence. Exploratory testing is then a means of achieving the maximum amount of testing in the short time that remains and of obtaining a general insight into the product quality. Suppose the tester has 8 hours in which to test a particular group of functions or screens. Which is then more productive: spending all 8 hours going through the functions in all kinds of exploratory ways, using checklists and zooming in on anything that looks suspicious, or spending half the time specifying a number of test cases in a reusable and verifiable way, before carefully executing them during the remainder of the time?

Other circumstances for which the technique lends itself are when the testers want to learn quickly how the system works, to assess the quality of someone else's testing with a short test, to gain a first impression of the quality of the system, or to examine a specific defect or possible fault source.

This is not to say that, in the event of any of the above situations, exploratory testing is straightaway the best solution. There are various situations in which its application is less suitable.

Do not apply:

  • When higher requirements are set as regards the demonstrability/reporting of the testing, for example by imposed standards.
    • The testing process is less manageable, less measurable and less auditable, because no test cases are defined and low requirements are set as regards the logging of tests. It is not known what the tester will do and how he will do it. It is almost impossible to check in retrospect what has been done.
  • With critical functionality, failure of which can cause severe damage.
    • Because little is documented, the technique leans heavily on trust in the individual tester. The potential damage when this trust turns out to be unfounded may be so great for certain systems or system parts that the organisation cannot or will not accept this risk.
  • For inexperienced testers
    • As they will lack the knowledge and experience necessary for creating good test cases without the explicit support of a technique.
  • If the test cases are required to be executed by a tester other than the creator or by a test tool, or if the test cases are required to be reused, e.g. in future maintenance.
  • If there is no direct feedback from test execution, so that the test results are not directly available, e.g. in the case of test runs of batch software at night.
  • In tests that require a lot of preparation, such as the testing of complicated calculations, performance tests, the testing of security or of usability.
    • These preparations, which may involve test cases, starting points, reference tables as well as test environments, can best take place well in advance of test execution in order to avoid a lot of time being wasted during test execution.
  • When the testing has to be on the critical path of the project as briefly as possible.
    • The tester starts exploratory testing at a late stage, after the test object has been delivered, carrying out both the design and the execution of the tests during the Execution phase. This usually lies on the critical path of the project and requires more lead-time than when the tests have already been designed before delivery of the test object, in the form of test scripts or checklists. It is important here that the test scripts can actually be prepared, i.e. that there is a sufficiently documented test basis. It should also be said that maintaining test scripts during test execution usually costs extra time, which somewhat reduces the time advantage of test scripts with respect to the critical path.

Whether the technique can be usefully applied therefore depends on various factors. It is up to the test manager to judge this. In view of the high demands placed on the tester, exploratory testing should be applied in test teams in which professional testers participate. In practice, this concerns the system and acceptance tests.

Points of focus in the steps

The generic steps (see "Introduction") are in principle also applicable to exploratory testing. The substance of the steps, however, depends on which basic techniques the tester selects and applies during the test execution. For this reason, the steps are not further substantiated in this section.

In addition, the generic steps are normally only implicitly applicable and are not documented. When explicit requirements are set for evidence or transferability, it can be agreed that the tester will explicitly document the test cases. This takes place at the same time as the test execution. The variant of session-based test management is also an option.

In more detail

Session-based test management

The unmanageability of exploratory testing is often cited as a big disadvantage. To counter this, Jonathan Bach introduced session-based test management as an approach [Bach, 2000]. In this approach, the (part of the) system under test is divided into a number of test charters. A test charter can be anything: a function, screen, menu, user transaction, a quality characteristic such as user-friendliness or performance, or, very generally, an area of possible instability, such as memory usage. A test charter is thus something different from a test case or a test script; in testing a test charter, various test cases are often executed. The criteria set for a test charter are:

  • It sets a test goal
  • It proposes a unit of work, which will take roughly between half an hour and four hours
  • It is independently testable, i.e. a tester can start or finish with any test charter.

The test charters are tested in testing sessions. A session is a period of time, usually between half an hour and four hours, in which the tester can test one or more test charters without interruption. During testing, the tester documents his/her actions (along general lines) in the form of notes on the session paper. This renders the tests reusable to a certain degree, since the retesting of a test charter can take place based on the notes of the previous session(s). It is then up to the tester to repeat the previous session as much as possible, or to try other variations of it.

In contrast to the test charters that can be tested several times, the test sessions are one-offs: you start and end a session at a certain point. If you have to test a test charter again at a later stage, this takes place in a new session. The advantage of sessions is that they are restricted in time and thus more manageable. After the session has ended, the test manager runs through the session with the tester in a debriefing to determine the priorities of the found defects, share points of learning and estimate (remaining) risks.

During the session, the tester can create new test charters, which are then added to the list of test charters. By administering sessions and test charters, the progress of the test process can be monitored and the outside world can obtain insight into what has happened and what still has to happen in the testing.
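As a rough illustration, the bookkeeping behind session-based test management – charters, sessions, session notes, and charters discovered along the way – can be sketched in a few lines of Python. The class and field names below are illustrative assumptions, not part of [Bach, 2000]; only the charter criteria taken from the text above (a test goal, a unit of work of roughly half an hour to four hours) are from the source.

```python
from dataclasses import dataclass, field

# Assumed bounds from the charter criteria: roughly 0.5 to 4 hours of work.
MIN_MINUTES, MAX_MINUTES = 30, 240


@dataclass
class TestCharter:
    goal: str              # the test goal the charter sets
    estimate_minutes: int  # the proposed unit of work

    def is_valid(self) -> bool:
        # Criterion: a unit of work of roughly half an hour to four hours.
        return MIN_MINUTES <= self.estimate_minutes <= MAX_MINUTES


@dataclass
class TestSession:
    charters: list                                # charters tested in this session
    notes: list = field(default_factory=list)     # the 'session paper'
    new_charters: list = field(default_factory=list)

    def log(self, note: str) -> None:
        # Notes along general lines; a retest can be based on these later.
        self.notes.append(note)

    def add_charter(self, charter: TestCharter) -> None:
        # Charters created during the session join the overall list afterwards.
        self.new_charters.append(charter)


# One charter in the backlog; a session tests it and discovers a new charter.
backlog = [TestCharter("Explore memory usage under load", 120)]
session = TestSession(charters=[backlog[0]])
session.log("Memory grows steadily while idle - suspicious")
session.add_charter(TestCharter("Explore idle memory growth", 60))
backlog.extend(session.new_charters)
```

Administering the backlog and the completed sessions in this way is what makes progress visible to the outside world: the debriefing can walk through `session.notes`, and the ratio of tested to open charters indicates how much testing remains.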


An overview of all featured Test Design Techniques can be found here.