Exploratory Testing (ET)

Exploratory testing is an experience-based approach to testing and, in our opinion, the most important of the experience-based approaches. We distinguish coverage-based and experience-based testing. Others use terms like scripted testing and free-style testing, but we prefer the division based on a focus on either experience or coverage, with the advice to always combine coverage-based and experience-based testing. Of the experience-based approaches described, exploratory testing is the most versatile.

Exploratory testing

Definition

Exploratory testing is simultaneously designing and executing tests to learn about the system, using your insights from the last experiment to inform the next. In other words, it is any form of testing in which the tester designs the tests during test execution and reuses the information obtained to design new and improved tests.

Testing is about creating and executing tests, but more importantly it is about obtaining information about quality and risks, ultimately to establish confidence that an IT solution will bring the expected business value. Exploratory testing is very well suited to supporting that confidence because of its interactive nature and the possibility of involving various stakeholders.

Our flavor of exploratory testing

Exploratory testing has many flavors. We define this approach with the following characteristics:

  • Focus on confidence (risk- and value-based)
  • Structured (charter, log, debriefing)
  • Session-based & timeboxed (not too short, not too long)
  • Tandem approach (two testers, or even a mob)
  • Combines experience-based and coverage-based testing
  • Simultaneous test design, test execution and learning
  • Flexible (fit for Agile and DevOps)
  • Prepared (test ideas, testing tours)
  • Tools (heuristics, checklists, test design tooling, test recording tooling)
  • Serious Fun!!

These characteristics are reflected in the explanation below.

Exploratory testing is structured! There are three clear parts:

  1. Prepare with a charter
  2. Execute testing and keep a log
  3. Discuss results, conclusions and advice in a debriefing

Prepare with a charter

Exploratory testing is prepared by means of charters. A charter guides the testers and holds a starting set of test ideas, test data, test cases and/or testing tours.

Definition

A charter is a concise document containing the starting points for an exploratory testing session.

The test charter is a paper or electronic document that contains information to use during the exploratory testing session. The charters may be put on a backlog, for example as tasks on a Kanban board, but they may also be distributed during a designated timebox. A good charter offers direction without overspecifying test actions [Hendrickson 2013]. A charter describes various aspects, for example the test ideas, but it must not be a fully prepared scenario of test cases. Sometimes the testers will decide to create a small starting set of test cases to make sure they don't forget any important aspects.

The team can prepare a number of charters in advance and put them on the backlog, to be executed as soon as the relevant test object becomes available or when it is time for exploratory testing. The charters may be prioritized based on the chance of failure (for example, "How clear were the requirements?") and the impact if the test object fails. The charters with the highest priority are tested first.
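
As an illustration of such prioritization, the sketch below scores each charter on a 1-to-5 scale for chance of failure and impact and orders the backlog by the product of the two. The charter titles, the scale and the scores are invented for illustration only; a team can use whatever risk classification it already has.

    # Illustrative sketch: order prepared charters by risk (chance of failure x impact).
    # Titles and scores are hypothetical; any existing risk classification can be used instead.
    charters = [
        {"title": "Change delivery address", "chance_of_failure": 4, "impact": 5},
        {"title": "Profile page layout", "chance_of_failure": 2, "impact": 2},
        {"title": "Order history export", "chance_of_failure": 3, "impact": 4},
    ]

    def priority(charter):
        return charter["chance_of_failure"] * charter["impact"]

    # Highest priority first: these charters are picked up first from the backlog.
    for charter in sorted(charters, key=priority, reverse=True):
        print(f'{charter["title"]}: priority {priority(charter)}')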

When a paper charter is used, the back of the sheet is often used to record the logging and debriefing information.

Example of a charter
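
One way to capture such a charter electronically is sketched below. The scope, timebox and test ideas are invented purely for illustration; the aspects they represent are explained in more detail below.

    # A minimal, illustrative electronic charter; all content is hypothetical.
    charter = {
        "scope": "User story: customer changes the delivery address of an open order",
        "timebox_minutes": 90,  # between thirty minutes and three hours
        "testers": ["experienced tester", "product expert"],
        "test_ideas": [
            "Change the address just before the order is shipped (boundary)",
            "Use foreign postal code formats",
            "Cancel the change halfway through and then repeat it",
            "Check the confirmation e-mail and the audit log",
        ],
        "log": [],         # filled in during the session
        "debriefing": "",  # conclusions and advice, discussed with a stakeholder
    }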

While creating the charter, decide on the following aspects:

  • Determine the scope
    The scope of the exploratory test can be the user stories agreed on by the team at the start of the sprint. Sometimes a user story can be divided into sub-stories; one such sub-story may then be the scope of the exploratory test.
  • Set a timebox
    For every charter the duration of the test is specified. An exploratory test session is timeboxed: the execution of an exploratory test is limited to the agreed time. When the time is up, the exploratory test ends.
    A timebox should be at least thirty minutes (otherwise it is too short to do sensible testing) and no longer than three hours (to be able to stay focused). If during the debriefing the testers conclude that the timebox was too short they may ask for an extra charter. Together with stakeholders, the team decides if the time for an extra charter is available and if it is the most efficient way to obtain information about quality and risks. Usually a couple of "blank charters" are available, which means some slack time for testing is available in the schedule.
  • Determine the exploratory testing team
    Exploratory testing is done in pairs or in larger groups (a so-called mob, see below). Examples of reasons to involve multiple people:
    • Two people know more than one, and observe more than one
    • One person concentrates on testing, the other on logging
    • A lot of brainpower is needed for difficult problems
    • During the testing, people learn from each other, about the test object as well as about the tips and tricks of testing
    With paired testing, ideally one person is experienced in testing and the other is a product expert. An experienced tester will intuitively apply a test design technique, for example performing boundary value analysis on the spot when encountering a boundary. A product expert knows what to expect, without constantly having to read specifications (if those exist at all). But the pair could also consist of an apprentice and a master, where a (minor or even main) goal of testing will be to educate the apprentice.
    How is the work divided in the pair? One person does the test execution, the other focuses on the logging. Together they evaluate the result of a test and think about the best possible next test case. Often it is wise to practice strong-style pairing, which means that the person with the most experience or knowledge conveys their ideas to the other person, who handles the keyboard. This way the people in the team collaborate optimally and really do the testing together. (Strong-style pairing prevents a situation where the person who does not handle the keyboard is merely an observer.)
    Do you always need to work in pairs? A retest of a bug fix, or a regression test based on the log of a previous test session, may well be done by one tester alone. A basic pre-test of newly deployed software may also be done by one tester. Do keep in mind, however, the advantages of working as a team.
    When exploratory testing is done in larger groups this is generally referred to as "mob testing" [Pyhäjärvi 2019]. In DevOps, the team may choose to have a mob testing session every sprint to tackle the hardest challenges of the sprint and to learn together.
  • Test ideas
    Test ideas are an important part of the charter.

    Definition

    A test idea is any useful thought, piece of data, technique, heuristic or anything else that you write down on a charter so that during your exploratory testing session you have an abundance of possibilities to vary your testing.

    Test ideas can come up at any stage prior to test execution, for example during a refinement of user stories. Even during test execution new test ideas may arise and be added to the charter. Test ideas can be created in an unstructured way (for example in a brainstorming session) or in a structured way, using techniques such as heuristics. A test idea may also be to apply a test design technique, for example to have a starting set of test cases. But the test idea part of the charter should not be filled with a complete test scenario. Test cases are generally created during the exploratory testing session, not beforehand.

Execute testing and keep a log

During the testing, each test case is logged together with the expected outcome, the actual outcome and any observations, to keep track of what was tested, which actual results occurred, how these compare to the expectations, and which observations and anomalies (if applicable) resulted.

Definition

A test log is a record of the test steps, expected results and actual results, together with observations about the system behavior, which is registered during testing, for example during an exploratory testing session.

The test log may be captured manually, for example on a log form or in a spreadsheet. Specific logging tools may also be used, but keep in mind that the expected outcome must be logged as well. (This is not supported by basic logging tools that only capture what is happening on the GUI.)

It is very important that the testers think about the expected result before executing the test. This expected result may be described in a very detailed or in a more global way, depending on the goal of the test and the knowledge the testers already have. The expectation may even be very vague when the testers are trying to uncover the unknown unknowns, in which case they merely wonder "What happens if …?", but even then they have some expectation that helps them determine whether the actual outcome is OK or not.
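
To make this explicit, the sketch below records each log entry with its expected result written down before the step is executed, and the actual result and observations added afterwards. The structure and field names are just one possible form, not a prescribed format.

    from dataclasses import dataclass, field
    from datetime import datetime

    # One possible shape for a test log entry; the field names are illustrative.
    @dataclass
    class LogEntry:
        step: str               # what the testers did
        expected: str           # written down BEFORE executing the step
        actual: str = ""        # filled in after execution
        observation: str = ""   # anomalies or other remarks
        timestamp: str = field(default_factory=lambda: datetime.now().isoformat(timespec="seconds"))

    log: list[LogEntry] = []

    # The expectation is decided first, then the step is executed and the outcome logged.
    entry = LogEntry(step="Enter an order of 0 items",
                     expected="Order is rejected with a clear message")
    entry.actual = "Order was accepted with a total of EUR 0.00"
    entry.observation = "Anomaly: copy this entry to the anomaly administration"
    log.append(entry)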

Although exploratory testing doesn't give any guarantee about coverage, the log makes sure that afterwards the testers can explain to a relevant level of detail what they have tested. There are multiple reasons to log the testing. Firstly, after some time of active testing, the testers may not exactly remember what they already tested, so reading the log will help keep the test session efficient.

Secondly, when an anomaly is found during the testing, the testers need to register it. The information from the log can be copied to the anomaly administration.

Thirdly, some tests will have to be executed again, for example when an anomaly has been fixed or when the tests are added to a regression test.

All these reasons emphasize the need for proper logging. Logging may be done on paper, in electronic documents or by using logging tools. However, when using a tool, keep in mind that logging the expected results is an important part; a simple recording tool that captures all actions is not sufficient, since it typically doesn't capture the expected outcome.

Discuss results, conclusions and advice in a debriefing

At the end of the test session the testers have a debriefing with a relevant stakeholder, for example a product owner, scrum master, test master, user or any other person that has an interest in the results of testing.

The main goal of the debriefing is to convey all information that the stakeholder needs and to establish their level of confidence that the pursued business value can be achieved. Preferably, the stakeholder assists the team in the debriefing by asking critical questions about the experiences the testers had during the test session.

Often, multiple exploratory testing charters are executed that together support the establishment of confidence, so the information from the various debriefings will be put together. Any anomalies found will of course be recorded and followed up. This is done using the team's usual anomaly management procedure (anomalies from exploratory testing are not treated differently from other anomalies). The information and conclusions derived during the debriefing are kept and combined with the various test reports that the team produces. For more detailed information in a DevOps situation, see the Building Block "Reporting & alerting".

Special variants of exploratory testing

When writing down test ideas on a charter you may be inspired by two special variants of exploratory testing.

  • Testing tours
    A special kind of test idea is the so-called testing tour, introduced by James Whittaker [Whittaker 2010]. Of the many tours he describes, we mostly use the following four:
    1. The landmark tour. Pick specific feature landmarks and use them as a basis for testing. While testing these landmarks, the testers will also encounter other aspects of the IT system, uncovering unprepared areas while still being sure that the important parts (the landmarks) are not forgotten.
    2. The FedEx tour. Think of the data in your systems as parcels that have to be delivered, and follow the flow of the data through the system: enter the data in the system, follow it being stored and handled at various places in the system, try different features that use or process the data and track it all the way until the data emerges as output.
    3. The supermodel tour. While testing, only look at the user interface, not at the processing. Make sure everything is where it is supposed to be, that the interface is sensible, and watch particularly for usability problems.
    4. The intellectual tour. Ask the IT system the hardest questions. If for example the system must open a file, what is the most complicated file you can give it?
  • Soap opera testing
    Another special way to do exploratory testing is the so-called "soap opera testing" [Buwalda 2004]. Soap operas are dramatic daytime television shows that were originally sponsored by soap vendors. They depict life in a way that viewers can relate to, but the situations portrayed are typically condensed and exaggerated. In one episode, more things happen to the characters than most of us will experience in a lifetime. Opinions may differ about whether soap operas are fun to watch, but it must be great fun to write them. Soap opera testing is similar to a soap opera in that tests are based on real life, exaggerated, and condensed.
    For example, an exploratory test scenario could be about an insurance customer who insures his house, gets married, sees the house burn down, gets divorced, finds another house, marries the same wife again and lives happily ever after.

Serious fun!

Many people involved in testing have mentioned that exploratory testing is not only a valuable approach for gaining information about quality and risks so they can establish confidence in the pursued value, but also a lot of fun!! Working together in pairs or mobs stimulates creativity and exploration, through which information often surfaces that would otherwise have remained uncovered.

A Friday afternoon bug hunt is an excellent way to combine information gathering with fun. Just award a small prize (like a box of chocolates) for the most interesting piece of information found, distribute one or two simple charters to all team members to execute, and the team will have a great Friday afternoon.