The heated discussion around automated vs human-driven testing pushed me to pick up my pen again and write more about the jewel in the crown of testing techniques — exploratory testing.
Do you remember Sherlock’s hat? The legendary detective Sherlock Holmes (a character created by Arthur Conan Doyle) is best remembered for solving mysteries in the London fog, puffing on his pipe and wearing… a hat. It has become the stereotypical headgear of detectives who investigate the most complicated cases.
In our work routine, we wear multiple “hats” daily. Product Owners, Developers, Designers, and QA Engineers may juggle various roles as new challenges come along. However, QA Engineers are often the ones on the team who try on the imaginary detective’s hat. And I must say, if you are a good QA, it will fit you.
I started my career in software testing as a manual (or, better said, human) QA Engineer. Within a few years, I had also learned the basics of automation with the Gherkin syntax, helped Developers select tests for automation, and regularly monitored the status of automated tests in CI/CD pipelines. Yet my primary focus was on applying human-driven testing techniques and improving the team’s overall testing strategy, which included manual and automated tests across multiple product layers.
Now, my eyes widen whenever I hear community voices declare that manual testers are obsolete. I don’t take it as a personal offence. However, it is still hard to understand: are we ready to let tools (machinery) evaluate usability, UX, and accessibility, detect edge cases and UI glitches, and play with the product the way humans do?
Industry experts note that human-driven testing techniques are still essential to a product’s development. However, I observe that some well-known companies over-rely on automated tests while ignoring human-driven testing techniques, which can be unhealthy for overall product quality. I’ve already touched on the danger of over-relying on automation in one of my recent stories. After some conversations on LinkedIn, I got the push to write more about one of the most exciting manual testing techniques — exploratory testing.
Manual testing? Is it still alive?
First, let’s clarify: does manual testing still rock? If yes, then why? Before sceptics grab a raw tomato to throw at the author, I’d like to clarify: yes, human-driven testing techniques matter. Without skills in human-driven techniques such as exploratory testing, QA professionals would be redundant and could easily be replaced by AI. But that has not happened (yet).
The “aha!” moment struck me after listening to the great talk “In Praise of Manual Testing” given by Sue Atkins at TestBustersNight (arranged by Rudolf Groetz). Sue stressed that we are still not at the point of delegating all human-driven testing activities to AI-driven tools. That talk also sparked a discussion about whether the label “manual” is appropriate for testing activities humans perform. Other names, such as “in-person testing” (originally suggested by Julian Harty), could work better.
Don’t get me wrong, I stand for a balanced approach in QA. An agile tester should be able to apply human-driven techniques as well as automate certain tests or collaborate on automation with developers. However, there are “features” that tools (machines) are bad at: they lack creativity, empathy and intuition.
Automated tests will rarely detect edge cases that Developers or Designers did not think of (“unhappy” paths). Nor can they assess UX from a deeper perspective or against the product quality attributes (see ISO/IEC 25010: Systems and software engineering). Human QA, by contrast, excels in exactly those areas; existing tools can hardly explore a product at the level humans do.
As James Lyndsay notes:
“An automated test won’t tell you that the system’s slow, unless you tell it to look in advance. It won’t tell you that the window leaves a persistent shadow, that every other record in the database has been trashed, that even the false are returning true, unless it knows where to look, and what to look for. Sure, you may notice a problem as you dig through the reams of data you’ve asked it to gather, but then we’re back to exploratory techniques again.”
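To make that concrete, here is a minimal sketch (in Python with pytest and requests, against a hypothetical Amazing Portal endpoint and a made-up response-time budget) of how a scripted check stays silent about slowness unless someone told it, in advance, to look:

```python
# A minimal sketch, not a real project: the endpoint URL and the 2-second
# budget are hypothetical. It illustrates that a scripted check only
# verifies what it was explicitly told to verify.
import requests

BASE_URL = "https://amazing-portal.example.com"  # hypothetical URL for illustration


def test_orders_endpoint_returns_200():
    # This assertion is everything the script "knows" about quality:
    # a 200 response counts as a pass, even if it took 30 seconds to arrive.
    response = requests.get(f"{BASE_URL}/orders", timeout=60)
    assert response.status_code == 200


def test_orders_endpoint_is_fast_enough():
    # Slowness is only caught because a human decided, in advance,
    # that response time was worth checking and picked a budget.
    response = requests.get(f"{BASE_URL}/orders", timeout=60)
    assert response.elapsed.total_seconds() < 2.0
```

Everything outside those two assertions (a flickering UI, trashed records, a persistent shadow) passes through the script unnoticed, which is exactly where exploration takes over.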
So let’s take a closer look at exploratory testing and the detective’s hat.
Exploratory testing in a nutshell
Exploratory tests are based purely on human creativity and curiosity. They are performed without a prepared written script, and the testers are expected to find issues on the fly. Exploratory tests allow us to investigate system behaviours missed by scripted tests (e.g. end-to-end tests), so edge cases, UX discrepancies, and glitches get detected:
“An exploratory test needs no script, no chosen set of actions. Choice of actions is up to the tester, at the point of testing. Choice of information, of observation, is limited not by pre-existing design, but by opportunity and resource. Moment to moment, the tester chooses what to do, what to do it with, how to check what’s happened. Interesting things will be examined in more detail, weaknesses tried, doorknobs jiggled. The tester chooses to try two things together that between them open the system to a world of pain. The tester chooses to use this information, with that action, not just to see what’s desirable, but what’s possible. The exploratory tester focuses on risk.”
Exploratory testing is more spontaneous and informal than scripted testing, but it still requires discipline to be done properly. Normally, exploratory tests are guided by a defined goal (e.g. “to test a new feature”) and are usually executed in a session-based (time-boxed) manner. Testers might use a charter (task card) in which they note their observations.
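If it helps to picture that structure, here is a minimal sketch in plain Python with made-up field names (real teams usually keep charters in a wiki or a test management tool rather than in code) of a time-boxed session charter with free-form findings:

```python
# A minimal, illustrative sketch of a session charter; the field names
# and example findings are invented for this article.
from dataclasses import dataclass, field
from datetime import timedelta


@dataclass
class ExploratorySession:
    """A made-up, minimal representation of an exploratory testing charter."""
    goal: str                                          # e.g. "test the new checkout feature"
    time_box: timedelta                                # session length, often 60-120 minutes
    findings: list[str] = field(default_factory=list)  # free-form observations, noted on the fly

    def note(self, observation: str) -> None:
        """Record an observation as soon as it is made during the session."""
        self.findings.append(observation)


# Usage: a goal, a time box, and notes taken while exploring.
session = ExploratorySession(
    goal="Explore the new checkout flow as a first-time customer",
    time_box=timedelta(minutes=90),
)
session.note("Discount code field accepts 5,000 characters without any validation")
session.note("Back button after payment shows a blank page")
print(f"{len(session.findings)} findings in a {session.time_box} session")
```

The point is not the code itself but the shape: a defined goal, a time box, and room for whatever the tester stumbles upon.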
To make an impact, exploratory testing should be applied regularly. As Martin Fowler observes, it is not a good sign if the team does not perform exploratory testing: “Even the best automated testing is inherently scripted testing — and that alone is not good enough.” From my experience, I suggest devoting time to exploratory testing sessions weekly.
QA Engineers, Developers, Product Owners, or Designers might perform exploratory tests individually. In agile teams, each member can contribute to quality by exploring the product and informing the team about their findings.
Exploratory testing can also be applied in pairs, following pair programming principles. As Mariia Hutsuk and Sivamoorthy Bose write, paired exploratory testing assumes two roles. One person is the “driver”, the other is the “navigator”:
“Driver is a person at the wheel, this person should focus on the application while he/she is performing actions and ask questions if they occur. The navigator is in an observer position and tells direction of further checks and makes notes of steps or findings. Those roles can be changed from time to time.”
From my experience, you will achieve a multiplier effect by arranging a team-wide exploratory testing session — a so-called bug bash. To detect possible bugs as early as possible, the team might perform exploratory testing as soon as new features are coded and available in development environments.
Here is an example of a charter template for an imaginary team’s exploratory testing session.
“Hey! Today’s goal is to step into the user’s shoes and test the Amazing Portal customer flows. Use your skills and intuition to find as many visual or functional issues in the product as possible. Once you run into a bug or UX discrepancy, note it in the test charter. Below are only loose guidelines; you will still need to invent your own scenarios on the fly.
Task 1
As a Customer, browse the products listed on Amazing Portal.
Task 2
As a Customer, place an order for products A, B, C, and D.
Task 3
As a Customer, update the data in the Customer’s Profile.
Task 4
As a Customer, explore other areas of Amazing Portal.”
Once the exploratory session is completed, the detected issues should be reported and discussed within the team. Sometimes a feature must be completely reworked due to findings revealed in an exploratory testing session. This might be painful for the team, yet it is still a win-win situation: there will be less rework in the later stages of development for Designers and Developers, and less pain and frustration for the customers.
In my opinion, no AI-driven tool can replace human creativity, empathy and intuition in testing. Thus, human-driven techniques such as exploratory testing are still in demand and should be applied actively by team members, whether they are Developers, Designers, Product Owners or QA Engineers (in case your team is lucky enough to have one ;)). If used at the earliest stages of development, exploratory testing will guide you through potential edge cases and product issues, whether they are visible to users (UI level) or hidden (API level, backend behaviours). This can truly impact the quality of the product, reduce the cost of rework in the long run and shift the whole team’s testing strategy to the left. To achieve that, keep exploring and wear your Sherlock hat proudly.
You may check my LinkedIn page if you feel like connecting with me or are curious about my background. As a QA Engineer with over 7 years of commercial experience in the industry, I’m ready to communicate with teams looking for guidance and help in enhancing product quality and testing. At this very moment, I’m looking for a new role as a QA Analyst, QA Engineer or QA Lead.
Illustrations: by me (Apple Pencil, iPad, and no AI :))
Resources:
Sue Atkins, Playing with software — learning like a tester: https://medium.com/@TestSprite/playing-with-software-learning-like-a-tester-ca7412537d6b
James Lyndsay, Why Exploration has a Place in any Strategy: https://www.workroom-productions.com/why-exploration-has-a-place-in-any-strategy/
Mike Chang, Running an Effective Bug Bash: https://medium.com/@changbot/running-an-effective-bug-bash-317fafa9d963
Mariia Hutsuk and Sivamoorthy Bose, Importance of Exploratory Testing: https://medium.com/quality-matters/importance-of-exploratory-testing-3f02e34dc0c3
Martin Fowler, Exploratory Testing: https://martinfowler.com/bliki/ExploratoryTesting.html