Rethinking “manual testing”

Problem: “Manual testers” are often treated as second-class citizens in the world of software.

Are you fed up with being branded a “manual tester”? Are you tired of how often people place automation on a pedestal? Do you tear your hair out when people ignore your creative thought process and rush to “automate” the excellent information you uncovered? If yes, then there are some things you can do to improve your situation. We think this post will help you get started.


Why this post?

If you do not work on an important problem, it’s unlikely you’ll do important work.
–Richard Hamming in You and Your Research

Qxf2 Services thinks that clarifying what good testing looks like is one of the most important problems in the software testing world. At the heart of the misconceptions about software testing lies the perception problem around “manual testing”. In this post, we explore some ways to combat this perception problem. We researched, discussed and debated ways to rethink “manual testing”, and we provide a summary of our internal discussion. Based on my chess background, I have suggested a substitute phrase for “manual testing” that better captures what good testers really do. We have also linked to high-quality articles on this topic from the top minds in the software testing community. These articles are an excellent starting point for the motivated tester to explore this topic further. Happy reading, happy thinking!


Qxf2 internal discussion

I am choosing to provide a brief summary of the internal discussions we had at Qxf2 Services. I am doing so with two goals in mind. One, to receive feedback. Two, to spark similar discussions within your team.

We began by discussing common places where the perception problem around “manual testing” manifests itself. We noted that recruiters often ask testers to identify themselves as either “manual testers” or “automated testers”. Hearsay has it that “automation testers” are paid more in the industry. Within companies, we have seen engineering leaders devote a disproportionate amount of talk time and focus to automated checking. We have seen developers who seem to think that unit testing is the same as testing thoroughly. We have heard of testing teams afflicted by the “automate everything” bug. Time and again, the common theme in our discussion was how often people in the software industry over-estimate the impact of automated checking.

Another track of our discussion centered around expressing the value of having intelligent humans perform good testing. Yes, good testers use a variety of tools. Yes, good testers may choose to write automated checks when the situation demands it. However, it is a human who adapts to the changing nature of the software. It is a human who can empathize with the (other human) user’s needs. It is humans who interpret patterns in the results of automated checking. Humans are much better than software at recognizing the initial conditions of a problem. [In chess, this is known as the ‘orientation’ phase.] One key insight we had was that the problem in front of the tester keeps evolving. People can (if they choose to) automate what has already been done but cannot automate everything that they are going to do. So automated checking is, in a sense, always playing catch-up and is inadequate in many situations. During these discussions, I noticed a couple of things:
a) our instincts were right but our language was lacking. E.g.: We ended up with ‘survey vs interview’ instead of the more succinct ‘checking vs testing’
b) we came up with a lot of analogies to explain what we did (machine vs hand embroidery, multiple-choice questions vs open-ended questions, the absurdity of calling something “manual security” vs “automated security”, how it’s “cars” and “driverless cars” and not “manual cars” and “automated cars”, etc.)

One more line of attack involved trying to understand the origins of the problem. Our speculation is that when the world first saw machines running checks, it named this kind of checking “automated testing”. Then, without realizing the impact, whatever testing remained got labeled “manual testing”. The problem is that “manual testing” does a lousy job of describing what remains in testing after you remove automated checks. People inadvertently use the phrase “manual testing” when they really mean NOT automated checking. Our theory is that “manual testing” has suffered from this perception problem because it was passively defined by contrast to “automated testing”.

As a side note, devoting time to attack this problem had a positive effect on morale and energy in the office. I noticed higher energy levels and boosted confidence levels. So testing leads, consider discussing this topic with your team.

NOTE: Team Qxf2 consists of (in no particular order): Rajeswari Gali, Deepthi Sathish, Rupesh Mishra, Saurabh Chhabra, Avinash Shetty, Vrushali Toshniwal and (me) Arun. Special thanks to Avinash and Vrushali for researching important articles on the topic and selecting key quotes.


Advanced Testing (like Advanced Chess)

I love chess. I am going to borrow a phrase from the world of chess. I think that good testers practice Advanced Testing. Ummm … what testing?? Advanced Testing. You know, like Advanced Chess. Advanced Chess is a form of chess where human competitors get to use chess software during the game. The humans usually use the software for blunder checking, exploring complicated variations in-depth and double checking human calculations. In specific positions, the human may choose to completely ignore the computer’s suggestion. At every point in the game, the human’s judgement, wisdom and intuition about the position is paramount. The computer is used as a supplementary tool. To me, this is very similar to good testing. The phrase “manual testing” does a poor job of describing this kind of “Advanced Testing”. India’s greatest sportsman and the 15th Classical World Chess Champion, Viswanathan Anand, expresses my angst beautifully:

I think in general people tend to overestimate the importance of the computer in the competitions. You can do a lot of things with the computer but you still have to play good chess.
— Viswanathan Anand

This. So much this. I’ll paraphrase the above quote so that it applies to the testing world: In general, people overestimate the importance of automated checks in testing. You can do a lot of things with automated checking but you still have to perform good testing.


Relevant links and quotes

This problem around “manual testing” has been attacked by the brightest software testing minds for a long time now. In this section, we are quoting and linking to some of the best articles written by the leading lights of the testing world. To get the most out of this post, please take time to study the material listed here.

1. Michael Bolton in many places but you can start with “Manual” and “Automated” testing

The categories “manual testing” and “automated testing” (and their even less helpful byproducts, “manual tester” and “automated tester”) were arguably never meaningful, but they’ve definitely outlived their sell-by date. Can we please put them in the compost bin now? Thank you. –Michael Bolton

The article and the comments in the article are gold. Michael Bolton has also indirectly attacked this problem by actively defining a difference between testing and checking. As one of his Twitter followers, I regularly see him attack the false dichotomy of manual/automated.

2. Cem Kaner in Architectures of Test Automation

Rather than calling this “automated testing”, we should call it computer-assisted testing. –Cem Kaner

In the paper, Cem Kaner presents a powerful example of why GUI regression testing is more accurately described by the phrase “computer assisted” testing rather than “automated testing”. To support his point, he breaks down GUI regression tests into a bunch of tasks and examines who/what does the task.

3. James Bach in so many places but you can start with Sapient Processes

I am applying a new term to processes: sapience. Sapience is defined in the dictionary as “wisdom; sagacity.” I want to suggest a particular connotation. A sapient process is any process that relies on skilled humans. –James Bach

I’ll be honest – the first time I read about ‘sapient testing’ I did not like the phrase. I was not familiar with such big-sounding English words, and my gut feeling was that not many people know the meaning of the word “sapient”. Now that I have gained more experience in the testing field, I see the need for it. I have used “sapient” in specific contexts and it has helped.

Another interesting read is Testing and Checking Refined. The theme of thinking testers representing themselves better is at the heart of a lot of James Bach’s lectures. I highly recommend watching his talks on YouTube.

4. James Whittaker in Turning Quality on its head (GTAC 2010)
I enjoyed the first half of this hour-long talk. Testers should focus on the language being used and think about places where they can apply “early stage” and “late stage”. Bonus: There is an interesting anecdote about Linus Torvalds asking for human testers.

5. Martin Fowler in Broad Stack Tests
I liked enhancing my vocabulary with words like “broad stack”, “full stack”, etc. These new words help me describe why I design some tests the way I do.


What can YOU do?

There are some things YOU can do to combat the perception problem around “manual testing”. At a high level, you can take a combination of the following approaches:
1. Destroy the false dichotomy of “manual” vs “automated” testing
2. Introduce new terms
3. Highlight the limitations of automated checking
4. Do a great job of testing
Do such a good job that people do not care about the tools you used. I have experienced this first-hand, albeit temporarily.

Based on our reading and research, team Qxf2 has decided to implement two concrete action items for ourselves. We think they are easy to do and will make a difference over the long run. The two action items are:

1. Use the words checking and testing correctly
I have been using the phrase “automated tests” on this blog and in various conversations. Going forward, I will use the phrase “automated checks”.

2. Avoid the phrase “manual testing”
We have chosen to learn new words and substitute phrases. Based on our reading, here are a bunch of words and phrases worth thinking about: sapient process, broad stack testing, deep stack testing, full stack testing, late stage testing, early stage testing, automated checks, computer assisted testing. “Advanced Testing” works well for a chess lover like me. Use it if it suits you. If not, invent your own!
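To make the checking/testing distinction concrete, here is a minimal sketch of what an automated check is: an algorithmic, pass/fail comparison against outcomes someone decided on in advance. The `apply_discount` function and its expected values are hypothetical examples invented for illustration, not code from any real product.

```python
# A "check" is an algorithmic decision rule: observe, compare, report pass/fail.
# The function under check and the expected values below are hypothetical.

def apply_discount(price, percent):
    """Toy function under check: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100.0), 2)

def check_discount():
    # Each tuple: (price, percent, expected result) -- all fixed in advance.
    cases = [
        (100.00, 10, 90.00),
        (50.00, 0, 50.00),
        (20.00, 50, 10.00),
    ]
    for price, percent, expected in cases:
        actual = apply_discount(price, percent)
        assert actual == expected, (
            f"{price} at {percent}%: got {actual}, expected {expected}"
        )
    return "all checks passed"

print(check_discount())
```

Note what the check cannot do: it only confirms outcomes a human already anticipated. Deciding which cases matter, noticing questionable rounding behaviour, and challenging the requirement itself remain testing work that a human performs, tools or no tools.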


I understand that a lot more effort needs to be directed to solving this issue. I sincerely hope that this (really) long post adds something to the existing discussion on this topic. Thanks for reading!

Arunkumar Muralidharan
I want to find out what conditions produce remarkable software. A few years ago, I chose to work as the first professional tester at a startup. I successfully won credibility for testers and established a world-class team. I have led the testing for early versions of multiple products. Today, I run Qxf2 Services. Qxf2 provides software testing services for startups. If you are interested in what Qxf2 offers or simply want to talk about testing, you can contact me at: mak@qxf2.com. I like testing, math, chess and dogs.

© 2013-2017, Arunkumar Muralidharan. All rights reserved.
