My approach to testing spreads into my non-testing activities as well. It’s part of my thinking toolbox, my mindset. I think that, due to its nature, software testing is special: it involves lots of unknowns, which require mechanisms to deal with them. Then further mechanisms are needed to understand whether the existing mechanisms suit the current context, and this can go on and on, like a fractal. In this blog post, I chose three ideas I picked up while testing which I consider valuable, and which improved my view of the… world.
I know nothing
While hunting for bugs, there is a high risk of falling into the trap of not noticing things. You might be aware of biases like confirmation bias, the neglect of probability, or the observer-expectancy effect. To deal with this, I like to practice a mental attitude: I remind myself that I know nothing and that the information I was given can be untrue. With this in mind, interesting bugs start to reveal themselves.
I find this attitude useful outside bug hunting too. It reminds me that I don’t know everything and that the things I know can be partial, inaccurate or false. For example, suppose you and I are having a conversation about a movie we’ve both seen, and you refer to something I don’t recall from the movie. The “I don’t know everything” running in the back of my mind creates an automatism to first listen to you and your argument, instead of reacting and egotistically exposing my own view. Listening is an important skill that enriches relationships. When taking part in an argument, doubting my own interpretation surely helps me sharpen my ears and listen to what the other has to say. Over time, I observed that this attitude shortens the time it takes to see bugs, and also to see another person’s point.
The physicist Richard Feynman has a nice video talking about Not knowing things [00:00:57]. It is part of a longer recording that can be found here: Richard Feynman – The Pleasure Of Finding Things Out [00:49:38].
For me it’s a pass. What if for you it’s a fail?
When we run tests and analyze the results, we testers may not always have all the relevant information to evaluate them. Sometimes, we can’t even know that we don’t know. I see two cases: one, when we mark a scenario as a fail, but for a stakeholder it’s a pass; and two, when we mark a scenario as a pass, but for a stakeholder it’s a fail. I experienced both. The second case I found more painful, especially because it didn’t even feel fishy enough to make me question my evaluation of the test result.

In time, I developed different mechanisms to prevent it from happening. I accepted that it’s not in my area of control to know everything. What is in my area of control is how I deliver information. So, I adjusted my test documentation. Now, instead of writing pass or fail for a scenario, I write down the behavior I observed and then have that reviewed, to help identify false passes. A nice side effect is that it reminds me to look for all the ways I can evaluate a test result: sometimes I should also look in logs, network calls, sent emails, analytics or another functionality, but initially I forget, or I’m not aware of their existence.

Repeated experience with false evaluations also led me to question the ‘obvious’. Asking self-evident questions has become very important to me. How easily I can get fooled by not doing it! This approach has the downside of being judged as a shortcoming on my side. Fortunately, observing the good results, I find it less and less hard to overcome the fear of looking silly.
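The idea of recording observed behavior instead of a bare pass/fail verdict could be sketched in code. This is only an illustration under my own assumptions; all names here (`TestObservation`, its fields) are hypothetical, not part of any real tool:

```python
from dataclasses import dataclass, field

@dataclass
class TestObservation:
    """A test result recorded as observed behavior, not a bare pass/fail.

    The tester records what happened and attaches evidence (logs, network
    calls, sent emails, ...); a reviewer later decides what it means.
    """
    scenario: str
    observed: str
    evidence: list = field(default_factory=list)
    reviewed_by: list = field(default_factory=list)

    def summary(self) -> str:
        # The report shows review status, so an unreviewed result
        # is never silently treated as a pass.
        status = "reviewed" if self.reviewed_by else "awaiting review"
        return f"{self.scenario}: {self.observed} ({status})"

# Hypothetical usage: the reviewer, not the tester alone, judges the behavior.
obs = TestObservation(
    scenario="Password reset",
    observed="Reset email sent; link expires after 24h",
    evidence=["mail log entry", "network call to /reset"],
)
obs.reviewed_by.append("product owner")
print(obs.summary())
```

The point of the sketch is the design choice: by storing the behavior rather than a verdict, a stakeholder reviewing the record can catch a false pass that the tester had no way to notice.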
I started to question the obvious outside the work context as well. Many times I get reactions like “Really, you don’t know that?” or answers to questions I did not ask. People sometimes look down on me when I do that, but other times they enter a thinking process. Sometimes it results in a fun situation, other times it creates confusion or frustration. Still, I keep doing it once in a while. Unpleasant feelings come and go, but I see the invitation to look at things less superficially as a valuable reminder.
Next, I’d like to share a comic I love, related to questions and answers: A day at the park – Mused.
I used to have sprint demos/reviews in some projects I worked on. During those meetings, we showed the product owner the new features. Did it ever happen to you that the product owner had a different perspective on how the features should behave? When this first happened to me, I worked to avoid the situation in the future. Back then I thought the resolution lay only in my skill set, but then I realized it’s beyond my area of control. Learning more about sprints, I understood that demo meetings exist exactly for this reason: to get fast feedback when the team’s implementation is off track.
In day-to-day life, I started to feel the need to ask for feedback. In the past, I couldn’t handle feedback very well. When someone shared a different perspective, I used to take it as a critique pointing out how imperfect I am. Now, I get less and less grumpy when this happens. It is not a critique. People evaluate through what they know, and what they know is likely different from what I know. I cannot know everything they know, and I cannot learn it unless I listen to their feedback.
Feedback can be sought in many forms, from directly asking for it to implementing verification mechanisms: for example, proactively restating what we understood after a discussion, checking information across multiple sources, offering visibility into what we’re doing, and so on.
The list could continue with many other things I learned in software testing that enriched my life. Even the existence of testing as a job is a great hint to stop feeling bad when we make mistakes. Mistakes are expected; there’s even a whole job invented because of them. We have bugs. Sometimes we spot them. Sometimes we can fix them. Sometimes we can’t, and we just need to rethink them as features.