I mentioned in the previous article that I would say more about the similarities between the phases of a qualitative research process and the phases of testing activity.
Here I continue the parallel with the book – ‘Reliability and validity in qualitative research’, by Jerome Kirk and Marc L. Miller – with a more extended discussion of invention, discovery, interpretation, and explanation.
“[…] the full qualitative effort depends upon the ordered sequence of invention, discovery, interpretation, and explanation.” (page 60)
When I test a product, I go through a sequence of different activities that focus on different aspects of the testing process.
“Invention denotes a phase of preparation, or research design; this phase produces a plan of action.”
In my case, I see this as the stage at which I decide how to test a software service or product, by identifying and building a test strategy.
The book presents three sub-phases associated with invention, in the case of anthropological research: the first directions to the field to be studied, the first look over the field, and the first taste of it (meaning the first interaction with the culture being studied). These further shape the approach of the research.
I associate this with the experience of learning how to approach the testing task at hand. It’s what happens before I dive into the task, a sort of prelude to actually performing the task. The first look and taste could translate into having a hands-on experience with the product to be tested.
The ‘Invention’ part of testing has its challenges: it sometimes relies on tacit knowledge, and it is about learning how to learn.
When testing in order to learn how a product works, this stage can be less evident. Thinking about this, though, makes me wonder how many of my actions are fortuitous, even when I set out to discover how a piece of functionality works. What I do in that situation is, after all, apply a strategy, and that strategy is based on some ideas about what my current purpose is.
“Discovery denotes a phase of observation and measurement, or data collection; this phase produces information.”
Observation and measurement at this stage translate into running tests and experiments.
According to the book, this phase begins with finding a place and time to make observations and ends when reaching an appropriate amount and quality of data.
Some risks I see related to this stage would be:
- finding the wrong place or time to make observations. I could be performing tests on functionality that is not relevant in my context, or at the wrong time – for example, before the build is in a stable state – rendering the collected data useless.
- collecting wrong data. This can happen if the tests do not take into account a factor that alters the results in an unwanted way. For example, I once failed to take cache clearing into account for a web application I was testing. When a new version was released, the latest XAPs were not retrieved, so the results of my tests were not relevant.
- stopping at the wrong time. Deciding when the appropriate quantity and quality of data has been collected is tricky, and I can fail to consider some aspects and stop too early, or too late. By too late I mean getting past the point where the data generated is useful, or where the value the data brings no longer justifies the time spent collecting it.
“Interpretation denotes a phase of evaluation, or analysis; this phase produces understanding.”
Interpretation means understanding test results and investigating bugs.
According to the book, at this stage, the relation between the problem, the tools, and the gathered data is verified, adjusted, or reconsidered.
This is where the validity and the reliability problems are considered and analyzed.
In some cases the interpretation of results may be obvious, so the evaluation and analysis are trivial.
One challenge that could appear at this stage is that no pattern arises from the gathered data. This could mean that no model known to the tester applies to the data, or that the tester does not know how to apply one. In that case, the tools need to be reconsidered.
Also, the interpretation of qualitative data can be very tricky, since there may be no scale at hand, as in quantitative analysis, against which to compare and place the results. Making an accurate interpretation from qualitative measures is therefore sometimes difficult; it depends on the tester’s skill, knowledge, and experience, among other things. Relying on these when the team works remotely, as in my case, also means building trust between team members, which is a continuous effort.
“Explanation denotes a phase of communication, or packaging; this phase produces a message.”
Explanation consists of creating some sort of report to communicate my results to the people who matter in my context, with the purpose of passing along information that helps them make project-related decisions.
One challenge at this stage is communicating the results in a way that is relevant to the recipients of the information. To be effective and convey the information I intend to share, a report for someone on the business side can look very different from one destined for the developers. So determining the most suitable form for the report is a continuous effort of improvement.
I could probably skip this phase if the data collected during the Discovery phase were self-explanatory to the team members involved.
I’ve found this iteration of stages in my own work, and I think structuring my testing activity this way helps because it allows me to:
- define and analyze my activity (depending on what I’m focusing on, I know which stage of the testing process I am at) and thus
- situate my activity in a general framework that I can customize to my particular needs, so I can
- obtain hints about which questions to ask myself or others: the way I collect test ideas to build my strategy depends on my information objective, and so does the interpretation of test results
- decide what is a bug and what isn’t; the way the report is created depends on the stakeholders’ needs, so I first need to find out what the report should look like.