At the end of January 2020, I attended the Dutch Exploratory Workshop on Testing. Its particular format made this gathering different from anything I had experienced before. For details, I recommend going through the handbook developed by LAWST – the Los Altos Workshop on Software Testing – and Paul Holland’s blog post on facilitation (with K-Cards). In short, peer workshops are distinguished by:
- participation based on invitation, up to 20 participants
- each participant is ready to share an experience with the group, but not all will have the chance to do it
- presenters are announced during the conference, in batches of three
- a presentation consists of 15 minutes of experience sharing followed by a peer discussion facilitated with K-Cards
- discussions last until there are no more ideas in the group
- conclusions, rules, and techniques are crystallized and shared with the rest of the industry when possible
This ninth DEWT edition had eighteen participants and a Star Wars poster. Simon ‘Peter’ Schrijver was the workshop chair and Ard Kramer the content owner. The theme was “Testability; A quality aspect which is underestimated!?”
The first time I became aware of testability was in my third year of testing, through James Bach’s Heuristic Test Strategy Model and the expansion of its Quality Characteristics that thetesteye.com put together. If you don’t know these, I encourage you to go through at least the Software Quality Characteristics. Here are the English version and the Romanian version. The testability dimension is on the second page.
In his experience report, James pointed to another model that can be applied to testability:
CODS – Control, Observe, Decompose, Simplify
Compared to other testability models, I find CODS very simple to refer to and keep in mind. James demonstrated it in the context of coming up with solutions for testability issues.
Maaike’s experience report questioned the use of the word testability. Several testers in the room had experienced being ignored when raising the issue of low testability. While Elizabeth’s solution was to avoid the word altogether with the teams she works with, Maaike had success by using the term Observability instead. Huib likes to use Learnability. A new coinage that came out of the workshop is Whatafuckability. Outside the workshop’s walls, Anne-Marie Charrett’s alternative word is Qualitability.
Zeger brought to the table the subject of mental testability. He shared two experiences of products that (either through their purpose or their people) violated his moral values and considerably decreased his desire to test. Several of us in the room recognized ourselves in similar situations. We shared our thoughts and highlighted that, at least when our bodies start responding – e.g. with an increased heart rate, or even a blackout – it’s important to stop ignoring it. Apart from this body indicator, Jean-Paul shared the “wife” indicator: when our partner points out behaviors we don’t usually have, e.g. discussing work obsessively, working far beyond schedule, or being more irritable or impatient with our children.
My DEWT experience report
I questioned where our intention to improve testability comes from. What do we do when the other people on our team do not see the value of the improvement? How do we deal with the unfulfilled desire to improve? In my experience report, I shared a case where the test team was opposed to having access to the code/deploy system, and my approach to working around it. Ruud sketched it.
I don’t want to leave anything out
Of course, we discussed many other dimensions of testability, from the myth of testing in isolation, to the efficiency of pipelines, to being able to test but not automate, to whether confidence could be a testability indicator. Ideas circulated not only during the dedicated experience-report slots, but also during our social evenings and breaks, at the lovely hotel’s fireplace, and on our walk in the forest.