I want to share with you how State Model-Based Testing (SMBT) and AltWalker have helped us in testing a complex auctioning system.
About the project
The project is an online web system used for auctions. The main terms used day-to-day on our project are “auctioneer”, “bidders”, “rounds”, “bids”, “withdrawals” and “waivers”.
The application provides functionalities for all actions that could happen during an auction:
- the auctioneer setting up and moderating the auction
- bidders submitting bids
- bidders withdrawing from the auction
- bidders waiving (skipping the round)
How adding automation powers up our tests
An important level of complexity is added by multiple bidders interacting with the application at the same time.
Also, due to the context of the application (working with huge amounts of money), the requirements for functional precision, performance, and uptime are strict, and all of them are high priorities in our testing strategy.
Even so, to simplify our model and avoid state explosion, the statechart we modeled covers only the actions from the main flows. Functionalities such as messaging or settings were not considered for testing with MBT.
Having to cover a great number of mandatory use cases in a short time frame puts us under a lot of pressure and leaves very little room for exploratory testing. We therefore concluded that we had to automate the repetitive, basic tests to make room for exploring.
The complexity and the context of the application are other reasons why we chose a Model-Based Testing approach at the API level instead of linear tests. This approach helps us generate multiple test flows based on different conditions (such as coverage or path length). Last but not least, the System Under Test’s behaviour could be modeled as a statechart, meaning we have to perform precise actions to transition from one state to another.
Automating tests with AltWalker also provides fast feedback. Even partial coverage of the application makes the developers more confident when committing their code. We, as a team, also avoid the ping-pong of tasks that don’t work at all at first, saving time for other work.
How AltWalker works
Because AltWalker is a state model-based testing framework, we had to represent the application as a statechart. We did this using the model editor developed by the AltWalker team. It allowed us to create a model by writing a JSON file with a specific structure and then viewing the generated graph. Another option is to draw the graph directly and then use the generated JSON file.
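For illustration, a minimal model file in that JSON structure might look like the fragment below. The model, vertex, and edge names here are made up for the example, not our actual auction model; the "generator" field tells GraphWalker how to walk the graph and when to stop (here: a random walk until 100% edge coverage).

```json
{
  "name": "Auction models",
  "models": [
    {
      "name": "BidderModel",
      "generator": "random(edge_coverage(100))",
      "startElementId": "v0",
      "vertices": [
        { "id": "v0", "name": "v_logged_in" },
        { "id": "v1", "name": "v_bidding" }
      ],
      "edges": [
        {
          "id": "e0",
          "name": "e_join_round",
          "sourceVertexId": "v0",
          "targetVertexId": "v1"
        }
      ]
    }
  ]
}
```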
The main components of a model are edges and vertices. An edge represents a transition from one state to another, and a vertex represents a specific state the application is in at a given moment. From the test automation perspective, the edges and vertices correspond to individual methods in the test file that contain the implementation of the actions and the assertions for the states of the application.
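As a sketch of that mapping (the class, method names, and state checks below are illustrative, not our real test code), each model becomes a class named after it, and each edge and vertex name becomes a method AltWalker calls as it walks the graph:

```python
class BidderModel:
    """AltWalker looks up this class by the model's name and calls a
    method for every edge and vertex the walk passes through."""

    def setUpModel(self):
        # AltWalker fixture: runs once before this model is executed,
        # e.g. to authenticate a bidder. Here we just fake some state.
        self.round_state = "bidding"
        self.bids = []

    def e_submit_bid(self):
        # Edge method: perform the action that triggers the transition.
        self.bids.append({"amount": 100})

    def v_bidding(self):
        # Vertex method: assert the application is in the expected state.
        assert self.round_state == "bidding"
```

A failing assertion in a vertex method is what makes AltWalker report the walk as failed at that state.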
We created two separate models connected by a shared state: the auctioneer model and the bidder model. As you can see in the figures below, the v_bidding vertex is present in both models, representing the connection between them. Having two connected models instead of a single one allowed us to implement the functionalities gradually and also to test only parts of the system (one model or the other).
Figure 1. Part of the auctioneer model
Figure 2. Simplified bidding model
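The connection between the two models relies on GraphWalker’s shared-state mechanism: a vertex that carries the same sharedState value in both model files is treated as the same state, so a walk can jump from one model to the other there. A hypothetical fragment of such a vertex (the id and value are illustrative):

```json
{
  "id": "v3",
  "name": "v_bidding",
  "sharedState": "bidding"
}
```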
Bugs found with AltWalker
Using AltWalker we found functional issues. When we implemented the tests and noticed these issues, our first thought was that something was wrong in the logic of our test flow. However, analyzing the steps that AltWalker performed, we noticed there was a regression in the functionality of the application. One example of such a bug: a bidder was able to access the round report after withdrawing from the round.
Another issue we found was an improper configuration of the auctions, which allowed bidders without access to a specific auction to take part in it. This major configuration issue raised security concerns, and it was found quickly and easily with the automated tests.
Running the automated tests also revealed slow responsiveness in the application. As mentioned earlier, this type of system requires fast responses. There were cases when the round did not end fast enough: the tests still found the round in the “running” state while asserting that it had ended.
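A simplified sketch of the kind of check involved (the helper and state values are hypothetical, not our actual code): polling the state with an explicit timeout turns slow responses into a clear timing failure rather than a flaky assertion.

```python
import time

def wait_for_round_state(get_state, expected, timeout=5.0, interval=0.5):
    """Poll the round state until it matches `expected` or the timeout
    expires; return the last observed state either way."""
    deadline = time.monotonic() + timeout
    state = get_state()
    while state != expected and time.monotonic() < deadline:
        time.sleep(interval)
        state = get_state()
    return state

# In a vertex method we would then assert on the final observed state,
# e.g.: assert wait_for_round_state(api.round_state, "ended") == "ended"
```

If the round is still “running” when the timeout expires, the vertex assertion fails with the exact state that was observed, which is what exposed the slow responses for us.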
Since we’re using a tool to perform the tests, it’s not always about the application under test; sometimes it’s about the tools involved. Yes, we also found bugs in GraphWalker, the base tool on top of which AltWalker is built. One such bug caused the tests to get stuck in a specific vertex, not knowing how to get out of it, and to throw an algorithm exception. That was not helpful while our focus was on testing the auction application, but we managed to report the issue and find workarounds.
Don’t put all your eggs in one basket
Neither automation nor AltWalker is a universal solution. It all depends on what you want to find with the tests you’re writing, so start coding with that goal in mind. We did not prioritize finding front-end or design issues. Also, this type of testing might not be the most suitable for covering security or error-handling issues.
We were not trying to test every piece of data in the application either. Our focus was on having the states covered and on tests that validate the application is indeed in the state we expect, then letting the models be covered randomly. This way we were able to cover multiple flows.
Most importantly, automation and AltWalker were not used to replace manual testing, because they can’t. However, they reduced the workload of repetitive tasks and gave us the time to get creative when designing tests.
How we are now happier as testers
Having at least basic coverage of the functionalities, we now have time to consider more in-depth tests. Thanks to the model we created based on the architecture of the application, we are more prepared than ever to come up with new tests and flows that we may not even have considered before.
We no longer waste time performing repetitive tasks; instead, we test more complex use cases and apply testing techniques that had not been used on the project before. We now have more time for exploratory testing, looking for flows that might fail even though they are not that straightforward.
We now use our time to further develop the automated tests we have so far and to work on our code-review and coding skills.
For questions and support, join the Gitter room: https://gitter.im/altwalker/community