Automated testing for auctioning systems

Model Based Testing Case Study

Key achievements

  • Provided a model-based testing framework for the auctioning system
  • Offered a solution for testing multiple auto-generated paths through the system
  • Reduced the number of repetitive tests run by the team, freeing time for more in-depth ones

Types of testing

Automated testing
Model Based Testing
API Testing

Tools & technologies

AltWalker
Model Editor
Python

Industry

Auctioning System
Web Platform

The project

Specure Auctions is an online web system used for frequency spectrum auctions, allowing agencies from different states to run and moderate an auction. For each auction, there are multiple bidders able to perform actions such as bidding, waiving, or withdrawing, based on the rules defined in the auction workflow.


The needs and challenges

We faced great variability in the tests because multiple bidders could perform multiple actions, in no specific order. This added a significant level of complexity to the application.

We needed our tests to provide fast feedback so that we could find regressions in the application as soon as possible. This was critical given the context, with large amounts of money being bid. Moreover, achieving high test coverage through manual testing alone was very time consuming and difficult to accomplish within the short timeframe required.

Another challenge was finding a suitable automation solution given the complexity and variability of the application. There are many flows that could be covered, and each transition from one state of the application to another opens up multiple further flows.

Our solutions

To meet the needs of the application, we decided to use AltWalker, a model-based testing framework, and to run the tests at the API level. Using the Model Editor, we modeled the application as a statechart consisting only of the actions from the main flows, in order to avoid state explosion.
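
To illustrate how such a model maps to test code, here is a minimal sketch of an AltWalker test skeleton; the model, vertex, and edge names are hypothetical, not those of the real auction model. AltWalker runs a Python class for each model in the statechart and calls a method for every vertex and edge that the path generator visits.

    # tests/test.py -- illustrative AltWalker skeleton; names are hypothetical.
    # Edge methods perform the actions that move the application between states;
    # vertex methods are where the state asserts will live.

    class AuctionModel:
        def setUpModel(self):
            # Runs once before the model is executed: for example, create an
            # auction and log in the auctioneer and the bidders through the API.
            pass

        # Edges: actions available to the auctioneer and the bidders.
        def e_place_bid(self):
            pass

        def e_waive_round(self):
            pass

        def e_withdraw(self):
            pass

        # Vertices: states of the application reached after those actions.
        def v_auction_open(self):
            pass

        def v_bid_placed(self):
            pass

    # A typical way to run a generated path against this code would be:
    #   altwalker online tests -m models/auction.json "random(edge_coverage(100))"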

First, we implemented the actions that allow the auctioneer and the bidders to transition from one state of the application to another. The plan was to have the actions covered before implementing the asserts that validate that the application is indeed in the state we expect it to be in.
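
As a rough sketch of how this looks at the API level, the snippet below fills in two of the methods from the skeleton above; the endpoints, payloads, and credentials are assumptions made for illustration, not the project's actual API.

    # Illustrative only: hypothetical endpoints and payloads.
    import requests

    BASE_URL = "https://auctions.example.com/api"  # assumed base URL

    class AuctionModel:
        def setUpModel(self):
            self.session = requests.Session()
            # Hypothetical bidder login so the edges below can act on the auction.
            self.session.post(f"{BASE_URL}/login",
                              json={"user": "bidder1", "password": "secret"})

        # Edge: the action that transitions the application to a new state.
        def e_place_bid(self):
            response = self.session.post(f"{BASE_URL}/auctions/1/bids",
                                         json={"lot": "A1", "amount": 1000})
            assert response.status_code == 201

        # Vertex: validate that the application is really in the expected state.
        def v_bid_placed(self):
            response = self.session.get(f"{BASE_URL}/auctions/1/bids")
            assert response.status_code == 200
            assert any(bid["amount"] == 1000 for bid in response.json())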

We were aware that automation is not the solution to every problem. Designing tests requires critical thinking and knowledge of the application, and sometimes implementing an automated test is more complex and time consuming than simply running it manually when it is needed. Therefore, we started analyzing the stories before development began, testing them manually, and performing exploratory testing on the functionalities of the application.

These activities drastically reduced the time we could invest in developing the automated tests, but we had to do them in order to find and report issues as early as possible in the development phase.

Noticing that we were not progressing fast enough with the automated tests, we needed another solution. As a team, we agreed on the following actions so that we would have more time to work on the automated tests:

  • Developers would test the user stories strictly against the acceptance criteria
  • The reporter of a bug would test the fix proposed for it
  • As testers, we would allocate a couple of days at the end of each sprint for exploratory testing

These decisions benefited the team and gave us more time to cover the statecharts with automated tests. And we did.

Our results

Once the tests were ready, we had great success in finding regressions in the functionality of the application early. The framework also helped us find configuration issues in the auctions and even detect slow responses from the application when specific actions were performed.

Another great outcome of these tests is that they freed up time for us, as testers, to consider more in-depth tests to perform. Thanks to the model we created based on the architecture of the application, we were more prepared than ever to come up with new tests and to explore the application in ways we had not even considered before.

For more details on how we applied this solution and the path towards it, see Auctioning Systems – A Model-Based Testing Approach.

Looking to improve your test automation process?

We can find a solution suitable for you!

Call us at +40 371 426 297