State Transition Testing – Automated Tests for Authentication Flows

In this article, we present an approach to testing the authentication flow of an application with the State Transition Testing technique. We use AltWalker, GraphWalker and Python to write the tests and execute them automatically.

The article is structured in five sections. The strategy section covers what we want to achieve with our automated tests. In the Modeling the authentication flow section we go over how to model your tests using a state transition diagram. Then, in the Writing tests and Executing tests sections, we take a hands-on approach to implementing the tests in Python and running them with AltWalker. In the final section, Running in CI, we cover a way to continuously run the tests during the development of the system under test.

In Model-Based Testing we create a model that represents the behaviour of the system under test. State Transition Testing is an application of Model-Based Testing where the behaviour of the system under test is modeled as a directed graph of states and transitions.

The strategy

When developing an application we want to put sanity checks in place that ensure it functions well across the development and maintenance phases. The authentication workflow is among a user’s first interactions with the application. Failure of any step in the authentication flow means failing to convert potential clients.

When testing an application we want a clear overview of what the automated tests cover. On bigger projects, with multiple stakeholders, we want to share that coverage with the developers, product owner, project manager and other parties involved.

In the authentication module we want to replicate as closely as possible the multiple ways in which users interact with the application. In a web application the user starts from the homepage and navigates to login, where they can enter their credentials, create an account or reset their password. Different users may behave differently. To cover as many behaviours as possible, we randomize the choice of the next transition.

Transitions, such as moving from the homepage to the login page or logging in, mean clicking links on the page or filling out and submitting a form. After each transition we want to make sure the user landed on the expected page, so we assert that the transition was correct. Separating transitions from asserts makes for cleaner code.

In the state transition testing technique, we define the states the application can be in and the transitions that can occur between states. 

The “Homepage”, “Login” and “Registration” pages are considered different states within the application. Clicking on login, filling out and submitting a login form, or filling out a registration form are all examples of transitions.

From the “Login” page, three transitions can occur: we can enter credentials and log in, click on the reset password link, or click on the create account link. The decision to transition from one state to another is taken by the test execution framework.

Using the standard approach, to test the change password and logout functionality we would have to call login once for each test. With state transition testing we define the states and transitions, and the test execution framework does the job of executing all the steps required before changing the password. This way our test code stays DRY.

Moving forward, we will go into more depth on how to use the state transition testing technique to test an authentication module, with the following goals in mind:

  • Ensure the authentication module works consistently during application development.
  • Share a clear overview between testers and stakeholders of what has been tested.
  • Replicate the user workflow as closely as possible.
  • Separate state checks from transitions (asserts from navigation).
  • Reduce the amount of duplicate code in the test repository.

Modeling the authentication flow

A state transition diagram represents the model of the application: a directed graph where nodes represent the states of the application and edges represent the transitions.

Depending on the application under test, a state of the application could take many forms. Here are some examples:

  • if you are testing a website, the state could be the page the user is on (homepage, login page, change password page)
  • if you are testing an ecommerce platform, the state could be the page the user is on (homepage, product or cart page)
  • if you are testing a transaction API, the state could be the transaction status (pending, on-hold, completed)
  • if you are testing an auctioning system, the state could be a combination of the auction status and bidder status (auction_started, auction_ended, auction_bidding, bidder_bids_submitted, bidder_withdrawn, bidder_waived) 

For this example we’ve built a simple Django Auth application with the following authentication functionalities: 

  • Create account
  • Login – Logout
  • Reset password
  • Change password

Edges and vertices

We’ve modeled each page as a state (vertex) and each navigation from one page to another as a transition (edge). This way the user behaviour is simulated as closely as possible. Clicking on the login link from the homepage is represented by an edge called go_to_login, which unidirectionally connects the home vertex to the login_form vertex. The home and login_form vertices correspond to the home and login pages of our application.
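
As a rough sketch, the home and login_form vertices and the go_to_login edge might look like this in the model JSON (the IDs and exact layout here are illustrative; see the example repository for the real model):

{
  "name": "Authentication Model",
  "models": [
    {
      "name": "NavigationModel",
      "generator": "random(edge_coverage(100))",
      "startElementId": "v_home",
      "vertices": [
        { "id": "v_home", "name": "home" },
        { "id": "v_login_form", "name": "login_form" }
      ],
      "edges": [
        {
          "id": "e_go_to_login",
          "name": "go_to_login",
          "sourceVertexId": "v_home",
          "targetVertexId": "v_login_form"
        }
      ]
    }
  ]
}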

The algorithm implemented in GraphWalker walks the graph step by step, choosing a random outgoing edge every time it is in a node with more than one. This way we cover multiple user workflows. In the vertices we check that we have navigated to the desired page, and in the edges we fill in and submit forms or click links to navigate to a different page (state). This way we separate state checks from transitions.

Actions and guards

On the homepage the user can be logged in or logged out. This is why we update a `logged_in` variable using actions. The variable is updated when the user logs in or logs out (the login and logout edges). If the user is already logged in we can transition to change_password_form; if not, we can transition to login_form.

From the login_form we can log in and transition back home only if we have created an account, as the login edge is protected by a guard that requires an account to be created first.
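
In GraphWalker models, actions are small JavaScript-like statements attached to the model or to edges, and guards are boolean expressions on edges. A sketch of how this might look in the model JSON (edge names follow our model, the IDs are illustrative):

{
  "actions": ["logged_in = false; account_created = false;"],
  "edges": [
    {
      "name": "create_account",
      "sourceVertexId": "v_registration_form",
      "targetVertexId": "v_home",
      "actions": ["account_created = true;"]
    },
    {
      "name": "login",
      "sourceVertexId": "v_login_form",
      "targetVertexId": "v_home",
      "guard": "account_created == true",
      "actions": ["logged_in = true;"]
    }
  ]
}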

Modeling your application as a state transition diagram opens up the possibility to collaborate with stakeholders of different expertise. Thus, we’ve decided to model the application using the AltWalker Model Editor.

Writing tests

Once the model is created and ready to be used, AltWalker can generate our test structure with a simple command:

altwalker init -l python -m path/to/model-name.json test-project

This generates the project structure, containing a folder for the models and a tests folder. The tests folder contains a test.py file with a method stub for each edge and vertex previously added in the model JSON.
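
The generated test.py is a skeleton of empty methods, roughly like this (the class and method names come from the model; ours are illustrative):

class NavigationModel:

    def home(self):
        pass

    def go_to_login(self):
        pass

    def login_form(self):
        pass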

Afterwards, we also created a pages folder with a file for each page we want to cover with our tests. This is the page object model (POM) structure, which helps minimize code duplication.

After creating all the files needed, the next step was to add the locators for the elements we decided to use in the tests, either to interact with them or for assertions. We highly recommend asking the team for clear locators for each element you plan to use. Do not settle for searching with general XPaths; ask for unique IDs or classes. This makes life much easier both when writing and when maintaining the tests.

To diversify the tests we also created methods to generate random usernames, email addresses and passwords. This is done by picking random characters and digits in a specific format for each field.
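
A minimal sketch of such generators using Python’s standard library (the formats here are assumptions, not the exact ones from the example repository):

import random
import string

def random_username(length=8):
    # Lowercase letters only, so the username is always valid
    return "".join(random.choices(string.ascii_lowercase, k=length))

def random_email():
    # A throwaway address built from a random username
    return "{}@example.com".format(random_username())

def random_password(length=12):
    # Mix letters and digits to satisfy typical password rules
    chars = string.ascii_letters + string.digits
    return "".join(random.choices(chars, k=length))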

Having the locators set, we created the helper methods required in the tests, such as functions to perform clicks or to fill in text.
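
A sketch of a page object that combines locators with these helpers, assuming Selenium is used and with illustrative locator values:

from selenium.webdriver.common.by import By

class LoginPage:
    # Locators (illustrative; prefer the unique IDs agreed with the team)
    USERNAME_INPUT = (By.ID, "id_username")
    PASSWORD_INPUT = (By.ID, "id_password")
    SUBMIT_BUTTON = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def fill_in(self, locator, text):
        # Clear the field, then type the given text
        element = self.driver.find_element(*locator)
        element.clear()
        element.send_keys(text)

    def click(self, locator):
        self.driver.find_element(*locator).click()

    def login(self, username, password):
        self.fill_in(self.USERNAME_INPUT, username)
        self.fill_in(self.PASSWORD_INPUT, password)
        self.click(self.SUBMIT_BUTTON)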

Now that we have the model, the POM with the locators, and the helper methods ready, we could start implementing the edge and vertex methods: the actual tests.

As noted above, our graph consists of nodes and transitions between them, and each of these has a method generated in our tests file. In order to have a clear delimitation between states and transitions, we recommend implementing transitions in the edge methods and asserts in the node methods. Asserts in the nodes should check whether the transition indeed got us to the intended state. This is what we did in the tests file of this authentication model.
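
Putting it together, the implemented methods might look like the sketch below: AltWalker’s setUpRun/tearDownRun fixtures manage the browser, the edge performs the navigation, and the vertices only assert (the locators and URL are assumptions):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = None

def setUpRun():
    # AltWalker fixture: runs once before the walk over the model starts
    global driver
    driver = webdriver.Firefox()  # Geckodriver must be on the PATH

def tearDownRun():
    # AltWalker fixture: runs once after the walk finishes
    driver.quit()

class NavigationModel:

    def home(self):
        # Vertex: only assert that we landed on the expected page
        assert driver.current_url.rstrip("/") == "http://localhost:8000"

    def go_to_login(self):
        # Edge: only perform the navigation
        driver.find_element(By.LINK_TEXT, "Login").click()

    def login_form(self):
        # Vertex: the login form should now be visible
        assert driver.find_element(By.TAG_NAME, "form").is_displayed()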

You can verify your code by running the tests at any time, as you will see in the next section.

Executing tests

Demo app

The tests we’ve written run against a Django Auth web application. Before executing the tests, make sure the demo app is running on localhost:8000. To start the Django Auth web application locally you can either run it in Docker or run it from source.

  • Run in Docker

To run it in Docker you need Docker installed. Then start the demo with the following command:

docker run --rm -it -p 8000:8000 altwalker/demos:django-auth

  • Run from source

To run it from source, follow the installation guide at https://gitlab.com/altom/altwalker/altwalker-demos/django-auth

To verify your Django Auth demo is up and running visit http://localhost:8000/.

Tests

To run the tests, clone the repo from https://gitlab.com/altom/altwalker/altwalker-examples and follow the installation steps for the python-auth example.

Before running the tests, make sure the following dependencies are installed:

  • AltWalker (see its documentation for installation instructions)
  • Geckodriver
  • The Python dependencies, installed with “pip install -r requirements.txt”

AltWalker provides several helpful commands:

  • check – checks and analyzes model(s) for issues
  • init – initializes a new project
  • generate – generates test code templates based on given model(s)
  • verify – verifies the test code against the model(s) 
  • offline, online, walk – these commands run different kinds of tests, depending on the desired outcome

The usual order of AltWalker commands is to check the model first and then initialize the project. Afterwards, generating the test code template is mandatory before running any type of test.

There are multiple ways to run the tests, depending on what you want to accomplish with your run. These are the commands we use most often in the terminal:

altwalker online -m models/models.json "random(edge_coverage(100))" tests

Runs until 100% coverage of the model’s transitions is reached.

altwalker online -m models/models.json "random(time_duration(300))" tests

Runs for 5 minutes and passes if no error is encountered in that time.

altwalker online -m models/models.json "random(never)" tests

Runs until something happens: either an error is encountered or we manually stop the run.

altwalker online -m models/models.json "random(reached_vertex(account_created))" tests

Runs until the account_created state is reached.


The same stop conditions can be combined with the `quick_random` generator instead of `random`. Its purpose is to reach a specific stop condition faster: `quick_random` chooses an unvisited edge from the entire model, selects the shortest path to that edge, and repeats until the stop condition is met.
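
For example, the following should reach full edge coverage faster than the plain random generator (same syntax as above, only the generator name changes):

altwalker online -m models/models.json "quick_random(edge_coverage(100))" tests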

If you’re interested in exploring more ways to generate the path for your tests, take a look at the path generation section in the AltWalker documentation.

Running in CI

To make sure the authentication module works consistently during application development, we schedule the tests to run automatically each time we modify the application. We created a staging environment where the tests run, on a clean database, every time we deploy a new version of the application. On bigger projects we work with feature branches, and every time we develop on a new feature branch we run the tests.

For the example project we have set up the AltWalker tests to run every time a change is made on any branch. The django-auth demo is started as a service from its Docker image, and the tests run in the AltWalker Docker image. Check the python-auth-lint, python-auth-check-models and python-auth-tests jobs for an example of how we set up GitLab CI for our project.
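
As a rough sketch (the image name, service alias and script details are assumptions; see the example repository’s .gitlab-ci.yml for the actual configuration), such a job could look like:

python-auth-tests:
  image: altwalker/altwalker        # assumed image name; check Docker Hub
  services:
    - name: altwalker/demos:django-auth
      alias: app                    # the tests would then target http://app:8000
  script:
    - cd python-auth
    - pip install -r requirements.txt
    - altwalker check -m models/models.json "random(edge_coverage(100))"
    - altwalker online -m models/models.json "random(edge_coverage(100))" tests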

Contributors

Dorin Oltean
Octavian Vladut
Rares Fetean
Timea Pusok
Melinda Nagy


2 responses to “State Transition Testing – Automated Tests for Authentication Flows”

  1. Hi,
    In my experience, in the CI phase I like the tests to be predictable. This can be a challenge with GraphWalker, which uses a random generator, meaning the tests will take a different path every time they execute.
    This can be fixed by seeding the tests: https://github.com/GraphWalker/graphwalker-project/wiki/How-to-seed-your-test
    My question is whether you have tried that?

    One could choose offline mode. Then the same execution path will be used every time. But there is an overhead in loading, parsing and executing an offline generated path into your test tooling suite.

    The reasons for using the same path in CI tests, in my opinion, are:
    * A quick feedback loop from the tests is desired. The sum of all tests needs to fit into a specific time window. Random paths will vary in time, and in some cases they might just take too long.
    * Troubleshooting. When tests fail, and they will ;-), using the same paths makes it easier to troubleshoot. Typically, you might want to compare the latest passing test runs with the failing one. Using different paths makes that harder.

    Thanks for sharing!
    Kristian

    1. Hey Kristian,

      Thanks for the suggestions!
      We have not used the seed option yet, but from what I see it is worth taking a look. I’ll investigate it.
      The offline mode was also suggested by us on a project where we were asked for more predictability in the tests covered.

      Though if we use these approaches, we won’t benefit as much from randomness, and randomness has the power to find new bugs. Therefore the predictability and randomness of the tests have to be balanced based on the needs of the project and the testing goal.
      I’d say a predictable path might be used for absolutely critical flows (either with the seed option or with the offline command), while for the purpose of finding bugs we should keep the randomness for model coverage.

      Moreover, we implemented the tests so that even when running them with the random generator we have a pretty good understanding of what’s been covered. This was done by investing time in designing a good logging system for the edges and vertices. Would you also find this approach useful?

      Thanks,
      Tavi
