Software testing approach

We test in context

Software systems are complex and diverse, and no single testing formula can be applied to them all. Each project has its own context, and testing delivers more value when we understand that context and use it as an essential reference. We do context-driven testing.

  • Do you want a quick test with focus on time to market?
  • Do you want to focus on fail-safe aspects?
  • Is user-friendliness important for you or are you interested in correct functionality only?
  • Should your software be highly secure or is flexibility what matters?

We want to understand your wishes for the application, and the little monster-bugs you fear might live under its bed.

Years of following the IEEE 829 standard, working according to best practices and documenting everything extensively only increased our awareness of the importance of context, and led to a significant shift in perspective. We have moved our focus to the actual testing: the actual needs of a project, the people working on it, their skills, and ways to improve them. Alongside the scripted testing approach, we started using exploratory testing.

Each project is unique, and so is our approach – a customized one.

We write code to test more efficiently

Whenever suitable, we use our coding skills to enhance our testing. We develop testing frameworks or improve existing ones, and we create automated tests in Java, Python, C#, JavaScript and other languages.
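For instance, a small automated check in Python might look like the sketch below; the validation rule, function name and test cases are invented for illustration, not taken from any client project:

```python
# Minimal sketch of an automated check (pytest style), assuming a
# hypothetical username rule: 3-16 characters, letters, digits, underscores.
import re

def is_valid_username(name: str) -> bool:
    """Illustrative validation rule for the example."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]{3,16}", name))

def test_accepts_typical_names():
    assert is_valid_username("alice_01")

def test_rejects_edge_cases():
    # Boundary and negative cases of the kind exploratory sessions surface
    assert not is_valid_username("ab")        # too short
    assert not is_valid_username("a" * 17)    # too long
    assert not is_valid_username("bad name")  # whitespace not allowed
```

Checks like these are cheap to run on every build, which is what makes them useful alongside exploratory sessions.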

We help our clients with their DevOps activities and use Continuous Integration testing and reporting for fast feedback loops.

We develop libraries and tools whenever we see an opportunity to reuse a solution and solve problems across multiple projects. Some are open source; others are provided under commercial licenses. They include hardware solutions that physically control various devices in order to test them, and SaaS applications that help manage testing.

We explore

We are enthusiasts of the exploratory approach to software testing. We believe in fluid, versatile, intellectually rich testing. We learn as we test. We design as we execute. We always ask questions and look for the most suitable solutions.

Why exploratory testing? Learning about the weaknesses of an application and identifying test scenarios as you explore it can be quite a challenge, but the focus is on relevance: finding the significant, must-fix bugs – from the stakeholders’ and clients’ standpoint – within the time, budget and resources allocated. And that is our mission.

How does it work? In a nutshell:

  • First, we set our goals.
  • Second, we establish a structured method for testing.
  • Third, we use an efficient system for logging bugs, while carefully documenting our work.
  • Fourth, we report using pertinent metrics to provide effective decision-making support.

So exploratory testing is not ad-hoc, improvised, unstructured bug hunting. On the contrary: the emphasis is on bringing value. Exploring the application leads to a variety of tests being performed, new tests being generated all the time, and new tests uncovering new bugs.

We manage Exploratory Testing

We document test ideas and keep track of what we do, again with a focus on relevance. We do not document everything; we document what counts, and to the extent it counts for the client (from checklists to detailed logs of testing ideas and session hours, to screenshots and videos). We document because we want to be accountable for our work, to build trust, and to create a collection of ideas, knowledge and inspiration.

To account for our work as efficiently as possible, we use methods such as Session-Based Test Management. We also use instruments such as checklists and mind mapping when planning and managing tests. The metrics we select are meant to help us explore better and to elicit significant questions, not just provide numbers.
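As a rough illustration of session-based bookkeeping, here is a minimal Python sketch; the field names and the bugs-per-hour metric are assumptions made for the example, not a description of any particular SBTM tool:

```python
# Minimal sketch of a session-based test-management record.
# Field names and the single metric are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestSession:
    charter: str                 # the mission guiding this time-boxed session
    minutes: int                 # planned session length
    bugs: List[str] = field(default_factory=list)
    notes: List[str] = field(default_factory=list)

    def bug_rate(self) -> float:
        """Bugs found per hour: one example of a metric that prompts
        questions ("why so few here?") rather than just producing numbers."""
        return len(self.bugs) / (self.minutes / 60)

session = TestSession(
    charter="Explore the checkout flow with invalid card data",
    minutes=90,
)
session.bugs.append("Expired card accepted when year field is blank")
session.notes.append("Error messages inconsistent between card types")
```

A record like this keeps a session accountable (charter, time, findings) without forcing testers to script everything in advance.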

Strongly believing that testing is a sapient process, we rely on our skilled testers to perform the tests. However, we do use automation to support manual testing whenever the process would benefit from it, automating whatever is repetitive and mechanizable.

We run a pilot

One of the specific approaches we propose is a testing pilot project (5-10 person-days), which allows us to familiarize ourselves with the application to be tested, as well as with the project team. This first assessment of the application is also a valuable point of reference for future estimates on the project.

Extremely useful both for identifying critical issues early and for setting up a workflow, the pilot is followed by several small iterations, depending on the needs of the project.

We work with passion

We love what we do. We believe in our work. We do our best and always strive for excellence.
