Test smart: How to validate a product on a business-facing level?

It has been a long time since my story How to Select Testing Techniques for an Agile Team was published (if you ask me what happened in between, the short answer is: a war in Ukraine and a career pivot). That story focused on the Agile Testing Quadrants, a model presented by Lisa Crispin and Janet Gregory. In this one, I'll take a closer look at business-facing tests: the ones that guide development and critique the product. These testing techniques are crucial for validating the product from the user's perspective; they are readable by business stakeholders and can be performed by QA people, product owners, and designers. From my experience, the types of testing described below are a good fit for almost any agile development team and its SDLC, from the prototype phase to product polishing.

Testing prototypes

As soon as the prototypes are ready, or a preliminary version of your digital product is coded, it is wise to check it by answering the question: will the product meet the usability criteria so that users can achieve their goals in a specific context? If not, some rework is in order.

Here, it is vital to test the main flows of your software against defined usability criteria (operability, learnability, user error protection, user interface aesthetics, accessibility, etc.). Feel free to develop your own usability heuristics for your product; this way, you will create a checklist that matches your unique context.
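For instance, such a home-grown checklist can be as simple as a handful of questions per heuristic. The sketch below (in TypeScript, purely for illustration; the checkout flow, the questions, and the HeuristicCheck shape are all invented) shows one way to keep it as plain data that the reviewer fills in during a prototype walkthrough.

```typescript
// A hypothetical usability checklist for a checkout prototype, kept as plain data
// so the reviewer can record a verdict and a note per heuristic during the review.
type HeuristicCheck = {
  heuristic: string;   // which usability criterion is being checked
  question: string;    // what the reviewer asks while walking through the prototype
  passed?: boolean;    // filled in during the review session
  note?: string;       // optional feedback for the product designer
};

const checkoutPrototypeChecklist: HeuristicCheck[] = [
  { heuristic: 'Learnability', question: 'Can a first-time user find the checkout button without help?' },
  { heuristic: 'User error protection', question: 'Is a mistyped card number caught before submission?' },
  { heuristic: 'Operability', question: 'Can the whole flow be completed with a keyboard only?' },
  { heuristic: 'Accessibility', question: 'Do all form fields have labels that a screen reader announces?' },
];

// Failed items become concrete feedback for the product designer.
export { checkoutPrototypeChecklist };
```

A spreadsheet or a wiki page works just as well; what matters is that the questions reflect your product's context.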

[Image: a tester puts a carrot into a fan; the device burns; the tester notes, "Just as I expected."]
Prototype testing will show whether a team is building the right thing.

Any team member skilled in testing may evaluate the prototypes or a "raw" product. As a QA, I love reviewing prototypes and giving feedback to product designers. Another question is whether the designers are open to this feedback and whether there is enough time to update the prototypes before they go into development.

Story/feature acceptance testing

Once a team completes feature development, it is time to perform story (feature) acceptance testing. These tests check whether the developed functionality matches the user story's acceptance criteria, in other words, the functional requirements. Usually, team members perform story acceptance tests according to a prepared checklist of test cases (scenarios) attached to the user story.
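To make this concrete, here is a minimal sketch of how the acceptance criteria of a hypothetical "password reset" story could be captured as an executable checklist. It is written with Vitest only as an example; the story, the criteria, and the requestPasswordReset/resetPassword helpers are invented, standing in for whatever client your application exposes.

```typescript
// Acceptance criteria for a hypothetical user story, expressed as a Vitest checklist.
// Story: "As a user, I can reset my password via an emailed link."
import { describe, test, expect } from 'vitest';
// Hypothetical helpers wrapping the application's API; not a real library.
import { requestPasswordReset, resetPassword } from './password-reset-client';

describe('Story: user can reset a forgotten password', () => {
  test('AC1: a reset link is emailed to a registered address', async () => {
    const result = await requestPasswordReset('registered.user@example.com');
    expect(result.emailSent).toBe(true);
  });

  test('AC2: an unregistered address gets no email but the same neutral message', async () => {
    const result = await requestPasswordReset('stranger@example.com');
    expect(result.emailSent).toBe(false);
    expect(result.message).toBe('If the address exists, a reset link has been sent.');
  });

  test('AC3: an expired reset link is rejected', async () => {
    const result = await resetPassword({ token: 'expired-token', newPassword: 'N3w-Passw0rd!' });
    expect(result.ok).toBe(false);
    expect(result.error).toBe('link_expired');
  });
});
```

Whether such a checklist lives in code or in a test-management tool, the point is the same: each acceptance criterion gets an explicit pass/fail verdict before the story is considered done.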

If story acceptance tests are applied, discovered bugs should be reported inside the user story. In this case, the tester rechecks the user story once the bugs are fixed. Consequently, the team completes the user story only when the detected bugs are resolved. This way, fewer bugs migrate to production. Isn't it a dream?

However, there is another side to the coin. Story acceptance tests should be applied by testers early, at the feature development stage, so that developers can resolve the bugs before they build the new version. In practice, this requires a lot of "ping-pong" communication between developers and testers. Story acceptance tests have their price: the team needs extra time to test the user story thoroughly and fix the bugs.

At the same time, story acceptance tests result in fewer product fixes after the team deploys the version to production. Overall, this means better product quality (and happier customers).

End-to-end testing

To determine whether the product is ready to be shipped to end-users, use end-to-end tests. End-to-end tests are scripted tests that verify the product's flows work as designed, from start to finish. They typically represent real-world use cases, walking through the steps as an end-user would. These tests identify system dependencies and ensure the correct information is passed between the database and the application's modules and features. They confirm that the product is functional enough to be delivered to end-users or to move on to user acceptance testing.

In other words, end-to-end tests reveal possible issues at the system level: e.g. whether the frontend of your application communicates correctly with the backend. These tests also catch regressions, i.e. issues that appear after a change is introduced to the product. Consequently, the team is supposed to run end-to-end tests before production releases.

End-to-end tests can be automated or performed manually. In agile teams, the de facto standard is to automate end-to-end tests heavily in order to achieve shorter release cycles. Of course, not all test cases can be automated, due to limitations such as external dependencies, and development teams should consider these limitations when selecting the cases for automation.

Scripted end-to-end tests are effective for checking that the product's main features still work after changes are introduced. These tests look like a checklist the tester goes through, either manually or automatically (the results of automated end-to-end runs should still be monitored by humans :)), to confirm whether any bug ("regression") has occurred.
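As an illustration, a single automated end-to-end scenario might look like the sketch below, written with Playwright purely as an example of a browser automation tool; the URL, labels, and flow are hypothetical.

```typescript
// One end-to-end scenario sketched with Playwright: a user signs in and sees
// their dashboard. The URL, field labels, and headings are made up for illustration.
import { test, expect } from '@playwright/test';

test('signed-in user reaches the dashboard', async ({ page }) => {
  // Walk through the flow exactly as an end-user would.
  await page.goto('https://app.example.com/login');
  await page.getByLabel('Email').fill('demo.user@example.com');
  await page.getByLabel('Password').fill('correct-horse-battery-staple');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // The frontend and backend worked together if the dashboard renders real data.
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
  await expect(page.getByTestId('recent-activity')).not.toBeEmpty();
});
```

Run in a CI pipeline before a release, a suite of such scenarios becomes exactly the regression checklist described above, with humans reviewing the results of each run.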

[Image: a robot emoji on a white background]
I like the idea of balanced testing: automation cannot replace human creativity, but recurring end-to-end tests in agile teams should be automated whenever possible.

At the same time, the results of end-to-end tests alone will not let you raise the ready-to-ship flag. Smart teams combine this technique with others, e.g. exploratory testing.

Exploratory testing

As the product goes live, it is vital to apply exploratory testing. Unlike the previous methods, exploratory tests are executed without a written script: the tester invents test scenarios and runs them on the fly.

However, these tests are guided by a defined goal (e.g. to test a specific feature) and performed in a session-based (time-boxed) manner. The tester may also use a test charter, a document that lists the key points to go through.

In general, exploratory tests allow us to investigate system behaviours not captured by scripted tests; they make it easy to find new edge cases and UX discrepancies. As with story acceptance tests, the team needs to allocate additional resources for this type of testing.

Any team member can do exploratory testing. Lots of dedicated QA people enjoy this technique and use it on a day-to-day basis. It requires a creative approach: imagine yourself in the user's shoes, explore the application as if you were seeing it for the first time, and challenge it with unexpected actions.

I remember the astonished faces of developers asking: "How did you manage to run into that bug?" Natural curiosity about the product always leads to surprising findings.

User testing

Among the other methods, mature development teams tend to apply user testing. These tests allow us to check how easy the product is for end-users to learn and use. Typically, user tests are applied to refine the design or validate the readiness of the developed software by involving end-users in testing sessions.

Teams may perform user testing with prototypes or after coding is complete (as user acceptance tests). The purpose of user testing is to check that an application meets the needs and requests of its users, ensuring a high level of functionality, usability, and alignment with real-world scenarios.

Usually, teams collect end-user feedback through moderated (guided) testing sessions or unmoderated ones (arranged via dedicated online user-testing services).

Overall, user testing is the cherry on top of business-facing tests. In an agile world, it is necessary to listen to your (potential) users, as they are the ones who will judge the product's quality.

Disclaimer: the list above is not a golden rule, so don't just copy-paste it. Nor is it a recipe for successful testing on the business-facing layer. You should not limit yourself to the described techniques when selecting the ones that fit your team's needs. Feel free to use a blank Agile Testing Quadrants template when discussing testing methods with your teammates.

Resources:

  1. Janet Gregory and Lisa Crispin, Agile Testing Condensed (2019): https://leanpub.com/agiletesting-condensed or https://www.amazon.de/gp/product/199922051X
  2. Janet Gregory and Lisa Crispin, More Agile Testing (2015): https://www.amazon.de/gp/product/0321967054
  3. Mariia Hutsuk and Sivamoorthy Bose, Importance of Exploratory Testing: https://medium.com/quality-matters/importance-of-exploratory-testing-3f02e34dc0c3

