Overview

A common challenge facing many maturing QA organizations is that, as their application continues to grow, their test suite must grow with it. Over time, as they continue to add test coverage, the suite can become large, bloated, and difficult to maintain, with coverage gaps they may not even realize they have.

They may find that tests are duplicative or overlapping, that they've lost insight into the coverage they have, or that deferred test maintenance has led to noisy results.

Whatever the cause or effect, we know that to maintain a clean and manageable test suite, it's important to prune frequently, cut unneeded bloat, and keep on top of new feature coverage. In this article, we'll dive into how to conduct a Test Suite Audit and get your coverage back under control.

First, take stock of what you have:

Ask us for a test export! We'll share an export of pass/fail rates, step counts, and average run durations by test.

With this data, you can quickly spot patterns and pinpoint problems; even if flaky tests don't feel like an issue today, you may discover quick optimization opportunities or problem areas you didn't realize you had.
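To make this concrete, here's a minimal sketch of slicing the export to find the tests contributing the most noise. It assumes the export is a CSV with columns named test_name, runs, and failures; your actual column names may differ.

```python
# A minimal sketch of surfacing the noisiest tests from an export.
# Column names (test_name, runs, failures) are assumptions -- adjust
# them to match the export you receive.
import csv

with open("test_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for row in rows:
    runs = int(row["runs"])
    row["failure_rate"] = int(row["failures"]) / runs if runs else 0.0

# A small handful of tests usually accounts for most of the noise.
noisiest = sorted(rows, key=lambda r: r["failure_rate"], reverse=True)[:10]
for row in noisiest:
    print(f"{row['test_name']}: {row['failure_rate']:.0%} failure rate over {row['runs']} runs")
```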

Next, let's dive into some common problems and ways to address them:

1. Flaky tests / noisy test results:

Flaky tests pass and fail intermittently, without any changes to the product, and create noise. Noisy tests don't provide value; not only are the results uninformative, but the time you spend investigating and manually checking them detracts from higher-value activities. Even if your noisy tests cover high-priority areas, you'll want to think critically about whether the problem is fixable.

Get a test export from us, and look specifically at pass and fail rates by test; you'll likely find that a small handful of tests contribute to the majority of noise. Investigate the recent history of these tests thoroughly:

Are they failing due to:

  • Environment issues on your side? Are the issues fixable? Note: consider setting up a concurrent tester limit if your environment can't handle high volumes of tester traffic.
  • Insufficient or unreliable test data in your internal system? Consider setting up a DB seeding/resetting strategy, or webhooks to reset your DB to a pre-defined state before each test run (a rough sketch of such a reset hook follows this list).
  • Tester concurrency issues (i.e., testers stepping on one another's toes)? Provision unique login credentials using tabular variables to allow testers to test in isolation.
  • Testers misunderstanding or misinterpreting test instructions? Consider using our Test Writing Service. Our trained test authors will rewrite your instructions and hand back passing tests. If you don't have access, we'd be happy to set up a free trial for you.
  • Or are the flaky tests simply better suited for in-house testing? Some tests may require a level of domain knowledge that calls for internal testing. Consider setting up on-premise testing through Rainforest to take advantage of our VM infrastructure and results aggregation.
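As an illustration of the DB resetting idea above, here's a rough sketch of an endpoint your test environment could expose and that your tooling could call before each run. The route, seed file, token header, and environment variables are all illustrative assumptions, not Rainforest APIs; it presumes a Flask app and a Postgres test database.

```python
# A rough sketch of a "reset to seed state" hook for a test environment.
# The route name, seed file, and shared-secret header are illustrative.
import os
import subprocess
from flask import Flask, request, abort

app = Flask(__name__)

@app.route("/hooks/reset-db", methods=["POST"])
def reset_db():
    # Require a shared secret so only your own tooling can trigger a reset.
    if request.headers.get("X-Reset-Token") != os.environ["RESET_TOKEN"]:
        abort(403)
    # Reload a known-good seed dump into the test database.
    subprocess.run(
        ["psql", os.environ["TEST_DATABASE_URL"], "-f", "seed.sql"],
        check=True,
    )
    return {"status": "reset"}, 200
```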

Assess the problem areas in your test suite, and invest time in hardening your flaky tests. If you require help from another team, quantify your argument with data: "X% of our tests that cover P1 areas of our application fail intermittently due to issues with our environment, and it takes us X hours to investigate, manually rerun, and recover from these failures weekly" can be a compelling argument to secure resources.

Strategy moving forward

The "no broken windows" approach works well for many aspects of software development. Applied to flaky tests, this means that you should fix a flaky test as soon as it appears. If not, you risk acquiring more flaky tests, as the team will care less about the overall test suite quality.

To ensure a test suite is kept clean, stable, and up-to-date, we highly recommend enforcing a policy of categorizing 100% of test failures - this ensures all failures are investigated - and an internal SLA for how quickly the team is expected to recover from a failure - whether "recovery" means reporting a bug, updating a test, or resetting login credentials.

2. My run times are too long!

The time a run group takes to complete is measured from the time the tests are kicked off until the time the last test finishes running. For this reason, a run group that appears to be slow is often caused by a single test or a handful of tests that need some extra attention.

Ask for a test export from us, and examine the data to pinpoint problem tests. Three measures to pay attention to:

  • average run time: a quick "smell test" to spot problem tests.
  • step count: longer tests take longer to execute, but did you know that tests over 25 steps have a 40% higher chance of failure? Work to keep tests short and modular, and make sure that test scope is aligned to the feedback you care about. If you suspect your tests might be too broad, check out Ideal Scope of a Rainforest test for more information.
  • average duration per step: this measure tells you how efficiently testers are able to get through each test step. Look critically at tests where each step takes ~2 minutes to complete, and investigate recent test run videos to understand why. Are the test instructions too vague or difficult to understand? Investing time to fix up these tests will improve both your run times and the reliability of your test suite (a quick way to compute this measure from an export is sketched below).
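Here's a small sketch of flagging slow-stepping tests, again assuming export columns named test_name, avg_duration_seconds, and step_count; the two-minute threshold is just a starting point.

```python
# A small sketch of flagging tests whose individual steps are slow.
# Column names and the 120-second threshold are illustrative assumptions.
import csv

SLOW_STEP_SECONDS = 120  # roughly 2 minutes per step is worth a closer look

with open("test_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        steps = int(row["step_count"])
        if steps == 0:
            continue
        per_step = float(row["avg_duration_seconds"]) / steps
        if per_step >= SLOW_STEP_SECONDS:
            print(f"{row['test_name']}: {per_step:.0f}s per step across {steps} steps")
```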

To read more on optimizing run times, check out 15 tips to optimize your tests to run faster.

3. I have too many tests that I know are low value:

It can be easy to fall into a pattern of "testing all of the things" and writing tests for every scenario pertaining to a new feature.

First, agree on a coverage strategy. What is your approach to testing? What volume of testing do you have the resourcing to handle?

While every test requires time and effort to write and maintain, they do not all provide equal value. For that reason, we generally recommend against writing tests to cover edge cases. There is nearly infinite depth you can cover with edge-case tests, yet the time they take to write and maintain detracts from other, higher-value activities. And because an edge case by definition covers an area or user scenario that is rarely exercised, the feedback those tests provide about your application is lower value.

Instead, consider using Rainforest's Exploratory Testing to test edge cases for you. A monthly bug-bashing run conducted against your entire application, or against the areas you feel are most vulnerable, will help you uncover bugs lurking in edge cases. Check out Getting Started with QA Strategy to learn more about our recommended approach to coverage strategy.

Next, think about how to rightsize your test suite and eliminate unneeded tests. A good place to start is by looking at tests you haven't run in 6+ months. Depending on how quickly you develop product, it's possible that these tests are so out of date that they're no longer relevant.

Then, determine which tests are low-priority; if you use the "Priority" attribute in Rainforest, this will be easy! If the pass/fail rates suggest that low-priority tests are noisy or difficult, consider removing them altogether.
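A short sketch of shortlisting pruning candidates from an export is below. The last_run_at, priority, and failure_rate columns, the "P3" label, and the thresholds are all assumptions to adapt to your own data.

```python
# A sketch of shortlisting pruning candidates: tests not run in 6+ months,
# plus low-priority tests with noisy results. Column names, the "P3" label,
# and the thresholds are illustrative assumptions.
import csv
from datetime import datetime, timedelta

STALE_CUTOFF = datetime.now() - timedelta(days=180)

with open("test_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        stale = datetime.fromisoformat(row["last_run_at"]) < STALE_CUTOFF
        noisy_low_priority = row["priority"] == "P3" and float(row["failure_rate"]) > 0.2
        if stale or noisy_low_priority:
            reason = "stale" if stale else "noisy low-priority"
            print(f"{row['test_name']}: candidate for review ({reason})")
```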

4. I'm not sure what coverage I have:

Over time and as a test suite grows, it can be common to lose visibility into the test coverage you have. This can become dangerous - high-priority areas left uncovered leave you vulnerable to defects slipping through. Conversely, low-priority areas that are over-covered with overlapping test cases can provide a false sense of security.

If you've lost visibility into the coverage you have, don't worry! It's better to catch this problem now, and put in the work to resolve it. Although there is no silver bullet or "easy way" to address this problem, the sooner you address it the sooner you can have an accurate and realistic view of your test coverage.

For this exercise, we recommend taking at least a full day, depending on the size and complexity of your product. If you're part of a team, make it a team activity: every person or small group takes a segment of the application.

1. Map your application: Create a functional map of your application -- from a user's perspective. Even folks who have been working on the same product for years find that taking the time to map out features and flows can be informative and rewarding. When done as a team, this exercise can also help promote shared understanding of your product, and the map can act as a living document.

You can make your map as high-level or as granular as you wish - the important thing is to clearly see your application's layers of features and functionality. If your app is particularly large and complex, it may make sense to make a separate map for each key feature. To help you get started, here's an example functionality map for Slack's web app.


2. Organize your workspace to better assess coverage: Organize your test database, if it's not already. Check out the "Organizing your tests" section of Building Your First Test Suite for more information on how to use Features, tags, and run groups to organize your tests.

Examine the tests you have (perhaps they're organized by feature and/or tags) and compare them to your functionality map. Are there important areas clearly missing coverage? Or are unimportant areas over-covered?

Make sure you document (on your map, or elsewhere) which areas are well-covered and need no additional attention, and which areas have gaps.
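If your export (or a test list pulled from your workspace) labels each test with a feature, a quick tally against your map can highlight the gaps. The map entries and the "feature" column below are illustrative assumptions.

```python
# A sketch of tallying tests per mapped feature to spot coverage gaps.
# The functionality map entries and the "feature" column are assumptions.
import csv
from collections import Counter

functionality_map = ["Login", "Messaging", "Notifications", "Billing", "Admin settings"]

tests_per_feature = Counter()
with open("test_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        tests_per_feature[row["feature"]] += 1

for feature in functionality_map:
    count = tests_per_feature.get(feature, 0)
    status = "no coverage (gap to document)" if count == 0 else f"{count} tests"
    print(f"{feature}: {status}")
```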

3. Address gaps: Ensure you document the areas where you're missing coverage, and come up with a plan to address them. Check out Building Your First Test Suite for our recommended approach to building feature coverage from scratch.

5. Moving forward

Moving forward, think of test suite maintenance like taking care of your car: regular, proactive maintenance is relatively painless, while deferred maintenance can be quite costly.

For that reason, we want to take a preventative approach to avoid issues in the future. Our recommendations are summarized below:

  • Decide on a coverage strategy - what will you test, and how will you test it? Considering resourcing constraints and risk tolerance, to what level of depth and breadth will you cover new features?
  • Define ownership: is one person responsible for writing tests? Multiple people? Where are the divides - by feature, or something else?
  • Smart test writing: how will you employ features like embedded tests and custom variables to make test writing and maintenance easier?
  • Smart organization: ensure you have an organization structure in place that you can continue to build on over time. Agree on what organizational properties you'll use. Put tests into features, so that you can assess coverage against particular features. If a low-value or little-utilized feature has lots of tests in it, you know it's time to prune.
  • Test maintenance: enforce a process of investigating and triaging all results; our most successful customers enforce a 100% failure categorization policy, and an internal SLA for how quickly they'll recover from failures - whether that's by updating a test, fixing data, or writing up a bug report.

Finally, don't stop here! We recommend sitting down regularly, whether that's quarterly, twice a year, or yearly, with your test suite and coverage map to ensure you have the appropriate level of coverage in the places that matter.
