Overview

Rainforest's analytics feature helps guide your QA team toward making the right decisions about your testing efforts, so you can stay on track toward your goals.

The Analytics page within the application contains customizable charts that provide actionable insights into your test suite health, testing performance, and team activities.

In this article, we will cover:

  • Orientation to the Analytics page
  • How to configure charts: adjusting time periods
  • Available charts and data definitions

Orientation to the Analytics Page

To reach the Analytics page, click the chart icon from the left navigation menu.
Once on the page, you'll see a series of configurable charts.

Use the blue "Customize" button at the top right to choose additional charts to be included in your default view.


Configure Charts: Adjusting Time Periods

To adjust the data range displayed in each chart, change the time period using the dropdown options. See the GIF below for a demonstration.

Available Charts and What They Mean

Usage Metrics

1. Total Test Steps Executed

This chart shows usage patterns and fluctuations over time by displaying the number of test steps executed over a given (and configurable) time period.

2. Average Test Run Time (Minutes)

In this chart, a "test" is a single test run against a single browser, and its run time is measured from the moment you initiate the run to the moment that individual test is completed.

We separate Automation-executed test runs from test runs completed by our Tester Community (crowd).

Note: this chart excludes draft runs.
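As a minimal illustration of the definition above (each test's duration runs from run initiation to that test's completion, drafts are excluded, and Automation and Tester Community runs are tracked separately), the computation might look like the sketch below. The record fields and values are hypothetical, not Rainforest's API:

```python
from statistics import mean

# Hypothetical test-run records; the field names are illustrative, not Rainforest's API.
test_runs = [
    {"run_started_at": 0, "test_completed_at": 7,  "executor": "automation", "draft": False},
    {"run_started_at": 0, "test_completed_at": 12, "executor": "crowd",      "draft": False},
    {"run_started_at": 0, "test_completed_at": 9,  "executor": "crowd",      "draft": True},  # draft: excluded
]

def average_run_time_minutes(runs, executor):
    """Average minutes from run initiation to each test's completion;
    draft runs are excluded, per the note above."""
    durations = [
        r["test_completed_at"] - r["run_started_at"]
        for r in runs
        if r["executor"] == executor and not r["draft"]
    ]
    return mean(durations) if durations else None

print(average_run_time_minutes(test_runs, "automation"))  # 7
print(average_run_time_minutes(test_runs, "crowd"))       # 12
```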

3. Usage by Browser / Platform

This chart displays coverage (in terms of tests run) by browser/platform. If the breakdown you see here does not reflect your user traffic, you may want to adjust your testing activities accordingly.

Note: this chart excludes draft runs.

4. Utilization by Tester Community Versus Automation

This chart helps you understand how much of your testing is done by Automation vs. our Tester Community (crowd). A well-balanced testing strategy typically involves a mix of humans and machines, so this chart can help you measure and track your testing activities compared to your goals.

Note: this chart does not include draft runs.

Test Suite Activity Metrics

1. Tests created vs. edited

This chart displays the total number of individual tests created and individual tests edited. The number of tests edited is different from the number of "test edits." For example, if a single test is edited several times over the time period displayed, it counts as a single "test edited."

2. Plain English Tests created vs. edited

Same as the chart above, but this chart displays only your Plain English tests that have been created or edited.

3. Automation tests created vs. edited

Same as the above chart, but this chart displays only tests written in our Rainforest Automation language that have been created or edited.

4. Tests created and test edits by team member

In this chart, we see a breakdown of test suite activity by team member: specifically, the number of tests created and the number of test edits per team member. Note that a "test edit" is an event: if you edit the same test seven times, that counts as seven edits.

Keep in mind that if you are using our Test Writing Service, you will see the names of whichever Test Authors wrote tests for you in this chart.
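To make the distinction between "tests edited" and "test edits" concrete, here is a minimal sketch using made-up edit events (not Rainforest's data model): editing the same test repeatedly counts once toward "tests edited" but once per edit toward "test edits."

```python
from collections import Counter

# Hypothetical edit events as (team_member, test_id) pairs; not Rainforest's data model.
edit_events = [
    ("alice", "test-1"),
    ("alice", "test-1"),   # same test edited a second time
    ("alice", "test-2"),
    ("bob",   "test-1"),
]

# "Tests edited" counts each test once, however many times it was changed.
tests_edited = len({test_id for _, test_id in edit_events})

# "Test edits" counts every edit event, here broken down by team member.
test_edits_by_member = Counter(member for member, _ in edit_events)

print(tests_edited)                # 2 tests edited
print(dict(test_edits_by_member))  # {'alice': 3, 'bob': 1} test edits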

Test Results: Pass and Fail Rates

To understand the metrics that relate to pass and fail rates, it's important to know what we're actually measuring. In these charts, a "test run" is defined as a single test run against a single browser. In the image below, we have run a single test against three browsers, so that counts as three "test runs." If the test fails on one browser but passes on the other two, we have one failed "test run" and two passed "test runs."
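For example, the counting works out as in this minimal sketch (the browser names and result values are illustrative, not Rainforest's API):

```python
# One test executed against three browsers: each (test, browser) pair is a separate "test run".
results = {"chrome": "passed", "firefox": "passed", "safari": "failed"}

total_runs = len(results)                                    # 3 test runs
passed = sum(1 for r in results.values() if r == "passed")   # 2 passed
failed = sum(1 for r in results.values() if r == "failed")   # 1 failed

pass_rate = passed / total_runs
print(f"{passed} passed, {failed} failed, pass rate {pass_rate:.0%}")  # pass rate 67%
```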

1. Test Run Pass Rate

This chart shows a breakdown of test runs by result. Note that (1) draft runs are *not* included, and (2) "no result" simply means that the test was not able to be completed. A common reason a test run displays "no result" is that the run was aborted.

Hovering over any of the bars will show you the total count of test runs by result during each period.

2. Tester Community Test Run Pass Rate

This chart is the same as the above, but only displays test runs that were executed by our Tester Community (in other words, not executed by Automation).

3. Automated Test Run Pass Rate

This chart is the same as the above, but only displays test runs that were executed by our Automation, rather than by our Tester Community.

Failure Engagement

1. % Test Failures Viewed

Critical feedback about your application exists within your test results. For that reason, we recommend ensuring you view each and every test failure. Use this chart to track your team's progress!

2. % Test Failures Categorized

Failure categorization is the best way to ensure that you have eyes on every failure and take action to recover from failures quickly. Over time, you can spot patterns and trends within your test suite. For that reason, we suggest a policy of 100% failure categorization. Use this chart to track your progress over time!

Note: this chart excludes draft runs.

3. Failure Reasons Over Time

Failure categories give you actionable insight into your test suite. Quantify the impact of your team's testing activities by reporting on the number of bugs caught, or use this chart to proactively spot and act on patterns, such as system issues or deferred test maintenance.

4. Failure Reason Breakdown

This chart is another way to visualize the data above.

5. Top Failure Reviewers

This chart shows the team members who have categorized failures within the filtered date range.

Run Results: Pass and Fail Rates

"Runs" are the groups of tests run together. Every line item on the runs page https://app.rainforestqa.com/runs represents a single run. A "run" can contain one single test, three tests, or even a hundred tests.

  • "Passed" run = 100% of tests within the run (the group of tests run together) passed. Example below:
  • "Failed" run = at least one test within the run failed. In the example image below, one test in the run passed and one failed, so the result is that this is a "failed" run.
  • "No result" run = the run was not able to complete for whatever reason. The most common reason is if the run was aborted before it could complete.

1. Run Pass Rate Over Time

This chart displays run results over a filtered time period. For teams who run large groups of tests together (and rarely run "one-off" single tests), this chart can help you understand how many runs passed completely cleanly vs. how many contained at least one failure.

2. Run Result by Environment

Similar to the above chart, looking at run results by environment can help you pinpoint where in your testing cycle issues tend to occur, and/or spot environment instability issues.
