Rainforest Analytics Overview

Learn how to use analytics to optimize your test suites.

Overview

Rainforest’s analytics feature helps your QA team make the right decisions when testing so you can stay on track toward your goals. The Analytics page contains customizable charts that provide actionable insight into your test suite health, testing performance, and team activities.

In this article, we cover:

  • Analytics page orientation
  • How to configure charts to adjust periods
  • Available charts and data definitions

Analytics Page Orientation

To reach the Analytics page, click the chart icon in the navigation bar. A series of configurable charts appears on the page.

[Image: The Analytics page.]

To choose additional charts to include in your default view, click the Customize button in the upper right-hand corner of the page.

[Image: Adding additional charts.]

Configuring Charts to Adjust Periods

To adjust the date ranges displayed in each chart, use the dropdown options.

[Image: Adjusting the periods.]

Following are the types of charts available to you.

Usage Metrics

Total Test Steps Executed

This chart displays the number of test steps executed over a configurable period, revealing usage patterns and fluctuations over time.

[Image: Usage patterns and fluctuations over time.]

Average Test Run Time (Minutes)

In this chart, a test represents a test run in one browser. Run time is measured from the moment you initiate the run to the moment each test completes. We separate Automation-executed test runs from test runs completed by our Tester Community (crowd).

Note: This chart does not include draft runs.
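
For illustration, here is a minimal sketch of how such an average could be derived from run records. The record structure and field names are hypothetical, not Rainforest's data model:

```python
from datetime import datetime
from statistics import mean

# Hypothetical records: one per test per browser (draft runs excluded).
test_runs = [
    {"executor": "automation", "started": datetime(2023, 5, 1, 9, 0), "completed": datetime(2023, 5, 1, 9, 4)},
    {"executor": "crowd", "started": datetime(2023, 5, 1, 9, 0), "completed": datetime(2023, 5, 1, 9, 18)},
    {"executor": "automation", "started": datetime(2023, 5, 2, 9, 0), "completed": datetime(2023, 5, 2, 9, 6)},
]

# Average run time in minutes, separated by executor, as the chart does.
for executor in ("automation", "crowd"):
    minutes = [
        (run["completed"] - run["started"]).total_seconds() / 60
        for run in test_runs
        if run["executor"] == executor
    ]
    print(f"{executor}: {mean(minutes):.1f} min average")
```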

[Image: Average run time each month.]

Usage by Browser and Platform

This chart displays coverage (in terms of tests run) by browser and platform. If the breakdown you see here does not reflect your user traffic, you may want to adjust your testing activities.

Note: This chart does not include draft runs.
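
As a rough illustration of that comparison, the sketch below checks a hypothetical coverage breakdown against a hypothetical traffic breakdown; every name and number here is made up:

```python
# Hypothetical data: test runs per browser, and each browser's share of
# real user traffic. Neither comes from a real Rainforest account.
coverage = {"chrome": 120, "firefox": 20, "safari": 10}
traffic = {"chrome": 0.55, "firefox": 0.10, "safari": 0.35}

total_runs = sum(coverage.values())
for browser, runs in coverage.items():
    coverage_share = runs / total_runs
    gap = coverage_share - traffic[browser]
    print(f"{browser}: coverage {coverage_share:.0%}, traffic {traffic[browser]:.0%}, gap {gap:+.0%}")

# A large negative gap (Safari here) flags a browser that may be
# under-tested relative to how many users rely on it.
```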

[Image: Coverage by browser and platform.]

Utilization by Tester Community vs. Automation

This chart helps you understand how much of your testing is done by Automation vs. our Tester Community (crowd). A well-balanced testing strategy typically includes a mix of humans and machines. This chart can help you measure and track your testing activities compared to your goals.

Note: This chart does not include draft runs.

[Image: Testing activities compared to your goals.]

Test Suite Activity Metrics

Tests Created vs. Edited

This chart displays the total number of individual tests created and edited. The number of tests edited is different than the number of test edits. For example, if you edit a test several times over the period displayed, that counts as a single test edited.
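
The sketch below illustrates the distinction with a hypothetical edit log; it is not how Rainforest stores this data:

```python
# Each entry is one edit event against a test ID.
edit_events = ["test-1", "test-1", "test-2", "test-1", "test-3"]

# "Tests edited" counts distinct tests, as in this chart.
tests_edited = len(set(edit_events))  # -> 3

# "Test edits" counts every event, as in the per-team-member chart below.
test_edits = len(edit_events)  # -> 5

print(tests_edited, test_edits)
```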

[Image: The total number of individual tests created and edited.]

Plain English Tests Created vs. Edited

Same as the chart above, but it displays only your plain-English tests created and edited.

[Image: Plain-English tests created and edited.]

Automation Tests Created vs. Edited

Same as the chart above, but it displays only tests written in our Rainforest Automation language.

[Image: Automation tests created or edited.]

Tests Created and Test Edits by Team Member

This chart shows a breakdown of test suite activity by team member. Specifically, we see the number of tests created and the number of test edits by team member. Note that a test edit is an event; if you edit the same test seven times, that counts as seven edits.

Keep in mind that if you are using our Test Writing Service, you see the names of the Test Authors who wrote tests for you.

[Image: Test suite activity by team member.]

Test Results—Pass and Fail Rates

To understand the metrics relating to pass and fail rates, you should know what we measure. In these charts, a test run is a single test executed in one browser. In the image below, we ran a test in 3 browsers, which translates to 3 test runs. If the test failed on one browser but passed on the other two, we would have one failed test run and two passed test runs.

[Image: Pass/fail rates.]
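
Here is that counting as a minimal sketch:

```python
# One test executed in three browsers = three test runs, each counted
# independently toward the pass rate.
results = {"chrome": "passed", "firefox": "passed", "safari": "failed"}

passed = sum(1 for result in results.values() if result == "passed")
failed = sum(1 for result in results.values() if result == "failed")

print(f"{passed} passed, {failed} failed")        # 2 passed, 1 failed
print(f"pass rate: {passed / len(results):.0%}")  # pass rate: 67%
```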

Test Run Pass Rate

Here we see a breakdown of test runs by result. When reviewing this chart, keep the following in mind:

  1. Draft runs are not included.
  2. No result means the test run could not be completed. This typically happens when the run is aborted.

Hovering over any of the bars shows you the total count of test runs by result for each period.

[Image: Test run results.]

Tester Community Test Run Pass Rate

Same as the chart above, but it displays only test runs executed by our Tester Community rather than by Automation.

[Image: Tester Community test run results.]

Automated Test Run Pass Rate

Same as the chart above, but it displays only test runs executed by our Automation rather than by our Tester Community.

[Image: Automation test run results.]

Failure Engagement

% Test Failures Viewed

Critical feedback about your application exists within your test results. We recommend viewing every test failure. Use this chart to track your team's progress.

[Image: Test failure views.]

% Test Failures Categorized

Failure categorization is the best way to ensure you have eyes on every failure and can take action to recover quickly. Over time, you can spot patterns and trends within your test suite. We suggest a policy of 100% failure categorization. Use this chart to track your progress over time.

Note: This chart does not include draft runs.

[Image: Test failure categorization.]

Failure Reasons over Time

Failure categories give you actionable insight into your test suite. Quantify the impact of your team’s testing activities by reporting on the number of bugs caught. Or use this chart to take action on patterns such as system issues or deferred test maintenance.

[Image: Test failure reasons.]

Failure Reason Breakdown

This is another visualization option for the data above.

[Image: Test failure breakdown.]

Top Failure Reviewers

This chart shows team members who have categorized failures within the filtered date range.

[Image: Test failures by team member.]

Run Results—Pass and Fail Rates

Runs are groups of tests that are run together. Every line item on the Results page represents a single run. A run can contain one, three, or even a hundred tests.

Passed Run. 100% of the tests within the run passed.

[Image: Pass/fail rates for runs.]

Failed Run. A run where at least one test failed. In the example below, one test passed and one failed. The result is a failed run.

[Image: A failed run.]

No Result Run. The run did not complete for some reason. The most common cause is an aborted run.

[Image: A run that did not complete.]
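
To make these definitions concrete, here is a minimal sketch of the rollup described above. How a partially completed run that also contains a failed test is counted is an assumption; this article does not specify it:

```python
# Each run is a list of per-test results: "passed", "failed", or
# None for a test that never finished.
def run_result(test_results):
    if any(result == "failed" for result in test_results):
        return "failed"     # at least one test failed
    if test_results and all(result == "passed" for result in test_results):
        return "passed"     # 100% of the tests within the run passed
    return "no result"      # the run did not complete (e.g., it was aborted)

print(run_result(["passed", "passed"]))  # passed
print(run_result(["passed", "failed"]))  # failed
print(run_result(["passed", None]))      # no result
```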

Run Pass Rate over Time

This chart displays run results over a filtered period. It is especially useful for teams that run large groups of tests together along with occasional one-off tests. This chart helps you understand how many runs passed compared to how many contained at least one failure.

[Image: Pass rates over time.]

Run Result by Environment

Like the chart above, this chart helps you pinpoint where issues in your testing cycle tend to occur, this time broken down by environment. It is also useful for spotting environment instability.

[Image: Run result by environment.]


If you have any questions, reach out to us at [email protected].