This guide is Rainforest-specific and intended for folks who have a relatively undefined QA process (or are perhaps getting started with QA for the first time) and are excited to develop a smart, scalable approach to Quality Assurance.

In this guide, we'll cover:

  1. How to think about QA
  2. Where does Rainforest QA fit?
  3. Coverage strategy - how to build your initial test suite
  4. Execution strategy - how often to run your tests, and against what browsers and environments
  5. Results strategy - recommended process for triaging failures, communicating defects back to the team, and measuring/tracking your efforts

1. How to think about QA

If you're getting started with QA for the very first time, there are fantastic online resources that explain the basics.

Essentially, QA is a function dedicated to risk mitigation - the goal is to deliver a quality product to your customers/users, and to do this QA must proactively and continuously assure quality at every stage of the development cycle. The later defects are caught in the development cycle, the more expensive they are and the longer they take to fix, so the process and testing activities must be designed to catch defects as early as possible.

Most companies have limited resources (time, money, and people), so QA must prioritize investment in testing activities that will yield a maximum return, and reduce or avoid activities that slow down and bloat the process without providing equivalent value.

In other words, there is a line of diminishing returns when it comes to QA testing. If any of the QA terms in the visual below are unfamiliar to you, don't worry! Read on and revisit this section later.

2. Where does Rainforest QA fit?

Rainforest does functional testing at the UI level - in other words, it validates the software against functional requirements/specifications.

Let's take a look at the typical evolution of a QA process / function:

  • Phase 1: When a QA function first begins, testing is typically manual and done ad-hoc, and test cases may or may not be documented.
  • Phase 2: As it matures, testing becomes more structured and organized. Test cases are documented, and a testing process is developed.
  • Phase 3: As the product under test grows, it becomes increasingly challenging for QA to keep up with the speed of development, and QA must look for ways to achieve efficiency and scale. At this point, the QA team's testing activities can largely be split into two buckets: manual and repetitive execution of scripted tests, and unscripted exploratory testing of new features.

In order to regain efficiency, an organization must look to automate repetitive and manual testing activities, as repetitive testing done by humans can lead to nose-blindness to an application, passivity toward known issues, and slow reporting.

Where Rainforest fits:

  • Crowd-based testing: Offloading manual/repetitive testing by defining test cases in Rainforest and executing them on-demand against our global crowd of testers.
  • Automation testing: no-code UI automation is well suited for more mature applications and/or processes.
  • Exploratory testing: unscripted bug-bashing sessions to test the edges around new features and provide extra air-cover to a QA team.

More information on our core product offerings can be found on our website.

3. Coverage strategy - how should I build my test suite?

If you don't have existing test cases, or you have little confidence in the ones you do have, this is where you need to start: what should you cover, and how should you cover it?

It can be natural to want to "test all the things" and write a test to cover every scenario. However, not all tests provide the same ROI, and you must think critically about how to invest your time in building coverage that will provide the most value.

We recommend taking a pragmatic feature-driven testing approach, which focuses on building coverage that maps to key features and critical use cases.

When you begin with Rainforest, consider the "feature" to be your entire application. Focus on building out the most critical test flows first, and, after you have about 30 or so tests, begin "featuring out" into more specific features. Let's dive into the utility pyramid below to illustrate where to focus your efforts - the size of each segment represents the value it provides, rather than the number of tests you should have.

  • At the top of the pyramid are your Smoke tests, which typically cover the 3-5 most critical pieces of functionality within your application. A (simplified) example for Gmail would be: send an email, receive an email. Because these tests cover such basic functionality, they tend to be the easiest to write and maintain.
  • Next we have Happy Path tests: less critical than your smoke tests but still high-priority, these cover highly trafficked or most common paths within your application. Examples for Gmail might be: add an attachment to an email, archive an email.
  • Regression tests cover known breaking points, i.e. bugs that have already been reported in the past, to ensure that these issues don't surface again in the future. When you discover a bug, you'll write a test for it to ensure it doesn't occur again.
  • At the bottom of the pyramid are your Edge Cases, that cover uncommon user flows and low-trafficked areas within your application. There are nearly infinite edge cases and depth you can drill into; in general, we recommend against writing Rainforest tests for these; the investment it takes to write and maintain these tests is typically not worth the return. Instead invest in a cadence of running unscripted exploratory testing sessions to uncover edge cases.

To get started with building out coverage:

Define your smoke, happy path, and regression tests for your entire application; a good rule of thumb is the "5, 10, 15 rule" which essentially recommends a 1:2:3 ratio of smoke to happy path to regression tests. You'll want to write your tests in Rainforest and get them into a passing, stable state before continuing to build coverage.

After about 30 tests, you'll want to "feature out" into more specific features, starting with the most critical. For each feature, you'll want to generally follow the 5/10/15 ratio, but it depends on the size and complexity of your application.

Two helpful resources:

  • Check out our Building Your First Test Suite guide, in which we go through the end-to-end process of designing, writing, and organizing a small example test suite for Airbnb
  • For an example of a functionality map and test plan, check out the example here, which covers the "New Workspace" feature of Slack's web app

What should the scope of each test be?

See Ideal Scope of a Rainforest test.

How should I organize my tests?

More tests doesn’t always mean more coverage — in fact, we’ve found that using a smaller group of tests strategically can be far more impactful than building a huge test database.

Read our Building Your First Test Suite guide to learn how to use Rainforest test organization features to keep your suite clean & manageable.

4. Execution Strategy

Now that you've written your initial test suite, you need to determine how you'll run it - how often, against which browsers, and against which environments.

Smoke testing and regression testing are two of the most important testing techniques. Smoke testing (also known as "build verification testing") comprises a non-exhaustive set of tests that aim to ensure the most important functions work; the result is used to decide whether a build is stable enough to proceed with further testing. Regression testing verifies and validates the existing functionality of the application after modifications and/or the addition of new features. Let's dig into how and when to perform each testing technique.

How often should I run my tests?

Smoke suite: Once your smoke tests are written and passing consistently, we suggest running them daily to ensure you catch significant issues as early as possible in the development cycle. Take steps to reduce the effort needed to identify and run test suites.

Regression suite: Before every major feature release, run tests as far down the utility pyramid as possible (smoke + happy path + regression tests) for all features. If integrated into your PR process, run your smoke tests + any other tests that touch the features that were changed.

*A note on testing releases automatically: for an engineer, it's quite simple to run your Rainforest suite from the command line using our Command Line Interface (CLI). Most of our customers choose to use a combination of our CLI client and a cloud-hosted CI provider (we use CircleCI, but there are plenty of great options) for maximum benefit.

We run all builds that are triggered by a release (for us that's the Develop -> Master branch merge), but you could run all builds, or all builds with a certain tag, or all builds triggered by a certain branch. Get in touch if you have a custom requirement that you don't know how to set up.
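As a sketch of what this can look like, here is a minimal CircleCI-style config fragment that runs a Rainforest suite only on merges to the release branch. The job name, run-group ID, and environment variable name are illustrative placeholders, and the exact CLI flags vary by version, so check the rainforest-cli documentation before copying this:

```yaml
# Illustrative CircleCI 2.1 config fragment (names, IDs, and flags are placeholders)
version: 2.1
jobs:
  rainforest:
    docker:
      - image: cimg/base:stable
    steps:
      - run:
          name: Run Rainforest smoke suite
          command: |
            # RAINFOREST_API_TOKEN is configured in the CI project's settings
            rainforest run --token "$RAINFOREST_API_TOKEN" --run-group-id 1234
workflows:
  release:
    jobs:
      - rainforest:
          filters:
            branches:
              only: master   # only run against release merges
```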

What browsers should I run my tests on?

The browsers/mobile platforms you test on should represent where the majority of your user traffic originates from - a good rule of thumb is to try to cover 90-95% of your user traffic sources.

As a best practice, we suggest running your smoke suite against only your single most popular browser/platform to get quick, inexpensive feedback.

We suggest running your regression suite against the latest versions of the big 4 (Safari, Microsoft Edge, Chrome, and Firefox), then expanding based on your user traffic patterns. If you're testing a native mobile app, the same principles apply - we suggest testing it on the most recent device versions, then expanding based on your traffic sources and the mobile versions your app supports.
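To make the 90-95% rule of thumb concrete, here is a small sketch that picks the minimal set of browsers covering a target share of traffic. The traffic numbers are made up for illustration; substitute your own analytics data:

```python
# Sketch: choose the smallest set of browsers that covers ~90% of user traffic.
# The traffic shares below are hypothetical; pull real numbers from your analytics.

def browsers_to_cover(traffic_shares, target=0.90):
    """Return browsers (most popular first) until cumulative share >= target."""
    chosen, covered = [], 0.0
    for browser, share in sorted(traffic_shares.items(), key=lambda kv: -kv[1]):
        chosen.append(browser)
        covered += share
        if covered >= target:
            break
    return chosen, covered

example = {"chrome": 0.62, "safari": 0.18, "edge": 0.08, "firefox": 0.07, "other": 0.05}
picked, covered = browsers_to_cover(example)
print(picked, round(covered, 2))  # the top browsers to test, and the traffic they cover
```

With these example numbers, testing four browsers already covers about 95% of traffic, so adding more would buy very little.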

Deciding on testing environments:

Why not test in production? Testing in production is easiest, but “too late” - your customers may find bugs before you do. However, testing in production does provide assurance that things are working for your users. To find issues in production before customers do, you must have a highly streamlined testing process in place.

We highly recommend testing in a dedicated QA environment that mirrors production as closely as possible. Testing before release acts as a gate check to catch defects before they reach your users, and provides you with earlier feedback.

As you mature your testing process, you may want to consider seeding your testing environments to allow for more complex state testing with less effort. Check out Seeded State Management and webhooks for more information.

5. Results Strategy

The value your tests provide is in the results, so ensuring you have a proper process for investigating and triaging failures is essential. When triaging failures, it's extremely important that ownership is clear: who is responsible for investigating and triaging failures, and what that responsibility entails.

In this section, we'll cover:

  1. How to receive and triage results
  2. Process for documenting bugs
  3. Process for communicating bugs back to the team
  4. Process to measure quality over time

Receiving results:

Pushing test results notifications into the channels your team already uses ensures quality issues are front-and-center for your team. If you use chat apps like Slack or Hipchat, we highly suggest setting up the results notifications integration; sending notifications to team channels is a great way to keep the team engaged with Quality, and ensures that no issues slip through the cracks.

If you don't use a chat app, email notifications can be set up to go to individuals or to team distros.

Results Investigation & Categorization:

It's important that every failure is investigated, categorized, and acted on immediately - whether that means updating an outdated test and rerunning it, fixing data within your own testing environments, or reporting a bug.

Deferring test maintenance will create noisy test results, which can cause a team to lose trust in the test suite, or even worse - miss bugs. To ensure a test suite is kept clean, stable, and up-to-date, we highly recommend enforcing a policy of categorizing 100% of test failures - this ensures all failures are investigated - and an internal SLA for how quickly the team is expected to recover from a failure.

Reporting Bugs & Communicating Back to the Team

After diagnosing the cause of a defect, the feedback must flow back into the appropriate team.

Ensure you have a standard for bug classification that is understood and agreed upon by QA and developers, and begin tracking all defects in a shared spreadsheet or an existing bug-tracking tool, where defects can be recorded and the lifecycle of the defects can be referenced at any time by any member of the project teams.

If you use JIRA, consider setting up our integration so that your team can quickly and easily export failures, including logs, screenshots, and links back to the test failure.

Measuring quality over time

The primary goal of a QA process is to mitigate risk when shipping new code. You must measure your results to learn about the current state of quality and measure improvement. Primary measurements are:

  • Number of bugs: log any reported issues by date, product area, priority, and source of the issue. Example sources: external (i.e., a customer), internal (i.e., missed by QA), automatic (e.g., error reporting), or test-case failures. Regularly summarize this log, look for patterns, and report back to the team.
  • Time to fix: Measuring time-to-fix answers how a development team is able to use the output from QA to triage and fix bugs. The simplest way to measure this is the time between a failed build and the next passing build.
  • Time to test: you're using Rainforest to help scale your testing efforts; measure and track your progress over time, and start picking out patterns. What tasks and activities are taking you the longest, and are they providing equivalent value? What can you offload or reduce?
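The time-to-fix measurement above can be sketched in a few lines: scan your build history in order, and measure the gap from the first failing build of a red streak to the next passing build. The build records below are illustrative, not pulled from any real CI API:

```python
# Sketch: time-to-fix = time between a failed build and the next passing build.
# Build data is hypothetical; in practice you'd pull it from your CI provider.
from datetime import datetime

builds = [  # (timestamp, passed?) in chronological order
    (datetime(2023, 5, 1, 9, 0), True),
    (datetime(2023, 5, 1, 13, 0), False),   # failure introduced
    (datetime(2023, 5, 1, 15, 30), False),  # still red
    (datetime(2023, 5, 2, 10, 0), True),    # fixed
]

def times_to_fix(builds):
    """Yield hours elapsed from the first failure of each red streak to the next pass."""
    failed_at = None
    for ts, passed in builds:
        if not passed and failed_at is None:
            failed_at = ts  # start of a red streak
        elif passed and failed_at is not None:
            yield (ts - failed_at).total_seconds() / 3600
            failed_at = None

print(list(times_to_fix(builds)))  # hours per incident
```

Tracking this number per incident, and its trend over time, shows how quickly the team converts QA output into shipped fixes.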


We know that building a strategy from scratch can feel like a daunting task, and we hope that this guide can act as a roadmap to help you figure out where and how to focus your effort.

If there's anything that you take away from this guide, it's this: QA is an exercise in risk mitigation, and only you know the level of risk tolerance your organization has when it comes to quality. Ensure you know what your quality standards are, start small, focus on investing in the testing activities that provide maximum value, and measure and track your progress along the way.

Happy testing!
