When getting started with Rainforest, building out your initial test suite can feel like a daunting task.
In this guide, we'll go through the end-to-end process of designing, writing, and organizing a small test suite for an example project, and cover best practices, tips and tricks along the way.
If you don't yet have a coverage strategy or do not have your initial test cases defined, check out our Getting Started with QA Strategy guide to learn more about how to think about coverage and test suite planning.
In this example scenario, we're Airbnb and we want to build our initial test suite in Rainforest. When building a test suite, we recommend breaking the project into small, manageable chunks, and focusing on our highest-priority tests first. For that reason, we're going to focus on designing our Smoke Suite and building it in Rainforest first.
Our smoke tests are the basic set of tests that we run often to help us determine whether deployed builds are stable or not, and they typically cover the most critical functionalities within our application.
Following that logic, the scenarios we want our Smoke tests to cover are:
- Sign up as a new user
- Book a stay as a Guest
Step 1: Designing & Writing Our Tests
For each test, we'll want to write an outline to give us an idea of what the test will cover and where it will start and end. We'll also want to make a couple of key decisions before writing it in Rainforest:
- Which of Rainforest's two test types is a better fit for this test: Plain English or Rainforest Automation? Note: it's important to understand the differences and limitations of each test-writing language before you begin writing tests. If you're unfamiliar with the two test types, please read our side-by-side comparison here.
- Do I need dynamic data in my tests, and what (if any) test data do I need to set up? If you're unfamiliar with our supported variable types, learn more about them here. If you require seeded states or methods for maintaining a clean testing environment, check out our Seeded State Management article.
Read on to see how we outline our tests, think through the key decision points above, and go about writing our tests.
1. Book a Stay
Outline: Login to an existing user account, search for any instant booking stay, and complete the booking.
Test Type: This test is fairly complex and will require some level of human judgment, so it won't be compatible with Rainforest Automation (meaning we wouldn't be able to successfully define the actions for a bot to execute). For that reason, we'll write this test in Plain English, as we'll plan to run it only with our Tester Community (human crowd).
Test data needed: Every user will need to log in to a different test account; otherwise, each time we run this test the 3 testers who will execute it in parallel will encounter concurrency issues with multiple users trying to book a stay at the same time. For that reason, we'll want to provision unique test account credentials to each tester who executes this test - we'll do this by setting up tabular variables.
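As a sketch, a tabular variable is uploaded as a spreadsheet or CSV in which each row holds one tester's credentials; the column names and values below are hypothetical, not the actual test data:

```
email,password
guest-tester-1@example.com,SamplePassword1
guest-tester-2@example.com,SamplePassword2
guest-tester-3@example.com,SamplePassword3
```

Each tester who picks up the test is assigned a different row, so no two parallel testers share an account.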
Here is the test:
As you can see, we've followed the basic Rainforest Action/Question format for each test step. The action is what the tester should do, and the question is a Yes/No verification of expected behavior. This simple format allows us to validate each essential action, tracking success or failure each step of the way.
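As an illustration, a step in this Action/Question format might read as follows (the wording and variable names here are a sketch, not copied from the actual test):

```
Action: Click "Log in" and sign in using the email {{login.email}} and the password {{login.password}}.
Question: Are you logged in and able to see the home page?
```

Note how the action tells the tester exactly what to do, while the question asks for a Yes/No verification of the expected result.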
We've also followed some important guidelines and best practices for writing Plain English tests:
- Embedded Tests: To save us time with test creation and maintenance, we've created the login steps as a standalone test and embedded it into this test.
- Step Variables: When we execute this test with our Tester Crowd, at least 3 testers will be called on to execute it in parallel. If all of them were to log into the same test account and try to complete a booking for the same time period, they would encounter concurrency issues (i.e., "stepping on each other's toes") that would compromise the test result. For that reason, we've set up login credentials as tabular variables and inserted the variables into our test step. This means that each tester will be provided login credentials for a different test account.
- Inserting screenshots and files into test steps: We've dragged and dropped a screenshot into Step 3 to ask testers to confirm the appearance of UI elements.
- Quotes rule: Testers are trained to check text in quotes for exact spelling, spacing, and punctuation; if they can't find a match, they're trained to fail the test. They are also trained to treat underscores as placeholders and accept any value in their place; the "show stays" button described in Step 3 will contain a dynamic value, so we've represented it with an underscore.
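For example, a step verifying that dynamic button under the quotes and underscore rules might look like this (illustrative wording):

```
Action: Enter a location and date range, then view the search results.
Question: Do you see a "show _ stays" button above the results?
```

A tester would require the quoted text to match exactly, except for the underscore, which can be any value (e.g., "show 200+ stays").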
Last step: Before deeming any test "finished," we'll first want to Preview it, which allows us to perform the test as if we were a member of Rainforest's tester community, against any browser/platform of our choosing. After we're able to successfully preview it, we should consider running it in Draft Mode before publishing the test and integrating it into the rest of our test suite.
To learn more about writing Plain English tests, check out the Creating & Managing Tests section of our Help Center, and check out our 5 minute instructional video on how to write a Plain English test.
2. Sign Up Test
Outline: Click sign up on the home page, fill out the sign-up form, and confirm the sign-up was successful.
Test Type: This test is simple, deterministic and does not require any human judgment or interpretation. It's simply filling out a form. For that reason, it will be compatible with Rainforest Automation (meaning bots will be able to execute it), and so we'll write this test in Rainforest Automation Language.
Test data needed: We'll need every user to sign up with a unique username and password. We'll use Rainforest's built-in random variables to provide discrete, randomized data, so we don't need to set up any data ourselves.
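As a sketch, when this test is viewed in its Plain English form, a sign-up step using built-in random variables might read like this (the exact variable names below are illustrative, so check the variables documentation for the names your account supports):

```
Action: Click "Sign up" and fill out the form using {{random.email}} as the email address and {{random.password}} as the password.
Question: Do you see a welcome page for the newly created account?
```

Because the random values are generated fresh for each execution, parallel runs won't collide on the same username.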
Test writing video:
Once the test is written in Rainforest Automation Language, we can actually toggle it to Plain English (see below); when we do that, the test steps we defined are compressed into a Plain English "action / question" format, and the screenshots we took are replaced by the screenshot text descriptions we defined.
This means that we have the option to run our Automation tests with our tester community as well.
Last step: Before deeming any test "finished," we'll first want to validate it; in the video above, we used the Instant Replay feature to replay the steps we defined live against the VM. This assures us that the test will pass when run with our Automation bot. Once the test is validated, we'll want to Publish it and integrate it into the rest of our test suite.
To learn more about writing Rainforest Automation tests, check out the Rainforest Automation section of our Help Center.
Step 2: Organizing Our Tests
Now that we've written our tests, it's time to organize them. Building the foundation of a well-organized test suite will help us keep track of our coverage as we continue to build upon what we have. It will also allow us to more easily find, reference, and run the specific tests we need to at any point within our development cycle.
Below are the key organization elements we'll want to consider when building our initial suite:
- Test name: we'll want the name of each test to accurately represent what it covers, so that anyone running the test will know what functionality will be validated.
- Features: we can organize our tests into feature groupings to assess our coverage by feature, and easily run tests that relate to a specific functionality.
- Tags: tags are an additional organization layer we can apply; we can run tests by tag manually within the application, or via the CLI / git triggers.
- Run groups: Run groups are groups of tests that we plan to run together. Typically, we'll have a "Smoke Suite" run group that we'll schedule to run daily, or even integrate with our CI, while our "Regression Suite" will include all of our tests and run before major releases.
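As an illustration, with the Rainforest CLI installed and an API token configured, a tagged run could be kicked off from the command line; the flags below are a sketch, so check the CLI documentation for the exact options your version supports:

```
# Run every test tagged "smoke" (hypothetical example)
rainforest run --tag smoke
```

The same tags we apply for organization thus double as handles for triggering targeted runs from scripts or CI.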
To better visualize how Features, Run Groups, and Test Tags relate to one another, see the continuation of the Airbnb example below.
Note how the test titles make clear what each test covers, and how we've introduced a "Guest" vs. "Host" naming convention in the test titles and tags. Our Features differentiate tests by functionality grouping, and our Run Groups split the Smoke tests we'll run daily from the rest of the Regression tests we'll run before major releases.
For more information on test suite organization, check out Organizing your Test Suite.
Step 3: Setting a Test Execution Cadence
If you need help determining how often to run your tests, and against what browsers/platforms, check out our Strategy Guide.
Once we've written our first few tests, we'll want to immediately set up a test execution cadence. Organize the collections of tests you plan to run together into a Run Group, and then define a Run Group Schedule - the days and times you want to run your tests, and against what browsers/platforms.
Some tests you may not want or need to run on a schedule; those you can run individually, or as a part of a Feature or Saved Filter anywhere you see the blue "Run" button in the platform.
Alternatively, tests can be run programmatically via our CLI, API, and/or as part of your CI process. Our CLI client supports triggering based on the contents of a git commit message, so you can have more control over what is tested.
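For instance, the CLI's git-trigger mode inspects the most recent commit message for a trigger keyword and hashtags naming which tagged tests to run; the syntax below is illustrative, so consult the CLI documentation for your version:

```
# Hypothetical commit that asks CI to run only tests tagged "smoke"
git commit -m "Update booking flow @rainforest #smoke"

# In the CI job, the CLI reads the commit message and runs the matching tests
rainforest run --git-trigger
```

This lets developers decide, commit by commit, which slice of the suite a given change should exercise.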
We hope this guide was informative and instructive, and that it helps you understand the steps and decisions needed to build your initial test suite in Rainforest:
- Plan: Break each project into small, manageable chunks, and focus on getting tests into Rainforest and validated/passing before continuing with the next part of the project.
- Design: Write out test outlines that capture the high-level flow each test should cover before writing it in Rainforest. This will help you think about which test language is a better fit, and what (if any) test data you need. For an example of a functionality map and test plan, check out the example here, which covers the "New Workspace" feature of Slack's web app.
- Decide on Test Language: Decide what language you'll write the test in - Rainforest Automation Language vs. Plain English.
- Test Data: Determine what, if any, test data you need; if you need to set up the test data in Rainforest (in the form of step variables) or you need to seed your own data in your application, do it now.
- Write your test
- Validate: Before publishing your test, validate it yourself. If it's a Plain English test, you can do this by executing the test yourself in Preview mode, and/or running it in Draft Mode. If it's an Automation test, use Instant Replay to replay the test actions against the VM, and/or run the test in Draft mode.
- Publish the test
- Organize: Apply the necessary tags, and add the test to the necessary Feature and/or Run Group
As you're building your first test suite, it's natural for questions to come up. Please don't hesitate to reach out to our support team via the in-app chat bubble or via email at firstname.lastname@example.org.