When starting with Rainforest, building out your initial test suite can feel like a daunting task. We’ll go through the end-to-end process of designing, writing, and organizing a small test suite for a sample project and cover best practices, tips, and tricks along the way.
If you don’t have a coverage strategy yet, or you haven’t defined your initial test cases, begin by reading Introduction to QA Strategy to learn about coverage and test suite planning.
Acting as the Airbnb QA team, we want to build our initial test suite. Rainforest recommends breaking the project into small, manageable chunks and focusing on the highest-priority tests first when creating a test suite.
We’re going to focus on designing our smoke suite. Our smoke tests are the basic set of tests we often run to help us determine whether new builds are stable. They typically cover the most critical functions in our application. In this scenario, we want our smoke tests to cover:
- Signing up as a new user
- Booking a stay as a guest
For each test, we need an outline to explain what the test covers and where it begins and ends. Moreover, we must make a couple of key decisions before writing the test in Rainforest:
- Which Rainforest test type is the better fit?
We can write tests for Rainforest using the Visual Editor (VE), a tool for writing tests as precise browser interactions with a live preview, or the Plain-Text Editor (PTE), a tool for writing tests as step-by-step instructions in free-form English. VE tests run using either the Automation Service or the Tester Community; PTE tests run using either the Tester Community or your own on-prem testers. It’s critical to understand the differences and limitations of each test type before continuing. For automated tests, see Creating a Test with the Visual Editor; for manual tests, see Creating a Test with the Plain-Text Editor.
- Do I need test data?
If you’re unfamiliar with Rainforest test data (placeholders that let you inject other values into your tests; Rainforest supports three types: dynamic, static, and built-in), see Using Test Data. If you require seed data (information you provide that’s required for testing your application, such as usernames, passwords, and roles) to maintain a clean testing environment, check out Seeded State Management.
- Click the Sign Up button on the home page.
- Fill out the form and submit.
- Confirm the sign-up was successful.
This test is simple, deterministic, and doesn’t require any human judgment or interpretation. All we want to do is fill out a form. For this reason, we can target the Automation Service and write our test using the Visual Editor.
Each new user must sign up with a unique username and password. For this reason, we need test data that automatically provides unique, randomized login credentials.
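Rainforest can generate dynamic test data for you, but if you prefer to seed static test data yourself, a small script can produce a CSV of unique, randomized credentials for upload. This is a minimal sketch under our own assumptions; the column names and CSV format here are illustrative, not a Rainforest requirement.

```python
import csv
import secrets

def generate_credentials(count, domain="example.com"):
    """Generate unique, randomized sign-up credentials.

    Each row gets a random email local part and a random password, so no
    two test runs (or parallel testers) collide on the same account.
    """
    rows = []
    for _ in range(count):
        suffix = secrets.token_hex(4)  # 8 random hex chars, e.g. "a1b2c3d4"
        rows.append({
            "email": f"qa+{suffix}@{domain}",
            "password": secrets.token_urlsafe(12),
        })
    return rows

def write_csv(path, rows):
    """Write the credentials to a CSV file with a header row."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["email", "password"])
        writer.writeheader()
        writer.writerows(rows)

creds = generate_credentials(25)
write_csv("signup_credentials.csv", creds)
```

Using `secrets` rather than `random` keeps the generated passwords unguessable, which matters if the seeded accounts live in a shared staging environment.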
Even though we decided that this test doesn’t require human interaction, we could choose to run it using the Tester Community (human test execution: harness the ingenuity of on-demand QA testers; you pay more and wait longer for results, but get more detailed output). With this execution method, the test steps are compressed into an “action/question” format, and the accompanying screenshots help further define the instructions. Nevertheless, let’s stick with the Automation Service.
Before deeming any test “finished,” we need to validate it. Here, we used the Preview Action feature to execute the steps we defined on the selected virtual machine (VM), which ensures the test passes when run with the Automation Service. Once the test is validated, we publish it and integrate it into our test suite.
- Log in to an existing user account.
- Search for an instant booking stay.
- Complete the booking process.
This test is more complex and requires human judgment. As a result, it isn’t compatible with the Automation Service. For this reason, we must write the test with the Plain-Text Editor and run it using the Tester Community.
Each tester needs to log in with a different test account; otherwise, testers attempting to book a stay simultaneously could run into concurrency issues. For this reason, we must provision unique test credentials by setting up test data.
As you can see, we followed the action/question format for each test step: the action is what the tester should do, and the question is a Yes/No verification of the expected behavior. This simple format lets us validate each action while tracking success or failure. We also relied on several important features and best practices for writing Plain-Text Editor tests:
- Embedded Tests. To save time on test creation and maintenance, we moved the login steps into a separate standalone test and embedded it in the Book a Stay test.
- Test Data. When we execute this test using our Tester Community, at least two testers are called on to run the test in parallel. We set up test data and inserted placeholders into our test step to avoid concurrency issues. As a result, each tester is provided with unique login credentials.
- Screenshots. We dragged and dropped a screenshot into Step 3 so testers can confirm the appearance of certain UI elements.
- Quoted Text. Testers are trained to examine any text in quotes for spelling, spacing, and punctuation. If they can’t find an exact match, they fail the test. They are also trained to treat underscores as placeholders and accept any value in their place. The Show Stays button described in Step 3 contains a dynamic value, which we represented with an underscore.
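The quoted-text convention above can be made concrete with a small sketch. The helper below is purely illustrative (it is not part of Rainforest): it checks on-screen text against quoted step text the way testers are trained to, requiring an exact match except that each underscore acts as a placeholder for any value.

```python
import re

def step_text_matches(quoted, actual):
    """Check whether on-screen text satisfies quoted step text.

    Mirrors the tester convention: quoted text must match exactly
    (spelling, spacing, punctuation, case), except that each underscore
    is a placeholder that accepts any non-empty value.
    """
    # Escape regex metacharacters, then turn each literal "_" into a wildcard.
    pattern = re.escape(quoted).replace("_", ".+")
    return re.fullmatch(pattern, actual) is not None
```

For the Step 3 example, `step_text_matches("Show _ Stays", "Show 12 Stays")` passes because the underscore accepts the dynamic count, while a casing mismatch such as `step_text_matches("Sign Up", "Sign up")` fails, just as a tester would fail it.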
Before deeming the test “finished,” we should preview it, which allows us to perform the test as a Tester Community member, using our chosen browser/platform. Then, after a successful preview, we should run the test in draft mode before publishing it and integrating it into the test suite.
Now that we’ve written our tests, it’s time to organize. Building the foundation of a well-organized test suite helps us keep track of our coverage as we continue to build on what we have. It also allows us to easily find, reference, and run specific tests at any point in our development cycle.
Below are the key organizational elements we should consider when building our initial suite:
- Test Name. Name tests in a way that accurately represents what they cover. Doing so allows anyone running the test to know the functionality being validated.
- Organizing by Feature. Use groupings to assess coverage by feature and run tests that relate to specific functionality.
- Organizing by Tag. Tags represent an additional organizational layer. You can run tests by tag manually within the application or via the CLI/git triggers.
- Organizing by Run Group. Run groups are collections of tests you plan to execute together. One example is a “Smoke Suite” run group that you run daily or integrate with your CI. Another is a “Regression Suite” that includes all the tests; you run this group before major releases.
Notice how each test is clearly identified. We also introduced a “Guest” vs. “Host” naming convention in the test titles and tags, and features differentiate tests by functionality groupings. Finally, run groups separate the Smoke tests we run daily from the Regression tests we run before major releases.
Once you’ve written your first few tests, you should set up a test execution cadence. First, organize the collections of tests you plan to run together into a run group. Next, define a run group schedule, which represents the days and times you want to run your tests and which browsers/platforms to use.
You may not want to run some tests on a schedule. Instead, run them individually, or as part of a feature or saved filter, anywhere you see the Run button on the platform. Alternatively, run tests programmatically via our CLI or API, or as part of your CI process. Our CLI client supports triggering runs based on the contents of a git commit message, which gives you more control over what is tested.
For more information, see the related articles listed at the end of this guide.
We hope this guide was informative, instructive, and helps you understand the steps and decisions needed to build your initial test suite in Rainforest.
- Plan. Break each project into small, manageable chunks. Then, for each group, focus on getting the tests to run successfully in Rainforest before continuing with the next group.
- Design. Write an outline of the high-level flow each test should cover before writing it in Rainforest. Doing so helps you think about which test type is the better fit and whether test data is required. To see an example of a functionality map and test plan, check out this example, which covers Slack’s New Workspace feature.
- Decide on the test type. Choose the test type—Visual Editor for the Automation Service or Plain-Text Editor for the Tester Community.
- Test data. Determine what, if any, test data you require. If you need to set up the test data in Rainforest or want to seed your own data in your application, do it now.
- Write the test.
- Validate. Before publishing the test, validate it yourself. If it’s a Plain-Text Editor test, do this by executing the test in Preview or running it in Draft mode. If it’s a Visual Editor test, use Preview Action to replay the test actions using the VM or run the test in Draft mode.
- Publish the test.
- Organize. Apply the necessary tags and add the test to the relevant feature or run group.

As you build your first test suite, it’s natural for questions to come up. Don’t hesitate to reach out to our Support team.
If you have any questions, reach out to us at [email protected].
- Introduction to QA Strategy
- Creating a Test with the Visual Editor
- Creating a Test with the Plain-Text Editor
- Using Test Data
- Seeded State Management
- Plain-Text Editor Embedded Tests
- Inserting Screenshots and Downloadable Files
- Using Quoted Text Effectively
- Organizing Tests by Feature
- Organizing Tests by Tag
- Organizing Tests by Run Group
- The Rainforest CLI
- The Rainforest API
- Sample Map and Test Plan
- STF: Smoke Testing