We recommend creating a sufficient number of accounts to accommodate all testers assigned to your test. This helps avoid tester concurrency issues and ensures there is enough test data for the testers. Here’s an easy way to determine the number.
(Tests x Browsers x 2) x 3
- Tests is the number of tests.
- Browsers is the number of browsers.
- 2 is the default number of testers for each test.
- 3 is a safety margin in case we need to add more testers.
Let’s say you want to run 1 profile update test across 5 browsers. That requires a minimum of (1 x 5 x 2) x 3 = 30 accounts.
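The formula above can be sketched as a small helper function. This is an illustrative example, not part of the Rainforest product; the function name and parameter defaults are our own.

```python
def minimum_accounts(tests: int, browsers: int,
                     testers_per_test: int = 2,
                     safety_margin: int = 3) -> int:
    """Recommended minimum number of test accounts:
    (tests x browsers x testers_per_test) x safety_margin."""
    return tests * browsers * testers_per_test * safety_margin

# 1 profile update test across 5 browsers:
print(minimum_accounts(tests=1, browsers=5))  # → 30
```

Adjust `testers_per_test` or `safety_margin` if your plan assigns a different number of testers per test or you want a larger buffer.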
Categorizing failures is quick and gives you the clearest picture of your test suite’s health. It also helps confirm that the proper action is taken when a test fails. For more information, see How to Categorize Failures.
The Rainforest Tester Rating System lets you provide feedback on how a tester performed in any step of a completed test. You can give a thumbs-up or thumbs-down, and you can add a comment explaining your rating.
Rainforest uses your feedback to quickly identify issues that affect test quality. Upon reviewing the data collected, Rainforest proactively improves the quality of results by identifying areas for further training and instances of tester inconsistency.
Thumbs-up is the best possible rating; it indicates outstanding performance. Usually, good ratings are the result of a tester providing helpful comments, possibly uncovering a previously unknown bug.
Conversely, a thumbs-down rating usually indicates that the tester didn’t provide any comments or performed poorly overall.
If you have any questions, reach out to us at [email protected].