FAQ
Frequently asked questions about running tests using the Automation Service.
Are Visual Editor tests code-based?
A Visual Editor test does not require any code. This design makes it easy for technical and nontechnical teammates alike to own testing, and it offers flexibility that Rainforest users value. Visual Editor tests work at a visual level: users interact with the UI, not the DOM.
Rainforest’s Automation Service runs your application automatically on our scalable virtual machine (VM) infrastructure and tests it visually by comparing what appears on the screen with the expected results in your test. All without code.
How does the Automation Service perform its evaluation?
The Automation Service is pixel-based rather than DOM-based, which means we test what the user sees and how they interact with your site.
On a practical level, some things are not possible with DOM-based testing. These include:
- Working across browser windows
- Interacting with the OS
- Downloading, editing, and uploading files
- Any interaction with desktop software
No code-based system does this reliably, but our Automation Service can.
The other difference is more philosophical. Purely visual testing evaluates what a user sees and experiences; DOM-based tools test what the user’s computer sees. Depending on your application, this distinction can be crucial.
Can Visual Editor tests run using humans?
Visual Editor enables Rainforest customers to run the same no-code automated tests using our robot automation workers or our human crowd. From a single test suite, you can mix and match execution based on your workflow needs. For example, run with humans for major product releases and with robots on every branch merge.
Can I run tests using the Tester Community and Automation at the same time?
Yes. When running Visual Editor and Plain-Text Editor tests as part of the same group, you have the option of “Crowd-Only” or “Crowd + Automation.” The automation bot runs each Visual Editor test only once, but you can also run those same tests with our community, where at least two testers execute them.
Which of my tests are a good fit for automation? Why would I want to use manual execution vs. automation?
A good rule of thumb is to use Plain-Text Editor tests for the parts of your product that are not stable. For example, let’s say you’re in the middle of introducing a new design language, or you’re just starting to build out a given area. In these cases, a human tester is more forgiving. They understand ambiguities.
Humans can spot problems you weren’t expecting, such as a missing image or an alert box located at the top of the page that a user might not notice. Human testers can leave comments on your tests, providing more information and pointing out potential problems.
Similarly, human testing is better suited to applications or environments that are dynamic or unpredictable. Examples include the latest news, streams of recent images, relative dates, and pop-up windows appearing at random intervals. Note that you can still write your tests using Visual Editor to provide clear instructions for testers.
For the mature parts of your product, where you know what you want to test and where your environment is predictable, automation is a better fit. It’s faster, and you can run it more frequently, which can accelerate your release cadence.
My team isn’t technical and has never used automation before. How easy is it for nontechnical folks to use your automation?
We built the Automation Service from the ground up to be the best no-code automated testing platform available. We’ve had nontechnical users from the start and continue to improve usability. While we believe testing should stay as close to the release process as possible, you don’t need to know how to code to write good tests.
My test application is dynamic. Some states remain from the last time the user visited. Is there any conditional logic I can use?
Yes, you can insert a conditional block to help with this use case.
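For intuition, here’s what a conditional block expresses, written out as a minimal Python sketch. The `Page` object and its helpers are hypothetical illustrations, not a Rainforest API; Visual Editor builds this flow without code.

```python
class Page:
    """Hypothetical stand-in for the app under test (not a Rainforest API)."""
    def is_visible(self, target: str) -> bool:
        return target == "welcome-back-banner"  # pretend leftover state exists
    def click(self, target: str) -> None:
        print(f"click {target}")

def run_steps(page: Page) -> None:
    # Conditional block: its steps run only when the condition matches
    if page.is_visible("welcome-back-banner"):
        page.click("dismiss-button")  # step inside the conditional block
    page.click("start-button")        # the test continues on either path

run_steps(Page())
```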
Is there a hybrid model where pieces of the tests are automated, and then the crowd testers take over after setup? Or automated teardown scripts to clean up?
Currently, this is not possible. For now, you can have both automated and non-automated tests within the same run, but you can’t mix and match parts of the tests. This is something we are looking into as part of our roadmap.
Are Visual Editor tests compatible with mobile?
Not at the moment. You can run mobile tests using our Tester Community.
Is the Automation Service separate from Rainforest?
No, they are the same. Manage your manual and automated tests on a single platform, or run them together and see the results in one place.
How many automated tests can I run in parallel?
As many as your testing environment can handle. There’s no limit to the number of automated tests (or Tester Community tests, for that matter) you can run in parallel.
Are scroll events captured automatically? Or do they require manually adding an action, as clicks do?
If scrolling is necessary, add an explicit Scroll action; the automation bot does not scroll automatically.
What tooling powers the test runner? Are you running Selenium on the VM or something else?
Our test runner is a proprietary solution, unlike Selenium, which locks you into the browser. Our technology allows automation to interact with multiple browser windows, desktop software, and the operating system.
For those who might not use this feature, will crowd testing continue to be well supported?
A significant difference between Rainforest and other automation vendors is our belief that an excellent overall QA strategy combines the power of human and machine capabilities. We plan to continue supporting and improving the way we work with our tester community.
Our long-term vision is to use automation for boring-for-humans testing and the power of human judgment and experience for higher-level tasks. You can continue doing exclusively human testing, and we’ll keep supporting and improving it.
We have sensitive data. Will running automation tests guarantee that no human has access to our IP and data?
Currently, executing a test with automation does not expose any data to human testers. That said, this is not something we can guarantee for every future test. If you need custom guarantees about sensitive data, get in touch with us directly.
What if the page takes time to load? Is there a “wait” function, or does the test fail?
The automation bot has some tolerance built in. For example, if you ask it to click a button, it waits up to 30 seconds for that button to appear. If the button appears within the window, the bot clicks it and continues; otherwise, it fails the test. If your test requires a longer waiting period, you can add a Sleep action, which pauses execution for the specified time.
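Conceptually, that tolerance behaves like the polling loop below, a minimal Python sketch in which `is_visible` is a hypothetical stand-in for the bot’s visual matching:

```python
import time

def wait_for_target(is_visible, timeout=30, poll_interval=0.5):
    """Poll until the target appears on screen or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_visible():
            return True          # target found; the step proceeds
        time.sleep(poll_interval)
    return False                 # target never appeared; the step fails

# Example: succeeds immediately with a target that is already visible.
print(wait_for_target(lambda: True))  # True
```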
What are the limitations?
- Not compatible with mobile at the moment.
- Optimized for a single browser: when executed by automation, a test runs in the browser it was created for. When performed by humans, it’s compatible with multiple browsers.
- Dynamic data can be used for data entry via Test Data but can’t be checked on subsequent screens. Example: The dynamic user is “Fred”; there is no way to check that “Fred” is logged in.
- Calendar and other mathematical manipulation isn’t supported. Example: Can’t advance the date by 3 days. Example: Can’t confirm that the correct sales tax was added to a random item.
- There’s no DevX equivalent for RFA at the moment. However, you can execute tests via the CLI. For more information, see Running Tests from the CLI, and the sketch after this list.
- Automation can’t execute steps that require human judgment (“Is this a picture of a dog?”). However, you can add those steps into your Plain-Text Editor tests. Plain-Text Editor tests can only be executed by our Tester Community. In the future, we plan to support a hand-off between automation and humans within the same test.
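As referenced in the limitations above, you can trigger runs from a script via the CLI. Here’s a minimal sketch, assuming `rainforest-cli` is installed and that the `run` subcommand with `--token` and `--tag` flags matches your installed version; check `rainforest-cli --help` for the exact options.

```python
import os
import subprocess

# Kick off a run of all tests carrying a given tag (the tag name and the
# RAINFOREST_API_TOKEN environment variable are illustrative choices).
result = subprocess.run(
    [
        "rainforest-cli", "run",
        "--token", os.environ["RAINFOREST_API_TOKEN"],
        "--tag", "smoke",
    ],
    check=False,
)
print("exit code:", result.returncode)  # nonzero typically signals failures
```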
Test Retries
Are retried attempts counted in my billing?
Yes, you are charged for test retry attempts unless you’re participating in a free trial of Rainforest.
When considering cost, remember that only failed tests are retried, and only until a passing result is produced (or all attempts fail). If your test suite is well maintained, failures should be uncommon. For this reason, the cost impact of Test Retries should be minimal.
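For a back-of-the-envelope sense of that cost impact, here’s an illustrative calculation with made-up numbers (not billing rules):

```python
# Only failed tests are retried, and only until one attempt passes.
suite_size = 200               # tests per run (illustrative)
failure_rate = 0.03            # fraction of tests failing at least once
avg_retries_per_failure = 1.2  # most flaky tests pass on the first retry

extra_executions = suite_size * failure_rate * avg_retries_per_failure
print(extra_executions)  # ~7 extra test executions per 200-test run
```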
How do I know if Test Retries is a good fit for me?
It depends on the extent to which unreliable results are a problem for you. Some level of inconsistency is unavoidable with any test automation, so automatically rerunning failed tests can be a reasonable proactive measure.
For example, suppose you have an unreliable testing environment that behaves inconsistently or is prone to intermittent issues. In that case, Test Retries can help prevent unwanted noise. However, if your tests aren’t prone to inconsistent results, rerunning failed tests might not be necessary.
What should I do with the flaky test results (a failure that passes when retried) flagged by Rainforest?
In general, we recommend you investigate and resolve sources of test flakiness. Though Test Retries can prevent flaky results from failing your runs and blocking your build pipeline, allowing flakiness to persist is inadvisable. Flaky results slow down your test runs due to time spent on retries, and they can hint at more serious issues such as intermittent bugs, problems with your test environment, or poor data management practices.
You might not need to investigate flaky test results with the same urgency that you’d address failures blocking your continuous integration pipeline. Nevertheless, it’s still advisable to investigate and resolve any flakiness.
If I’ve configured Rainforest webhooks, how does Test Retries work with them?
Depending on the type of webhook you’ve configured, it continues to work normally, firing at either the start or the finish of your test run.
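If it helps to picture the flow: a webhook is an HTTP POST to the URL you configured. The sketch below is a minimal receiver, assuming a JSON payload; the field names shown are illustrative assumptions, not the documented schema.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # "event" and "run_id" are assumed field names for illustration.
        print("run event:", payload.get("event"), payload.get("run_id"))
        self.send_response(200)  # acknowledge receipt
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```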
How does Test Data work with Test Retries?
Test Data works normally with retry attempts. When you leverage Built-In Data, such as random email addresses and inboxes, each retry attempt is allocated a different variable value. Likewise, for Dynamic Data, such as login credentials, each retry attempt is given a different variable value. If your uploaded CSV file contains fewer rows of data than required, the run fails. See Dynamic Data for information on calculating the number of rows required for a test run.
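To size your CSV, here’s an illustrative worst-case calculation, assuming every attempt consumes a fresh row (see Dynamic Data for the exact formula):

```python
# Worst case: one row per attempt, for every test that uses Dynamic Data.
tests_using_dynamic_data = 10
max_retries = 2                      # retry attempts after the initial run
attempts_per_test = 1 + max_retries  # initial attempt plus retries

rows_needed = tests_using_dynamic_data * attempts_per_test
print(rows_needed)  # 30 rows covers the worst case
```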
Is there a way to configure unique retry settings for individual tests?
If you only want to retry a few tests, configure the “Maximum retries” setting on the Global Settings page to 0. For tests you want to retry, configure the “Use Test Retries global setting” option when running your tests. See Test Retries for more information.
A failed test produced a “passed” result after retrying. However, I want to mark it as a “failed” result. Can I do this?
Not at the moment, but let us know. We rely on your feedback to prioritize enhancements, so reach out to our Support team to share it.
If you have any questions, reach out to us at [email protected].