Rainforest Automation (RFA) is Rainforest's unique and powerful automation offering.

Is RFA code-based?
Rainforest Automation is no-code. This keeps it easy for non-technical and technical teammates alike to own testing, which preserves the flexibility that Rainforest users value. Importantly, it works completely on the visual level - just like humans do. Your users interact with your visual UI, not with the DOM, and your tests should reflect that.

Rainforest Automation runs your application automatically on our scalable VM infrastructure. Your application gets tested visually by comparing your test with the application shown live. All without code.

How does RFA do its evaluation?
RFA is pixel-based rather than DOM-based, meaning we test what your user sees and how they will ultimately interact with your site.

On a practical level, there are things that are simply impossible with DOM-based testing: working across browser windows, interacting with the OS, downloading, editing, and uploading files, interacting with desktop software, and so on. No code-based system can do this reliably, but Rainforest Automation does.

The other difference is more philosophical. Purely visual testing tests what your users actually see and experience. DOM-based tools, on the other hand, test what your users' computers see. In some cases this distinction is important, but it will depend on your particular application.

Can RFA tests be run against humans as well?
Rainforest Automation enables Rainforest customers to run the same no-code automation test against either our robot automation workers or our human crowd.

From a single test suite you can mix and match execution based on the needs of your workflow: run against humans for major production releases, run every branch merge against robots.

Can I run tests against the Tester Community and against Automation in the same run?
Yes, when running Plain-English tests and RFA tests as part of the same group, you will see the option to run tests against “Crowd-Only” or “Crowd + Automation.”

Automation tests are run only once by the automation bot; however, you can run those same tests against our community as well, and then they will be executed by at least three testers.

When looking at my tests, what tests are a good fit for your automation? What is NOT good for automation? Why would I want to use manual execution vs automation?

A good rule of thumb is to use manual tests for parts of your product that are not yet completely stable. If you’re in the middle of introducing a new design language, or just starting to build a given area, human testers are more forgiving and better able to resolve ambiguities. They can also spot problems you’re not expecting: e.g. a missing image or an alert box at the top of the page will not be picked up by automation unless you’re specifically looking for it. Human testers, on the other hand, can leave comments on your tests, giving you more information and pointing out potential problems.

Similarly, if your application or testing environment is dynamic or unpredictable (e.g. latest news, streams of recent images, relative dates, pop-up windows appearing at random intervals) you will have more success with human testers. You can still write your tests using the RFA interface though - this will make them very clear for testers to follow.

For mature parts of your product, where you know well what you want to test and your environment is predictable, automation is a better fit: it’s faster and you can run it much more frequently, speeding up your release cadence.

My team isn't technical and so they haven't been able to do automation before. Have you had non-technical folks use your automation?

Absolutely! We’ve built automation from the ground up to be the best no-code test automation available. We’ve had non-technical users from the start and continue to improve usability. While we want to stay as close to the release process as possible, you don’t need to know how to code to write and maintain good tests.

My test application is very dynamic - some states may remain from the last time the user visited. Is there any conditional logic?

There are no conditionals in Rainforest Automation.

Is there a hybrid model? Pieces of the tests are automated and then the crowd testers take over after the "setup"? Or automated teardown scripts to clean up?

This is not currently possible. For now you can have both automated and non-automated tests within the same run, but you cannot mix and match parts of tests. It is something we are looking at as part of our roadmap, however.

Is there an easy way to route failed automated tests to the crowd for a second set of eyes? I'm sure I can do it with my CI tool, but I didn't know if that was something available out of the box.

There is no out-of-the-box solution for this, but indeed, both our CLI and the API can be scripted to achieve this!
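
For illustration, a script in your CI pipeline could look roughly like the sketch below: fetch the results of the finished automation run, collect the tests that failed, and start a new run of just those tests against the crowd. Everything specific in it (the endpoint paths, the CLIENT_TOKEN header, the "result" and "crowd" fields, the response shapes) is an assumption made for the sketch rather than documented API behavior, so check the Rainforest API reference for the actual parameters before building on it.

    # Hypothetical sketch: re-run failed automation tests against the tester crowd.
    # Endpoint paths, header name, payload fields and response shapes are assumptions
    # for illustration only; consult the Rainforest API docs for the real interface.
    import os
    import requests

    API_BASE = "https://app.rainforestqa.com/api/1"  # assumed base URL
    HEADERS = {"CLIENT_TOKEN": os.environ["RAINFOREST_API_TOKEN"]}  # assumed auth header

    def failed_test_ids(run_id):
        """Collect IDs of tests that failed in a finished automation run (assumed response shape)."""
        resp = requests.get(f"{API_BASE}/runs/{run_id}/tests", headers=HEADERS)
        resp.raise_for_status()
        return [t["test_id"] for t in resp.json() if t.get("result") == "failed"]

    def rerun_against_crowd(test_ids):
        """Start a new run of the given tests, executed by human testers (assumed payload)."""
        payload = {"tests": test_ids, "crowd": "default"}  # the "crowd" key is a placeholder
        resp = requests.post(f"{API_BASE}/runs", headers=HEADERS, json=payload)
        resp.raise_for_status()
        return resp.json()["id"]

    if __name__ == "__main__":
        automation_run_id = os.environ["AUTOMATION_RUN_ID"]  # e.g. exported by your CI job
        failed = failed_test_ids(automation_run_id)
        if failed:
            new_run_id = rerun_against_crowd(failed)
            print(f"Re-running {len(failed)} failed tests against the crowd (run {new_run_id})")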

Is RFA compatible with mobile? 

Not at the moment.

Is RFA a separate platform from Rainforest?

No, they are the same platform. Manage both your manual and automated tests on a single platform, run them together, and see results in one place.

How many automated tests can be run in parallel?

As many as your testing environment can handle! There is no limit to the number of automated tests (or tester community tests, for that matter!) that can be run in parallel.

Are scroll events captured automatically? Or does that require a manual event addition like clicks, see, etc?

If scrolling is necessary, add a Scroll action explicitly; the automation bot will not scroll automatically.

What tooling powers the test runner? Are you running selenium on the VM or something else?

It is a proprietary solution (not e.g. Selenium, which would lock us into the browser). Our technology allows our automation to interact with multiple browser windows, desktop software, the operating system, and so on.

For those that may not end up using this feature, is this pulling development time from crowd testing feature work or do you feel confident that both automated and crowd testing is well supported? Can you speak to that a little bit?

One of the biggest differences between Rainforest and other automation vendors is our strong belief that good overall QA strategy combines the power of human and machine capabilities. We plan to continue both supporting and improving the way we work together with our tester community. Our long-term vision is using automation for the boring-for-humans testing and the power of human judgement and experience for higher-leverage tasks. You can continue doing exclusively-human testing and we’ll keep supporting and improving it.

Do you have client(s) that were able to try out this RF Automation in advance? What was the feedback from them?

Yes - during our beta period, we’ve had multiple clients start using RFA and integrate it into their release process. We’ve made many usability improvements based on their feedback.

We also found places where Rainforest Automation might not be the right fit, especially around highly dynamic content and scenarios where the testing environment is not predictable. This partly informed our recommendations above about where RFA fits well and where using human testers might be a better idea.

Any sense of the roadmap for next steps? Maybe even easier creation of automation tests (i.e. pulling button names directly from the screen rather than typing them)?

Absolutely! We have a bunch of usability (and other) improvements coming up. Watch this space!

We have sensitive data. Will running those automation tests guarantee that no human will have access to our usernames/passwords and data?

If you need custom guarantees about sensitive data, please get in touch with us directly. While it’s true that executing a test with automation does not currently expose it to any human testers, this is not something that we can guarantee in general for every future test.

What if the page takes time to load? Is there something like a "wait" function, or will the test most likely fail?

The automation bot has some tolerance: for example, if you ask it to click on a certain button, it will wait for some time (up to 30s) for the button to appear. If it does appear within this time, the bot will click and continue; otherwise, it will fail the test after 30s.

If you know you need to wait for longer (e.g. if you’re testing email and it might take a couple minutes to arrive etc.) you can add a Sleep action explicitly. This will pause execution for the given number of seconds and proceed afterwards.

What are the limitations?

  • Not compatible with mobile at the moment
  • Optimized for a single browser: when executed by automation, a test can only be run against the browser it was created for. When executed by humans, it is compatible with multiple browsers.
  • Dynamic data - can be used for data entry via variables, but cannot be checked on future screens.
      • E.g. the dynamic user is “Fred”: you can’t ask it to check that “Fred” is now logged in.
  • Calendar and other mathematical manipulation is not supported.
      • E.g. you can’t ask it to advance the date by 3 days.
      • E.g. you can’t ask it to confirm that the appropriate sales tax was added to a random item.
  • There’s no DevX/RFML equivalent for RFA at the moment. However, tests can be executed via CLI. See here for details.
  • Automation cannot execute steps that require human judgment - “is this a picture of a dog?” However, you can add those steps into your RFA tests using the action “Plain Language”; any tests containing the “Plain Language” action can only be executed against our tester crowd at the moment, but in the future we plan to support a pass-off between automation and humans within the same test. 
  • Instant Replay on the Test Building page does not currently support the replay of Reusable Tabular Variables. This is a temporary issue and will be fixed soon.

Known image matching limitations

As noted above, RFA uses image matching at the UI level. Matching is done using grayscale values rather than color, which makes it more resilient. Color matching cannot be reliably tested today; if specific color matching is necessary for a test, RFA should not be used. (A rough sketch of the idea is shown after the list below.)

  • Grayscale matching.
  • While we’re sensitive to edges/contrast in images, we’re less sensitive to absolute values.
  • Small differences in large, otherwise-identical targets are more likely to be ignored: because most of a large target matches, small differences within it may be treated as “similar enough”. If a small part of the screen is important, the target you create should be small as well, ideally containing only the important areas (plus as much context around them as necessary to make the target unambiguous).
  • Can’t test mouse cursor appearance.
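
For intuition only, here is a minimal sketch of grayscale template matching using OpenCV. This is not Rainforest's implementation; the library, matching method, and threshold are illustrative assumptions, but the sketch shows why color values are discarded and why a tightly cropped target catches more detail than a large one.

    # Conceptual sketch of grayscale template matching (illustrative only,
    # not Rainforest's actual implementation).
    import cv2

    def find_target(screenshot_path, target_path, threshold=0.9):
        """Return the (x, y) of the best match for the target, or None if nothing is similar enough."""
        # Both images are loaded as grayscale, so absolute color values are discarded.
        screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
        target = cv2.imread(target_path, cv2.IMREAD_GRAYSCALE)

        # Normalized cross-correlation: scores close to 1.0 mean a near-perfect match.
        scores = cv2.matchTemplate(screen, target, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, best_location = cv2.minMaxLoc(scores)

        # A large target can still score above the threshold when a small region
        # inside it differs, which is why small, focused targets are stricter.
        if best_score < threshold:
            return None
        return best_location  # top-left corner of the matched region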
