A Visual Editor test does not require any code. This design makes it easy for technical and nontechnical teammates alike to own testing. Moreover, it offers flexibility, which Rainforest users value. Visual Editor tests work on a visual level. Users interact with the UI, not with the DOM.
The Rainforest Automation Service runs your tests automatically on our scalable virtual machine (VM) infrastructure. Your application is tested visually, by comparing the results on the screen with the targets in your test, all without any code.
Visual Editor Automation is pixel based, which means we test what the user sees and how they interact with your site.
On a practical level, some things are not possible with DOM-based testing. These include:
- Working across browser windows
- Interacting with the OS
- Downloading, editing, and uploading files
- Any interaction with desktop software
No code-based system does this reliably, but Visual Editor tests can.
The other difference is more philosophical. Purely visual testing evaluates what a user sees and experiences. DOM-based tools test what the user’s computer sees. In some cases, this distinction is crucial, depending on your application.
Visual Editor enables Rainforest customers to run the same no-code automation test using our robot automation workers or our human crowd. From a single test suite, you can mix and match execution based on your workflow needs. For example, run with humans for major product releases and run every branch merge with robots.
Yes. When running Visual Editor and Plain-Text Editor tests as part of the same group, you have the option of “Crowd-Only” or “Crowd + Automation.” The Automation bot runs Visual Editor tests only once, while the same tests run with our community are executed by at least two testers.
Which of my tests are a good fit for automation? Why would I want to use manual execution vs. the Automation Service?
A good rule of thumb is to use Plain-Text Editor tests for the parts of your product that are not stable. For example, let’s say you’re in the middle of introducing a new design language, or you’re just starting to build out a given area. In these cases, a human tester is more forgiving. They understand ambiguities.
Humans can spot problems you weren’t expecting, such as a missing image or an alert box located at the top of the page that a user might not notice. Human testers can leave comments on your tests, providing more information and pointing out potential problems.
Similarly, human testing is better suited to applications or environments that are dynamic or unpredictable. Examples include the latest news, streams of recent images, relative dates, or pop-up windows appearing at random intervals. Note that you can still write these tests using Visual Editor to provide clear instructions for testers.
For the mature parts of your product, where you know what you want to test and where your environment is predictable, automation is a better fit. It’s faster, and you can run it more frequently, which can accelerate your release cadence.
My team isn’t technical and has never used automation before. How easy is it for nontechnical folks to use your automation?
We’ve built Visual Editor from the ground up to be the best no-code solution available. We’ve had nontechnical users from the start and continue to improve usability. While we want to stay as close to the release process as possible, you don’t need to know how to code to write and maintain good tests.
My test application is dynamic. Some states remain from the last time the user visited. Is there any conditional logic I can use?
There are no conditionals in Visual Editor tests when run in automation. Our Tester Community is better suited to execute these types of tests.
Is there a hybrid model where pieces of the tests are automated, and then the crowd testers take over after setup? Or automated teardown scripts to clean up?
Currently, this is not possible. For now, you can have both automated and non-automated tests within the same run, but you cannot mix and match parts of the tests. This is something we are looking into as part of our roadmap.
Is there an easy way to route failed automated tests to the crowd for a second set of eyes? I’m sure I can do it with my CI tool, but I didn’t know if that was something available out of the box.
There is no out-of-the-box solution for this, but our CLI and the API can be scripted to achieve this.
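The routing logic itself is simple to script. A minimal sketch of the idea, where `results` and `rerun_with_crowd` are illustrative stand-ins (not part of the Rainforest CLI or API) for your CI tool's run results and whatever call kicks off a crowd run:

```python
def route_failed_to_crowd(results, rerun_with_crowd):
    """Re-run failed automated tests with the Tester Community.

    `results` maps test IDs to their automated outcome ("passed"/"failed").
    `rerun_with_crowd` stands in for the CLI or API call that starts a
    crowd run; both names are hypothetical, not Rainforest's API.
    """
    failed = [test_id for test_id, outcome in results.items() if outcome == "failed"]
    if failed:
        rerun_with_crowd(failed)
    return failed
```

In CI, you would parse the automated run's output into `results` and wire `rerun_with_crowd` to the CLI invocation your pipeline already uses.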
Not at the moment. You can run mobile tests using our Tester Community.
No, they are the same. Manage your manual and automated tests on a single platform, or run them together and see the results in one place.
As many as your testing environment can handle. There is no limit to the number of automated tests (or tester community tests, for that matter) you can run in parallel.
Are scroll events captured automatically? Or do they require manually adding an action, as clicks do?
If scrolling is necessary, add an explicit Scroll action; the automation bot does not scroll automatically.
Unlike Selenium, which locks you into the browser, ours is a proprietary solution. Our technology allows automation to interact with multiple browser windows, desktop software, and the operating system.
For those who may not use this feature, are you confident that automated and crowd testing are well supported?
A significant difference between Rainforest and other automation vendors is our belief that an excellent overall QA strategy combines the power of human and machine capabilities. We plan to continue supporting and improving the way we work with our tester community.
Our long-term vision is to use automation for boring-for-humans testing and the power of human judgment and experience for higher-level tasks. You can continue doing exclusively human testing, and we’ll keep supporting and improving it.
Do you have clients who were able to try out Visual Editor in advance? What feedback did you receive?
During our beta period, multiple clients integrated our automation service into their release process. We’ve made many usability improvements based on their feedback. We also saw use cases where automation was not the right fit, with highly dynamic content and an unpredictable testing environment. This partly informed our recommendations around where Visual Editor tests fit well and where human testers are a better solution.
In terms of your roadmap, are you planning to make automation tests easier by adding capabilities such as pulling button names directly from the screen?
Absolutely. Currently, we are working on usability and other improvements.
We have sensitive data. Will running automation tests guarantee that no human has access to our IP and data?
If you need custom guarantees about sensitive data, get in touch with us directly. Currently, executing a test with automation does not expose any data to human testers. In general, though, this is not something we can guarantee for every test.
The automation bot has some tolerance. For example, if you ask it to click a button, it waits up to 30 seconds for the button to appear. If the button appears within the window, the bot clicks it and continues; otherwise, it fails the test. If your test requires a longer waiting period, you can add a Sleep action, which pauses execution for the specified time.
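The bot's tolerance behaves like a poll-until-timeout loop. A minimal sketch of that pattern (the function names are illustrative, not Rainforest's API):

```python
import time

def wait_for(predicate, timeout=30.0, interval=0.5):
    """Poll `predicate` until it returns True or `timeout` seconds elapse.

    Mirrors the bot's tolerance window: the step passes as soon as the
    target appears, and fails only after the full window has expired.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

def sleep_action(seconds):
    """A Sleep action is simply an unconditional pause."""
    time.sleep(seconds)
```

Note the difference: `wait_for` resumes as soon as the target appears, while a Sleep action always waits the full specified time.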
- Not compatible with mobile.
- Optimized for a single browser, meaning that the test runs with the browser it was created for when executed by automation. When performed by humans, it is compatible with multiple browsers.
- Dynamic data can be used for data entry via variables but cannot be checked on subsequent screens.
Example: The dynamic user is “Fred.” There is no way to verify that “Fred” is logged in.
- Calendar and other mathematical manipulations are not supported.
Example 1: Can’t advance the date by 3 days.
Example 2: Can’t confirm that the correct sales tax was added to a random item.
- There’s no DevX/RFML equivalent for Visual Editor tests. However, tests can be executed via CLI. For more information, see Running Tests from the CLI.
- Automation cannot execute steps that require human judgment (“Is this a picture of a dog?”). However, you can add those steps into your Visual Editor tests using the Tester Community action. Any test with Tester Community actions can only be executed using our tester crowd. In the future, we plan to support a hand-off between automation and humans within the same test.
Visual Editor tests employ image matching at the UI level. We do this using grayscale instead of color values, which makes image matching more resilient.
- Color matching cannot be reliably tested today. If you require specific color matching for your test, you should not run Visual Editor tests using automation.
- The Automation Service passes a step if there is at least a 95% pixel match. This means small differences in large, otherwise identical targets are more likely to be ignored: when you create a large target, a minor difference within it can be dismissed as “similar enough” because most of the target still matches. If only a small part of the screen matters, keep the target small as well. Ideally, it should contain only the crucial areas, plus just enough surrounding context to make the image unambiguous. In short, if you want the details to matter, focus the target on those details.
- We can’t test mouse cursor appearance.
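The exact matching algorithm is proprietary, but the two ideas above can be illustrated with a toy sketch: pixels are compared by luminance rather than color, and a fixed pass threshold means a large target can absorb a small mismatch that would fail a small target. The luminance weights and tolerance here are illustrative assumptions, not Rainforest's actual values:

```python
def to_gray(pixel):
    """Luminance of an (R, G, B) pixel, rounded to an int in 0-255."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def match_ratio(target, screen, tolerance=8):
    """Fraction of pixel pairs whose grayscale values agree within `tolerance`."""
    matches = sum(
        abs(to_gray(a) - to_gray(b)) <= tolerance
        for a, b in zip(target, screen)
    )
    return matches / len(target)

def step_passes(target, screen, threshold=0.95):
    """Pass the step when at least `threshold` of the target's pixels match."""
    return match_ratio(target, screen) >= threshold
```

Two consequences fall out directly: a pure-red pixel and a gray pixel of the same luminance are indistinguishable (which is why color matching is unreliable), and one mismatched pixel fails a 10-pixel target (90% match) but passes a 100-pixel one (99% match), which is why tight targets make details matter.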
If you have any questions, reach out to us at [email protected].