Flakiness is a term used to describe sporadically failing tests. These tests don’t produce the same result each time they run, even though neither the test nor the underlying code has changed. Common causes include:
- Network issues
- Concurrency or load issues
- Test environment sluggishness
- Test order dependency
- Nondeterministic behaviors in the application
- Issues with the test itself
- Intermittent bugs
Though some level of flakiness occurs in any test automation, unreliable results can hinder team productivity and undermine confidence in testing accuracy. For this reason, it’s essential to develop a strategy to mitigate flakiness and isolate and handle flaky tests.
Rainforest can retry failed tests executed by our Automation Service to help reduce flaky results and unwanted continuous integration build failures. In doing so, your team saves valuable time and resources.
When you configure Test Retries, Rainforest automatically retries a failed test up to n times, stopping as soon as an attempt passes. For example, after specifying 2 retry attempts, Rainforest retries a failed test up to 2 additional times (for a total of 3 attempts), stopping when an attempt passes or when all attempts fail.
Assuming we have configured Test Retries with 2 retry attempts, here is how a test might run:
- A test runs for the first time. If the test passes, the result is a Pass, and no retry attempts occur.
- If the test fails, Rainforest immediately attempts to run the test a second time.
- If the test passes on the second attempt, the result is a Pass, and no further retries occur.
- If the test fails on the second attempt, Rainforest attempts to run the test a third and final time.
- If the test passes on the third attempt, the result is a Pass.
- If the test fails on the third attempt, the result is a Fail.
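The flow above can be sketched as a simple loop. This is an illustration of the described behavior, not Rainforest’s actual implementation; `run_test` is a hypothetical stand-in for a single test attempt:

```python
def run_with_retries(run_test, max_retries):
    """Run a test up to 1 + max_retries times, stopping at the first pass."""
    total_attempts = max_retries + 1  # the initial run plus the retries
    for attempt in range(1, total_attempts + 1):
        if run_test():
            return "pass", attempt   # a pass on any attempt ends the run as a Pass
    return "fail", total_attempts    # every attempt failed, so the result is a Fail

# Simulate a flaky test that fails twice, then passes on the third attempt.
outcomes = iter([False, False, True])
print(run_with_retries(lambda: next(outcomes), max_retries=2))  # → ('pass', 3)
```

Note that with `max_retries=2`, a test that passes only on its third attempt still produces an overall Pass, which is exactly the flaky case Rainforest flags for investigation.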
Our goal is to reduce unwanted noise in your test results while providing useful information to help you investigate and address sources of flakiness. For this reason, we flag flaky test results and provide debugging tools.
Runs containing at least one inconsistent test result (a test that failed, then passed on retry) are flagged with an exclamation icon.
On the Run Summary page, tests that produced a flaky result are flagged with an exclamation icon, and the number of retry attempts is noted.
The Test Results page displays the result of each retry attempt, along with the reproduction video. With this information, you can investigate and debug any failed retry attempt, even when the ultimate result is a passed test.
Test Retries is configured as a global setting, though you have the option to override this setting for any test run.
You configure a maximum number of retries on the Global Settings page. With a setting > 0, any Automation Service test that fails is retried up to the maximum number of times or until a passed result is produced.
If your account was created after January 12, 2022, your global setting is automatically configured for 1 maximum retry attempt, though you can modify it.
Note: The maximum number you can configure is 3, for a total of 4 attempts.
You can override the global setting when you run tests or create a run group. Doing so is useful for situations where you might want to apply a different number of retries or disable retries altogether.
When running tests or creating run groups, “Use Test Retries global setting” defaults to on. This means that your run inherits the retry value configured in Global Settings, which could be 0, 1, 2, or 3. You have the option to toggle this setting off and apply your own value.
You can use the `--max-retries` flag in the CLI to override the setting for a specific run. If you omit the flag, the default from your account or run group is used.
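For example, an invocation overriding the global setting for a single run might look like the following. The `run` subcommand and `--tag` flag are assumptions here; check `rainforest --help` for the exact commands and flags available in your CLI version:

```shell
# Override the global Test Retries setting for this run only,
# allowing up to 2 retries (3 attempts total) for each failed test.
rainforest run --tag smoke --max-retries 2
```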
If you have any questions, reach out to us at [email protected] or via the chat bubble below.