As you run Exploratory more often, the number of test cases it generates will inevitably grow, whether each run was intended to find bugs or to create regression tests. Below we've shared some proven best practices for handling both bug-finding and test-creation test cases after they've been logged.

Bug-finding Exploratory test cases

Run the folder of newly documented tests

One of the easiest ways to manage newly documented bug-finding test cases is to run them to verify the bugs. Because every test case is written with the expected result in mind, the entire folder should fail, confirming the bug documented by each test case. If you have run failure integrations in place, your team will automatically be notified and can begin the bug triaging process seamlessly.
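If you'd rather kick off this verification run programmatically than from the Rainforest UI, the sketch below shows one way it could look. It is a minimal illustration only: the endpoint path, the CLIENT_TOKEN header, and the smart_folder_id field are assumptions about the shape of the Rainforest API, and the folder ID is hypothetical, so check the current API documentation (or use rainforest-cli) for the exact names before relying on it.

```python
# Minimal sketch: trigger a Rainforest run for the folder of newly documented
# bug-finding tests. The endpoint path, auth header, and request field names
# below are assumptions for illustration -- verify them against the API docs.
import os
import requests

RAINFOREST_API = "https://app.rainforestqa.com/api/1"  # assumed base URL
API_TOKEN = os.environ["RAINFOREST_API_TOKEN"]          # your API token

def run_folder(folder_id: int) -> dict:
    """Start a run containing every test in the given smart folder."""
    response = requests.post(
        f"{RAINFOREST_API}/runs",
        headers={"CLIENT_TOKEN": API_TOKEN},   # assumed auth header name
        json={"smart_folder_id": folder_id},   # assumed request field
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Hypothetical folder ID for the newly documented Exploratory tests.
    print(run_folder(12345))
```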

Tag or title the test with the priority level of the identified bug

When bugs are identified, tagging the test and/or changing its title with a priority schema lets you communicate the results efficiently across your team. By clearly identifying which bugs are more critical than others, you give your team an order to follow when addressing the bugs found by Exploratory.

Incorporate critical bug flows into your regular regression suite

A tempting thing to do after a bug-finding test case has been resolved is to delete it to eliminate 'extra' test cases. While this might be appropriate for lower-priority or one-time bugs, we recommend keeping the more important and critical test cases and running them as part of your regular regression suite. Because these test cases are written with the expected result in mind, they can be run to ensure that such critical bugs do not recur.

Test-creation Exploratory test cases 

Edit titles to match naming scheme of your test suite

Organizing your test suites with a standard naming schema is a classic way to effectively manage your regression suite. Applying the same standard to tests created through Exploratory will make it easier to incorporate them into your larger regression suite. If you're unsure what naming scheme would make sense, check out our best practices on naming tests here.

Keep the #Exploratory tag and the Exploratory run identification tag

As noted elsewhere, your Exploratory testing team will consist of the same 4 testers each time an Exploratory run is triggered. In the case of test-creation Exploratory, keeping the #Exploratory tag and the numeric Exploratory run identification tag helps those testers recognize, in later test-creation runs, which tests have already been written. This also helps prevent regression suite bloat, saving you the time and trouble of sifting through duplicate test cases later on.

Shared Tips

Tag tests with the name of the tested feature

Tagging Exploratory-generated test cases with the name of the feature is a simple way to bridge the gap between bug-finding and test-creation runs. Feature name tags can also be used to organize these tests in a smart folder devoted specifically to the feature, making it easy to comprehensively test a feature in Rainforest.

Title Exploratory runs with a standard naming scheme

Whenever an Exploratory run is triggered, all tests produced during the run are stored in a smart folder that takes on the title of the run. By standardizing run titles - for example, "Exploratory - Feature" or "Bug-finding/Test Creation - Feature" - the smart folders containing Exploratory results will stay organized in the smart folder view, making it a breeze to sort through results visually.


If you have any questions or would like to share your own best practices for organizing your Exploratory results, please let us know at support@rainforestapp.com!
