A frequently used term when discussing Rainforest Exploratory is 'scope'. In the context of Rainforest Exploratory, the scope of an Exploratory run is the whole of what is being tested in that run. To determine the scope of an Exploratory run, there are 3 primary considerations: Exploratory resources, the object of testing, and the level of coverage.
Exploratory Resources
When determining the scope of an Exploratory run, the first thing to consider is the resources Rainforest allocates for the run. These resources are the same for every Exploratory run.
Exploratory testers work in teams of 4, and that same team of 4 testers will be sent to test your application each time you trigger an Exploratory run. Each tester will perform 2 hours of testing, for a total of 8 hours of testing per run. These 8 hours of testing can occur at any time during the allotted 48-hour window that begins the moment a run is triggered.
The Object of Testing
The object of testing is what is actually being tested during that Exploratory run. While Exploratory runs are often constructed around a single object of testing, such as a feature or a portion of the application, how that object interfaces with the rest of the application is another important consideration.
Suppose, as an example, that a new feature 'A' is designated as the object of testing for an Exploratory run. Let's also suppose that feature 'A' functions primarily by interacting with features W, X, Y and Z. Because feature 'A' functions via its interactions with these other features, how 'A' interacts with W, X, Y and Z is the actual scope of the Exploratory run.
In this hypothetical, while the intent of the Exploratory run is to "test the functionality of feature 'A'", the scope of the run actually encompasses at least 4 test scenarios.
Level of Coverage
When planning out an Exploratory run, a final consideration is the level of coverage for the object being tested. In the context of Exploratory, coverage can be thought of as a balance between breadth and depth of testing. Determining this balance, based on your needs, will help in approximating the scope of an Exploratory run.
Using the example from before, the objective of the run testing feature 'A' against features W, X, Y and Z was simply to 'test the interaction of feature A with features W, X, Y and Z'. This indicates to the tester team that each tester's time should be portioned out evenly across all 4 scenarios.
But imagine that, out of the 4 primary features, the interaction of feature 'A' with features 'W' and 'Z' is heavier and more frequent than its interaction with 'X' and 'Y'. When considering what the scope of the run should be, it may be of more value to test feature 'A' and its interactions with features 'W' and 'Z' in greater depth within one run, and then save testing between 'A' and both 'X' and 'Y' for a follow-up run.
Alternatively, the run objective can specifically instruct the tester team to test 'A' against W, X, Y and Z but place a heavier focus on 'W' and 'Z'. This configuration of the run may strike a balance between breadth and depth.
Pro-tip: Discuss Scope with your CSM
With all these considerations to juggle, learning how to scope out Exploratory instructions can seem daunting. For your first few Exploratory runs, we highly recommend sending your draft run instructions to your Rainforest CSM for review.
If you have additional questions or would like some advice, please reach out to email@example.com!