I’ve started coding some tests, and I’m placing them in my own repo, as I may not necessarily want them checked into the main repo.
I’m not sure of the best method for this; at some point I plan to have a machine running various tests I’ve created, which we might not necessarily want to run on a constant basis.
Basically I want to stagger my tests that check various things:
Test set A would check the context menu for “Open a new window” and then open a new window using it.
Test set B would check the context menu for “Open a new private window” and then open a new private window using it.
Then I would alternate running test set A and test set B for faster coverage. Bob Moss talked about this way back when he was a director of QA. I think we can get broad coverage plus a variety of tests running if we spread the tests out across several sets.
My plan is to get good enough that I can do exploratory testing plus some automation in a similar fashion in the areas I am covering, so I have smoke tests and functional tests automated that I can run on a weekly (if not daily) basis. When we get to the stabilization phase, that’s when we can triage which tests we want to push into the rest of the repo. It’s a bit ambitious, I understand, given the amount of maintenance I may have to do, and it also depends on the areas, of course.
Basically, with this I’m experimenting with how much automation we can balance and leverage before the maintenance cost becomes too high. My goal is to optimize my own processes and see what works best for me. If it gets overwhelming, I will change my approach. I’m not even sure I will be able to go through with this idea completely yet.
One of the other things I was thinking about… Is there a way to have Marionette report its results to MozTrap against an associated test case? Then individuals running the automated tests, which don’t report to Treeherder or Jenkins, could still record test runs….
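Roughly what I have in mind looks like the sketch below: a hook that a Marionette test could call on teardown to post one result. To be clear, the endpoint path, payload field names, and auth header here are all assumptions for illustration; MozTrap’s actual REST API would need to be checked before wiring anything up.

```python
import json
import urllib.request

# ASSUMPTION: this URL and the payload shape are hypothetical, not MozTrap's
# documented API; they only illustrate the idea of pushing results per case.
MOZTRAP_URL = "https://moztrap.example.org/api/v1/result/"

def build_result_payload(case_id, run_id, status, comment=""):
    """Map an automated test outcome onto a MozTrap-style result record."""
    if status not in ("passed", "failed", "blocked"):
        raise ValueError("unexpected status: %s" % status)
    return {"case": case_id, "run": run_id, "status": status, "comment": comment}

def report_result(payload, api_key):
    """POST one result; e.g. called from a Marionette test's tearDown()."""
    req = urllib.request.Request(
        MOZTRAP_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "ApiKey %s" % api_key,  # hypothetical auth scheme
        },
    )
    return urllib.request.urlopen(req)
```

If something like this worked, a manual tester’s MozTrap run and an automated run would end up in the same place, which is the whole point.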
This is the type of experimentation I do in order to find bugs faster. It’s how I figured out that exploratory testing finds bugs faster than traditional regression-based testing. I had empirical data, and I tried this approach on multiple products at various companies to make sure it wasn’t just down to my expert knowledge of the product: FileMaker Pro, PowerPoint, and it carried on with Firefox for Android. Moreover, you can find bugs faster by looking at areas that have recently changed and talking to the devs about the changes implemented. This also comes from studying with my QA mentor when I first started in QA. With experience, you can look at the changes and figure out what to test in order to check whether a bug exists, e.g. whether a variable is initialized properly (which can be tested by creating a new profile and seeing if the app crashes).
My mentor had also run automation on new areas as well as old areas. This is where I haven’t done any experimentation yet to see what the best balance is in terms of maintenance cost. I’m not sure how successful my experimentation will be; wish me luck!
The new test can be found here:
Note: I edited this so that I push to a branch of the Gaia repo I already have, upon suggestion by my peers.