Raising the question:

Per version of the product, per run, could you graph how many bugs each test case produces?

The reason I ask: how can we tell how effective a test case is, and which test cases are the best ones to run?
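Here's a minimal sketch of what that graph could look like, assuming you can export run records as (version, run, test case, bugs found) tuples from your test management tool. The sample data below is made up for illustration:

```python
# Minimal sketch: total bugs found per test case, per product version.
# Assumes run records can be exported as (version, run_id, test_case,
# bugs_found) tuples; the sample data here is invented.
from collections import defaultdict

import matplotlib.pyplot as plt

runs = [
    # (version, run_id, test_case, bugs_found)
    ("1.0", 1, "login_positive", 0),
    ("1.0", 1, "login_bad_password", 2),
    ("1.1", 2, "login_positive", 0),
    ("1.1", 2, "login_bad_password", 1),
]

# bugs_by_case[test_case][version] -> total bugs across all runs
bugs_by_case = defaultdict(lambda: defaultdict(int))
for version, _run_id, case, bugs in runs:
    bugs_by_case[case][version] += bugs

# One line per test case: versions on the x-axis, bugs found on the y-axis.
for case, per_version in bugs_by_case.items():
    versions = sorted(per_version)
    plt.plot(versions, [per_version[v] for v in versions], marker="o", label=case)

plt.xlabel("Product version")
plt.ylabel("Bugs found")
plt.legend()
plt.show()
```

A flat line at zero across several versions would be one signal that a test case isn't earning its run time.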

Personally, I would keep all the positive test cases on a need-to-run basis for each stage of a project, and hopefully automate them. Negative test cases? Probably just the high-risk ones, i.e. the ones where a failure would mean bad news for the project from consumers. Hopefully those are automated too.
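As a rough sketch of that selection policy (the test names, types, and risk tags below are all invented for illustration):

```python
# Sketch of the selection policy above; every test name, type, and
# risk tag is invented for illustration.
test_cases = [
    {"name": "checkout_happy_path", "type": "positive", "risk": "high"},
    {"name": "checkout_expired_card", "type": "negative", "risk": "high"},
    {"name": "checkout_unicode_name", "type": "negative", "risk": "low"},
]

# Automate all positive cases, plus only the high-risk negative ones.
to_automate = [
    t["name"] for t in test_cases
    if t["type"] == "positive" or t["risk"] == "high"
]
print(to_automate)  # ['checkout_happy_path', 'checkout_expired_card']
```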

The rest? I'm not so sure. In a release cycle you have limited time, and you should use it effectively if you want to make your deadline.

IMO, this means you want to test as efficiently as possible so you uncover bugs as quickly as possible. Risk analysis based on how things are coded, how they work, how they are designed, how long they take to implement, and so on should point you at the high-risk areas to hit first.
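One way to make that concrete, as a minimal sketch: score each area on a few risk factors and test in descending order of score. The areas, factor values, and weights below are all made up; plug in whatever your own risk analysis actually produces:

```python
# Rough sketch: rank areas by a weighted risk score so the riskiest
# get tested first. Areas, factor values, and weights are all made up.
areas = {
    # churn/complexity: higher = riskier; maturity: higher = safer
    "payments": {"churn": 9, "complexity": 8, "maturity": 3},
    "sync":     {"churn": 7, "complexity": 9, "maturity": 4},
    "settings": {"churn": 2, "complexity": 3, "maturity": 8},
}

def risk_score(f):
    # Weighted sum; the weights are a judgment call, not a standard formula.
    return 0.4 * f["churn"] + 0.4 * f["complexity"] + 0.2 * (10 - f["maturity"])

for name in sorted(areas, key=lambda a: risk_score(areas[a]), reverse=True):
    print(f"{name}: risk {risk_score(areas[name]):.1f}")
# payments: risk 8.2 / sync: risk 7.6 / settings: risk 2.4
```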

When you run test cases and they aren't effective at finding bugs, I would hope they are at least effective at confirming something, like a positive result. Otherwise you may just be wasting time running them.


