Validate Your Automated Tests for Ability to Catch Bugs

By: Peter Jonasson, Senior QE Manager, Product Globalization - VMware, Inc.

22 February 2016

Running a successful Quality Engineering (QE) effort at scale requires automation. Simply put... any test that has to run more than once ought to be automated. But when investing in automation, don't lose focus on the testing itself, which is the ultimate goal.

Your team probably has a solid set of manual tests that have caught bugs and will continue to catch them. If the focus shifts entirely to automation, testing plays second fiddle and the QE team gives up its fundamental value-add to the organization. Automation plays its greatest part during regression test cycles, especially in update/maintenance releases where there is less code churn (new or edited features).

Once automated tests are declared ready, can they actually find bugs? You simply can't know what future bugs will look like, so the test harness can't be validated against them directly. Luckily, your team has probably filed valid bugs in the past, and that error-prone code can likely be brought back to life via the build system.
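As a rough illustration, resurrecting a buggy build might look something like the minimal Python sketch below; the bug ID, tag name, and build command are all hypothetical placeholders for whatever your version control and build system actually use:

```python
# Minimal sketch, assuming a git-based build system. BUG_ID,
# BUGGY_REVISION, and the "make build" step are hypothetical placeholders.
import subprocess

BUG_ID = "BUG-1234"          # hypothetical: a previously filed, since-fixed bug
BUGGY_REVISION = "v2.0.1"    # hypothetical: last tag where the bug reproduced

def resurrect_buggy_build(revision: str) -> None:
    """Check out the old revision and rebuild it so tests can run against it."""
    subprocess.run(["git", "checkout", revision], check=True)
    subprocess.run(["make", "build"], check=True)  # hypothetical build step

if __name__ == "__main__":
    resurrect_buggy_build(BUGGY_REVISION)
    print(f"Build containing {BUG_ID} is ready for validation")
```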

Testing automated tests may sound redundant, but it is crucial. Bug Injection Practice (BIP) refers to running valid automated tests on code/builds containing known bugs that were identified via manual testing in the past. Think of it as vaccinating the automated tests against missing bugs... especially older valid bugs that may regress within the same product. Accounting for previously sensitive areas of code with fixed bugs, especially during a maintenance release, is a high-value-add for any QE team.
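Here is a minimal sketch of what a BIP run could look like in Python; the bug IDs, test IDs, and pytest runner are assumptions for illustration, not a prescribed implementation. The logic is inverted from a normal test run: on the buggy build, a valid automated test is expected to FAIL, and a pass means it missed the bug.

```python
# Minimal BIP sketch: each automated test is validated by confirming it
# FAILS (i.e., catches the bug) on a build known to contain that bug.
# The bug IDs, test IDs, and pytest runner are hypothetical placeholders.
import subprocess

KNOWN_BUGS = {
    "BUG-1234": "tests/test_login.py::test_unicode_username",  # hypothetical
    "BUG-5678": "tests/test_export.py::test_date_format",      # hypothetical
}

def test_catches_bug(bug_id: str, test_id: str) -> bool:
    """On a buggy build, a valid test must fail; a pass means it missed the bug."""
    result = subprocess.run(["pytest", test_id], capture_output=True)
    caught = result.returncode != 0  # non-zero exit: the test saw the bug
    print(f"{bug_id}: {'caught' if caught else 'MISSED'}")
    return caught

if __name__ == "__main__":
    results = [test_catches_bug(bug, test) for bug, test in KNOWN_BUGS.items()]
    print("BIP passed" if all(results) else "BIP failed: some tests missed their bugs")
```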

So you run the automated tests on the old build, and indeed your framework catches the old bugs just like your manual tests did. Fantastic. If not, go back to the drawing board with the automated test/script and research why. This may require feedback from other teams, and it will generate a good group discussion about automation while adding a deeper understanding of the framework you've chosen, or even of the test itself. For complex test scenarios, manual effort may be more cost-effective?! If time allows, perhaps a "trailing manual test cycle" should follow the automated test cycle to identify discrepancies in end results, as sketched below.
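One way to picture that trailing cycle: diff the two sets of verdicts and flag every disagreement for discussion. The test-case IDs and verdict data below are a hypothetical sample, not real results:

```python
# Minimal sketch of a "trailing manual test cycle" comparison: manual and
# automated verdicts for the same test cases are diffed to surface
# discrepancies. Both result dictionaries are hypothetical samples.
automated_results = {"TC-101": "pass", "TC-102": "fail", "TC-103": "pass"}
manual_results    = {"TC-101": "pass", "TC-102": "fail", "TC-103": "fail"}

def find_discrepancies(automated: dict, manual: dict) -> list:
    """Return test cases where automation and manual testing disagree."""
    shared = automated.keys() & manual.keys()
    return [tc for tc in sorted(shared) if automated[tc] != manual[tc]]

if __name__ == "__main__":
    for tc in find_discrepancies(automated_results, manual_results):
        # Each discrepancy is a candidate for group discussion: is the
        # automated script wrong, or did the manual tester miss something?
        print(f"{tc}: automated={automated_results[tc]}, manual={manual_results[tc]}")
```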

Good luck... and test the automated test... before you test!

Peter Jonasson

Product Globalization QE Manager for VMware focusing on i18n and L10n end-to-end UI testing via manual and automated effort. Leading a host of global QE releases including vSphere, NSX, vSAN, and vCD.
