QASource’s Favorite Metrics for QA Services

QASource | February 8, 2017

What happens when you can’t accurately measure the cost, effectiveness, and progress of a software testing project? A lot of less-than-ideal things. Projects can balloon in cost, creep in scope, slip in quality, or run on for what seems like forever. Without defined metrics attached to each QA project, things can spiral out of control, a nightmare scenario for both the product company and its team and the testing provider and its engineers.

QA services providers use a variety of testing metrics to keep projects moving and in tip-top shape up until delivery to the customer. Here are several of the metrics we use at QASource:

Monthly QA regression summary

This metric shows the trend of verified and closed defects, as well as any reopened defects, on a monthly basis.

What does this metric reveal? If the number of reopened defects continues to rise over a quarter, it could be due to one or several of the following:

  • The QA team is using one defect to track multiple issues, or is reporting new issues found during regression in the same defect.
  • The dev team is not performing its standard spot check before delivering builds to the QA team.
  • There is a lack of communication between the onsite team and the offshore QA team.
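
As a rough illustration, a regression summary like this can be tallied directly from defect records exported from a bug tracker. The Python sketch below is a minimal example using assumed field names ("month", "status") and made-up sample data, not a description of QASource's actual tooling:

```python
from collections import Counter, defaultdict

# Hypothetical defect records exported from a bug tracker; the field
# names and values are assumptions for this sketch.
defects = [
    {"id": 101, "month": "2017-01", "status": "verified"},
    {"id": 102, "month": "2017-01", "status": "closed"},
    {"id": 103, "month": "2017-01", "status": "reopened"},
    {"id": 104, "month": "2017-02", "status": "closed"},
    {"id": 105, "month": "2017-02", "status": "reopened"},
]

def regression_summary(records):
    """Count verified, closed, and reopened defects per month."""
    summary = defaultdict(Counter)
    for defect in records:
        summary[defect["month"]][defect["status"]] += 1
    return summary

for month, counts in sorted(regression_summary(defects).items()):
    print(month, dict(counts))
```

Charting the reopened count month over month makes the quarterly trend described above easy to spot.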

Monthly defects reported summary

As the name implies, the monthly defects reported summary shows how many valid defects the QA team reports each month.

It also shows the trend of duplicate defects, invalid defects, and defects that the QA engineers cannot reproduce. If these numbers continue to rise throughout the month, the increase can be attributed to:

  • Lack of product or domain knowledge in the QA team, or lack of detail in the defect reported
  • Frequent changes in testing requirements for the project
  • Change in hardware or software configuration used by the dev or QA teams
  • Lack of communication between onsite and offsite teams
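
For illustration, the defects reported summary comes down to the same kind of monthly grouping, this time by how each report was resolved. The sketch below assumes a "resolution" field with values such as "valid", "duplicate", "invalid", and "not reproducible"; the field name and values are assumptions, not a particular tracker's schema:

```python
from collections import Counter, defaultdict

def defects_reported_summary(records):
    """Group reported defects by month and count each resolution type."""
    summary = defaultdict(Counter)
    for defect in records:
        summary[defect["month"]][defect["resolution"]] += 1
    return summary

# Hypothetical sample data; field names and values are assumptions.
reported = [
    {"id": 201, "month": "2017-01", "resolution": "valid"},
    {"id": 202, "month": "2017-01", "resolution": "duplicate"},
    {"id": 203, "month": "2017-02", "resolution": "invalid"},
    {"id": 204, "month": "2017-02", "resolution": "not reproducible"},
]

for month, counts in sorted(defects_reported_summary(reported).items()):
    valid = counts["valid"]
    noise = sum(counts.values()) - valid
    print(f"{month}: {valid} valid, {noise} duplicate/invalid/not reproducible")
```

A rising share of duplicate, invalid, or not-reproducible reports is the signal to look into the causes above.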

Automation coverage

This metric shows automation coverage by monitoring the total test case count, and it also tracks the tests still pending automation for specific modules. Any deviation in the total automation test case count can be attributed to:

  • Instability of the affected module’s testing system
  • The obsolescence of the affected module or feature
  • Frequent changes in the affected module, which leads to the creation of new automation scripts and blocks actual test execution

This metric is useful because it helps management come up with a clearer plan of action for pending test cases in modules with lower automation coverage.
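
As a sketch of the arithmetic, coverage for a module is simply automated test cases divided by total test cases, with the remainder counted as pending. The module names, figures, and 80% target in the Python below are made up for illustration:

```python
# Hypothetical per-module counts; module names and figures are made up.
modules = {
    "login":   {"automated": 120, "total": 130},
    "billing": {"automated": 45,  "total": 90},
    "reports": {"automated": 60,  "total": 200},
}

COVERAGE_TARGET = 0.80  # arbitrary threshold for this sketch

for name, counts in modules.items():
    coverage = counts["automated"] / counts["total"]
    pending = counts["total"] - counts["automated"]
    flag = "  <-- below target" if coverage < COVERAGE_TARGET else ""
    print(f"{name}: {coverage:.0%} automated, {pending} test cases pending{flag}")
```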

Monthly automation velocity

This metric measures the number of new automation test cases each month, along with the delivery of new automation scripts and the allocation of resources. A deviation in automation script delivery can mean that:

  • Testing systems are unstable
  • Automation scripts are constantly being updated to accommodate changing requirements
  • Defect fix times are high, meaning the same defects keep getting reopened
  • A release is coming up and the automation team has shifted to functional testing tasks
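
To make the measurement concrete, velocity here is simply the count of new automation scripts delivered in a month, optionally normalized by the engineers allocated to automation. The numbers and field names in the sketch below are hypothetical:

```python
# Hypothetical monthly delivery log; numbers and field names are assumptions.
monthly_deliveries = [
    {"month": "2017-01", "new_scripts": 42, "engineers": 3},
    {"month": "2017-02", "new_scripts": 18, "engineers": 3},  # a dip worth investigating
    {"month": "2017-03", "new_scripts": 40, "engineers": 2},
]

previous = None
for entry in monthly_deliveries:
    per_engineer = entry["new_scripts"] / entry["engineers"]
    change = "" if previous is None else f" ({entry['new_scripts'] - previous:+d} vs. last month)"
    print(f"{entry['month']}: {entry['new_scripts']} new scripts, "
          f"{per_engineer:.1f} per engineer{change}")
    previous = entry["new_scripts"]
```

A sudden drop in either figure is the cue to check the causes listed above.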

The value of metrics

All of these metrics help project leads and managers assess problems with their QA services and take the actions needed to fix them. The benefit is twofold: teams improve their efficiency, and the customer has a better product to deliver to their users.
