A GUIDE TO FINDING GAPS IN YOUR AUTOMATED TEST COVERAGE

For years, our Quality Assurance team manually verified long checklists of features alongside each bi-weekly release. Now, with fully automated deployments shipping multiple times a day, automated tests are the essential tool for preserving site quality and stability.

Every engineering team is responsible for adequate automated testing. But how do you verify that your automated test coverage actually is adequate, especially for critical, heavily used features? How do you create a measurable key performance indicator (KPI) for automated test coverage?

Harnessing the power of event tracking

If you have a tracking service that records use of app features, you can gain valuable test insights by reviewing the tracking events that fire during automated test runs on a build. When an event never fires anywhere within the tests, it highlights a potential automation gap. Combining this build data with customer-use metrics produces a heatmap of potential testing gaps, enabling focused improvement of test coverage on the most-used features.
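As a rough illustration of the comparison, here is a minimal sketch in Python. The event names, counts, and the shape of the tracking data are illustrative assumptions, not Lucid's actual schema; the idea is simply to subtract the set of events seen in a test run from the events customers fire in production, then rank the misses by customer usage.

```python
# Minimal sketch: compare events fired during a test run against production
# event counts and surface the most-used features no automated test touches.
from collections import Counter

def find_coverage_gaps(test_events: set[str], production_counts: Counter) -> list[tuple[str, int]]:
    """Return (event, customer_use_count) pairs for events never fired in tests,
    sorted so the most heavily used untested features come first."""
    gaps = [
        (event, count)
        for event, count in production_counts.items()
        if event not in test_events
    ]
    return sorted(gaps, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    # Events observed while running the automated test suite against a build.
    test_events = {"doc.create", "shape.insert", "export.pdf"}
    # Event counts pulled from the customer-usage tracking service (hypothetical).
    production_counts = Counter({
        "doc.create": 90_000, "shape.insert": 75_000,
        "comment.add": 40_000, "export.pdf": 12_000, "template.publish": 3_000,
    })
    for event, uses in find_coverage_gaps(test_events, production_counts):
        print(f"untested feature: {event} ({uses:,} customer uses)")
```

Run daily against the latest build, a report like this becomes the raw material for the heatmap described above.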

Daily event tracking allows you to monitor and respond to changes in customer use, feature experiments, release of new features, and changes to testing content. Associating test coverage with customer-use statistics also adds motivation to ensure features do not break.

To stay organized, it helps to categorize the events into buckets assigned to the appropriate teams. A dashboard puts all of this information at developers' fingertips, allowing them to view, sort, and filter events, add notes, manually set an event's status, and track team KPI metrics.
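One hedged sketch of how that bucketing might look is below. The team names, event-name prefixes, and status values are hypothetical; the point is that each tracked event carries an owning team, a coverage flag, and a manually overridable status that a dashboard can filter and report on.

```python
# Sketch of bucketing tracked events by owning team for a coverage dashboard.
# Prefix-to-team mapping and statuses are illustrative assumptions.
from dataclasses import dataclass, field

TEAM_PREFIXES = {
    "doc.": "Editor Team",
    "shape.": "Canvas Team",
    "export.": "Platform Team",
}

@dataclass
class TrackedEvent:
    name: str
    covered_by_tests: bool
    status: str = "needs review"          # can be set manually from the dashboard
    notes: list[str] = field(default_factory=list)

def assign_team(event_name: str) -> str:
    """Route an event to its owning team based on a name prefix."""
    for prefix, team in TEAM_PREFIXES.items():
        if event_name.startswith(prefix):
            return team
    return "Unassigned"

def bucket_by_team(events: list[TrackedEvent]) -> dict[str, list[TrackedEvent]]:
    """Group events into per-team buckets for dashboard views and KPI rollups."""
    buckets: dict[str, list[TrackedEvent]] = {}
    for event in events:
        buckets.setdefault(assign_team(event.name), []).append(event)
    return buckets
```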

One tool of many to ensure a quality product

End-to-end tests are best suited to this type of event-tracking analysis, since they most closely simulate real customer usage. The technique has blind spots, though: lower-level tests may provide sufficient feature coverage without ever firing a tracking event, and a tracking event may fire in an end-to-end test even though the feature is not sufficiently tested. While comparing tracking events will not produce a perfect map of automation coverage, it shines valuable light on many real automation gaps.

By using this technique at Lucid, we continue to identify features that need additional automated testing. We gain valuable insight into which experimental feature arms are enabled in production but never exercised in tests. Most importantly, matching test coverage to customer use lets us prioritize identifying and filling potential testing gaps across more than 180,000 tracked features.

Comparing the events fired by automated tests to the events fired by customers' app usage highlights potential automated testing gaps. It also provides a measurable way to monitor, prioritize, and improve your automated test coverage.
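One way to turn that comparison into a single measurable KPI is usage-weighted coverage: the share of customer event volume exercised by at least one automated test. This is an assumption about how such a KPI could be defined, not Lucid's exact metric, but it shows how the same data yields a number a team can track over time.

```python
# Usage-weighted coverage: fraction of customer event volume covered by tests.
from collections import Counter

def usage_weighted_coverage(test_events: set[str], production_counts: Counter) -> float:
    total = sum(production_counts.values())
    if total == 0:
        return 1.0
    covered = sum(count for event, count in production_counts.items() if event in test_events)
    return covered / total

# Example: 3 of 5 events are covered, but they account for most customer usage.
print(usage_weighted_coverage(
    {"doc.create", "shape.insert", "export.pdf"},
    Counter({"doc.create": 90_000, "shape.insert": 75_000, "comment.add": 40_000,
             "export.pdf": 12_000, "template.publish": 3_000}),
))  # -> ~0.80
```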

Have you tried this? Let us know what you think!
