Why software companies should measure the impact of every change—including back-end updates.
You probably expect UX and design changes to affect users’ behavior, but are you testing the impact of infrastructural changes as well? Last year we began testing all major changes to Lucidchart, our diagramming web app. We had learned that without careful tracking, we could lose thousands of customers without fully understanding why. Our goal was to improve app performance across our entire user base by implementing Web Graphics Library (WebGL), which would leverage users’ built-in graphics processors to render Lucidchart more quickly. Although increasing payments was not the primary motivation for the project, we tracked payments relative to WebGL use in our analytics tool, Kissmetrics, because we’ve found that measuring a test’s effect on payments provides a general indicator of success or failure. Kissmetrics allowed us not only to find early signs that something was wrong but also to diagnose the problem and solve it before it caused more damage.
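As a rough illustration of what “tracking payments relative to WebGL use” involves: tag each conversion event with the user’s experiment flag, so payments can be segmented by WebGL status later. This sketch uses Kissmetrics’ standard asynchronous JavaScript queue (`_kmq`); the event and property names are hypothetical, not the ones Lucidchart actually used.

```javascript
// Hedged sketch: tagging a conversion event with a WebGL flag so payments
// can be segmented by experiment group later. Event and property names are
// hypothetical; _kmq is Kissmetrics' standard asynchronous command queue.
var _kmq = (typeof window !== 'undefined' && window._kmq) || [];

function recordPayment(amountUsd, webglEnabled) {
  // Kissmetrics drains this queue once its tracking script finishes loading.
  _kmq.push(['record', 'Purchased Subscription', {
    'Amount': amountUsd,
    'WebGL Enabled': webglEnabled
  }]);
}
```

Because the flag rides along on every payment event, comparing conversion rates between the two groups is a simple segmentation query rather than a separate instrumentation project.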
Reading the warning signs
For the initial rollout, we activated WebGL for just 2 percent of users, then increased activation to 10 percent. In each case, we saw the expected improvement in loading speed but also noticed a statistically significant drop in payments. Our first response was to review support requests from users to see if they could reveal what was wrong. Unfortunately, WebGL-related support requests were trickling in at just one or two a week, so it was impossible to identify a trend without deeper behavioral tracking. We then dug into Kissmetrics to see if we could tie the drop in payments to any particular user type. No matter how we sliced the data, we couldn’t identify any obvious trends.
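A staged rollout like this is typically driven by deterministic bucketing, so each user lands in the same group every session and the experiment data stays clean. The following is a minimal sketch of that technique, not Lucidchart’s actual implementation; the hash function and threshold are illustrative.

```javascript
// Hedged sketch: deterministic percentage bucketing for a staged rollout.
// The hash and function names are illustrative, not Lucidchart's scheme.

// FNV-1a 32-bit hash of a string, so a user's bucket is stable across sessions.
function hashUserId(userId) {
  let h = 0x811c9dc5;
  for (let i = 0; i < userId.length; i++) {
    h ^= userId.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0; // force unsigned 32-bit
}

// True when this user falls inside the rollout percentage (0-100).
function inWebglRollout(userId, percent) {
  return hashUserId(userId) % 100 < percent;
}
```

One nice property of the threshold comparison: raising the rollout from 2 to 10 percent only widens the cutoff, so users already in the experiment stay in it.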
Diving into the data
At that point, we faced a difficult choice: should we give up on WebGL even though we knew it could improve most users’ experience, or should we continue testing it even though it was costing us money every day? In the end, work we had done beforehand to estimate the value of a better rendering experience led us to keep testing WebGL. But we still needed more information, so we configured Kissmetrics to gather additional data points for each user, including WebGL renderer, WebGL vendor, browser type, browser version, OS, and OS version. It quickly became apparent that users on Firefox and Safari were paying at much lower rates than normal when WebGL was enabled. That was the smoking gun we were looking for. From there, it was much easier for our engineers to home in on the problem.
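The renderer and vendor data points can be gathered through the standard `WEBGL_debug_renderer_info` extension, which unmasks the GPU’s renderer and vendor strings. A minimal sketch, with a hypothetical function name and property keys (browser and OS details would come from the user agent rather than the WebGL context):

```javascript
// Hedged sketch: gathering the per-user graphics data points sent to
// analytics. collectGraphicsInfo is a hypothetical name;
// WEBGL_debug_renderer_info is the standard extension that unmasks
// the GPU renderer and vendor strings.
function collectGraphicsInfo(gl) {
  if (!gl) {
    // No WebGL context could be created at all.
    return { webglRenderer: 'none', webglVendor: 'none' };
  }
  var info = gl.getExtension('WEBGL_debug_renderer_info');
  if (!info) {
    // A context exists, but this browser masks the GPU details.
    return { webglRenderer: 'masked', webglVendor: 'masked' };
  }
  return {
    webglRenderer: gl.getParameter(info.UNMASKED_RENDERER_WEBGL),
    webglVendor: gl.getParameter(info.UNMASKED_VENDOR_WEBGL)
  };
}
```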
Since none of the computers in our office were having trouble, we needed to test older machines. We offered lunch to any employee who would contribute their time and an old computer to a “laptop graveyard” for testing and soon discovered an old MacBook that didn’t support WebGL. One intrepid developer even tested WebGL on every computer for sale at the local Best Buy, eventually finding one that didn’t support it.
Analytics pays off
It turned out that some users’ browsers reported experimental WebGL support but lacked a required extension, making those machines appear to support WebGL when they actually didn’t. Once we discovered the issue, we started checking for that extension before enabling WebGL, then rolled it out to all our users and saw the expected performance improvements.
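A capability gate of that kind checks two things: that a real or experimental WebGL context can actually be created, and that the required extension is present before the feature is enabled. This is a sketch of the pattern, not Lucidchart’s code, and since the specific extension isn’t named above, it is left as a parameter here.

```javascript
// Hedged sketch: gate WebGL on an actual context *and* a required extension,
// rather than trusting that a context alone means full support. The specific
// extension that was missing isn't named in the text, so it's a parameter.
function supportsWebgl(canvas, requiredExtension) {
  var gl = canvas && (canvas.getContext('webgl') ||
                      canvas.getContext('experimental-webgl'));
  if (!gl) return false;
  if (!requiredExtension) return true;
  // getExtension returns null when the extension is unavailable.
  return gl.getExtension(requiredExtension) !== null;
}
```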
It can be tempting to skimp on analytics and instead rely on users to report problems. However, since most users just leave if the product’s not working for them, it can take much longer to diagnose any problems. Monitoring the data allowed us to zero in on the issue much more quickly. Best of all, after implementing WebGL for all users, we saw the increase in revenue that we had been expecting all along. Thanks to the powerful analytics capabilities of Kissmetrics, we turned what could have been a million-dollar mistake into a major win both for us and our users.