At Lucid, we use data to support almost every important decision we make. Forecasting the lifetime value (LTV) of our customers is crucial to informing many of these decisions, so we have invested significant time to ensure that we can do so accurately.

Since I started full-time as a data scientist at Lucid last summer, LTV modeling has been one of my top priorities. In this article, I explain the business value and technical details behind the work I’ve been involved in.

## Importance of LTV modeling

LTV represents the monetary value of a customer over an extended period of time. As such, LTV can inform various financial decisions, including how much to spend on customer acquisition and how to allocate those funds across various channels.

At first glance, LTV appears straightforward—it’s a single number that represents a customer’s value to the company. However, its precise meaning varies from one company to another. At Lucid, LTV represents the amount of **nominal recurring revenue attributable to a given customer over a 48-month period**. That’s a mouthful, so let’s look at each piece one at a time:

- **Nominal:** We don’t perform any discounting based on inflation or the time value of money.
- **Recurring revenue:** We include revenue from recurring subscription charges but not from one-time purchases, such as professional services.
- **Attributable to a given customer:** We perform all LTV calculations one customer at a time (as opposed to in aggregate).
- **Over a 48-month period:** Rather than predicting LTV over an unbounded lifetime, we consider only the first 48 months from milestone events, such as product registration or the start of a paid subscription.

The first two aspects of our LTV definition are primarily simplifying assumptions and could change as our LTV modeling becomes more sophisticated. The last two aspects, however, reflect deliberate decisions about how we use LTV. Having LTV forecasts at a granular level allows us to understand and compare the value of different customer segments. Employing a fixed horizon simplifies comparisons of actual and forecasted LTV; avoids making tenuous assumptions about long-term customer behavior; and standardizes the period over which we calculate and optimize return on investment (ROI).
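The definition above reduces to a simple computation once the monthly revenue series is in hand. Here is a minimal sketch (function and constant names are illustrative, not Lucid’s actual code):

```python
# A hypothetical sketch of the fixed-horizon LTV definition: an undiscounted
# (nominal) sum of recurring revenue over the first 48 months from the
# milestone event. One-time purchases are assumed to be excluded upstream.

HORIZON_MONTHS = 48

def ltv_48_month(monthly_recurring_revenue: list[float]) -> float:
    """Sum recurring revenue over the first 48 months, with no discounting."""
    return sum(monthly_recurring_revenue[:HORIZON_MONTHS])

# A customer who pays $10/month for 12 months and then churns:
print(ltv_48_month([10.0] * 12))  # 120.0
```

Note that months beyond the 48th are simply ignored, which is what makes actual and forecasted LTV directly comparable across cohorts.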

## LTV use cases

At Lucid, we currently use LTV for three primary purposes: financial planning and analysis; marketing campaign optimization; and A/B test valuation. All three purposes represent opportunities to **financially optimize our business using customer-level revenue forecasts**.

### Financial planning and analysis

LTV is useful in a wide variety of financial analyses. At Lucid, some examples include the following:

- Estimating the total value of customers acquired over a given period of time to assess company performance
- Calculating the ROI of various efforts within marketing and sales
- Informing budgeting decisions based on expected ROI estimates

### Pay-per-click advertising

Pay-per-click (PPC) advertising on search engines represents a particularly straightforward opportunity to optimize ROI using LTV. To engage in this form of advertising, companies place bids on keywords related to their products and solutions. At Lucid, for example, these keywords include “flowchart maker” and “make diagram.”

As any PPC specialist will tell you, not all keywords are created equal. Advertisers are willing to pay more for keywords that they believe are more likely to lead to conversion events (e.g., trials or subscriptions). At Lucid, we use LTV to inform bidding decisions by tying user-level LTV forecasts back to individual clicks. Because these LTV forecasts represent expected revenue over a four-year horizon, we can use them to calculate our expected financial return on different keywords and optimize bids accordingly.
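The core arithmetic behind this kind of bid valuation can be sketched as follows. The conversion rates, LTV figures, and break-even framing here are purely illustrative, not Lucid’s actual bidding logic:

```python
# Hypothetical sketch of keyword valuation using LTV forecasts: a click's
# expected value is the conversion rate times the forecasted LTV per
# conversion. All numbers below are made up for illustration.

def expected_value_per_click(conversion_rate: float, ltv_per_conversion: float) -> float:
    """Expected 48-month revenue attributable to a single click."""
    return conversion_rate * ltv_per_conversion

# Two hypothetical keywords: bidding above a keyword's expected value per
# click would, on average, lose money over the 48-month horizon.
flowchart_maker = expected_value_per_click(0.02, 300.0)  # ≈ 6.0 per click
make_diagram = expected_value_per_click(0.01, 240.0)     # ≈ 2.4 per click
print(flowchart_maker, make_diagram)
```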

### A/B test valuation

Like many tech companies, we perform extensive A/B testing. Within each test, we track a variety of metrics to ensure the changes being tested improve the user experience. These analyses lead to a natural follow-on question: How much value does this change provide to Lucid?

One of the challenges associated with answering this question is that tests often create trade-offs between various metrics. For example, we might run a test that increases the proportion of users that start free trials, but that increase might come at the expense of users that start subscriptions without a preceding trial. Using LTV as a test metric has the benefit of counterbalancing these metrics against each other—it provides an estimate of the test’s financial impact using only one metric.

## Defining value

Now that I’ve explained the value of LTV forecasts, let’s dive into how we actually generate them. The first step in that process is to **meticulously define what we want LTV to represent**, including which population to model and how to calculate the historical data points that the model will forecast.

### Choosing a population

To motivate what it means to choose a population, consider the use cases explained above. On the one hand, subscriptions and payments are account-level events. On the other hand, advertising and test assignment generally occur at the user level. This contrast raises the question: At what level should we calculate LTV?

The answer generally depends on the intended use cases and their relative importance. At Lucid, we decided to create separate models for each level of granularity mentioned above—user and account—each tailored to the use cases it would support.

Choosing a population also means deciding when to generate LTV forecasts and who to include. For example, our account-level model generates forecasts when accounts start paid subscriptions. Our user-level model, on the other hand, generates forecasts when users register, regardless of their subscription level. In each case, the timing is tailored to the use cases supported by the model.

Now, let’s walk through how historical data points are calculated for each model.

### Account LTV model

At Lucid, monthly subscriptions represent the shortest subscription term we offer, so a period of one month is the “least common denominator” in terms of the lifecycle of an account. Based on this fact, we structured our LTV models to use historical revenue values normalized to a period of one month. We refer to these values as “incremental LTV.”

In the case of the account LTV model, incremental LTV corresponds to accounts’ monthly recurring revenue (MRR). For mature cohorts, the actual (as opposed to forecasted) 48-month LTV is the sum of the 48 incremental LTVs. For recent cohorts, we employ a statistical model to forecast incremental LTV for each month of the accounts’ 48-month lifetime.
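The distinction between mature and recent cohorts amounts to stitching actuals and forecasts together. A minimal sketch, with hypothetical names and values:

```python
# Hypothetical sketch: a 48-month account LTV combines observed incremental
# LTV (MRR) with forecasted incremental LTV for the months not yet observed.

HORIZON = 48

def account_ltv(actual_mrr: list[float], forecast_mrr: list[float]) -> float:
    """Observed MRR plus forecasted MRR for the remaining lifetime months."""
    observed = actual_mrr[:HORIZON]
    remaining = HORIZON - len(observed)
    return sum(observed) + sum(forecast_mrr[:remaining])

# A mature cohort needs no forecast; a 12-month-old account needs 36
# forecasted months to reach the 48-month horizon.
print(account_ltv([10.0] * 48, []))          # 480.0
print(account_ltv([10.0] * 12, [9.0] * 36))  # 120.0 + 324.0 = 444.0
```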

### User LTV model

Our calculation of incremental LTV for the user model must account for one additional complication: MRR exists at the account level, not the user level. To address this complication, we take an account’s MRR and divide it equally among the licensed users on the account. The Lucidchart diagram below depicts this process for a hypothetical account.
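The equal split described above is a one-line computation; a minimal sketch, with an illustrative example:

```python
# Hypothetical sketch: an account's MRR is divided equally among its
# licensed users to produce user-level incremental LTV.

def user_incremental_ltv(account_mrr: float, licensed_users: int) -> float:
    """Each licensed user's share of the account's monthly recurring revenue."""
    return account_mrr / licensed_users

# An account paying $120/month with 4 licensed users:
print(user_incremental_ltv(120.0, 4))  # 30.0 per user per month
```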

Another important difference from the account model is that we update the user-level forecasts daily for a period of time after users register. Updating them in this fashion allows us to get an early read on the value of a cohort and then adjust as we receive more information.

## Statistical model

This section describes the structure of the statistical model that we use for LTV forecasting. The first section below, titled “Non-technical model description,” describes the model’s high-level structure and should be accessible to a general business audience. The second section, titled “Technical model description,” describes the specific type of statistical model we use in more technical detail and is intended primarily for statisticians and data scientists.

### Non-technical model description

The statistical model predicts incremental LTV for each month of users’/accounts’ 48-month lifetime. As described above, the final 48-month LTV forecast is the sum of these 48 monthly predictions.

The account model uses the following predictors to forecast LTV: subscription level; country; seasonality; long-term trends; and lifetime month number. The user LTV model includes these predictors and a number of others, such as users’ trial status and the marketing channel to which we attributed their registration.

One of the primary challenges in designing the statistical model was ensuring the model could accurately fit the observed LTV retention curves. As a hypothetical example, consider differences we’d expect to see between SaaS subscriptions with monthly and annual payment terms. For monthly subscriptions, some cancellations and upgrades occur every month. You’d generally expect the cancellations to outweigh the upgrades, so the historical incremental LTVs might follow a smooth downward trend like the one shown below in Figure 2.

In annual SaaS subscriptions, however, cancellations generally impact ARR at the year marks, but upgrades can occur every month. Together, cancellations and upgrades might create a downward stair-step pattern with slight inclines on each step (see Figure 3 below). Giving an LTV model just enough flexibility to accurately fit complicated trends like these is crucial in designing an accurate model.

### Technical model description

The type of statistical model we use is a gamma generalized linear model (GLM) with the log link function. We chose the gamma distribution because incremental LTV is nonnegative and the model’s error tends to increase with the scale of the predictions. Because incremental LTV can be zero (which is outside the support of the gamma distribution), we add a small offset to incremental LTV before passing it to the model. We later subtract the offset from the model’s predictions to ensure they are calculated on the appropriate scale.
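The offset trick described above can be sketched in a few lines. The offset constant below is hypothetical; the article doesn’t state the value actually used:

```python
# Sketch of the zero-handling offset: incremental LTV is shifted before model
# fitting so that zero-revenue months fall inside the gamma support (y > 0),
# then the shift is undone on the prediction side. OFFSET is a made-up value.

OFFSET = 0.01  # hypothetical small constant

def to_gamma_scale(incremental_ltv: float) -> float:
    """Shift incremental LTV so zeros become strictly positive."""
    return incremental_ltv + OFFSET

def from_gamma_scale(prediction: float) -> float:
    """Subtract the offset so predictions return to the revenue scale."""
    return prediction - OFFSET

# A churned month (zero revenue) becomes valid input for a gamma likelihood:
print(to_gamma_scale(0.0))  # 0.01
```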

Using the log link function means that predictors have a constant percentage-change impact on the model’s predictions. We felt comfortable employing this assumption because most simple LTV models assume a constant percentage of churn each time period. Additionally, the log link function offers a unique benefit when paired with a gamma likelihood. This pairing allows us to calculate the weight matrix in the model-fitting algorithm just once instead of during every iteration.
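The constant percentage-change property of the log link is easy to verify numerically. With a log link, the mean is mu = exp(b0 + b1·x), so increasing x by one unit multiplies the prediction by exp(b1). The coefficients below are made up for illustration:

```python
import math

# Demonstration of the log link's multiplicative effect: under
# mu = exp(b0 + b1 * x), a one-unit increase in x scales the prediction
# by exp(b1), a constant percentage change. Coefficients are hypothetical.

b0, b1 = 2.0, -0.05  # made-up intercept and slope

def predict(x: float) -> float:
    return math.exp(b0 + b1 * x)

# The ratio between consecutive months is constant, regardless of x:
ratio = predict(5.0) / predict(4.0)
print(ratio, math.exp(b1))  # both ≈ 0.9512, i.e. a constant ~4.9% decline
```

This is why the assumption felt natural: a constant multiplicative decline per lifetime month is exactly the constant-churn-rate assumption built into most simple LTV models.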

The model employs one slightly uncomfortable assumption. It assumes that every data point is statistically independent—an assumption that is demonstrably false in virtually every scenario involving time series data. While this assumption isn’t strictly true, it turns out not to have a dramatic impact if we’re concerned with only the model’s point estimates. It does, however, mean that confidence intervals generated by the model are going to be inaccurate. While it is possible to relax this assumption, the potential benefits to Lucid currently aren’t large enough to outweigh the additional effort and computational resources that would be required.

We included a large number of interactions in both LTV models. We employed L2 regularization to avoid identifiability problems and pull estimates toward overall averages when limited information is available on specific predictors.

We used two terms in the model (interacted with many other predictors) to accurately fit the retention curves we observed in our historical data for monthly subscriptions. Both were transformations of the “Lifetime Month Number” predictor. The first term was linear and the second was a shifted inverse term. We found that these two terms were able to accurately fit the observed relationships in a parsimonious manner.

For subscriptions with a different payment term, we included two additional terms specific to the subscriptions’ payment period. For annual subscriptions, for example, we had both a linear and shifted inverse term for the predictor “Lifetime Year Number.”
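Constructing these terms is straightforward feature engineering. A minimal sketch for the monthly case, where the shift constant is hypothetical (the article doesn’t specify it):

```python
# Hypothetical construction of the two lifetime-month terms described above:
# a linear term and a "shifted inverse" term. SHIFT is illustrative; the
# analogous annual features would use the lifetime year number instead.

SHIFT = 1.0  # hypothetical; keeps the inverse term finite and smooth

def lifetime_features(month: int) -> tuple[float, float]:
    """Return the (linear, shifted inverse) terms for a lifetime month number."""
    return float(month), 1.0 / (month + SHIFT)

print(lifetime_features(1))   # (1.0, 0.5)
print(lifetime_features(48))  # (48.0, ≈0.0204)
```

The inverse term decays quickly and then flattens, which lets the pair capture a steep early drop followed by a near-linear tail using only two coefficients.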

## Conclusion

LTV forecasting has proven itself valuable in multiple aspects of our business and ranks among the most impactful data science projects we’ve pursued to date. While data nerds like me find forecasting interesting in its own right, the impact of LTV at Lucid stems from our collective commitment to improving decision-making by leveraging accurate, relevant data.
