This post was written to bring more clarity to mobile marketers looking for a KPI to analyze the profitability of their marketing campaigns.
In case you haven't got time to read the whole post, bookmark it and keep this set of steps in mind to ensure your campaigns are in the best shape to run profitably:
- Start by ensuring you know what time period you want to become profitable within (e.g. 1 year). This is important for giving your calculations a clear goal that can define success/failure.
- Select a KPI and determine the minimum acceptable level for that KPI in order to turn a profit within your target period of time. Some common mobile marketing KPIs include:
- Retention rate
- ARPU/ARPDAU (this is flipped, so that you may pay no more than this amount per user)
- LTV (this is flipped, so that you may pay no more than this amount per user)
- Be sure to assess your predictions within sensible user segments. For example, don't apply the same prediction to both Android and iOS campaigns.
- As your campaigns scale, feed new data into your prediction system and continually assess your profit prediction to ensure your profit forecast is still on-target, given the latest data.
- If you feel comfortable, try exploring other methods to raise the accuracy of your model, including different attribution models and K-Factor and organic uplift analysis.
Reviewing the 4 Common KPIs
The Easiest KPI: Retention Rate
When it comes to mobile marketing KPIs, retention rate is an old stalwart and one of the most commonly used metrics for assessing success in mobile app marketing. Generally, retention rate is calculated as the ratio of users who opened the app on a given day after install to the users who installed, typically on a 1-day, 7-day, and 30-day basis.
Retention rate is popular because it can be calculated for every app regardless of monetization method, it is easy to calculate, it correlates decently well with ROI, and it generates plentiful data points. Yet in the grand scheme, retention rate is rather basic, and it has several downsides:
- Retention rate can take a while to obtain final results (7-day and 30-day).
- Retention rate does not factor for differences in behavior within the user's session (e.g. do people open the app multiple times on the day they are recorded as "retained?" How long do they spend in the app on each "retained" day?).
- Retention rate does not factor for the cost to acquire or retain a user.
- Retention rate does not necessarily guarantee that retained users generated value for the business (unless it is retention of paying users).
- Retention rate can miss users who opened the app on days around, but not exactly on, the measured day (e.g. a user who opened the app on days 5, 6, 8, and 9 would not be counted in day 7 retention; week0 retention, by contrast, counts users who opened the app at least once in the first week, regardless of whether they opened it on the first or last day of the week).
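To make the day-x vs. week0 distinction concrete, here is a minimal sketch in Python (the cohort data and function names are hypothetical):

```python
def day_n_retention(open_days_per_user: list[set[int]], n: int) -> float:
    """Share of installers who opened the app exactly on day n after install."""
    retained = sum(1 for days in open_days_per_user if n in days)
    return retained / len(open_days_per_user)

def week0_retention(open_days_per_user: list[set[int]]) -> float:
    """Share of installers who opened the app at least once on days 0-6 after install."""
    retained = sum(1 for days in open_days_per_user if any(0 <= d <= 6 for d in days))
    return retained / len(open_days_per_user)

# Hypothetical cohort: each set holds the days (after install) a user opened the app.
cohort = [{1, 2, 7}, {5, 6, 8, 9}, {3}, set()]
print(day_n_retention(cohort, 7))  # 0.25 -- only the first user opened on day 7
print(week0_retention(cohort))     # 0.75 -- three of four opened within the first week
```

Note how the second user (days 5, 6, 8, 9) counts toward week0 retention but not day 7 retention, exactly the gap described above.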
For those of you interested in employing retention rate as your profit predictor, consider using cost per day x retained user as your method, such as cost per day 7 retained user.
Alternatively, consider the following ideas for improving your retention-based profit predictor:
- Calculate the retention rate of free and paid users separately and forecast based on your costs for all users acquired and the paid user retention rate.
- Align retained days by your app's own normalcy. If you offer a 3-day trial, calculate cost per day 3 retained user.
- Per Andrew Chen's power curve article, look into calculating retention rate on a basis beyond open rate ratios by days, such as number of active user days or on a basis more aligned with business value such as number of key events completed per user by a certain day.
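If you go the cost per day x retained user route, the arithmetic is simple. A minimal sketch, using hypothetical campaign numbers:

```python
def cost_per_retained_user(spend: float, installs: int, retention_rate: float) -> float:
    """Campaign spend divided by the number of users still retained on day x."""
    retained_users = installs * retention_rate
    return spend / retained_users

# Hypothetical campaign: $5,000 spend, 2,000 installs, 20% day-7 retention.
cpd7ru = cost_per_retained_user(spend=5000, installs=2000, retention_rate=0.20)
print(round(cpd7ru, 2))  # 12.5 -- dollars per day-7 retained user
```

The same function works for any retained-day definition (day 3 for a 3-day trial, paid-user retention, and so on) by swapping the retention rate you feed it.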
LTV Junior: Average Revenue Per User/Average Revenue Per Daily Active User AKA ARPU/ARPDAU
Average revenue per user (ARPU, AKA ARPDAU/average revenue per daily active user) is another benchmark for success that is often discussed as a KPI for mobile marketing. ARPU/ARPDAU is one of the components of LTV, along with some form of retention or usage rate. ARPU/ARPDAU can be used to predict the maximum amount that can be paid to acquire a new user while remaining profitable, and can raise a red flag when profitability is unlikely.
A downside to using ARPU/ARPDAU (and also LTV) is that it can break down if there arise significant changes in user behavior or the cost to acquire users.
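As a rough sketch of how ARPDAU can cap your acquisition bids, consider the following; the figures and the expected-active-days assumption are hypothetical:

```python
def max_profitable_cpi(arpdau: float, expected_active_days: float,
                       target_roas: float = 1.0) -> float:
    """Highest CPI that still hits the target return, given revenue per active
    day and a forecast of how many days the average user stays active."""
    expected_revenue = arpdau * expected_active_days
    return expected_revenue / target_roas

# Hypothetical: $0.10 ARPDAU, users average 15 active days, aiming for 100% ROAS.
print(round(max_profitable_cpi(arpdau=0.10, expected_active_days=15), 2))  # 1.5
```

If user behavior or acquisition costs shift (the breakdown risk noted above), the `expected_active_days` input is usually the first assumption to re-check.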
The Most Versatile KPI: Return on Ad Spend AKA ROAS
ROAS is probably the most commonly used KPI for predicting profitability. ROAS is generally more useful than retention rate or ARPU/ARPDAU because it is predicated on the core inputs of profit: revenue generation, as a percentage of cost. ROAS is also relatively easy to calculate, provided that your app generates trackable revenue events.
In particular, week0 ROAS is as common a KPI as day 7 retention, due to the fact that it captures a full week's worth of data (thus factoring for the frequent variance in performance between weekdays and weekends), but is also quick enough for maintaining a weekly reporting cadence.
Note - when using ROAS or ARPU/ARPDAU, be sure to cohort your users so that you understand their true revenue generation in the first day/week/month/etc. Otherwise, your calculations will be skewed by revenue from users in other acquisition cohorts.
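A minimal sketch of a cohorted week0 ROAS calculation; the revenue events and dates are hypothetical:

```python
from datetime import date

def week0_roas(spend: float, revenue_events: list[tuple[date, date, float]]) -> float:
    """Cohorted week0 ROAS: only revenue generated within 7 days of each
    user's own install date counts, so other cohorts can't skew the number."""
    week0_revenue = sum(
        amount
        for install_day, revenue_day, amount in revenue_events
        if 0 <= (revenue_day - install_day).days < 7
    )
    return week0_revenue / spend

# Hypothetical events: (install date, revenue date, revenue amount).
events = [
    (date(2019, 3, 4), date(2019, 3, 6), 4.99),   # inside week0
    (date(2019, 3, 4), date(2019, 3, 20), 9.99),  # outside week0 -- excluded
    (date(2019, 3, 5), date(2019, 3, 5), 1.99),   # inside week0
]
print(round(week0_roas(spend=100.0, revenue_events=events), 4))  # 0.0698
```

The key design choice is that each revenue event is dated against its own user's install date, not the calendar week of the campaign report.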
The Granddaddy of KPIs: Lifetime Value AKA LTV
Lastly, the granddaddy of them all is lifetime value (LTV). LTV is the most useful KPI for determining whether your campaigns will turn a profit, and is widely heralded as the most desirable KPI because:
- LTV assesses the growth trends of both user retention and monetization, which none of the aforementioned KPIs does.
- Once established, LTV, like ARPU/ARPDAU, is the fastest at predicting profitability (i.e. as early as CPI or cost per paying user).
The downsides to using LTV as your mobile marketing profit prediction KPI include:
- LTV modeling takes more effort (and data) to calculate and maintain than other KPIs.
- There are myriad ways to calculate LTV, and choosing the best one is not always a clear decision.
- Ensuring your LTV model is well-trained but not over-trained, along with other upkeep and fine-tuning, can prove a difficult task.
So which KPI should you use?
- For folks with low-volume or difficult-to-track source-level revenue data, consider using cost per day x retained user.
- For folks with an MMP and source-level revenue data, consider using ROAS.
- For folks with a good grasp of how an LTV model works and the time/patience to test and tune one, consider taking a swing at using LTV.
An important consideration for using benchmarks is that, by themselves, benchmarks do not guarantee that your campaigns will turn a profit. Knowing whether your campaigns will be profitable based on your benchmarks requires an additional step: analyzing which benchmark values actually correlate with profit.
For instance, in order to use retention rate you will need to determine what day 7 retention rate leaves you with enough users who continue using (and paying) to the point of turning a profit on your ad dollars after, say, 6 months. Is it 30%? 40%? 50%? 60%? Any of these retention rates could end up turning a profit; the answer is unique to your acquisition costs, your app's monetization model, and your user base.
To answer this question, you must analyze what KPI trends correlate with achieving your stated payback timeline.
There are many ways to go about correlating payback, from the classic "eye-ball" method to the more advanced programmatic-enabled methods.
Below is an example of an in-between method, which is to use an Excel scatter plot graph's linear trend line equation. In this case, the linear trend line predicts that a week0 ROAS of 11.68% will mature to a 6-month ROAS of 100%, with an accuracy (R-squared) of ~82%, based on the inputs.
While producing predictions is fun and useful, be acutely aware that predictions can break down in many ways when transitioned from the lab to the real world. For instance, an extremely high R-squared can signal an over-fit model in the case of LTV, indicating that your model will predict LTV poorly beyond the training data set and could actually carry risk if used to inform longer-term predictions.
FYI: the calculation in the highlighted cell of the spreadsheet solves for x when y = 100%.
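The same trend-line math can be reproduced outside of Excel. The sketch below fits an ordinary least-squares line to hypothetical (week0 ROAS, 6-month ROAS) pairs and solves for the week0 ROAS that matures to 100%:

```python
def linear_fit(xs: list[float], ys: list[float]) -> tuple[float, float, float]:
    """Ordinary least-squares fit y = m*x + b, returning (m, b, r_squared)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    syy = sum((y - mean_y) ** 2 for y in ys)
    m = sxy / sxx
    b = mean_y - m * mean_x
    r_squared = (sxy ** 2) / (sxx * syy)
    return m, b, r_squared

# Hypothetical cohorts: (week0 ROAS, 6-month ROAS) pairs.
week0 = [0.05, 0.08, 0.10, 0.12, 0.15]
month6 = [0.45, 0.70, 0.85, 1.05, 1.25]
m, b, r2 = linear_fit(week0, month6)

# Solve y = m*x + b for x at y = 100% (the "highlighted cell" step in Excel).
breakeven_week0 = (1.0 - b) / m
print(round(breakeven_week0, 4), round(r2, 3))  # 0.1173 0.997
```

With this toy data, a week0 ROAS of roughly 11.7% is predicted to mature to 100% by month 6; your own inputs will of course produce different numbers.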
If your forecasts don't end up panning out accurately in practice, here are some tips for improving your profit predictions:
- Gather more data points.
- Select a different KPI. If day 1 retention rate ends up producing poor profit prediction results, try day 7 or day 14.
- Lower your profitability forecast - does your prediction improve if you forecast for 90% ROAS, or payback after 8 months instead of 100% at 6 months?
- Hire a consultant who can help you get it right!
Also, for best results be sure to re-calculate the profit correlations for each segment of users that has a significant difference in cost per acquisition or monetization trend. For example, you should calculate a different correlation for each of the following acquisition segments:
- Operating system
- Country (especially between the US and others)
- Ad channel
- Event optimization
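One way to keep segment correlations separate is simply to group observations by segment before fitting. A sketch with hypothetical rows and a simple average maturation multiplier per segment:

```python
from collections import defaultdict

# Hypothetical rows: (segment key, week0 ROAS, 6-month ROAS).
rows = [
    (("iOS", "US"), 0.10, 0.90),
    (("iOS", "US"), 0.12, 1.05),
    (("Android", "US"), 0.10, 0.70),
    (("Android", "US"), 0.14, 0.95),
]

# Group observations by segment so each gets its own correlation/fit.
by_segment: dict[tuple[str, str], list[tuple[float, float]]] = defaultdict(list)
for segment, w0, m6 in rows:
    by_segment[segment].append((w0, m6))

# Average how many times week0 ROAS multiplies by month 6, per segment.
multipliers = {}
for segment, points in by_segment.items():
    multipliers[segment] = sum(m6 / w0 for w0, m6 in points) / len(points)
    print(segment, round(multipliers[segment], 2))
```

In this toy data the iOS/US cohorts mature faster than the Android/US cohorts, which is exactly why applying one blended multiplier to both would mislead you.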
Once more - be aware of the danger of leaning too heavily on upstream data to predict downstream user behavior. With more data, and after spending a longer time watching how profitability plays out, you can be more confident that your preliminary data will correlate with profit later on. Yet there is always the possibility that the current environment has changed, and that historic data is no longer as good a predictor.
For instance, if a bug in your app code is shipped and user retention drops, then any KPI would be affected, independently of your marketing campaigns. Or, it's also possible that optimizing for a higher week0 ROAS can lead to lower ROAS over the long-term, such as offering users more monetary incentives that raise short-term purchase volume, but at the cost of deflating user intrinsic motivation and ARPU.
Additional Profit Influences to Consider
Limit Ad Tracking and General Data Protection Regulation AKA LAT and GDPR
When calculating profit from marketing campaigns, there are four additional considerations to be aware of.
The first is the lack of visibility into users whose data tracking has been disrupted. Limit Ad Tracking (users whose source is recorded, but whose post-install metrics such as revenue are untraceable) and the EU's General Data Protection Regulation (GDPR) are two causes of such disruptions, and both can cause your campaigns' profitability to be under-reported.
Many marketers get around this by either reducing the number of users acquired with LAT enabled (e.g. adding demographic targeting in Apple Search Ads) or else baking in an assumption of profitability based on a benchmark from other users from the same source/campaign type.
Ad attribution is a second consideration that can cause differences in reporting on your campaigns' profitability. Attribution in and of itself is simply a set of stipulations as to which acquisition sources receive credit for user behavior; but the way in which attribution is decided can make your metrics/KPIs appear higher or lower, and hence change your profitability predictions. Moreover, the attribution windows set for ad networks have large ramifications for how your campaign bidding/targeting works.
For instance, the decision to report view-through attribution alongside click-through attribution, or to use click-through attribution only, will cause your campaigns' KPIs to shift accordingly. Your MMP will ensure that total attribution always adds up to 1, but adding view-through (or extending the attribution window for either clicks or views) can pull more KPI credit into a campaign and out of the organic bucket (or from other channels, too).
What impact does adding view-through attribution have? Per discussions with Google, adding view-through attribution credit reportedly delivers an uplift of 14% higher conversion volume from UAC campaigns.
View-through attribution also raises the question of "viewability" when crediting views with user behavior. While an ad click inherently leaves no room for interpretation, viewability generally refers to how much of the ad must be in view of the user before an impression or conversion is logged. Google's viewability standards, for instance, state that a full 50% of the ad must be in view for at least 1 second for display ads (2 seconds for video ads), and only within 24 hours of a user generating such an impression.
The amount of credit that a view-through impression should earn depends on ad format, viewability rules, window of time, and other factors. Take the time to study view-through attribution and understand how it works and how to apply it, and your profit calculations can benefit from expanding beyond click-through-only attribution.
There is no "right answer" as to how to handle LAT/GDPR/attribution. Yet these are considerations that any marketer calculating profit from campaigns should think about and establish a stance on.
In addition to directly generating revenue, all users also generate some level of indirect growth and incrementally increase the profit from your marketing campaigns.
K-Factor (AKA word of mouth/referrals) describes an app's viral potential, or the propensity for one user to tell a second or third user about your app, which generates a new user without additional cost. The higher the K-Factor, the higher the profitability of your marketing campaigns, because the cost to acquire each paid user is offset by the cost savings from the free users that paid-acquired user generates.
K-Factor is difficult to calculate due to the difficulty in attributing new users directly to a referral, unless the referred user clicks a link, uses a coupon code or is otherwise tracked. K-factor often ends up being expressed through branded searches or direct app page downloads, both of which end up in the organic catch-all bucket. Analyses such as time series studies can be used to try to claw back K-Factor influences from the organic bucket, but such methods are also susceptible to influences unrelated to K-factor, such as competition, ASO, seasonality, and so on. Like profitability forecasts, the K-Factor is also likely to be different for each user segment.
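Even a rough K-Factor estimate can be folded into your cost math. A minimal sketch, counting only first-generation referrals and using hypothetical numbers:

```python
def effective_cpa(paid_cpa: float, k_factor: float) -> float:
    """Blended acquisition cost once each paid user virally brings in
    k_factor additional users (first-generation referrals only, for simplicity)."""
    return paid_cpa / (1 + k_factor)

# Hypothetical: $2.00 paid CPA with a K-Factor of 0.25
# (i.e. every 4 paid users generate 1 free referred user).
print(effective_cpa(2.00, 0.25))  # 1.6
```

A fuller model would compound referrals across generations (referred users refer others too), but given how noisy K-Factor estimates are, the first-generation version is often precise enough for forecasting.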
Organic uplift refers to the algorithmic improvement in top chart rank or keyword ranks due to an app's increase in download volume, whether from organic or inorganic sources. Like K-Factor, organic uplift increases the profitability of marketing campaigns, because the cost to acquire each new user is slightly offset by the cost savings from the new users that the paid-acquired user generates by raising the app's algorithmic ranking score. Unlike K-Factor (which is exponential, due to the compounding effect of penetrating social/word of mouth networks), organic uplift is a logarithmic phenomenon that becomes less effective per new user acquired as total paid acquisition volume increases.
Organic uplift is easier to assess than K-Factor given that it is much more trackable. This is especially true now that Apple and Google both report on organic downloads sourced from browse/explore source type (which includes top chart-sourced downloads, featuring, editorial and other types of browsing behavior) vs the search source type.
Topics for Further Reading
- Other Incipia Posts
- Andrew Chen - introducing the power curve for retention analysis
- Navigating The Three Stages of the Mobile Marketing Lifecycle
- Calculating churn rate and retaining valuable customers
That’s all for today! Thanks for reading and stay tuned for more posts breaking down mobile marketing concepts.
Be sure to bookmark our blog, sign up to our email newsletter for new post updates and reach out if you're interested in working with us to optimize your app's ASO or mobile marketing strategy.
Incipia is a mobile marketing consultancy that markets apps for companies, with a specialty in mobile advertising, business intelligence, and ASO. For post topics, feedback or business inquiries please contact us, or send an inquiry to firstname.lastname@example.org