How do you understand which features your users are willing to pay for? Like many, I have struggled with this exercise in the past 🤯
❓ I used to conduct user interviews, which (if done properly) can be very insightful. I also ran user surveys to gather insights from different personas. While both instruments were crucial in uncovering a wealth of features users would potentially pay for, they didn’t allow me to judge which of those features users would actually be willing to pay for. Prioritizing according to (R)ICE would not solve this either.
💡 The technique I was initially missing was the so-called MaxDiff analysis. In a MaxDiff analysis, you ask your survey respondents which features they value the most and which the least (make sure to shuffle the order in which the features are presented to each respondent).
After running the survey, you count the number of times each feature was selected as most and as least important, and then calculate the Maximum Difference Scaling value as follows:
(# of most important selected - # of least important selected) / # of total responses
That way, each feature is assigned a value between -1 and 1. The closer to 1, the more important the feature is to your survey respondents.
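To make the counting concrete, here is a minimal sketch of the scoring step in Python. The feature names and the respondents’ picks below are made up for illustration:

```python
from collections import Counter

# Hypothetical MaxDiff picks: each respondent chose one feature as
# most important and one as least important (features were shuffled).
most_picks  = ["export", "offline", "export", "themes", "offline", "export"]
least_picks = ["themes", "themes", "offline", "export", "themes", "offline"]

n_responses = len(most_picks)  # one (most, least) pair per response
most = Counter(most_picks)
least = Counter(least_picks)

# Maximum Difference Scaling value per the formula above:
# (# most important selected - # least important selected) / # total responses
features = set(most) | set(least)
scores = {f: (most[f] - least[f]) / n_responses for f in features}

# Rank features from most to least valued
for feature, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{feature:>8}: {score:+.2f}")
```

With these made-up picks, “export” scores +0.33, “offline” 0.00, and “themes” -0.33, giving you a clear ranking between -1 and 1.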
👉 Adding MaxDiff analyses to your tool suite will allow you to rank which features users are more likely to pay for.
Let me tell the story of how I increased subscription revenue by 7X for a mobile app. The app was decently priced compared to its competitors; if you considered all the details, it was actually already priced higher than the competition. I nevertheless had the feeling that we could charge more, so we decided to run a Van Westendorp willingness-to-pay survey.
❓ What is this Van Westendorp survey? In short, you ask your survey participants four questions around which price points would be too expensive (definitely wouldn’t buy), expensive (but still consider buying), a bargain (definitely would buy), and too cheap (wouldn’t trust it).
📈 Based on the results, you can create a chart like the one below that allows you to identify the ideal price range which lies between the intersection of the “too cheap” and “too expensive” lines and the intersection of the “too expensive” and “bargain” lines.
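For illustration, here is a rough sketch of how those curves and crossings can be computed. The survey answers below are fabricated, and the crossing rule follows the simplified description above (curve names are the only assumptions):

```python
# Hypothetical answers: (too cheap, bargain, expensive, too expensive) in $
answers = [
    (3, 6, 10, 14), (5, 9, 14, 20), (2, 4, 7, 10), (8, 12, 18, 25),
    (4, 7, 11, 16), (6, 10, 15, 22), (12, 16, 22, 30), (1, 2, 4, 6),
]
n = len(answers)
prices = [round(i * 0.1, 2) for i in range(0, 301)]  # $0.00 .. $30.00 grid

def share(pred):
    """Share of respondents for whom pred holds at each price point."""
    return [sum(pred(a, p) for a in answers) / n for p in prices]

too_cheap     = share(lambda a, p: p <= a[0])  # falls as price rises
bargain       = share(lambda a, p: p <= a[1])  # falls as price rises
too_expensive = share(lambda a, p: p >= a[3])  # rises with price

def crossing(falling, rising):
    """First price where the falling curve drops to or below the rising one."""
    for p, f, r in zip(prices, falling, rising):
        if f <= r:
            return p

lower = crossing(too_cheap, too_expensive)  # "too cheap" x "too expensive"
upper = crossing(bargain, too_expensive)    # "bargain" x "too expensive"
print(f"Acceptable price range: ${lower:.2f} - ${upper:.2f}")
```

Plotting all four curves over the same price grid reproduces the chart; the two crossings bound the ideal price range.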
💰 In our case, the results of the survey left us astonished. It told us that we could charge 4X more on our monthly subscription. We couldn't believe our eyes and thought that we probably had made a mistake in our survey setup. After checking and rechecking and not finding any issues with the results, we decided to “conservatively” increase our prices by 2X (yes, we really doubled them). As a result, our conversion rates went down a bit (but much less than expected), and our revenue shot up - success!
🤑 But we did not stop there. We monitored the situation for a while before making our next move: we increased prices again, this time to 4X the initial value. And again, our conversion rate dropped slightly, but the increase in revenue more than made up for the drop in conversion, particularly since the changes hardly affected our retention rate.
👉 Will the Van Westendorp survey (see here for a free template) always lead to these results? No. But if you have not done it before and/or are unsure about how much to charge your customers, it will help you get a much better understanding.
Are you copying your competitors’ pricing? If yes, I suggest you rethink that approach.
While I believe that studying your competition is important for a product manager for a multitude of reasons, you need to be careful when looking at their price points: those are part of a bigger monetization strategy that does not necessarily fit your product and services.
I used to be opposed to charging much more than the competition when the services we offered were very similar. Nowadays, I often find myself challenging teams on why they are not charging more. Why❓
Germany’s consumer advice center (“Verbraucherzentrale”) recently won in court against Sky, which was (and actually still is) burying the subscription cancellation button. While the US is still a bit more of a wild west, bad practices such as allowing users to cancel their subscriptions only by phone call have been banned by the FTC since 2021 (yes, several companies really did not allow you to cancel your subscription via email or website).
While it feels like there is still a long way to go, subscription management is slowly improving for consumers. Here are some of the best practices for companies that want to be compliant with the consumer advice center & FTC:
Companies are way too afraid of experimenting with their pricing model. Launching and testing a new pricing model (e.g. a new $-amount, a new pricing tier, a new fee, a new pro plan) should not require much more consideration than running any other experiment.
And still… these kinds of experiments are run way too infrequently across all products. I believe there are four main reasons for that:
After writing about why companies are too afraid of experimenting with their monetization strategy, it is only natural that I continue by writing about how you can A/B test your monetization strategy.
Like with every experiment, you first need to define, at a minimum:
In addition, think about a roll-back strategy as well as whether you plan to grandfather in your existing customers (in the future) or not. Launch the experiment only for a subset of your user base. Monitor its impact on your key and secondary metrics (e.g. install-to-paid conversion rates, retention, revenue). If everything looks good (>= expected business impact), gradually expand to the remaining user base.
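As a hedged sketch of that evaluation step, the numbers below are purely illustrative: a hypothetical $10 control plan tested against a $15 variant, with a made-up impact threshold:

```python
# Illustrative results from a pricing experiment on a subset of users
control = {"users": 10_000, "paid": 400, "price": 10.0}  # $10 plan
variant = {"users": 10_000, "paid": 340, "price": 15.0}  # $15 plan test

def conversion(group):
    """Install-to-paid conversion rate."""
    return group["paid"] / group["users"]

def revenue_per_user(group):
    """Expected revenue per user in the group."""
    return conversion(group) * group["price"]

lift = revenue_per_user(variant) / revenue_per_user(control) - 1
print(f"Conversion: {conversion(control):.1%} -> {conversion(variant):.1%}")
print(f"Revenue per user lift: {lift:+.1%}")

# Expand only if the lift meets the expected business impact and
# secondary metrics (e.g. retention) hold; otherwise roll back.
EXPECTED_LIFT = 0.10  # illustrative threshold
decision = "expand gradually" if lift >= EXPECTED_LIFT else "roll back"
print(f"Decision: {decision}")
```

Here conversion drops from 4.0% to 3.4%, but revenue per user still rises by 27.5%, so the experiment clears the (made-up) threshold and would be expanded gradually.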
Here are two templates I published for free that you can use for running your own pricing experiment:
🤑 Business impact calculator template
Have you ever localized your price points? Just like you are personalizing your ads or product experience, you should also personalize your pricing. The first step towards more personalized pricing is to define localized prices.
🌐 What are localized prices? In a nutshell, it means that you do not charge the same $-amount across the world. Instead of charging $10 everywhere for your product, you may charge $10 in Tier 1 countries, $5 in Tier 2 countries, and $2 in Tier 3 countries.
💸 Why should you have localized prices? Because the willingness to pay (and purchasing power) differs per country. If you simply convert currencies 1:1, then your prices are more likely to be perceived as too high (or too low) across countries.
🕵️ How do you find the right prices? You can repeat the process you used for your target country/market. If you want a shortcut, you can start by looking at metrics such as GDP per capita per country and use that as a multiplier.
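A minimal sketch of the tiered approach, reusing the $10/$5/$2 example above; the country-to-tier mapping and the multipliers are illustrative assumptions, not recommendations:

```python
BASE_PRICE_USD = 10.00  # Tier 1 price from the example above

# Illustrative tier multipliers; in practice you would derive these from
# willingness-to-pay research or a GDP-per-capita shortcut.
TIER_MULTIPLIER = {1: 1.0, 2: 0.5, 3: 0.2}

# Hypothetical country-to-tier assignment
COUNTRY_TIER = {"US": 1, "BR": 2, "IN": 3}

def localized_price(country: str) -> float:
    """Scale the base price by the multiplier of the country's tier."""
    tier = COUNTRY_TIER[country]
    return round(BASE_PRICE_USD * TIER_MULTIPLIER[tier], 2)

print(localized_price("US"))  # 10.0
print(localized_price("BR"))  # 5.0
print(localized_price("IN"))  # 2.0
```

In a real setup you would likely also round to locally common price endings and charge in local currency rather than USD.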
Do you keep a separate monetization experimentation backlog? If not, you may want to think about it because of the following advantages:
⚡ Monetization experiments are usually low effort but high impact. Having a separate backlog just for monetization emphasizes their importance and keeps them on the radar at all times.
🏢 Depending on the size of your company, you may need to involve a dedicated pricing committee to make decisions on pricing changes and experiments. Discussions with the pricing committee are easier if you prepare a separate backlog and GTM strategy on upcoming pricing experiments.
⏱️ You should be running monetization experiments at all times. A separate backlog increases transparency and makes it easier to hold the responsible team accountable.
👉 If you need support with creating or improving your monetization strategy for your product, feel free to reach out.