
How we used a North-Star metric to guide product decisions 🔢

Updated: Jan 30


Here, I am going to cover:

  1. How we decided on our North-Star metric.

  2. The strategy that helped us move this metric.

Welcome back to my weekly newsletter. This is a series of blogs covering how we came up with the idea of Alora (Android) and Alora (iOS) and scaled it to over 1 million users in less than a year with less than $1,000 in total investment. 🌱


I’m Akshay Pruthi, an entrepreneur who loves to build products from the ground up. Over the past 6 years, I have built multiple products from scratch and scaled them to millions of users. This is my attempt to share our most important lessons from building one of those apps: Calm Sleep. 🤗


Note - Alora (Android) and Alora (iOS) is my side hustle that I started with Ankur Warikoo, Anurag Dalia, and a team of 3.


Every week I publish an article about a different problem we faced while building Calm Sleep. Last week, we discussed the good signs and the bad signs we saw from data analysis. This week, I am going to cover how we decided on our North-Star metric to get things on track, and the action points that helped us move the numbers.


How Did We Decide on Our North-Star Metric?

Before jumping into the good stuff, I want to highlight the importance of building a culture of experimentation. I think the best decision I made early on was to bring one solid data scientist onto the team, who helped us navigate the data analysis.


We set up this culture from Day 1: Nagarjuna helped us find our way through data before we made any product calls. He was involved from the outset, not just brought in when we wanted some fancy analysis. It would be an understatement to say that most of our product ideas were cultivated with Nagarjuna's help.


Building on the data, it was important for us to align all stakeholders around one metric. It can't be a generic metric like Day 1 retention, as that depends on too many factors. A good way to think about this is to identify a quantifiable activity that users perform to achieve their goal.

For example, people come to this app to have a good night’s sleep.


What Would Help People Achieve this Goal?

If people enter the app and complete the sound they set out to listen to, it likely means they had a good night’s sleep.

So, our North-Star metric was: the percentage of “Sound Completions” taking place in a day.
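To make the metric concrete, here is a minimal sketch of how such a per-day completion percentage could be computed from an event log. The event schema (`sound_start` / `sound_complete` tuples) is my assumption for illustration, not the app's actual analytics pipeline:

```python
from collections import defaultdict

def daily_sound_completion_rate(events):
    """Percentage of started sounds that were completed, per day.

    `events` is a list of (day, event_type) tuples, where event_type is
    "sound_start" or "sound_complete" (hypothetical event names).
    """
    starts = defaultdict(int)
    completes = defaultdict(int)
    for day, event_type in events:
        if event_type == "sound_start":
            starts[day] += 1
        elif event_type == "sound_complete":
            completes[day] += 1
    # Only days with at least one start have a defined rate.
    return {day: 100.0 * completes[day] / starts[day] for day in starts}

events = [
    ("day-1", "sound_start"), ("day-1", "sound_complete"),
    ("day-1", "sound_start"), ("day-1", "sound_start"),
    ("day-2", "sound_start"), ("day-2", "sound_complete"),
]
print(daily_sound_completion_rate(events))
# day-1: 1 of 3 starts completed (~33.3%); day-2: 1 of 1 (100%)
```

Tracking the ratio per day (rather than a lifetime total) is what lets a Thursday meeting see whether last week's change moved the needle.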


Now, we had to think of the right strategy to move this metric. Here’s how we did it:

  1. From our learnings in the previous article, we identified that to achieve the goal (listening to a sound), people were going to a category and looking for the sound that appealed to them most. Action: We decided to make the app even simpler by removing 50% of the things in it 😎 (and here we thought the MVP was the simplest version) 🤪. This is how the new user experience looked:

We reduced the bottom navigation from 3 tabs to 2 and put the categories upfront. A single tap gave users the sounds they wanted to listen to.


2. Sleep is a regular activity that people indulge in. How do we make our app an ingrained part of this activity? This led us to write down what an ideal user journey should look like:

User comes to the app -> Plays a sleep sound -> Falls asleep -> Wakes up sporting a smile! 😀

Action: Prompt users to set an alarm when they launch the app. This was our hack to drive daily behavior and bring them back to the app.


3. Don’t miss the opportunity to ask for feedback at the right moment. When the user wakes up to the alarm the next day, they will be in either a fantastic mood or a grouchy one, depending on how they slept. Action: Ask for feedback right then.

When people came back to the app the next day after waking up to the alarm, we asked them for their invaluable feedback.


If they responded yes, we would take them to rate our app; if not, we would ask them to give us feedback. And at any point, when people tried to exit the app, we asked for feedback by having them select an option from a list.
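The branching above can be sketched as a tiny routing function. The function name, event names, and screen identifiers here are hypothetical, chosen just to mirror the flow described, not taken from the actual app:

```python
def next_screen(event: str, slept_well: bool = False) -> str:
    """Hypothetical routing for the feedback prompts described above."""
    if event == "morning_return":
        # Came back after the alarm: happy users are sent to rate the app,
        # grouchy ones get an in-app feedback form instead.
        return "store_rating" if slept_well else "feedback_form"
    if event == "app_exit":
        # On exit, ask users to pick a feedback option from a list.
        return "exit_feedback_options"
    return "home"

print(next_screen("morning_return", slept_well=True))  # → store_rating
```

The design choice worth noting: only users who report a good night's sleep are routed to the public store rating, while unhappy users are caught privately, at the exact moment their mood reflects the product's core promise.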


We launched our app with these changes and waited for the data to give us more insight into our actions.


The Thursday meeting happened.

  1. Effect of launching alarm: Setting up an alarm did three things:

    1. Users invested time in setting up the alarm, which made them commit to our app.

    2. Reduced the uninstall rate.

    3. Increased the probability of reusing the app the next day: in a way, this feature made users come back to the app on Day 1 to stop the alarm, creating a recall to use it again.

  2. Effect of the feedback:

~30% sound-completion ratio


~7% increase in Day 1 retention. Okay, we were getting somewhere 😛


But how do we take steep jumps? Isn’t that what all of us want? One feature and boom!


We started questioning, is “Sound Complete” the right metric to chase as a North-Star?


In my next article, I will try to cover how we were fooled by the “Sound Complete” metric while there was some other metric we should have been chasing. I will also cover how all of that led to an increase in “Sound Completion” rate from 30% to 45% and Day 1 retention to 40%.


I publish weekly blogs where I share everything I learn.
Subscribe below to stay updated! 👇🏽

