Lean analytics with Metric Mondays

Building data-informed product teams

Michael Kølleskov Gunnulfsen
ice Norge


In the mobile app team at ice, we aim to work data-informed by making decisions based on data analytics. Every member participates in decision-making, so it’s important that the team is aligned and well informed on the relevant data. In practice, however, it can be hard to prioritize time for thinking about metrics and ideas when you first and foremost need to work on your dedicated tasks. To address this we’ve designed a product development process we call “Metric Mondays”: a continuous flow of relevant analytics data is fed into the team every week, and we build and prioritize functionality based on it. By pushing our most important data and metrics into the team’s Slack channel, the team aligns better around our shared goals and purpose. This leads to more creative ideation and a sharper focus on what really matters for growing our product. It is also motivating and gives team members a stronger sense of purpose.

Background

Our team (called the DCI team) uses the OKR framework to define our business goals. We have decided on a few high-level goals and identified the metrics that tell us to what extent we are reaching them. To improve those metrics, we need to build better functionality and verify whether the new solutions affect the metrics the way we hoped. Building great functionality is hard, and you are unlikely to be good at it until you’ve gained experience through trial and error. As a product team, it’s important to realize that the real goal is to learn, as fast as possible, which ideas stick and which functionality removes the most friction from your customers’ lives. A great way to acquire that learning is through rapid product experimentation.

Product experimentation can be a truly creative process that involves gathering ideas and running each one as an experiment. This is sometimes referred to as high-tempo testing: instead of building one solution per iteration, you build multiple variations and see which ones are most successful.

From the beginning, our goal was for the team to become fully autonomous so that we could reach this level of rapid experimentation. Metrics are essential in this process, as they are the leading indicator of which variation performs better. In addition, the metrics tell us to what degree we are having the impact we aim for in our OKRs.

Lastly, there’s the creative part of working with ideas. We believe a good way to encourage positive brainstorming in this context is to be fed frequently with the results of our experimentation, a sort of positive reinforcement of our current mission. With these metric reminders, we believe each team member will spend more time thinking of new ideas and care more about the success of our product. Seeing the same number on a screen each week should also trigger more growth-minded discussions in the team. Additionally, we see great value in openly sharing our most important metrics with the rest of the company.

Metric Mondays

The Metric Mondays process can be explained in a single sentence: each Monday, a data report with our most important metrics, including the current and previous week’s measures, is posted in the team’s Slack channel, together with our currently running experiments. Here’s what it may look like:

Example of a (made up) Metric Mondays report
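As a rough sketch of how a report like this could be assembled, assuming invented metric names and that the numbers have already been pulled from an analytics source, the message can be formatted in Python and pushed to a Slack incoming webhook:

```python
import json
from urllib import request

# Hypothetical week-over-week metrics: name -> (last week, this week).
# In practice these would be queried from the analytics warehouse.
METRICS = {
    "North Star: positive customer interactions": (4210, 4480),
    "Input: % users gifting rollover data": (15.2, 16.0),
}

def format_report(metrics):
    """Render a plain-text Metric Mondays report with week-over-week change."""
    lines = ["*Metric Mondays* report"]
    for name, (last_week, this_week) in metrics.items():
        change = (this_week - last_week) / last_week * 100
        lines.append(f"{name}: {this_week} ({change:+.1f}% vs last week)")
    return "\n".join(lines)

def post_to_slack(webhook_url, text):
    """POST the report to a Slack incoming webhook as a JSON payload."""
    body = json.dumps({"text": text}).encode("utf-8")
    req = request.Request(webhook_url, data=body,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)

report = format_report(METRICS)
print(report)
# post_to_slack("https://hooks.slack.com/services/...", report)  # needs a real webhook URL
```

The webhook call is left commented out since it requires a real Slack webhook URL; scheduling this script for Monday mornings (e.g. with cron) would complete the loop.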

Let’s discuss the metrics. The team has identified one primary metric, called our North Star metric. The North Star metric alone indicates progress towards the team’s primary business goal, so everything we decide to build should ultimately impact it. We break the North Star metric down into a set of secondary metrics, called input metrics. The North Star metric is thus a function of its input metrics, meaning that the inputs should collectively have a substantial impact on the North Star. The input metrics are closely tied to functionality in the app, and the team designs experiments to influence them.
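The “function of its input metrics” relationship can be sketched with a toy model. The metric names and the additive decomposition below are invented for illustration; a real decomposition would come out of the team’s own metric workshop:

```python
# Toy decomposition: each input metric contributes a weekly count of
# "positive customer interactions" (the hypothetical North Star).
input_metrics = {
    "rollover_data_gifts": 670,
    "self_service_top_ups": 2100,
    "referrals_sent": 310,
}

# The North Star is modeled here as the plain sum of its inputs,
# so moving any input metric moves the North Star directly.
north_star = sum(input_metrics.values())

# Each input's share of the North Star shows where an experiment
# is likely to have the most leverage.
shares = {name: count / north_star for name, count in input_metrics.items()}

print(north_star)                   # 3080
print(max(shares, key=shares.get))  # the input with the biggest share
```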

For example, one input metric could be the percentage of users who regularly gift rollover data; let’s say it’s 16%. We want to increase this metric because we see gifting as a “positive customer interaction” (our North Star). We design and roll out experiments that we believe can increase the metric, and measure the results over time. Once we have enough data to draw a statistically significant conclusion, we decide to either change, kill, or keep the feature.
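The “statistically significant conclusion” step can be illustrated with a standard two-proportion z-test. The user counts below are invented, and this is a generic textbook test rather than our actual experimentation tooling:

```python
import math

def two_proportion_z_test(successes_a, total_a, successes_b, total_b):
    """Two-sided z-test for the difference of two proportions."""
    p_a = successes_a / total_a
    p_b = successes_b / total_b
    # Pooled proportion under the null hypothesis of no difference
    pooled = (successes_a + successes_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: control gifts at 16%, variant at 19%
z, p = two_proportion_z_test(160, 1000, 190, 1000)
print(f"z={z:.2f}, p={p:.3f}")
# p is above 0.05 here, so the lift is not yet significant at the 5% level
```

With these numbers the team would keep the experiment running (or collect more users) before deciding to change, kill, or keep the feature.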

The Metric Mondays report shows our current input metrics, our North Star metric, and our current experiments. We believe that being frequently exposed to our metrics this way gives powerful context that helps us prioritize and ideate better when we plan our backlog. The ultimate goal of all this is to learn quickly what actually works, so we can improve our product at a high pace. Many successful tech companies have product teams running similar growth cycles with great results. Here’s a recent tweet from a product owner at a fast-growing mobile-app company:

High tempo testing

Process

Each week, we bring the team’s metrics into a “Monday Commitments” meeting. In this meeting, we align on our weekly agenda and look at our currently running experiments. We discuss the experiments together with the metric results and decide whether to make minor adjustments. In addition to the weekly meeting, we hold a “Monthly Metric” meeting once a month. Here, we bring new ideas to the table and prioritize based on what we think will make the biggest impact on our metrics. These ideas are very much shaped, and biased, by our understanding of the data we get through Metric Mondays.

From mercenaries to missionaries

It can be interesting to compare this way of working with a traditional, siloed organization. In that context, our team would be handed a requirement specification from other parts of the company; only a year ago, this was to some degree how we built our product. Thought leaders in the agile sphere sometimes call the members of such a team “mercenaries”. In the world of Metric Mondays, however, every team member is instead in charge of defining that specification based on what they believe is the best way to reach their goals. In the same lingo, members of such a team are called “missionaries”. We believe that the people sitting closest to a problem have unique insight and are more likely to know how to grow the product faster. We also believe that brilliant ideas come from a deep understanding of the data, and that knowing which ideas stick is something you learn over time through repeated attempts. This form of product development is a skill that needs to be trained, and over time the training is likely to pay off, big time.

Learnings and pitfalls

There are a few reflections we’ve made on the Metric Mondays process that we think are important to address. Here are some key points:

  • Set clear, measurable goals early. To be a data-informed team (and run Metric Mondays successfully), it is crucial that the team decides on a single, clear goal. More importantly, this primary goal must be easy to measure, so that the result of building something is clearly reflected in the metric. Otherwise, we cannot know whether building something has a positive or negative impact on our target goal. Once such a goal is clearly defined, the team should come up with a set of smaller input metrics that collectively affect the primary one. This should be done through a team workshop.
  • Be aware of false positives and negatives when reading the Metric Mondays report. The metrics we post do not tell the whole story, and we need more context to make good decisions. New features from other teams could impact our numbers, or recent events may have altered the data. One idea here is that, for the Monthly Metric meeting, one person is in charge of bringing a report with additional relevant data to add context to the Metric Mondays numbers.
  • Be aware of spending too much time improving a metric that does not actually affect the global metric (also known as chasing a local maximum), in other words, moving the wrong needle. We believe this can become a particular problem when you are continuously exposed to the same metrics and can easily get obsessed with local growth.
  • Quantitative data alone does not tell the whole story, and we need a deeper qualitative understanding. Optimally, Metric Mondays would include a summary of recent user-research findings discovered through user interviews. We believe there should be a dedicated user researcher in each user-facing domain, especially when it’s difficult to measure success through data alone.

We started this journey with the goal of becoming a fully data-informed and autonomous product team. This quickly made us realize the need for a better process around observing and understanding our data. The Metric Mondays process has been a great tool, and just like our product, we’ll continue to iterate on the process itself. Hopefully, some of these learnings can be relevant to others who aim to make their team members data-informed product missionaries.
