Metrics can be tricky, and whatever you measure will become a focus for the team. Choose wisely, be transparent, and talk about the what and the why for each metric your team chooses…or has chosen for it. If you need a place to start, here are a few to consider.
1. Backlog Health
A poor backlog will derail even the best Agile teams. One metric you can use to track the health of your backlog is ready stories / velocity. I use 2.0 as a starting benchmark, meaning a team should have about two sprints' worth of ready work in its backlog. If your team works in two-week sprints, that translates to one month of ready user stories at any given time. For example, team Lightning's output (or capacity, velocity) is usually around 5 stories per sprint. After their last refinement meeting, they have 12 groomed stories. 12 / 5 = 2.4, so they're in good shape. You can do the same math using story points. Remember, it's not an exact science.
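The backlog health math above is simple enough to automate if your tracker exposes a count of ready stories. A minimal sketch (the function name and sample numbers are mine, not from any particular tool):

```python
# Backlog health = ready stories / average stories completed per sprint.
# A value around 2.0 means roughly two sprints of ready work.

def backlog_health(ready_stories: int, velocity: float) -> float:
    """Ratio of ready work to one sprint's worth of output."""
    if velocity <= 0:
        raise ValueError("velocity must be positive")
    return ready_stories / velocity

# Team Lightning: 12 groomed stories, ~5 stories completed per sprint.
print(backlog_health(12, 5))  # 2.4 -> about two sprints of ready work
```

The same function works with story points instead of story counts; just feed it point totals for both arguments.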
2. Team Happiness
Plenty of people smarter than me have been correlating company success with team happiness lately, so it's a great idea to capture your team's mood. Some teams track it daily, while others just take a pulse at each retrospective. You can use a simple five-level scale (say, 1 is "feeling awesome" and 5 is "I'm ready to explode," or something like that). Both individual sprint data and trend data are important and can be an excellent source of information during retrospectives. Get feedback from each person and then aggregate it.
Oh, and don’t forget to keep a pulse on your product owner’s happiness…and even bigger bonus points if you have figured out a way to capture your customer’s happiness with your product using something like net promoter score.
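The "get feedback from each person and aggregate it" step is just an average per sprint, tracked over time. A quick sketch with made-up mood scores on the 1–5 scale described above:

```python
from statistics import mean

# Hypothetical per-person mood scores from each retro:
# 1 = feeling awesome, 5 = ready to explode.
sprint_moods = {
    "sprint_1": [1, 2, 2, 3, 1],
    "sprint_2": [2, 3, 4, 3, 2],
}

# Aggregate each sprint, then watch the trend across sprints.
trend = {sprint: round(mean(scores), 1) for sprint, scores in sprint_moods.items()}
print(trend)  # {'sprint_1': 1.8, 'sprint_2': 2.8}
```

A rising average like the one here (1.8 to 2.8) is exactly the kind of trend worth raising in the next retrospective.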
3. Quality
Track the number of escaped defects and watch the trend over time. Defects piling up can signal trouble. Code coverage is also a good one and is a leading indicator of quality software.
4. Output (velocity, cycle time, throughput)
You’ll need some way to track the team’s output and there are many good ways to do so. Probably the most common is velocity, which is usually calculated as how many story points (or stories) the team is able to complete in a sprint. For example, a team that completed 26 story points in the last sprint has a velocity of 26. Teams sometimes use a rolling average to calculate what their expected velocity will be in the future. PS…I prefer the term capacity rather than velocity.
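The rolling-average approach to expected velocity can be sketched in a few lines (the window size and sprint history here are illustrative):

```python
def rolling_velocity(history: list[int], window: int = 3) -> float:
    """Average completed story points over the last `window` sprints."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Hypothetical story points completed in the last four sprints.
completed_points = [26, 31, 22, 28]
print(rolling_velocity(completed_points))  # (31 + 22 + 28) / 3 = 27.0
```

A three-sprint window smooths out one-off good or bad sprints while still reacting to genuine changes in the team's capacity.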
Cycle time is another great choice as it will tell you how long it takes for a story to go from “in-progress” to “done.” For example, it may take a team on average 6 days to complete a story.
Throughput is another great measure, similar to velocity in that it tells you how many stories got done in a period of time. For example, a team's throughput may be 7 user stories every two weeks.
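Both cycle time and throughput fall out of the same raw data: the dates each story started and finished. A sketch with hypothetical dates:

```python
from datetime import date

# Hypothetical story log: (started, finished) dates for completed stories.
stories = [
    (date(2016, 10, 3), date(2016, 10, 10)),
    (date(2016, 10, 4), date(2016, 10, 8)),
    (date(2016, 10, 5), date(2016, 10, 12)),
]

# Cycle time: average days from "in-progress" to "done".
cycle_times = [(done - started).days for started, done in stories]
avg_cycle_time = sum(cycle_times) / len(cycle_times)
print(avg_cycle_time)  # (7 + 4 + 7) / 3 = 6.0 days

# Throughput: stories finished inside a two-week window.
window_start, window_end = date(2016, 10, 1), date(2016, 10, 14)
throughput = sum(1 for _, done in stories if window_start <= done <= window_end)
print(throughput)  # 3 stories in the window
```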
There are several great free tools available that use output data like this to build probabilistic forecasts. Flow metrics are where the industry is going, in my opinion. I have some listed on my Agile Resources page.
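The core idea behind those probabilistic forecasting tools is usually Monte Carlo simulation over historical throughput. This is only a sketch of the technique, not any specific tool's method; the history, backlog size, and trial count are all made up:

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

history = [5, 7, 4, 6, 8, 5]   # hypothetical stories finished per sprint
remaining = 20                 # stories left in the backlog
trials = 10_000

# For each trial, resample past sprints until the backlog is empty,
# counting how many sprints that takes.
outcomes = []
for _ in range(trials):
    done, sprints = 0, 0
    while done < remaining:
        done += random.choice(history)
        sprints += 1
    outcomes.append(sprints)

outcomes.sort()
p50 = outcomes[trials // 2]       # "50% chance we finish by sprint N"
p85 = outcomes[int(trials * 0.85)]  # a more conservative commitment level
print(p50, p85)
```

The output is a forecast like "50% likely in N sprints, 85% likely in M sprints," which communicates uncertainty far better than a single-number estimate.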
5. Predictability
Many teams struggle to complete (meaning done, done) their sprint commitments. One simple metric you can use to track predictability is actual / planned velocity. For example, team Thunder forecasts that it will complete 30 story points in sprint 1 but actually completes 25. Their completion percentage is 25 / 30, or 83%. You can use the same math with the number of stories if the team doesn't use story points.
A goal of 100% completion every sprint just isn't practical…things will happen that prevent a team from meeting its commitment now and then. Be more interested in trends and in large gaps between actual and planned, which usually signal problems with planning or team members getting pulled off to work on non-sprint backlog items.
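The completion-percentage math above is a one-liner worth keeping consistent across sprints (function name and rounding are my choices, not from the post):

```python
def completion_pct(actual: float, planned: float) -> float:
    """Actual / planned velocity, expressed as a percentage."""
    return 100 * actual / planned

# Team Thunder: planned 30 story points, completed 25.
print(round(completion_pct(25, 30)))  # 83
```

Plotting this percentage sprint over sprint makes the trends and large planned-vs-actual gaps easy to spot.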
Do you have metrics that are useful to your teams and organizations? I’d love to hear about them!
October 22, 2016 at 1:13 am
Great stuff! I recently worked with a team which implemented the Backlog Health Index. In the early stages, it was hard to get the BHI number up to our team’s 3.0 target because (obviously) the BHI dropped by ~1 every time we started a new Sprint. However, once the Product Owner started holding more frequent refinement sessions, we were able to achieve (and maintain) a target of 3.0. One cautionary note to anyone who implements the BHI: If the target is 3.0, that doesn’t mean that 6.0 is great and 12.0 is awesome… That might just be a path to waterfall! The target range of 2.0 to 4.0 (1-2 months of work meeting the Definition of Ready) seems like a reasonable range.
October 22, 2016 at 2:50 am
Totally agree Matt! Great point about the BHI being too high. That would be a sign of waste for sure.
April 2, 2017 at 2:18 am
Great article! One metric we used was Stakeholder Satisfaction, measured with an NPS survey that was sent to all named stakeholders once a quarter. The survey had the actual NPS question, along with 3-4 others aimed at gauging the stakeholder’s satisfaction with the team – not the product. It was interesting to see how, even when they weren’t thrilled with the product (it wasn’t as far along as they would like), they would still rate the team high because they were responsive, open, and had worked hard to form good relationships.