July 12th, 2009
Velocity gone wrong #3: Used as a performance measure
Continuing with the series, this time I want to highlight a very dangerous anti-pattern: using velocity as a performance metric. Before getting into examples of how this applies to velocity, I want to first explain my view on metrics. I am in favour of metrics and of coming up with interesting ways of displaying data (information visualization is a very interesting topic). However, the problem lies in how these metrics are used. There are two main types of metrics, which I like to categorise as:
- Diagnostic Metrics: these are informative measurements that the team uses to evaluate and improve its own process. The purpose of collecting them is to gain insight into where to improve, and to track whether the proposed improvements are taking effect. They are not associated with a particular individual or with how much value is being produced. They're merely informative and should have a relatively short life-cycle. As soon as the process improves, another bottleneck will be identified and the team will propose new metrics to measure and improve that area.
- Performance Metrics: these are measurements of how much value your process is delivering. These are the ones you should use to track your organisation's performance, but they should be chosen very carefully. A good approach is to "measure up". Value should be measured at the highest level possible, so that it doesn't fall into one team's (or individual's) span of control. People tend to behave according to how they're measured, and if a metric is easy to game, it will be gamed. There should also be just a few of these metrics. Examples would be a Net Promoter Score (which measures how willing your customer is to recommend you to a friend) or a financial metric like Net Present Value (read Software By Numbers if this interests you); there's a small sketch of an NPS calculation after this list. As you can see, these are very much outside of a team's control, and the only way to score well on them is to actually do a good job (instead of gaming the numbers).
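To make the "measure up" idea a bit more concrete, here is a minimal sketch of how a Net Promoter Score could be computed from survey responses. The function name and the ratings are made up for illustration; the only assumption is the usual 0-10 scale where 9-10 are promoters and 0-6 are detractors.

```python
# Minimal NPS sketch: NPS = % promoters (9-10) - % detractors (0-6).
def net_promoter_score(ratings):
    """Return the NPS as a value between -100 and 100."""
    if not ratings:
        raise ValueError("no ratings to score")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 4 promoters, 3 passives, 3 detractors -> NPS = 10.0
print(net_promoter_score([10, 9, 9, 10, 8, 7, 8, 6, 5, 3]))
```

Notice that nothing in this number is about how many points a single team delivered; the only way to move it is to make customers happier.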
Going back to velocity, a very common mistake is to use it as a performance metric instead of a diagnostic one. Velocity doesn't satisfy my criteria for a good performance measure; quite the opposite, it's a very easy metric to game (as mentioned in my previous posts). When approached as a performance metric, it's common to see things like:
- Comparing velocity between teams: “Why is Team A slower than Team B?” Maybe because they estimate in different scales? Maybe their iteration length is different? Maybe the team composition is different? So many factors can influence velocity that it’s only useful to compare it within the same team, and even then just to identify trends. The absolute value doesn’t mean much.
- Measuring individual velocity: as highlighted by Pat, this is a VERY DANGEROUS use of velocity, and it can actually harm your process and discourage collaboration.
- A push to always increase velocity: it's common for velocity to be lower at the beginning of a project and to increase after a number of iterations. In spite of that, I've seen teams pushing themselves to improve it even after they reach a natural limit (who doesn't want to go faster, right?). Velocity measures your team's capability to deliver and, as such, tends to stabilise (if you have a stable process and the number is not being gamed). A Control Chart could help you visualise that (there's a small sketch after this list). As noted by Deming, in a stable process, the way to improve is to change the process.
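As an illustration of the control-chart idea above, here is a minimal sketch in Python. The velocity numbers are invented, and I'm assuming the common mean +/- 3 standard deviations rule for the control limits; it's a sketch of the technique, not a prescription for how to build your chart.

```python
# Minimal control-chart sketch for velocity per iteration.
def control_limits(velocities):
    """Return (mean, lower control limit, upper control limit) using +/- 3 sigma."""
    n = len(velocities)
    mean = sum(velocities) / n
    variance = sum((v - mean) ** 2 for v in velocities) / n
    std_dev = variance ** 0.5
    return mean, mean - 3 * std_dev, mean + 3 * std_dev

velocities = [18, 22, 20, 19, 23, 21, 20, 22]  # made-up points per iteration
mean, lcl, ucl = control_limits(velocities)
print(f"mean={mean:.1f} LCL={lcl:.1f} UCL={ucl:.1f}")
```

Iterations that fall outside the limits point to special causes worth investigating; if the numbers stay within a stable band, pushing the team harder won't raise it, only changing the process will.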
It's important to remember that velocity is a by-product of your current reality (your team, your processes, your tools). You can only improve your process once it's stable and you know its current capacity. Velocity is just a health-check number that tells you your team's capability; it will not tell you how much value is being delivered or how fast you're going. You can deliver a lot of points while making trade-offs on quality which, no matter how you measure it, will impact your ability to go fast in the long run. As Uncle Bob says:
“The way to go fast, is to go well”
So let’s stop using velocity to measure performance and look at it as a diagnostic metric to improve our software delivery process.