January 6th, 2010
GQM: Metrics come last

Inspired by a post on the Lean Blog, Pat reminds us that you can’t measure everything effectively. Having written my Master’s thesis on metrics for Agile projects, I’ve learned and read about this topic in many different places. One approach that is well known to Empirical Software Engineering researchers is the Goal-Question-Metric approach, first published by Victor Basili et al. in the 90’s.

The GQM model suggests a hierarchy of three levels to define which metrics to use (a small sketch of the idea follows the list):

  • Conceptual level (goal): the motivation for measurement. Measuring things without a purpose and a thorough understanding of the problem will lead to meaningless metrics. This level poses the hardest questions: what’s the purpose? What’s the object of measurement (your product, process, people)? What’s the motivation? Who is interested in this goal? What are the quality attributes?
  • Operational level (question): at this level, a set of questions is defined to try and correlate the object with the quality attributes we are interested in. These questions should help in understanding and assessing the current situation, but also in identifying ways to determine whether the goal is achieved.
  • Quantitative level (metric): only then is a set of metrics associated with the questions, to find a quantitative way to measure and answer them. These can be objective (like code coverage) or subjective (an individual’s ranking of the current code quality). Finding these metrics is not easy either.
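
To make the hierarchy concrete, here is a minimal sketch of what a GQM breakdown could look like as a plain data structure. The specific goal, questions, and metrics below are made-up examples of mine, not part of Basili’s approach; GQM itself only prescribes the goal → question → metric hierarchy, and the point of the sketch is simply that the metrics come last, attached to questions that are attached to a goal:

```python
# A minimal, hypothetical sketch of a GQM breakdown as a plain data
# structure. GQM only prescribes the goal -> question -> metric
# hierarchy; the specific goal, questions, and metrics here are made up.

gqm = {
    "goal": {
        "purpose": "improve",
        "quality_attribute": "code quality",
        "object": "the development process",
        "viewpoint": "the development team",
    },
    "questions": [
        {
            "text": "How many defects escape to production per release?",
            "metrics": [
                "defects found in production per release (objective)",
            ],
        },
        {
            "text": "How does the team perceive the current code quality?",
            "metrics": [
                "code coverage (objective)",
                "individual ranking of code quality, 1-5 scale (subjective)",
            ],
        },
    ],
}

# Metrics come last: each one traces back through a question to the goal.
goal = gqm["goal"]
print(f"Goal: {goal['purpose']} the {goal['quality_attribute']} of "
      f"{goal['object']}, from the viewpoint of {goal['viewpoint']}")
for question in gqm["questions"]:
    print(f"  Question: {question['text']}")
    for metric in question["metrics"]:
        print(f"    Metric: {metric}")
```

Walking the structure top-down makes it obvious when a metric has no question (and therefore no goal) justifying it, which is exactly the trap the approach is meant to avoid.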

It’s easy to cut corners and jump into the things that are easy to measure first, especially these days, when you can collect lots of quantitative data to work with. However, if you don’t stop to think about the goals and motivations for measurement, it’s easy to forget the systemic complexity that surrounds us and look only at the easy-to-track numbers.

Lean management and problem solving are known for taking a very thorough and detailed approach in the understanding phase. To many people, this represents a paradigm shift in management. Don’t let the numbers fool you; use them to your advantage.


In this session, Joshua Kerievsky (the founder of Industrial Logic and author of “Refactoring to Patterns”) shared his experience of stepping away from estimation and moving into what he calls ‘micro-releases’. What got me interested is that, although he claimed not to be following the recent kanban/iteration-less techniques, he arrived at something relatively similar through his own experience.

He started by talking about the usual confusion of using different units to estimate stories (using points that people convert into time units when estimating, using ideal hours that people interpret as real hours, and so on). This confusion is also reflected in the way people interpret the team’s velocity. Instead of using it as a measure of the team’s capacity, it’s easy to turn it into a performance measure. Improving velocity becomes synonymous with improving productivity, and we still don’t know how to measure productivity in software.

Another common problem he raised is: “What to do with hang-over stories between iterations?” Are they considered waste? Should they be counted in the team’s velocity? Why is the story a hang-over? Is it because it’s close to being completed, or is it actually much bigger than the team expected?

His proposed practice of ‘micro-releases’ is based on a high degree of collaboration with the customer: there is a small, prioritized list of what should be done next, and the team picks up a small, coherent set of stories and ‘guesses’ the next micro-release date based on gut feel. He reports that slipping the end date of a micro-release is not such a big deal, because the next one is always close. In this model, the length of a micro-release varies, but is usually between 1 and 6 days (with some outliers like 15, as I recall from his slides, but not much beyond that). Because of that, there’s no stable measure of velocity, so tracking it becomes useless. Feeding the input list with the next stories is done in a just-in-time fashion, and the value produced by the software is realized earlier, as soon as each micro-release is finished.

It’s important to note that he still values the heartbeat of iterations for gathering feedback (like a customer showcase or a team retrospective). He simply detaches the release life-cycle from the iterations. This opinion was reinforced by Ron Jeffries in the audience, who said his team was releasing to production twice per iteration in one of his projects. What an interesting point of view: we usually tend to see a ‘release’ as the result of a set of ‘iterations’, but here the ‘release’ cycles are shorter than the ‘iterations’. Hmm…

But there are some important preconditions that should be in place for these techniques to work:

  • High collaboration with the customer
  • Ability to break down stories into small-but-still-valuable chunks: he reckons it’s very important to know how to break down requirements into “stories of the right size”. If they are too big, there’s no way for the team to guesstimate when they will be finished. He considers this practice one of the most important for any Agile project (using micro-releases or not).
  • Easy to deploy: There’s no point in having a new release after 2 days if it takes 3 days to deploy it to production.
  • Bargaining stories: alongside breaking stories into small chunks, it’s very important to teach the customer to bargain for features and avoid what he calls “Feature Fat” (building more than is actually needed, or gold-plating requirements). Again, this is something you can use regardless of the micro-releases approach.

Since these experiences started with Industrial Logic’s internal product development, some people argued that they only work because the team has deep experience with XP and Agile, but Joshua claimed he is introducing these ideas successfully in their new client projects.

This seems like a common theme in the agile space lately, and you can follow up on the discussions on InfoQ, the XP mailing list, or here. :-)

I’ve recently noticed an interesting, timeless statement in the Agile Manifesto that we usually forget: “We are uncovering better ways of developing software…”. We should be open to alternatives and to discussing new ideas!


Besides presenting the Coding Dojo experience report, one of my main reasons for going to Agile 2008 was to attend some of the Lean-related sessions. In this and the next posts, I will give a quick summary of the sessions I attended.

Expanding Agile: The Five Dimensions of Systems – Mary Poppendieck

Although not directly related to Lean, in this session Mary offered some insights on leadership and systems thinking that are worth talking about. Using the metaphor of plank roads (which provided fast improvements but deteriorated over time, eventually cancelling out their benefits), Mary questioned how sustainable Agile is.

She gave a great overview of the history of software development and of the term Software Engineering, from Waterfall to Agile, passing through ‘Plank Roads’ such as CMM, Spiral, and RAD, and highlighting some lessons learned along the way. Her main point was that the fragile aspects of our short history are mostly related to processes and what she called Project Management concepts (life-cycle, requirements, stages of testing, maturity levels, among others). On the other hand, the successes in our history are related to the Systems Engineering aspects of software (built-in quality, information hiding, continuous integration, skilled technical leaders, among others).

As she has said before, successful leadership needs two views: technical vision and marketing vision. She thinks process-related roles (such as a Scrum Master) do not guarantee successful software. The team needs to understand the entire system, and she highlighted five dimensions to consider:

  • Purpose: Is there a clear vision and shared understanding of success? Is there technical leadership and built-in learning cycles to verify and improve suitability for purpose?
  • Structure: Do junior people have the training and oversight to assure they produce well-factored code? Does everyone on the team understand what they are producing and how it fits into the overall system?
  • Integrity: Is quality built-in or inspected at the end of development? Are failures rare and investigated deeply to fix their root cause? Is such learning captured and shared in an effective manner?
  • Life Span: Are those responsible for operations deeply involved during development? Is there a long-term vision of maintainability? Are rewards structured so that developing a robust system that performs well over time is the most strongly encouraged behavior?
  • Results: Does the system pay back its overall economic investment? How quickly is ROI realized?

I think my main lesson from this session is that we should start rethinking our approach to operations and support. After all, they are part of the software’s life span and essential to the value stream. More about Lean and Agile 2008 in my next posts…


