Kenji Hiranabe was awarded this year’s Gordon Pask Award, and 2 of his sessions were voted for a re-run on the last day of the conference, which he decided to present together. In this post I will summarize the first half of his presentation.

New Product Development @ Toyota

Kenji presented an English version of Nobuaki Katayama’s (a former chief engineer at Toyota) talk at a Japanese Agile conference. The video from the first run of this talk is already available on InfoQ, but there are some points I think are worth highlighting:

  • Product development is a phased process: the first phase (getting the concept right) is all about creativity and insight, arriving at an overall vision of the product. For example, the vision for the Prius was not to be a hybrid-engine car (that decision came later, in the design phase), but to be energy-efficient. According to him, this phase should take as long as necessary to get the concept right. The second, design phase is milestone-driven, and the chief engineer has an overall cost buffer that he can use when making trade-off decisions during development. The third phase is going into the manufacturing line. This phased approach was brought up again by Alan Cooper in his closing keynote (commented slides are also available online).
  • Leadership characteristics: Toyota doesn’t value leaders with a dictator attitude. What really surprised me is that they also don’t seek charisma in their leaders. Leaders should be there to enable teamwork and keep a constant focus on the macro view (the product vision).
  • Bad news first: Toyota leaders don’t like to hear what is going well. They trust that everyone is doing their best to keep the good things going. They are there to remove impediments and help solve problems, so they cultivate a culture of always giving the bad news first.

Another interesting fact mentioned by Kenji is that, after his presentation, the chief engineer watched two experience reports on the adoption of XP and Agile in Japan. He said that, even though we follow different practices, we are applying the same engineering thinking. We use the changeability of software to defer decisions to the last responsible moment. In manufacturing, repetition (as in iterations) is considered a failure. But we use tests to continually keep quality high, allowing late changes to be implemented without incurring high costs.

Although software development is more like product development than product manufacturing, building a car is still different from building software. We need to be careful about how far we push our analogies. There’s definitely something to learn from Toyota and their product development process, but we won’t be able to replicate the same techniques and practices without careful thought.


I attended this session on Friday morning. This time the subject was another Lean tool: kanban. Corey Ladas showed 3 different project scenarios and presented different approaches to implementing a kanban system.

He started by talking about some important Lean concepts such as one-piece flow, work-in-process (WIP), and cycle time, and their relationship as shown in a Cumulative Flow Diagram (or finger chart). He showed how a constraint in the system can disrupt flow and do more harm than good to downstream processes by building up more and more inventory (WIP). He then went on to explain the benefits of using a kanban system to limit the amount of WIP in the different scenarios.
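The WIP/cycle-time relationship that a Cumulative Flow Diagram makes visible is commonly summarized by Little’s Law (average cycle time = average WIP / average throughput). A tiny sketch of the arithmetic, as my own illustration rather than anything from Corey’s slides:

```ruby
# Little's Law: average cycle time = average WIP / average throughput.
# A kanban system limits WIP directly, which (for a stable throughput)
# bounds how long each item spends in the system.
def average_cycle_time(wip:, throughput_per_day:)
  wip.to_f / throughput_per_day
end

# With 20 items in process and 2 items finished per day,
# each item takes ~10 days on average from start to finish:
average_cycle_time(wip: 20, throughput_per_day: 2.0) # => 10.0

# Halving the WIP limit halves the average cycle time:
average_cycle_time(wip: 10, throughput_per_day: 2.0) # => 5.0
```

This is why limiting WIP, rather than pushing more work into the system, is what actually shortens cycle times.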

The first scenario was a traditional waterfall-style process. He used a value stream map to identify the amount of value-producing time in each phase of the process, and used that to calculate an initial buffer size for each kanban lane. Instead of trying to explain the whole idea here, I suggest you read Corey’s series of 4 posts on the subject.

The second scenario was a transition from a Scrum process to a structured kanban in place of a traditional story/task board. Again, the approach is explained in more detail in a post about what he called Scrum-ban.

The third scenario would have been an improvement over an existing kanban process, but due to the number of questions throughout the presentation (which generated some quite interesting discussions), he didn’t have enough time to go into more detail.

My overall impression of the session was that he did a good job explaining kanban as a tool, and the reasoning behind using it to limit WIP and control the flow of your process. The discussions and questions were also very interesting, which showed the audience was engaged with the subject. What I didn’t like so much was his argument in favor of having specialists on the team, and for moving to kanban without too much change along the way. One of the fundamental Lean principles is kaizen, or continuous improvement, which encourages the team to constantly search for better alternatives. Change is part of the process and should be seen as a Good Thing. And I have already shared some of my impressions about Generalists vs. Specialists, so I’m a bit biased :-)

An important thing to keep in mind is that kanban is just one of the tools in our Lean toolkit. I particularly like Kenji Hiranabe’s InfoQ article on the subject, where he explains kanban in a broader context, acting as a balancing tool between reducing WIP and achieving one-piece flow, as well as a way to visualize kaizen. Lean is full of contradictions, and you must understand the underlying principles to be able to apply (and adapt) the practices to a particular situation. Using kanban per se shouldn’t be a goal for any agile team, but it is very important to understand how it works and the reasoning behind it. If you haven’t looked into kanban yet, you should definitely check out the references in this post.


In this session, Joshua Kerievsky (the founder of Industrial Logic and author of “Refactoring to Patterns”) shared his experiences of stepping away from estimation and moving to what he called ‘Micro-releases’. What got me interested is that, although he claimed not to be following the recent kanban/iteration-less techniques, he arrived at something relatively similar through his own experience.

He started by talking about the usual confusion caused by using different units to estimate stories (using points that people convert into time units when estimating, using ideal hours that people interpret as real hours, and so on). This confusion is also reflected in the way people interpret the team’s velocity. Instead of using it as a measure of the team’s capacity, it’s easy to turn it into a performance measure. Improving velocity becomes a synonym for improving productivity, and we still don’t know how to measure productivity in software.

Another common problem he raised is: what to do with hang-over stories between iterations? Are they considered waste? Should they be counted in the team’s velocity? Why is it a hang-over? Is it because it’s close to completion, or because it’s actually much bigger than the team expected?

His proposed practice of ‘micro-releases’ is based on a high degree of collaboration with the customer: there is a small, prioritized list of what should be done next, and the team picks up a small and coherent set of stories and ‘guesses’ the next micro-release date based on gut feel. He reports that slipping the end date of a micro-release is not such a big deal, because the next one is always close. In this model, the length of a micro-release varies, but it’s usually between 1 and 6 days (with some outliers like 15, as I recall from his slides, but not too far from that). Because of that, there’s no stable measure of velocity, and using one becomes useless. Feeding the input list with the next stories is done in a just-in-time fashion, and the value produced by the software is realized earlier, as soon as the micro-release is finished.

It’s important to note that he still values the heartbeat of iterations for gathering feedback (like a customer showcase or a team’s retrospective). But he simply detaches the release life-cycle from the iterations. This opinion was reinforced by Ron Jeffries in the audience, who said they were releasing to production twice per iteration on one of his projects. What an interesting point of view: we usually tend to see a ‘release’ as the result of a set of ‘iterations’, but he is reporting ‘release’ cycles smaller than ‘iterations’. Hmm…

But there are some important caveats that should be in place in order for these techniques to work:

  • High collaboration with the customer
  • Ability to break down stories into small-but-still-valuable chunks: he reckons it’s very important to know how to break down requirements into “stories of the right size”. If they are too big, there’s no way for the team to guesstimate when they will be finished. He considers this practice one of the most important in any Agile project (using micro-releases or not).
  • Easy to deploy: There’s no point in having a new release after 2 days if it takes 3 days to deploy it to production.
  • Bargaining stories: alongside breaking stories into small chunks, teaching the customer to bargain for features and avoid what he calls “Feature Fat” (building more than is actually needed, or gold-plating requirements) is very important (again, this is something you can use regardless of the micro-releases approach).

Since the experiences started with Industrial Logic’s internal product development, some people argued they are only capable of working this way because they have enough experience with XP and Agile, but Joshua claimed he is introducing these ideas successfully in their new client projects.

This seems like a common theme in the agile space lately, and you can follow up on the discussions on InfoQ, the XP mailing list, or here. :-)

I’ve found out recently that there’s an interesting timeless statement in the Agile Manifesto that we usually forget: “We are uncovering better ways of developing software…”. We should be open to alternatives and to discuss new ideas!


Continuing my series of posts about Agile 2008, I will summarize the session presented by Rod Coffin and Don McGreal about Pull Systems.

We played a simple game to demonstrate the concept of a pull system. The goal of the system was to produce “Mr. Potato Paper Heads” of different variations: square/triangle eyes with square/triangle mouths (4 variations in total). The “production line” was divided into 4 phases:

  1. Cut the FACE
  2. Assemble the EYES
  3. Assemble the MOUTH
  4. Launch to MARKET

In the first round, we simulated a push system: the first person was responsible for cutting the face and drawing a specification of what should be built, choosing the type of eyes and mouth. The next phases were responsible for cutting and gluing the eyes and the mouth, respectively. The last phase took the finished product and stuck it to the wall, representing a launch to the market. We knew the market would consume 10 faces, but we didn’t know how many of each type, so we had to guess.

At the end of the first round, the presenters revealed what the market actually requested, counting revenue and waste for each team. They then explained the concept of a pull system, which starts with the customer order and drives the upstream processes of the production line based on it.

In order to implement a pull system, we needed some buffers along the way (the mouth assembler would need at least one face with each variation of eyes in order to build and deliver anything the customer ordered). As soon as an eyes-only face was consumed, it triggered a signal to the eye assembler to build another of that kind to replace the buffer, creating another trigger for the upstream face cutter (triggers are represented in red in the following picture, while green represents something being delivered).

Pull System
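The replenishment mechanics can also be sketched in code. This is my own toy model of the game, not something shown in the session (all class and method names are mine, and buffers refill instantly, which a real production line obviously wouldn’t):

```ruby
# A minimal pull-system sketch: each stage keeps a one-item buffer per
# variation; consuming a finished item triggers a replenishment signal
# (a "kanban card") that ripples upstream.
class PullStage
  attr_reader :orders

  def initialize(name, upstream = nil)
    @name = name
    @upstream = upstream
    @buffer = Hash.new(1) # pre-built: one finished item per variation
    @orders = []          # replenishment signals received so far
  end

  # The market (or a downstream stage) pulls a finished item.
  def take(variation)
    raise "buffer empty for #{variation}" if @buffer[variation].zero?
    @buffer[variation] -= 1
    signal(variation) # consuming the buffer is what triggers production
    variation
  end

  # Rebuild the consumed item, pulling the needed part from the
  # upstream stage, which propagates the trigger further upstream.
  def signal(variation)
    @orders << variation
    @upstream&.take(variation)
    @buffer[variation] += 1
  end
end

face_cutter   = PullStage.new("face")
eye_assembler = PullStage.new("eyes", face_cutter)
mouth_stage   = PullStage.new("mouth", eye_assembler)

# One market order pulls from the mouth stage's buffer, and the trigger
# ripples upstream: mouth -> eyes -> face.
mouth_stage.take(:square_eyes)
mouth_stage.orders   # => [:square_eyes]
eye_assembler.orders # => [:square_eyes]
face_cutter.orders   # => [:square_eyes]
```

Note that nothing is produced until something is consumed: the customer order, not a forecast, is what sets every stage in motion.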

I think this was a very interesting and instructive session. It’s much easier to understand concepts in practice, by playing a game, than by reading about them in a book – or a blog ;-).


Besides presenting the Coding Dojo experience report, one of my main interests in going to Agile 2008 was attending some of the Lean-related sessions. In this and the next few posts I will give a quick summary of the sessions I attended.

Expanding Agile: The Five Dimensions of Systems – Mary Poppendieck

Although not directly related to Lean, in this session Mary gave some insights on leadership and systems thinking that are worth talking about. Using the metaphor of plank roads (which provided fast improvements but deteriorated over time, their costs eventually outweighing their benefits), Mary questioned how sustainable Agile is.

She gave a great overview of the history of software development and the term Software Engineering. From Waterfall to Agile, going through ‘Plank Roads’ such as CMM, Spiral, and RAD, but also highlighting some lessons learned. Her main point was that the fragile aspects of our short history are mostly related to processes and what she called Project Management concepts (life-cycle, requirements, stages of testing, maturity levels, among others). On the other hand, the successes from our history are related to the Systems Engineering aspects of software (built-in quality, information hiding, continuous integration, skilled technical leaders, among others).

As she has said before, successful leadership needs two views: technical vision and marketing vision. She thinks process-related roles (such as a Scrum Master) do not guarantee successful software. The team needs to understand the entire system, and she highlighted 5 dimensions to consider:

  • Purpose: Is there a clear vision and shared understanding of success? Is there technical leadership and built-in learning cycles to verify and improve suitability for purpose?
  • Structure: Do junior people have the training and oversight to assure they produce well-factored code? Does everyone on the team understand what they are producing and how it fits into the overall system?
  • Integrity: Is quality built-in or inspected at the end of development? Are failures rare and investigated deeply to fix their root cause? Is such learning captured and shared in an effective manner?
  • Life Span: Are those responsible for operations deeply involved during development? Is there a long-term vision of maintainability? Are rewards structured so that developing a robust system that performs well over time is the most strongly encouraged behavior?
  • Results: Does the system justify its overall economic investment? How quickly is ROI realized?

I think my main lesson from this session is that we should start re-thinking our approach to operations and support. After all, they are all part of the software life span and essential to the value stream. More about Lean and Agile 2008 in my next posts…


On the 15th and 16th of October I will be in São Paulo speaking at Rails Summit Latin America. I will present a session about Developer Testing, showing some interesting tools in the Ruby world, such as RSpec, Mocha, Synthesis, RCov, and Autotest. I will also co-present a session with George about Application Design for REST (or something along those lines).

Rails Summit Latin America

In the meantime, for the Portuguese-speaking audience, I have uploaded the slides, and Carlos Eduardo has made available the recording of the Merb webinar I presented last month at “Café com Tom”. I will be back in September to present another webinar, about BDD and RSpec.

Hope to see you all there!


August 12th, 2008: Coding Dojo @ Agile 2008

The main reason I went to the conference was to present my experience report on Wednesday about running a Coding Dojo in São Paulo since last year with my friends Hugo and Mariana. I have just made the slides available for download, as well as the published paper. Feel free to send questions and comments, especially about our unanswered questions.

Dojo@SP Mind Map

The overall feedback we got from the 30-minute presentation was very good, and we decided to do a Live Dojo at the Open Jam in the next available slot. Besides me, Hugo, and Mariana, Emmanuel and Arnaud from the Paris Dojo showed up, as did Ivan Sanchez from the Floripa Dojo, and other people joined us during the session.

We did a Randori session solving the Blocks Problem in Ruby/RSpec, switching pairs every 7 minutes. Although we didn’t finish the problem, it generated some interesting discussions about exposing the internal state vs. a representation of the state to the tests, and how valuable it is to introduce a data structure that you don’t need now but will soon need.

The problem generated some interest and, on the following day, Arnaud scheduled a slot at the Open Jam, and we decided to tackle the problem again, this time in Haskell. Dan and Liz were also there, and it was really cool to see the same problem from a different perspective. We discussed how some features of a functional language force you to think in a different way (I still don’t quite understand Monads, by the way) :-)

Another interesting learning point: in our paper, we dedicated a whole topic to the issues of people pairing in different environments (operating system, keyboard layout, shortcut keys, IDEs, …) at every meeting. At the Open Jam, Dan proposed using a local Mercurial repository to share the progress on development (pushing at each TDD step for the next pair to continue working on the problem). This allowed him to work on his Mac with a Dvorak layout, while we were using Emacs on a Linux laptop. The other benefit of this approach is that it lets people experience how Continuous Integration works in practice, committing as often as possible, whenever it makes sense to do so. Good stuff!


August 12th, 2008: My Agile 2008

It’s hard to summarize everything that happened during last week in one post, so I decided to break it into smaller posts to keep them short and interesting. Overall, Agile 2008 was great: about 1600 participants from 39 countries (I met over 10 Brazilians there! That’s much better than being the only one in 2006), the majority of them first-time attendees. With more than 40 concurrent sessions in each 1:30h slot, it’s hard to give a fair summary of what happened, which is why I decided to talk about My Agile 2008.

My Agile 2008 Mind Map

Networking is always good at these conferences: I got to see some old friends and met a bunch of interesting people: ThoughtWorkers from around the world (we were around 30 presenting at or attending the conference), Kiko from Canonical (the company behind Ubuntu), the guys from the Paris Dojo, and a lot of other people to share ideas with at the bar, the Ice Breaker, the Banquet, and all the social events.

The whole conference was organized around the concept of a music festival, having different Stages (with Producers and everything). My personal favorites this year were the Open Jam (self-organizing conference) and the Muzik Masti (a room with instruments and a lot of geek/musicians jamming).

Dan North on the Drums

In the next posts, I will share my experiences presenting and running a Coding Dojo at the conference, as well as some of my take away lessons from the sessions, keynotes, and conversations. Stay tuned! :-)


I was kindly invited by my friend Carlos Eduardo (e-Genial) to give 2 webinars (in Portuguese) to the Brazilian Ruby/Rails community over the next months. So, add these to your calendar:

  • 19/Jul/2008: Merb: web development with Ruby without Rails
  • 13/Sep/2008: BDD with RSpec

Both webinars will be held at 3:00 PM (Brazilian time) via Treina Tom, a great e-learning/e-conferencing tool developed by Carlos. More details can be found on the event website. Hope to see you there!


A recent post on InfoQ made me think about how I use mocks during development. I used to be a classicist tester and thought mocks were only useful for faking external services (usually slow or hard to set up). After learning more about interaction-based testing and BDD, I now see some real advantages of using mocks during development:

Outside-In Development

Behaviour-Driven Development is more than a variation of TDD. I like Dan North‘s explanation of BDD as an “outside-in” approach to development. We start at the Story level, defining the acceptance criteria as scenarios that create an agreement between the team and the customer on the meaning of DONE. From the outside view, we start digging into the domain model, designing the objects and services needed to implement the story. I find mocks really useful in this scenario, because I can define the interface of objects that don’t yet exist while focusing on the current layer of functionality. As a side effect, my tests become more isolated, and when they fail I usually have a good indication of where the problem is located, requiring less debugging.
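A minimal sketch of the idea, using a hand-rolled test double in plain Ruby instead of RSpec or Mocha so the mechanics stay visible (the domain classes are hypothetical, not from any real project):

```ruby
# Outside-in: OrderProcessor is the layer under test. PaymentGateway
# doesn't exist yet, so the test drives out only the interface the
# processor needs from its collaborator.
class OrderProcessor
  def initialize(gateway)
    @gateway = gateway
  end

  def process(order)
    @gateway.charge(order[:amount]) # the collaboration we want to design
  end
end

# A hand-rolled mock standing in for the not-yet-written gateway,
# recording the calls it receives so we can assert on the interaction.
class FakeGateway
  attr_reader :charges

  def initialize
    @charges = []
  end

  def charge(amount)
    @charges << amount
  end
end

gateway = FakeGateway.new
OrderProcessor.new(gateway).process(amount: 42)
gateway.charges # => [42] -- the interaction we expected
```

The test passes before any real `PaymentGateway` exists, and what it pins down is exactly the interface the next layer inward will have to implement.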

In this sense, I think the really important scenario for using mocks from Colin Mackay’s article is the last one: “when the real object does not yet exist”. I still mock other people’s objects at the boundaries of the system, to assert that my code interacts properly with external libraries. But I see much more value in mocking my own objects, designing interactions and isolating behaviour. Which leads to my next point…

CRC with tests

CRC cards were one of the techniques that taught me what good Object-Oriented design looks like. The technique focuses on designing behaviour-rich objects by defining their Responsibilities and Collaborators. In general, while state-based testing is great for defining Responsibilities, mocks give me the ability to precisely describe the Collaborators in my tests. Since I’m all for using tests as executable documentation, mocks turned out to be a great technique to express my intent. Leading to my last point…

Mocking methods vs. mocking objects

My last point is not exactly an advantage of using mocks, but an example of how a framework can influence how you approach development. Coming from the Java world, I was used to encapsulating behaviour behind well-defined interfaces. Now that I spend most of my time developing in Ruby, I not only have the ability to create a mock/stub object, but also to mock just one or two methods on a real object. These are usually called Partial Mocks.

Partial mocks allow me to express intent more precisely: instead of having to create a mock from an interface/class and record expectations on a proxy, I can specify the interactions on the real collaborators. It also makes the separation between state-based and interaction-based tests looser, because I can selectively choose which methods will be mocked and which will be real. Finally, I can use tools such as Synthesis to verify that the interactions I’m mocking in one test are actually verified in the collaborator’s test.
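In plain Ruby, the mechanism behind partial mocks boils down to redefining a single method on one instance via its singleton class; frameworks like RSpec and Mocha add expectation recording and verification on top. A hypothetical sketch (the class and the canned temperature are mine):

```ruby
# A real object with real behavior.
class WeatherReport
  def summary
    "It will be #{current_temperature} degrees today"
  end

  def current_temperature
    # imagine a slow call to an external weather service here
    raise "network not available in tests"
  end
end

report = WeatherReport.new

# Partial mock: replace just one method on this one instance;
# everything else on the object (like #summary) stays real.
report.define_singleton_method(:current_temperature) { 25 }

report.summary # => "It will be 25 degrees today"
```

Only the slow, external part is faked, so the test still exercises the real formatting logic in `summary`.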

I know some people don’t feel comfortable using mocks, and some even find them complex, but I think there’s great value in using them in the appropriate context. Instead of complaining about different frameworks’ idiosyncrasies and complexities, I think people should focus on their benefits as a design technique.



© 2007-2009 Danilo Sato | Powered by Wordpress
