In my last QCon presentation in São Paulo, I gave the audience my take on refactoring strategies. You can check out my slides from that talk on Slideshare (in Portuguese, but you should be able to follow the pictures and code samples).

One of my main points is that you need to learn how to shift between two levels of thinking when you engage in refactoring, each requiring you to develop a different set of skills. I called them Mechanic vs. Strategic.

Mechanics is important because you should be doing small, incremental, and opportunistic refactorings all the time while you develop code. It’s not a coincidence that TDD has an explicit refactoring step after your tests are green. In order to improve the design of your code you should master how to perform these small steps well. A lot of modern IDEs – especially in statically typed languages – provide great support for this, and you should learn how to use them well.
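To make the mechanical level concrete, here is a minimal sketch of one of these small, behaviour-preserving steps (an "extract method" refactoring) using a hypothetical `Order` class; the names are mine, not from the talk:

```ruby
# A hypothetical Order class after an "extract method" refactoring:
# the total calculation was pulled out of receipt so that each method
# reads at a single level of abstraction.
class Order
  def initialize(items)
    @items = items # each item is a hash with :price and :quantity
  end

  def receipt
    "Total: #{format('%.2f', total)}"
  end

  private

  # Extracted method: one small, behaviour-preserving step.
  def total
    @items.sum { |item| item[:price] * item[:quantity] }
  end
end
```

Each such step is tiny and safe on its own; the strategic level, discussed next, is about choosing and ordering many of them.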

On a strategic level, you have to be able to take a step back and understand the big picture of what you’re trying to achieve. While it’s important to know how to perform individual refactorings, it’s more important at this level to know how to compose them in order to achieve the design improvement you desire.

One of the examples I used to demonstrate this was inspired by a blog entry that Paul Hammant published a few weeks before my talk. I got some feedback that it was hard to follow the code changes on the slides, so I decided to record a small video of me tackling his refactoring experiment.

In the following video, the strategy is not overly complex: extracting a few methods from a single controller into separate controllers. However, you will see how the order of steps you use can affect the refactoring, making your job harder or easier. You should also be able to pick up a lot of the mechanics along the way:

You can find some of my attempts in branches on this GitHub repository. If you want to try it out for yourself, or to take this experiment further, these are some of the things I would explore:

  • The Upseller class should have its own tests, and I missed the @Component and @Scope("request") annotations that would probably be caught by integration tests.
  • Using Mockito or another mocking framework to mock collaborators, and adding more behaviour to the domain model objects, since using toString for testing is not something I would normally do in real life.
  • Perhaps I could’ve used git stash instead of git reset to save the initial refactoring steps and reapply them later, assuming the merge would be simple.
  • The path I took is not the only one; you can probably explore different refactoring steps and different approaches to get to the goal.

I would like to thank Paul Hammant for coming up with this experiment and for encouraging me to publish this screencast. Please send me some feedback if you think this was useful or helpful for you!


A recent post on InfoQ made me think about how I use mocks during development. I used to be a classical tester and thought mocks were only useful for faking external services (usually slow or hard to set up). After learning more about interaction-based testing and BDD, I now see some good advantages of using mocks during development:

Outside-In Development

Behaviour-Driven Development is more than a variation of TDD. I like Dan North’s explanation of BDD as an “outside-in” approach to development. We start at the Story level, defining the acceptance criteria as scenarios that create an agreement between the team and the customer on the meaning of DONE. From the outside view, we start digging into the domain model, designing the objects and services needed to implement the story. I find mocks really useful in this scenario, because I can define the interface of objects that don’t exist yet, while focusing on the current layer of functionality. As a side effect, my tests become more isolated, and when they fail I usually have a good indication of where the problem is located, requiring less debugging.
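A minimal, framework-free sketch of this outside-in move, with hypothetical names: `CheckoutService` is the layer under design, and `PaymentGateway` does not exist yet, so a hand-rolled mock defines the interface we wish it had:

```ruby
# The layer currently under design. Its collaborator (a payment
# gateway) does not exist yet; the test double below defines the
# interface we want it to have.
class CheckoutService
  def initialize(gateway)
    @gateway = gateway
  end

  def checkout(amount)
    @gateway.charge(amount) # the interaction we are designing
  end
end

# Hand-rolled mock: records the interaction so a test can verify it.
class FakeGateway
  attr_reader :charged_amounts

  def initialize
    @charged_amounts = []
  end

  def charge(amount)
    @charged_amounts << amount
    :ok
  end
end
```

Once the real gateway is written, it only has to honour the `charge` interface the mock pinned down.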

In this sense, I think the really important scenario for using mocks from Colin Mackay’s article is the last one: “when the real object does not yet exist”. I still mock someone else’s objects in the boundaries of the system, to assert that my code interacts properly with external libraries. But I see much more value in mocking my own objects, designing interactions and isolating behaviour. Which leads to my next point…

CRC with tests

CRC cards were one of the techniques that taught me what good Object-Oriented design looks like. The technique focuses on designing behaviour-rich objects, defining their Responsibilities and Collaborators. In general, while state-based testing is great for defining Responsibilities, mocks give me the ability to precisely describe the Collaborators in my tests. Since I’m all for using tests as executable documentation, mocks turned out to be a great technique to express my intent. Leading to my last point…
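This CRC split can be seen in a single small test, sketched here with hypothetical names: `Account`'s Responsibility (keeping a balance) is checked with a state-based assertion, while its Collaborator (an audit log) is checked through the interaction it receives:

```ruby
# Responsibility: track a balance. Collaborator: an audit log that is
# told about every deposit.
class Account
  attr_reader :balance

  def initialize(audit_log)
    @balance = 0
    @audit_log = audit_log
  end

  def deposit(amount)
    @balance += amount                     # state: the Responsibility
    @audit_log.record("deposit #{amount}") # interaction: the Collaborator
  end
end

# Hand-rolled double standing in for the audit log.
class RecordingLog
  attr_reader :entries

  def initialize
    @entries = []
  end

  def record(entry)
    @entries << entry
  end
end
```

A test would assert both `account.balance` (state) and `log.entries` (interaction), documenting the card's two columns in executable form.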

Mocking methods vs. mocking objects

My last point is not exactly an advantage of using mocks, but an example of how a framework can influence how you approach development. Coming from the Java world, I was used to encapsulating behaviour behind well-defined interfaces. Now that I spend most of my time developing in Ruby, I not only have the ability to create a mock/stub object, but also to mock just one or two methods on a real object. These are usually called Partial Mocks.

Partial mocks allow me to express intent more precisely: instead of having to create a mock from an interface/class and record expectations in a proxy, I can specify the interactions on the real collaborators. It also makes the separation between state-based and interaction-based tests looser, because I can selectively choose which methods will be mocked and which will be real. Finally, I can use tools such as Synthesis to verify that the interactions I’m mocking in one test are actually being verified in the collaborator’s test.
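Mocking frameworks give you partial mocks as a one-liner, but the idea can be shown in plain Ruby with a hypothetical `Report` class: the object stays real, and only one method is replaced for the test:

```ruby
# A real object whose header depends on the clock.
class Report
  def header
    "Generated at #{current_time}"
  end

  def current_time
    Time.now.to_s
  end
end

report = Report.new
# Partial mock: replace current_time on this one instance only; the
# rest of the object keeps its real behaviour.
report.define_singleton_method(:current_time) { "2009-01-01 00:00:00" }
```

After this, `report.header` returns a deterministic string, while any other `Report` instance still uses the real clock.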

I know some people don’t feel comfortable using mocks, and some even find them complex, but I think there’s great value in using them in the appropriate context. Instead of complaining about different frameworks’ idiosyncrasies and complexities, I think people should focus on their benefits as a design technique.


April 24th, 2006 – Incremental Design in XP

In this post we will talk a bit about some of the myths surrounding design in XP, and about Kent Beck’s new approach to ensuring the quality and simplicity of a system’s design.

In the first edition of Extreme Programming Explained, Kent Beck proposed three practices directly related to the design of the system: Simple Design, Refactoring, and Metaphor. Of the three, the last generated the most confusion among XP practitioners and is one of the least used in practice. Regarding the other two, I have heard several criticisms and questions.

Some of these questions relate to the way the practices were presented in the first edition, while others are based on a mistaken interpretation of the practice in the context of XP. The answer to these questions is better structured in a new practice from the second edition of the book: Incremental Design. Invest in the system’s design a little every day, and strive for a design that is excellent at solving the system’s current needs.

  • When?: Rather than thinking about design rarely or never, XP’s strategy is to think about design all the time.
  • How?: Make changes in small, safe steps. The discipline that enables the constant evolution of design in XP is refactoring, which improves the quality of the code without changing the behaviour of the system.
  • Where?: The simplest and most effective heuristic is to eliminate duplicated code. Adding a feature in a single place is much simpler and faster than having to touch code scattered across several classes.
  • Why?: A simple design is more communicative and easier to understand and change. If XP’s goal is to keep the cost of change constant, being able to change the design at any time is essential.
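The "Where?" heuristic (eliminating duplicated code) can be sketched with a small hypothetical example: after extracting the duplicated discount rule into a single method, a change to the rule has exactly one home:

```ruby
# Before this refactoring, the discount rule was repeated in both
# price methods; changing it meant editing two places. Extracting it
# gives the rule a single home.
class PriceCalculator
  DISCOUNT_RATE = 0.1 # the one place where the rule lives

  def online_price(base)
    apply_discount(base)
  end

  def in_store_price(base)
    apply_discount(base)
  end

  private

  def apply_discount(base)
    base * (1 - DISCOUNT_RATE)
  end
end
```

Adding a new kind of price, or changing the rate, now touches one method instead of several scattered ones.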

The new criteria proposed by Kent Beck for evaluating simplicity are, in order of priority:

  1. Appropriate for the intended audience: The code should be easily understood by the people who will work with it.
  2. Communicative: Every idea and intention should be present in the code, making it easier for the team, and for future readers, to understand.
  3. Factored: The existence of duplicated code makes changes and understanding harder.
  4. Minimal: Subject to the three constraints above, a system should have the smallest possible number of elements. Fewer classes mean less code to test, document, and communicate.

To help in understanding XP’s criteria for evaluating and producing quality code, another practice – one that at first may not seem directly related to design – turns out to be very important: Test-Driven Development (TDD). It will be the topic of the next posts and, contrary to what it may seem, it is as much about producing simpler, more communicative, factored, and minimal code as it is about producing automated tests.


© 2007-2009 Danilo Sato | Powered by Wordpress
