An important technique for reducing the risk of deployments is known as Blue-Green Deployments. If we call the current live production environment “blue”, the technique consists of bringing up a parallel “green” environment with the new version of the software and once everything is tested and ready to go live, you simply switch all user traffic to the “green” environment, leaving the “blue” environment idle. When deploying to the cloud, it is common to then discard the idle environment if there is no need for rollbacks, especially when using immutable servers.

If you are using Amazon Web Services (AWS) as your cloud provider, there are a few options to implement blue-green deployments depending on your system’s architecture. Since this technique relies on performing a single switch from “blue” to “green”, your choice will depend on how you are serving content in your infrastructure’s front-end.

Single EC2 instance with Elastic IP

In the simplest scenario, all your public traffic is being served from a single EC2 instance. Every instance in AWS is assigned two IP addresses at launch – a private IP that is not reachable from the Internet, and a public IP that is. However, if you terminate your instance or if any failure occurs, those IP addresses are released and you will not be able to get them back.

An Elastic IP is a static IP address allocated to your AWS account that you can assign as the public IP for any EC2 instance you own. You can also reassign it to another instance on demand, by making a simple API call.

In our case, Elastic IPs are the simplest way to implement the blue-green switch – launch a new EC2 instance, configure it, deploy the new version of your system, test it, and when it is ready for production, simply reassign the Elastic IP from the old instance to the new one. The switch will be transparent to your users and traffic will be redirected almost immediately to the new instance.
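As a rough sketch, the whole switch is a single `associate_address` call. The snippet below uses today's boto3 SDK for illustration (the allocation and instance IDs are hypothetical), with the import deferred inside the function so it reads without live AWS credentials at hand:

```python
def swap_elastic_ip(allocation_id, green_instance_id):
    """Point the Elastic IP at the new "green" instance (the blue-green switch)."""
    import boto3  # deferred import: only needed when actually performing the swap
    ec2 = boto3.client("ec2")
    # AllowReassociation moves the address off the old "blue" instance in one call
    ec2.associate_address(
        AllocationId=allocation_id,    # the EIP's eipalloc-... id (hypothetical)
        InstanceId=green_instance_id,  # the tested "green" EC2 instance
        AllowReassociation=True,
    )
```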

Multiple EC2 instances behind an ELB

If you are serving content through a load balancer, then the same technique would not work because you cannot associate Elastic IPs to ELBs. In this scenario, the current blue environment is a pool of EC2 instances and the load balancer will route requests to any healthy instance in the pool. To perform the blue-green switch behind the same load balancer you need to replace the entire pool with a new set of EC2 instances containing the new version of the software. There are two ways to do this – automating a series of API calls or using AutoScaling groups.

Every AWS service has an API and a command-line client that you can use to control your infrastructure. The ELB API allows you to register and de-register EC2 instances, which will either add or remove them from the pool. Performing the blue-green switch with API calls will require you to register the new “green” instances while de-registering the “blue” instances. You can even perform these calls in parallel to switch faster. However, the switch will not be immediate because there is a delay between registering an instance to an ELB and the ELB starting to route requests to it. This is because the ELB only routes requests to healthy instances and it has to perform a few health checks before considering the new instances as healthy.
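A minimal sketch of that register/de-register swap, assuming the classic ELB API via boto3 (load balancer name and instance IDs are hypothetical):

```python
def swap_elb_pool(lb_name, green_ids, blue_ids):
    """Replace the "blue" pool behind a classic ELB with the "green" one."""
    import boto3  # deferred import: running this requires live AWS credentials
    elb = boto3.client("elb")  # classic ELB API, as in the scenario above
    # Register the green instances first; the ELB will only route to them
    # after its health checks pass, so expect a delay before the switch completes.
    elb.register_instances_with_load_balancer(
        LoadBalancerName=lb_name,
        Instances=[{"InstanceId": i} for i in green_ids],
    )
    # Then drop the blue instances out of the pool.
    elb.deregister_instances_from_load_balancer(
        LoadBalancerName=lb_name,
        Instances=[{"InstanceId": i} for i in blue_ids],
    )
```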

The other option is to use the AWS service known as AutoScaling. It allows you to define automatic rules for triggering scaling events, either increasing or decreasing the number of EC2 instances in your fleet. To use it, you first need to define a launch configuration that specifies how to create new instances – which AMI to use, the instance type, security group, user data script, etc. Then you can use this launch configuration to create an auto-scaling group, defining the number of instances you want to have in it. AutoScaling will then launch the desired number of instances and continuously monitor the group: if an instance becomes unhealthy it will replace it, and if a threshold is crossed it will scale the fleet up or down based on demand.

AutoScaling groups can also be associated with an ELB, in which case AutoScaling takes care of registering and de-registering EC2 instances with the load balancer any time a scaling event occurs. However, the association can only be made when the group is first created, not after it is running. We can use this feature to implement the blue-green switch, but it requires a few non-intuitive steps, detailed here:

  1. Create the launch configuration for the new “green” version of your software.
  2. Create a new “green” AutoScaling group using the launch configuration from step 1 and associate it with the same ELB that is serving the “blue” instances. Wait for the new instances to become healthy and get registered.
  3. Update the “blue” group and set the desired number of instances to zero. Wait for the old instances to be terminated.
  4. Delete the “blue” AutoScaling group and launch configuration.
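The four steps above can be sketched against the AutoScaling API roughly as follows. This is a boto3 illustration, not a hardened script: the group names, AMI, instance type, and availability zone are hypothetical, and the two waits are left as comments:

```python
def blue_green_asg_swap(elb_name, ami_id, count,
                        green_lc="green-lc", green_asg="green-asg",
                        blue_lc="blue-lc", blue_asg="blue-asg"):
    import boto3  # deferred import: running this requires live AWS credentials
    asg = boto3.client("autoscaling")
    # 1. Launch configuration for the new "green" version.
    asg.create_launch_configuration(
        LaunchConfigurationName=green_lc,
        ImageId=ami_id,            # AMI baked with the new software version
        InstanceType="t2.micro",   # hypothetical instance type
    )
    # 2. Green group attached to the same ELB serving the blue instances.
    asg.create_auto_scaling_group(
        AutoScalingGroupName=green_asg,
        LaunchConfigurationName=green_lc,
        MinSize=count, MaxSize=count, DesiredCapacity=count,
        LoadBalancerNames=[elb_name],
        AvailabilityZones=["us-east-1a"],  # hypothetical zone
    )
    # ...wait for the green instances to register and pass ELB health checks...
    # 3. Drain the blue group down to zero instances.
    asg.update_auto_scaling_group(
        AutoScalingGroupName=blue_asg, MinSize=0, DesiredCapacity=0
    )
    # ...wait for the old instances to terminate...
    # 4. Delete the blue group and its launch configuration.
    asg.delete_auto_scaling_group(AutoScalingGroupName=blue_asg)
    asg.delete_launch_configuration(LaunchConfigurationName=blue_lc)
```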

This procedure will maintain the same ELB while replacing the EC2 instances and the AutoScaling group behind it. The main drawback of this approach is the delay: you have to wait for the new instances to launch, for the AutoScaling group to consider them healthy, for the ELB to consider them healthy, and then for the old instances to terminate. While the switch is happening there is a period of time when the ELB routes requests to both “green” and “blue” instances, which could have an undesirable effect for your users. For that reason, I would probably not use this approach for blue-green deployments with ELBs and would instead consider the next option – DNS redirection.

DNS redirection using Route53

Instead of exposing Elastic IP addresses or long ELB hostnames to your users, you can have a domain name for all your public-facing URLs. Outside of AWS, you could perform the blue-green switch by changing CNAME records in DNS. In AWS, you can use Route53 to achieve the same result. With Route53, you create a hosted zone and define resource record sets to tell the Domain Name System how traffic is routed for that domain.

You can use Route53 to perform the blue-green switch by bringing up a new “green” environment – it could be a single EC2 instance, or an entire new ELB – then you simply update the resource record set to point the domain/subdomain to the new instance or the new ELB.

Even though Route53 supports this common DNS approach, there is a better alternative. Route53 has an AWS-specific extension to DNS that integrates better with other AWS services, and is cheaper too – alias resource record sets. They work pretty much the same way, but instead of pointing to any IP address or DNS record, they point to a specific AWS resource: a CloudFront distribution, an ELB, an S3 bucket serving a static website, or another Route53 resource record set in the same hosted zone.
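As an illustration, switching a domain to a new “green” ELB is a single UPSERT of an alias record. The function below builds the change batch you would pass to Route53's ChangeResourceRecordSets call; the zone ID and DNS name are hypothetical, and note that the `HostedZoneId` inside the alias target is the ELB's own hosted zone, not yours:

```python
def alias_switch_batch(domain, elb_zone_id, elb_dns_name):
    """Route53 change batch that aliases `domain` to a new ELB."""
    return {
        "Comment": "blue-green switch: point the domain at the green ELB",
        "Changes": [{
            "Action": "UPSERT",  # replaces the record if it already exists
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": elb_zone_id,  # the ELB's hosted zone id
                    "DNSName": elb_dns_name,      # e.g. a *.elb.amazonaws.com name
                    "EvaluateTargetHealth": False,
                },
            },
        }],
    }
```

You would submit the result with `boto3.client("route53").change_resource_record_sets(HostedZoneId=..., ChangeBatch=...)`.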

Finally, another way to perform the blue-green switch with Route53 is using Weighted Round-Robin. This works for both regular resource record sets and alias resource record sets. You have to associate multiple answers with the same domain/sub-domain and assign a weight between 0 and 255 to each entry. When processing a DNS query, Route53 will select one answer using a probability calculated from those weights. To perform the blue-green switch you need an existing entry for the current “blue” environment with weight 255 and a new entry for the “green” environment with weight 0. Then, simply swap those weights to redirect traffic from blue to green.
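A sketch of the weight swap as a Route53 change batch (the CNAME targets are hypothetical; each weighted answer for the same name needs its own `SetIdentifier`):

```python
def weight_swap_batch(domain, blue_target, green_target):
    """Swap the weights so traffic shifts from blue (255 -> 0) to green (0 -> 255)."""
    def weighted(set_id, target, weight):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "CNAME",
                "TTL": 60,                # a short TTL speeds up propagation
                "SetIdentifier": set_id,  # distinguishes the weighted answers
                "Weight": weight,
                "ResourceRecords": [{"Value": target}],
            },
        }
    return {"Changes": [
        weighted("blue", blue_target, 0),      # was 255
        weighted("green", green_target, 255),  # was 0
    ]}
```

Intermediate weights (say, 245 and 10) give you the canary-release behaviour mentioned below.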

The only disadvantage of this approach is that propagating DNS changes can take some time, so you would have no control over when the user will perceive it. The benefits are that you expose human-friendly URLs to your users, the switch happens with near zero-downtime, you can test the new “green” environment before promoting it, and with weighted round-robin you get the added flexibility of doing canary releases for free.

Environment swap with Elastic Beanstalk

The last scenario is when you are deploying your web application to Elastic Beanstalk, Amazon’s platform-as-a-service offering that supports Java, .Net, Python, Ruby, NodeJS and PHP. Elastic Beanstalk has a built-in concept of an environment that allows you to run multiple versions of your application, as well as the ability to perform zero-downtime releases. Therefore, the blue-green switch simply consists of creating a new “green” environment and following the steps in the documentation to perform the swap.
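The swap itself maps to a single API call, SwapEnvironmentCNames, which exchanges the CNAMEs of the two environments (boto3 illustration; the environment names are hypothetical):

```python
def swap_beanstalk_environments(blue_env="myapp-blue", green_env="myapp-green"):
    """Exchange the CNAMEs of the blue and green Elastic Beanstalk environments."""
    import boto3  # deferred import: running this requires live AWS credentials
    eb = boto3.client("elasticbeanstalk")
    eb.swap_environment_cnames(
        SourceEnvironmentName=blue_env,
        DestinationEnvironmentName=green_env,
    )
```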

Conclusion

Blue-Green deployment is an important technique to enable Continuous Delivery. It reduces risk by allowing testing prior to the release of a new version to production, while at the same time enabling near zero-downtime deployments, and a fast rollback mechanism should something go wrong. It is a powerful technique to manage software releases especially when you are using cloud infrastructure. Cloud providers such as AWS enable you to easily create new environments on-demand and provide different options to implement Blue-Green deployments.


“A Coding Dojo is a meeting where a bunch of coders get together to work on a programming challenge. They are there to have fun and to engage in deliberate practice in order to improve their skills.” — Definition from codingdojo.org

I’ve participated in and facilitated many Coding Dojo sessions since 2007, and people have asked me this question so many times that I decided to answer it in this post: how can I start a Coding Dojo, and how do I keep it engaging?

First of all, I would highly recommend the book The Coding Dojo Handbook by Emily Bache. She dedicates an entire section to tips on how to organize a Coding Dojo. Instead of repeating her advice, I wanted to provide a few tips based on my own experience founding the Coding Dojo São Paulo and running sessions inside ThoughtWorks and at some of our clients.

Engage a small group of core members

It is important to create a cadence for the meetings, and although having different people show up each time is good, I found it very valuable to engage a small group of core members who are almost always there. This creates healthy social pressure to keep things going and gives flexibility when one person cannot attend.

In the beginning I facilitated most of the sessions, but once the core team got used to the process we started rotating the facilitator role. This also meant that when I had to move to another country, the Coding Dojo did not stop. I knew I left it in the good hands of people like Mariana, Hugo, Thiago, and although none of us attend the sessions anymore, the Coding Dojo São Paulo is still alive and active. New people are running it in different locations throughout São Paulo and our mailing list has more than 350 members with new people joining every week.

Keep it fresh! Try out new ideas

Following the standard meeting styles is a good way to start. In the Randori format, only one pair codes while the rest of the audience follows along, and the pair rotates every 5-7 minutes. In a Prepared Kata, one person codes the whole time, presenting their solution while the audience follows along. These formats are simple but provide enough structure for a fun and effective session. However, after a while, you should start experimenting with new ideas. This is how we ended up inventing the Kake style in São Paulo. A Code Retreat, popularized by Corey Haines, is another format worth trying.

Keeping things fresh doesn’t only apply to the format of the session. Try experimenting with variations on:

  • Programming language
  • Creating constraints: like following Object Calisthenics rules or using gigantic font sizes to avoid long lines of code
  • Tools and environment: IDEs, text editors, operating systems, testing frameworks
  • Design approaches: solving the same kata with different designs or algorithms

In summary, don’t let things get stale or boring.

Be welcoming and tolerant with new members

The Dojo should happen in a safe environment, and as a facilitator you need to create that safety. Be aware of people’s behaviors during the meeting to avoid things like: speaking over the pair, opening their own laptop and coding in parallel, making big-bang changes to the code, rushing to complete the solution in their 5-minute slot, or using language-specific idioms without explanation.

When I am facilitating the session, if there is even one new person in the group, I always go through an introductory slide deck. It takes me only five minutes to present, but it sets the scene and expectations about what’s about to happen. I also reinforce the fact that they are not required to code and can simply observe – the only people who will code are those willing to do so. I cannot count how many times new participants who wanted to be observers volunteered to step up half-way through the meeting, once they got a sense of what was happening.

Another strategy I use is to put myself in a “beginner’s mind” the entire time and observe people’s body language during the session. If someone frowns or seems puzzled about something and doesn’t ask, I will ask on their behalf even if I know the answer. Instead of just giving the answer or explaining what happened, I find that by asking the question I demonstrate the behavior I want to encourage. Human beings are good at copying others, and once they see someone else doing it, they’re much more likely to do it themselves.

Find your group’s sustainable pace

As I mentioned before, meeting cadence is important. However, you should find the right cadence for your group. When we started the Coding Dojo São Paulo, we were so excited that we ran 2 sessions per week. After a while it became clear that this was a hard pace to keep up with, so we switched to weekly meetings. This has shifted over time between bi-weekly and weekly as the group of participants changed.

If you are running a Coding Dojo inside your company, depending on how much support you get internally, you can run sessions more frequently.

Another aspect of cadence to adjust is the duration of the meeting. I’ve participated in sessions that lasted 2 hours or more, but that might not be sustainable for your group. Too short and you don’t get enough time to learn. Too long and people get tired.

My advice is to experiment and gather feedback in the retrospective at the end of the session. In my experience the duration should be at least one hour, although an hour and a half to two hours might be better, and weekly or bi-weekly meetings are the best frequency.

Prepare prior to the session

A little bit of preparation will help the meeting go smoother. Here are some of the things I’ve done to prepare:

  • Have a pre-selected list of problems for people to choose from
  • Decide on the format
  • Make sure you have all the equipment you need (keyboards with different layouts, projector, adapters, post-it notes, etc.)
  • Environment setup: create the project structure, have an initial skeleton for source files, make sure tests execute
  • Order food: in longer meetings or after work hours, I try to get food ordered prior to the session to save time
  • Have the introductory slides at hand
  • Send a reminder e-mail confirming location, date, and time
  • Engage key participants: if you’re trying a new language, find a language guru so she can assist during the session; if you’re doing a prepared kata, make sure the presenter is ready; if you want to share the session outcomes, find someone to be a scribe.

Respect the time

Finally, try to respect people’s time. This was especially hard in Brazil since we are all used to being late :-)

People who arrive on time should be rewarded with an on-time start to get the maximum out of the coding session. Also, respect the timebox when rotating pairs and don’t let the meeting run longer than you planned. As programmers, we are all passionate about making things work, so stopping half-way through a solution is not an easy ask. However, the goal of the Coding Dojo is to learn from each other, not to finish the entire Kata. I would much rather save the last 15 minutes for the group retrospective to discuss what we learned than have a few more rounds of coding without a wrap-up at the end.

Take action!

Hopefully these tips will help you get off the ground. We have just recently started a Coding Dojo at ThoughtWorks’ San Francisco office so I felt inspired to finally write about this. Let me know if you have more tips on how to run a great Coding Dojo!


In my last QCon presentation in São Paulo, I gave the audience my take on refactoring strategies. You can check out my slides from that talk on Slideshare (in Portuguese, but you should be able to follow the pictures and code samples).

One of my main points is that you need to learn how to shift between two levels of thinking when you engage in refactoring, each requiring you to develop different sets of skills. I called them: Mechanic vs. Strategic.

Mechanics is important because you should be doing small, incremental, and opportunistic refactorings all the time while you develop code. It’s not a coincidence that TDD has an explicit refactoring step after your tests are green. In order to improve the design of your code you should master how to perform these small steps well. A lot of modern IDEs – especially for statically typed languages – provide great support for this, and you should learn how to use them well.

On a strategic level, you have to be able to take a step back and understand the big picture of what you’re trying to achieve. While it’s important to know how to perform individual refactorings, it’s more important at this level to know how to compose them in order to achieve the design improvement you desire.

One of the examples I used to demonstrate this was inspired by a blog entry that Paul Hammant published a few weeks before my talk. I got some feedback that it was hard to follow the code changes on the slides, so I decided to record a small video of me tackling his refactoring experiment.

In the following video, the strategy is not overly complex: extracting a few methods from a single controller into separate controllers. However, you will see how the order of steps you use can affect the refactoring, making your job harder or easier. You should also be able to pick up a lot of the mechanics along the way:

You can find some of my attempts in branches on this Github repository. If you want to try it out for yourself, or to take this experiment further, these are some of the things I would explore:

  • The Upseller class should have its own tests, and I missed the @Component and @Scope("request") annotations, which would probably have been caught by integration tests.
  • Using Mockito or some other mocking framework to mock collaborators, and adding more behaviour to the domain model objects, since using toString for testing is not something I would normally do in real life.
  • Perhaps I could have used git stash instead of git reset to save the initial refactoring steps and later reapply them, assuming the merge would be simple.
  • The path I took is not the only one and you can probably explore different refactoring steps and different approaches to get to the goal.

I would like to thank Paul Hammant for coming up with this experiment and for encouraging me to publish this screencast. Please send me some feedback if you think this was useful or helpful for you!



Agile Brazil 2011

I’m excited to be going to Fortaleza at the end of this month to present and organize Agile Brazil 2011. Last year in Porto Alegre I decided to focus on the organization aspects of the conference, but this year I decided to present again. Even though I’ve been participating in conferences around the world, my last talk in Brazil was in 2008 and I really miss being around the Brazilian community and sharing experiences with everyone. That’s why I’m happy to be presenting in (mostly) Portuguese again :-)

Managing your technical debt – June 29th

In this 50-minute talk, I will cover a few practices and ideas I’ve used and seen used on projects to manage technical debt responsibly. Some of the topics I will cover are:

  • What is technical debt and what are the consequences of incurring it
  • Ideas on how to identify “hot spots”
  • How to prioritize and plan the payment of your debt
  • Tracking and visibility
  • How to avoid incurring debt
  • Communicating the importance of paying technical debt to non-technical managers and stakeholders

Slicing and dicing your user stories – July 1st

Co-presenting with Jenny, in this 50-minute talk we will discuss the benefits of working with small user stories and present different ways to split requirements into user stories. The session will cover topics related to:

  • What makes good user stories
  • How to break down features into smaller chunks without losing track of the overall goal
  • Different ways to split stories into vertical slices
  • Helping stakeholders to track and understand how the feature will be delivered piece by piece
  • Planning the delivery to increase feedback

Refactoring Katas – July 1st

In this 10-minute Lightning Talk, I’ll share an idea I’ve been using to practice refactoring. Using a different Kata format, I will explain the mechanics and quickly demonstrate it in practice.

If you haven’t registered yet, you can still register online. And if you are around Rio and São Paulo the following week, I will be giving ThoughtWorks’ AWS Training, which you can register here. I’m looking forward to seeing you in Fortaleza!


Agile Brazil 2011 – I’m going!

I’m excited about the trip to Fortaleza at the end of the month to attend and organize Agile Brazil 2011. Last year in Porto Alegre I decided to focus more on organizing the event, but this year I decided to present. Even though I’ve been attending conferences around the world, my last talk in Brazil was in 2008 and I miss sharing experiences with the Brazilian community. That’s why I’m happy to be presenting (mostly) in Portuguese again :-)

Managing your technical debt – June 29th

In this 50-minute talk, I will present some ideas and practices I have been using to manage technical debt responsibly. Some of the topics I will cover:

  • What is technical debt? What are the most common symptoms, and what are the consequences of letting it accumulate?
  • Ways to identify and find “hot spots”
  • How to prioritize and plan the payment of your debt
  • Tracking and visibility
  • How to avoid accumulating debt
  • Ideas for convincing managers and non-technical people of the importance of paying off the debt

Slicing and dicing your user stories – July 1st

Co-presented with Jenny, this 50-minute talk will discuss the benefits of using small user stories and present different ways to split requirements into stories. The talk will cover topics such as:

  • Characteristics of good user stories
  • How to break features into small pieces without losing sight of the whole
  • Different ideas for splitting requirements into vertical slices
  • Ways to help stakeholders track and understand how a feature will be delivered in smaller pieces
  • Planning the delivery to maximize feedback

Refactoring Katas – July 1st

In this 10-minute Lightning Talk, I will share an idea I have been using to practice refactoring. Using a different Kata format, I will explain the mechanics of the exercise and briefly show how it works in practice.

If you haven’t registered yet, you can still register online. And if you are near Rio or São Paulo the following week, I will be giving ThoughtWorks’ AWS Training, which you can register for here. See you in Fortaleza!


With my visit to Brazil to attend Agile Brazil 2011, ThoughtWorks is organizing two classes of our AWS (Amazon Web Services) training in Brazil. If you are near Rio de Janeiro on July 6th or near São Paulo on July 7th, don’t miss the chance to participate!

ThoughtWorks AWS Training is a one-day, hands-on training course for developers, systems administrators and all technologists who want to embark on a technical deep-dive of Amazon’s powerful AWS tools. Developed in partnership with Amazon, the class will lead you through the AWS infrastructure services, and show you how to architect and deploy your applications “in the cloud”.

Target Audience

  • Developers
  • Architects
  • System Administrators
  • Technologists in general

Prerequisites

  • Basic Unix command line experience required. No previous AWS experience necessary
  • AWS account configured and setup (registered directly with Amazon)

What you’ll learn

  • Understand how Amazon EC2, S3, EBS, VPC, CloudFront and other services work together to form a scalable web architecture
  • Gain hands-on experience provisioning machines, managing storage, caching content at network endpoints, and more
  • Walk through migrating systems from internal infrastructure to Amazon
  • Understand use cases, best practices, and tips to make cloud computing easier

Topics

  • Introduction to Amazon Web Services
  • S3 (Simple Storage Service)
  • CloudFront and Import/Export
  • EC2 (Elastic Compute Cloud), Availability Zones, and Regions
  • EBS (Elastic Block Storage)
  • Managing security
  • ELB (Elastic Load Balancing)
  • CloudWatch and AutoScaling
  • VPC (Virtual Private Cloud)
  • RDS (Relational Database Service)
  • Architecture / Migrating systems to AWS

More Information

For more information about prices and registration, please go to the training’s page on ThoughtWorks’ website. If you have any questions, get in touch or leave a comment here!


AWS Training in Brazil

Taking advantage of my visit to Brazil for Agile Brazil 2011, ThoughtWorks will run two classes of our AWS (Amazon Web Services) training in Brazil. If you are near Rio de Janeiro on July 6th or near São Paulo on July 7th, don’t miss the chance to participate!

ThoughtWorks’ Amazon Web Services (AWS) training is a one-day, hands-on workshop for developers, system administrators and anyone interested in technology who wants to dive into Amazon’s powerful AWS tools. Developed by ThoughtWorks in partnership with Amazon, the course will guide you through the AWS infrastructure services and show you how to architect and deploy your applications “in the cloud”.

Target Audience

  • Developers
  • Architects
  • System Administrators
  • Technologists in general

Prerequisites

  • Basic command-line (terminal) experience
  • An account configured with AWS access (register directly with Amazon)

Benefits

  • Understand how EC2, S3, EBS, VPC, CloudFront and other AWS services work together to create a scalable web architecture.
  • Gain hands-on experience provisioning servers, managing storage, caching content on a CDN, and more.
  • Discuss ways to migrate internal systems to Amazon’s infrastructure.
  • Understand use cases, best practices, and tips on how to make the most of cloud computing.

Topics

  • Introduction to Amazon Web Services
  • S3 (Simple Storage Service)
  • CloudFront and Import/Export
  • EC2 (Elastic Compute Cloud) and Regions
  • EBS (Elastic Block Storage)
  • Managing security
  • ELB (Elastic Load Balancing)
  • CloudWatch and AutoScaling
  • VPC (Virtual Private Cloud)
  • RDS (Relational Database Service)
  • Architecture / Migrating systems to AWS

More Information

For more information about prices and registration, please visit the training’s page on ThoughtWorks’ website and fill out the contact form. If you have any questions, get in touch or leave a comment here!


The second day of the conference was one of the busiest for me, and it started with a great keynote…

Keynote: Leveraging Diversity in Parallel: Perspective, Heuristics and Oracles – Scott Page

Scott Page, author of “Complex Adaptive Systems: an introduction to computational models of social life” and “The Difference: how the power of diversity creates better groups, firms, schools, and societies”, gave the introductory keynote on the topic of diversity. His talk was one of the most interesting of the conference to me, since he showed the “algebra of collaboration” or, in other words, what are the factors that make diversity work in a team. A few of the highlights for me were:

  • The importance of bringing different perspectives to the table: his example of playing the card game “Sum to 15” as Tic-Tac-Toe was really good.
  • When it comes to problem solving, hardness is not the same as complexity: in hard problems, the solution space has lots of peaks (it’s hard to find the global optimum); in complex problems, the solution space moves (even if an optimum is found now, the entire landscape might change due to external conditions).
  • I’ve read about the Wisdom of Crowds before and saw James Surowiecki’s keynote at Agile 2008, but Scott showed the formula behind it: crowd error = average error – diversity. In other words, the crowd’s error falls below the average individual error by exactly the amount of diversity.
  • Netflix Prize: an amazing story about a three-year-long contest put up by Netflix to beat on their own algorithm that predicts how much someone is going to enjoy a movie based on their preferences. What started as a competition to come up with the best algorithm, ended up in a race to collaboration, when the teams running for the prize started combining their approaches and finding that the diversity on the combined algorithm would yield a better result than any of the solutions in isolation. You can read more about this story here.
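Page’s formula reads as an exact algebraic identity if “error” is taken to mean squared error and “diversity” the variance of the individual predictions – a quick numerical sketch (all numbers made up):

```python
# Numerical check of the "crowd error = average error - diversity" identity,
# interpreting error as squared error and diversity as the variance of the
# individual predictions (Scott Page's diversity prediction theorem).
truth = 50.0                                  # made-up true value being predicted
predictions = [48.0, 55.0, 41.0, 60.0, 47.0]  # made-up individual guesses

n = len(predictions)
crowd = sum(predictions) / n              # the crowd's prediction (the mean)
crowd_error = (crowd - truth) ** 2        # squared error of the crowd
avg_error = sum((p - truth) ** 2 for p in predictions) / n  # avg individual error
diversity = sum((p - crowd) ** 2 for p in predictions) / n  # variance of guesses

# The identity holds exactly (up to floating-point rounding):
assert abs(crowd_error - (avg_error - diversity)) < 1e-9
```

With these numbers the crowd’s squared error (0.04) is far below the average individual error (43.8), and the gap is exactly the diversity (43.76).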

“Diversity becomes more valuable as the problem becomes harder and harder” — Scott Page

Thawing the “Design Winter” – Michael Feathers

The second talk I attended in the morning was about how there’s not much going on in the software design space lately. Michael Feathers started off noting that our industry is very generational: much of the innovation in development techniques in the past was followed by a wave of design books on the topic (Structured Programming, then Structured Analysis and Design; Object-Orientation, then OO Analysis and Design; and so on). Michael said he decided to stop when he saw the book “Aspect Oriented Programming with Use Cases” :-) But with Agile techniques, the last good reference about design was Martin Fowler’s “Is Design Dead?”, from the early 2000s. Some of Michael’s points on the topic were:

  • Design happens around constraints: every time there are decisions to be made about how to develop a given piece of software, there will be design happening.
  • Architecture is design: people should acknowledge that and recognise software architecture as design.
  • Look at design as the composition of “things”, not just classes.
  • Design is the trade-offs you weigh and the discussions you have when making a particular decision.

I found it an interesting take on the topic, and I agree with most of his claims about software design. I personally think that, as a TDD practitioner, I spend a lot of my time doing design (coding is a design activity, not just typing). I also agree that we need to think about design at a higher level when defining the system/application architecture: those are the decisions that will be hard to change. To me it’s about being able to combine those early, hard-to-reverse decisions with the emergent aspects that you will discover with time, as your system evolves. What you can’t expect is to get it right the first time, but you can be flexible enough to incorporate your learnings into something that can evolve.

The Five Habits of Successful Lean Development – Mary Poppendieck

This was the only session I ended up attending on the topic of Lean/Kanban at this conference. Mary summarised 5 habits of successful lean teams:

  • Purpose – “Why are you doing it?” In lean companies, workers have a high sense of purpose and understand how their jobs relate to the common goal/vision of the company.
  • Passion – “We care.” Individuals with intrinsic motivation will produce high-quality results.
  • Professionalism – “Build the right thing”: customers don’t want software; if they could get the same thing without software, they would go there. But also “build the thing right”: as simple as possible – and no simpler.
  • Pride – Expect local decisions, and push them to the front-line workers. They are effectively engaged in delivering superior customer outcomes.
  • Profit – “GM stays in business to make money. Toyota makes money to stay in business.”

As catchy as it sounds to summarise Lean into this “5 Ps” mnemonic, I found it to be only a partial view of the topic. Other strong aspects of Lean are the focus on people and on leaders taking an active role as teachers – things I would also expect to see in a lean organisation, but found missing from this “5 Ps” view.

Done Considered Harmful – Marcus Ahnve

This was a lightning talk by my friend Marcus, and I found it really interesting. It was provocative, by picking on a widely discussed concept in Scrum: the Definition of Done (and things like Done-Done, or Ready-Ready). Marcus’ point is that by calling it “Done”, it creates a fake state that usually translates to a hand-off to a different team, or a partial completion state that encourages local optimization. From Lean thinking we know that hand-offs are usually a big source of waste (along with the queues that are usually associated with it).

I was worried he would only present the problem in the talk, but was otherwise pleased with the approach he suggests to tackle this problem: instead of “Done”, call the state by what it represents (e.g. “Ready for deployment”, “In UAT”, “In Production”). Look at the last column in your wall from a hand-off perspective and treat it as inventory in the end-to-end value stream: you might be accumulating WIP and optimizing locally instead of improving the overall effectiveness of your value stream.

Test Automation at Enterprise Scale – Mark Streibeck

This was the technical session I most enjoyed in the conference. Mark talked about the story behind creating Google’s infrastructure to run tests in a massive scale, across teams and codebases, and to provide useful feedback and reports to their developers and teams. Some of the highlights for me were:

  • Scale of the system: they run 60M tests/day, including browser tests (cross-browser and running multiple configurations). More than 1,700 projects use it (they estimate that ~80% of all tests are run on this system). The system runs on ~5,000 cores.
  • Sharing their challenges: the project involved a lot of infrastructure work at the beginning to come up with a scalable architecture. They also had to design a UI for a system that solves a known problem in a different way, and had a lot of work integrating with the various heterogeneous build/test systems that were already in place across different teams.
  • Rollout process: the way they planned and executed the rollout across the company: starting with a few “early innovators”, organically rolling out to a few “early adopters”, and using feedback from these early teams to “fix it” when helping the “early majority” (they even set up a day when the entire software engineering group could fix things to make the system better).
  • Creativity when solving problems: one of the biggest problems in building and running tests was fixing dependency issues between projects. Instead of tackling the difficult task of solving those issues as a system feature, they focused on improving the way the issues are displayed in the UI. Just showing the dependencies and letting the teams see and fix them by themselves proved to be a much simpler solution to the problem.

Conference Banquet and Keynote

The second day finished with a keynote and banquet at the Student Society building. The keynote was presented by Bjørn Alterhaug and John Pål Inderberg, and the topic was “Improvisation: Between Panic and Boredom. Perspectives on teamwork, dialogue and presence in music and other contexts”. It was a very interesting talk about improvisation in music but, most of all, it showed me how practice can lead to mastery: the dynamics between the two musicians and how they communicated with each other through music were a fine example of highly skilled individuals achieving incredible results in a highly collaborative environment. A great talk for an Agile audience :-)

The evening finished with an amazing banquet and more music concerts, including a show by “Keep of Kalessin” – a Norwegian epic metal band.


Tuesday was the first day of the XP2010 Conference, held in beautiful Trondheim, Norway. I was on a morning flight from Oslo and only managed to arrive at the venue in the afternoon. It was nice to see a lot of my friends again – I managed to count 8 Brazilians so far, which is probably the highest number at an XP conference yet :-)

Functional Programming with XP – Amanda Laucher

The first talk I attended in the afternoon was from my friend Amanda Laucher. She presented on an interesting topic, discussing how Functional Programming languages and concepts can relate to today’s Agile practices and principles. Some of the lessons that I took away from her session were:

  • Functional Programming and Testing: she talked about different aspects of Functional Programming and how you can approach testing them. Things like: testing for laziness, testing monads, minimizing side effects, testing in strongly vs. weakly typed languages, and testing in statically vs. dynamically typed languages (with useful tools like QuickCheck). Even though I don’t have a lot of professional experience in FP languages, one particular topic resonated quite strongly with my experience: “you need to know FP before you TDD in FP”. At our Coding Dojo in São Paulo, we learned a similar lesson when trying to TDD algorithms: you need to have an idea of how to solve the problem before you drive your solution with TDD. If you don’t know how to solve your problem, TDD won’t give you the solution.
  • How to learn a new language: I really liked Amanda’s pragmatic approach to learning new languages. I particularly liked her reference to the Functional Koans, specifically designed to teach you the features of a new language by giving you failing tests that you need to understand and fix. This approach separates learning the language from learning the testing tool, or TDD. This is a problem I ran into when trying to learn a new language by first looking for the available testing frameworks, before understanding the language’s constructs and concepts.

System Metaphor revisited: The lost XP practice – Joshua Kerievsky

My second session of the day was Joshua Kerievsky’s take on the System Metaphor, an XP practice that he asked to be removed when reviewing the second edition of Kent Beck’s XP book, but that has since been applied with great success in Industrial Logic’s main product.

He first talked about metaphors in general: how they link a source domain to a target domain, how they’re always partial, and how people need to have experienced the metaphor in the source domain in order to apply it to the target. He also briefly compared the role of a System Metaphor with the Ubiquitous Language, as described by Eric Evans in Domain-Driven Design (something I came across a while ago too). Finally, Joshua described the benefits of a System Metaphor, summarising them as the three I’s:

  • Illumination: when the chosen metaphor is good, it will provide illumination into aspects of the system design, clarifying unfamiliar design via a familiar domain. It will help describe not only static structure, but also the runtime behaviour of the system. I particularly liked how he described two ways of understanding the Composite pattern using concepts from the music metaphor.
  • Inspiration: a good metaphor will also provide inspiration for new ideas. It provides a system of names from which these ideas may flourish. You might consider ideas from the source domain and evaluate how to apply them to the target. The only thing you need to be careful is that the metaphor is partial, so stretching it to fit your situation might not always be a good idea.
  • Integrity: when the source has a rich, familiar set of related parts, it will provide more integrity when applying it to your target. It will also have more integrity when the mapping between the source and the target is strong. If integrity is high, mixing other metaphors won’t weaken the overall structure.

What I found interesting on Industrial Logic’s use of the System Metaphor is that it is visible to the users of the system. If you’re taking one of their eLearning courses, you can see and interact with concepts from the music metaphor such as albums and playlists. The previous examples I’ve heard about uses of the System Metaphor were internal to the team and the business stakeholders, to improve communication. You can read more about how they discovered and applied the music metaphor here.

Conference reception – Nidaros Cathedral

The first day ended with an organ concert at the beautiful Nidaros Cathedral, followed by the conference reception with drinks and food at the Archbishops Residence and Palace Museum.


As I’ve already mentioned here, the Agile Brazil 2010 conference is accepting session proposals for our program. With more than 90 sessions already proposed, the program committee decided to extend the deadline for session submissions until next Sunday (March 7th, 2010), in response to requests and to allow more time for the community to interact and help us build the program. Some new functionality was also released:

  • Users are now able to add comments to the sessions. We want the community to provide feedback to help our authors to improve their sessions prior to the deadline;
  • You can now vote and help us choose the conference logo. We received many proposals, narrowed them down to 3, and are now asking the community to vote for the winner.

To vote or add comments, you don’t have to fill out the full author profile, so visit our website (if you haven’t created your account yet) and participate!


[Agile Brazil 2010] Session submission deadline extended and logo contest

As I’ve already posted here, Agile Brazil 2010 is accepting session proposals for our program. With more than 90 sessions proposed, the program committee decided to extend the submission deadline until next Sunday (March 7th, 2010), in response to requests and to give the community more time to interact and help us build the conference program. Some new features were also released:

  • Users can now add comments to existing proposals. We want the community to help by giving feedback to the authors and helping them improve their proposals before the final deadline;
  • You can now vote and help us choose the conference logo. We received several logo proposals and selected 3 for the final vote, in which the community will decide the winner.

To vote or add comments, you don’t need to fill out the full author profile, so visit our website (if you haven’t created an account yet) and participate!


I’m one of the organizers of the program committee for Agile Brazil 2010, and we’re very happy to announce that ThoughtWorks has agreed to sponsor the visit of Martin Fowler, our Chief Scientist, as one of our keynote speakers. Since this is Martin’s first visit to Brazil, I decided to ask him some questions that I thought would be of interest to the participants, and he has kindly agreed to participate in this mini-interview:

Q: What has been keeping you busy lately?

Martin Fowler: Overwhelmingly it’s my upcoming book on DSLs. I found writing books to be hard work, and it’s actually getting harder. By June I expect all of the content will be cast so my mind will be able to get away from it – which I’m very much looking forward to.

Q: What are your expectations about Agile Brazil 2010?

MF: I try not to have expectations about things; that way my mind can be open to the reality when I see it. I’ve been doing conferences frequently for two decades now, so it’s hard to get excited about them. I am excited about coming to Brazil. It will be my first time in South America, and both my wife and I have long wanted to come down.

Q: What are you going to talk about in your keynote?

MF: I have no idea. I often don’t decide on my keynote until very close to speaking – often doing extemporaneous talks <http://martinfowler.com/bliki/ExtemporarySpeaking.html>. Recently I’ve been doing keynote talks consisting of three or so talklets, some with slides, some without. But exactly how I’ll do it is something I may only decide the night before.

Q: How do you see the Brazilian software community influencing the future of Agile?

MF: It’s hard to say, as I’m not that familiar with the Brazilian software world. I’ve been very impressed by the Brazilian ThoughtWorkers I’ve met over the years, so I know there’s great potential here. I’m generally keen to see more varied cultures contribute to the software world, I think it’s an important part of us growing as a profession.

Agile Brazil 2010 is going to be an incredible conference, and we’re inviting speakers to submit session proposals (the deadline is approaching: 28/Feb!). Don’t miss the chance to see and talk to Martin Fowler, as he’s one of the few speakers I know of who can put together a first-class keynote the night before :-)

Don’t forget to follow @agilebrazil on Twitter for conference news, and hope to see you there!


[Agile Brazil 2010] Martin Fowler in Brazil for the first time!

As one of the organizers of the Agile Brazil 2010 program committee, we are happy to announce that ThoughtWorks has agreed to sponsor the visit of Martin Fowler, our Chief Scientist, as one of the event’s keynote speakers. Since this will be Martin’s first visit to Brazil, I decided to ask him some questions that I thought would interest the event’s participants, and he kindly agreed to take part in this mini-interview:

Q: What has been keeping you busy lately?

Martin Fowler: Overwhelmingly, it’s my upcoming book on DSLs. I find writing books to be hard work, and it’s actually getting harder. By June I expect all of the content to be settled, so it will finally be able to leave my head a little – something I’m really looking forward to.

Q: What are your expectations for Agile Brazil 2010?

MF: I try not to create expectations about these things; that way my mind can be open to the reality when I see it. I’ve been attending conferences frequently for two decades, so I find it hard to get excited about them. But I am excited about visiting Brazil. It will be my first time in South America, and both my wife and I have long been looking forward to this visit.

Q: What are you going to talk about in your keynote?

MF: I have no idea. I usually don’t decide the subject of my keynote until very close to the event – often giving extemporaneous (improvised) talks <http://martinfowler.com/bliki/ExtemporarySpeaking.html>. Recently I’ve been doing keynotes made up of around three small talks, some with slides, some without. But exactly how I’ll do it is something I may only decide the night before.

Q: How do you see the Brazilian software community influencing the future of Agile methods?

MF: It’s hard to say, as I’m not that familiar with the Brazilian software world. I’ve been very impressed by the Brazilian ThoughtWorkers I’ve met over the years, so I know there’s great potential here. In general I like seeing a greater variety of cultures contributing to the software world, as I believe it’s an important part of our growth as a profession.

Agile Brazil 2010 is going to be an incredible conference, and we are inviting speakers to submit session proposals (the deadline is approaching: Feb 28th!). Don’t miss the opportunity to see and meet Martin Fowler in person, as he’s one of the few speakers I know who can put together a first-class keynote the night before :-)

Don’t forget to follow @agilebrazil on Twitter for conference news, and I hope to see you there!



© 2007-2009 Danilo Sato | Powered by Wordpress
