Posts Tagged ‘sizing’

We’ve all been there. Someone describes a problem that they want solved (and possibly suggests how they think you should solve it) and in the very next breath asks: “So, how long will it take?”

Invariably, we get talked into providing some kind of gut-feel indication (days/weeks/months/quarters) based on little more than (perhaps) experience. But how often in software do you actually do a rinse-and-repeat of something you’ve done before? In my 10-plus years in IT: never. Never ever.

Unfortunately, we don’t yet work in a world where most people are happy with “we’ll be done when we’re done” so a vague timeline is always needed: if only for coordinating training or the launch email and party. So where does one start?

First, there are some principles/facts-of-life that are important to bear in mind:
1. The cone of uncertainty
2. Some practical examples of the developer cone of uncertainty
3. A good analogy of why our estimates always suck, no matter what data we’ve used

In the beginning….

For me, the first time you can possibly try to get a feel for how long your horizon might be is after you’ve shared the problem with the team and they have had a chance to bandy around some options for solutions. At this point, assuming your team has worked together before, you can try some planning poker at an Epic level. Pick a “big thing” that the team recently worked on, allocate it a number (3 usually works) and then have the group size the new big thing relative to the one they have already completed. I prefer to use an arbitrary number (like 3) rather than the actual story points delivered for this exercise, because otherwise the team might get tied up debating the actual points rather than the gut-feel relative size.

Now, if you have a known velocity and also know the points delivered for the big thing we already built, you should be able to calculate an approximate size for the new piece and use your velocity to find a date range with variance (don’t forget about that cone!). For example:
– If we agreed our “Bake a cake” epic was a 3
– And then sized the “Bake a wedding cake” epic as a 5
– And “Bake a cake” was about 150 points to deliver
– Then “Bake a wedding cake” is probably about 5/3 × 150 = 250 points to deliver
– Which means you’re probably in for 250/velocity sprints (with 50% variance)

At the very least, this should help you pinpoint which year, and perhaps even which quarter, this thing is likely to be delivered in. (Don’t make any promises though – remember the cone!)
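
To make the arithmetic concrete, here is a minimal sketch in Python. The velocity figure (25 points per sprint) is a made-up assumption for illustration; the 50% variance band is the one mentioned above.

```python
# Rough epic-level forecast from relative sizing, as described above.
# The velocity (25 points per sprint) is a made-up figure for illustration.

def forecast(reference_points, reference_size, new_size, velocity, variance=0.5):
    """Scale a completed epic's points by relative size, then divide by velocity."""
    estimated_points = new_size / reference_size * reference_points
    sprints = estimated_points / velocity
    return estimated_points, sprints * (1 - variance), sprints * (1 + variance)

points, low, high = forecast(
    reference_points=150,  # "Bake a cake" took about 150 points...
    reference_size=3,      # ...and was sized a 3
    new_size=5,            # "Bake a wedding cake" was sized a 5
    velocity=25,           # hypothetical team velocity (points per sprint)
)
print(f"~{points:.0f} points, roughly {low:.0f}-{high:.0f} sprints")
# ~250 points, roughly 5-15 sprints
```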

When we know more….

Now, if you’re doing things properly, your team will groom the big epic and slowly start agreeing on small to medium stories and perhaps even slices. Ideally you’ll have a story map. At some point, you should have a list of stories (or themes or whatever) that more or less cover the full solution the team intends to build. At this point, it is possible to do some Affinity Estimation, which will quickly give you another estimate of the total size (in points) that you can sanity-check, with the help of velocity, against your previous guesstimate. If you’re working with a new team and don’t have a velocity yet, this is also the time when you can try to ‘guess’ your velocity by placing a couple of stories into two-week buckets based on whether the team feels they can finish them or not. This article explains this process in a bit more detail.
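
As a rough sketch of that velocity-guessing step (the story sizes, bucket contents and backlog total below are all invented for illustration):

```python
# 'Guessing' a velocity for a new team: the team sorts a sample of sized
# stories into two-week buckets they believe they could finish, and the
# average bucket total becomes the guessed velocity. All numbers are made up.

sprint_buckets = [
    [5, 3, 8],      # stories the team thinks fit in a first two-week sprint
    [8, 5, 2, 3],   # a second sprint's worth
    [13, 5],        # a third
]

guessed_velocity = sum(sum(bucket) for bucket in sprint_buckets) / len(sprint_buckets)
print(f"Guessed velocity: ~{guessed_velocity:.0f} points per sprint")

# Sanity check against the affinity-estimated backlog total (also made up)
affinity_total = 230
print(f"Roughly {affinity_total / guessed_velocity:.0f} sprints of work remaining")
```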

Keep checking yourself…

You will probably find that when you do Affinity Estimation you still have some biggish stories in the list, which is OK. Over time, as these break down, it’s probably a good idea to repeat the exercise (unless you’re so great at grooming that your entire backlog has already been sized using Planning Poker). Until you have broken everything down into detailed, ready stories, Affinity Estimation is the quickest way to get a reasonably useful total for the remaining backlog. Over time, if you maintain your burn-up, you’ll be able to track your progress and re-adjust plans as you go along.
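
A minimal sketch of the burn-up bookkeeping, assuming you record the (re-estimated) total scope and the points completed after each sprint; every figure below is illustrative:

```python
# Burn-up tracking: total scope (which can grow as stories are split and
# re-sized) versus cumulative completed points, with a simple re-forecast.
# All numbers are invented for illustration.

scope_after_sprint   = [230, 230, 250, 250, 260]  # backlog total after each re-estimate
completed_per_sprint = [18, 22, 20, 25, 24]       # points delivered each sprint

done = sum(completed_per_sprint)
remaining = scope_after_sprint[-1] - done
recent_velocity = sum(completed_per_sprint[-3:]) / 3

print(f"{done} points done, {remaining} remaining, "
      f"~{remaining / recent_velocity:.0f} sprints to go at recent velocity")
```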

Did you find this post useful? Have you used these techniques before? What other techniques have you used to try to build a view of your release roadmap?


Stories

This photo is the list of actions/changes/learnings one of my teams came up with in their most recent retrospective. This did not come from someone who went on training or read an article. It also didn’t come from a new Agile coach or Scrum Master. It came from them missing their sprint commitment and goal. This team only managed (on paper) to complete 8 out of 18 points; but they all knew they had delivered and learned a lot more than that measure reflected. Here are some things that they decided to do going forward:

1. If the team cannot reach consensus about the size of a story, then split it into two stories and size the smaller stories

One of the main reasons the team had such a poor burn-down is that they took in one quite large story which did not quite meet the INVEST criteria. For one, it was a ‘common component’ that was to be used in most of the later stories (so not Independent). It also was not Small enough – and turned out to be even bigger than the team had thought. During sizing there had been some debate about its size, and eventually a reluctant consensus settled on the smaller estimate. It turns out the less optimistic team members were right: this was one of the stories that was not done at the end of the sprint.

2. Keep Planning II – and use it to verify the sprint commitment

This is a team that often decides to skip Planning II (I don’t like it, but ultimately it is the team’s decision and so far we’ve muddled along without it). For this sprint, they decided that they did need a session to unpack the stories and how they would be implemented. Everyone agreed that without Planning II we would have been even worse off. They also realised that, at the end of Planning II, there were already some red flags that the big story was bigger than everyone had thought, and they could have flagged at that point that the commitment for the sprint was optimistic. The team agreed that, in future, if going into the detail during Planning II revealed some mistaken assumptions, then they would review the sprint commitment with the Product Owner before kicking off the sprint.

3. Feel free to review story-splits in-sprint

Early in the sprint, the team were already aware that the big story was very big and could probably be split into smaller components. Their assumption was that this wasn’t possible once the sprint had started. For me, revisiting a story split mid-sprint, or once you start the work, is not a bad thing: sometimes you don’t know everything up-front. It also, in the case where a story is bigger than expected, gives the Product Owner some more wiggle room (the Negotiable part of INVEST) to drop or keep parts of the story in order to meet the sprint goal. Of course, where things have gone really wrong, sometimes the sprint goal cannot be rescued and the sprint has to be cancelled.

4. Raise issues as they happen

This pretty much summarises most of the above points. One of the agile values is responding to change over following a plan, so when change happens, make it visible and decide how to adjust the plan at that point. There’s no point in battling on as planned when you know the planned path is doomed to fail.

Some references:

Velocity: a dangerous metric

Posted: July 10, 2014 in Agile, Team

This is a useful technique you may want to try next time you need to quickly prioritise some actions with a large group of stakeholders who have varied levels of experience with agile.

In our case, we were trying to find a framework to help decide what actions to take to improve one of our testing environments. The stakeholders ranged from the development teams who used the environment to test their work, to the IT Ops team who used it to test the deployment process, to the head of the division who obviously had an interest in ensuring things got to Production as efficiently and correctly as possible. After a process of re-affirming the goals of the environment in question and highlighting the issues that were currently being experienced, we found ourselves with a list of thirty problems that needed prioritisation.

In order to do so, I first had everyone plot the problems along a horizontal axis based on the effort/cost/difficulty they associated with solving each one. As not everyone knew about sizing (and, sometimes, even when people are familiar with sizing, it can confuse things), I used a scale of animals. I made the scale explicit with an image and placed a picture of each animal along the horizontal axis. The group quickly grasped the concept and set about categorising problem solutions from cat to blue whale.

Animal sizes scale

Once they had plotted all the problems along an investment (time and/or money and/or skills) scale, it was time to categorise the problems according to impact. For this I added a vertical axis with three sections: showstopper, major and inconvenient. The important bit was that I provided a clear and simple definition for each of these to make sure everyone was speaking the same language.

We used stickies plotted on big sheets of paper on a table so that people could move around easily. At the end of about 15 minutes, a group of about 16 people from different teams and backgrounds had categorised thirty problems by size and impact to form a useful framework for prioritising actions. Documentation was as easy as taking a photo with my cell phone.
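
For what it’s worth, the resulting grid is easy to capture digitally as well. Here is a rough sketch; the intermediate animals and the example problems are invented, since the original scale only ran ‘from cat to blue whale’:

```python
# Effort-vs-impact grid as a simple data structure. The animals in between
# cat and blue whale, and the problems themselves, are invented examples.

animal_effort = {"cat": 1, "dog": 2, "sheep": 3, "cow": 4, "elephant": 5, "blue whale": 6}
impact_rank   = {"showstopper": 0, "major": 1, "inconvenient": 2}

problems = [
    ("Deployments overwrite test data",  "showstopper",  "dog"),
    ("Environment refresh takes a week", "major",        "elephant"),
    ("No monitoring on the test queue",  "major",        "cat"),
    ("Test logins expire too quickly",   "inconvenient", "cat"),
]

# Tackle high-impact, low-effort problems first
for name, impact, animal in sorted(problems,
                                   key=lambda p: (impact_rank[p[1]], animal_effort[p[2]])):
    print(f"{impact:12}  {animal:10}  {name}")
```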

Grid

UPDATE 16/11/2015: Animal images resource (for printing).

StoryMap2

(Part 2 of 2)

As we were unable to finish the story-mapping process in a single session, we had a second session the following day. As prep and to save time, I asked the team to bring their top six user stories written on separate stickies. Following the approach outlined in the video, we grouped and prioritised their top stories and then I asked the team to pick their overall top five. At this point I also introduced the concept of tarred vs cobblestoned vs dirt roads (stories):

  • A dirt road is minimum implementation with manual workarounds; something that will be in place for a limited time span (throwaway)
  • A cobblestone road is a bare minimum implementation with foundations for a longer term solution
  • A tarred road is a complete implementation: a ‘done done’ story

Cobblestone and dirt road stories were written on orange stickies, while tarred road stories were yellow. Even with this mechanism, the team were not able to get their first slice down to fewer than seven tarred stories – which was still great going 🙂

With our first slice agreed and our pink risks/issues added to the appropriate place on the map, we had just enough time to do a bucket estimation exercise. For this, I laid a set of planning poker cards on the floor to create buckets; picked a story as a baseline (I chose one we’d already implemented for more context); and then everyone had to add stories to the buckets according to where they thought they should go relative to the baseline. This is a mostly silent exercise. For the first part, each person is given a random pile of stories to place into buckets. For the second part, everyone reviews and (silently) moves stickies to different buckets if they disagree with where they have been placed. If a sticky moves a lot, it is parked. When the stickies stop moving and stabilise in their buckets, the parked stories are discussed to understand why there are differences of opinion, and the team should then reach consensus on which bucket each parked story belongs to. We didn’t have to do this last part as everyone was more or less on the same page about where things belonged.
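
Tallying the result afterwards is trivial. As a quick sketch (the card values follow a typical planning poker deck, and the story counts per bucket are made up):

```python
# Totalling a bucket-estimation result: each bucket is a planning poker card
# value, mapped to how many stories ended up in it. Counts are invented.

buckets = {1: 3, 2: 5, 3: 7, 5: 6, 8: 4, 13: 2}

total_points  = sum(card * count for card, count in buckets.items())
total_stories = sum(buckets.values())
print(f"Backlog total: {total_points} points across {total_stories} stories")
```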

As an aside, after we’d created our map, I reviewed our completed work and added those stories into it as well. We will be tracking progress on our map as we go. It is recommended practice to write duplicate stickies for your taskboard and not leave gaps in your map. I can see this also making sense where a ‘map story’ may need to be broken down into smaller stories for sprint purposes (if your team is using Scrum). For us, a map story that is in progress gets a dot (sticker) and, once it’s done, we tick the dot. It’s visual and easy to follow.

After a total of three-and-a-half hours we had the following:

  1. A better understanding of the short-term and long-term goals of the project.
  2. A ‘first slice’ to test our architecture, resolve some technical challenges, and deliver some business value in production.
  3. A first pass idea of the size of the full scope of work.
  4. A picture view of all the above with a visual way of showing progress against the overall scope.

Things that I felt worked well in the second session:

  • As we don’t have a team room (we were using a shared meeting room), splitting the session over two days was a challenge: I needed a quick way to reproduce the map for the second session. In the end, I took photos of the various sections and printed them out in colour, and that was enough for the team to refer back to (helped by the fact that they had all rewritten their top six stories on stickies for the second session anyway).
  • The road analogy helped create a common understanding of the level of work we were targeting and using a different colour to represent work that was not ‘tarred’ made it clear where we would have to revisit items to fully complete the story.
  • Giving people a very strict limit initially. I doubt we would have ended up with a slice of seven stories had I not been so strict about trying to get the team to only pick five.
  • The sizing exercise got people off their feet and moving around (automatically creates more energy).
  • The sizing exercise took all of 15 minutes, so high value for small time investment.

Things that didn’t work well:

  • Some of the team members were unable to attend the second session. We decided to go ahead anyway, but this did impact the perceived value and accuracy of the sizes.
  • The team haven’t taken ownership of the story map (still perceived as ‘my thing’). Need to find a way to change that perception!

Links

  • Part 1
  • More detail on bucket estimation
  • This article describes two techniques: Affinity Estimation and Bucket Estimation. I’ve used both before, and so far I’ve found bucket estimation quicker, so better for large backlogs (the value-to-time ratio for affinity estimation doesn’t seem as good).
  • Affinity Estimation How-to