Posts Tagged ‘facilitation’

We’ve all been there. Someone describes a problem that they want solved (and possibly suggests how they think you should solve it) and in the very next breath asks: “So, how long will it take?”

Invariably, we get talked into providing some kind of gut feel indication (days/weeks/months/quarters) based on nothing much besides (perhaps) experience. But how often in software do you actually do a rinse-and-repeat of something you’ve done before? In my 10 plus years in IT, never. Never ever.

Unfortunately, we don’t yet work in a world where most people are happy with “we’ll be done when we’re done” so a vague timeline is always needed: if only for coordinating training or the launch email and party. So where does one start?

First, there are some principles/facts-of-life that are important to bear in mind:
1. The cone of uncertainty
2. Some practical examples of the developer cone of uncertainty
3. A good analogy of why our estimates always suck, no matter what data we’ve used

In the beginning….

For me, the first time you can possibly try to get a feel for how long your horizon might be is after you’ve shared the problem with the team and they have had a chance to bandy around some options for solutions. At this point, assuming your team has worked together before, you can try some planning poker at an Epic level. Pick a “big thing” that the team recently worked on, allocate it a number (3 usually works), and then have the group size the new “big thing” relative to the one they have already completed. I prefer to use a completely arbitrary number (like 3) rather than the actual story points delivered for this exercise, because otherwise the team might get tied up debating the actual points and not the gut-feel relative size.

Now, if you have a known velocity and also know the points delivered for the big thing we already built, you should be able to calculate an approximate size for the new piece and use your velocity to find a date range with variance (don’t forget about that cone!). For example:
– If we agreed our “Bake a cake” epic was a 3
– And then sized the “Bake a wedding cake” epic as a 5
– And “Bake a cake” was about 150 points to deliver
– Then “Bake a wedding cake” is probably about 5/3 × 150 = 250 points to deliver
– Which means you’re probably in for 250/velocity sprints (with 50% variance – see the sketch below)
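
To make the arithmetic concrete, here’s a tiny Python sketch of that projection. It’s only an illustration: the velocity of 25 points per sprint and the 50% variance band are assumed numbers, not anything from a real team.

```python
# Rough epic-level projection: scale a previously delivered epic's points by
# relative size, then convert to sprints with a wide variance band.

def project_epic(reference_points, reference_size, new_size, velocity, variance=0.5):
    estimated_points = new_size / reference_size * reference_points
    sprints = estimated_points / velocity
    # Apply a cone-of-uncertainty style band around the central estimate.
    return estimated_points, sprints * (1 - variance), sprints * (1 + variance)

points, optimistic, pessimistic = project_epic(
    reference_points=150,  # "Bake a cake" took about 150 points...
    reference_size=3,      # ...and was sized a 3
    new_size=5,            # "Bake a wedding cake" was sized a 5
    velocity=25,           # assumed velocity, purely for illustration
)
print(f"~{points:.0f} points, roughly {optimistic:.0f}-{pessimistic:.0f} sprints")
```

With those numbers you end up at roughly 250 points and a 5–15 sprint range: wide, but at least honest about the uncertainty.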

At the very least, this should help you pin-point which year and perhaps even quarter this thing is likely to be delivered. (Don’t make any promises though – remember the cone!)

When we know more….

Now, if you’re doing things properly, your team will groom the big epic and slowly start agreeing on small to medium stories and perhaps even slices. Ideally you’ll have a story map. At some point, you should have a list of stories (or themes or whatever) that more-or-less covers the full solution that the team intends to build. At this point, it is possible to do some Affinity Estimation, which will give you another estimate of the total size (in points) relatively quickly that you can sanity-check against your previous guesstimate with the help of velocity. If you’re working with a new team and haven’t got a velocity yet, this is also the time when you can try to ‘guess’ your velocity by placing a couple of stories into two-week buckets based on whether the team feels they can finish them or not. This article explains the process in a bit more detail.
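
The article linked above explains the proper process; purely as an illustration of one way to turn those buckets into a number, here’s a small sketch. Every story name and point value below is invented – the only idea being shown is that the average of what the team believes fits into each two-week bucket becomes a starting velocity.

```python
# "Guess" a velocity for a new team: deal already-sized stories into two-week
# buckets, stopping each bucket when the team says "that's all we'd finish".
# Every story name and point value below is made up.

sprint_buckets = [
    ["login form (5)", "password reset (3)", "audit log (8)"],
    ["profile page (5)", "avatar upload (3)", "email notify (5)"],
    ["search (8)", "pagination (3)", "export to CSV (5)"],
]

def bucket_points(bucket):
    # Pull the point value out of the "(n)" suffix on each card.
    return sum(int(card.split("(")[1].rstrip(")")) for card in bucket)

per_bucket = [bucket_points(b) for b in sprint_buckets]
rough_velocity = sum(per_bucket) / len(per_bucket)
print(per_bucket, f"rough velocity: about {rough_velocity:.0f} points per sprint")
```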

Keep checking yourself…

You will probably find that when you do Affinity Estimation you still have some biggish stories in the list, which is OK. Over time, as these break down, it’s probably a good idea to repeat the exercise (unless you’re so great at grooming that your entire backlog has already been sized using Planning Poker). Until you have sized everything in detail to ready stories, Affinity Estimation is the quickest way to get a reasonably useful total for the remaining backlog. Over time, if you maintain your burn-up, you’ll be able to track your progress and re-adjust plans as you go along.
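
If your burn-up lives in a spreadsheet or a script, the re-projection itself is simple. Here’s a minimal sketch with made-up numbers, using a rolling average of the last three sprints as the velocity:

```python
# Re-project completion from a burn-up: remaining scope divided by recent velocity.
# All numbers are invented for illustration.

total_backlog = 250                      # latest Affinity-Estimated scope (points)
completed_per_sprint = [18, 22, 25, 24]  # burn-up history so far

done = sum(completed_per_sprint)
rolling_velocity = sum(completed_per_sprint[-3:]) / 3   # average of last three sprints
sprints_remaining = (total_backlog - done) / rolling_velocity
print(f"{done} of {total_backlog} points done; "
      f"about {sprints_remaining:.1f} sprints remaining at the current velocity")
```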

Did you find this post useful? Have you used these techniques before? What other techniques have you used to try to build a view of your release roadmap?

Hindsight is 20/20

Posted: December 2, 2015 in Team

There has been a lot of change in my space in the past year. A lot of it hasn’t been managed very well (which creates a lot of ‘people noise’). I’ve been exposed to a number of change management models over the years, including ADKAR and this cool exercise. I quite liked this tool about Levers of Influence, which one of our People Operations (a.k.a. HR) team members shared with us. I already knew we’d done very badly when it came to change management, but when I reviewed the two biggest changes (moving to Feature Teams and reducing our release cycle from a synchronised release every nine-ish weeks to a release window every month), the tool helped highlight examples of what we had done badly, which in turn showed us where we needed to focus our recovery efforts.

Levers of Influence

This is my retrospective on the change to monthly releases: what we did, what we didn’t do, and what we (probably) should have done.

1. A Compelling Story

The idea of moving to monthly release cycles had been brewing in the senior heads for a while; however, we were going through a major upgrade of one of our core systems (in a very waterfall fashion), which meant that anything unrelated to it was not really discussed (to avoid distractions). Our upgrade was remarkably smooth and successful (considering its size and time span) and about two weeks after we went live with it, senior management announced that the release cycle was changing. Not many people had seen this coming and the announcement was all of two sentences (mixed in with the other left-field announcement of the move to vertical feature teams rather than systems-based domain teams). In hindsight, most people didn’t know what this shorter release cycle meant (or what was being asked of us). Nor was the reason for the change well communicated, so, to most people, it didn’t really make sense.

In hindsight:

  1. The why for the change should have been better communicated: we want to be able to respond more quickly and move to a space where we can release when we’re ready.
  2. The impact should have been better understood. One of the groups that has felt the pain most is our testers. With a lack of automation throughout our systems and the business IP sitting with a few select people, the weight of the testing has fallen onto a few poor souls. On top of this, our testing environments are horrible (unstable, not Production-like, and a pain to deploy to), so merely getting a testing environment into a state where you can test requires a lot of effort across the board.
  3. We should have explored the mechanics/reasons in more detail with smaller groups. For example, it was about 3-4 months before people began to grasp that just because one COULD release monthly, it didn’t mean that one had to. The release window was just that: a window to release stuff that was ready for Production. If you had to skip a release window because you weren’t ready, then that was OK. (A reminder here that our monthly release ‘trains’ are an interim/transition phase – we ultimately want to be able to release as often as we like.)

2. Reinforcement mechanisms

One of the motivating factors for senior management to shorten the release cycle from nine weeks to one month was that our structures, processes and systems for releasing were monolithic and, although the intention had always been to improve the process, it just wasn’t happening. In a way, they created the pain knowing full well that we didn’t have the infrastructure in place to support it, because they wanted to force teams to find ways to deal with that pain. So, in this case, there weren’t any reinforcement mechanisms at all. The closest thing we had was a team dedicated to automating deployments across the board, which had been working together for about six weeks before the announcement.

In hindsight:

  1. There should have been greater acknowledgement of the fact that we didn’t have the support structures, etc. in place to support the change.
  2. We shouldn’t have done the release cycle and Feature Team change at the same time (as there weren’t structures, processes and systems in place to support that change either).
  3. We should have been more explicit about the support that would be provided to help people align structures, systems and processes to the change.

3. Skills required for change

Shortening the release cycle certainly created opportunities for people to change their behaviour (whether in a good or bad way). Unfortunately most of our teams didn’t have the skill sets to cope with the changes: we were lacking automation skills, for both testing and deployment. Throwing in the Feature Team change with its related ‘team member shuffle’ also meant that some teams were left without the necessary domain knowledge too.

In hindsight:

  1. We should have understood better what skills each team would need to benefit from the opportunities in the change.
  2. We should have understood the gaps.
  3. We should have identified and communicated how we would support teams in their new behaviours and in gaining new skill sets.

4. Role modeling

Most people heard the announcement and then went back to work and continued the way they had before. When the change became tangible, they tried to find ways to keep doing what they had done before. (A common early response to “why are we doing this?” was “that’s the way we’ve always done it”.) The leaders who had decided to enforce the change were not involved operationally, so could not act as role models. Considering the lack of skills and reinforcement mechanisms, role models were few and far between.

In hindsight:

  1. If we’d covered the other levers better, we would have had people better positioned to act as role models
  2. Perhaps we should have considered finding external coaches with experience in this kind of change to help teams role model
  3. Another option may have been to have had a pilot team initially that could later act as a role model for the rest of the teams

Have you ever had a successful change? If so, which levers did you manage well? Which ones did you miss?

I’m currently a member of a coaching circle and our topic last week was “Non co-located or distributed teams”. Then this week someone from our marketing department approached me. We’ve decided to bring our public website in-house and they need a new team, but history has shown us that finding new developers of the calibre we want is no easy feat. One option we haven’t explored to date is developers who work remotely (which would mean we could look farther afield) and he wanted to know what my opinions were. I offered to compile some information for him, including some of the conclusions from my coaching circle discussion, and then figured it was worth adding it to my blog too.

I have worked in distributed teams before – from formal project teams with developers on-site in Cape Town working with off-site analysts, testers and project managers in London; to informal ‘leadership’ teams spread across three locations and two countries; to performance-managing direct reports based in an office 2000 km away. The common point of success or failure every time? The people on the team. The next biggest point of frustration every time? The type and quality of communication tools available.

People

There is a lot of literature out there around the types of behaviours you want in agile team members and, obviously, each company also has its own set of values and culture it looks for in new hires. The list below covers the traits that stood out for me as making someone great to collaborate with when working in a distributed fashion. All too often, people without these traits made distributed collaboration quite difficult and sometimes impossible!

  • Emotional maturity – able to communicate without getting emotional about things; can handle openness.
  • Strong team norms – when the team agrees to try something or work in a particular way, this person will take accountability for whatever they have to do even if it wasn’t their preferred way of doing things
  • Effective communication – verbal and written. Can write and read well. Can get their point across and summarise well.
  • Takes responsibility for preparation – if things are distributed beforehand for people to read or prepare, then this work is done. Doesn’t walk into meetings “blind”.
  • Knows how to use tools and knows about conference call etiquette (this could be covered as training for the whole team).
  • Is proactive in staying ‘in the loop’ – will ask questions if they see something that they don’t know about (defaults to information gathering rather than assuming if it was important enough someone would let them know).
  • A T-shaped person – or at least someone who is prepared to do tasks that are not ‘part of their job description’

Other Success Factors

This list was compiled by my coaching circle:

  1. Face-to-face is necessary for initial trust building and to create coherence. Where possible, having everyone co-located for a period at the start of the project makes a huge difference. Also gets people used to the way others work without the ‘noise’ of not being co-located.
  2. Would be helpful to find ways to have online virtual interaction (outside of work) e.g. online games/ice breakers/other experiences.
  3. Face-to-face tools are a must.
  4. After an extended period, it helps to have team members ‘rotate’ as traveling ambassadors.
  5. Need to understand cultural differences. Probably worth having a facilitated session to highlight/understand these.
  6. If you have teams working in different areas, then have an on-site SM/coach per team.
  7. Keep teams small.
  8. Try to pair/collaborate with an offsite person as often as possible.
  9. If you have teams in different locations, have a dedicated facilitator in each location for meetings (like planning, review).

Coherence / Culture


What has your experience been with working in or with distributed teams? What did and did not work for you?

This year I managed to float an idea I thought would not see the light of day for a while yet: I convinced my domain owner to let us try a team self-selection exercise. In the end, he and the rest of the management team didn’t need much convincing, and we were given the green light to start the process.

We relied heavily on the very comprehensive self-selection toolkit from NOMAD8 and pretty much followed it as suggested, with a couple of customisations to suit our context. The first piece of work was agreeing on our three missions (for three teams), which for various reasons led to some debate and much to-ing and fro-ing. The conversations to clarify the three missions proved really valuable and, I think, led to better decisions than would otherwise have been made if we hadn’t gone through the process.

Once the missions were agreed, we sat down and worked out what kinds of skills (hats) would be required to deliver each mission end-to-end. We decided to extend this list to include capabilities that we did not necessarily have within the team itself (sadly, our organisation is still organised along components/technologies) to create awareness and hopefully improve cross-team communication. Once we had those, I compiled one-slide summaries for each mission with the help of each Product Owner, and emailed them together with a modified FAQ to everyone who would be involved.

Before sending off the self-selection pack, we also tried to ensure every team member had had some face time to hear about the new process we were trying. Most people heard about it during team retrospectives (when the question around ‘what happens next’ came up), but I did have to have some one-on-ones with certain team members who had missed out on those sessions for whatever reason. Feedback was that people appreciated this: some people don’t enjoy finding out about stuff for the first time via email!

Once the missions were defined and the packs sent out, it didn’t feel like a lot of preparation was required for the session itself. On the day I spent some time setting up the room, but otherwise, from a facilitation point of view, everything was pretty much covered in the pack. People expressed various levels of nervousness and excitement leading up to the event, but thankfully everyone was happy to take things as they came and see what happened. I did do a detailed walk-through of the plan for the day with my Product Owners to ensure that they were suitably prepped on their role and involvement.

The session itself went really well. There were about 25 attendees so I’d booked the session for two hours. In the end, we only used just over an hour. Levels of engagement were high. We did about four rounds before stopping and there was a lot of good conversation and switching based on needs highlighted in each squad review. Making the Product Owners’ green/red votes on the viability of each team very visible also helped highlight a problem in rounds three and four: no matter how the teams shuffled, we didn’t have enough developers to create three effective teams. This was very visible to all and has subsequently led to a reconsideration of priorities in line with the capacity that we do have.

Feedback on the session in general has been good. The Product Owners were very happy with the outcomes of the session. Various team members said it was the most useful and best facilitated session that we’d ever run. Some did express the view that people generally ended up where their managers wanted them anyway, so felt that we may as well have just decided on the teams in advance, but I’d like to believe that, even if people had been asked beforehand to select a particular mission, they ultimately still made their final choice with their feet on the day.

Some take-aways:
1. The pack recommends sending information out at least a week in advance. I think this is very important, if only because it forces the Product Owners to commit to what their mission actually is.
2. Having conversations either one-on-one or with groups of people before sending out formal communications is important. It helps them get used to the idea and allows them to ask questions. Don’t be afraid of not knowing all the answers: quite often the answer is that it depends on what team forms on the day.
3. The pack also says this, but I’ll reinforce it: because of all the unknowns around how the teams will form, don’t expect to have new teams up and running from the very next day. One of our teams kicked off the following working day (with some team members handing over other work over an agreed period), but another of our teams will only get going in a couple of weeks.
4. Don’t underestimate the power of physical movement. Ensure your stations are far enough apart that people have to physically move to join a station (and can’t hover somewhere in between). Having the Product Owners stand in their own ‘corner’ and then add a green or red sticky to the squad board after each round was also helpful (in our case all the Product Owners had a say in approving the team composition).
5. My biggest challenge was getting people to quieten down when we needed to end a time-box! Ensure you have some mechanism agreed for this up-front. The usual agile workshop “hand up and mouth shut” works well (as long as people know what it means). Someone in the session suggested playing music in the background and stopping it when we needed to evaluate the squads: a bit like musical chairs. I thought that sounded quite fun 🙂

Have you ever tried to run a self-selection exercise before? What was your experience?

Sample Squad

Constraints to consider

Focus On / Focus Off

Recently another team asked me to facilitate a retrospective for them. They had recently read an article about team dysfunctions and appreciation exercises and wanted to take a closer look at the health of their team, particularly their trust levels. Facilitating a team you don’t know or regularly observe is usually a challenge, but it was one I was up for, and thankfully the team member who arranged the session had a good idea of the feel/discussions he was hoping for as an outcome.

I started the session with the Focus On / Focus Off exercise. As English was not everyone’s first language, I prepared cards with synonyms for each of the word pairs. I had the team discuss together what they understood the difference within each word pair to be, and then gave them the synonyms to arrange under the correct ‘word’ in the pair. After we had been through the exercise for each word pair, I asked the team whether they were willing to commit to continuing the session with the correct focus (and, thankfully, they said yes!).

From there we moved to a simple Sad/Mad/Glad exercise with a strong focus on the team and its interactions. I’d found out before that the team members were more introverted, so I opted for silent brainstorming separately before each team member shared their feedback. The first two parts of the session actually went by more quickly than I’d planned, so I took the opportunity to then allow the group to move their Sads and Mads into physical circles of control, influence and soup. While they were doing this at the whiteboard, they automatically moved into a space of discussing what actions they wanted to take for the issues within their control. It was nice to see them being so pro-active about taking control.

We finished the one-hour session with a Temperature Reading. As this session was more about sharing and uncovering information than about actions, I wanted to leave the team with some items to possibly work on or think about more deeply in their next sprints. The temperature reading also includes an appreciations section, which I have found really energises teams and helps end the session on a positive and optimistic note.

Feedback is that the guys really enjoyed the session and would be keen to have me back to facilitate another. Hopefully this one wasn’t a fluke! 🙂 Have you ever had to do a once-off session with a team you don’t work with? What tools or activities did you find generated the most valuable outcomes for the session?


Story Oscars

Posted: September 30, 2014 in Scrum

We recently used the Story Oscars game from Retr-O-Mat in a retrospective to gather data. It’s a fairly simple tool but the team seemed to really enjoy it! We even applauded the eventual winners. The third category (selected by my team) was “Best Horror”. We also discussed the stories as they were nominated, rather than waiting until the end and only discussing the winners.

Oscards

I have a team that is in a funny space in the project (winding down development, winding up environment-related activities – it’s a very technical piece) and, as the team is also changing after the end of the project, it’s becoming increasingly difficult to find obvious areas to focus on in the medium to long term. For this reason, I try to get the team to drive topics as much as possible (as they’re on the front line), however the culture of retrospectives isn’t very established here yet, so that is easier said than done!

Always wanting to find new ways to run retrospectives so that the team does not get bored, I decided to try an agenda from Retr-o-mat, an awesome site for building your own retrospective agenda. The one I eventually decided on was Plan ID-81-51-69-73-45. Herewith follows my comments on how it went 🙂

1. Outcome Expectations (#81) – Everyone states what they want out of the retrospective [10 min]
I picked this one as I’m trying (continuously) to get my team to think about retrospectives and what they’re for. As it turned out, what I thought would be a quick whip around the room was almost painful, but I did eventually get an outcome from every team member and I believe the process meant they were in a better space for what was to follow. I’d say this one affected my timing most (I only had 75 minutes to get through the whole retro), especially as we always struggle to get everyone into the room on time.

2. Lean Coffee (#51) – Use the Lean Coffee format for a focused discussion of the top topics [30 min]
I’ve been to sessions before where we used this technique and quite enjoyed it. I brought some rusks along (and gave people the opportunity to get a drink during the topic brain-storming) to help create the mood. I think the layout of the room helped this: I’d moved all the tables to the side and the team sat in an almost-circle in easy-to-wheel office chairs. We didn’t get through all the topics and probably could have done with a little more time, but my observation is that the team touched on some key points plus got to ‘vent’ a little around topics that were bugging people but had never really been discussed. The nice thing about Lean Coffee is that it is up to the group how long they spend on a topic before moving on, which takes some of the pressure off the facilitator!

3. The Worst We Could Do – Explore how to ruin the next sprint for sure [20 min]
I included this one because I thought it was an interesting way to think about things and because we have a major release coming up with a new team, so there may have been things that worried certain people but had not been raised. One could probably run an entire retro around this activity alone.

4. Pitch – Ideas for actions compete for 2 available ‘Will do’-slots [10 min]
A great way to generate team ownership of the actions/outcomes for the session! Watch out for people trying to ‘group’ a number of actions into one though. Be quite strict about them limiting what the focus for the next sprint will be! Also, the actions were not always concrete, so some needed rewording (e.g. we re-worded “improve code quality” to “increase code coverage every time we add new stuff”). I did also put actions that didn’t make the cut below the line so that we didn’t lose them. I review actions every couple of retrospectives so that the team can see which have started, which are still in progress, which are done (celebrate!), and which may no longer be applicable.

We did deviate from the agenda slightly as there was an item the team needed to discuss and sort out in the session (moving the times of ceremonies). My feeling is that, unfortunately, this detracted a bit from the overall impact: when we did our pleased-and-surprised round, that discussion was still front of mind for them. Had I known it would come up, I probably would have dealt with it earlier in the process, but one sometimes does need to adapt in the moment 🙂

Have you used any of the above tools before? Any situations where you found them particularly useful (or not!)?

One of my teams has recently expressed a desire to become more familiar with XP practices, including more formalised pairing and TDD, and when I offered them the option of doing a code kata during one of our retrospective slots, they were happy to give it a go.

Kata

Last year I was fortunate enough to attend a kata at the annual South African Scrum Gathering. The session included some information on what a kata is, some advice on how to run one, plus actually pairing up and practicing. Thankfully the kata used during the Scrum Gathering session was a TDD/Pairing Kata, so I was able to use one that I was familiar with for our first session.

Prep

Preparation entailed
– Sending out some kata context (as not everyone knew what they were)
– Having the team select a development and testing framework and make sure it was all set up
– Finding a suitable room (easier said than done, as it turned out!)
– Sending out some pre-reading about pair programming, particularly the roles and red-green-refactor

From my side, I had to source the kata (thank you, Google) and also enlist the help of some co-facilitators. For this I employed the services of another Scrum Master (who was familiar with katas) and our team architect (who is an advocate of TDD and pairing practices). As you will see from the team’s feedback, having the right coaches/advisors in the room added significant value to the exercise, especially if the majority of your team members are new to the techniques being practiced.

The Session

The Agenda

My session started with an overview of the kata concept and I repeated some key points around TDD and pairing. This might not be necessary if your team is good at doing their pre-reading. I did get some feedback that there was too much information and too many instructions up-front, which is probably because both the concept (katas) and the topics (TDD/pairing) were new. In hindsight, some way of splitting the learning probably would have been less confusing. I also printed some of the key TDD/Pairing slides on the back of the kata exercise (although I didn’t notice anyone actually referring back to them during the session). It is important to emphasise that the kata is NOT about how quickly/efficiently you solve the problem, but about practicing the process to get there. Thankfully, although my team can be quite competitive, I think everyone grasped this.
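
In case you’ve never seen the red-green-refactor rhythm in action, here’s a tiny illustrative example. It is not the kata we used (I haven’t named that one here) – just a made-up FizzBuzz-style exercise in Python with unittest, where each test starts life failing (red) before the simplest passing code is written (green) and then tidied up (refactor).

```python
# A hypothetical, minimal red-green-refactor example - not the kata from our session.
import unittest

def fizzbuzz(n):
    # GREEN: the simplest code that makes the current tests pass.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class FizzBuzzTest(unittest.TestCase):
    # RED: each of these was written (and seen to fail) before the code above existed.
    def test_plain_number(self):
        self.assertEqual(fizzbuzz(1), "1")

    def test_multiple_of_three(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_multiple_of_five(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_multiple_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

if __name__ == "__main__":
    unittest.main()  # REFACTOR happens between runs, with all tests kept green
```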

I decided to run the same kata twice as part of the goal was to actually practice the process itself and one could see the team became a lot more comfortable by the time we ran the second round. We only had an hour, so I opted for 15 minutes per round, which unfortunately meant the round ended just as most teams were hitting ‘flow’. Ideally one would run the kata for half an hour, but in the situation where a team is doing a kata for the first time, I’m not sure whether having one thirty minute Kata is better than two shorter katas to get them familiar. Now that my team have done one, I’d be very comfortable running a single thirty minute session.

The other thing I did was mix up the pairs between rounds. This worked well because people got to see how others had approached the same problem, and the more ‘reluctant’ participants (mostly the non-developers) had warmed up by the second round once they realised that most of the team were enjoying the process.

Feedback

3Hs

I wrapped up the session with a quick 3H (Helped, Hindered, Hypothesis) exercise to get some feedback for myself on how the session went. Overall, I think it went well and it was nice to see the level of interaction and energy it generated. Generally the feedback was good too, so hopefully I’ll have an opportunity to run more of these in the future. In case you’re thinking of running one yourself, here are some of the things the team felt:

  • Helped them:
    – Having an early example (built into our selected kata)
    – Coaches for technical and ‘process’ support
    – Certain team members having experience with the techniques we were practicing
    – Switching partners during the exercise
    – Having the developers write the code (based on instructions) even when they weren’t driving
    – Pairing with someone who thought differently to you
    – Learning in pairs
  • Hindered them:
    – Testing framework (the team could pick what they wanted to use)
    – Too little time
    – Inexperience with katas
    – Initial example was not very clear
    – Non developers lacked context


What have you learnt from running katas? Are there any you would particularly recommend?

This morning I ran an initiation session with one of my (merged) teams. As the team will be splitting into two smaller teams in the near future, I had to shift the activities a level higher than usual and didn’t cover things like team norms, but I thought I would share what we did do anyway as the team seemed to find the session valuable and hopefully you will too.

My agenda consisted of three parts:

  1. A discussion of the learning styles of the team members.
  2. A Definition of Done exercise.
  3. Identification of tasks we need to do to ensure everyone has access to what they need (I won’t discuss this in this post).

I booked the session for 90 minutes and, as it ran over lunch time, I organised some snacks which were available from when the team arrived. The room had a ‘loose’ structure with tables pushed to the side for the first part of the session to encourage engagement. I time-boxed each piece and allowed everyone to get up and get more snacks/drinks when I felt that the energy was starting to ebb.

Learning Styles

The team completed the learning styles questionnaire in advance and I put up the summary for all team members on flip chart paper. I circled any preferences that were very low or very strong and basically invited open dialogue and questions. Before starting, I emphasised that:

  • Learning styles are situational: you may have a different profile depending on the context in which you answered
  • Learning styles are PREFERENCES. Just because you have a low preference, it does not mean you cannot learn in the other styles.
  • Learning styles can change over time. For this reason, it is recommended one shouldn’t use data that is more than a year old.


Definition of Done exercise

I based this section on this guide.

As I had a big group of people (about 13), I decided to split the group into smaller teams for some of the exercise. I noticed that the teams tended to be the same ‘usual suspects’, so I did a team shuffle between steps 1 and 2.

  1. In small teams (of about four people), the group listed the activities needed to get a piece of work from the beginning of development to production.
  2. Still in teams (after shuffling), they grouped the activities based on the level at which each is done (ours deviated from the guide slightly as we don’t have iterations/sprints and have two ways of getting into production depending on the story).
  3. Each team took a turn to present back. The first team presented their solution as-is with the other two teams adding or discussing levels based on what they had.
  4. We created our Story Definition of Done checklist – as well as a Story Definition of Ready as it turned out we’d specified some of those as well.

Note that we didn’t move on to identifying the impediments that cause us to have tasks at the release level, as there wasn’t time. I do feel this is a great way to identify impediments and ways for the team to improve their delivery process.

The above conversations helped everyone get to know the new team members better and led to a common understanding and agreement of what done meant. Engagement levels were greatest when the group was able to work in smaller teams, and I felt the shuffling midway helped bring some ‘cross-pollination’ into the process. One aspect we didn’t touch on, which I will definitely include when the teams split, is a discussion of team norms and preferred ways of working.

What kick-off or initiation activities have you found have worked well with your teams?

Once upon a time there was a team called the Kanban team. But they didn’t do Kanban. They had a task board and they tracked their work, but that was it. The Kanban team also had a new agile coach. This new coach wasn’t very familiar with Kanban, but did some research and realised that the team was missing some basics. The team met and did some exercises and reviewed their processes and board and made some changes – changes supporting Kanban basics. But these changes didn’t stick. And the team didn’t really like the new board. And the team didn’t feel empowered to improve on the new board. The fact that the lanes were taped onto the whiteboard also meant it was hard to change things on the fly, so in some cases the team simply couldn’t be bothered.

The agile coach was frustrated. The team was not learning and they were not growing. Their environment was changing and they were not adapting. They didn’t actually want to do Kanban. They had a broken information radiator. The agile coach decided to watch and wait. The more she watched the more she realised what the gaps were – where the team needed feedback – but she was at a loss as to how to resolve those gaps. She had conversations with certain key team members and stakeholders. She shared her concerns and observations. She created awareness through conversation. However, the team were not yet ready to tackle these things. They needed some guidance and direction. So she waited. And researched. And discussed. And watched some more. And then another agile coach recommended she read “Lean from the Trenches”. And she realised that this was what she had been searching for.

Having learnt from the first attempted iteration of the board, the agile coach decided on a different approach to presenting the changes. Thankfully these changes could coincide with a team reshuffle: the Kanban team would be no more. There would be a new team – a mix of old and new team members. And the incoming team members were familiar with Scrum and simple task boards. This provided a ‘good excuse’ to advocate some changes to the information radiator. So what she did was:

  1. She used the ideas from “Lean from the Trenches”, together with her observations of how the team worked and the type of work they tackled, to draft a new task board. She made sure that the new task board still included the information the team found valuable on the current board – even where it was currently hidden or somewhat confusing. She also included some basic changes she hoped would help generate feedback to guide team-driven improvements.
  2. She shared her board with team members individually. She asked them what their thoughts were. She asked them for questions and feedback. She was happy to see that they quickly connected the new and the old and found the new version simpler to understand.
  3. She shared that she was going to remove the permanent lines. The lines would be drawn with whiteboard markers. It would be easy to change the board. Things would be more flexible.

Eventually the big day arrived. The agile coach and one of the analysts mapped the existing stories and tasks to the new board and then the hard work of ripping down the old board began. Come Monday morning, the new board was ready for stand-up. There were still some questions and some of the stories/tasks moved a bit during the session, but even from day one the process was working better:

  1. The team solved discrepancies on the board themselves. With the previous board iteration, they had turned to the agile coach when they had a question or weren’t sure how to use their information radiator instead of finding a solution for themselves.
  2. There were already suggestions from the team around how to tweak the board. They were already taking on ownership of their information radiator.
  3. It was VERY visible that work-in-progress was piling up on one of the key stories – and that there were in progress stories or tasks that were not currently being worked on by anyone.

The key take-aways?

  • Don’t be afraid to try something new.
  • Be ready for failure – ideas won’t always work the first time round, but that’s how one learns.
  • Persevere: if at first you don’t succeed; try, try again.
  • And once it’s initiated, let it go – ultimately the team needs to own the change.

What changes have you struggled then succeeded to make in your team recently? What techniques eventually led to your success?