I did this retrospective with one of my teams recently and, seeing as the feedback afterwards was really positive, I thought I would share. First off, kudos to The Virtual Agile Coach for creating this retrospective in the first place. I don’t have a paid Miro account, so I had to recreate the experience in PowerPoint (file attached), but he had done the heavy lifting, and Miro templates are available on his amazing site for those with the power to import.

Some context to add is that I have had some push-back from the team in the past when it came to activities in meetings that they did not regard as “work”. For example, some people don’t like random check-ins while others complain when we have to interact with a visual aid (“can’t we just talk?”). As an experiment, for this retrospective, I explained the “why” behind each of the facilitation activities I was using. And I believe it helped 🙂

Another note: although I have indicated the timeboxes I used, this was a fairly large group (14 people), which brings its own challenges. For example, with a smaller team, break-out rooms might not be necessary.

1) Setting the Scene [10 minutes]

First I shared the Miro board link in the chat (I had already shared it before the session, but wanted to make sure everyone could get there) while we were waiting for people to join.

Then we started with an activity. I pasted the following instruction in the chat:

Take turns to answer this
==========================
What FRUIT are you today? (and, if you want to share, why?)

We used https://wheeldecide.com/ (which I had set up with the team names, configured to remove an entry after each spin) to decide the order in which people would share.
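As an aside, if you don’t have a wheel tool handy, the same “spin and remove a name after each turn” behaviour is easy to mimic with a few lines of Python. A minimal sketch, with placeholder names:

    import random

    team = ["Ann", "Ben", "Chen", "Dana"]  # placeholder names
    # One shuffle is equivalent to spinning the wheel repeatedly
    # and removing each name once it has come up.
    order = random.sample(team, k=len(team))
    for turn, name in enumerate(order, start=1):
        print(f"{turn}. {name}")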

I also gave the instructions verbally.

Before we started, I explained why we were doing the activity:
“When a person doesn’t speak at the beginning of a meeting, that person has tacit permission to remain silent for the rest of the session (and often will). We want to ensure that no one has a subconscious barrier to speaking up in our retrospective because it is important that everyone feels able to contribute equally.”

Remember to pause before you spin the wheel for the first time so people have a chance to think about what they will answer.

After the “fruit” check-in, we went to the Miro board and I reminded everyone why we did retrospectives (The Agile Principle) and the mindset we were meant to have during the discussion (The Prime Directive). I also ran through the agenda.

2) Gather Data (1): Silent Writing [4 minutes]


I pasted the next instruction into the chat before I started speaking (it included a link to the correct Miro frame):

Silent writing for 4 minutes. Add stickies to the sections on the Miro board.

I explained why we were doing silent writing:
“Some people need time to stop and think or are not able to think while other people are talking. We’d like to give those people a chance to gather their thoughts before we move into a stage where we want them to listen to the “speak to think” people in the room. Another reason we’re doing silent writing is it is easier for the human brain to stay engaged and focused when we engage at least two of our senses. So in this instance people are seeing and typing (movement). And having a whiteboard in general means you can capture notes (movement) and listen at the same time. Another reason it’s good to capture notes is sometimes people might have to step away, or will be briefly distracted, and then there is a visual representation of what they might have missed and they can catch up with the rest of the group. We also have some team members who cannot be here today, and this creates something they can refer to when they are back at work. Lastly, our brains can only hold 5-8 items at any one time, so by writing down our thoughts before we move into conversation, it means we can focus on listening rather than trying to remember what we want to say.”

I then reiterated the instruction and played the song “Another One Bites the Dust” while they performed the activity (because the length of the song is almost 4 minutes).

3) Gather Data (2): Conversation [6 minutes]

Once again, I started with pasting the instructions (the same link as for silent writing) into the chat:

--- Breakout Rooms: Discuss Queen ----
Discuss what's been written. Add any new thoughts.

I explained that we would be going into break-out rooms of 3-4 people randomly assigned by Zoom and that each group would have about five minutes to discuss what they could see on the board plus add any new thoughts out of the discussion.

I explained why we were using breakout rooms:
“We want everyone to have a chance to speak, and because we are so many people, to do so in one group would take a long time. Breaking up into smaller groups means we can have some of those conversations in parallel. It’s also easier for people to connect online when they are in a smaller group and our brains find smaller groups less tiring, particularly when using a tool like Zoom.”

I then repeated the instructions and sent everyone into break-out rooms.

4) Generate Insights [15 – 17 minutes]


Once everyone was back, I added the next instruction to the chat (with a new link to the relevant frame on Miro):

----- Breakout Rooms: Generate Insights ---
Talk about what stands out. What surprised you? What patterns do you see? What is missing?
(You can add notes -> miro_link )

I then told the group we would be going into break-out rooms again (this time with new people), and this time the idea was to step back and look for patterns or trends – to try to see things from a new or different perspective. I mentioned that the instructions were in the chat and that there was a space on the Miro board to add notes (if they wanted to). I also said that we would debrief what was discussed as a single group when everyone came back from the break-out rooms.

Before I opened the new rooms, I checked whether the first break-out slot had been long enough. One group mentioned that they hadn’t really managed to get to everything so, as we were running ahead of schedule, I added an extra two minutes to the five-minute break-out room timebox.

While people were in the break-out rooms, I added a funny hat to my Zoom attire.

When everyone had returned from the break-out rooms, we made time for discussion. This is where, as a facilitator, you need to be prepared for awkward silences! Once the first group had had their turn, things flowed again. I was ready to add any additional comments/notes that came out of the debrief; however, in this instance, there were none.

5) What could we do? [5 minutes]

The chat instruction:

Please add ideas for what we can try next sprint to the "Ideas" section -> miro_link
We will then have a 10 minute break until xx:xx

I explained that we had gathered data and highlighted any trends or observations, and now we had to decide what we wanted to try in our next sprint. The first part of that process was to capture and share our ideas. We would do this by having five minutes of silent writing, followed by a 10-minute break, and when we returned from our break we would discuss the ideas on the board. I told the team that I would be playing music for the silent-writing part; however, people could use the time as they chose, as long as they had captured their ideas by the time we returned from our break. After checking for any questions, I started playing the song “I Want It All”.

While the song was playing, I kept my camera on as a visible indication that the break hadn’t officially started. When the song finished playing, I turned my camera and microphone off (we generally stay in the Zoom room for breaks) and reiterated in the chat window what time everyone was expected to be back.

6) What will we do? [Remainder of the timebox – approx 30 min]

I changed my Zoom hat again for the final part of the session, and reminded the team that we were aiming to shape our ideas into at least one action or first step that we wanted to try in our next sprint. We started with a debrief of the ideas that had been added to the board, particularly the ones that were not so specific and more like thinking items (so that we could generate some specific ideas for what we could try).

Once we were done discussing, we used Menti to rank vote the actions we’d come up with. One could also use something like dot voting. I used Menti because my team are familiar with Menti and I find it’s quicker to update ‘just in time’. As an aside, before we rank vote, we also usually get the team’s input as to the impact/effort for each of the proposed actions. For this session, it actually led to further discussion, because one team member realised that they didn’t know enough about the action to be able to rate the effort to do it.

[Image: Effort vs Impact of Actions]

Once ranked, we took the top-ranked action and ensured that it was SMART. At that point we were out of time, so I asked the team if we were OK to end the session. Sometimes in the past we have decided to take more than one action into our sprint (but we try to limit it to no more than three).

We also always do a fist-to-five to ensure everyone can live with the SMART action(s) as described. I like to use a Zoom poll for this.

7) Close

To close the session, I recapped what we had done during the session (starting with the check-in activity) and confirmed that we had agreed to one action for the next sprint. I reminded people who had specific actions from our discussions about their tasks. Finally, I asked the team if they would give me feedback on Miro by

  1. Dragging a dot to rate the retrospective out of five
  2. Adding any comments (things to keep doing, things to stop doing) to a sticky

And, with that, we thanked each other and ended our sprint retrospective.


If you give a similar retrospective a try, let me know how it goes. I would be interested to hear what did and did not work for you and your team.

My team were struggling to make their sprint commitments. Sprint after sprint, we’d go into planning, pick the stories that we believed we could finish in the sprint, and get to the end with things not where we wanted them to be. To make matters worse, stories were piling up towards the end of the sprint, leaving testing (and feedback) right to the end. Our best intentions were just not translating into a workable plan, and it was hard to see why. And then someone suggested that we try visualising our plan in Sprint Planning 2 (SP02): would it help if we created a Gantt?

My gut reaction (as their Scrum Master) was pretty strong. I had flashbacks to my Project Administration and Project Manager days, where my life was ruled by Microsoft Project plans and unrealistic resource-levelling. I recalled long arguments about how, just because you could get a document through our intensive quality checks in two days, it usually took about a week – plus you needed some days to rugby-tackle your busy stakeholder into signing it once it was ready. All this meant that your team was (in Gantt terms) “not working” for the duration of the document review task – which was unacceptable when everyone needed to be at a consistent 75% utilisation for the project. Then there were the status reports and percentage-complete updates (how do you know you’re 50% complete if you’re not done yet?) and lines that jumped and moved (or didn’t) when your dependencies were mapped incorrectly.

All of the above happened in my head, of course, and we agreed to give the Gantt chart a try during SP02. Thankfully, besides knowing how to draw a basic frame, all my historic experience with Gantt charts meant I also knew which questions the team would need to answer to complete one – plus the mistakes people usually make when creating a visualisation of duration.

Before I share with you what we did, I think I’d better let you know how things turned out. The first few times we did it, people really struggled and it took a while to create the chart. However, with practice, it became just one more SP02 activity, and eventually, I didn’t need to help the team at all.

The visualisation helped us highlight where we were overcommitting ourselves (expecting one person to work on too many things at the same time; forgetting that a team member was on leave at a crucial point; or not finding ways to do some testing earlier). By making our poor planning assumptions visible, it helped the team figure out workarounds before the sprint even started, e.g. for very long stories, they’d identify smaller milestones for feedback. Or, where they realised that the work was very heavily weighted towards a particular skillset, they identified simpler pieces where less experienced team members could pair up and work together, saving our “big hitters” for the more complicated stories. We also got better at accommodating sprint interruptions (like public holidays or whole-team training) and adjusting our sprint forecast accordingly. Lastly, we started taking our Gantt into daily stand-up, and the team would adjust their plan at the end of stand-up – a great way to see whether we were on track and, where we weren’t, what needed to change.

How did we do it?

This is how we used the Gantt chart. First, after SP01, I would create an empty framework for the 10 days of the sprint (there’s a rough sketch of this after the list below). I’d add any known “events” that would impact us, for example:

  • Our sprint ceremonies were on there
  • Any planned grooming slots were indicated
  • Planned leave/training was reflected, with the box size representing whether it was a half-day or full-day
  • Other significant “distractions” were added, like floor release dates
  • Any other planned meetings we were aware of after Sprint Planning 1 (SP01) were added
  • Weekends and any public holidays were blocked out
  • We also made the sprint goal visible on the last day (Review box)
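If you don’t have a whiteboard tool to hand, here is a minimal sketch (in Python, with invented dates and events) of generating that empty ten-day frame, with weekends left off and public holidays blocked out:

    from datetime import date, timedelta

    SPRINT_START = date(2021, 3, 1)       # assumption: sprint starts on a Monday
    PUBLIC_HOLIDAYS = {date(2021, 3, 8)}  # invented holiday
    EVENTS = {                            # invented "events"
        date(2021, 3, 1): "Sprint Planning",
        date(2021, 3, 12): "Review + Retro (sprint goal visible here)",
    }

    day, rows = SPRINT_START, []
    while len(rows) < 10:                 # 10 sprint days over 2 weeks
        if day.weekday() < 5:             # weekends are left off entirely
            label = "BLOCKED (public holiday)" if day in PUBLIC_HOLIDAYS \
                else EVENTS.get(day, "")
            rows.append(f"Day {len(rows) + 1:>2}  {day:%a %d %b}  {label}")
        day += timedelta(days=1)

    print("\n".join(rows))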

The team would then work down their list of Backlog Items agreed in SP01. After discussing an item and the tasks involved, they would plot its expected start and finish dates on the Gantt. As this was duration-based, in the beginning I sometimes needed to remind them to add an extra day where a task ran over a public holiday, or where the person(s) the team assumed would be picking up the task (based on what was going on and availability) was going to be out of office. As they generally had an idea of who might be doing what, durations were also adjusted based on the person doing the work, e.g. they would sometimes add extra time if the person was working in a space they were less familiar with. Even without thinking about who would be working on a specific task, the Gantt made it very clear when they were expecting to start more stories on the same day than there were people available to work on them. As previously mentioned, where stories looked to have a longer-than-usual duration, the team also brainstormed mini-milestones where testing/checking could happen (e.g. if it was a five-day task, they’d try to have something that could be tested/checked every 1-2 days). I added the tasks to the Gantt chart for the first few sessions we used it; once the team had got used to the idea, a team member took over.
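That “add an extra day” adjustment is just duration-based scheduling over working days. A small sketch of the calculation (the dates are invented):

    from datetime import date, timedelta

    def finish_date(start, working_days, non_working_days):
        # Walk forward from `start`, counting only working days,
        # and return the date the task should finish.
        day, counted = start, 0
        while True:
            if day.weekday() < 5 and day not in non_working_days:
                counted += 1
                if counted == working_days:
                    return day
            day += timedelta(days=1)

    # A 3-day task starting on Friday 5 March, with Monday 8 March a
    # public holiday, finishes on Wednesday 10 March rather than Tuesday.
    print(finish_date(date(2021, 3, 5), 3, {date(2021, 3, 8)}))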

Finally, if the Gantt showed we’d been mistaken about our forecast in SP01, it meant we were able to communicate changes to the forecast before the sprint even started.



This team had a specific problem to solve. As it turned out, the Gantt chart helped them create a shared view of their sprint plan which could be used to help them test their thinking/approach. It had a bit of a learning curve and took time and energy to create though, so I’d still caution against “just using it” for the sake of it. However, I’m also now likely to suggest a team tries it if they are experiencing similar planning problems to this team.

Have you ever used a Gantt chart for sprint planning? What did your team learn? Or have you been surprised by some other tool that you’d previously had bad experiences with? What was it and what did you learn?


My team had been working together for three sprints. During this time we’d been grooming and delivering stories (into Prod) but we had not done any sizing. Our Product Owner and business stakeholders were getting twitchy (“how long will we take?” – see How big is it?) and it was time to use our data to create some baselines for us to use going forward (and, as a side benefit, to find out what our velocity was).

Besides the fact that it was a new team, this team was also very large (15 people), some of them had never done an Affinity Sizing type exercise before, and we were 100% distributed (thanks to COVID19). Quite the facilitation challenge compared to the usual exercise requiring nothing more than a couple of index cards, masking tape and some planning poker cards. This is what I did and how it worked out.

1. Preparation

First, I needed something I could use to mimic laying out cards in a single view. As we’d already done three sprints of stories, there were a number of cards to distribute, and I didn’t want to be limited to an A4 Word document page or PowerPoint slide. This meant a whiteboard (unlimited space) was required, and we eventually ended up using a free version of Miro.

Second, with my tool selected, I needed to make sure everyone in the team could actually access/use the tool. Unfortunately, Miro does require one to create an account, so prior to the workshop I sent a request to everyone on the team to try and access an “icebreaker” board.

Third, I needed to prepare my two boards:

  • The Icebreaker board which was to serve three purposes:
    1. Give people something to play around with so they could practise dragging and interacting with Miro
    2. Set the scene in terms of how sizing is different to estimating. Hopefully as a reminder to those who already knew, or as an eye-opener to those who might not.
    3. Use a similar format/process to the board I would be using for the Affinity Estimation exercise so that the team could get used to the process in a “safe” context before doing the “real thing”.
  • The Affinity Estimation board and related facilitation resources.

The Icebreaker Board

[Image: ball game starting layout]

This layout matched the starting point of the Affinity Estimation exercise.

There was a reminder of what “size” was for the purposes of the exercise in red (1) and instructions for how to add the items to the scale (2). The block on the left was for the “stories” (balls) that needed to be arranged on the scale.

The Affinity Sizing Board

(I forgot to take a screenshot of the blank version, so this is a “simulation” of what it looked like.)

[Image: a “simulation” of the blank Affinity Sizing board]

For the Affinity Sizing, besides the board, I also prepared a few more things:

  1. A list of the stories (from JIRA) including their JIRA number and story title in a format that would be easy to copy and paste.
  2. The description of each story (from JIRA) prefixed with the JIRA number in a format that was easy to copy and paste
  3. I asked one of the team members if they would be prepared to track the exercise and ensure we didn’t accidentally skip a story.

A reminder that at the point when we did this exercise, we were about to end our third sprint, so we used all the stories from our first three sprints for the workshop (even the ones still in progress).

2. The session

The session was done in Zoom and started with the usual introduction: the purpose and the desired outcomes.

From there, I asked the team members to access the “icebreaker board”. In the end, I had to leave the team to figure out how to use this board for themselves while I dealt with some technical issues certain team members were experiencing, so I couldn’t observe what happened. However, when I was able to get back to them, I was happy enough with the final outcome to move on.

[Image: the icebreaker board outcome]

Round 1: Small to Large

To kick things off, I copied and pasted the first story from my prepared list (random order) into a sticky and the story description (in case people needed more detail) into a separate “reference” block on the edge of the whiteboard. The first person to go then had to drag the story to where they thought it best fit on the scale.

From the second person onwards, we went down the list and asked each person whether they:

  1. Wanted to move any of the story-stickies that had already been placed, or
  2. Wanted a new story to add to the scale

A note here – it might be tempting to have some team members observe rather than participate (e.g. your designer or a brand new team member); however, I find that because mistakes will self-correct, there is more benefit in including everyone in the process.

We repeated the process until all the stories had been placed on the scale. At this point, it looked something like this (again, a “simulation”):

[Image: a “simulation” of the scale after Round 1]

Round 2: Buckets

At this point I used two data points to make an educated “guess” and create a reference point.

  1. I knew that our biggest story to date was of a size that we could probably fit 2-3 of them in a sprint
  2. I could see where the stories had “bunched” on the scale.

So I picked the biggest bunch and created a bucket for them, which I numbered “5”. Then I drew buckets to the left (1, 2, 3) and to the right (8, 13, 20) and moved everything that wasn’t in the “5” bucket down below the updated scale/grid (but still in the same left-to-right order).

[Image: the buckets]

Before we continued, I checked with the team whether they felt all the stories in the 5-bucket were actually about the same size. They did (but if there had been one that they felt might not be, it would have been moved out to join the others below the buckets). After this point, the stickies that had been placed in bucket five at the start of the process were fixed/locked, i.e. they could not move.

Then we repeated the process, where each person was asked whether they:

  1. Wanted to move a story-sticky that had already been placed into a bucket, or
  2. Wanted to move one of the unplaced story-stickies into a bucket

Initially, some people moved a couple of stories into buckets on their turn, which I didn’t object to as long as they were moving them all into the same bucket. Again, I was confident that the team would self-correct any placements that were really off.

We had one story that moved back and forth between buckets 1 and 2 a few times; eventually, the team had a more detailed discussion and made a call, and that story wasn’t allowed to move again (I also flagged it as a bad baseline and didn’t include it in future sizing conversations).

Once all the story-stickies had been placed in a bucket, everyone had one last turn to either approve the board or move something. When we got through a full round with no moves, the exercise was done:

[Image: the final board, with every story in a bucket]

The actual outcome of the workshop

Even with technical difficulties and approximately 15 people in the room, we got all of this done in 90 minutes. This is still longer than it would usually take face-to-face (I’d have expected to need half the time for a co-located session), but I thought it was pretty good going. And the feedback from the participants was also generally positive 🙂

These stories (except for the one I mentioned) then became baseline stories to compare against in future backlog refinement. And because I now knew the total number of points the team had completed across the three sprints (the sum of all the stories), we also knew our initial velocity.
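For completeness, the velocity arithmetic is just the total points divided by the number of sprints. A tiny sketch (the point totals are invented):

    # Story points completed in each of the first three sprints (invented numbers).
    completed = {"Sprint 1": 21, "Sprint 2": 26, "Sprint 3": 24}

    velocity = sum(completed.values()) / len(completed)
    print(f"Initial velocity: {velocity:.1f} points per sprint")  # -> 23.7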

Have you ever tried to use Affinity Estimation to determine baselines? Have you tried to do so with a distributed team? What tools did you use? How did it go?


I have a new team. It is a fairly large team with varied experience of working in an agile fashion. We’re also still figuring out how design fits into our overall value-delivery chain (as an organisation, not only as a team), and recently I found myself having to help clarify some expectations and terminology which had led to some confusion and conflict.

At this point, if you haven’t read these already, I’d recommend you read these two blog posts for some background context:

  1. Change Managing our Backlog Refinement
  2. Overlaying needs, phases and quality concerns

The conversations I had with various team members led to me creating this rough visualisation of how where you are in your Epic and backlog-refinement conversations determines the necessary level of granularity of front-end design.

A caveat: I believe there could be a third axis – one for “figure out what the problem is” (Product Discovery), but due to where my team was in the delivery cycle, I left it off to reduce confusion. This scribble assumes we know what problem we need to solve.

[Image: UX vs ADKAR]

So how does the above work?

The vertical (y) axis represents where your Epic is in its “quality” journey. Are we still just focusing on getting something working (Functional/Reliable)? Or are we polishing a feature that is almost ready for release or is already released (Usable/Pleasurable)?

The horizontal (x) axis represents where in the refinement process we are with the team for the piece we’re hoping to take into our next few sprints.

The colourful blocks (intersections) represent the level of visualisation the team probably needs for their refinement conversation, based on x and y. So if we’re just starting on an Epic and want to get to a slice that does the basics for internal testing or Beta feedback (usable), and the team have had a conversation to understand the problem and are starting to think a bit more about how to slice the next chunk into pieces of user value (Break-down), then you probably don’t need anything more granular than a wireframe to have the conversation and figure out the next steps. If you’re already bringing pixel-perfect designs into a conversation at this point, there’s a lot of risk that expensive re-work will be required. There’s also a risk you’ll miss out on a more creative approach or idea, because the team will be less willing to suggest changes to something that has already had so much effort put into it. Finally, because pixel-perfect isn’t necessarily required for something usable, working at a lower level of granularity (in this example, stopping at high fidelity) means it’s easier to make changes when we get feedback from users of the “usable” version of the Epic.

So far I have used this tool to help team members know what they need to prep for a backlog refinement conversation, and I plan to use it when facilitating conversations to the right-hand side of the y-axis so that team members remember that “grooming” isn’t always about ready stories.

Would you find this a useful tool? If you have used it, let me know how it went.

One of the challenges of working with software is that one is rarely building something from scratch. Usually, a team is working across a mix of maintaining existing (sometimes old) code; refactoring/migrating old concepts to new places; experimenting with new ideas; and/or just changing things to make life better. This can make it really hard to decide just how much effort needs to be put into a piece of work. Does it need to be the all-singing-dancing-tested story? Or will a cowboy-hack for fast feedback work just as well? And what does it mean to cowboy-hack in the financial industry vs for a game used by ten of my family members?

I work with a team in the financial industry, for a fairly high-profile client that is nothing less than obsessed with their brand and reputation. Which is a good thing to worry about when people are trusting us with their life savings. We’re also busy migrating our web offering to a new technical platform – except it’s not purely a migration, because we’re also extending our current offering to work on mobile devices, so there’s experimentation and user testing required too. In this environment, it’s very hard to deploy anything that hasn’t got a solid safety net (automated testing), whether it’s feature-toggled on or not. The focus is on keeping things safe rather than learning fast. And, sometimes, that leads to us over-engineering solutions when we’re not even sure we’re going to keep them. All of which means learning is sometimes very expensive and usually takes a long time.

With that context, we have recently had some conversations as a team about how we can narrow down our options and learn more quickly and cheaply for those pieces where we really don’t know what we need to build. A lot of good questions came out of that session, including one about when we can just hack something together to test a theory, and when we should apply due diligence to ensure code is “safe” and tested. Here are my thoughts (using two well-known and useful models – especially powerful when used together) on when it is OK to experiment without worrying about the things that keep us “safe”, and when we need to keep “safety” in mind while we iterate over an idea.

(In case it isn’t clear, there are cheaper ways than writing code to test out an idea – and those would be first prize. However, at some point you probably will need to build something to progress the fact-finding for ideas that have passed through the first few experiment filters.)

The first model is fairly well known (and you can find a full explanation here):

[Image: the MVP illustration by Henrik Kniberg]

The second is one I’ve newly learned about, and using them together was a great a-ha moment for me 🙂

[Image: Hierarchy of User Needs]

How “safe” you need to keep stuff depends on where in the learning phase you are and what your experiment outcomes relate to.

The first phase is testing an idea. This is figuring out if something will even work at all. Will someone like it enough to buy it or pay for it? Do we even have the skills to use this? Does that third-party plugin do all the special things we need it to? At this point, you want to learn as quickly as possible whether or not your idea is even feasible. You may never even use it/build it. If you can avoid writing code altogether (the book “The Lean Startup” is full of great examples), then you should. It’s about getting as short a feedback loop going as possible. In the diagram, the person wanted something to help them get from A to B. One of the things that was tested was a bus ticket.

The second phase is building something usable. This speaks to the two bottom layers (and some of the third) of the hierarchy-of-user-needs triangle above. Here we probably do need to keep our code safe, because it’s not really usable (properly usable) until it is released. We will still be experimenting and learning, but some of those experiments will be around how this works “in the wild” – e.g. from a user perspective, a performance perspective, or a feature perspective. Whatever you build in this phase you’re probably going to mostly keep and iterate over (because you’ve already tested that the idea is valid), so you do have to consider maintenance (and, thus, care about things like automated testing) in this phase. At a micro level, within this phase, there probably will still be ideas that you need to test out, and most of the experiments are more likely to be around how rather than what, but there is a blend of both.

The final phase – tackling the rest of the pyramid – includes adding in those “special effects” but ALSO incorporating more of the nice-to-have feedback from users that you should have gained from all your experiments along the way. In the transport example, the car at the end is a convertible rather than a sedan. A sedan would have been perfectly usable; however, at some point when they were testing out ideas, the user mentioned that the one reason they preferred the skateboard over the bus ticket was that they liked to feel the fresh air on their face. That kind of feedback is what makes things lovable/special – although in no way does it make them more usable. FYI, most organisations running an iterative approach choose to stop once they get here, due to the law of diminishing returns. The problem with waterfall was that, because everything was done big bang, you didn’t have the option of stopping when you’d done “enough” (because there were no feedback loops to figure out when “enough” was).

A team could be anywhere on this journey, depending on which idea or feature we are talking about. Which is why the WHY becomes super important when we’re discussing a product backlog item. WHY are we doing this piece of work? WHAT do we need to get out of it? WHAT is the assumption that we want to test? And what does that mean for how much effort and rigour we need to apply to that piece of work?

Besides helping teams identify what kind of work they should be focusing on, these two models (especially the user-needs one) have been helpful to me when having conversations about where UX and UI fit into the Scrum team mix. If you think of UX as part of making things basically usable (the bottom part of that layer, if you will); a combination of UX/UI as moving you into the pleasurable layer; and UI as creating the cherry on top – then it’s easier to see when just building to a wireframe is OK (to get feedback on functional and reliable) vs having something pixel-perfect with animations (that incorporates what we learned from testing functional, reliable features).

Or, roughly:

[Image: User Needs with UI/UX]

Let me know if you find these two models helpful. Have you used them before? Where have you found that they didn’t translate that well into a real life scenario? What did you learn?

Scrum Ceremony Puzzle


This is a “game” I played as a kick-off activity with a new team. Thought I would share in case someone might find it useful 🙂

The basic idea is to match the correct Purpose, Outcome(s) and Output(s) to each Scrum ceremony.

Preparation:

  • Print out the Ceremony Boards (each team will need its own set)
  • Prepare the Ceremony Cards (each team will need its own set)

Game Play:

  • Place the cards on the boards in the correct spaces
  • The cards are colour coded as a Purpose/Outcome/Output to help

How we played it:

  • I divided my participants into two teams and made a bit of a competition of it.
  • Both teams started in opposite corners of the room and ran to a table in the middle to try to solve the puzzle.
  • When a team thought they had solved it, they would signal, and both teams went back to their respective corners (think “Survivor” style).
  • I would then check their solution against mine. If the team had solved the puzzle correctly, they were the winners. If not, both teams could return to their tables to continue working on the puzzle.

Feel free to try out the game and give me feedback!

Resources:

Recently my team played a loose version of the “Yes, But” improv game at the beginning of a retrospective (retro) as an icebreaker. I say loose because we played it in a round (rather than in pairs) and did two rounds. I started each round with the same statement: “I think we should have snacks at Retro” (this is something that often comes up – tongue-in-cheek – during retro conversations).

For round one, the next person in line always had to respond starting with “Yes, but”. At the end of the round (we were seated in a circle), I asked the group to silently pay attention to how they felt and what they experienced during the exercise.

For round two, the next person in line had to respond starting with “Yes, and…”. At the end of the second round I asked some questions about how the team experienced both rounds:

  • How did the first round feel?
  • How did the second round feel?
  • What made a round difficult?
  • What did or could you do to make a round easier?
  • What does this mean for how we respond to each other as a team?

Interestingly (and unexpectedly), my team struggled more with the “Yes, and” round than the “Yes, but” round. To the extent that one team member couldn’t even think of something to say for the “Yes, and” round! At first I was a little stumped, but as we discussed further we realised that:

  1. As a team, we found it more natural to poke holes in ideas rather than add to ideas we didn’t completely agree with.
  2. When we didn’t agree completely with a statement, we got “stuck” and couldn’t think (easily) of a way to add to the statement.

As an example for point 2, above, one person responded to the statement with: “Yes, and we will need to do exercise”. The person following them really struggled to respond (because they don’t like exercise) and didn’t really come up with anything convincing. As a group, after some thought, we eventually realised that “Yes, and it can be optional” would have been a perfectly valid response. However, as a group, it took us a while to get there. So it definitely wasn’t something that came naturally to us.

For me, these were quite cool insights, and probably good for a team to be aware of, particularly when we’re struggling with new problems or trying to find creative solutions.

Have you tried similar games? What did you learn or experience? How has it helped your team?

This is my take on using “Six Thinking Hats” to reflect on a period of time. You could use a light version for a retro – or the full version to review something longer, like a project stage or a release. It’s usually most effective after some milestone event, where the learnings can be applied going forward. There is still value in doing it at the end of a project, but what you get out of it for future teams may be less valuable, as you won’t always know what will be applicable.

Preparation

In order to save time in the session, you do need to do a fair bit of preparation. Try to collect as many facts about the time period as you possibly can before the session. Facts are anything backed by data; some common “facts” one might include are:
– Team changes (people joining/leaving)
– Events (e.g. release dates)
– Sprint information (velocity; commitment; actuals; sprint goal; etc.)
– Changes in process
– Special workshops or meetings
– Any data provided by metrics

I’ve found the most effective way to use the facts in the session (and the rest of my post assumes you have done this) is to map them onto a really large timeline. I typically use a sequence of flip chart pages that can be laid out so that attendees can literally “walk the line”. I’ve stuck them up on long walls or laid them out on a row of tables and even used the floor where I needed to.

It is also useful (for the reflectors in the team) to send out a description of the hats in advance and ask them to think about each one before the session.

Before you start your workshop, you have to set up the room (also see the tips at the end of this post):

  1. Lay out your timeline
  2. Ensure there is space for people to easily walk along it
  3. Have various stationery points along the timeline with pens and stickies
  4. Don’t forget to have a whiteboard or some other space for grouping the ideas

Materials

Besides your “timeline” of Facts, you will also need:

  • Small pieces of paper that people can write appreciations on
  • Pens
  • Stickies: one colour per hat
  • *Optional* Snacks

For the different hats, I usually use the following colours:

  • Facts: N/A (people write directly on the timeline)
  • Instincts: Red
  • Discernment: Blue
  • Positives: Yellow
  • Ideas: Green

Process

The process I follow is largely based on this one and, as they mention, I have found that the order is fairly important.

For an average team, I time-box each section to about 10 minutes. Breaks need to be at least 5 minutes, but could vary depending on the time of the day (e.g. you may need a lunch break). If you are going to use the data in the session to come up with actions and improvements, then your time-box for that part will depend on what technique you plan on using. Obviously these may need to be adjusted based on the size of the group, but as most of the steps are self-paced, one advantage of this workshop is that it works quite well with larger groups.

Round 1: Facts

Have the attendees “walk the line” from the beginning to the end. This is a walk down memory lane and also a chance to fill in any blanks and ensure everyone agrees that the facts are correct. There are no stickies for this step – if people want to add or change anything they do that by writing directly onto the timeline (at the right point in time, of course). Remember to remind everyone that they should only be adding facts.

Round 2: Instincts

“Gut Feel”

Hand out your “instinct” stickies. Remind everyone of the definition of an “instinct”. I sometimes skip this round because people struggle to differentiate between “instincts” and “positives/negatives”.

Appreciations and Break

Give everyone a chance to write appreciations (these will be shared later – either at the end of the session or afterwards). It’s also a good point to have a short break.

Round 3: Discernment

“Devil’s Advocate”

Make sure you’ve collected the “instinct” stickies and that the next colour of stickies is available. Remind everyone what the definition of “discernment” is. Everyone repeats their walk of the timeline, this time adding stickies to the timeline for things that didn’t go well or were disappointments.

Cool off

Have another break (in case things got emotional). Have people write more appreciations.

Round 4: Positives

“Keep doing this”

This is the last walk of the timeline. Again, remind people of the definition of “positives” and ensure there are only “positive” stickies lying around for people to use. They walk the timeline one final time and add stickies for things that went well.

Lastly: Ideas

“If I did it again”

There are various ways to capture and categorise ideas. The intention of this round is for attendees to use the timeline to stimulate their thinking about how they could have done things better – or how they would do things differently if they had to do it again. This is sometimes also described as “green fields” thinking.

And then (now or later)…

If you were using this technique for a retrospective, you would ideally get actions from the information as part of your session. If the session was to reflect on a project, perhaps the data would be grouped into things like “Good ways to kick off” and shared with other teams. I’m quite a fan of the quadrant method of grouping similar stickies to find topics to address (see photos below for examples from a retrospective I did). What you do next all depends on the ultimate purpose of your session.

[Image: examples of quadrant grouping from a retrospective]

Tips

  • Only let the attendees have access to the writing materials relevant for the round, i.e. gather up the stickies from the previous round and “change colours” for the next round.
  • Have a number of “stationery points” so that people can grab stickies and pens as soon as they have a thought.
  • Related to the above, have an excess of stationery and pens so people don’t have to wait for each other.
  • When preparing your timeline, try to use pictures/symbols/colours to create visual patterns and cues for repeat facts, e.g. document your sprint information in the same colour and layout for every sprint on the timeline, or use a bug symbol wherever you have added statistics about bugs.
  • Don’t forget to share the appreciations! Especially if you’ve decided not to do so in the session.

I have applied this technique a couple of times and used the output for various things:

  1. We’ve used it to gather data for a team which was unfortunately not very transparent and then used that data to paint a “real picture” for external stakeholders.
  2. We’ve used it in retrospectives to identify improvements to team / process / products.
  3. We’ve used it at the end of a project to create guidelines and lessons learned for future projects and teams.

[Image: an example timeline]

Have you used this technique before? What worked or did not work for you? Where might this approach be useful?

I not-so-recently attended the regional Scrum Gathering for 2017 in Cape Town. This is a post in a series on some of the talks that I attended. All posts are based on the sketch-notes that I took in the sessions. 


I think a better title for this talk by Gavin Stevens would have been “How to be an awesome Product Owner”.

The basic structure of this talk for Product Owners was:

  1. A dysfunction and how you will notice it
  2. What you (as the Product Owner) can do about it
  3. Some ideas for how you can take action

The seven dysfunctions that Gavin touched on were:

  1. The team has lost sight of the vision
  2. The Corporate Mentality is that the PO is the leader of the team
  3. The team is not autonomous in the sprint
  4. The team isn’t motivated to finish the sprint early
  5. The PO is the middle man
  6. The PO needs to check everything
  7. The PO is not with the team

The team has lost sight of the vision

What you will notice are “weird” statements or questions like:

  • “How does this fit in?”
  • “Why now?”
  • “If I had known…”

The solution is to ask the team those weird questions when they are planning / discussing their work before the sprint starts.

Corporate Mentality: the PO is the leader

You will hear language like “Gavin and team”. Or “well done to Gavin for this great delivery”. The organisation has not yet realised that the PO is a leader – not THE leader.

What you need to do is create an understanding of the team structure. That the team structure is flat and that there is no single leader in a self-organising team.

The team is not autonomous in the sprint

The team doesn’t seem to be able to make decisions about how they work in the sprint. You, as the PO, are always the “go-to” guy.

What you need to do is empower the team.

Some ways you can do this are:

  • Don’t interfere during the sprint. Trust that the team knows what the goal is (because that was your job) and let them decide how to achieve it.
  • You make the decisions about why, and maybe what, but never how.

The team is not motivated to finish the sprint early

The team doesn’t seem motivated to try to achieve the sprint goal before the end of the sprint. This may be because, as soon as they finish, more work is automatically added to the sprint.

You need to create space. People need time to be creative. One way to achieve this is to tell the team that when they hit the sprint goal, the remainder of the sprint time is theirs to do with what they please.

Gavin mentioned that what the team then chooses to do is also often insightful. If they’re bought into the roadmap, they’ll usually choose to pick up the next pieces on the backlog. If they choose to do something completely different, then it’s usually a good idea to question why they feel that the work they chose to do is more valuable.

The Product Owner is the Middle Man

Which also means the PO is a blocker. Because if you’re not there, everything stops.

Some of the signs that you are a blocker are:

  1. You are the admin secretary who needs to check everything before the team releases
  2. Selfish backlogs – no one besides the PO is allowed to touch the Product Backlog
  3. You are a layer between the team and stakeholders

If you find that the reason you are checking everything at the end is that it’s not aligned to what you expected, then you need to examine all the up-front areas where you are responsible for conveying what the team needs to deliver the right thing:

  • Have you communicated the vision properly?
  • Did you help the team ask (and answer) the right questions in grooming and planning?

Once you believe that you have given the team the right information to build the right thing, then make each team member the “final checker”. Any team member can do that final check that something is ready to release (because a second set of eyes is usually a good thing – it just doesn’t need to always be the PO).

Fixing selfish backlogs is “relatively” easy – let others add to the Backlog. Ordering is still your decision, but what goes onto the list can be from anyone.

The Product Owner needs to check everything

The reason for this is normally related to a lack of trust: this happens when the PO doesn’t trust the team. Some of the signs are the same as when the PO is the middle man.

Building trust is a two-way street: the team need to trust the PO as much as the PO needs to trust the team.

One way to build trust is to create a safe space. Do this by

  • Allowing team members to learn from their mistakes
  • Not blaming
  • Protecting the team from external factors
  • Taking the fall when necessary

A second way to build trust is to tell team members that you trust them. For example, when a team member says “I’ve done all the tests and I think this is ready to go. Can we release?”, then don’t answer with a yes. Rather say “I trust you know when it is ready so you can make that call”.

The Product Owner is not with the team

The Product Owner needs to be available – to the team and to stakeholders. Although the PO should not be a middle man, one of the main functions of the role is to act as a translator between the team and stakeholders. How much you need to sit with your team does depend, and sits somewhere on a continuum between “Always” and “Never”.

Some things you can try to make yourself more available are:

  1. Don’t schedule/accept consecutive meetings
  2. If you do have downtime and you work in an environment where working distributed is an option, choose to spend that time at your desk if possible.


I found this an interesting talk and what was especially great was it paralleled much of what we have experienced as a team. Are you a Product Owner? What are your thoughts on what Gavin had to say?

I have a new Product Owner (PO). She is new to the team and also relatively new to the Product Owner role as a whole. One of the things that I have needed to coach her on is Backlog Refinement or Grooming – and how to do so at the various levels of the Product Backlog. This is something I’ve seen many teams struggle with in the past. Either the team sees stories too late (and there’s a rush and they don’t really problem-solve); or they see them too soon (and we go into the detail far too early and forget most of it when/if we eventually pull the work into a sprint); or there’s some gap between understanding the problem and getting into the detail which means some team members get left behind and never really own the work.

So, this time, I decided to try a different way of evaluating how far a story/epic/feature was along its “readiness” journey (and, thus, how much more grooming the team still needed to do, and at what level): overlaying the ADKAR change-management steps on each of the items that the PO had ordered near the top of our Product Backlog for the next couple of sprints.

So, for each item we care about, we regularly check whether the team has moved through:

1. Awareness

Do they even know this is coming? Do they know what it is? Are they familiar with the domain and the problem space?

2. Desire

Has the team engaged with the problem? Do they understand what we are trying to solve for the user? Do they WANT to solve this problem for the user?

3. Knowledge

Have we covered all the aspects of the problem? Have we started thinking about how we might want to solve the problem and what questions we want to answer or explore? Are we starting to get a feel for the steps involved and how we can break this thing down?

4. Action

We’re doing what we need to do to be ready to start on this work in the next sprint. We’re ensuring our highest-ordered stories are INVEST.

5. Reinforcement

We’ve finished our story – what did we learn? (The Sprint Review is one natural place where this happens.)


Mostly, both in change management and in backlog refinement, people tend to launch in around step 3 (Knowledge) or 4 (Action). Together with the Product Owner, I now try to make sure we’ve moved through each of the first four steps (ideally in order) before we pull a story into the sprint. And if we realise we’ve missed a step, we go back to it as quickly as we can. This doesn’t need to take long at all – for the Awareness step, it’s often just a short communication after a stand-up, or perhaps in a Review when sharing the roadmap. In fact, I’ve noticed that the best way to move a team through the Awareness step is often to repeat something brief many times, over many channels, over a short period.
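One lightweight way to make this visible is a simple checklist per backlog item. Here’s a minimal sketch of the idea in Python (the items and their states are made up):

    ADKAR = ["Awareness", "Desire", "Knowledge", "Action", "Reinforcement"]

    # For each top-of-backlog item: which steps has the team moved through?
    backlog = {
        "Feature A": {"Awareness", "Desire", "Knowledge"},
        "Feature B": {"Awareness"},
    }

    for item, done in backlog.items():
        # The first step not yet reached tells us what kind of
        # grooming conversation the item needs next.
        next_step = next((s for s in ADKAR if s not in done), "Reinforced")
        print(f"{item}: next -> {next_step}")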

We’ve been trying this for about two months now and, so far, it seems to be working better than anything I’ve tried before for checking that the state of the backlog matches the team’s understanding of a particular piece of work. And, because we’re not skipping any important parts of the team coming to grips with the problem, we seem to be uncovering important questions and potential challenges earlier in the process.

What tools have you tried to help with Backlog Refinement? Or where have you borrowed from one problem space (e.g. Change Management) and found it has helped with a challenge in another problem space (e.g. Backlog Refinement)?
