Posts Tagged ‘lessons’

I’ve always been aware that open-ended questions are good. They allow someone to answer from their own perspective and context rather than being constrained by the limitations of your yes/no options. They also allow someone to challenge an assumption or idea in a way that is potentially less confrontational (“How do I look in this dress?” rather than “Does this dress make me look fat?”).

All that said, even after some training in “better questions”, I often find myself regressing to the good old yes/no in conversations – usually without even being aware of it. At least, that was the case until I recently went on some awesome Agile Facilitation training, where we learnt a really useful trick:
1. Assume the answer to your question is “yes” (Do we have anything to share with the other teams?)
2. Ask the follow-up question (What will we be sharing with the other teams?)

Ta-da: an easy-peasy open-ended question! And if you’re worried that the answer to your yes/no question may actually be no, remember that “no one” and “nothing” are both valid responses to an open-ended question.

I still ask yes/no questions, although I am trying very hard not to (especially when facilitating). This trick seems to have helped me become better at self-correcting, and it gives me an easy way to figure out what I can ask instead.

This trick has helped me immensely. Give it a try. Let me know what you discover.

This is a post in a series on the talks that I attended at my first ever Agile Africa Conference in Johannesburg. All posts are based on the sketch-notes that I took in the sessions. 

I’m not sure if you recall our first experiment with self-selection. Imagine my surprise when I realised that our keynote speaker on the topic, Sandy Mamoli, was the very same person who had been part of the team that created the material we’d used for our own self-selection attempts. As we’d also run a second, less successful experiment, I was quite interested to hear a little more “from the horse’s mouth”, so to speak.


Sandy shared some key points that helped me understand a little more about why our second attempt had not succeeded from an expected-outcome perspective but had, in fact, succeeded from a feedback perspective.

  1. Purpose is important. Squads form around a strong purpose that they can buy into.
  2. Self-selection will always fail if management selection is going to be done afterwards to ‘tweak’ the outcomes. (Her solution: make the blueprints for the new squads very visible everywhere as soon as possible after the session.)
  3. She shared a story about squads that would not form around a particular vision or goal; usually, where that happened, there were deeper issues at the root that needed to be resolved before a team could be successful.
  4. Self-selection should NOT directly or explicitly impact reporting lines.

So, upon reflection, after comparing our first and second experiments and adding in some of the tips from Sandy, these were my conclusions:

  • Self-selection should happen independently of reporting lines
    • The first time we did it, there was no impact on reporting lines; the second time we did it, reporting lines were impacted by which squad you moved into.
  • Try to keep “people owners” as observers, not players
    • There was a subtle form of “Liar Liar” in our second attempt: every time there was a significant shift in numbers towards one or other ‘cost center’, the managers (who were also the Product Owners) had a quick chat about how to re-balance things. In our first experiment, everyone remained in the same “cost center”.
  • Ensure the Product Owners are well prepared in terms of their vision
    • The second time we ran self-selection, one Product Owner had a very clear and mature view of what the squad would be achieving, whereas the other Product Owner was new to the space and hadn’t really had time to formulate their thinking and strategy properly.

Sandy also advised the following:

  • Where squads have fully formed after a couple of rounds, move forward with those squads (regardless of the state of the others).
  • Where squads aren’t fully formed because there just aren’t enough people, have the squad members identify what ‘imaginary friends’ they need to ‘hire’ to form a full squad.
  • Where squads haven’t formed for less obvious reasons and/or people refuse to participate in the process (and choose no-squad), revert to traditional management selection for those people and dig into the root causes of the resistance.

This keynote was very valuable to me as it shifted my perspective of our second attempt from being a failure to being a great source of useful feedback about the state of a particular space. Have you tried self-selection? What was the outcome? What did you learn?

 

Being Brave

Posted: March 24, 2016 in Team

My natural personality is more introverted. My natural reaction to conflict is avoidance where possible. These personal attributes often make some of what I, as an Agile Coach, need to do quite challenging. This week there were two conversations that I knew I had to have. One was with my Product Owner, around something he had done that had affected me negatively on an emotional level. The second was with a ‘difficult’ team member who was responding to me quite defensively in sessions, and I needed to unpack the why. Neither of these was a conversation I relished having (especially as I would need to raise the topic of discussion), and they caused me at least one sleepless night.

Thankfully, over the years, I have had training in various ways to open conversations like these, and I often rely on this training to at least start the conversation off with the right language and framing (invariably the wheels do fall off as the conversation progresses). One of my favourite fall-backs is the “I think, I feel, I need” tool which, as artificial as it sounds and feels, seems to work really nicely – especially when you’re raising something where the impact on you is subjective (like an emotion). I also try to remember to focus on describing actions and behaviours rather than attributes and, finally, to start any question with “What”. As I mentioned, I try. I’m not always successful 🙂

Anyway, in both cases this week, I kicked off the session with much trepidation, introduced the elephant I wanted to discuss, and then mentally closed my eyes and waited for the fall-out, unsure whether I had the courage to face it constructively. Thankfully, in both cases there was no real fall-out, and I felt that we managed to have a constructive conversation without damaging any relationships. I ended the day feeling quite buoyant and really grateful that I had screwed my courage to the sticking place and had both conversations. Being brave usually pays off, you see, because often what we fear exists only in our own heads.

What are the things that you fear in your role? What tools and techniques do you use to help you feel more brave?

Hindsight is 20/20

Posted: December 2, 2015 in Team

There has been a lot of change in my space in the past year, and a lot of it hasn’t been managed very well (which creates a lot of ‘people noise’). I’ve been exposed to a number of change management models over the years, including ADKAR and this cool exercise. I quite liked this tool about Levers of Influence, which one of our People Operations (a.k.a. HR) team members shared with us. I already knew we’d done very badly when it came to change management, but when I reviewed the two biggest changes (moving to Feature Teams, and reducing our release cycle to a release window every month rather than a synchronised release every nine-ish weeks), the tool helped highlight examples of what we had done badly. That, in turn, meant we could see where we needed to focus our recovery efforts.

Levers of Influence

This is my retrospective on the change to monthly releases: what we did, what we did not do, and what we (probably) should have done.

1. A Compelling Story

The idea of moving to monthly release cycles had been brewing in senior managers’ heads for a while; however, we were going through a major upgrade of one of our core systems (in a very waterfall fashion), which meant that anything unrelated to that was not really discussed (to avoid distractions). Our upgrade was remarkably smooth and successful (considering its size and time span), and about two weeks after we went live with it, senior management announced that the release cycle was changing. Not many people had seen this coming, and the announcement was all of two sentences (mixed in with the other left-field announcement of the move to vertical feature teams rather than systems-based domain teams). In hindsight, most people didn’t know what this shorter release cycle meant (or what was being asked of us). Nor was the reason for the change well communicated, so, to most people, it didn’t really make sense.

In hindsight:

  1. The why for the change should have been better communicated: we want to be able to respond more quickly and move to a space where we can release when we’re ready.
  2. The impact should have been better understood. One of the groups that has felt the pain most is our testers. With a lack of automation throughout our systems and the business IP sitting with a few select people, the weight of the testing has fallen onto a few poor souls. On top of this, our testing environments are horrible (unstable, not Production-like, and a pain to deploy to), so merely getting a testing environment into a state fit for testing requires a lot of effort across the board.
  3. We should have explored the mechanics/reasons in more detail with smaller groups. For example, it was about 3-4 months before people began to grasp that just because one COULD release monthly, it didn’t mean that one had to. The release window was just that: a window to release stuff that was ready for Production. If you had to skip a release window because you weren’t ready, then that was OK. (A reminder here that our monthly release ‘trains’ are an interim/transition phase – we ultimately want to be able to release as often as we like.)

2. Reinforcement mechanisms

One of the motivating factors for senior management to shorten the release cycle from nine weeks to one month was that our structures, processes and systems for releasing were monolithic and, although the intention had always been to improve the process, it just wasn’t happening. In a way, they created the pain knowing full well that we didn’t have the infrastructure in place to support it, because they wanted to force teams to find ways to deal with that pain. So, in this case, there weren’t any reinforcement mechanisms at all. The closest thing we had was a team dedicated to automating deployments across the board, which had been working together for about six weeks before the announcement.

In hindsight:

  1. There should have been greater acknowledgement of the fact that we didn’t have the support structures, etc. in place to support the change.
  2. We shouldn’t have done the release cycle and Feature Team change at the same time (as there weren’t structures, processes and systems in place to support that change either).
  3. We should have been more explicit about the support that would be provided to help people align structures, systems and processes to the change.

3. Skills required for change

Shortening the release cycle certainly created opportunities for people to change their behaviour (whether in a good or bad way). Unfortunately, most of our teams didn’t have the skill sets to cope with the changes: we were lacking automation skills, for both testing and deployment. Throwing in the Feature Team change, with its related ‘team member shuffle’, also meant that some teams were left without the necessary domain knowledge.

In hindsight:

  1. We should have understood better what skills each team would need to benefit from the opportunities in the change.
  2. We should have understood the gaps.
  3. We should have identified and communicated how we would support teams in their new behaviours and in gaining new skill sets.

4. Role modeling

Most people heard the announcement and then went back to work and continued the way they had before. When the change became tangible, they tried to find ways to keep doing what they had done before. (A common response in the beginning to “why are we doing this?” was “that’s the way we’ve always done it”.) The leaders who had decided to enforce the change were not involved operationally, so they could not model the new behaviour themselves. Considering the lack of skills and reinforcement mechanisms, role models were few and far between.

In hindsight:

  1. If we’d covered the other levers better, we would have had people better positioned to act as role models.
  2. Perhaps we should have considered bringing in external coaches with experience in this kind of change to model the new behaviours for teams.
  3. Another option may have been to start with a pilot team that could later act as a role model for the rest of the teams.

 

Have you ever had a successful change? If so, which levers did you manage well? Which ones did you miss?

I recently attended the regional Scrum Gathering for 2015 in Johannesburg. This is a post in a series on the talks that I attended. All posts are based on the sketch-notes that I took in the sessions. 


I was extremely interested in hearing Biase talk about the journey a large bank in South Africa has recently embarked on to realise the benefits of integrated DevOps. Sadly for me, as we have many of the same challenges, their journey is not yet complete, so they haven’t yet answered all the questions! I guess no one will ever have the perfect answer; however, some recommendations would have been helpful 🙂

Their journey began with a pilot to try to realise the benefits of a team owning the entire value stream and not having different hand-offs between delivery, release management, and support. Some of the benefits of doing this would include:

  • Improved quality
  • Increased knowledge sharing
  • Increased organisational effectiveness
  • Shorter time to market
  • The ability to deploy faster with fewer failures

Their pilot initially consisted of two teams:

  1. A feature team (building the features); and
  2. A DevOps team (building tools to support CI and deployments for the feature team)

The feature team prioritised the work that the DevOps team did, and their working relationship was governed by a set of principles and a working agreement. Apparently, through this experiment, they have realised that having two teams doesn’t really work and that it is better to integrate DevOps skills into each feature team. Their challenge now is that there aren’t enough DevOps skills available for the number of teams they currently have, so they are trying to find ways to change that. Rather than taking a push approach, they are trying pull techniques like hackathons, demo days and gamification to encourage the Feature Teams to build the skills from within.

Biase highlighted a number of challenges they experienced at the start of their journey, and also the value of finding experts to help teams work through the technical issues. Their next set of experiments on this journey relates to:

  • Growing skills from the ground up
  • Creating the necessary culture shift
  • Allowing for organic growth

I look forward to hearing more about their journey to come. What is your experience in including DevOps skills in your cross-functional feature teams?

This photo is the list of actions/changes/learnings one of my teams came up with in their most recent retrospective. This did not come from someone who went on training or read an article. It also didn’t come from a new Agile coach or Scrum Master. It came from them missing their sprint commitment and goal. This team only managed (on paper) to complete 8 out of 18 points, but they all knew they had delivered and learned a lot more than that measure reflected. Here are some things that they decided to do going forward:

1. If the team cannot reach consensus about the size of a story, then split it into two stories and size the smaller stories

One of the main reasons the team had such a poor burn-down is that they took in one quite large story which did not quite meet the INVEST criteria. For one, it was a ‘common component’ that was to be used in most of the later stories (so not Independent). It also was not Small enough – and turned out to be even bigger than the team had thought. During sizing there had been some debate about its size, and the eventual, reluctant consensus was to give it the smaller size. It turns out the less optimistic team members were right: this was one of the stories that was not done at the end of the sprint.

2. Keep Planning II – and use it to verify the sprint commitment

This is a team that often decides to skip Planning II (I don’t like it, but ultimately it is the team’s decision, and so far we’ve muddled along without it). For this sprint, they decided that they did need a session to unpack the stories and how they would be implemented. Everyone agreed that without Planning II we would have been even worse off. They also realised that, at the end of Planning II, there were already some red flags that the big story was bigger than everyone had thought, and they could have flagged at that point that the commitment for the sprint was optimistic. The team agreed that, in future, if going into the detail during Planning II revealed some mistaken assumptions, they would review the sprint commitment with the Product Owner before kicking off the sprint.

3. Feel free to review story-splits in-sprint

Early in the sprint, the team were already aware that the big story was very big and could probably be split into smaller components. Their assumption was that this wasn’t possible once the sprint had started. For me, re-visiting a story split mid-sprint, or once you start the work, is not a bad thing: sometimes you don’t know everything up-front. In the case where a story is bigger than expected, it also gives the Product Owner some more wiggle room (the Negotiable part of INVEST) to drop or keep parts of the story in order to meet the sprint goal. Of course, where we have got things really wrong, sometimes the sprint goal cannot be rescued and the sprint would be cancelled.

4. Raise issues as they happen

This pretty much summarises most of the above points. One of the values in the Agile Manifesto is responding to change over following a plan, so when change happens, make it visible and decide how to adjust the plan at that point. There’s no point in battling on as planned when you know that the planned path is doomed to fail.


One of my teams has recently expressed a desire to become more familiar with XP practices, including more formalised pairing and TDD, and when I offered them the option of doing a code kata during one of our retrospective slots, they were happy to give it a go.


 

Last year I was fortunate enough to attend a kata at the annual South African Scrum Gathering. The session included some information on what a kata is, some advice on how to run one, plus actually pairing up and practicing. Thankfully the kata used during the Scrum Gathering session was a TDD/Pairing Kata, so I was able to use one that I was familiar with for our first session.

Prep

Preparation entailed:
– Sending out some kata context (as not everyone knew what they were)
– Having the team select a development and testing framework and make sure it was all set up
– Finding a suitable room (easier said than done, as it turned out!)
– Sending out some pre-reading about pair programming, particularly the roles and red-green-refactor

From my side, I had to source the kata (thank you, Google) and also enlist the help of some co-facilitators. For this I employed the services of another Scrum Master (who was familiar with katas) and our team architect (who is an advocate of TDD and pairing practices). As you will see from the team’s feedback, having the right coaches/advisors in the room added significant value to the exercise, especially if the majority of your team members are new to the techniques being practiced.

The Session

The Agenda

My session started with an overview of the kata concept, and I repeated some key points around TDD and pairing. This probably wouldn’t be necessary for a team that is good at doing its pre-reading. I did get some feedback that there was too much information and too many instructions up-front, which is probably because both the concept (katas) and the topics (TDD/Pairing) were new. In hindsight, some way of splitting the learning would probably have been less confusing. I also printed some of the key TDD/Pairing slides on the back of the kata exercise (although I didn’t notice anyone actually referring back to them during the session). It is important to emphasise that the kata is NOT about how quickly or efficiently you solve the problem, but about practicing the process used to get there. Thankfully, although my team can be quite competitive, I think everyone grasped this.
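To make red-green-refactor a little more concrete, here is a minimal sketch of one cycle. The post doesn’t name the kata we used or the development and testing framework the team selected, so the FizzBuzz exercise, Python and pytest below are purely illustrative assumptions.

```python
# A minimal red-green-refactor sketch (illustrative only: the kata and
# framework my team actually used aren't named in this post).

# RED: the pair writes one small failing test first, runs it, and watches it
# fail before touching the production code.
def test_returns_the_number_itself_as_a_string():
    assert fizzbuzz(1) == "1"

def test_returns_fizz_for_multiples_of_three():
    assert fizzbuzz(3) == "Fizz"

def test_returns_buzz_for_multiples_of_five():
    assert fizzbuzz(5) == "Buzz"

def test_returns_fizzbuzz_for_multiples_of_three_and_five():
    assert fizzbuzz(15) == "FizzBuzz"

# GREEN: write just enough production code to make the failing test pass.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# REFACTOR: with everything green, tidy up duplication in the code and the
# tests, re-running the tests after each small change, then start the next
# RED step with a new failing test.
```

Running pytest against this single file exercises the whole cycle. In a pairing kata, the navigator would typically call for the next failing test while the driver types, with roles (and the keyboard) swapping regularly.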

I decided to run the same kata twice, as part of the goal was to actually practice the process itself, and one could see that the team became a lot more comfortable by the time we ran the second round. We only had an hour, so I opted for 15 minutes per round, which unfortunately meant each round ended just as most pairs were hitting ‘flow’. Ideally one would run the kata for half an hour, but where a team is doing a kata for the first time, I’m not sure whether one thirty-minute kata is better than two shorter katas to get them familiar. Now that my team have done one, I’d be very comfortable running a single thirty-minute session.

The other thing I did was mix up the pairs between rounds. This worked well because people got to see how other pairs had approached the same problem; plus the more ‘reluctant’ participants (mostly the non-developers) warmed up by the second round when they realised that most of the team were enjoying the process.

Feedback

 


I wrapped up the session with a quick 3H (Helped, Hindered, Hypothesis) exercise to get some feedback for myself on how the session went. Overall, I think it went well, and it was nice to see the level of interaction and energy it generated. Generally the feedback was good too, so hopefully I’ll have an opportunity to run more of these in the future. In case you’re thinking of running one yourself, here are some of the things the team felt:

  • Helped them:
    – Having an early example (built into our selected kata)
    – Coaches for technical and ‘process’ support
    – Certain team members having experience with the techniques we were practicing
    – Switching partners during the exercise
    – Having the developers write the code (based on instructions) even when they weren’t driving
    – Pairing with someone who thought differently to you
    – Learning in pairs
  • Hindered them:
    – Testing framework (the team could pick what they wanted to use)
    – Too little time
    – Inexperience with katas
    – Initial example was not very clear
    – Non-developers lacked context


What have you learnt from running katas? Are there any you would particularly recommend?