This is a post in a series on the talks that I attended at my first ever Agile Africa Conference in Johannesburg. All posts are based on the sketch-notes that I took in the sessions.

Despite its short length, this was an interesting talk for me. There were two new ideas that I’d not been exposed to before and quite liked. The first was the term “Most Loved Product” (MLP). An MLP (rather than an MVP) is a slice of a Product that can be used to gather feedback about the user experience, i.e. does my Product make my customer smile? The important part about an MLP is to not forget the smiley!

Second, and largely related, was the concept of what an MLP vertical slice needs to consist of. Whereas vertical slices traditionally relate to the stack on which the Product was built, the UX slice needs to include the functional, reliable, usable and emotional design layers to be a full vertical slice.

The presenters also touched on user feedback and how it was important to consider all aspects of the user experience: what they hear, see, say, do, feel and think. They did caution, though, that “not every colourless liquid in a glass is water”, so although user feedback can tell us what they are hearing, seeing, saying, doing, feeling and thinking; we need to probe a little deeper to determine WHY they are having that experience.

Have you ever heard of an MLP? What are your thoughts on the UX vertical slice concept?

This is a post in a series on the talks that I attended at my first ever Agile Africa Conference in Johannesburg. All posts are based on the sketch-notes that I took in the sessions. 


I was very excited that Henrik Kniberg was presenting at Agile Africa this year. And his opening keynote did not disappoint. First, he made it clear what he meant by Scale. He didn’t mean a couple of teams, or even many teams working on separate things – he meant numerous teams with various inter-dependencies. And then he told us some ways we could avoid becoming “unagile” when working with lots of teams.

 

He said that the first thing to realise is that for such a complicated ecosystem to work, you want everyone to be operating from a space of high autonomy and high alignment. When you achieve that, then everyone understands the problem and also has the mandate to figure out how to solve it.

Kniberg then introduced the concept of a “soup” of ingredients required for an environment in which many teams could thrive and continue to operate in an agile fashion. The ingredients of his soup included:

  1. Shared Purpose
  2. Transparency
  3. Feedback Loops
  4. Clear Priorities
  5. Organisational Learning

1. Shared Purpose

Kniberg emphasised that in order for everyone to have a shared purpose, over-communication was essential. Every team member needs to be in a position where they are able to answer the questions:

  • What are you working on?
  • Why are you working on it – what is its impact?
  • How will you measure when it is done?

He described the following top-down model as an example:

  1. The company agrees on its beliefs. A belief relates to how the company is to remain a viable operation: it could be around how the company believes it makes money; who its customers are; or what is likely to happen in the future.
  2. Beliefs then break down into North Star Goals. This is a goal that the organisation wants to achieve in order to realise a belief. It needs to have metrics.
  3. Goals are then broken down into “Bets“. It is called a bet because it is a hypothesis: if we do this, then we think (bet) that this will be the impact on our goal.
  4. And, finally, teams have goals which contribute to proving or disproving bets (Kniberg called these tribe goals).

2. Transparency

Being clear about what everyone is working on and why helps to provide context that enables autonomous teams to make intentional rather than accidental decisions. One of the tools Kniberg shared with us was the DIBB Framework.

DIBB Framework

DIBB stands for Data, Insights, Belief and Bets. Basically, you have a single space listing all the prioritised Bets, and it is very clear what data was used to generate the insights that led to the Bets, and then the Belief that each particular Bet relates to. Bets can be tracked on a single Kanban taskboard with the columns: Now (what we’re working on now); Next (the Bets we will work on next); and Later (not a priority for now). Kniberg emphasised that all of this information should be in a simple format that is easy to share with everyone (one of the companies he works for used a simple spreadsheet).
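As a sketch of how lightweight the format can be (all the entries below are invented for illustration, not taken from Kniberg’s talk), a Now/Next/Later board with DIBB traceability fits comfortably in a small data structure or spreadsheet:

```python
# A minimal DIBB board: each Bet traces back through an insight and the
# data behind it to the Belief it supports. All content is invented.

dibb_board = {
    "Now": [
        {
            "bet": "Simplify sign-up to one step",
            "belief": "Growth comes from new-user activation",
            "insight": "Most drop-off happens on the second sign-up page",
            "data": "Sign-up funnel analytics, June",
        },
    ],
    "Next": [
        {
            "bet": "Add in-app onboarding tips",
            "belief": "Growth comes from new-user activation",
            "insight": "Activated users return far more often",
            "data": "Cohort retention report",
        },
    ],
    "Later": [],  # not a priority for now
}

for column in ("Now", "Next", "Later"):
    for entry in dibb_board[column]:
        print(f"{column}: {entry['bet']} <- {entry['belief']}")
```

The point is exactly what Kniberg made: nothing here needs special tooling, just a shared, visible place where anyone can trace a Bet back to its Belief.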

3. Feedback Loops

We all know feedback loops are important. Feedback loops help us adjust our course. Scrum and other Agile frameworks already have built-in feedback loops for single teams. When scaling, these loops need to be extended to cover multiple teams. Examples of multi-team feedback loops could be a Scrum of Scrums, multi-team retrospectives, Friday demos, Whole Product Reviews or Alignment Events, and Management Reviews.

Whole Product Review

A whole product review is when all the teams working on a particular product (whether building features or working as a support team) get together to plan for their next timebox. It includes the following components:

  • A demo by each team of what they have done since the last Product Review (to create context)
  • Break-outs per team to plan ahead for the next timebox.
  • Creation of Team Boards that include
    • The goals for the next timebox ordered by impact
    • Stretch goals for the next timebox – these may not be achieved
    • Risks in the next timebox
    • The deliverables due at the end of the next timebox
  • A management review of the team boards on the same day before the next timebox starts

Triple Feedback Loop

This actually came out of a conversation I was listening to during one of the Fish Bowl sessions at the conference (and not from Kniberg), but an idea for these multi-level feedback loops could be:

  1. Normal team reviews (Loop 1)
  2. Monthly product reviews (Loop 2 – with all the teams)
  3. Quarterly Sponsor review (Loop 3 – a whole day with teams and management)

4. Clear Priorities

Ordered priorities help teams make the right decisions when trade-offs are required. Priorities should feed down into teams from the Company Beliefs and North Star Goals. Clear priorities help teams answer the question: “If we can only do one, which one would we do and why?”

5. Organisational Learning

In order to improve, we need to learn. We cannot learn if we don’t have time to stop and process the feedback we receive, or time to practise the skills we are growing. Kniberg suggested a couple of ways to create slack time in teams:

  • A pull team schedule where teams pull work as they have capacity to do it
  • Scheduled slack time e.g. retrospectives, gaps in sprints, etc.
  • A culture of promoting learning over busyness

Some ideas for cross-team learning:

  • Holding “Lunch and Learn” events where teams get together in an informal session to discuss a particular topic
  • Cross-team retrospectives
  • Embedding team members i.e. having a team member from the team that is sharing temporarily sit with the team that is learning

The last part of the keynote dealt with the “chef”. The chef is the person who ensures all the ingredients are available for the soup and continuously tweaks them to ensure the soup is as tasty as possible. In traditional organisations, we used to call this person or persons “leaders”. Leaders are not there to tell the team how to solve the problem. They are there to ensure that the team environment contains the ingredients for autonomy and alignment.

Leader chefs:

  • Create a shared sense of accountability
  • Enable teams to align their decisions to company goals and with other teams
  • Ensure that decisions can be made through transparency and clear ordering of priorities
  • Create slack

My main take-outs were:

  • Intentional rather than accidental decisions
  • Scaling needs autonomous teams that are aligned
  • Some cool ideas for creating top-down goal alignment and transparency
  • Some cool ideas for creating peer-alignment and transparency
  • The role of leaders in an Agile set-up

What were yours?

 


With thanks to sketchingscrummaster.com

Dark Stories

Posted: August 19, 2016 in Team

I was following one of the Australian Agile conferences on Twitter recently and someone posted something about a talk on “Black Stories”. The click-trail eventually led me to this link and I decided that the card game sounded quite appealing (regardless of whether it would prove to be useful as a tool or not). A search for “Black Stories” for sale to South Africans almost ended in vain, but thankfully I decided to submit a query to http://www.timelessboardgames.co.za/ and they got back to me almost immediately to say that they had a set of “Dark Stories” in stock. A quick Google confirmed that this was the equivalent game and, seeing as it was a Friday afternoon and I was in the mood for shopping, I ordered a box, which arrived the following week.

Since then, I’ve played a couple of games with my team. Our organisation has a horrid culture of people being late for meetings (generally five minutes; when I’m planning a session I always plan for at least 10 minutes) so having something for us to do while people dribbled in seemed a great idea. And, if it encouraged people to get there on time, even better! At first, people didn’t really figure out what was going on (I only play one card per session and we time-box it to 10 minutes, so sometimes people arrive after we’re done) but they’re slowly getting the hang of it and I have noticed that the percentage of on-time arrivals is increasing. I also like that it encourages some lateral thinking, which is also something we don’t practice very often in our day-to-day, so hopefully it will stimulate some more creative problem-solving too.

So far, I haven’t managed to take my box home to play it in my personal capacity, but that’s OK. A chance to make meetings more fun is far more valuable. If you’ve been struggling with this, or with getting people to rooms on time, I’d really recommend giving it a try. Do you have any other easy tricks or tools you’ve used in the past to set a good tone for a meeting or encourage other helpful behaviours?

Being Brave

Posted: March 24, 2016 in Team

My natural personality is more introverted. My natural reaction to conflict is avoidance where possible. These personal attributes often make some of what I, as an Agile Coach, need to do quite challenging. This week there were two conversations that I knew I had to have. One was with my Product Owner around something he had done that had affected me negatively on an emotional level. The second was with a ‘difficult’ team member who was responding to me quite defensively in sessions, and I needed to unpack the why. Neither of these were conversations I relished having (especially as I would need to raise the topic of discussion) and they caused me at least one sleepless night.

Thankfully, over the years, I have had training in various ways to open conversations like these and I often rely on this training to try to at least start the conversation off with the right language and framing (invariably the wheels do fall off as the conversation progresses). One of my favourite fall-backs is the “I think, I feel, I need” tool which, as artificial as it sounds and feels, seems to work really nicely – especially when you’re raising something where the impact on you is subjective (like an emotion). I also try to remember to focus on describing actions and behaviours rather than attributes and, finally, to try to start any question with “What”. As I mentioned, I try. I’m not always successful🙂

Anyway, in both cases this week, I kicked off the session with much trepidation, introduced the elephant I wanted to discuss, and then mentally closed my eyes and waited for the fall-out, unsure of whether I had the courage to face whatever that fall-out was in a constructive way. Thankfully, for both, there was no real fall-out and I felt that we managed to have a constructive conversation without damaging any relationships. I ended the day feeling quite buoyant and really grateful that I had screwed my courage to the sticking place to have both conversations. Being brave usually pays off, you see, because often what we fear is only in our own heads.

What are the things that you fear in your role? What tools and techniques do you use to help you feel more brave?

We’ve all been there. Someone describes a problem that they want solved (and possibly suggests how they think you should solve it) and in the very next breath asks: “So, how long will it take?”.

Invariably, we get talked into providing some kind of gut-feel indication (days/weeks/months/quarters) based on nothing much besides (perhaps) experience. But how often in software do you actually do a rinse-and-repeat of something you’ve done before? In my 10-plus years in IT, never. Never ever.

Unfortunately, we don’t yet work in a world where most people are happy with “we’ll be done when we’re done” so a vague timeline is always needed: if only for coordinating training or the launch email and party. So where does one start?

First, there are some principles/facts-of-life that are important to bear in mind:
1. The cone of uncertainty
2. Some practical examples of the developer cone of uncertainty
3. A good analogy of why our estimates always suck, no matter what data we’ve used

In the beginning….

For me, the first time you can possibly try to get a feel for how long your horizon might be is after you’ve shared the problem with the team and they have had a chance to bandy around some options for solutions. At this point, assuming your team has worked together before, you can try some planning poker at an Epic level. Pick a “big thing” that the team recently worked on, allocate it a number (3 usually works) and then have the group size “this big thing” relative to the one they have previously completed. I prefer to use a completely random number (like 3) rather than the actual story points delivered for this exercise, because otherwise the team might get tied up debating the actual points and not the gut-feel relative size.

Now, if you have a known velocity and also know the points delivered for the big thing we already built, you should be able to calculate an approximate size for the new piece and use your velocity to find a date range with variance (don’t forget about that cone!). For example:
– If we agreed our “Bake a cake” epic was a 3
– And then sized the “Bake a wedding cake” epic as a 5
– And “Bake a cake” was about 150 points to deliver
– Then “Bake a wedding cake” is probably about 5/3 × 150 = 250 points to deliver
– Which means you’re probably in for 250/velocity sprints (with 50% variance)
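The arithmetic above can be sketched in a few lines. This is only an illustration: the velocity of 30 points per sprint below is an invented number, not one from the post.

```python
# Relative sizing: estimate a new epic from one already delivered.
# The velocity figure is an assumption for illustration only.

reference_size = 3      # "Bake a cake" sized as 3 in planning poker
new_size = 5            # "Bake a wedding cake" sized as 5 relative to it
reference_points = 150  # story points actually delivered for "Bake a cake"
velocity = 30           # assumed points per sprint (made up)

# The new epic is 5/3 the size of the reference epic
estimated_points = new_size * reference_points / reference_size  # 250.0

sprints = estimated_points / velocity
# Apply +/-50% variance to respect the cone of uncertainty
low, high = sprints * 0.5, sprints * 1.5
print(f"~{estimated_points:.0f} points, {low:.1f} to {high:.1f} sprints")
# → ~250 points, 4.2 to 12.5 sprints
```

Note how wide the range is even for this toy example: that spread is the point, and it is why the result is only good enough to pin down a year or quarter, not a date.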

At the very least, this should help you pin-point which year and perhaps even quarter this thing is likely to be delivered. (Don’t make any promises though – remember the cone!)

When we know more….

Now, if you’re doing things properly, your team will groom the big epic and slowly start agreeing on small to medium stories and perhaps even slices. Ideally you’ll have a story map. At some point, you should have a list of stories (or themes or whatever) that more-or-less covers the full solution that the team intends to build. At this point, it is possible to do some Affinity Estimation, which will give you another estimate of the total size (in points) relatively quickly that you can sanity-check, with the help of velocity, against your previous guesstimate. If you’re working with a new team and haven’t got a velocity yet, this is also the time when you can try to ‘guess’ your velocity by placing a couple of stories into two-week buckets based on whether the team feels that they can finish them or not. This article explains the process in a bit more detail.
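That velocity-guessing exercise is simple enough to sketch. All the story names and point values below are invented; the mechanics are what matter:

```python
# Guess a starting velocity by letting the team drop sized stories into
# two-week "buckets" until each bucket feels full. All values invented.

buckets = [
    ["login form (5)", "password reset (3)", "session timeout (5)"],
    ["profile page (8)", "avatar upload (3)"],
    ["search (8)", "search filters (5)"],
]

def points(story):
    # Story points are encoded in the label for this sketch,
    # e.g. "login form (5)" -> 5
    return int(story.split("(")[1].rstrip(")"))

per_bucket = [sum(points(s) for s in bucket) for bucket in buckets]
guessed_velocity = sum(per_bucket) / len(per_bucket)
print(per_bucket, guessed_velocity)  # → [13, 11, 13] 12.333333333333334
```

Averaging across a few buckets smooths out the team’s optimism on any single one, but treat the result as a starting guess to be replaced by real velocity data as soon as you have it.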

Keep checking yourself…

You will probably find, when you do Affinity Estimation, that you still have some biggish stories in the list, which is OK. Over time, as these break down, it’s probably a good idea to repeat the exercise (unless you’re so great at grooming that your entire backlog has already been sized using Planning Poker). Until you have sized everything in detail into ready stories, Affinity Estimation is the quickest way to determine a total remaining backlog size that is reasonably useful. Over time, if you maintain your burn-up, you’ll be able to track your progress and re-adjust plans as you go along.

Did you find this post useful? Have you used these techniques before? What other techniques have you used to try to build a view of your release roadmap?

Hindsight is 20/20

Posted: December 2, 2015 in Team

There has been a lot of change in my space in the past year. A lot of it hasn’t been managed very well (which creates a lot of ‘people noise’). I’ve been exposed to a number of change management models over the years, including ADKAR and this cool exercise. I quite liked this tool about Levers of Influence which one of our People Operations (a.k.a. HR) team members shared with us. Although I already knew we’d done very badly when it came to change management, when I reviewed the two biggest changes (moving to Feature Teams and reducing our release cycle to having a release window every month rather than a synchronised release every nine-ish weeks), the tool helped highlight examples of what we had done badly, which also meant we could see where we needed to focus our efforts from a recovery perspective.

Levers of Influence

Levers of Influence

This is my retrospective on the change relating to monthly releases and what we did, did not, and should have (probably) done.

1. A Compelling Story

The idea to move to monthly release cycles had been brewing in the senior heads for a while, however we were going through a major upgrade of one of our core systems (in a very waterfall fashion) which meant that anything unrelated to that was not really discussed (to avoid distractions). Our upgrade was remarkably smooth and successful (considering its size and time span) and about two weeks after we went live with it, senior management announced that the release cycle was changing. Not many people had seen this coming and the announcement was all of two sentences (and mixed in with the other left-field announcement of the move to vertical feature teams rather than systems-based domain teams). In hindsight, most people didn’t know what this shorter release cycle meant (or what was being asked of us). Nor was the reason why we were making this change well communicated (so, to most people, it didn’t really make sense).

In hindsight:

  1. The why for the change should have been better communicated. We want to be able to respond more quickly and move to a space where we can release when we’re ready.
  2. The impact should have been better understood. One of the spaces that has most felt the pain is our testers. With a lack of automation throughout our systems and the business IP sitting with a few, select people, the weight of the testing has fallen onto a few poor souls. Combined with this, our testing environments are horrible (unstable, not Production-like, and a pain to deploy to), so merely getting a testing environment that is in a state to test requires a lot of effort across the board.
  3. We should have explored the mechanics/reasons in more detail with smaller groups. For example, it was about 3-4 months before people began to grasp that just because one COULD release monthly, it didn’t mean that one had to. The release window was just that: a window to release stuff that was ready for Production. If you had to skip a release window because you weren’t ready, then that was OK. (A reminder here that our monthly release ‘trains’ are an interim/transition phase – we ultimately want to be able to release as often as we like.)

2. Reinforcement mechanisms

One of the motivating factors for senior management to shorten the release cycle from nine weeks to one month was that our structures, processes and systems for releasing were monolithic and, although the intention had always been to improve the process, it just wasn’t happening. In a way, they created the pain knowing full well that we didn’t have the infrastructure in place to support it, because they wanted to force teams to find ways to deal with that pain. So, in this case, there weren’t any reinforcement mechanisms at all. The closest thing we had was a team that was dedicated to automating deployments across the board that had worked together for about six weeks before the announcement.

In hindsight:

  1. There should have been greater acknowledgement of the fact that we didn’t have the support structures, etc. in place to support the change.
  2. We shouldn’t have done the release cycle and Feature Team change at the same time (as there weren’t structures, processes and systems in place to support that change either).
  3. We should have been more explicit about the support that would be provided to help people align structures, systems and processes to the change.

3. Skills required for change

Shortening the release cycle certainly created opportunities for people to change their behaviour (whether in a good or bad way). Unfortunately most of our teams didn’t have the skill sets to cope with the changes: we were lacking in automation – both testing and deployment – skills. Throwing in the Feature team change with its related ‘team member shuffle’ also meant that some teams were left without the necessary domain knowledge skills too.

In hindsight:

  1. We should have understood better what skills each team would need to benefit from the opportunities in the change.
  2. We should have understood the gaps.
  3. We should have identified and communicated how we would support teams in their new behaviours and in gaining new skill sets.

4. Role modeling

Most people heard the announcement and then went back to work and continued the way they had before. When the change became tangible, they tried to find ways to continue doing what they did before. (A common response in the beginning to “why are we doing this” would be “that’s the way we’ve always done it”.) The leaders who had decided to enforce the change were not involved operationally, so could not be role modelled. Considering the lack of skills and reinforcement mechanisms, role models were few and far between.

In hindsight:

  1. If we’d covered the other levers better, we would have had people better positioned to act as role models
  2. Perhaps we should have considered finding external coaches with experience in this kind of change to help teams role model
  3. Another option may have been to have had a pilot team initially that could later act as a role model for the rest of the teams

 

Have you ever had a successful change? If so, which levers did you manage well? Which ones did you miss?

I recently attended the regional Scrum Gathering for 2015 in Johannesburg. This is a post in a series on the talks that I attended. All posts are based on the sketch-notes that I took in the sessions. 

This was one of the most enlightening talks of the two days for me (enlightening in that I had a new insight). As you can see, even my sketch-note is crammed with un-creative text.

change

An started the talk with some group feedback on the word ‘judgement’ and its associations. Largely, judgement is associated with negative emotions and often results in fear. Fearful people struggle to think clearly, which means people who fear a change will self-limit their vision and possibilities. I went to another session later in the conference where we explored the difference between opinion (facts-based) and judgement (interpretation-based) a little more explicitly, but more on that in a later post.

An facilitated most of the session using a tool that I think would be useful for most change in organisations. The tool starts with identifying ‘change monsters’ – these are things that can create anxiety in the people affected by the change – and include:

  • Uncertainty/unknowns/change from what we know
  • Fear of failure
  • Loss
  • Culture
  • Complexity

For each “monster”, we unpacked the following questions:

  1. What behaviours would we observe in people who were experiencing that particular monster? For example, people who have a fear of failure may want to micromanage everything.
  2. For each behaviour, how did the “change” (in our example, agile adoption) reinforce or enhance the negative behaviour? This was an interesting concept for me, as change agents usually associate only positive things with the change they are championing, and we do not stop to think about how the change itself increases resistant behaviours. For example, in the case of agile, the lack of clear roles may reinforce behaviours related to people’s fear of loss.
  3. For each reinforcement, identify ways in which you can manage or counteract the behaviour and/or the impact of the change on that behaviour.

An provided us with a long list of ideas for dealing with the ‘change monsters’ – you can refer to my sketch-note for most of them. Think about a change you are or have implemented in your organisation. Which monsters and behaviours did that change enhance?